Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or inspect one of the random samples below.

Random samples:

- `ytc_Ugw_mmgjt…`: Like the men who made them and left said we are fxcked GAME OVER NO TURNING BACK…
- `rdc_gli9kzj`: Even at my small 80person company with 20 office workers my goal is to automate …
- `ytc_UgxEOFmpv…`: I appreciate the tutorial on your style hate that this happened to you, I don't …
- `ytc_UgxHBzVGB…`: Wow that removable face would these be brilliant for people with bad burns or pe…
- `ytc_UgxQypKqg…`: Sounds good ai bots can start their takeover with silicon valley and take Califo…
- `ytc_UgwAtityN…`: "I thought these AI would destroy the world but no their just making it better"-…
- `ytr_UgzF58Xds…`: cringy person Not that much, the AI has really developed since 1960s, and yet it…
- `ytc_UgxJasx69…`: As we are entering new era of super intelligence,humans of low IQ's and EQ's wil…
Comment
2:55
You have made a critical (albeit understandable) error.
Musk is being _dishonest_ here.
By acting as if he's being incredibly cavalier about the risks associated with AI destroying the planet in the SkyNet sense, he is attempting to smuggle past the unquestioned assumption that LLMs are anywhere close to being that capable. This is in fact an attempt to hype up AI - and by extension the vaporware he's selling.
The reality is that AI isn't actually doing any of the things functional intelligence would need to in order to approximate human intelligence, let alone vastly exceed it.
The real harms from AI come from the sheer amount of resources (particularly electricity and water) that the required data centers are consuming, and from the inevitable accidents caused by putting these things in oversight roles where safety is a concern (such as driving cars).
Source: youtube · AI Governance · 2025-08-26T15:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
{"id":"ytc_UgwR1sIRqTbyLLOP4oV4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugw7TcCFYn3hdqZv3mB4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxDZ-HGeJv-1z_6QMB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzCs9-SPNGTfMPyKvR4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"indifference"},
{"id":"ytc_Ugw4ThcKf_PgA7GxRcl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```