Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- “Cruisin’ on down Main Street, you’re relaxed and feelin’ good…” When they star… (ytc_UgwZYfy9x…)
- Once AI completely takes over, "money" will have reduced or non-existent value. … (ytc_UgzoIsbRy…)
- I think the ants are the ones we need to worry about especially those red fire a… (ytc_UgxCQWcSL…)
- Does it come with big tits and a beer opener ? It's for a friend...… (ytc_Ugz37N2Id…)
- You are clearly warning humanity that we are all in mortal danger because human'… (ytc_Ugy0tHKF7…)
- I feel like companies are just way too short-sighted putting AI into more and mo… (ytc_UgzjWekZL…)
- I thought doomsday would be just civilians, militias, military and drones but n… (ytc_Ugx2A2U03…)
- LLMs generate too much slop. This is partly because they are trained on slop, an… (ytc_UgzZMDmPF…)
Comment
I've seen and read quite a few articles on this issue but one thing that I've never seen addressed is that these critiques warn of an AI takeover or malevolent AI actions. But these are all human emotions. Is it not a leap to just assume that AI will operate on negative human emotional motivations (greed, lust, etc)?
youtube · AI Governance · 2025-09-15T01:4… · ♥ 4
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwbMd_M1sSoolQPNj54AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwfI2BkiJS_EPG9bKF4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugyp0skps8O7ekhM49V4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgyUjECTsD7m0g4hcHF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwnbfHTu_bs86A5k7Z4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwU2XA9gbQbjYxF4A14AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwH4BbmsWCSSUE-Tgp4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugyih_iBofxWp_rdxbd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwjbwJIa_GeaHmdjct4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwaIMZYMBRyEBzgaxx4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"regulate","emotion":"approval"}
]
```
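The raw response is a JSON array in which each record carries a comment `id` plus the four coding dimensions shown in the table above (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such a batch could be validated and tallied, using two illustrative records copied from the response (the schema is taken from the dump; the validation and tally logic are an assumption, not the tool's actual pipeline):

```python
import json
from collections import Counter

# Two records copied from the raw LLM response above, for illustration.
raw = '''[
  {"id":"ytc_UgwbMd_M1sSoolQPNj54AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwaIMZYMBRyEBzgaxx4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"regulate","emotion":"approval"}
]'''

# The four coding dimensions used in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

records = json.loads(raw)

# Reject any record that is missing its id or one of the dimensions,
# so a malformed model response fails loudly instead of silently.
for rec in records:
    missing = [k for k in ("id", *DIMENSIONS) if k not in rec]
    if missing:
        raise ValueError(f"record {rec.get('id', '?')} is missing {missing}")

# Count how often each value appears, per dimension, across the batch.
tallies = {dim: Counter(rec[dim] for rec in records) for dim in DIMENSIONS}
print(tallies["emotion"])  # Counter({'fear': 1, 'approval': 1})
```

With a full batch of responses, the same tallies would give the per-dimension value distribution across all coded comments.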