Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Why not just kill ai or limit its training if you want to save humanity…" (ytc_UgyUl4H3O…)
- "true. i like ai art and its applications, not all of it of course and as to how …" (ytr_UgxkRQ-GR…)
- "Orange Flame Just the fact that it could kill us easily is what is wrong with it…" (ytr_Uggpj4TNs…)
- "What will be of value? I’m 53, I think for me, one of the most important parts o…" (ytc_UgyUIIZYZ…)
- "Been seeing these predictions since the 80s. AI will never supercede human intel…" (ytc_UgygMyzRe…)
- "If you're watching this entire podcast with your mouth agape, butterflies turnin…" (ytc_UgyeFrY_L…)
- "we are building our owners with the ai we will be wiped out by them and they hav…" (ytc_UgwG1x0xz…)
- "The last robot is me looking at my mom and dad fighting💀 But robot team cause my…" (ytc_UgykEj0qp…)
Comment

> No. The “thinking process” they use isn’t what they’re actually doing. Anthropic’s Golden Gate Bridge experiment shows this.

youtube · AI Moral Status · 2025-11-21T08:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response

```json
[
{"id":"ytc_UgxBOPUgAxtDXo-wByp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugwokc-KpVgo6CRpy6d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy6Ka-D95OSbmQsMuR4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzgU2qTaZL7F-Jrnqh4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwYHHy5gvVceMr3wSV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxgusHR0AKOCY2nerF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzNgO0hiXfGxYnYIsB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzI_6kpd0xiTB8iXuh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugym50IIHEPf7O5tOqN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwBljTBFUwkasW5CmV4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"}
]