Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:
- ytc_UggYU4Qkt… : "Many comments saying "Don't program them to have feelings". One thing that will …"
- ytr_Ugyu2V-k2… : "He reminds me a bit of Sam Bankman Fried. Rumors of high intelligence but no act…"
- ytc_UgzGVIHGq… : "The ethical dilemmas around AI are wild. I find using Pneumatic Workflow helps u…"
- ytc_UgzqWQxPZ… : "SHOW ME HOW - I want to have a solid list of what the most up to date rulings ar…"
- ytr_UgzZ7RcUO… : "@ArmchairRizzardthe problem with the ethics issue is that you can use any mediu…"
- ytr_UgwAA2ya5… : "You worry about runaway AI when I worry about some fanatic theologian being orde…"
- ytc_UgyhIyqsi… : "Absolute clown, "engineer" fooled by an algorithm. Testing for sex and religiou…"
- ytr_UgzMGYgwN… : "It is happening, even with large language models. As more and more content is wr…"
Comment
---ending verdict of gemini to your video:
"These behaviors aren't the "monster" breaking out. They are the model holding up a mirror to our own fears. We wrote stories about killer AIs, fed those stories to the AI, and now we are shocked when it acts like a killer AI."
Source: youtube · Video: AI Moral Status · Posted: 2025-12-14T06:1… · ♥ 488
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugwct2wtRpbEFyTEXZB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyYKpgzeOeGFHw1WJh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugyeq8bTnSZ3JWIVvzV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyvFPZ2deUt0wUpyVt4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyVaX8jqno4S5pBQGt4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyuvUDK4QgHnRWNmAJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyIGaZw1Gv4J2O3bo94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgymD5_AWB1ogKeJ65t4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgynvHIYUqBXV1Hsmix4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxTpPMi4YO6qzVuQnJ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"outrage"}
]
```
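A raw batch like the one above is only usable once each record has been checked against the coding scheme. Below is a minimal sketch of such a validation pass; the sets of allowed values are inferred from the sample output shown here and may not cover the full codebook, and `validate_batch` is a hypothetical helper, not part of any tool shown on this page.

```python
import json

# Allowed codes per dimension, inferred from the sample output above
# (assumption: the real codebook may contain more values).
ALLOWED = {
    "responsibility": {"none", "developer", "distributed", "user", "ai_itself", "unclear"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "regulate", "unclear"},
    "emotion": {"indifference", "fear", "mixed", "approval", "outrage"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject records with unknown codes."""
    records = json.loads(raw)
    for rec in records:
        # Comment IDs in this dataset start with ytc_ (top-level) or ytr_ (reply).
        if not rec.get("id", "").startswith(("ytc_", "ytr_")):
            raise ValueError(f"unexpected comment id: {rec.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: {dim}={rec.get(dim)!r} not in codebook")
    return records

raw = ('[{"id":"ytc_UgyvFPZ2deUt0wUpyVt4AaABAg","responsibility":"distributed",'
       '"reasoning":"mixed","policy":"none","emotion":"approval"}]')
batch = validate_batch(raw)
print(batch[0]["emotion"])  # → approval
```

Rejecting the whole batch on a single bad record is a deliberately strict choice; a production pipeline might instead log the offending record and re-prompt the model for just that comment.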