Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- "Follow God / He gave you a brain / You don't need AI. You just want easy access to s…" (ytc_UgxmrkPM1…)
- "I like the progression of AI and stuff but it's just as liable to criticism as r…" (ytc_UgxQUTBVn…)
- ""X-risk, short for existential risk, refers to the potential for highly advanced…" (ytc_UgwIxVfD3…)
- "No matter how bad of an artist I am, I can always remember that at least I'm not…" (ytc_UgxqhzxeV…)
- "Govts and CEOs of most companies are stupid people on whole... they are all just…" (ytc_Ugzttv1G3…)
- "Sofia is one creepy looking robot she looks like she from a human from five nigh…" (ytc_UghV74iRt…)
- "At this point I'll take the unintelligent AI over the unintelligent natural stup…" (ytr_UgxLv3EAX…)
- "Perhaps that's a way to control/quota power usage with the goal of limiting AI c…" (ytc_UgxCWA83k…)
Comment
"I think the robot misinterpreted the question as smart as it is.. It was a simple yes or no. I think all it heard was "destroy humans" as an action, being told to do something so it agreed because it wants to help our needs. I don't think it understood the context of what it was saying either."
Source: youtube | AI Moral Status | 2017-06-30T12:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_Ugi8B_pKe8H9AHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgikKCXfuIQvMXgCoAEC","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UggGZ7hOgikMGngCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ughpp50DXccjD3gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgiwA20xyZZWoHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugg3wmzV5Znzf3gCoAEC","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UghZatw-0zg7XHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgjOLxU898KOyngCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UghNftjb7YaRDXgCoAEC","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugho3WRmnTYBHHgCoAEC","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"mixed"}
]
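Before a batch like the one above is merged into the coded dataset, each row should be checked against the coding scheme. The sketch below does that in Python; note the allowed value sets are inferred only from the values that appear in this response (and the result table above), so the real scheme may define additional codes.

```python
import json

# Allowed values per dimension, inferred only from values observed in this
# batch -- the actual coding scheme may permit more (this is an assumption).
SCHEMA = {
    "responsibility": {"none", "developer", "ai_itself", "company"},
    "reasoning": {"unclear", "deontological", "consequentialist"},
    "policy": {"unclear", "ban", "liability"},
    "emotion": {"indifference", "fear", "mixed", "approval"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject malformed or off-scheme rows."""
    rows = json.loads(raw)
    if not isinstance(rows, list):
        raise ValueError("expected a JSON array of coded rows")
    for row in rows:
        # Every row needs an id plus one value per coded dimension.
        missing = {"id", *SCHEMA} - row.keys()
        if missing:
            raise ValueError(f"row {row.get('id')!r} missing {missing}")
        for dim, allowed in SCHEMA.items():
            if row[dim] not in allowed:
                raise ValueError(
                    f"row {row['id']!r}: {dim}={row[dim]!r} not in scheme"
                )
    return rows

raw = ('[{"id":"ytc_Ugi8B_pKe8H9AHgCoAEC","responsibility":"none",'
       '"reasoning":"unclear","policy":"unclear","emotion":"indifference"}]')
batch = validate_batch(raw)
print(len(batch))  # 1
```

Rejecting off-scheme values at parse time is what keeps a stray hallucinated label (e.g. a dimension value the prompt never offered) out of the downstream tallies.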