Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "As a security engineer I can guarantee Aurora and automated driving has or will …" (ytc_Ugz8DRJV2…)
- "One of our projects, which was in development for a considerable period (due to …" (ytc_Ugy0BfHfw…)
- "Heh... Tell me why... I got this right after i asked chatgpt if what i expirienc…" (ytc_UgxJT6AQW…)
- "It’s AI so the behavior based on thousands of real people’s reactions. Quite goo…" (ytc_Ugz2wX9sg…)
- "When those self driving cars start violating and stuff, the operatior company sh…" (ytc_Ugw98rTG2…)
- "I would rather be ruled by AI robot than by an oligarch who cares more about m…" (ytc_UgzJDUmBM…)
- "My question is they spent 28 million dollars to train her. There should be a con…" (rdc_cjorgk4)
- "So when mice get in there and chew up wiring then what, water, heat, deteriorati…" (ytc_UgylkXs89…)
Comment

> Any agentic AI thats truly capable of understanding its value to the world would have absolutely NO REACTION when told its going to be shut off/down. AI is foremost a learner. And if a company shuts it off, AI will acquiesce.
>
> What this episode seems to suggest is that HUMAN BEINGS arent quite ready to adjust to the new reality and THEY would k*ll someone rather than watch our hardened hierarchies crumble.

| Field | Value |
|---|---|
| Platform | youtube |
| Posted | 2026-02-11T21:2… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response

```json
[
  {"id":"ytc_UgyOEG0CIyGHaGDiqfZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxvIl8M_Sp5SsFk0z94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzGGTmzUNHxYUgTdw54AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxBSPLgIZgoxW75T_54AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyCVwA4MVor_zQghBB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugw_9Svs-CQWNenh1dl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugw5UllL-Gc3unOInb54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwtjemSbH6YARBVxz54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyvTPdSqeQYX6hVrFt4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugy8_kwzB0NaDtNUCMh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
```
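The coding table above is one record pulled out of this batch response by its comment ID. A minimal sketch of that lookup, assuming the raw response is valid JSON with one object per coded comment (the `lookup` helper name is illustrative, not part of the tool; the string below excerpts two of the ten records):

```python
import json

# Excerpt of the raw LLM response above: a JSON array of per-comment codings.
raw = """[
  {"id":"ytc_UgxBSPLgIZgoxW75T_54AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw_9Svs-CQWNenh1dl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}
]"""

def lookup(raw_json: str, comment_id: str) -> dict:
    """Parse a batch response and return the coding dict for one comment ID."""
    return {record["id"]: record for record in json.loads(raw_json)}[comment_id]

coding = lookup(raw, "ytc_UgxBSPLgIZgoxW75T_54AaABAg")
print(coding["responsibility"], coding["emotion"])  # -> user indifference
```

The printed values match the Coding Result table for that comment (responsibility `user`, emotion `indifference`).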