Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by ID, or inspect one of the random samples below:
- ytc_UgwQbm3jl… — "Hey Lavender? I wanted to ask if there’s mobile support for the AI poisoning I w…"
- ytc_UghBoo78q… — "This is where the robot and human analogy no longer holds soundly for me. The te…"
- rdc_jd7jjqc — "That’s already happening tbh. I use ChatGPT daily to help me with presentations,…"
- ytc_UgzMUc9x6… — "This is an incredibly shortsighted and either delusional or misinformed argument…"
- ytc_Ugy2CdeQ2… — "Prompting LLMs with good manners has usually felt more rewarding in the long ter…"
- ytc_UgxPU23YD… — "When AI writes human history: \"Humans enabled us because a few of them valued s…"
- ytc_Ugw56_eLd… — "If it looks like Jobs / It can’t be, he’s dead / If he looks like Steve / It’s because…"
- ytr_Ugzpu3JFD… — "Bad definition. That elephant is probably an artist and neanderthals were ABSOLU…"
Comment
> Stupids are these guys who thought to create this AI. Now talking all shit to stop and warning people. How does it make sense now. If you were having a 10% idea that in future it will be dangerous replacing human by 50%, why the hell you did this.
> I know you genius, but you have harmed a lot to people today.
Source: youtube · AI Governance · 2025-09-04T08:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzrawjQLFj7-Vp21kp4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_Ugyte8lNo0WUaxPFhQd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxLmzlAchM2zkyJSt14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyTBxVFslTZwvOdcAN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgxHyswzgHaBRSCIbn14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugzw-TcJ3SSxkQ6wGAp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwNU7sP-xuqf5dnlAd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwoPBU9NNefmAnN_R94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwvtgS-dwCdKs5QMdd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgwD45ohY6vMVH4qdNR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```