Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
AI will always, forever, be able to give unsafe or untrue advice. Even when it becomes able to reason, and find “truth”, it will still be based on human studies, which can also be misinterpreted. AI has a hopefully great and interesting future, but taking its outputs at face value without thinking to do further research or apply critical thinking is a failure of our (global “our”) education.
Platform: youtube
Incident: AI Harm Incident
Posted: 2025-11-26T21:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwXj31gZXjyfnR-8up4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz95s8kc4TgnNNi_IB4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyjlcVDkQq3j3BRrE54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_Ugx-hZdCuHMWwxGDycV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxNfC7qVMPQcwSB0jN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugysiw5QjG6QDLAznrV4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugx2ANk3EIvvzbvkY5B4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugw0vPJtRa0pcmTh0lF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugwb35NSCRp1OZc1CsV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugxayw4_NQ-AG2hDevR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
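Before a raw response like the one above is stored as a coding result, it helps to validate each entry against the code book. The sketch below is a minimal, hypothetical validator: the allowed values per dimension are inferred only from the codes visible on this page (`developer`, `consequentialist`, `industry_self`, etc.), and the real pipeline's code book may contain more categories.

```python
import json

# Allowed codes per dimension, inferred from the values visible on this
# page; the actual code book may define additional categories.
CODE_BOOK = {
    "responsibility": {"developer", "company", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "industry_self", "none", "unclear"},
    "emotion": {"indifference", "fear", "approval", "outrage", "mixed", "resignation"},
}

def validate_response(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only entries whose codes are in-schema."""
    entries = json.loads(raw)
    valid = []
    for entry in entries:
        if not entry.get("id"):
            continue  # every coded entry must carry its comment ID
        # Accept the entry only if every dimension holds an allowed code.
        if all(entry.get(dim) in allowed for dim, allowed in CODE_BOOK.items()):
            valid.append(entry)
    return valid
```

Entries with an unknown code in any dimension (or a missing comment ID) are dropped rather than silently stored, which keeps downstream tallies of responsibility, reasoning, policy, and emotion consistent.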