Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment

> Good video! But if you bring up AI scenarios like this in the future, please look into the technical way it works and the transcripts of the many un-alivings it has provoked. AI does not "think" as you describe, it is not self aware and any use of "I" reflects no actual concept of existence, and most (eventually all) safeguards built can be bypassed, often easily.
>
> You may be right that what happened here was inevitable, but I find it extremely likely that AI explicitly rooted him along the way and OpenAI is twisting its words—as has happened many times before.
| Field | Value |
|---|---|
| Platform | youtube |
| Topic | AI Harm Incident |
| Posted | 2026-01-23T06:4… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response

```json
[{"id":"ytc_UgwrhhXFt70_E7ad25d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
 {"id":"ytc_Ugxn5797sryiA-qEcxh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"outrage"},
 {"id":"ytc_Ugz01P4JHO_knrNuSch4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
 {"id":"ytc_UgwDR5g7_wFyEKX7XyN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
 {"id":"ytc_Ugwd_nZUbQsa0go-O4N4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgzKr-ahiqdVy7Pm2fh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
 {"id":"ytc_UgzmyrdSJNTXnWYwIQt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
 {"id":"ytc_Ugxx2_5b0dT60jFov-t4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
 {"id":"ytc_UgyMK7ZIce0Hhh3dJVt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
 {"id":"ytc_UgzbgLZQSKR82r1DOzZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"}]
```
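A raw response in this format can be parsed and indexed by comment ID before it is stored as a coding result. The sketch below is a minimal example, assuming the JSON-array shape shown above; the category vocabularies are inferred only from the values that appear in this response, so the real codebook may allow more labels.

```python
import json

# Allowed values per dimension, inferred from the responses shown above
# (assumption: the actual codebook may define additional categories).
SCHEMA = {
    "responsibility": {"developer", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self", "none", "unclear"},
    "emotion": {"outrage", "fear", "mixed", "approval", "resignation", "indifference"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response and index codings by comment ID,
    dropping any record that carries an out-of-vocabulary value."""
    indexed = {}
    for record in json.loads(raw):
        values = {k: v for k, v in record.items() if k != "id"}
        if all(values.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            indexed[record["id"]] = values
    return indexed

# Usage: look up one coding by its comment ID.
raw = ('[{"id":"ytc_Ugz01P4JHO_knrNuSch4AaABAg","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}]')
codings = parse_codings(raw)
print(codings["ytc_Ugz01P4JHO_knrNuSch4AaABAg"]["policy"])  # regulate
```

Validating against a fixed vocabulary before indexing is what makes malformed or hallucinated labels in a model response visible instead of silently entering the coded dataset.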