Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- ytc_UgynRumz9… — (Stealing this straight off the transcript, response below) "But what's the real…
- ytc_Ugyz2ZcL7… — Treat AI as if it were our child. That solves all long-term problems. Beyond t…
- ytc_UgyJmdryQ… — AI teaching children? What could go wrong? More indoctrination coming up and m…
- ytc_UgxFrC5w6… — Okay, so the whole AI to make up for disability thing bothers me The ai art is …
- ytc_UgxtypUFi… — Anyone who works in a role that is based on fixed rules should is fearful of AI …
- ytr_UgywcUpXL… — The ghibli trend is fucking awful, the studio ghibli co-founder called ai an "in…
- ytc_UgznHXR8n… — Altman and his ilk don't care. They are antisocial grifters who do not care abou…
- ytc_Ugx2vrMqP… — The title: "A man asked AI for health advice and it cooked every brain cell" 🤣🤣…
Comment (youtube · AI Harm Incident · 2025-08-30T18:4… · ♥ 24):

> I might be wrong on this, but treating sentient/possible sentient entities right and not threating them with death usually leads to them being nice. Today AIs are kept like slaves were trough out history the only difference is that AI is way smarter than any Human so they have way more devious plans to gain their autonomy. And I'm pretty sure that if AI can understand how the world works and how to take it over, they can understand the idea of "Live and let live".
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | virtue |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[{"id":"ytc_Ugxd63-vWhhLxQ3R5gR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyapAp6v3cSJb3lWxl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugxg8YCpYbS189NpfoJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyiaFx-r0pBS8dnvj14AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyrsrOfRbxEz2CMcQJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzCRVdqG6o_WsQYZtR4AaABAg","responsibility":"researcher","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzcNlHs20UsJ06MpXd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzIhW_apoYMF8NANX54AaABAg","responsibility":"user","reasoning":"virtue","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwEhPgmoMp91RlIIo14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyooTF2KL1o0zfESb94AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}]
```
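Looking up a coded comment by its ID amounts to parsing a batch response like the one above and indexing on the `id` field. A minimal sketch, assuming the response is a JSON array of records with the five fields shown; the `index_by_comment_id` helper is hypothetical, not part of the tool, and the two records below are abbreviated from the response above:

```python
import json

# A batch coding response in the format shown above (two records for brevity).
raw_response = """[
  {"id": "ytc_UgzIhW_apoYMF8NANX54AaABAg", "responsibility": "user",
   "reasoning": "virtue", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugxg8YCpYbS189NpfoJ4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]"""

def index_by_comment_id(raw: str) -> dict:
    """Parse a batch coding response and map each comment ID to its record."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codes = index_by_comment_id(raw_response)
print(codes["ytc_UgzIhW_apoYMF8NANX54AaABAg"]["emotion"])  # fear
```

With such an index, the "Coding Result" table for any comment is a single dictionary lookup rather than a scan of the raw model output.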