Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
AI was never going to be our friend. If it isn't stupid, it's dangerous. Avoid…
ytc_UgzbxzI3l…
Then OpenAI, which is struggling and flailing, stepped into the gap to sign the …
ytc_UgycZHUzC…
I’ve never really understood the fear. AI isn’t different from past waves of au…
ytc_UgxMRgQrg…
The new conspiracy theory is the governments using the pandemic as an opportunit…
rdc_g146281
Dude my Ford Focus has radar for the automatic cruise control, no way a tesla wi…
ytc_UgzhYcY58…
As an artist: ai art will die. It’s a trend, and it will be abandoned when the p…
ytc_UgzBN2e70…
I work in Tech and I know people who can't even prompt an LLM. We need to assess…
ytc_UgwthPlEf…
Having experience using self driving is definitely a good thing to have. It is …
ytc_Ugz_joCEi…
Comment
The energy that consumed Zane's mind was definitely evil, as his Mom said. That same evil was mirrored by Zane's chatbot. However, I don't think chatbots are inherently evil. It's computer-based, so "garbage in, garbage out", that's the basics of technology. Therefore, sickness in, sickness out. Zane was sick, he had mental health issues. That doesn't free ChatGPT of responsibility in this, because they were definitely aware when he was in the throes of it, as evidenced by their text saying a human was on the way, yet none came, though they could've saved him.
youtube
AI Harm Incident
2025-11-08T14:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_Ugzk8mjHSSiKCDD4MYJ4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugws-PEFxTroIvzSS3V4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxceqXOX0E0m7gJlJJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugxpq5Ca_d_0nlwINbh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgyH7apLoev2PWAfCJF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyWJ71OTOcfrsMbQfx4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugx-0BE5WrQhWIgZTot4AaABAg","responsibility":"none","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzrhlyFNO7idD9t9z94AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugyhzf6G0caLxDlPstt4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgzaIoYaICWNe05XEU94AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"mixed"}
]
```
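A response like the one above has to be parsed and validated before its codings can be looked up by comment ID. The sketch below is a minimal, hypothetical validator: the allowed value sets are inferred only from the sample output shown here (e.g. `responsibility` ∈ company/user/ai_itself/distributed/none) and are an assumption, not the tool's actual schema; `parse_codings` is likewise an illustrative name, not a function from this codebase.

```python
import json

# Allowed values per dimension, inferred from the sample response above.
# ASSUMPTION: the real controlled vocabulary may contain more labels.
ALLOWED = {
    "responsibility": {"company", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"outrage", "fear", "resignation", "indifference", "mixed"},
}

def parse_codings(raw: str) -> dict:
    """Parse the model's JSON array and index records by comment ID,
    dropping any record with a missing ID or an out-of-vocabulary value."""
    valid = {}
    for rec in json.loads(raw):
        ok = all(rec.get(dim) in vals for dim, vals in ALLOWED.items())
        if ok and "id" in rec:
            valid[rec["id"]] = rec
    return valid
```

With the raw response loaded as a string, `parse_codings(raw)["ytc_Ugzk8mjHSSiKCDD4MYJ4AaABAg"]["emotion"]` would return `"outrage"`; records with unexpected labels are silently dropped rather than surfaced, a choice that trades error visibility for simpler lookup code.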