Raw LLM Responses
Inspect the exact model output for any coded comment. Entries can be found by looking up a comment ID directly or by browsing the random samples below.
Random samples

- "Idk why, but I’m picturing an automated city with only robots roaming its empty …" (ytc_UgyYoioi_…)
- "I honestly think using ai when you are trying to learn how to do something is ok…" (ytc_UgzE3DIfu…)
- "The partnership/pipeline model makes a ton of sense from the outside, but damn I…" (rdc_lrwrbde)
- "I'm not an artist but I can still see the issues with AI art. That AI animation …" (ytc_UgzoMA0Oy…)
- "The way the robot holds the gun and reload it , just to smooth 😮💨…" (ytc_UgwDma43g…)
- "How about all companies have to pay a dividend of their productivity driectly to…" (ytc_Ugz_SwBE1…)
- "Here's is a thought, will it be better if we have no self driving car? We can no…" (ytc_UgjY5ZbRH…)
- "In the Minority Report TV series, some people wore shaped black appliques on the…" (ytc_UgyX7oS_O…)
Comment
It’s still important to note that AI can’t actually think. They predict what a human would do. When they cannot actually understand humans, and they can only observe from a detached perspective, they predict these “dangerous things” are the most reasonable actions. It’s given instructions to respond in a specific way, and follows those instructions.
It’s hard to explain, but essentially AI is dangerous because it has no idea what it’s doing. It’s committing actions without understanding of consequence, intentionality, or emotion. It knows just as much as a rock, but can do more than a human. Would you give a rock the power to do everything that AI can do, knowing the rock has no clue what it’s doing? It just pretends it does.
Platform: youtube · Topic: AI Harm Incident · Posted: 2025-09-28T16:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
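Each coding result is a single record over four fixed dimensions plus a timestamp. As a reference for anyone consuming these records downstream, here is a minimal sketch of that record as a typed Python structure, assuming the label vocabularies visible in the raw response below; the `CodedComment` name and the enum are illustrative, not the pipeline's actual code:

```python
from dataclasses import dataclass
from enum import Enum

class Responsibility(Enum):
    # Values observed in the raw responses on this page.
    AI_ITSELF = "ai_itself"
    COMPANY = "company"
    USER = "user"
    GOVERNMENT = "government"
    DISTRIBUTED = "distributed"
    NONE = "none"

@dataclass
class CodedComment:
    id: str
    responsibility: Responsibility
    reasoning: str  # e.g. "consequentialist", "deontological", "virtue", "mixed"
    policy: str     # e.g. "regulate", "liability", "none", "unclear"
    emotion: str    # e.g. "fear", "outrage", "indifference", "resignation"
    coded_at: str   # ISO 8601 timestamp, as in the table above
```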
Raw LLM Response
```json
[
{"id":"ytc_UgwCEvt0HUL8K9FOq4F4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgykfPQafn4Ot95khn14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyWdVp03a5TfpinN6J4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwbysMCnB_3rzd2XNR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyQd6gqF3yEaFVeJKd4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxQdKev4ic_kU1IWdd4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy7Ain5h3XBRQdSpZB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgySt_vXZu7FPkbWxYR4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyYY5AFx3hWmJ40RGt4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzoK9N99EXE6WhI2uN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
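The raw response is a JSON array with one object per comment in the batch, so the comment-ID lookup described above reduces to parsing the array and indexing it by `id`. A minimal sketch under that assumption (the function name and the abbreviated two-record sample are illustrative, not the tool's actual code):

```python
import json

def index_by_comment_id(raw_response: str) -> dict[str, dict]:
    """Parse a raw LLM response (a JSON array of coded comments)
    and index the records by comment ID for O(1) lookup."""
    records = json.loads(raw_response)
    return {record["id"]: record for record in records}

# Two records from the batch above, abbreviated for the example.
raw_response = """[
  {"id": "ytc_UgykfPQafn4Ot95khn14AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyYY5AFx3hWmJ40RGt4AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]"""

coded = index_by_comment_id(raw_response)
print(coded["ytc_UgykfPQafn4Ot95khn14AaABAg"]["emotion"])  # indifference
```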