Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
People who get fooled to believe "AI" supposedly exists in our God created natur…
ytr_UgyJx9-v1…
I think there will be some agreement on controlling the release of AI products t…
ytc_UgzKIsbSQ…
Still 20+ years away from AI and robots taking over. It will start to take of i…
ytc_UgwGCjVsA…
I agree I had an idea to redraw the characters from that Ai garbage fruit island…
ytc_Ugzt_sfNN…
@moosterpeckle Appreciated. Burnout sucks, but sometimes a little bit of guidan…
ytr_Ugz8m7Dqk…
When objective data gives outcomes that lefties think is racist they demand we h…
ytc_UgyCzqFKm…
I feel kind of bad for those who fall for irl and Ai influencers. It feels like…
ytc_UgyUe3HX9…
I have a real issue trusting the inbred business elites of this country deciding…
ytc_UgzquT5TQ…
Comment
Less aware lifeforms do these "evil acts" to preserve what they think is "all that is" (themselves).
When creatures (or AI) are fully aware of the actual quantum state of the multiverse, they regard the larger universe in all their decisions.
Moral: AI needs quantum universal understanding ("spirituality") to not be "the killer of humans" (their creators).
Meanwhile - These problems persist and time is short.
Platform: youtube
AI Harm Incident
2025-07-24T18:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgxzcvtslR6_zz2d4sl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwfGwt7oGyvEW4cPOR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwRwvg0nKCESfN3LKB4AaABAg","responsibility":"developer","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgzdUkbjNgUjgM8rI4x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugz_vXdTO7c5OcWbiU54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugw3sSWR15T6ynwLyxt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxkceJPH9qE07lwSrh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgzAUdnoUmmp0eiOjzB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugzaa-LR-q8CvhzTN9B4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyQMOghOF2nxgnzIgV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"approval"}
]
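The "look up by comment ID" step can be sketched as a small parser over the raw response shown above: the model returns a JSON array of per-comment codings, and a lookup simply scans for the matching `id`. This is a minimal illustration, not the actual pipeline code; `lookup` is a hypothetical helper, and only two entries from the response above are reproduced.

```python
import json

# Two entries copied from the raw model response above, for illustration.
raw_response = """
[
  {"id":"ytc_UgxzcvtslR6_zz2d4sl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwfGwt7oGyvEW4cPOR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"fear"}
]
"""

def lookup(raw, comment_id):
    """Parse the model's JSON array and return the coding dict for one comment ID."""
    for entry in json.loads(raw):
        if entry["id"] == comment_id:
            return entry
    return None  # ID not present in this batch

coding = lookup(raw_response, "ytc_UgwfGwt7oGyvEW4cPOR4AaABAg")
print(coding["responsibility"], coding["emotion"])  # → ai_itself fear
```

Note the second entry reproduces the coding result tabulated above (responsibility `ai_itself`, reasoning `mixed`, policy `unclear`, emotion `fear`), which is what the dimension table for this comment is rendered from.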