# Raw LLM Responses
Inspect the exact model output for any coded comment. Look up a comment by its ID, or browse random samples.
## Comment
It's the "therapy" industry who have 1000$ per hour fees who oppose chatbots on the argument they cause appreciable and quantifiable harm. Guardrails are being established, they should not have released an unfinished product, but then again if people died on railway tracks, will not the rail company be held responsible until atleast the time they educate people on the dangers of using their product?
Platform: youtube · Topic: AI Harm Incident · Posted: 2025-11-09T19:4…
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | contractualist |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
## Raw LLM Response
```json
[
  {"id":"ytc_Ugxueqq64msW-M9TeKF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugz5t_adTusBxuw2AHd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugz4O5Q4k6CvZUGLuLx4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy6Sgs-ydsW7aqzOmd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugx5NejmXsjVJ6zSbJ14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgybyEANqr8DqBTWx7B4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgzUHXXDcICEPjQq1lV4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwzJ9vwwD4ZgCFoCLh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_Ugw57g8t34W3LAbaWBR4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyAsMSdsfuUNVZARMN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
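Raw responses like the batch above need validation before they enter the coding table, since an LLM can emit values outside the codebook. The sketch below parses a response and drops any row whose dimensions fall outside the allowed value sets. It is a minimal illustration, not the tool's actual pipeline; the `SCHEMA` sets are inferred only from the values visible in this batch, and the real codebook may define more categories.

```python
import json
from collections import Counter

# Allowed values per coding dimension, inferred from the batch above.
# Assumption: the real codebook may include additional categories.
SCHEMA = {
    "responsibility": {"company", "user", "ai_itself"},
    "reasoning": {"deontological", "consequentialist", "virtue", "contractualist"},
    "policy": {"ban", "regulate", "liability", "none"},
    "emotion": {"outrage", "fear", "mixed", "indifference"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only rows with in-schema values."""
    rows = json.loads(raw)
    return [
        row for row in rows
        if all(row.get(dim) in allowed for dim, allowed in SCHEMA.items())
    ]

# Example: one in-schema row passes, and we can tally a dimension across rows.
raw = ('[{"id":"ytc_UgybyEANqr8DqBTWx7B4AaABAg","responsibility":"company",'
       '"reasoning":"contractualist","policy":"regulate","emotion":"mixed"}]')
rows = validate_codes(raw)
print(Counter(r["responsibility"] for r in rows))  # Counter({'company': 1})
```

Tallying `responsibility` (or any other dimension) with `Counter` across a full batch is how per-dimension distributions like the ones in the coding table can be aggregated.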