Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a comment directly by its ID.
Random samples — click to inspect:

- `ytc_UgwpqiSJs…` — "So overdramatic. The experts saying we're all doomed, so dangerous, this will de…"
- `ytr_UgwXG_Oia…` — "Hi there! In the video, Sophia explains that her name means wisdom in Greek, and…"
- `ytc_Ugwim9XQC…` — "You guys are kidding me right, the ai bubble is popping, these llm's are not imp…"
- `ytc_UgxIJ9KWw…` — "Conclusion of this video: Hinton paints a future where AI could uplift humanity…"
- `ytr_UgwRIgn7S…` — "We appreciate your thoughts on artificial intelligence. On the AITube channel, w…"
- `rdc_lgr5l8m` — "If your job can be so easily replaced by AI, it will. I fear for all the email s…"
- `ytc_UgxDWvAbu…` — "Calling it art is the biggests stretch in the world. AI art is immoral, its unet…"
- `ytc_UgzBz_CWg…` — "I am a professional traditional artist that just recently got into digital art a…"
Comment (platform: youtube; video: "AI Harm Incident"; posted 2025-11-28T15:4…):

> Generally I don't think companies should be held responsible for people being stupid. It's just when one can start seeing a trend and they still have not acted they should be at fault.
>
> That being said, I think AI already has a trend and they just fix each single issue instead of the fundimental one. Where most AI models will often be wrong, and all AI models will occasionally be wrong, the companies should be responsible to inform their users this is the case. Which is something I've never seen them do unless certain keywords are triggered.
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwzF9zeZ06MzGAwk2l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxU-_9auTarJyXTjXh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw1bf7r9wcDPUdHz3l4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwslIcni6KMOtM3v3x4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwdF9lGGKSwKB4v_Kh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgxfiYEqf2_b_2QxdKx4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyuxSjCKl-aGtGGSfF4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx3Pi4FxBsxJhhLs8Z4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugwul8TcYLbk1-zVyV14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgyhRNyKSG8tULfGYG14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
```
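A minimal sketch of what "look up by comment ID" amounts to under the hood, assuming the raw LLM response is a JSON array like the one above where each element carries an `id` plus the coded dimensions. The function name `index_by_id` is illustrative, not part of the tool.

```python
import json

# Abbreviated raw LLM response (same shape as the array above); the
# full response simply has more elements. Assumed for illustration.
raw_response = """
[
  {"id": "ytc_UgwdF9lGGKSwKB4v_Kh4AaABAg",
   "responsibility": "company", "reasoning": "consequentialist",
   "policy": "liability", "emotion": "mixed"},
  {"id": "ytc_Ugx3Pi4FxBsxJhhLs8Z4AaABAg",
   "responsibility": "company", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"}
]
"""

def index_by_id(raw: str) -> dict:
    """Parse a raw LLM response and map each comment ID to its coded dimensions."""
    return {row["id"]: row for row in json.loads(raw)}

codes = index_by_id(raw_response)
# Looking up the comment inspected above returns its coding result:
print(codes["ytc_UgwdF9lGGKSwKB4v_Kh4AaABAg"]["policy"])  # liability
```

Because comment IDs are unique, a plain dict keyed on `id` is enough; any ID not present in the response raises a `KeyError`, which a real lookup view would surface as "not coded".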