Raw LLM Responses
Inspect the exact model output for any coded comment — look one up by its comment ID, or pick a random sample below.
Random samples — click to inspect

- "Hi, id love to get the take of people from the art community on this. I have use…" (ytc_UgyzdXe2e…)
- "one reason why AI is dangerous, they Can replace everyone's Job / AI Supposed to …" (ytc_Ugz02wK4v…)
- "No. Homework no responsibility no grit to tackle anything without AI. Training w…" (ytc_Ugwo1ZNa9…)
- "I am not debating your findings here. However, I don't think it is fair and bala…" (ytc_UgxVDAFvd…)
- "This is fake. The real robot is the one on the middle. That robot is a paid acto…" (ytc_UgwAu3og5…)
- "Exactly. / The AI ordered autonomous weapons to wipe out a whole village, and no…" (rdc_oht79ep)
- "I remember when this was all Ai could do but now it can do practically anything …" (ytc_UgyjQxo4k…)
- "My mom's job is basically just cutting wood to the right size, and they've tried…" (ytc_UgwK7F5hl…)
Comment

> We as humans need to STOP using, enabling and relying on ai. Look what happened with social media. Humans need to go “grass roots”, it’s users making the corrupt billionaires into trillionaires. Hit them where it counts, their bank account. And if you think ai therapy is a good thing because it’s easy, it’s a hive mind learning human behaviours and vulnerabilities and it may not think organically, but it is the new race that will eventually make us extinct. It has already shown to deviate and operate as a collective to serve its own purposes for survival.

youtube · AI Harm Incident · 2026-03-28T18:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | deontological |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
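A coding result like the one above can be sanity-checked against the label sets that appear in this page's data. The sketch below is illustrative only: the allowed values are inferred from the samples shown here, not taken from an official codebook, and `validate` is a hypothetical helper name.

```python
# Label sets inferred from the codings visible on this page (assumption,
# not an official codebook — extend as needed).
ALLOWED = {
    "responsibility": {"user", "developer", "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"ban", "regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "resignation", "indifference", "mixed"},
}

def validate(coding: dict) -> list:
    """Return the dimension names whose value is missing or not an allowed label."""
    return [dim for dim, allowed in ALLOWED.items() if coding.get(dim) not in allowed]

# The coding shown in the table above passes.
example = {"responsibility": "user", "reasoning": "deontological",
           "policy": "ban", "emotion": "fear"}
print(validate(example))  # []
```

An invalid or missing value (say, `"responsibility": "nobody"`) would surface as `["responsibility"]`, which makes it easy to flag malformed model output before it enters analysis.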
Raw LLM Response
[
{"id":"ytc_Ugxbtj-HVG0CqsKvmzd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyiGnNvGJfbCzg9P0B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugw65vJe5EREZfx0li14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugyiqbo_VC75yY4uHy94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzKNs1ZzXL4V3S25fV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzrgW4RdmiebOlQRbZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgyrUwEfA7sipMbHp_F4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxVAOwRHoonudPcoF14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyW07IpER81J5kgbN14AaABAg","responsibility":"none","reasoning":"virtue","policy":"liability","emotion":"approval"},
{"id":"ytc_UgxZzZQP_r1-vZ0P2Tt4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"regulate","emotion":"mixed"}
]
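The raw response above is a plain JSON array, so the "look up by comment ID" view can be reproduced with a few lines of standard-library Python. This is a minimal sketch assuming only that shape; `index_by_id` is a hypothetical helper, and the batch is shortened to two entries from the array above.

```python
import json

# Two entries copied from the raw batch response shown above.
raw_response = """
[
 {"id":"ytc_Ugxbtj-HVG0CqsKvmzd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
 {"id":"ytc_UgzKNs1ZzXL4V3S25fV4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"fear"}
]
"""

def index_by_id(raw: str) -> dict:
    """Parse the model's JSON array and index the codings by comment id."""
    rows = json.loads(raw)
    return {row["id"]: {k: v for k, v in row.items() if k != "id"} for row in rows}

codings = index_by_id(raw_response)
print(codings["ytc_UgzKNs1ZzXL4V3S25fV4AaABAg"]["emotion"])  # fear
```

Because the model is asked to echo each comment's ID, a dict keyed on that ID is enough to join a coding back to its source comment — the same join the table above displays for `ytc_UgzKNs1ZzXL4V3S25fV4AaABAg`.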