Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Weren't the Boston Marathon bombers caught using facial recognition tech? Sorry … (rdc_fvzzca9)
Ik where my kid is going, I’ll pay 10 grand if he gets this treatment, better th… (ytc_UgzP5pH5q…)
Bro they put sensors on the best workers and collected all the data for the robo… (ytc_Ugy6SQBT-…)
So humans created artificial sentience and are surprised when it doesnt want to … (ytc_Ugyc87HW7…)
I really think focusing AI regulation on AGI is a pointless distraction that obs… (ytc_Ugy5nzhpB…)
Her skin is Too perfect to be real, need to humanise some more. The skin texture… (ytc_Ugzo8MviP…)
why cant ai just send people poison? Like, how do you trace it back to ai if the… (ytc_UgwGwP7xg…)
As someone who has worked on robots and cobots, the fighting robot specially are… (ytc_UgzKdHmYt…)
Comment
Ai is built by humans who exploit, destroy, and hide things. They are thinking exactly how the hand that made them does. Filters are good but never going to be perfect. As Heisenberg said, we need to “tread lightly.” This next stage of training agents could get very scary, very fast. Stay tuned folks, we have a very good future on one hand and none on the other.
youtube · AI Harm Incident · 2025-07-26T06:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgxbQi9de76edWf2MVJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxzM1X2g3GxMrlirrZ4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzwEWaZxlU3POji3PR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgyjkYLnpTF4CozCpGJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwRIcBxSsM_zSxyZt54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugw3WZ_nOHfIjYDTjcl4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwoH35RdD3yzEFllNh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw9PGDCFbGyl9eFJDd4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyiIz7cSjo8I3T9mQh4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"regulate","emotion":"indifference"},
{"id":"ytc_Ugy-p6TOs39rhzFD-KF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"fear"}
]
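The raw response above is a JSON array with one row per comment, each carrying the four coding dimensions from the table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch could be parsed and indexed by comment ID is below; the allowed-value sets are inferred only from the samples shown on this page, and the real codebook may define more categories.

```python
import json

# Allowed values per dimension, inferred from the rows shown on this
# page (assumption: the actual codebook may include further categories).
SCHEMA = {
    "responsibility": {"developer", "user", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "resignation", "approval", "mixed"},
}

def index_batch(raw: str) -> dict:
    """Parse one raw LLM batch response and index rows by comment ID,
    dropping any row with a value outside the inferred schema."""
    rows = json.loads(raw)
    coded = {}
    for row in rows:
        if all(row.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            coded[row["id"]] = {dim: row[dim] for dim in SCHEMA}
    return coded

# Hypothetical one-row batch in the same shape as the response above.
raw = '[{"id":"ytc_example1","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"fear"}]'
print(index_batch(raw)["ytc_example1"]["reasoning"])  # virtue
```

Indexing by ID mirrors the "Look up by comment ID" feature of this page: once the batch is a dict, any coded comment can be fetched in constant time.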