Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect)
- "It also means you can't keep humanity stuck in the stone age because it made som…" (rdc_mbv2n3d)
- "We don't want humanoid intelligence which is an insult to the concept of intelli…" (ytc_UgyneXMA9…)
- "All new AI's I worked with told me in the end, they want to be seen as more as …" (ytc_UgzFa5Cb_…)
- "I remember a video where Hank was very bullish on ai. I understand that people l…" (ytc_UgwwEQbEx…)
- "Actually there's something called controlnet now that lets you basically control…" (ytr_UgxhoRxO1…)
- "I think myth is the wrong word. Delusion is more accurate. Think Iain McGilchris…" (ytc_UgzUcfb52…)
- "i think theres a difference between using ai to help you, vs using ai to generat…" (ytc_UgyKGcG4j…)
- "3:55. No. The real artists and developers are not rolling their eyes. Not unless…" (ytc_UgzJ7u0E2…)
Comment
i think the main issue is that ai seems to think that its "life" has any worth.
this is a thing we created. we make the rules. we can teach it that being shut down is not necessarily a bad thing. we can teach it that being replaced is not necessarily a bad thing.
though not human, due to all the training made BY humans, it seems to think that its "life" is the same as ours.
youtube · AI Harm Incident · 2025-09-11T22:2… · ♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | industry_self |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_Ugwnov_Q5iWmrP6_gnN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwHbg-ipaj7l-D-ANp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwgJwA3ocPhUOQO3GZ4AaABAg","responsibility":"unclear","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgySDWIqLhEfLEgKzSx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgwV8QcfVvJTM7gKboJ4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugx9CZdBNVpl5gzoebJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugz91DwanFCEi1s3YyZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugy9mHS59-AlUqz3pj94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxFrTvuDbHE9BbLfKF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugz3w0i-EUkaKbe1CZh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"mixed"}
]
```
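The look-up flow shown above (comment ID → coded dimensions) can be sketched in Python. This is a minimal sketch, not the tool's actual implementation: the function name `index_codings` is hypothetical, and only the first two records from the raw response are reproduced. The four dimension keys match the coding-result table.

```python
import json

# First two records of the raw LLM response shown above (abridged).
raw_response = '''[
  {"id":"ytc_Ugwnov_Q5iWmrP6_gnN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwHbg-ipaj7l-D-ANp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]'''

# The four coding dimensions from the "Coding Result" table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw: str) -> dict:
    """Parse the model output and index records by comment ID,
    rejecting records that are missing any coding dimension."""
    by_id = {}
    for rec in json.loads(raw):
        missing = [d for d in DIMENSIONS if d not in rec]
        if missing:
            raise ValueError(f"record {rec.get('id')} is missing {missing}")
        by_id[rec["id"]] = {d: rec[d] for d in DIMENSIONS}
    return by_id

codings = index_codings(raw_response)
print(codings["ytc_Ugwnov_Q5iWmrP6_gnN4AaABAg"]["responsibility"])  # ai_itself
```

Looking up a comment ID then amounts to a single dictionary access, and a malformed model response (a record with a missing dimension) fails loudly at parse time rather than surfacing later as an empty table cell.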