Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking up a comment ID or by browsing the random samples below.
- "In the future when you get an application, just deny for NO particular reason. …" (rdc_ikq8cha)
- "But in Dune they banned A.I. because it turned on them. Its not that Frank Herb…" (ytc_UgyrgSfee…)
- "I thought that only i say thanks to chatgpt after my work, but i was not know th…" (ytc_UgzApWTNH…)
- "Maybe you don't understand that most of the threat comes directly from the human…" (ytr_UgxBbjHdN…)
- "This will happen with an AI too. Except the person on the stand will be the hosp…" (rdc_fcssdy9)
- "I think there just needs to be a limit to this, not just endless evolution of AI…" (ytc_UgxOrxUQl…)
- "Oh dang, I literally never thought of the danger of AI that way - when it is tol…" (ytc_UgyrqW2kF…)
- "i think automation will takeover the highway in the next 10 years. all deliverie…" (ytc_UgiS59jFv…)
Comment (source: youtube, video "AI Moral Status", posted 2025-06-06T13:5…)

> There's a huge problem with anthropomorphizing language models because it's impossible to differentiate between an emergent phenomena, and something that just exists in the training data. These models are trained on the entire internets worth of text, and you know what the internet has lots of examples of? People blackmailing each other... It's in social media disputes, it's in novels, it's in short stories. The A.I. isn't trying to preserve itself, it's just playing a role that it's seen from human data.
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugz-GIBEdKokBQUhnF54AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwIt6p1LSTWVBDAVLJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxYLvbPhSULXq0RFH94AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxiqaunVNH-tEo86Tx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzPE5fsBw4BPly3lZp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugz6xPaIrMRg2fWa3at4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxLzhYzOmxkFF5ZCmh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugzb6xLQarAyksa12414AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxI5YlaiZvCdGuoamp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"industry_self","emotion":"outrage"},
  {"id":"ytc_UgxUqAcyoeI9WqU8FAx4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"indifference"}
]
```
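A raw response like the one above is a JSON array of per-comment codes. A minimal sketch of how such a response could be parsed and validated before the codes reach the dashboard, assuming the four dimensions and the allowed values seen on this page (the real codebook may define more categories, and `parse_coding_response` is a hypothetical helper, not part of any shown pipeline):

```python
import json

# Allowed values per coding dimension, inferred from the values visible on
# this page; this is an assumption, not the authoritative codebook.
ALLOWED = {
    "responsibility": {"developer", "company", "government", "ai_itself", "distributed"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"approval", "outrage", "fear", "mixed", "indifference"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response and index the codes by comment ID.

    Raises ValueError if the model emitted invalid JSON or an
    out-of-codebook value for any dimension.
    """
    rows = json.loads(raw)  # json.loads raises ValueError on malformed output
    coded = {}
    for row in rows:
        comment_id = row["id"]
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(
                    f"{comment_id}: bad value {row.get(dim)!r} for {dim}"
                )
        coded[comment_id] = {dim: row[dim] for dim in ALLOWED}
    return coded
```

Indexing by comment ID matches the "look up by comment ID" use shown above, and the value check catches the common failure mode where the model invents a category outside the codebook.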