Raw LLM Responses
Inspect the exact model output for any coded comment, or look one up directly by comment ID.
Random samples — click to inspect:

- "Looks like that ChatGPT's reasoning is no different than that of a human being..…" (ytc_Ugy9eoM9u…)
- "Well, based on the shitty advancement we experienced going from GPT4 to GPT5 loo…" (ytc_UgxzDwMWn…)
- "Human level machine intelligence must be possible (even without humans getting d…" (ytc_UgwRhW6yd…)
- "AI is playing poker, yesterday AI had a pair or 8s , today they have a poker, to…" (ytc_UgwqV5Sn5…)
- "Also, AI art can literally lead to false fraud accusations which can be extremel…" (ytr_Ugy8NJ2Fd…)
- "Arurora sound's like Tesla to me peddling BS just to sell their system's and the…" (ytc_UgzkQhYg_…)
- "Humans Vs Artificial Intelligence in this type of situation can merge as well, A…" (ytc_UgzDYSmhp…)
- "So what stops those people from destroying the data centers that are draining th…" (ytc_UgwBbSysL…)
Comment

> It happened in 84 percent of tests where blackmail was the only option. When it has other, more moral and reasonable options, it always took them. They are totally conflating the seriousness of the situation. What the tests really showed is that, given the option to take a more morally acceptable route to avoid being shut down, the AI model ALWAYS opted for it. And it avoided being shut down on the first place so it could complete the task it was assigned. So even then it was all predicated on the task that we assigned it no matter how it went about avoiding being shut down

youtube · AI Moral Status · 2025-06-06T16:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_Ugzw_ujHwLIGocj0QNV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz6r5OUukq4f6BPUyB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzT1bhXlVBhZsVRsKR4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgwBpQXwNcvDZYTPPMV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugxl9FblWBXgMoa-pQV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzIVIbRrSLzaqhjFqt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgzROByz4efKMWDao1l4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugx604LZXSajjwrf0c14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyFyjXCNTGuRUI_-i94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgzAVHrEn2t0B3Pl1Dl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}
]
```
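The raw model output is a JSON array with one object per comment, coded on four dimensions. A minimal sketch of parsing and validating such a response in Python, where the allowed value sets are inferred from the codes visible above (hypothetical — the actual codebook may define additional categories):

```python
import json

# Allowed values per dimension, inferred from the sample output above
# (assumption: the real codebook may include more categories).
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "company", "government",
                       "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "mixed", "indifference"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response, keeping only entries whose values
    fall inside the codebook's allowed sets."""
    entries = json.loads(raw)
    return [
        e for e in entries
        if all(e.get(dim) in values for dim, values in ALLOWED.items())
    ]

raw = ('[{"id":"ytc_example","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"industry_self",'
       '"emotion":"approval"}]')
print(parse_codes(raw))  # the single valid entry survives validation
```

Dropping out-of-codebook entries (rather than raising) lets a batch run continue past occasional malformed model output; a production pipeline might instead log and re-prompt for the failed IDs.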