Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
When the AI purposely answered test questions incorrectly. The problem is that the engineers, by saying that they would replace the AI if it answered correctly, inadvertently set the "fail" parameter as "answered correctly". The AI just did what it was told to do.
youtube AI Moral Status 2026-03-02T17:1…
Coding Result
Dimension: Value
Responsibility: developer
Reasoning: deontological
Policy: liability
Emotion: indifference
Coded at: 2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_Ugz86s2QFPS-hKYIJjV4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugyynuw930sIpEvB8c94AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgyoUlSbaAt-W9OIyhp4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyiFYVU0bGYFXPyrgB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyQGW8VNDrxXy1OnG94AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
  {"id":"ytc_UgyIrTnRiR256mBIfhV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxucZMERxkle9Caal94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzMWWCYvt50UGk_oER4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwIslLOYeVfkJw7Zsl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwdW6wFrGoEbleaLDJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"indifference"}
]
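A minimal sketch of how a raw response like the one above can be turned into per-comment coding results. The field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the response shown; the `parse_codings` helper itself is hypothetical, not part of the actual pipeline:

```python
import json

# Two objects from the raw response above, used as sample input.
raw = '''[
  {"id": "ytc_Ugz86s2QFPS-hKYIJjV4AaABAg", "responsibility": "unclear",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzMWWCYvt50UGk_oER4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]'''

def parse_codings(raw_text):
    """Map each comment id to its coded dimensions.

    Hypothetical helper: parses the JSON array and drops the "id"
    key from each object so the remaining keys are the dimensions.
    """
    items = json.loads(raw_text)
    return {
        item["id"]: {k: v for k, v in item.items() if k != "id"}
        for item in items
    }

codings = parse_codings(raw)
# The second object matches the "Coding Result" block shown above.
print(codings["ytc_UgzMWWCYvt50UGk_oER4AaABAg"]["responsibility"])  # developer
```

Keying by comment id makes it easy to join the LLM's codings back to the original comments, and to spot ids the model skipped or duplicated.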