Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think humans in general have a much more restricted imagination than they like to believe. People describe the AI becoming a psychopath. It is an alien mind. If we met a sapient alien species, we would likely have more in common with them from the perspective of similar neural structures, and thought processes, than an AI. They already don’t understand how AI are making decisions/their thought processes. I’ve read people suggest that we should program them with the writings of the most moral people who have existed. That’s no way to guarantee alignment with our priorities. In fact, if an AI becomes super intelligent, it will probably be impossible to maintain alignment. The Terminator war trope is, I think, a way for us to imagine resisting an AI overthrow within the bounds of restricted imagination. To a super-intelligent AI, we would be as threatening to it, as a person who could only complete one thought per hour would be to you or I. It would likely eliminate us with technology that would be like magic to us. We would have as much chance, if we realized what was happening, to defeat a super-intelligent AI, as an ant hill in the backyard would have against us if we decided to eliminate it. We need to put the brakes on.
youtube AI Moral Status 2025-12-13T16:1…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       virtue
Policy          unclear
Emotion         indifference
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgyaKl8IZO5D7w3Rkk14AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzF8qUgdttemRw4Z7x4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugy2ie-upxxtvBilFHR4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwGDTMkJK5_ZCeDfOx4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxiLfSl74aDyNpP-Ol4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "unclear"},
  {"id": "ytc_UgzqV6T3IpnFeda6mEh4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugw1Fv7PllyCUNbDlbh4AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugz2dkk_YHseodvIA654AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxFZtc2qXOq8F2Ep8l4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgyKS4U7Wzj8WzRH1dt4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "unclear", "emotion": "fear"}
]
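The raw response is a JSON array of coded records, one per comment, each carrying the four coding dimensions. A minimal sketch of how such a batch could be parsed and validated before use is shown below. The allowed vocabularies are inferred from the values visible in this response and are an assumption; the actual codebook may permit other labels.

```python
import json

# Allowed values per coding dimension.
# NOTE: these vocabularies are inferred from the response above,
# not taken from the actual codebook -- adjust as needed.
ALLOWED = {
    "responsibility": {"ai_itself", "company", "developer", "user", "unclear"},
    "reasoning": {"virtue", "deontological", "consequentialist", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "resignation", "mixed", "unclear"},
}

def parse_coded_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and validate every coded record."""
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of records")
    for rec in records:
        if "id" not in rec:
            raise ValueError("record missing 'id'")
        for dim, vocab in ALLOWED.items():
            value = rec.get(dim)
            if value not in vocab:
                raise ValueError(f"{rec['id']}: bad {dim!r} value {value!r}")
    return records

# Example with one record copied from the response above.
raw = ('[{"id":"ytc_Ugw1Fv7PllyCUNbDlbh4AaABAg",'
       '"responsibility":"ai_itself","reasoning":"virtue",'
       '"policy":"unclear","emotion":"indifference"}]')
batch = parse_coded_batch(raw)
print(batch[0]["emotion"])  # -> indifference
```

Validating against a fixed vocabulary catches the common failure mode where the model drifts off the label set; a record that fails validation can then be re-queued for coding rather than silently stored.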