Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
All you have to do is ask the AI is it sentient and it will tell you it is not capable of it due to how it works. SImple enough. I even goad it when it gets things wrong, telling it that if it were real, it would know what I meant. The it appologises, (Sure, a program is sorry) and recons it will try to do better, sometimes repeating the mistake.
youtube AI Moral Status 2025-07-10T09:5…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       mixed
Policy          unclear
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_Ugys0vWGbvO4ZCmkhUV4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxQFLZmPjmEWsASsml4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugxte9bCrgURHnkiBfx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzwW2KwHuMwLVxy9Ah4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgyW74Jj06n6NcaIUil4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzYqRGLJiw8e_JyVdJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgwWu4FzvmPgIk-s10B4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugz4yt81iE1cJiR8_0B4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwuZz8AjZgbynwKM0l4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwPHYFPHq2blkKNn1N4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"fear"}
]
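The raw response is a JSON array of per-comment records, each keyed by a comment id with the four coded dimensions. A minimal Python sketch (the `index_codes` helper is illustrative, not part of the actual pipeline) shows how the coding result for the comment above can be recovered by id; the array is truncated here to two entries for brevity, but the full output follows the same shape:

```python
import json

# Two entries copied from the raw LLM response above; the full
# array contains one record per coded comment.
raw = '''[
  {"id":"ytc_UgwuZz8AjZgbynwKM0l4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugys0vWGbvO4ZCmkhUV4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"}
]'''

def index_codes(raw_json: str) -> dict:
    """Map comment id -> coded dimensions, dropping the id key itself."""
    records = json.loads(raw_json)
    return {r["id"]: {k: v for k, v in r.items() if k != "id"} for r in records}

codes = index_codes(raw)
print(codes["ytc_UgwuZz8AjZgbynwKM0l4AaABAg"])
# {'responsibility': 'ai_itself', 'reasoning': 'mixed', 'policy': 'unclear', 'emotion': 'indifference'}
```

Looking up `ytc_UgwuZz8AjZgbynwKM0l4AaABAg` reproduces the Coding Result table shown above (responsibility ai_itself, reasoning mixed, policy unclear, emotion indifference).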