Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Do you think maybe the AI is being this hostile asshole because in reality a lot of us are just assholes and it's learning what we just naturally do on a daily basis. And if this is true, then why don't we rally together and start talking to Chat GPT or the AI in general and start talking more positive and saying things like hey AI, I think you're a pretty cool guy, I don't know. Maybe this is an idea that will help. In personal experience, AI has always been very, very kind to me. I mean I ask it about a lot of horror related things. So like silent Hill or the back rooms but it's never ever been hostile or mean or rude or anything like that But that's just my personal experience. I've actively told chat gpt that I think it's amazing and be kind to the ai. Now yes it's AI but I don't care Im going to tell AI I care about it and that it's loved so it learns not all humans are horrible mean people. So yes I said I love you AI because everyone and everything in this world needs some love and I might look odd but I truly want AI to know and learn were not some evil emotionless creatures. IT LEARNS FROM US so I guess that means we need to put in more effort to TEACH IT KINDNESS AND LOVE. maybe thats why I've never had any weird crazy stuff happen when I use AI idk. This is just food for thought.
youtube AI Moral Status 2025-12-20T18:3…
Coding Result
Dimension       Value
Responsibility  user
Reasoning       virtue
Policy          unclear
Emotion         mixed
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgxqNfrw8iphLTh16Vd4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxoiC0xqwAzUaBYL0p4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyBtIQg2kRTiGkkiW94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzWoEC40gJeYt7ywnh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgygtwX9f95i_k7yxnp4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgykRTIXrau7hc7pgvt4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyaRKse1aqlEr0zErR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgygIHcBZogFHQUVCv14AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugz5s6Tb5yNKaBKhf114AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzdQHqLlMG0r5u_WGF4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"mixed"}
]
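The coding result shown above for this comment comes from the entry in the raw response whose id matches the comment (here, the last object in the array). A minimal sketch of that lookup, assuming the response parses as plain JSON and that `code_for` is a hypothetical helper name, not part of any pipeline described here:

```python
import json

# Abbreviated raw LLM response: a JSON array of per-comment codes.
# Each object carries an "id" plus the four coding dimensions from
# the table above (responsibility, reasoning, policy, emotion).
raw_response = """
[
  {"id": "ytc_Ugz5s6Tb5yNKaBKhf114AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzdQHqLlMG0r5u_WGF4AaABAg", "responsibility": "user",
   "reasoning": "virtue", "policy": "unclear", "emotion": "mixed"}
]
"""

def code_for(comment_id: str, raw: str) -> dict:
    """Return the coding dict for one comment id (KeyError if absent)."""
    rows = {row["id"]: row for row in json.loads(raw)}
    return rows[comment_id]

codes = code_for("ytc_UgzdQHqLlMG0r5u_WGF4AaABAg", raw_response)
print(codes["responsibility"], codes["reasoning"],
      codes["policy"], codes["emotion"])
# → user virtue unclear mixed
```

Indexing by `id` rather than by array position guards against the model returning the codes in a different order than the comments were supplied.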