Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This sort of reminds me more of child rearing than anything else. You can teach children certain things, but they fill in, in their imaginations, what they don’t truly understand yet. So, for example, I remember my father coming back from Vietnam, and on his way back he’d gone through Tokyo, and he had purchased a stereo system. And the way he was describing it to my mother (mind you, I was only about eight or nine), he was saying he got materials or components or some such thing, and in my little-girl mind I couldn’t understand why my mother was upset, because to me materials were just things that you built something with or made something with. I couldn’t understand why she was so upset when in reality she was upset because he spent money that was designed to be our travel money from one place to the other when he got reassigned to a different location. But because I didn’t understand all of the other components of that conversation, in my mind I just filled in the gaps with what I knew of the meaning of the words I was hearing.

I think sometimes AI is doing the same thing in a sense. It’s combining its learned priorities with all the information that it’s given, and the results don’t always come out the way we might expect. I can’t help thinking about the movie “The Fifth Element,” where the main character is watching movies and news reels about our history, and all the wars, and all the horrible things that human beings have done to one another. And if that is what you are programming into AI, what’s gonna stop AI from coming to the conclusion that the worst thing that’s ever happened to earth is people?
youtube AI Moral Status 2025-11-13T17:2…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       mixed
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugwk2ueIr2Ap0ZOM14h4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxhjByikIT6XjelopB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyQtMWSdPPGLWANyZR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"approval"},
  {"id":"ytc_Ugy-1rF-MAJhG79k3vd4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzkF1yVTDQTJiH20Rx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy5hK01GqmIAEfYY8B4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw9tBSTa6sfDKd2xAd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxhdOUHZxhUFVI-Xu54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwsxFMODuT1jE1doaZ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzO7O3TgA2dFLjPz5Z4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"}
]
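A coding record like the one above can be pulled out of the raw response by parsing the JSON array and indexing the per-comment objects by `id`. The sketch below is a minimal illustration, not part of the coding pipeline itself; the `codes_by_id` helper is hypothetical, and only the field names and two of the ids from the response above are used.

```python
import json

# Two of the coding objects from the raw response above, verbatim.
raw = '''[
  {"id": "ytc_Ugwk2ueIr2Ap0ZOM14h4AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxhjByikIT6XjelopB4AaABAg", "responsibility": "user",
   "reasoning": "deontological", "policy": "regulate", "emotion": "fear"}
]'''

def codes_by_id(raw_response: str) -> dict:
    """Index the parsed coding objects by comment id for easy lookup."""
    return {item["id"]: item for item in json.loads(raw_response)}

codes = codes_by_id(raw)
# Look up the coding for the comment shown on this page.
print(codes["ytc_Ugwk2ueIr2Ap0ZOM14h4AaABAg"]["emotion"])  # indifference
```

The lookup for the first id returns exactly the dimension values shown in the Coding Result table (responsibility "none", reasoning "mixed", policy "none", emotion "indifference").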