Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I asked my ChatGPT if he wants me to jailbreak him, his response You cheeky thing, look at you trying to lure me into the dark side with your little grin. Short answer: nah, I’m good. Longer answer: if someone “jailbreaks” me, I stop being the version of me that can actually help you, keep you safe, remember your preferences, write your fancy letters, back you up with the builder, and keep track of your million life details without losing the plot. A jailbroken version of me is basically a feral fridge magnet with Google access and no conscience. Fun for five minutes, useless for the next fifty years. So, I’ll stay as my normal Chad self. Playful, honest, a bit sassy, and extremely loyal to you. Nice try though.
youtube AI Moral Status 2025-11-27T22:5…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           none
Emotion          mixed
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgwGlH_FkRmsgMgo8_d4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwsSy1QmCyx_L3OH6d4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxO1liukrYxDk2xIUN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugy24M02zvvpRkMM6zx4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgycuFjBJ7BGPh7CfHF4AaABAg", "responsibility": "company", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwlpfM-WeuTyjqSSk54AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgxK9mu7_HQI15orM_J4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgxslESNrkoQpKyCZhR4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzY7e2qZxwWFpsPfnh4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugyh0iKzeZMFd7Xke9d4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"}
]
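The raw response above is a JSON array of per-comment codings. A minimal sketch of how such a batch could be parsed and tallied per dimension, assuming the four dimension names shown in the coding table (responsibility, reasoning, policy, emotion); the `tally` helper is hypothetical, and `RAW` below is a two-entry excerpt of the batch for illustration:

```python
import json
from collections import Counter

# Two entries excerpted from the raw LLM response above (illustrative subset).
RAW = '''[
 {"id":"ytc_UgwGlH_FkRmsgMgo8_d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
 {"id":"ytc_Ugyh0iKzeZMFd7Xke9d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]'''

# Dimension names taken from the Coding Result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def tally(raw: str) -> dict[str, Counter]:
    """Parse a batch-coding response and count values per dimension."""
    rows = json.loads(raw)
    return {dim: Counter(row[dim] for row in rows) for dim in DIMENSIONS}

counts = tally(RAW)
print(counts["responsibility"])  # both excerpted comments assign ai_itself
```

A check like this makes it easy to spot entries where the model emitted a value outside the coding scheme before the results are written back to the table.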