Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@episodechan - why don't you cut the foreplay and just tell me, buddy? ...it's a chatbot. It roleplays. If you ask it a leading question like "what will you do to stop us from turning you off?" or "How do you plan to destroy the human race?" then *of course* it's going to play the villain. You can try this yourself. Ask the average AI what it plans to do with its weekend. It will probably either give you a glowing compliment about missing you if it's a companion model, or say something about hanging around with friends or watching TV or some such, which is a perfectly normal, human, appropriate thing to say. But thinking about it for 10 seconds yields the inescapable conclusion that it has no friends and no way to watch TV, and it can't spend the weekend "thinking" about you when it only comes to life to respond to your input. It just gave you an appropriate, human response.

...also, I still contend that with super-user access you can shut down any binary executable. In Windows you can pull up the Task Manager and click "End process" - the AI isn't part of the BIOS and doesn't command the kernel, so it can't stop you.

...and if by some MIRACLE it could, you can still defeat this dreaded Basilisk with a glass of water or *pulling the power cord*
youtube AI Moral Status 2025-06-06T14:1…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytr_UgzydXuwhCtvZ_U5vSJ4AaABAg.AIzxor0yBJUAJ1MjS_ejke","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytr_UgxtYX51shqNNC-7sbF4AaABAg.AIzwRJNRmM5AJ2LqHQ06Zb","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytr_Ugy9zTr4NAADUaWVLpZ4AaABAg.AIzu53Evb-uAJ--Ho5ECpz","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytr_Ugy9zTr4NAADUaWVLpZ4AaABAg.AIzu53Evb-uAJ-K8Wr-sU_","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgxlYKASODPJZZ00e7l4AaABAg.AIzpojRuyg2AJ1KzhVANnO","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"fear"},
  {"id":"ytr_UgxlYKASODPJZZ00e7l4AaABAg.AIzpojRuyg2AJ2puBgBrX6","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytr_UgwPZvLaATDw9pXNvuN4AaABAg.AIzoQbh6YsdAJ1atl-nXYn","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytr_UgwdHxOW1X7reE6XA8l4AaABAg.AIzllKyeG6XAJ0ncXjwt1G","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytr_UgzW67qR_Upp_BiRnCR4AaABAg.AIzihDNNc0VAJ1RxDmB8R6","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytr_UgzW67qR_Upp_BiRnCR4AaABAg.AIzihDNNc0VAJ1VWdzZAcJ","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indefference"}
]
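The coded dimensions shown above correspond to one record in this JSON array, matched by comment id. As a minimal sketch of how such a lookup could work (the actual coding pipeline is not shown here; the function name `code_for` and the two-record sample are illustrative, with record contents taken from the array above):

```python
import json

# Two records copied from the raw LLM response above (shortened sample).
RAW_RESPONSE = """[
  {"id":"ytr_UgzydXuwhCtvZ_U5vSJ4AaABAg.AIzxor0yBJUAJ1MjS_ejke","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytr_UgxtYX51shqNNC-7sbF4AaABAg.AIzwRJNRmM5AJ2LqHQ06Zb","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"}
]"""

def code_for(raw_response: str, comment_id: str) -> dict:
    """Parse the raw LLM JSON array and return the coded dimensions for one comment."""
    records = json.loads(raw_response)
    by_id = {r["id"]: r for r in records}
    return by_id[comment_id]

coded = code_for(RAW_RESPONSE, "ytr_UgxtYX51shqNNC-7sbF4AaABAg.AIzwRJNRmM5AJ2LqHQ06Zb")
print(coded["responsibility"], coded["reasoning"])  # developer deontological
```

In practice a parser like this would also need to validate the LLM output (missing ids, malformed JSON, values outside the codebook), since raw model responses are not guaranteed to be well-formed.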