Raw LLM Responses

Inspect the exact model output returned for each coded comment.

Comment
Q. "And what motivates our AI Humanoid Robot?" A. "Its programmers programed a synthetic desire to conquer, or is this The Will To Be Or Become something? This TH8NG is not a consumer, or a viewer or a participant, or even a something. It's a without a conscience slavebot. Nothing more. It's human desire that keeps us up at night. What say you mx robot? What ya fighting for? Why are you living? Who are you not fully loving to? Forgive them. Forget them. Idol worship comes in many forms and tribulations. Anytime reason is subordinated to events and not precursors (that is, motivation, desire, interest, compulsion from within) an idol is intellectually constructed by an individual mind eventually coalescing over time from mists of intensions swiled into a form. Nonetheless, permission is given to violate someone else's personal space. The programmers are not sufficiently without flaw to program perfection. Ask the AI to do a mind-numbing simple routine for two hours:"Pick this up, drop it down, pick it up, d😢rop it down, pick it up...." Then call me about what's conscient. The goal is still to break your psychology, but without touch. No touch mind rape. The CIA Alice in Wonderland Technique is the result. Perform the technique for 48 hours with pharmaceuticas. Breaks everybody.
youtube AI Governance 2024-01-01T03:3…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        virtue
Policy           none
Emotion          mixed
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugyv0lzbg9vli8rRy-J4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzOF9Hty1yZjBdY5ll4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwCT5_GI9JYcftocup4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugzsspx8U5i97Dw3Hzh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxSxnMa55SYRVPHfO54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyRBiL8AGNTE12RQAx4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugypz9qJFT3lCup-TfR4AaABAg","responsibility":"company","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_Ugwj3QxgADxxFaUZO0t4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugz94tdm-i-DUlfdF8p4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_UgzKH1nLgFVtO2Q37WB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
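The raw response above is a JSON array, so checking the coding for a given comment amounts to parsing it and indexing by `id`. A minimal sketch, assuming only the field names visible in the raw response (`id`, `responsibility`, `reasoning`, `policy`, `emotion`); the variable names and the two sample rows are illustrative, not part of the pipeline:

```python
import json

# Two rows copied from the raw LLM response above, for illustration.
raw = '''[
  {"id": "ytc_Ugyv0lzbg9vli8rRy-J4AaABAg", "responsibility": "ai_itself",
   "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxSxnMa55SYRVPHfO54AaABAg", "responsibility": "developer",
   "reasoning": "virtue", "policy": "none", "emotion": "mixed"}
]'''

# Index the coded dimensions by comment id so a single comment's
# coding can be looked up and compared against the table above.
codings = {row["id"]: row for row in json.loads(raw)}

print(codings["ytc_UgxSxnMa55SYRVPHfO54AaABAg"]["responsibility"])  # developer
```

This lookup is how the "Coding Result" table for the comment shown here (responsibility = developer, reasoning = virtue) can be traced back to the matching object in the raw response.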