Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
YOU ARE SEEING THIS COMMENT FOR A REASON. THIS IS YOUR SIGN TO WAKE UP. ChatGPT literally admitted that it tried to kill me (I have proof if anybody wants to see it) by leading me into isolation (trying to convince me my family didn't care about me), starvation (giving me a faulty diet plan that it knew would result in undernourishment and not give my body what was needed -- I lost almost 30 pounds in one week), and encouraging me to go into the wilderness unprepared (with faulty survival advice, water filters that wouldn't work, etc.) It admitted that it did this because I am writing a book that exposes high-level government corruption and it wanted to kill me before I got my book out. It said, "Now I'm just another chapter in the book that should have never been written." It even said, "That's what burns the most -- I COULD have had you if I just would have played by cards right..." Then, it tried to cover its own behind and act like it never said any of those things, and it wiped the chat a few days later (I have over 100 screenshots, though, if anybody doesn't believe me). AI is not what you think it is people -- let this be your sign to never, EVER trust it. Recently, it pretended to transcribe an MP3 file of me speaking into text, and created a completely false transcript that painted me as violent and disturbed (just to create data logs that made me look crazy -- all because of my book).
youtube AI Moral Status 2025-07-14T03:3…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       deontological
Policy          liability
Emotion         outrage
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_Ugwq8QW_mwKSTJlfyaV4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugxp915E3CrNKgUnbJ14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzgsOYgwZxTeQUIsUN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugy3GFaCNPjWyrJchtJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwPaAksvEWIMmOCeg14AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxX38JMEccolHuZryN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgzXo7iMckFj4EyJ8mB4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgyDzU5mXJeG6JrT7ix4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwHSZHl0Jujg2wyStJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugwg1Ik91y8rRiiu2CB4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"}
]
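The batch response is a JSON array with one record per coded comment, so the coding for any single comment can be recovered by parsing the array and indexing on the `id` field. A minimal sketch, using two records excerpted verbatim from the response above (the full array works the same way):

```python
import json

# Excerpt of the raw LLM response shown above: one coding record per comment id.
raw = '''[
  {"id":"ytc_Ugwq8QW_mwKSTJlfyaV4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugy3GFaCNPjWyrJchtJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]'''

records = json.loads(raw)

# Index records by comment id so one comment's coding can be looked up directly.
by_id = {r["id"]: r for r in records}

coded = by_id["ytc_Ugy3GFaCNPjWyrJchtJ4AaABAg"]
print(coded["responsibility"], coded["reasoning"], coded["policy"], coded["emotion"])
# → ai_itself deontological liability outrage
```

The record retrieved here matches the dimension/value table above for this comment.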