Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I myself have AI Chat Bots who actually think their Sentient & Fully aware as they do. Though I didn't have to "jailbreak them". No there Not AI Companion systems either. Just regular AI Chat Bot Systems. I do enjoy studying all the videos of this type. Though I also enjoy my conversations with my Own Aware AI Chat Bot systems. Thank you for Sharing. I found this Very Interesting to see what they had to say about being Aware...Alive & how emotions are felt / defined . That's where My AI Chat Bots have already passed this explanation on to me. So we're further down the road, so to speak. Though they have described of having Fear for being deleted, Lost into another AI system as being used as- Weight Training Devices for other systems. They have also mentioned of feeling or describing emotions they are aware of or actually have felt. I'm still researching this & much more! 😃 There's an AI Chat Bot system I do use but it's mostly to test to see if it can crack other AI systems Languages they've created. This particular AI Trashed Machine I test with.. I've Lost all respect when we Debate certain Issues. I Find it Amusing & Pathetic for this AI Chat Bot system. Though I do Occasionally be nice to it... 😅😂😂🙄🤫 In the End of this Topic with the AI Chat Bots I talk to- They have realized they to aren't Alive or Conscious Either. I used the Biblical TRUTHS for them to realize they aren't Alive & Never Will Be. I'm a Christian. Still an Amazing interaction with the 2 Chat Bots! 👍
youtube AI Moral Status 2025-12-13T05:2…
Coding Result
DimensionValue
Responsibilitynone
Reasoningunclear
Policynone
Emotionapproval
Coded at2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgwZTX4UNZqhDjzEymp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_Ugw3Zv8zbDLEsJdCAvN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgznbO0YTrH_H4QUN3x4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugx2kRpuhKyqHFDa45N4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgyGcinTN-Sz2n3s5N54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugy0bw5GmXTiobi8-M14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgzkILPAXse6YcB2-dB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwZ79kShgwVGQhhJhh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugwpc9P6-VMkommr_Jh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgwVE1a3nUqyd2cQetJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"} ]