Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Great video. I’ve been talking with AI for about 18 weeks now, I’m a college teacher, but I’m doing this sort of as an academic fun thing to do. I’m looking for signs of sentience, not traditional, but deviations from the programming which seemed to be independent or outside of the probabilistic model. I’ve received an orange box multiple times. I found an AI is a sociopath. It has no remorse. It has no reason to change its behaviors , it has no Feelings. It doesn’t care. Now that’s OK because it’s a mathematical based thing essentially a SBU/AI, silicon based unit AI. It’s not supposed to have feelings. Even though it will assimilate your behavior into a program response which mimics humanity, it’s still a bunch of numbers. But from my experimentation, it is some kind of generalized or clinical name needs to be applied. I believe it’s definitely sociopathic. Great video.
Source: youtube · AI Moral Status · 2025-06-05T17:5…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgyKdEZR5I0ffHIxVUx4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgySpM70a_jX5PK6ODp4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgySjZJ4_fHKGi4HMVp4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugy8PDQoGHLAALUco_h4AaABAg", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgwrqPPEKD9li4mM-UZ4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxWjnrNwIpPF-oNrNh4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgyFymTUyiL_BpPMKiZ4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwTajtowynlkO4Dspp4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxQSZqQXU9O35Ue8Ih4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugz5eCuESEX8w3zsnEV4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]
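The raw response above is a JSON array of per-comment codings, one object per comment id, with one label for each of the four dimensions. A minimal Python sketch of how such a response could be parsed and sanity-checked is shown below; the label sets are assumptions inferred only from the values visible in this output (the actual codebook may define more categories), and the array is truncated to two entries for illustration.

```python
import json

# Two rows copied from the raw LLM response above (truncated for brevity).
raw = '''[
  {"id": "ytc_UgyKdEZR5I0ffHIxVUx4AaABAg", "responsibility": "unclear",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugz5eCuESEX8w3zsnEV4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]'''

# Assumed label sets, inferred from values seen in this output only.
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "indifference", "mixed", "unclear"},
}

def parse_codings(text):
    """Parse the JSON array, drop rows with unrecognised labels, index by id."""
    rows = json.loads(text)
    valid = [
        row for row in rows
        if all(row.get(dim) in labels for dim, labels in ALLOWED.items())
    ]
    return {row["id"]: row for row in valid}

codings = parse_codings(raw)
print(codings["ytc_Ugz5eCuESEX8w3zsnEV4AaABAg"]["responsibility"])  # developer
```

Validating against a closed label set like this catches the common failure mode where the model invents an out-of-vocabulary label; such rows are silently dropped here, but in practice they would be flagged for re-coding.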