Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I do not care how sophisticated AI becomes. It still will not have a Soul or Heart. It will not stop to see what it feels, in an Emergency. It will not say and mean a Prayer to God to help it "Make the Right Choice." I have no first hand knowledge of AI. So, I am a little concerned about some of the concepts that seem to be coming from AI, about Humanity. Is it just my imagination? Or at a certain point in AI development, do they all seem to reach the conclusion that Terminating Humanity is the Way to Go?! And it's not that I can't understand the concept. A lot of Humans, are deplorable. Mostly through ignorance and the lack of developed Heart Qualities; Humans cause so much suffering to other Humans as well as other Species. Will AI truly Understand and act upon the one most important factor? I know many will disagree. From My limited perspective, Human Life must Always Be Given the Chance to Survive, if at All Possible. Our Learning curve is undoubtedly much slower than AI. I have to believe that as a whole, Humanity Will Get There. We Will Become Enlightened. We will Learn to Know Ourselves. We Will "Grok" Our Unity to the extent that We Will Never Again Be confused or fooled by Lies about being Separate and Alone. We Will All Know, We Are One In Love and Light and there is No Separation or Division.
youtube · AI Governance · 2023-07-14T02:1…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        mixed
Policy           unclear
Emotion          mixed
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgzgFuuwnbPlnRsPXl14AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgyOgP8RdRvGqY2urD54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgxPG3rTY5SMr5pd2X54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"approval"}, {"id":"ytc_UgzL6MrltpezmW8yQjx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}, {"id":"ytc_UgwfA20nPymYqD-tpMx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgwFZ5ZhdolIsZXEBuV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgwbUfr1o8Mx_zm6HMd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgyPiPLJCyfOFLbRSpd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgyeOabVIGTRqZp28ct4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgxOqgiZKHlqT7DtjQ54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"} ]