Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I'm not the kind of person that goes into a panic over silly crap, most of the madness in the world doesn't even really register with me anymore. I believe if you allow too much of those things to invade your life, it will consume you and have very adverse effects on your life and health, however. AI is one of those things that does really concern me. The term AI is nothing new, but I remember the first time I heard about some of these newer developments it sent chills down my spine. Now a few years later things have advanced far more and become far more terrifying and from my perspective it's starting to feel like the scales have tipped, and the ominous feeling that we're at the point of no return is starting to set in. It's funny, yet unsettling when AI gets brought up and comparisons to the move Terminator are made and people laugh it off. The truth is; if left unchecked and something isn't done to stop AI there is a very good possibility that humanity really could suffer a similar fate. Frankly I believe that AI as a whole should be universally outlawed because of the dangers involved, but even if that happens we all know that the clandestine activity will continue behind closed doors. The end of the world probably won't happen in the next few years, but it was inevitable that human beings would do something evil enough to destroy a beautiful thing. For some reason we have a desire to do bad things and the bar is constantly being raised and we become more and more brazen about our behavior. We were suppose to evolve and learn from our mistakes, and AI is proof that we've done the exact opposite.
youtube AI Governance 2024-01-19T08:1…
Coding Result
Dimension       Value
---------       -----
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugz_duGP0OGXWnXnFyV4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxA0kH6BKnXRqyzsDJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugwm6-ObjaoqDkpmXRx4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwxDZauB8-B6aPAi5t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxBXu_lyUwXTFf1Yql4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxHmEY3S_nMuSBcTrx4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwB04mSG3a_oGmq0n14AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugwe5gsa_lKC-r3fnXx4AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugwom7moP5o2Je5nBrJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyKWd3I99B_GfVzkpp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"}
]
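A raw response like the one above can be turned into usable records by parsing the JSON and checking each dimension against the codebook. The sketch below is a minimal, hypothetical example: the allowed value sets are inferred only from the records shown here, and the actual codebook may define additional categories.

```python
import json

# Allowed values per dimension, inferred from the records above
# (assumption: the real codebook may contain more categories).
ALLOWED = {
    "responsibility": {"ai_itself", "company", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "liability"},
    "emotion": {"fear", "outrage", "mixed", "indifference", "approval"},
}

def parse_coding(raw: str) -> list[dict]:
    """Parse a raw LLM response (a JSON array of coded comments)
    and validate every dimension value against the codebook."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(
                    f"{rec.get('id')}: unexpected {dim!r} value {rec.get(dim)!r}"
                )
    return records

# Hypothetical usage with a one-record response:
raw = ('[{"id":"ytc_x","responsibility":"ai_itself",'
       '"reasoning":"mixed","policy":"none","emotion":"fear"}]')
records = parse_coding(raw)
print(records[0]["emotion"])  # fear
```

Validating at parse time catches malformed or off-codebook LLM output before it reaches the stored coding results.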