Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
(READ THIS IN A VERY ANGRY TONE) So, someone creates Ai, then leaves to warn us of the dangers. Alright, thank you for the warning, but perhaps you should have been the one to pull the plug on this technology before asking us to worry about it. Many of us are already trying to live within the unbalanced power struggle between our respective governments, a corrupt banking system, a broken health care system, an impossible housing market, a hyper-inflated economy, and the potential of world war 3. I'm sorry, but there is nothing we can do against this if it is in the hands of the rich and powerful. If this brings on the end then let it bring on the end, because you all had one fucking job, and that was to help us live better lives, but you created technology that would end us instead *BRAVO!!!! WELL DONE!* I suppose in between finding new clients, selling new books, and trying to find enough personal peace to care, I could try to make a call to whoever I am suppose to make a call to to ask them to stop using Ai so much. They may call you the grandfather of Ai, but if this does what you are warning us it might, then you might be known as the harbinger of death in the long run. I don't know, perhaps I am having an off day. I try really hard to see the hope and beauty in this world, but when you stand back and look at how powerless the people have allowed themselves to become, and how much the government knows how to keep us stuck within our own minds, it's getting bloody tough to see the bright side of humanity some of these days. I have heard so many jokes about how Ai is the coming of The Terminator movies, but we have to wake up, because there is an incredible chance that we will be fighting this technology in the coming years. Here is the reality, and to somewhat quote from John Wick 'We will do nothing because we can do nothing.'
youtube AI Governance 2025-07-24T22:5…
Coding Result
Dimension: Value
Responsibility: developer
Reasoning: consequentialist
Policy: liability
Emotion: outrage
Coded at: 2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgwgXFeqQxj8lbwxDIF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwXA5_wuVjICcSTaJh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxIjQ3yDFzE_JuOKCl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgyT6k52gjwBfdckVs54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwaAoNcLHzUf1NLCbd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxuwkdYFToDSgwCuPt4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwBEJNpb6RpdlM15NV4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxEl97UrRSkBBI1Xa14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugz56JAJYsyjYsSdA8N4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzYCA62EZfXmKyS23J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}
]
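The raw response above is a JSON array with one coded record per comment, keyed by comment `id`. A minimal sketch of how such a batch can be parsed and the record for the displayed comment looked up (assuming Python's standard-library `json` module; only two of the ten records are reproduced here for brevity):

```python
import json

# Excerpt of the raw LLM response above (first two of the ten records).
raw = '''[
  {"id": "ytc_UgwgXFeqQxj8lbwxDIF4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwXA5_wuVjICcSTaJh4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]'''

codes = json.loads(raw)

# Index the batch by comment id so a single comment's codes can be retrieved.
by_id = {record["id"]: record for record in codes}

record = by_id["ytc_UgwgXFeqQxj8lbwxDIF4AaABAg"]
print(record["responsibility"], record["emotion"])  # developer outrage
```

The same lookup yields the Dimension/Value pairs shown in the coding result above.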