Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I'm not happy with the idea of mass-produced cheap unmanned but controlled by men weapon, but.. I see how humans rule over humans through the history. I see how many people are oppressed, put in misery, killed or forced by social stigmatization to the suicide because of irrational and unnecessary state regulations based on moral norms of quite primitively thinking crowds. I see how businesses can't even theoretically overcame own necessary greed what keep them alive now but eventually lead to tremendous waste of the potential of economy and technology in long-term consideration. I see how people can't just let other people do what they like even if that bring absolutely no harm to bodies or property of the former. I see how humanity was not able to develop own civilization during millennia with countless attempts (considering the every culture, nation or kingdom was one of such attempts), I barely succeeding just once, how fragile this civilization and how quickly it can degrade culturally, ideologically, morally, socially, economically and even technically all the way back into Medieval just in terms of decades. I see that humans can't rule, can't develop and can't keep the level of development by themselves. AI? I wish one day all and any management, especially on what we now would call a government level, would be executing by AI and AI only. AI is just rational and have no reasons to keep humans suffer only to satisfy its own vanity or desire to dominate over somebody, neither it have any motive to mess with human personal lives just because some sort of nonsense like 'I know who you MUST live - I have read it in THE BOOK'. So I'm not afraid of AI, instead I really, really wish for human's authority over humans to be replaced by AI's authority. The sooner - the better.
youtube 2018-09-24T02:3…
Coding Result
Dimension        Value
Responsibility   government
Reasoning        mixed
Policy           liability
Emotion          mixed
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_Ugy7Nb2_YA07TegAqOB4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugz8MBRqLygKFaeraLh4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxiKWe746tKMnUTVW14AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyGGUDTLg-AmPS2d4h4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "industry_self", "emotion": "indifference"},
  {"id": "ytc_UgzxK1LPjtuE_ylXbZl4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyvVIlOEaSfLX3tGZ54AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxR8Lh_EVMZjYCdoxt4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwSpnrSWOc-xQkl7ch4AaABAg", "responsibility": "government", "reasoning": "mixed", "policy": "liability", "emotion": "mixed"},
  {"id": "ytc_UgxRVhWEUTXtuJ3BXWB4AaABAg", "responsibility": "distributed", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugxv1DhATvoyZDTpcOJ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]
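The raw response is a JSON array of per-comment codes, keyed by comment id; the table above is simply the entry whose id matches this comment. A minimal sketch of that lookup, assuming only that the model output parses as valid JSON (the `lookup_codes` helper and the truncated sample array here are illustrative, not part of the actual pipeline):

```python
import json

# Subset of the raw LLM response shown above (two of the ten entries).
raw_response = '''[
  {"id": "ytc_UgwSpnrSWOc-xQkl7ch4AaABAg", "responsibility": "government",
   "reasoning": "mixed", "policy": "liability", "emotion": "mixed"},
  {"id": "ytc_Ugy7Nb2_YA07TegAqOB4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]'''

def lookup_codes(raw: str, comment_id: str) -> dict:
    """Parse the model output and return the coding dict for one comment id."""
    by_id = {entry["id"]: entry for entry in json.loads(raw)}
    return by_id[comment_id]

codes = lookup_codes(raw_response, "ytc_UgwSpnrSWOc-xQkl7ch4AaABAg")
print(codes["responsibility"], codes["policy"])  # government liability
```

Indexing by id rather than by array position keeps the lookup robust if the model returns the entries in a different order than the comments were submitted.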