Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
When people with limited minds gain experience over time and look at complex questions, their answers are still limited. He's trying to answer a question regarding a system (global, international politics, economics and social behavior) through one lens alone ... And wondering why it doesn't work out. It's the same reason amazing car engineers and developers strangely manage to create garbage car operating systems. Missing perspectives. You lose information, whenever you have to communicate how one complex system works to another person that only knows another complex system, while both are supposed to work together. You'd actually need a person that understands both to a degree and back that person up with specialists in both systems. And if these systems are inherently complex on their own, the lost information isn't trivial, it makes all predictions worthless. The history of science is literally full of such examples. These aren't as impactful as AI, but people still believed them significant enough to spell disaster. Now take the complexity of AI and the modern interconnected globalized world, and you get absolutely useless predictions, far more inaccurate than any other failed past predictions.
Source: youtube · AI Governance · 2025-09-09T00:1… · ♥ 1
Coding Result
Dimension       Value
Responsibility  none
Reasoning       mixed
Policy          none
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgxtZazitZInmjDbRTp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyqkfgpMzrUA_vekoR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgwRvaJQwtMJ0uZYiMN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxPm5bLKsbDTBGF1kV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyTAzahi_H8_v9Ee0t4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyejrElIFUmywaqA2F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugzsp0O7PAQiuZxE77d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugxjyggo24QBbLoyhGJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugz-51Q7JwWzwJ58DQN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyaYMs8lcdmFNTIyrR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
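Because the model codes a whole batch of comments in one JSON array, inspecting a single comment means mapping its id back to the right array element. A minimal sketch of that lookup, assuming only the JSON shape shown above (the variable names and the truncated sample payload are illustrative, not part of the tool):

```python
import json

# Assumed sample: a fragment of the raw LLM response shown above,
# reduced to one element for illustration.
raw_response = """
[
  {"id": "ytc_UgwRvaJQwtMJ0uZYiMN4AaABAg",
   "responsibility": "none",
   "reasoning": "mixed",
   "policy": "none",
   "emotion": "indifference"}
]
"""

# Index the batch by comment id so one comment's coding can be pulled out.
codings = {row["id"]: row for row in json.loads(raw_response)}

# Look up the coding for the comment displayed on this page.
row = codings["ytc_UgwRvaJQwtMJ0uZYiMN4AaABAg"]
print(row["reasoning"], row["emotion"])  # mixed indifference
```

The dictionary keyed by `id` is what lets the coding result (Reasoning: mixed, Emotion: indifference) be attached to this specific comment rather than to its position in the batch.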