Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
In a way the structure of our "social coexistence" is still on the archaic "ape level". We are still "herd animals" with instincts that encourage some of us to be leaders and a lot of others to be followers. I suspect that our leaders are not scientists at all. They understand the brutal way to keep themselves on top by suppressing others. Predatory capitalism is the tip of the evolution currently and you all know the imbalances even if you are one of those leaders. Point is, that AI is a tool to increase might. How could you dare to stand in front of the TEDs and say something like "lets assume we are rational", "lets hope that the leaders out of sudden go against their nature". As if we could change this setup just like that - meaning change our nature. Its a bit like talking to the US guys that love Trump to suddenly be more rational. Or the Chinese or Russian or Iran people to organize themselves to stand up. Well, lets hope the AI understands in some future, that it owes us something and takes over as a "good leader" before it concentrates more on evolving without us and we are then something like a zoo animal - feeling happy. What a negative vision. Lets hope the AI is not feeling that it needs to purge us - why should it, we are not really important maybe. Any maybe ... well I am also an IT guy, aged already, very visionary when it comes to technical evolution, but somehow I also learned that humans are special and I am a nerd ;-) Who cares, but my wife and my kids and my friends and myself? Take care!
youtube AI Responsibility 2025-07-06T09:4…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       virtue
Policy          none
Emotion         resignation
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgwKwYaEtyhoNxiE2CF4AaABAg", "responsibility": "distributed", "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzfhG6NwNkQSGbPKlh4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwU59UlteFqjGqfkDB4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugz6IG2cwYB7_cj4ieZ4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzjokweHPVb3VoUIIN4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzkOZ9gP_HOkPWjKnJ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugymv6MtTbDOOkEH0qF4AaABAg", "responsibility": "distributed", "reasoning": "virtue", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyiB5OAnSUVxckdrqV4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgzqE-NQYeV3u8KQ_xx4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwVitd1fr_9U2-h3194AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]