Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
We have increased productivity many times over, yet we have worked the same amount of time (or more) for decades. The fruits of automation etc. have not been used to reduce work and increase leisure, but to increase consumption by the general public and hoarding by the rich, while scolding those now unemployed. I see no reason AI will change that: we will most likely create a class of has-beens outside the job market, struggling to get by. This will further destabilize our already reeling societies and lead to further social unrest. I don't see AI as a direct threat in itself, but its unrestricted use in every aspect of life, combined with greed and shortsightedness, will likely speed up the demise of our current civilization. We have to prevent that by putting up guardrails, circumventing e.g. recent legislation in the USA which prevents any regulation of AI for the next 10 years. ... and that is before we even consider the ridiculous energy consumption of AI. That technology REALLY needs reins.
YouTube · AI Governance · 2025-08-07T07:4…
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        consequentialist
Policy           regulate
Emotion          resignation
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgyOq2S9H2Q1sw9fcSR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxQHfJWLSHtiGns1MB4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugx7jKD1tDJ9VPPLnOt4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzPQg4R0oY7flU7bDZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwrGaiYqNVEuFb-12l4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgwGe3mOQhS2l2uFNUt4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgyAKbdUDIixqUJKtRJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_UgyXgga_zvAkKQNCMHJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwtpEIKAx7sKMqqIpV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgyQ1I2Wbyxx8vfdtMt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
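The raw response above is a JSON array with one object per comment, keyed by a comment id and carrying the four coding dimensions. A minimal sketch (Python stdlib only; the single-element array below is just the entry from the batch that matches the coded comment on this page) of how such a response might be turned into a per-comment lookup:

```python
import json

# One entry copied from the raw LLM response above, shortened to a single element
# for illustration; in practice this would be the full ten-element array.
raw = (
    '[{"id":"ytc_UgyAKbdUDIixqUJKtRJ4AaABAg",'
    '"responsibility":"distributed","reasoning":"consequentialist",'
    '"policy":"regulate","emotion":"resignation"}]'
)

# Index the array by comment id so each comment's codes can be looked up directly.
codes = {row["id"]: row for row in json.loads(raw)}

entry = codes["ytc_UgyAKbdUDIixqUJKtRJ4AaABAg"]
print(entry["responsibility"], entry["reasoning"], entry["policy"], entry["emotion"])
```

This prints the same four values shown in the Coding Result table above (distributed, consequentialist, regulate, resignation); a real pipeline would also validate that each dimension's value falls within the codebook's allowed labels before storing it.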