Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI is not likely to "destroy" humanity. It is more likely that it will upend the current socioeconomic order. Let me explain. During the industrial revolution, the mechanization of factories created panic, but it mostly replaced low-paying, back-breaking jobs. The advent of the computer eliminated lower-middle-income jobs, mostly in administration. The game changer with AI is that it is very likely to replace jobs at the higher end of the middle class, which make up the bulk of consumer power in terms of volume of spending. In other words, the jobs that will be eliminated are high-paying jobs that fuel the economy. Gone will be the medical, engineering, science, and technology jobs that pay a good income. That loss of consumer power will cause a shrink in revenues for corporations, with consequent reductions in staff, causing further contraction of the consumer market. At some point it will reach a point of no return, as all that will be left are low-wage service jobs, and those will disappear quickly since there will not be anyone with enough income to need the services. This is clearly the greatest threat, because corporations are in a breakneck race to increase profits by halting and reversing staff growth. In the short term, it will provide massive profits, but in the long term it will kill the economy. The ultra-rich will be bankrupt because the very industries that made their wealth will be gone. In essence, they are already sharpening the knife to cut their own throat. Their short-sightedness and greed will cause a total collapse of the social order.
youtube AI Governance 2025-08-04T13:3…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          approval
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_Ugzeyr5LmT_7JwSczQt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyU4rt8TDOm-Hro8SB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugxdm5c-rKrIkHxoqyt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxQPodZdhxCQOL1-E94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzWJNzzHzSP-GGUMsx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugww8kt-I_6w3oIpGkR4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxRQPcgnk29I1odLaF4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzwBUyXcbBFIGIoL5J4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgwN2pBjq2p7mOX1MDp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxruyVjTGAwRnW1R7Z4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]
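When inspecting raw responses like the array above, it helps to machine-check that every record uses only known codes for each dimension. The sketch below is a minimal example, assuming the per-dimension vocabularies can be inferred from the responses shown here (the real codebook may contain additional values), and using two sample records copied from the output above:

```python
import json

# Two records copied from the raw LLM response above, truncated for brevity.
raw = '''[
  {"id": "ytc_Ugzeyr5LmT_7JwSczQt4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzWJNzzHzSP-GGUMsx4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]'''

# Allowed values per dimension, inferred from this page's responses only;
# the actual codebook is an assumption here and may differ.
VOCAB = {
    "responsibility": {"none", "developer", "company", "government", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue"},
    "policy": {"none", "regulate", "ban", "liability"},
    "emotion": {"indifference", "approval", "outrage", "fear", "resignation"},
}

def validate(records):
    """Return (id, dimension, bad_value) tuples for out-of-vocabulary codes."""
    errors = []
    for rec in records:
        for dim, allowed in VOCAB.items():
            if rec.get(dim) not in allowed:
                errors.append((rec.get("id"), dim, rec.get(dim)))
    return errors

records = json.loads(raw)
print(validate(records))  # an empty list means every code is in-vocabulary
```

A check like this catches the common failure mode of LLM coders inventing labels outside the codebook before those records reach the coded-result table.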