Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The reason I don't necessarily trust these predictions to be 100% accurate is pretty simple. These predictions suppose that humans will be convinced to allow other humans to be paid well for little to no work, and that humans will allow war to end, among other things. Nothing, not even a hyper-intelligent AI, can use logic to overcome emotionally-based beliefs and ways of thinking. I still think AI will probably be the end of us, and I think the one thing that will _definitely_ not happen is humans banding together and agreeing to halt all progress. If China is producing one, the US will also. And if the US is, China will also. In fact, a hyper-intelligent AI doesn't even need to be in the works—if the mere possibility exists for one superpower to produce one, the other will build one also. There is, therefore, probably no chance for a mutual halt. Just look at nuclear proliferation. AI is another one of those things. The only major difference being that nukes can't think and reason and convince people that they're actually good for us, or become hyperdominant and unstoppable the way AI can and probably will.
Source: youtube · AI Moral Status · 2025-04-28T20:4…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          unclear
Emotion         resignation
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[ {"id":"ytc_UgwQTphock4pa1zG6RB4AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugz4AmODNjEP62OF-0d4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"}, {"id":"ytc_UgwGX0hD_OX7bST0swR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgxuhdHd1QZucNNUcUN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgxTbK2BNtoftu7n7g94AaABAg","responsibility":"user","reasoning":"mixed","policy":"industry_self","emotion":"approval"}, {"id":"ytc_UgzB0eF6_byPDepJWOp4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgzTAkxxY-VKk1aQs7B4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgyWXnr4089gofYWpzJ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}, {"id":"ytc_UgxTMX9lHvk02t-LzRF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_UgywN301FenxoFjOeWB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"approval"} ]