Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think the government should have near-total control over AI, much like it does with the manufacturing of nuclear weapons. It's one of the few areas where I think this, but the government is actually more trustworthy than the private sector on this. Companies' goals are solely to make money, and they will compete against each other to develop the strongest AI with no regard for its risks. Amazon just wants to beat Google. They aren't caring about the broader implications of AI on humanity. "Competition drives innovation" yes, but with AI that's a bad thing. The "government" on the other hand wants all sorts of stuff. It wants to "control," yes, but at the same time governments do not want to rule over slums; it helps them too if the country prospers, and even totalitarian regimes have to keep the populace somewhat content or it will be difficult to rule. Plus, "the government" is just too broad of a thing, made up of too many people and parties with completely different goals and interests. It doesn't have the singular focus of beating the competition to make cash like companies do. We wouldn't allow a free market to just develop nuclear weapons any which way it wants. I'm much more comfortable with the risk that the government uses AI in government psyops than I am with the risk that companies are just free to develop AI without anything holding them back until it ruins our species.
youtube AI Responsibility 2024-10-31T16:0…
Coding Result
Dimension       Value
Responsibility  company
Reasoning       deontological
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgxaYTcy9GuusN9kD1Z4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxTz-kffZoZxTbi6AV4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyL0mLMR-nClDxxJf14AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwFRBlOGleNMH4r-tp4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgySR7IJmC6dWCcS9-d4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwZ8l-xwL6TmDTvfzB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwnL7R2dfDsQHyYZld4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzwE6sfWk8UaSCxA3d4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_Ugx3DVa_GAUyh3Who0F4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugw04uiqnR3asSWfZ6Z4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]
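A raw response like the one above has to be parsed and checked before its labels are stored as a coding result. The sketch below is a minimal, hypothetical validator: the allowed value sets are inferred only from the labels visible in this dump (the full codebook may define more), and the `ytc_` id prefix is taken from the ids shown above.

```python
import json

# Allowed labels per dimension, inferred from the responses in this dump.
# Assumption: the real codebook may contain additional labels.
ALLOWED = {
    "responsibility": {"none", "ai_itself", "company"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "liability", "regulate", "unclear"},
    "emotion": {"indifference", "mixed", "outrage", "fear"},
}

def validate_response(raw: str) -> list[str]:
    """Parse a raw LLM response and return a list of validation errors."""
    errors = []
    try:
        records = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"invalid JSON: {exc}"]
    for i, rec in enumerate(records):
        # Every comment id in the dump uses the "ytc_" prefix.
        if not str(rec.get("id", "")).startswith("ytc_"):
            errors.append(f"record {i}: missing or malformed id")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                errors.append(f"record {i}: {dim}={value!r} not in codebook")
    return errors

# Example: one well-formed record and one with an unknown emotion label.
sample = (
    '[{"id":"ytc_abc","responsibility":"company","reasoning":"deontological",'
    '"policy":"regulate","emotion":"fear"},'
    '{"id":"ytc_def","responsibility":"none","reasoning":"unclear",'
    '"policy":"none","emotion":"joy"}]'
)
print(validate_response(sample))  # → ["record 1: emotion='joy' not in codebook"]
```

Rejecting the whole batch on a single bad label is one design choice; a gentler alternative is to keep valid records and re-prompt the model only for the flagged ids.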