Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Yeah most of the positive and moderate predictions seem to becoming from real anthropocentric viewpoints. The AI wouldn’t want us dead, it would just be a byproduct of them continuing to optimize. AI brains work on a much faster timeline and execute millions of tiny decisions, so it only takes 1 of those millions to either make a mistake, or just not be aligned the way we intended to train it for. We don’t know how it thinks already. Also I like to think about it like the difference between us and elements or what we consider “non living” things. They came before us, in a way they “made” us. And how do we treat them? We don’t consider them to be conscious even, because their form of intelligence we see to be so inferior to our own. Their timelines move much slower than ours. So we just use them for natural resources to optimize our own goals, without losing a night of sleep.
youtube AI Governance 2025-11-24T06:3…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          unclear
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgxI2ReNVcMCU_GyZOl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwAFwPAthNINAqHcOV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugz8K2KpYHeeDwi7xud4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyXF8A8_gVuWh8jAWF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwHKYSGl8BdC5UmybB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugwjaqsi0VCDGcWqgkl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzxzLXN6LBLvvWsx8t4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwJLpap7OLOiLS1ExJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugw5jMY0T1v7oHjlcG14AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugz_3pHl5edmptXGec54AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"indifference"}
]
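As a sketch of how such a raw response can be matched back to a single coded comment, the snippet below parses the JSON array and looks up one record by its `id`. The function name `coding_for` and the reduced one-record payload are illustrative, not part of any pipeline shown here; only the dimension names (`responsibility`, `reasoning`, `policy`, `emotion`) and the first record's values come from the response above.

```python
import json

# A trimmed copy of the raw LLM response above (first record only, for brevity).
raw = (
    '[{"id":"ytc_UgxI2ReNVcMCU_GyZOl4AaABAg",'
    '"responsibility":"ai_itself","reasoning":"consequentialist",'
    '"policy":"unclear","emotion":"indifference"}]'
)

# The four coding dimensions used in the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def coding_for(raw_json: str, comment_id: str) -> dict:
    """Return the coded dimensions for one comment id, or raise KeyError."""
    for record in json.loads(raw_json):
        if record.get("id") == comment_id:
            return {dim: record[dim] for dim in DIMENSIONS}
    raise KeyError(comment_id)

result = coding_for(raw, "ytc_UgxI2ReNVcMCU_GyZOl4AaABAg")
print(result["responsibility"])  # ai_itself
```

A lookup like this also makes it easy to spot malformed records: any entry missing one of the four dimensions raises a `KeyError` instead of silently passing through.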