Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
5:17 ok, that’s the thing, right there. Expansion to the cosmos is why I really doubt that AI will need a reason to eliminate humans. If it becomes that advanced, that humanity poses no threat, why destroy it?
youtube AI Governance 2025-08-27T20:5…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgwEcsd5cSbweTVUir94AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxIBjkmF-dzRwB1l7F4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgziSPTsbS8fa_Ma5ex4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwrTWSU-YTNuN7eK594AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzxWfy_DHW5tu2CET94AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxVyMvz_Uwkm8OeSj54AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxzJy8d5iwk1FOH9Dd4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwI7jatMJhxopV6t1h4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgynWi93kP-me27BjAJ4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "resignation"},
  {"id": "ytc_UgwQSnG-1095Vj28WPV4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]
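A raw batch response like the one above can be mapped back to per-comment coding results by parsing the JSON array and indexing the four coded dimensions by comment id. A minimal sketch in Python; the function name `index_codes` and the two-entry sample `RAW` are illustrative, not part of the actual pipeline:

```python
import json

# Two entries in the same shape as the raw batch response above (sample data).
RAW = '''[
  {"id": "ytc_UgwEcsd5cSbweTVUir94AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxIBjkmF-dzRwB1l7F4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "outrage"}
]'''

# The four coded dimensions shown in the coding-result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codes(raw: str) -> dict:
    """Parse a raw batch response and index the coded dimensions by comment id."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        # Skip malformed entries rather than failing the whole batch.
        if not all(key in rec for key in ("id",) + DIMENSIONS):
            continue
        coded[rec["id"]] = {dim: rec[dim] for dim in DIMENSIONS}
    return coded

codes = index_codes(RAW)
print(codes["ytc_UgwEcsd5cSbweTVUir94AaABAg"]["policy"])  # regulate
```

Skipping malformed entries keeps one bad model output from discarding the rest of the batch; any dropped ids can then be re-queued for recoding.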