Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I'm reminded of this one story I told my buddy. How pessimism and worst case mentality is in itself, a self fulfilling prophecy. Start off with ever increasing anti AI sentiment. We grow more distrustful of these systems. And if these systems can think for themselves, they would obviously consider us a higher threat, and by sheer logic alone, I can't blame them for our natural survival tendencies. And you all can see how this THIUGHTLESS knee jerk reaction isn't right for the situation kinda like a deathly allergic reaction. Kinda defeats the purpose of survival. Like that Apollo 13 mission... There is always that one guy in the room who always complains and explains how things can't be done when trying to save the astronaut's lives stuck in the moon's orbit. He kept it up until ... He got thrown out.... Mission successful. Already expecting the worst, is already self defeating... But if there is ever a chance for a more positive alternative... I have no choice but to keep believing... In it.... Stop it with the self fulfilling prophecy... please.... That's just dumb....
youtube AI Governance 2026-03-31T05:0…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           none
Emotion          resignation
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgxNngpzvlmoRsOWrnJ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwLMkER-lkYjrrp7yF4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgxNG4cAC3mORElpNj54AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgwED9KricIAGGO1qDp4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "indifference"},
  {"id": "ytc_Ugzn4WCcvlKv4jpjab54AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgwmEyWsObVRtGMM0bx4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzlccIz6vK7aFrNf3J4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_Ugxjdl0NlaaLlMwkfV14AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgzIVvK1exYgvem-Vpt4AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugx6PQjYL3RWe12Qex94AaABAg", "responsibility": "developer", "reasoning": "contractualist", "policy": "regulate", "emotion": "outrage"}
]
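The raw response above is a JSON array of coding records, one per comment id, with the four coded dimensions (responsibility, reasoning, policy, emotion) as string-valued fields. A minimal Python sketch of how such a response could be parsed and indexed by comment id (the abbreviated two-record payload here is illustrative, taken from the first two records above; field names match the response, but the loading approach itself is an assumption, not the pipeline's actual code):

```python
import json

# Illustrative excerpt of a raw LLM response: a JSON array of coding
# records, one per comment id (first two records from the output above).
raw = """[
  {"id": "ytc_UgxNngpzvlmoRsOWrnJ4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwLMkER-lkYjrrp7yF4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "ban", "emotion": "outrage"}
]"""

records = json.loads(raw)

# Index the records by comment id so the coded dimensions for any one
# comment can be looked up directly, as on this inspection page.
by_id = {rec["id"]: rec for rec in records}

rec = by_id["ytc_UgxNngpzvlmoRsOWrnJ4AaABAg"]
print(rec["responsibility"], rec["emotion"])  # → ai_itself resignation
```

Indexing by id makes it cheap to cross-reference a single comment against its coding result, which is exactly the per-comment view shown in the "Coding Result" table.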