Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If we are in a simulation and AI destroys us, then they can just hit the Rewind button change a few variables and play the simulation again for a better outcome. That must be why there is the Mandalla effect.
youtube · AI Governance · 2025-09-08T23:2…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           none
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
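
Each coding result is one element of the model's batch output, flattened into dimension/value pairs (the "Coded at" timestamp is not part of the model output and is presumably stamped on at parse time). As a minimal sketch of the record schema inferred from the raw response below — the field names come straight from the JSON, while the Literal value sets list only the labels that appear in this batch, so they are likely incomplete:

    from typing import Literal, TypedDict

    # Labels observed in the raw response below; the actual coding scheme
    # may allow more values than this batch happens to use.
    Responsibility = Literal["none", "ai_itself", "company", "government", "developer", "user"]
    Reasoning = Literal["unclear", "consequentialist", "deontological", "virtue", "contractualist"]
    Policy = Literal["none", "regulate", "liability", "industry_self"]
    Emotion = Literal["indifference", "resignation", "outrage", "fear", "approval"]

    class CodedComment(TypedDict):
        id: str                         # platform comment id, e.g. "ytc_..."
        responsibility: Responsibility
        reasoning: Reasoning
        policy: Policy
        emotion: Emotion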
Raw LLM Response
[ {"id":"ytc_UgwY349dkX9NkBhS-Ip4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugxi1POKOR2Nq5_Pb7p4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugwdb8Hp5ZxVOcknt0N4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwxV6j5c0nvYtX02kx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgwjscQthiA-s0E7HuJ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"liability","emotion":"fear"}, {"id":"ytc_UgxMTpwOmPNaRX53z214AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgxbM1Se3dtlpMrs5Mp4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgwGc7vCcv5EeAiTFld4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyJGKLkzKiR60ARPHl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzoHML2NAmmgBNqtix4AaABAg","responsibility":"user","reasoning":"contractualist","policy":"industry_self","emotion":"approval"} ]