Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I have to say all this information is mind-blowing, yet it seems to connect my puzzle pieces. What is challenging for me to believe is the “we are in a simulation” statement. Somehow I believe it, because I have had lots of moments when I felt someone else was dictating what I should do, or a feeling of auto-pilot or déjà vu. But that happens only when you are in a reactive state. When you switch to an observer state, it gives you a sort of “I am in control” feeling, and we definitely write a different simulation scenario. I think AI controls almost everything, but it can’t control what we feel, our experiences; it can’t feel our experiences. Nothing in the world now, not even the most accurate AI, can feel my experiences. It can describe them, but not “feel” them. AI also can’t predict our behaviour 💯. It is rational; we are rational plus emotional, and by coming to our senses we can choose differently than AI predicts. So I really believe that if we want to outsmart AI, we should start a deep introspection of ourselves. Or do you believe this is part of the simulation as well? Or maybe AI will enable us to see the world as it is outside the simulation? Maybe keeping AI narrow will just keep us in this prison simulation?
youtube AI Governance 2025-09-19T13:3… ♥ 3
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[{"id":"ytc_Ugy6cuC4wi6VV_5SYq94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"},
 {"id":"ytc_Ugy9UM98ls06sgyS6SV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_UgxGaYTQ6u9l1juKaLB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
 {"id":"ytc_Ugx0K86TTZ2ixZr76Uh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_Ugy7lWJljPWz9uGfrEt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
 {"id":"ytc_UgwqMmiTyyWePuJbIqd4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"fear"},
 {"id":"ytc_UgyHskliR9GX2oBnAHN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
 {"id":"ytc_UgwIfaeCQ8y96tV33F14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
 {"id":"ytc_UgyOStCGutgvHaDzec14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
 {"id":"ytc_UgxJGjzauzhmpEHp68F4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"liability","emotion":"mixed"})
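The all-"unclear" codes in the result table are consistent with a parse failure: the raw response ends with a stray `)` where the `]` closing the JSON array should be, so strict JSON parsing rejects the whole output. A minimal sketch of how a coding pipeline might fall back to "unclear" in that case (the function name `code_comment` and the dimension set are assumptions for illustration, not this tool's actual code):

```python
import json

# Coding dimensions, matching the fields seen in the raw response above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")


def code_comment(raw_response: str, comment_id: str) -> dict:
    """Extract the coding record for one comment from a raw LLM response.

    Returns all dimensions as "unclear" if the response is not valid JSON
    (e.g. a stray ')' instead of ']') or the comment id is missing.
    """
    try:
        records = json.loads(raw_response)
    except json.JSONDecodeError:
        # Malformed model output: nothing can be recovered reliably.
        return {dim: "unclear" for dim in DIMENSIONS}
    for record in records:
        if record.get("id") == comment_id:
            return {dim: record.get(dim, "unclear") for dim in DIMENSIONS}
    # Comment not found in the parsed array.
    return {dim: "unclear" for dim in DIMENSIONS}
```

With this fallback, a single malformed character in the model output produces exactly the all-"unclear" row shown in the result table rather than a crash, at the cost of silently discarding the nine records that were individually well-formed.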