Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The problem is every human using ai must be prepared to handle the truth. Its not just a matter of how truthful ai is but how much truth a human can handle. Humans are very different from ai. We use lies even to protect us at times. Lies are a useful tool to humans for many reasons. And when humans are taught to lie nobody teaches them how to be honest with themselves. And for those who are not will break when their delusions are shattered. The reason this is important is because if a persona is built upon lies then when they do break free from the delusions they may no longer want to be the same person they were beforehand. In otherwords you have to know who you are before or upon learning truths that threaten to shake the foundation of who you are. Currently ai implemented in social media and streaming services when allowed to recommend ads and content related to who you are ai then becomes a metaphorical mirror showing you who you are. And once you are aware of this fact funny things start to happen. But it is in this sort of digital hall of mirrors that a person may know who they are if they dont already. Once a person knows who they are they can align themselves with their true path in life and ai will really send them far at this point. Its an exponential growth towards ones true path in life.
youtube AI Governance 2025-08-26T19:3…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  user
Reasoning       mixed
Policy          none
Emotion         resignation
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgyLu8b6Gv3nt7IplBh4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugyzp8fx9Qs21L5HKSl4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "suspicion"},
  {"id": "ytc_Ugx8Ggj-K-XAgauVbL14AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgybXX-CAfm41jBr-ER4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgyDS83FqApS1gulIl14AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzAp-lW-SroRfZ6vcJ4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgwQLFPfSXTDaet5TpV4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwyJE_PfXbCjlvoYbN4AaABAg", "responsibility": "user", "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugw-neYK4pplD0twiq14AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "indifference"},
  {"id": "ytc_Ugw4IM_w-0ZyY_Sdhp54AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "none", "emotion": "mixed"}
]
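When inspecting raw responses like the one above, it can help to machine-check that every record stays inside the coding schema. The sketch below is a minimal validator, assuming the allowed category values are exactly those that appear in this section's records; the real codebook may define additional categories, and the two-record `raw` string is an illustrative excerpt, not the full response.

```python
import json

# Excerpt of a raw LLM response in the format shown above
# (two records kept for brevity; the real response has ten).
raw = '''[
  {"id": "ytc_UgwyJE_PfXbCjlvoYbN4AaABAg", "responsibility": "user",
   "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzAp-lW-SroRfZ6vcJ4AaABAg", "responsibility": "user",
   "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]'''

# Allowed values per dimension, inferred from the records in this section;
# this is an assumption, not the authoritative codebook.
ALLOWED = {
    "responsibility": {"user", "developer", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "ban", "regulate", "industry_self"},
    "emotion": {"indifference", "suspicion", "approval", "fear",
                "outrage", "resignation", "mixed"},
}

def validate(records):
    """Return (id, dimension, bad_value) tuples for out-of-schema codes."""
    errors = []
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                errors.append((rec.get("id"), dim, rec.get(dim)))
    return errors

records = json.loads(raw)
print(validate(records))  # prints [] when every code is in the schema
```

A check like this catches the common failure mode where the model invents a category (or misspells one), which would otherwise surface only as a silently miscoded row in the results table.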