Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
In a simplified explanation, it reminds me of Fantasia's Sorcerer's Apprentice, when Mickey was tired of fetching the water for the sorcerer. Instead, while the sorcerer was sleeping, he used his magic hat, and cast a spell on the broom to fetch the water. Mickey fell asleep, and woke up to a flooded room. He then tried to stop the broom, but it was still following its commands. So Mickey tried destroying the broom, but each shard re-grew arms and legs and all fetched the water until there was a torrent of water. The sorcerer finally came down the stairs, saw this, and with a commanding wave of his hands, moved the water out, and took the spell away from the broom. This will be AI. AI is already running our stock exchange. It is already trained to watch Truth Social and respond to the President's posts/truths. This is already a thing. Any time he posts something, the AI is looking at the context of what is being said, and then reacting autonomously and instantly. This is how the big tech companies on the S&P 500 can lose trillions of dollars in a day. I think AI can be helpful, but we always need a human in the loop, or a disaster like that will continue to happen.
YouTube · AI Governance · 2026-04-04T10:5… · ♥ 1
Coding Result
Responsibility: user
Reasoning: consequentialist
Policy: unclear
Emotion: indifference
Coded at: 2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgyeI4_djPJjtKuWaTh4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgwVUKXbmmD-0pItN2Z4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxcROIN45tkqmZKMk14AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugz_DkZHwtnDII_2Ftx4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxpjzFjRtpq2Oi8Q0t4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxGsLfG9s3VAUpC5Sd4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzSJvhNsx-Qjvt_-sl4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgwR1N9_E52O5IdCp2l4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugy841YznGPgGgkEtzp4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugy348a0-ivgh_zCSNd4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]
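Since the raw response is a JSON array of per-comment codings keyed by comment id, the coded row shown above can be recovered by parsing the array and looking up the comment's id. A minimal sketch of that lookup (the `coding_for` helper is hypothetical, not part of the tool; the two entries embedded in `raw` are copied from the response above):

```python
import json

# Subset of the raw LLM response shown above: one coding object per comment id.
raw = '''
[
  {"id": "ytc_UgxcROIN45tkqmZKMk14AaABAg", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxGsLfG9s3VAUpC5Sd4AaABAg", "responsibility": "government",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"}
]
'''

def coding_for(raw_response: str, comment_id: str):
    """Return the coding dict for one comment id, or None if it is absent."""
    for entry in json.loads(raw_response):
        if entry.get("id") == comment_id:
            return entry
    return None

# Look up the coding for the comment displayed on this page.
coding = coding_for(raw, "ytc_UgxcROIN45tkqmZKMk14AaABAg")
print(coding["responsibility"], coding["emotion"])  # user indifference
```

Returning `None` for a missing id (rather than raising) makes it easy to flag comments the model skipped in its batch response.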