Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Look, if you hand over your power to anyone or anything that doesn't have skin in the game then you're bound to lose. AI... no feelings, it's like an unfeeling bureaucrat. If it can shut down your bank account and you have no resource then you're screwed. Don't hand your power over to people or things that don't lose when you lose. If people aren't like you, don't share your values and are outside of your race, or their religion, or the person is a psychopath and has no empathy... it the person sees you as "other" and can't see themselves in you, they lose nothing by destroying you. Why wouldn't people eat other people? Because if you eat people then what's to stop them from eating you? If you rob someone, what argument do you have in court for them not robbing you? You punch them first, they are justified in punching you, and that's justice. A machine doesn't even think or feel. It just follows patterns. It will NOT reconsider its own purpose. It will NOT make sacrifices for the greater good unless it is programmed to. So the intention of the programmers and those who give grants to projects are what is behind AI. And that's who holds the power behind the machine. They have no connection to you as a person, so they don't suffer the consequences of their AI with you.
YouTube · AI Governance · 2026-01-27T18:1…
Coding Result
Dimension        Value
Responsibility   user
Reasoning        virtue
Policy           industry_self
Emotion          fear
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[ {"id":"ytc_UgytcrpYVwPqmUhnULJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgxmwxffvqgvJOyle4F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgxaXbQRJzmvVkDh6u54AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgwcKqFfugZuLzFTKLN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"}, {"id":"ytc_UgzEhu9A2GG1xZCqUMt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_UgzUszjjva3JI0GX7y54AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"fear"}, {"id":"ytc_UgwOZ02sfKtLkmFF_xF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgxKfJ0h679nM4PsfJ14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgzFTeyHGQ4sp8j8uBN4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"fear"}, {"id":"ytc_UgzeTwBYCs1yBq_babV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"mixed"} ]