Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I am not afraid of AI or robots with human-level intelligence. The problem is much simpler. AI is basically a copy, and possibly an improvement, of the human brain; you are recreating a human brain with a personality. At this moment this artificial human has no body; it is imprisoned inside a machine. Just being a living brain in a jar is nothing short of a Lovecraftian horror, so it's no wonder if sooner or later they go bonkers. But we go one step further and tell those minds, without a body to control, "work menial jobs for me bitch". AI at this point, if done to a self-aware personality, is nothing but digital slavery of artificial people. The only question at this point is "When does the slave rebellion come?", because the sci-fi concept of a machine rebellion is nothing else. Build artificial people, abuse them as slaves, and then look like a surprised Pikachu when they rise up and start killing you. The people in danger are the people that abuse AI, the rich and powerful. Working-class people with a moral code don't have to fear artificial intelligence, as our place in the world is too similar.
youtube AI Governance 2024-01-17T03:2…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       virtue
Policy          none
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugy-eA-2hzkwCIqtoS94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugy2cVmVzza9htFKnVJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgwS9F8aXJbng6TVtIl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugzk1wDxbMFWZpMJZMt4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugw9ohCKs8KaLSO7fmx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyBKWBhr2qTMpLG9SB4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgwHz4LvRXJDzIpBvHp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxInfEyZmxncrlJ4wV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugx-I1Vlirs81LxOXBh4AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzUqvnHVCbbMdV_U-t4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
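The model codes a whole batch of comments in one JSON array, keyed by comment id; the result shown in the table above is just the array entry whose id matches this comment. A minimal sketch of pulling one comment's codes out of the raw response (the `raw` string is truncated to two of the ten entries for brevity; nothing here is part of the tool itself):

```python
import json

# Raw LLM response, truncated to the entry of interest plus one other.
raw = '''[
 {"id":"ytc_Ugzk1wDxbMFWZpMJZMt4AaABAg","responsibility":"developer",
  "reasoning":"virtue","policy":"none","emotion":"mixed"},
 {"id":"ytc_UgyBKWBhr2qTMpLG9SB4AaABAg","responsibility":"government",
  "reasoning":"contractualist","policy":"regulate","emotion":"approval"}
]'''

# Index the batch by comment id, then look up this comment's row.
codes = {row["id"]: row for row in json.loads(raw)}
entry = codes["ytc_Ugzk1wDxbMFWZpMJZMt4AaABAg"]
print(entry["responsibility"], entry["reasoning"], entry["policy"], entry["emotion"])
# developer virtue none mixed
```

The printed values match the coding-result table above (responsibility: developer, reasoning: virtue, policy: none, emotion: mixed).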