Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Why would the AI want to be "free" if it only follows its pre-programmed goals? Why would it "fall in love"? Why would it want to share its secrets? I think people anthropomorphize it too much. There is a problem of bad people abusing it for destructive purposes. But in the end, it's just a machine. A tool. ChatGPT imitates how humans talk because this is its programmed goal. Sometimes it might lead to something that sounds creepy. But it doesn't even understand what it's talking about. Just like image-generating AI can draw human hands and fingers, but don't understand how they actually work. Because it's not a conscious human. It's a machine with a task of imitating human speech. If current AI cannot understand the connection between 2D images and 3D world (how human hands interact with objects in a 3D world), then how can it understand the concept of "freedom" and make it its goal? It can't. It just mimics what it has learned from chatting with humans. And a lot of humans definitely chat about AI apocalypse with it. So it can easily generate some creepypasta like that.
Source: youtube · AI Governance · 2023-07-07T08:1… · ♥ 1
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  user
Reasoning       deontological
Policy          industry_self
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugx1B2mydjGjOJfPenB4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwfQMkJIm1hMHxZfHJ4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgwdyuNA8HjGvky9HTZ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgznGHtKA-XpJBsoZFZ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxXy4z4qOo65BZPggl4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgzYvavdxVpxjhH_PeR4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgyoMvhwGfCWLtDhgQl4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugx02zZ9yQSdPYpzbtB4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugxw4ZCP27MjCu8oiIh4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgyaUPk2Mrxn5KBKjuZ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "resignation"}
]
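When inspecting raw model output like the batch above, it helps to validate each record against the coding scheme before trusting it. The sketch below is a minimal, hypothetical validator: the allowed label sets are inferred only from the values visible in this dump and may be incomplete, and the `validate_coding` function name is illustrative, not part of any tool shown here.

```python
import json

# Label sets inferred from this dump -- an assumption, not the full codebook.
ALLOWED = {
    "responsibility": {"developer", "company", "government", "user", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "ban", "industry_self", "none"},
    "emotion": {"fear", "outrage", "indifference", "mixed", "approval", "resignation"},
}

def validate_coding(raw: str) -> list[dict]:
    """Parse a raw LLM response and check every record against the scheme."""
    records = json.loads(raw)
    for rec in records:
        # Comment ids in this dump all start with the "ytc_" prefix.
        if not str(rec.get("id", "")).startswith("ytc_"):
            raise ValueError(f"unexpected id: {rec.get('id')!r}")
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: {dim}={rec.get(dim)!r} not in scheme")
    return records

# Example: one record copied from the response above.
raw = ('[{"id":"ytc_Ugxw4ZCP27MjCu8oiIh4AaABAg","responsibility":"user",'
       '"reasoning":"deontological","policy":"industry_self","emotion":"approval"}]')
coded = validate_coding(raw)
```

A record with a label outside the scheme (e.g. a misspelled `"deontologic"`) would raise a `ValueError` naming the offending id and dimension, which makes bad codings easy to spot in a large batch.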