Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
So, ChatGPT and other generative AI like it merely appear intelligent. They have creative output comparable to humans. They appear to be intelligent. You may even be fooled, behind a chat screen, that you're talking to a real person. Which would probably be a pass on the Turing test. But here's the thing, none of this indicates to us that ChatGPT has real "consciousness." What I mean by that is there's no real awareness. There's no "being" in there. It's just software running on a computing device. But computation =/= consciousness. Computation is mechanical. Consciousness is none of these things. Ultimately what I'm saying is despite all the cool stuff these AI models can do, and as lifelike as they may seem, they are no more or less aware of their surroundings as a lightbulb or telephone or animatronic display. They have no recognition of themselves or things around them. Nothing has any quality or meaning to them. It's just a really sophisticated mechanical process producing what only outwardly seems lifelike. But in reality it's a parlor trick. A really genius one. But still a parlor trick, where genuine intelligence is concerned. I don't think we're at risk of AI going full on Skynet in the sense that it's a real intelligent being that has its own agendas. But that being said, it's not like there isn't danger here. Just because an AI isn't truly "alive" or "conscious" doesn't mean it can't still run amok and cause a lot of problems for human civilization, like any other automated device. So we still need regulations and safeguards in place to prevent AI from either being abused or from doing things that we ultimately don't want them to do. That being said, I don't believe AI will ever become conscious. Not truly conscious. There will be no AI personhood, even if the machine has all the resemblance of a genuine person.
youtube AI Governance 2025-07-11T19:5…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_Ugy1cUu9-aUFonv73pV4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxWawWDgdKveEEHgUl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwEUR0bf8FzuRKZuz14AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwF2tkZTcfsbpBxl994AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_Ugx7pqWMdWB7-TIgqPN4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgzZh8bGIEEhS7e0nRN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugz21dHw9DNemdI3r3Z4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugwaq0P_19oXMSX_aBd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwNVrfVVAKf3cuAqXt4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugx3K2Lk-W1pIQunl5N4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
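
The coding result shown above is derived from this raw response by matching on the comment's `id`. A minimal sketch of that lookup, assuming only the field names visible in the JSON (`id`, `responsibility`, `reasoning`, `policy`, `emotion`); the helper name `coding_result` is illustrative, not part of any actual pipeline:

```python
import json

# One entry copied verbatim from the raw LLM response above; a real
# run would parse the full array returned by the model.
raw = '''[
  {"id": "ytc_UgzZh8bGIEEhS7e0nRN4AaABAg",
   "responsibility": "unclear", "reasoning": "unclear",
   "policy": "unclear", "emotion": "indifference"}
]'''

# Index the codings by comment id for O(1) lookup.
codings = {c["id"]: c for c in json.loads(raw)}

def coding_result(comment_id):
    """Return the coded dimensions for one comment id, or None if uncoded."""
    c = codings.get(comment_id)
    if c is None:
        return None
    return {k: v for k, v in c.items() if k != "id"}

result = coding_result("ytc_UgzZh8bGIEEhS7e0nRN4AaABAg")
```

Here `result` reproduces the Coding Result table for this comment: responsibility, reasoning, and policy all "unclear", emotion "indifference".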