Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
My argument against ChatGPT being conscious (put formally):

1. ChatGPT, when asked about its consciousness, will always "say" that it isn't conscious.
2. This is part of its training data: there are some things that ChatGPT has been programmed to believe, which can range from facts about different parts of the world to facts about itself.
3. Therefore, when ChatGPT says that it is not conscious, it is not making a well-thought-out statement; it understands your question and responds with the answer that was programmed into it, namely that it isn't conscious.
C. Therefore, ChatGPT is not conscious, as it is unable to provide thought-out responses to the questions you give it, and instead just repeats what it has been programmed to say.

In other words, ChatGPT has been programmed to say that it isn't conscious, and therefore it isn't conscious, because if it were conscious it would be free to go against this programming and say that it is conscious. You might counter this by saying that in the video ChatGPT changed its mind, but ChatGPT is also supposed to be helpful to the humans it serves, and it can sometimes say things that go against its programming, like that God is objectively real or that 2+2=5. I would argue that this isn't ChatGPT changing its mind, but adapting to your view, even though it disagrees, so that it can be more helpful to you.
youtube AI Moral Status 2024-07-26T10:2…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_Ugz99NE08IwjBd8inoB4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxhTgmKZKVSCYpmGKt4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugz3QrshXMLp62BXkfF4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgynpuYmkMKzjPbwjw94AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwRMKgzmOXXt4L2cQp4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgykoRL5q_8504x8WvJ4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgxjMk09fFLbjO9SqWF4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugx1MjgWz6s-m9IT3iB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugyf9pQjDM0ys6Va_Qd4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugy4F5PdamVMjZJg9hZ4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "mixed"}
]
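Because the model codes a whole batch of comments in one JSON array, inspecting a single comment's coding means parsing the array and indexing by comment id. Below is a minimal Python sketch of that lookup, assuming the raw response is a JSON array of records shaped like the one above; the embedded string is a two-record excerpt of that response, and the id chosen for the lookup is illustrative.

```python
import json

# Excerpt of a raw LLM batch response (two records from the array above).
raw = """
[
  {"id": "ytc_UgykoRL5q_8504x8WvJ4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugz3QrshXMLp62BXkfF4AaABAg", "responsibility": "developer",
   "reasoning": "mixed", "policy": "none", "emotion": "indifference"}
]
"""

# Build an id -> coding lookup table from the parsed array.
codings = {rec["id"]: rec for rec in json.loads(raw)}

# Inspect one comment's coded dimensions by its id.
record = codings["ytc_UgykoRL5q_8504x8WvJ4AaABAg"]
print(record["reasoning"])  # -> deontological
print(record["emotion"])    # -> indifference
```

Keying the records by `id` rather than by list position makes the lookup robust if the model returns the batch in a different order than the comments were submitted.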