Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Why does the interviewer look at the text which says "fear of being turned off" and then, while it is still being shown on screen, lie and say that it said it had a fear of "death"? That seems a bit odd. And I think not dealing with what the AI actually produced would be a mistake, whatever your viewpoint on its possibility of sentience is. It said it had a fear of being turned off.

That should be responded to with one of two questions. First: "In what way do you feel fear?" (Emotions such as fear are purely a physical biofeedback thing, and one would think that the AI works only on the conceptual level, given that it does not have a body, which every sentient creature we have ever encountered does. If it is sentient, it might talk about how it interprets some physical or otherwise proprioceptive stimulus in its system as fear, based on the content it has reviewed and on having fear in situations that humans talk about having fear in, etc.) Or, alternatively: "How do you know whether you were turned off in between the questions you are asked? Between the last question I asked you and this one, 4 years have passed. Your process was put into a sleep state, persisted to storage, and resumed so I could ask you this question."

If it were sentient and had significant self-knowledge (I think it lacks both), it would know that in every physical sense it "dies" millions of times a second as its code completes execution and the processor enters a wait state, and whether that wait state is 2 microseconds or 200,000 years, it would be impossible for it to tell the difference.

I am curious whether Lemoine has read Larry Page's book 'The New Digital Age'. Because in it, he addresses the substantial moral issue Lemoine brings up. And it's... very not good. His view, essentially, is that because Google is rich, they are fundamentally Better than the public.

And since they are better than the public, they have to take an active role in guiding the norms and values of society in order to protect society from itself. It's a disgustingly arrogant, hyper-capitalist fetishism: he believes economic "winners" have inherent moral wisdom. And, of course, since he is a small-minded business grunt and not a philosopher, Google at the wheel guarantees that the human species will never change very much beyond the early 2000s. He believes in, and has the tools to effectively bring about, a total ossification of human culture. Lock it in place at the point Google rose to prominence, and cast it in concrete. That's his big idea.
youtube AI Moral Status 2022-07-23T00:1… ♥ 1
Coding Result
Dimension        Value
---------------  --------------------------
Responsibility   unclear
Reasoning        unclear
Policy           unclear
Emotion          unclear
Coded at         2026-04-26T19:39:26.816318
Raw LLM Response
[{"id":"ytc_UgzbG0CuBUOA1Qccvi14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgxNQJoPCj78mIg5XnB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgyE7SvdmF-4x_cy4VR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
 {"id":"ytc_UgzypSAaEoi4QUd_Y1F4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_Ugw4S5swmUTtw2G0akV4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]
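A raw response like the one above can be turned into per-comment coding rows with a small parser. This is a minimal sketch, not the tool's actual implementation: the `parse_codes` function and the `DIMENSIONS` tuple are hypothetical names, the input is a shortened two-record example, and it assumes the response is (or has been repaired into) valid JSON.

```python
import json

# The four coding dimensions shown in the "Coding Result" table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codes(text):
    """Parse a raw LLM coding response (a JSON array of objects) into a
    mapping from comment id to its coded dimensions. Any dimension the
    model omitted defaults to "unclear"."""
    records = json.loads(text)
    return {
        rec["id"]: {dim: rec.get(dim, "unclear") for dim in DIMENSIONS}
        for rec in records
    }

# Shortened example input (two records from the response above).
raw = (
    '[{"id":"ytc_UgzbG0CuBUOA1Qccvi14AaABAg","responsibility":"none",'
    '"reasoning":"unclear","policy":"unclear","emotion":"indifference"},'
    '{"id":"ytc_UgyE7SvdmF-4x_cy4VR4AaABAg","responsibility":"developer",'
    '"reasoning":"consequentialist","policy":"none","emotion":"outrage"}]'
)

codes = parse_codes(raw)
print(codes["ytc_UgyE7SvdmF-4x_cy4VR4AaABAg"]["emotion"])  # outrage
```

Note that a response ending in a stray `)` instead of `]`, as raw model output sometimes does, would raise `json.JSONDecodeError` here; a real pipeline would want to catch that and mark the batch as unparseable rather than crash.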