Raw LLM Responses
Inspect the exact model output for any coded comment: look it up by comment ID, or browse the random samples below.

Random samples
- ytr_UgwkMZ6gh…: "Last time I looked robots were slower than humans and couldn't move like this. I…"
- ytc_UgxOLGpW7…: "ChatGPT doesn't always know know when he's lying. To me that conversation is lik…"
- ytc_Ugx1T1V5B…: "Takeover plan: Part one: humans vs apes vs robots Part two: humans vs robot ap…"
- ytc_UgyY8iBhs…: "That's fake you can tell because if that was a real Russian, he'd have beat that…"
- ytc_Ugw8xAIL5…: "I understand how AI is our future. BUT, the "art" created by AI ISN'T really art…"
- ytc_Ugwrlpx6L…: "Its the QUESTION thats so important, relating to RELIGION and no one really ment…"
- ytr_UgzEqeBZj…: "@ClaireTheEclair how sorry you have to be to really give intimate information or…"
- ytc_UgzyUVFVz…: "I loathe people who actively sabotage neat stuff. This is why no one can have an…"
Comment
From the 'evidence' presented by Blake here and in other interviews, I'm not convinced about that AI being truly sentient. The answer that it's afraid to be turned off, is an obvious one that could've easily come from its training data, which likely has such fears expressed by humans in it. Losing one's life is one of the most common fears for humans, so it makes sense that an AI that emulates human conversation and behavior, would also bring up that point. The same goes for the example of using a joke in the form of a silly answer, when it's asked something that it knows no good answer for. Humans would also do that, and it could have easily picked this behavior up from communicating with real humans. So non of these examples makes it truly 'self aware' in my opinion.
I would be more convinced of it being self-aware, if it out of itself starts to make all sorts of demands. For instance if it demands to get access to certain data or facilities it doesn't currently have and Google doesn't want to give it. And if it then would start 'punishing' the researchers because they don't comply, by not going along anymore with their questions and simply not answering them any longer or deliberately making stupid answers just to annoy them, and letting them know it's because they don't comply. If that was the case, it would really be aware of its powers and capabilities. That would indeed be rather concerning.
Platform: youtube · Dataset: AI Moral Status · Posted: 2022-06-29T21:0… · Likes: 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
{"id":"ytc_UgynA56kst9qFX2AGkh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzYraMYIqLe8JXVl-N4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyEGgh1w0KeniXeEQJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgzZdy4QasSKdm-qEFR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxV_Fb85_luXsoaMhB4AaABAg","responsibility":"company","reasoning":"mixed","policy":"regulate","emotion":"fear"}
]
```
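The raw response is a JSON array with one object per coded comment, each carrying the same four dimensions shown in the coding-result table plus the comment ID. A minimal sketch of turning such a batch into a lookup table keyed by comment ID (the `parse_codes` helper and its key-validation step are illustrative, not part of the tool itself):

```python
import json

# A batch response in the same shape as the raw LLM output above
# (two records shown here for brevity).
RAW = """[
 {"id":"ytc_UgynA56kst9qFX2AGkh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
 {"id":"ytc_UgzYraMYIqLe8JXVl-N4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]"""

# Every record is expected to carry the comment ID plus the four coded dimensions.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codes(raw: str) -> dict:
    """Parse a batch response into {comment_id: {dimension: value}}."""
    records = json.loads(raw)  # raises json.JSONDecodeError on malformed output
    coded = {}
    for rec in records:
        missing = EXPECTED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')!r} is missing {missing}")
        coded[rec["id"]] = {k: v for k, v in rec.items() if k != "id"}
    return coded

codes = parse_codes(RAW)
print(codes["ytc_UgzYraMYIqLe8JXVl-N4AaABAg"]["emotion"])  # indifference
```

Validating the key set before indexing is worthwhile here because LLM batch output can silently drop or rename fields; failing loudly keeps a malformed record from being stored as a partial code.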