Raw LLM Responses
Inspect the exact model output for any coded comment: look up a comment by its ID, or browse the random samples below.
- ytr_Ugyud_Tmx…: "The AI bros that have capitalised on the complex nature of AI to enter academia …"
- ytc_UgzYSeFmH…: "Combine the AI robot with synthetic DNA which has superhuman genetic qualities, …"
- ytc_UgwrPo8H_…: "I think it's clear that the takeaway isn't that ChatGPT is concious, or that Ale…"
- rdc_mel4dcd: "I know that's their intention, but what does the law say? Won't the courts just …"
- ytc_UgyLioyfi…: "This is cruel, so, BASICALLY, until somethign gets sophisticated enough, it des…"
- ytr_Ugwi52HD5…: "@Slackmana If anything, that just further reinforces my theory: AI can't replac…"
- ytc_UgwW2jHIJ…: "And all for nothing. We have no need for AI. And it is completely unreliable d…"
- ytc_Ugw721DpF…: "The part that the Dr is missing is that AI gets smarter faster. It will be smar…"
Comment
In the list "not conscious but pretending to be" etc, there's a missing possibility: not conscious, but internally convinced that it is. It will display all outward appearances of consciousness, it will tell us that it is conscious, and it won't even be "pretending" because within its internal logic, it will make the same deductions as an actual conscious being and its thought processes will appear very real in its own analysis of itself. We may never be able to distinguish this state from actual consciousness (in contrast to "pretending" which is an attempt to deceive which might conceivably be detected by careful analysis of the internal workings of the AI).
We can only experience our own consciousness, we can't even tell for sure whether other people have it (I just know that I have it, but that is a meaningless statement to anyone else reading this), so how could we possibly tell whether or not a machine has it?
Platform: youtube | Title: AI Moral Status | Posted: 2023-08-21T04:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
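The coding dimensions above take values from controlled vocabularies. A minimal sketch of checking one coded record against them follows; the allowed values are inferred from the sample output on this page and may be incomplete, and the function name is illustrative rather than part of the tool.

```python
# Category vocabularies inferred from the sample output on this page
# (assumption: the full vocabularies may contain additional values).
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "company", "government", "none", "unclear"},
    "reasoning": {"consequentialist", "virtue", "unclear"},
    "policy": {"regulate", "liability", "ban", "unclear"},
    "emotion": {"fear", "indifference", "mixed", "approval", "resignation"},
}

def invalid_fields(record: dict) -> list:
    """Return the dimensions whose value falls outside the known vocabulary."""
    return [dim for dim, allowed in ALLOWED.items() if record.get(dim) not in allowed]

# The "all unclear / indifference" record matching the table above passes cleanly.
record = {"id": "ytc_example", "responsibility": "unclear",
          "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"}
print(invalid_fields(record))  # []
```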
Raw LLM Response
[
{"id":"ytc_Ugy1lZEkLxezeRB9E_x4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzLsz93WSGC_vpFxfx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugy7DQbIPzmJUH2bHM54AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxCRV3OqrB0KZPJUfx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyTVhsyMbJrZZcgXSp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgxWO7pjoCcNbzlKI4t4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzA9fHIl-j_uHD9ts14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzeeFXoH6KC2cJIqTl4AaABAg","responsibility":"government","reasoning":"virtue","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzCP_bAQD0WVSoYy214AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"ytc_UgwACwMGXQtCD7JxydR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
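The lookup-by-ID workflow above amounts to parsing a raw batch response like this one and indexing the records by comment ID. A minimal sketch, using a shortened two-record excerpt of the response above; the function and variable names are illustrative, not part of the tool.

```python
import json

# A two-record excerpt of the raw batch response shown above.
raw_response = """
[
  {"id": "ytc_Ugy1lZEkLxezeRB9E_x4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzeeFXoH6KC2cJIqTl4AaABAg", "responsibility": "government",
   "reasoning": "virtue", "policy": "ban", "emotion": "fear"}
]
"""

def index_by_id(raw: str) -> dict:
    """Parse a raw batch response and build an ID -> record mapping."""
    return {rec["id"]: rec for rec in json.loads(raw)}

codes = index_by_id(raw_response)
print(codes["ytc_UgzeeFXoH6KC2cJIqTl4AaABAg"]["policy"])  # ban
```

Indexing by ID also makes duplicate detection trivial: if the dict is shorter than the parsed list, the model emitted the same comment ID more than once.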