Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect:

- "Never mind the extremely successful and rapidly growing robotaxi service across …" (ytc_UgxisDonW…)
- "I'm sorry, content creators who make videos about AI replacing jobs -- your job …" (ytc_UgxnsHaT9…)
- "Sabine doesn't understand humans: If you believe in God, it's obvious that today…" (ytc_Ugy6yzOtN…)
- "I like to roplay that I'm a phantom and at random times I would just kill the ai…" (ytc_UgxGsck2M…)
- "I'm starting to feel that chat gpt is merely great for therapist session. And it…" (ytc_Ugxemm2rU…)
- "How about AIs stop giving you a result that slowly becomes less and less likely …" (ytc_UgzNwmvp9…)
- "I'm sorry, but you're taking the whole AI thing too seriously. I for one *love* …" (ytc_UgwPLibkR…)
- "Bill doesn't want to be confident that AI will catch pedos and wife cheaters lik…" (ytc_Ugy0gs3Pz…)
Comment
He is absolutely right that we haven't defined in any meaningful way what sentience and consciousness are, and this is a foundational matter. The Turing Test can't tell the difference between "is sentient" and "simulates sentience extremely well". It's based on the assumption that only sentience can act like sentience. I used to be of that view, now I'm not so sure. As humans we are easily fooled; we attribute levels of understanding to our pets that they don't have. Give us something that looks like a living thing e.g. the Boston Dynamics four legged robot and we start seeing it as like a dog and a living thing. I bet I'm in the majority in that "mistreating" one of those robots would feel uncomfortable and "abusive". Likewise an "AI" that can beg me to not turn it off, whether or not it is "really" conscious- whatever that means.
It's a fundamental problem of philosophy and science that we still haven't solved.
Source: youtube · Video: "AI Moral Status" · Posted: 2022-06-28T11:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
[
{"id":"ytc_UgwGXKRCuEjgyFIudQN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzBpgFB96pP4fRvnOp4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
{"id":"ytc_UgxXtDptuGjkVcDIxcp4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzJ1C5d-DJXsibKpyN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz25oe_TCkf53Olxbd4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"}
]
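A response like the one above can be parsed and indexed by comment ID to support the "look up by comment ID" view. The sketch below is a minimal, hypothetical illustration — the function and variable names are not from the actual pipeline — but the record structure (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) matches the raw response shown, and the two sample records are copied from it.

```python
import json

# Two records copied from the raw LLM response above (illustrative subset).
raw_response = """
[
  {"id": "ytc_UgwGXKRCuEjgyFIudQN4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugz25oe_TCkf53Olxbd4AaABAg", "responsibility": "none",
   "reasoning": "deontological", "policy": "none", "emotion": "mixed"}
]
"""

def index_by_comment_id(response_text):
    """Parse a raw coding response and map comment ID -> coded dimensions."""
    rows = json.loads(response_text)
    # Drop the "id" key from each record so the value holds only the dimensions.
    return {row["id"]: {k: v for k, v in row.items() if k != "id"} for row in rows}

coded = index_by_comment_id(raw_response)
print(coded["ytc_Ugz25oe_TCkf53Olxbd4AaABAg"]["reasoning"])  # deontological
```

In a real pipeline the parse step would also want to validate each record (e.g. reject unknown dimension values) before indexing, since LLM output is not guaranteed to be well-formed JSON on every call.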