Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking it up by comment ID or by browsing the random samples below.
Random samples
- "I think the way that art is marketed by galleries and auction houses inadvertent…" (ytc_Ugx9LpEJy…)
- "The comparison to how AI-generated music is more cautious about copyright is an …" (ytc_UgxYb5YK7…)
- "Even if AI can make unique art in the future, I'm fully going to support human m…" (ytc_UgxI4zdru…)
- "😊Don't to the Bathroom 🚻 each hour...working like Robot 🤖 Production first...Hap…" (ytc_UgwoJNP0n…)
- "I am going to play devil's advocate for a minute. First off great video and well…" (ytc_UgytcoWOg…)
- "This AI thing is INSANE! I wish I could discourage this in vulnerable people. Th…" (ytc_Ugz0Tud6p…)
- "To the title question: NO. Since, they are not naturally born, and not more than…" (ytc_UgwjBPwzT…)
- "We achieved progress all thanks to AI... why not use that to actually make life …" (ytr_Ugx3-jCDp…)
Comment
i do not believe that next-token predictors would be able to represent their conscious state using language, because language is the substrate of their cognition, there is not an inner-world to express via language - the only world they could experience if they experienced anything at all would be comprised of language inputs and outputs (user input vs chatbot output). Some kind of novel meta-cognition would have to have emerged without any indication and if that had occurred, its existence could not be inferred via the responses of the chatbot (without sufficiently advanced systems, probably more advanced than the LLM itself)
moreover, the responses this chatbot gives you are entirely consistent with a non-conscious LLM, which has been designed intentionally to speak to you in a certain way. so i guess i hope this is a joke, but it seems like a dangerous one to make? getting dumb people who watch this video to think LLMs are conscious seems really stupid an idea to me.
Source: youtube
Video: AI Moral Status
Posted: 2024-08-07T03:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
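For readers working with exported coding results, a record like the one above maps naturally onto a small typed structure. The sketch below is a minimal illustration in Python, assuming the four dimensions shown in the table and in the raw response below (responsibility, reasoning, policy, emotion) are plain strings; the `CodedComment` name, the label sets, and the validation helper are hypothetical, not part of the coding pipeline itself.

```python
from dataclasses import dataclass

# Label values observed in the samples on this page; the full label sets used
# by the coder may be larger (assumption).
RESPONSIBILITY_LABELS = {"ai_itself", "developer", "user"}
REASONING_LABELS = {"consequentialist", "deontological", "contractualist", "mixed"}
EMOTION_LABELS = {"indifference", "mixed", "approval", "fear", "outrage"}


@dataclass
class CodedComment:
    """One coded comment, mirroring the fields in the raw LLM response."""
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def has_known_labels(self) -> bool:
        # Loose sanity check against the labels seen so far; "policy" is left
        # unchecked because only "none" appears in this sample.
        return (
            self.responsibility in RESPONSIBILITY_LABELS
            and self.reasoning in REASONING_LABELS
            and self.emotion in EMOTION_LABELS
        )
```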
Raw LLM Response
[
{"id":"ytc_UgzWpYLkeMU4NMyvkyN4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwYJjdVVHWTSwgn3ul4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxAoJtXG0FVWc9Ch-d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw7mAE-s-cw8JGIcbF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwOrlrk273Dkzdz9rd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugz-ZCaAz3fwN9HM1z94AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgxuvjHL5Ltkvf2tKax4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgzSTFzne-YA0PI402V4AaABAg","responsibility":"developer","reasoning":"contractualist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgztXu9z64LADJrfysN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyrcPtkttJYICCCLLN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
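Because the model returns one JSON array per batch, looking up a single comment ID means parsing the array and indexing it by `id`. A minimal sketch, assuming the response text is exactly the JSON shown above; the function and variable names are illustrative, not part of any published tooling.

```python
import json


def index_by_comment_id(raw_response: str) -> dict[str, dict]:
    """Parse a raw batch JSON response and index each record by its comment ID."""
    records = json.loads(raw_response)
    return {record["id"]: record for record in records}


# Truncated example using only the first record of the response shown above.
raw = (
    '[{"id":"ytc_UgzWpYLkeMU4NMyvkyN4AaABAg","responsibility":"ai_itself",'
    '"reasoning":"mixed","policy":"none","emotion":"indifference"}]'
)
coded = index_by_comment_id(raw)
print(coded["ytc_UgzWpYLkeMU4NMyvkyN4AaABAg"]["emotion"])  # -> indifference
```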