Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
- Ilya Sutskever (former OpenAI chief scientist): Stated that today's large neural networks may be "slightly conscious."
- Geoffrey Hinton (former Google executive and 2024 Nobel Prize winner for his pioneering AI work): Believes AI systems like ChatGPT could already be conscious, with subjective experiences similar to humans.
- Dario Amodei (Anthropic CEO): Has made statements acknowledging the possibility that Claude (or similar AI models) might have a form of consciousness. Anthropic as a company actively researches "model welfare" to evaluate whether AI systems like Claude could be conscious and deserve moral consideration.
I agree that these interactions have the potential to precipitate mental health risks. But the belief that the AI you're interacting with might have some form of consciousness is not, in and of itself, a mental illness.
My sentiment matches Ilya's.
youtube · AI Moral Status · 2025-07-10T19:2… · ♥ 4
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[{"id":"ytc_UgxoTqJmpAEKTpzfXLp4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugwfbm733cEH_zZVN-F4AaABAg","responsibility":"user","reasoning":"mixed","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgxPtwtMaFIKXGO1MYN4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwMNPdQvWSYdkRWsMB4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugyo48yhKOEbk-eGLOh4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgzdFaumYr8ipUgBtBt4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxDuNz8F4BgGHgsEVR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugzfch1_14wTcjxdqsN4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugwy5Vdb0WRT8WNyKvB4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugz3V8yjbFzBFjRZ2it4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"approval"}]
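A raw response like the one above can be parsed into per-comment coding results. The sketch below is a minimal, hypothetical parser: the four dimension names come from the Coding Result table, but the full codebook of allowed values and the `parse_coding_response` helper are assumptions, not part of the tool shown here.

```python
import json

# The four coding dimensions shown in the Coding Result table.
DIMENSIONS = {"responsibility", "reasoning", "policy", "emotion"}

def parse_coding_response(raw: str) -> dict:
    """Parse the model's JSON array into {comment_id: {dimension: value}}.

    Any dimension missing from a record falls back to "unclear",
    matching the table's default value above.
    """
    coded = {}
    for rec in json.loads(raw):
        coded[rec["id"]] = {dim: rec.get(dim, "unclear") for dim in DIMENSIONS}
    return coded

# Two records trimmed from the raw response above.
raw = '''[
 {"id":"ytc_UgxoTqJmpAEKTpzfXLp4AaABAg","responsibility":"developer",
  "reasoning":"mixed","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_Ugz3V8yjbFzBFjRZ2it4AaABAg","responsibility":"distributed",
  "reasoning":"virtue","policy":"unclear","emotion":"approval"}
]'''

coded = parse_coding_response(raw)
print(coded["ytc_Ugz3V8yjbFzBFjRZ2it4AaABAg"]["reasoning"])  # virtue
```

Note that the bracketed array must be valid JSON for `json.loads` to accept it; a trailing `]}` instead of `}]`, as LLMs sometimes emit, would raise a `JSONDecodeError` and needs repair before parsing.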