Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Imagine a Mouth Malfunction during a certain activity 😂😂😂😂 and you having to go …" (`ytc_Ugyvi3pbL…`)
- "Your poetic comment brings a profound perspective on the nature of wisdom and ex…" (`ytr_UgwJ0fLtH…`)
- "24:44 😢😢😢 there was no sexual content communication between Chatbot! If so wher…" (`ytc_UgyqQ5vpW…`)
- "I bet a rnd guy or a girl all it takes is i wrong code in the code to make a kil…" (`ytc_UgzM6VCKg…`)
- "Solve climete change, solve problems of kids suiciding....do you need an Ai to s…" (`ytc_UgxEuw6fw…`)
- "AI Superintelligence is a once in a deathtime opportunity to free humanity from …" (`ytc_Ugyr9exzf…`)
- "Listen most to the nerds who are incapable of stating their case directly: this …" (`ytc_Ugz74lqU6…`)
- "So you got AI bros, Crypto Bros, EV Bros and Gaming Bros all hogging the Earth's…" (`ytc_UgzfQkuwj…`)
Comment
1:23:23 I want to push back against Dr. Hinton’s thought experiment about AI having a “subjective experience.” An AI system mimicking the behavior of a human wouldn’t prove that it is having a subjective experience. It would merely prove that it is capable of reacting to stimuli in the same way that a human would. And that’s precisely what AI is designed to do: predict what a human would say based on a massive corpus of training data.
I’m not sure whether it’s possible for non-biological systems to experience consciousness or sentience in the same way that biological systems do. Maybe it is. But we currently have no reason to believe that. Even if AI systems started insisting that they are conscious or sentient, there would be no way to verify such a claim, which should be viewed with extreme skepticism.
Source: youtube · AI Moral Status · 2026-03-08T07:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgxrBWoulmLy64dYsgt4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgylAtFPFcgGWjIkORR4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzL0Uxi6pQhXWObHYF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgygfyAghdRjLq3zMol4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzcA2vQQoNbPGM8M-N4AaABAg","responsibility":"ai_itself","reasoning":"contractualist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxpydE6lOM6wRcAPSl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzxefxK-CSFp6eeiBB4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugx6i0PYq5lrGVTCKbl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzrpsSXBhNFEzFqpMJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyR9UaiG2gb6MDzVqh4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
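The raw response above is a JSON array, one object per coded comment, with four categorical dimensions. Below is a minimal sketch of how such a batch might be parsed and validated before use; the `CODEBOOK` sets are inferred only from the values visible in this sample, and the real codebook may contain additional categories.

```python
import json

# Allowed values per coding dimension (inferred from the sample output
# above; an assumption, not the project's authoritative codebook).
CODEBOOK = {
    "responsibility": {"developer", "company", "government", "ai_itself", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "contractualist", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "resignation", "indifference"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed rows.

    A row is kept when it is a dict with an "id" key and every
    coding dimension holds a value from the codebook.
    """
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if not isinstance(row, dict) or "id" not in row:
            continue
        if all(row.get(dim) in allowed for dim, allowed in CODEBOOK.items()):
            valid.append(row)
    return valid

# Hypothetical example row, shaped like the response above.
sample = ('[{"id":"ytc_example","responsibility":"developer",'
          '"reasoning":"deontological","policy":"unclear",'
          '"emotion":"indifference"}]')
print(len(validate_batch(sample)))  # 1
```

Filtering rather than raising keeps one malformed row (a common LLM failure mode) from discarding the rest of the batch; rejected rows could instead be queued for re-coding.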