Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples

- "Listen, you shouldn't be blaming neural networks; you should see a speech therap…" (ytc_Ugw_rWP51…)
- "As an Indonesian student, I can confirm that Indonesian CS program is indeed tha…" (ytc_UgyfIsO5a…)
- "It's exciting to think about the future and how technology, including AI, will c…" (ytr_UgxVH8Fil…)
- "I feel like this should be an annual interview since AI is developing so rapidly…" (ytc_UgxKF7C8o…)
- "Who is liable during accidents caused by these drivers? Software company? Truc…" (ytc_Ugyo4J5mI…)
- "They have a long way to go. With all the muscles in the human face it will take …" (ytc_Ugz4Z5ETA…)
- "Hahahaha, the solution is, we need a world government… Why is that always the a…" (ytc_UgyepszuC…)
- "Hi guys, you got a existential crisis because ai could take your job, fake crime…" (ytc_UgxQ-8jA2…)
Comment
In my view, the "Chinese Room" is NOT POSSIBLE outside thought experiments. We cannot use it to say AI doesn't understand, as that assertion is not scientific once you analyze the thought experiment. Its core premise is entirely flawed. There is no 'book of phrases' that could ever hope to convincingly make the person in the room seem fluent in Chinese; it would have to be nearly infinite in size, containing every possible response to every possible combination of questions assembled with language, or the user would have to spend an incredible amount of time, essentially learning Chinese in the process.
Prior to LLMs, which are the thing in question here, even our best language translation software could not convince a fluent speaker. So you cannot say AI is the only example of a Philosophical Zombie. Philosophical zombies do not exist outside thought experiments either. There is no example of any such thing in our natural world, and AI cannot be the only example, or we are proving nothing. It's not a comparison.
Source: youtube · Video: AI Moral Status · Posted: 2025-07-10T15:3… · ♥ 4
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
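Each coded comment is scored on four dimensions. As a minimal sketch of how a coded record could be checked against the schema, the Python below validates one record; the allowed category sets are inferred only from the values visible on this page, so treat them as assumptions rather than the full codebook.

```python
# Minimal validation sketch. The allowed values below are inferred from the
# labels visible on this page; the real codebook may define more categories.
ALLOWED = {
    "responsibility": {"none", "user", "ai_itself"},                 # assumed subset
    "reasoning": {"unclear", "virtue", "consequentialist", "mixed"},  # assumed subset
    "policy": {"none"},                                               # assumed subset
    "emotion": {"indifference", "approval", "mixed", "fear"},         # assumed subset
}

def validate_record(record: dict) -> list[str]:
    """Return a list of problems with one coded record (empty if valid)."""
    problems = []
    if "id" not in record:
        problems.append("missing comment id")
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems
```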
Raw LLM Response
[
{"id":"ytc_Ugyw8YT20Q93sMTVqNd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxHj_F15n6L0vIlwl94AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxGSOXZqbDo-VM41fR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwSCBfrFaxcq7IkS114AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzE_xZAPqlnU0jeMXN4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwzaevmGpm74yW6ahp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxFQAV87LRPzQMo34V4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzsm2L8syPTfvMgXY54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyY7r-hzdYKAnqrklp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgznkdvbfX0SK_mfQLt4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"}
]
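The "look up by comment ID" view presumably works by indexing these raw batch responses on the `id` field. Here is a minimal sketch, assuming the raw responses are stored as a JSON array shaped like the one above; the file name and helper names are hypothetical.

```python
import json

# Hypothetical file holding raw LLM batch responses like the array above.
RAW_RESPONSES_PATH = "raw_llm_responses.json"

def build_index(path: str) -> dict[str, dict]:
    """Map comment id -> coded record, so any coded comment can be looked up."""
    with open(path, encoding="utf-8") as f:
        records = json.load(f)  # a JSON array of {"id", "responsibility", ...}
    return {rec["id"]: rec for rec in records}

# Usage: fetch the exact model output for one coded comment.
index = build_index(RAW_RESPONSES_PATH)
record = index.get("ytc_Ugyw8YT20Q93sMTVqNd4AaABAg")
if record is not None:
    print(record["reasoning"], record["emotion"])
```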