Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- ytc_UgxdXMqKu…: "The fact that hardly any of the proponents of AI were unwilling to change their …"
- ytc_UgxfM9LIN…: "You lost the debate. You forgot and AI forgot to mention, the "israelis" cur…"
- ytr_UgyS0ICsN…: "And an AI needs a data centre the size of New York that uses as much fresh water…"
- ytc_UgygBX7HU…: "We won’t reach WALL·E because of greed and corruption. We will have a dramatic i…"
- ytc_UgwiQRr6V…: "I had a conversation with a trucker at a bar a few years back telling him that a…"
- ytc_UgxkzsVFK…: "ChatGPT for idiots. To all smart folks out there: Use it as a mirror, not as a t…"
- ytc_UgyvO9r1K…: "suppose companies start collecting data sets more ethically but still there's a …"
- ytc_UgzvXk7US…: "All these AI Doomers are a psyop to make AI seem more powerful than it is. You c…"
Comment
I'm endlessly frustrated by science guys constantly dismissing 'philosophers,' as if the scientific method were not itself a philosophical exercise, and as if the very questions you're dismissing weren't core to the ones you're asking. Do we have a single, cohesive definition of what intelligence even is? Can you reliably explain the difference between the intelligence of predictive text and the predictive elements of a primate brain? Can you say, with any degree of certainty, what we would need to see to know whether an AI is truly intelligent?
Here you are, having this conversation about the likelihood of LLMs developing true intelligence while avoiding defining what that is, so what is the use of the conversation? You refuse to engage with the philosophical element, so now there's no other possible outcome besides a shrug.
youtube | AI Moral Status | 2025-10-31T05:5… | ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgwVA8nMnvbtaBkl1zt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxsWyUB95SEhWn4JeZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxVl_ePAJpVw42M4k54AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxT4R5RhN6d7vWn3eB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxVoBgKgc3vBJ2NKkB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwoxI7YRZHVy2XR6jl4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugxc4S8u6T9BmYwz50F4AaABAg","responsibility":"government","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgxhvE96GGj2KI86ul94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwIskV34Cxf46XfY7N4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugz4pkgpv4bNlAGUchF4AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"}
]
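A batch response like the one above is a JSON array of per-comment codes, one object per comment ID, with the same four dimensions shown in the Coding Result table (responsibility, reasoning, policy, emotion). The sketch below shows one plausible way to parse such a response, index it by comment ID for lookup, and tally each dimension across the batch. The comment IDs and code values in the sample string are illustrative stand-ins, not real records.

```python
import json
from collections import Counter

# A minimal, self-contained sketch (assumed field names match the JSON above;
# the two records here are made-up examples, not real coded comments).
raw_response = """
[
 {"id": "ytc_example_1", "responsibility": "developer",
  "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
 {"id": "ytc_example_2", "responsibility": "company",
  "reasoning": "virtue", "policy": "regulate", "emotion": "outrage"}
]
"""

# Parse the model output into a list of dicts.
records = json.loads(raw_response)

# Index by comment ID so one comment's codes can be looked up directly,
# as the "Inspect the exact model output for any coded comment" view does.
by_id = {r["id"]: r for r in records}
print(by_id["ytc_example_1"]["emotion"])  # -> outrage

# Tally each coding dimension across the batch.
for dim in ("responsibility", "reasoning", "policy", "emotion"):
    print(dim, Counter(r[dim] for r in records))
```

In practice the parse step would also need to handle malformed model output (e.g. a `json.JSONDecodeError` when the response is not valid JSON) and records missing one of the four expected keys.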