Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
"treat it as an improvisational actor" is really just a way to speed run "halluc…
ytc_UgxFEiIE7…
When brand use ai it automatically seems so unprofessional and made by a like a …
ytc_UgylkGKPr…
People don't want AI art, even for gooning, I remember seeing a lot of celebrati…
ytc_UgxBE8fyZ…
Until people out there realise AI is not a miracle or a perfect evolutionary too…
ytr_Ugxwmxea9…
We shouldn't focus on advancing artificial intelligence when we're still dealing…
ytc_UgxhYt-Nl…
ChatGPT is incorrect in their statement. They did aid, because they didn’t notif…
ytc_UgwXLYBpf…
No, Does sims in The sims have feelings? Does Call of duty bots have it? No! AI …
ytc_UgzYSi9YA…
so glad that AI is taking over. better than these every time dissapointing, mise…
ytc_Ugw4NEP_d…
Comment
It seems to me that it's important to acknowledge that when we are talking about superintelligence, we are not necessarily talking about sentience.
Your Roomba is not sentient, but it chooses its path, and sometimes that path terrorizes your cat. It is choosing the best path for the task it was given, and is not taking into consideration your cat, because it doesn't understand cat.
These new LLMs and related AI tools are not sentient, but they have demonstrated an ability to solve problems in ways human beings cannot predict, and which we would label as immoral. A superintelligence does not need to understand moral or immoral to take actions that we cannot prevent and which will destroy us.
In the book, they talk about wants and needs, but I believe they make pretty clear initially that we cannot say whether those are sentient choices or not.
In the end, it doesn't matter. If an LLM is sufficiently fast and has the ability to guide its environment toward certain goals, then whether it is actually self-aware and malicious toward human beings, or is not self-aware and is simply acting in the best way it can calculate to reach those ends, if either of those paths kills us all, do we really care whether it's sentient?
youtube
AI Moral Status
2025-11-03T00:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgySjw3HUbNfgUPHoo54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxbjWjDSEm4eWtkIUt4AaABAg","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgwTSUZO3MOmecGIYI14AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwyuLJ9LfUm5FJ10v54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwD7DtAACh07ZQG7TR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy4QWkWYAhuENknySt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyeB0f8JDA-7a4_EW94AaABAg","responsibility":"none","reasoning":"mixed","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwLNMQxSFcaMU9y06V4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzI0FSrTlVZXfcim5x4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"disapproval"},
{"id":"ytc_UgySU7nxn2Fy84EqAjF4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
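The raw response above is a JSON array of per-comment codes, and the table earlier in this section is just one of those records rendered by dimension. A minimal sketch of the lookup step (assuming the model's output is valid JSON, as shown here; the ID used below is taken verbatim from the raw response, and its values match the Coding Result table):

```python
import json

# Raw LLM response: one record per coded comment, keyed by comment ID.
raw_response = """[
  {"id":"ytc_UgwD7DtAACh07ZQG7TR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugy4QWkWYAhuENknySt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"}
]"""

records = json.loads(raw_response)

# Build an ID -> record index so a comment can be looked up directly.
by_id = {rec["id"]: rec for rec in records}

rec = by_id["ytc_UgwD7DtAACh07ZQG7TR4AaABAg"]
for dimension in ("responsibility", "reasoning", "policy", "emotion"):
    print(f"{dimension}: {rec[dimension]}")
```

This reproduces the Dimension/Value table for that comment (responsibility: none, reasoning: consequentialist, policy: unclear, emotion: fear).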