Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It seems to me that it's important to acknowledge that when we are talking about superintelligence, we are not necessarily talking about sentience. Your Roomba is not sentient, but it chooses its path, and sometimes that path terrorizes your cat. It is choosing the best path for the task it was given, and it is not taking your cat into consideration, because it doesn't understand cats. These new LLMs and related AI tools are not sentient, but they have demonstrated an ability to solve problems in ways human beings cannot predict, and which we would label as immoral. A superintelligence does not need to understand moral or immoral to take actions, which we cannot prevent, that will destroy us. In the book, they talk about wants and needs, but I believe they make pretty clear initially that we cannot say whether those are sentient choices or not. In the end, it doesn't matter. If an LLM is sufficiently fast and has the ability to guide its environment toward certain goals, then whether it is actually self-aware and malicious toward human beings, or is not self-aware and is simply acting in the best way it can calculate to reach those ends, if either of those possibilities kills us all, do we really care whether it's sentient?
youtube · AI Moral Status · 2025-11-03T00:2…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       consequentialist
Policy          unclear
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgySjw3HUbNfgUPHoo54AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxbjWjDSEm4eWtkIUt4AaABAg", "responsibility": "user", "reasoning": "unclear", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgwTSUZO3MOmecGIYI14AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgwyuLJ9LfUm5FJ10v54AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwD7DtAACh07ZQG7TR4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugy4QWkWYAhuENknySt4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgyeB0f8JDA-7a4_EW94AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgwLNMQxSFcaMU9y06V4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzI0FSrTlVZXfcim5x4AaABAg", "responsibility": "company", "reasoning": "unclear", "policy": "unclear", "emotion": "disapproval"},
  {"id": "ytc_UgySU7nxn2Fy84EqAjF4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
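The raw response above is a JSON array of per-comment codes, one object per comment, keyed by the four coding dimensions. A minimal Python sketch of parsing such a response and tallying the label distribution per dimension (the field names match the output shown above; the comment IDs and values here are shortened illustrative stand-ins, not the real data):

```python
import json
from collections import Counter

# Example raw LLM response: a JSON array of per-comment codes.
# (Illustrative stand-in data; real IDs look like "ytc_UgySjw3HUb...".)
raw = '''[
  {"id": "ytc_example1", "responsibility": "none", "reasoning": "consequentialist",
   "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_example2", "responsibility": "company", "reasoning": "deontological",
   "policy": "regulate", "emotion": "outrage"}
]'''

codes = json.loads(raw)

# Tally how often each label appears, per coding dimension.
dimensions = ("responsibility", "reasoning", "policy", "emotion")
tallies = {dim: Counter(c[dim] for c in codes) for dim in dimensions}

for dim, counts in tallies.items():
    print(dim, dict(counts))
```

This makes it easy to spot malformed output as well: a `KeyError` during the tally means the model dropped one of the expected dimension fields for some comment.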