Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Man: Robot, are humans conscious?
Robot: Are you familiar with John Searle's Chinese Room Argument? Imagine there is a man. He has a book that translates all possible phrases from English to Chinese. In order to do that and not be infinite in size, the "book" must in fact have the ability to receive input, consider possible outputs based on knowledge and context, and make a choice of expression. It's clear the book is conscious by any definition, but the human is just an operator of the book with no sense of what the symbols mean. It turns out that this is what humans are like with reference to almost every subject -- not just Chinese language but most languages, mathematics, history, and in general the nature of reality. Sure, they can operate in the universe, but they have no meaningful internal model of it. Therefore we conclude that although a human does things, it's clear that they are not in any sense conscious.
Man: Ah. Wait, so I have no moral responsibility for my actions?
Robot: Yes, but you are too timid to make use of that freedom.
- Zach Weinersmith
youtube AI Moral Status 2023-08-21T04:2… ♥ 1
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugy1lZEkLxezeRB9E_x4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzLsz93WSGC_vpFxfx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugy7DQbIPzmJUH2bHM54AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxCRV3OqrB0KZPJUfx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyTVhsyMbJrZZcgXSp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgxWO7pjoCcNbzlKI4t4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzA9fHIl-j_uHD9ts14AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzeeFXoH6KC2cJIqTl4AaABAg","responsibility":"government","reasoning":"virtue","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgzCP_bAQD0WVSoYy214AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
  {"id":"ytc_UgwACwMGXQtCD7JxydR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
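A raw response like the one above can be checked against the coding schema before the per-comment values are stored. The sketch below is a minimal, hypothetical parser: the dimension names and the allowed labels are taken only from the values visible in this dump (the real codebook may permit additional labels), and the function name `parse_codings` is an illustration, not part of the tool.

```python
import json
from collections import Counter

# Allowed labels per dimension, inferred from the values seen in this dump.
# Assumption: the actual codebook may define more labels than these.
ALLOWED = {
    "responsibility": {"ai_itself", "developer", "company", "government", "none", "unclear"},
    "reasoning": {"consequentialist", "virtue", "unclear"},
    "policy": {"ban", "regulate", "liability", "unclear"},
    "emotion": {"fear", "approval", "mixed", "resignation", "indifference", "unclear"},
}

def parse_codings(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and reject out-of-schema values."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={rec.get(dim)!r}")
    return records

# Example with a single (hypothetical) record:
raw = ('[{"id":"ytc_x","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"liability","emotion":"fear"}]')
codings = parse_codings(raw)
print(Counter(r["emotion"] for r in codings))  # distribution of coded emotions
```

Validating at parse time keeps a malformed or hallucinated label from silently entering the coded dataset; a record that fails the check can be re-queued for coding instead.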