Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This guy's thinking is deeply flawed in at least four ways (beyond already pointed out by Sam Harris). (1) There are powerful incentives to view the AI's as people, including economic ones and personal ones (maybe this AI is your lost parent, or is a real friend, or people want a doctor / teacher / therapist who is a person). (2) People can believe AI's are not people for reasons beyond this guy's imagined need for people to want a slave/master relationship / economic benefit; maybe people have different belief systems about consciousness than this guy, like intelligence and consciousness being different properties. (3) This guy pretends like consciousness is a solved problem (actually Sam Harris hints at how ridiculous this guy is when he mentions that the Turing test was not a thing). (4) This guy does not realize that if the robots actually are not conscious but we treat them so, then we will be replacing conscious life with a husk.
youtube AI Moral Status 2026-04-06T23:0…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       mixed
Policy          none
Emotion         mixed
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_Ugxd-16xAhgIrJXWzLJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_UgxgCWSW-eClYf6NIPZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy5mppCvlbj-HDaEBh4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugz6CT7hF5hLGFa-0mN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzjBEyNbxEmyuNRhgR4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugwz3JiMAKtryQWqEiR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxuFhVcfbKYwQqv5E14AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxAZ_LxCu3tiF12E9l4AaABAg","responsibility":"ai_itself","reasoning":"contractualist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwZXjdSS7kd3w6LMSJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyEhROkEKS8Esph1v14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
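As a minimal sketch of how a response like the one above can be consumed (this is a hypothetical illustration, not the pipeline's actual code), the JSON array can be parsed with Python's standard `json` module and indexed by comment id, so the coding for any one comment can be looked up directly. The id `ytc_UgzjBEyNbxEmyuNRhgR4AaABAg` below is the record whose values match the Coding Result table (responsibility: distributed, reasoning: mixed, policy: none, emotion: mixed); the raw string is truncated to two records for brevity.

```python
import json

# Raw LLM response: a JSON array of per-comment coding records.
# (Truncated here to two of the ten records shown above.)
raw = """[
  {"id": "ytc_Ugxd-16xAhgIrJXWzLJ4AaABAg",
   "responsibility": "unclear", "reasoning": "unclear",
   "policy": "unclear", "emotion": "unclear"},
  {"id": "ytc_UgzjBEyNbxEmyuNRhgR4AaABAg",
   "responsibility": "distributed", "reasoning": "mixed",
   "policy": "none", "emotion": "mixed"}
]"""

records = json.loads(raw)

# Index by comment id so one comment's coding can be retrieved in O(1).
by_id = {record["id"]: record for record in records}

coding = by_id["ytc_UgzjBEyNbxEmyuNRhgR4AaABAg"]
print(coding["responsibility"], coding["reasoning"],
      coding["policy"], coding["emotion"])
# → distributed mixed none mixed
```

Because the ids are unique per comment, this dictionary lookup is what connects a raw LLM response back to the single comment displayed above it.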