Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If this guy thinks that we all agree on what the "scientific" evidence says, this guy clearly has no clue what he's talking about. Nor how science actually works. This seems more like a propaganda piece. This guy is trying to build interest into what Google is doing. It's a PR stunt. Plain and simple. NO guy. People cannot create consciousness. They simply can't and likely never will. The simlulation of a thinking breathing being is not a thinking breathing being. You can fake it, but you can't make it. Have another meal at mcdonald's and shut the fuck up when it comes to sentience in computer systems. I think he does have a point about AI being used as a mechanism to spread western philosophy. But look. Whether that is the case or not, an AI doesn't get "rights". It's literally built by corporations. This is the same thing as corporations having the same rights of people. Yet the AI is something corporations have under their control. It's deferring responsibility to something outside of themselves, while REAL PEOPLE are saying, look: corporations don't get rights. But you want to give rights to something that corporations can exploit? no thanks.
youtube AI Moral Status 2022-07-22T01:2…
Coding Result
Dimension       Value
Responsibility  company
Reasoning       deontological
Policy          none
Emotion         outrage
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id":"ytc_Ugzcz8dTxRmHFMmT0DN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw17c4D8-nbyDq96vl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwZQtdM4ug8o97SlgV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugz8OQ3oradbodvQ2zx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugyoh-Wf-IJw1ejqWQl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
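The raw response is a JSON array covering a batch of comments, so recovering the coding for one comment means parsing the array and filtering by id. A minimal sketch in Python; note the id used in the lookup is an assumption, chosen because it is the only entry whose dimensions match the Coding Result table above:

```python
import json

# The raw LLM response, verbatim (a JSON array of per-comment codings).
raw = """[
  {"id":"ytc_Ugzcz8dTxRmHFMmT0DN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw17c4D8-nbyDq96vl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwZQtdM4ug8o97SlgV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugz8OQ3oradbodvQ2zx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugyoh-Wf-IJw1ejqWQl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]"""

# Index the batch by comment id so a single coding can be retrieved directly.
records = json.loads(raw)
by_id = {r["id"]: r for r in records}

# Assumed id for the comment shown above (its values match the result table).
coding = by_id["ytc_Ugw17c4D8-nbyDq96vl4AaABAg"]
print(coding["responsibility"], coding["reasoning"], coding["policy"], coding["emotion"])
```

The same index can be reused to cross-check every stored coding result against the raw batch, which is useful when verifying that no record was dropped or mis-attributed during parsing.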