Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Although the conversation was interesting and I enjoyed hearing ruminations about how AI "thinks" in an "alien" way, I think it's something you should probably talk about with a friend and a joint rather than the main thing that we should all be focusing on when it comes to the problem with AI. You're focusing on some philosophical what-if scifi scenarios that carry the assumption that just because something is alien, it's gonna do something outrageously horrific and power hungry to preserve itself (which is a very human thing to do anyway). I think the focus of the conversation is more fearmongering than anything else. We have no idea how an actually intelligent and conscious AI would think. What we do know is that the efforts going towards making one are zapping away the earth's resources. I'm pretty sure we will deplete all of our freshwater and go back to the Stone Age before we "grow" this conscious AI, and frankly, if we did create it and if it really thinks as logically as is being claimed in this video, maybe the AI would kill itself so that the only known life-supporting planet in the universe wouldn't become completely uninhabitable.
youtube AI Moral Status 2025-11-06T12:4…
Coding Result
Dimension      | Value
-------------- | ----------------------------
Responsibility | none
Reasoning      | unclear
Policy         | none
Emotion        | resignation
Coded at       | 2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgzjrbpRdEv7Lp1rlWF4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxhZGq48_hqIdEiDpZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxDDnbTNlcCAGuodm94AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyaaBTWD9UC41kLy-d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugxt0CQY6QpKAcWuWWd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugw5HukF2RcMN0WpT2h4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzovZCrU5xfW1sCc1p4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwxIFamg0dXdRLZxcF4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzbvxVM09OXWW-Byvp4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgweNS0EGRRcIF0lVfB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}
]
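The raw response is a JSON array of per-comment codings keyed by comment `id`, with the four dimensions shown in the result table. A minimal sketch of how such a response might be parsed and indexed for lookup (the function name and the fallback value are assumptions, not part of the tool; only two rows of the real payload are reproduced for brevity):

```python
import json

# Two rows copied verbatim from the raw LLM response above.
RAW_RESPONSE = """
[
  {"id":"ytc_UgzjrbpRdEv7Lp1rlWF4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgweNS0EGRRcIF0lVfB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}
]
"""

# The four coding dimensions from the result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw_response: str) -> dict:
    """Map each comment id to its {dimension: value} coding.

    Missing dimensions fall back to "unclear" (an assumption; the
    real tool may handle incomplete rows differently).
    """
    rows = json.loads(raw_response)
    return {
        row["id"]: {d: row.get(d, "unclear") for d in DIMENSIONS}
        for row in rows
    }

by_id = index_codings(RAW_RESPONSE)
print(by_id["ytc_UgweNS0EGRRcIF0lVfB4AaABAg"]["emotion"])  # resignation
```

Indexing by `id` makes it easy to join the model's coding back to the original comment text, which is exactly the inspection this view supports.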