Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Hank, I'm grateful to you for having this conversation. I have worked in this field for the last decade, and one thing I'd love to see in your further discussion of these topics is a little more discretion about using anthropomorphizing language. I know, personally, it's REALLY HARD, but I think you could use your voice in a very constructive way here. Something I think can be a helpful analogy is to talk about how corporations or even the economy are other "intelligent" systems that are, often, misaligned with humanity's long-term interests. I think this parallel can help people see this less as an intentional "evil machine" and more as a resulting effect of complexity, where optimizing for things that might sound good (employment, shareholder value, "engagement," etc.) actually leads to harmful impacts (climate change, war, fake news, etc.). When you talk about "superintelligence" or the dangers of "advanced" AI, I would love to see you draw on these parallels and try to use language around "feeling" and "thinking" a bit less. You touch on this a bit when talking about corporate behavior, but I think you should treat AI systems THEMSELVES like that, instead of just the builders. AI is an important mirror to ourselves and our societies. I think we can learn a lot from these systems, and we can learn a lot about how to adapt and approach their construction from our own complex systems.
Essentially: a system doesn’t HAVE to be super intelligent for it to be extremely dangerous. Part of what makes it dangerous is ubiquity, complexity/lack of interpretability, and an inability (or our unwillingness) to stop it.
youtube · AI Moral Status · 2025-11-02T18:1… · ♥ 1
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           none
Emotion          approval
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgyjZyTJQdV33bw0vop4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgwCMEtyTtZwynwkXrV4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugxh3riF0-4UK4etQ0d4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"}, {"id":"ytc_Ugw7UPSqMIu1xFiIUSl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugzrp8HbL5oyccS7tDh4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgyJZ5WYBWtWhye6KXN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_Ugw7_T-EMPRxzTRgF_N4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"}, {"id":"ytc_Ugy2ds2xE56wcAnbRrZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_UgwRM1UtUh06iVVjG654AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_Ugw22W8hz_3dOr8fC7h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"} ]