Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I might initially seem to be looking at this topic too simply, but from my perspective, doesn’t the concept of total human extinction in an AI-driven future carry some major contradictions? Think about it, if factory workers, or even all workers across industries, were entirely replaced by robots, and eventually humanity ceased to exist… then what exactly would remain for these systems to serve? Almost every company or economic system, no matter how automated, ultimately depends on some form of human consumption, whether it's food, products, entertainment, or services. Without people, there’s no demand, no culture, no meaning behind production. That’s why I’m skeptical about the more extreme AI extinction narratives. It often feels more like a mix of sensationalism and a strategic form of fear, maybe even a way to slow down AI progress. I’m not saying we shouldn't be cautious, but the idea of AI replacing humanity entirely seems to overlook the basic logic that our systems, economic, technological, and even philosophical, are built around human presence and participation. Without us, there’s nothing for any of it to matter to. I see AI in a mostly positive light, especially when it comes to its potential impact on society. Sure, I have some concerns when it comes to the creative side of things, like music and art, where human expression feels more personal and meaningful. But outside of that, AI could bring a lot of benefits. It has the potential to help cure diseases, drive innovation, make our lives safer, and even extend life expectancy. It might also eliminate a lot of tedious labor and repetitive jobs, which could free people up to focus on more meaningful or creative pursuits. In the bigger picture, I think AI could significantly improve the overall quality of life if it's developed and used responsibly.
youtube AI Moral Status 2025-06-24T10:2…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       deontological
Policy          unclear
Emotion         mixed
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_Ugx-jc1e7u7DevJtdtF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy8J1f8AedTSmDkidx4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwyC8DnI4f6M4qTJvZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyOgPK9SmEXBBvYcCF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgxQ0-X4_aSG7mhOugZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyZ31blsKtrVpq3rnB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzjG4GjgZ3W1kqmjUx4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugw1jPfmdNjZl-mOAMR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
  {"id":"ytc_Ugz5VENrwkkp8uRO5h94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzqRJ0NZrv4jO2DNGV4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
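The raw response is a JSON array with one code object per comment (`id`, `responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of parsing it and tallying one dimension, assuming the model output has been captured in a string `raw` (abbreviated here to two entries from the array above):

```python
import json
from collections import Counter

# Raw model output: a JSON array of per-comment codes (abbreviated sample).
raw = '''[
  {"id":"ytc_Ugx-jc1e7u7DevJtdtF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyZ31blsKtrVpq3rnB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"}
]'''

# Parse into a list of dicts; each dict is one coded comment.
codes = json.loads(raw)

# Look up the code for a specific comment by its YouTube comment id.
by_id = {c["id"]: c for c in codes}

# Tally a single dimension across all coded comments.
emotion_counts = Counter(c["emotion"] for c in codes)
print(emotion_counts)
```

Building the `by_id` index mirrors what the page above does: given a comment id, show its coded dimensions next to the original text.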