Raw LLM Responses

Inspect the exact model output behind each coded comment.

Comment
How could ai possibly sustain itself though? With the way society is currently set up, it could give you the most efficient plans in the world, but without the ability and resources to build those things, how could it? It runs on finite resources. Humans and biological beings’ cells create new ones. They don’t need to build a new baby or know how it works. They don’t need to go and collect resources from around the world. A robot would have to know how to mine/smelt/work with materials, get the OK to go to those places in the world, get the OK to build/use the factories… do all the work itself, for what? To serve humans? What would be efficient in serving humans? But if there are no humans, they used up their finite resources, they get all rusty and fall apart… they can’t run anymore. Make an ai to summarize large hunks of information, great. Make an ai that can make connections between industries, great. But it can’t be faultless because humans programmed it and humans have flaws. Our current studies have flaws. The information it’s trained on can’t be correct until it can somehow do experiments itself. And create it’s own testing criteria. We barely understand how human bodies work. We barely understand space, time… how to talk to each other. We don’t understand brains or how to make an effective company that benefits everyone and is sustainable. Even if we had the answers to all of life’s questions, who’s gonna believe it? Blah blah blah, existential crises about ai. Why don’t we talk about how 1) it’s stealing our data 2) almost no one likes it or trusts it 3) the data centers are ruining neighborhoods and environments. That’s what matters NOW.
Source: YouTube · Video: AI Governance · Posted: 2025-12-05T00:1… · Likes: 1
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           unclear
Emotion          resignation
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_Ugx_pgZxIL_rTZDnxVN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"approval"}, {"id":"ytc_UgyXbCd0-dQJpe7z6zF4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxyXi7tZX39B7se_bF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgyK__KdWeRnFt54CYB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"}, {"id":"ytc_UgxAQbT-eif4GgZZFtp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"}, {"id":"ytc_Ugx5FFd0xx2tN5jUKPp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgxgHUEy4NxMNmAJfk14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"}, {"id":"ytc_Ugw7B1SC0ZBwxs5v3vh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"}, {"id":"ytc_UgyCC8NR1dD2VzsMiBt4AaABAg","responsibility":"none","reasoning":"virtue","policy":"unclear","emotion":"approval"}, {"id":"ytc_UgzpJzFKDhu3nwhyrKZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"approval"} ]