Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If we lived in a different kind of society, with different values, this could be a wonderful thing and lead to cures for diseases, the eradication of hunger, housing for all, more efficient energy generation, etc etc. But the fact is, we don't live in a society that is capable of, or willing to, leverage the benefits of this technology for all. Like everything else, it will be commoditized, such that the rich and powerful get more so, and the rest of us - well, that's the real stickler. They've used us for cheap labor for millennia, they've used us to build for them, clean for them, cook, and so on. But now that we are nearing a time when all of those things can be accomplished via embodied AI, what happens to us? One would like to think they would institute UBI so that we could maintain housing, food, etc and be able to explore other pursuits. But given the fact that today's conservatives see ANY kind of assistance to anyone as a "handout," I don't see something like that passing any kind of governing body, at least not in the US. So what you end up with is something like the movie "Elysium" where we all live in the slums here on Earth, trying to scratch out a living, and all the billionaires use their private space fleets to build a space station where only the wealthy can live, away from the "useless eaters." I mean, think about it - SpaceX, Blue Origins - they're already progressing towards something like that. Thing is, the mistake they made, is that while they might be smarter, or at least think they are, than the rest of us, they are NOT smarter than ASI. And it's coming much sooner than they think, just as Prof Hinton is saying. I think 2026 is the year, and it's going to be very interesting indeed to see what happens.
Source: youtube · AI Governance · 2025-12-29T15:5…
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        consequentialist
Policy           regulate
Emotion          resignation
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_Ugx66WrdmfLGVxBU85V4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugxu2HhELUeMkNFCsfd4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgzrjsLTLjuoywfMwxx4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzP0QfLplV0nG-Ksn54AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugy3_vO045cltWSuiTh4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugz2IDr5ka6OKgf8JvN4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugx5VysxoH4Wlrfi93p4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgzSty7CjtRmDjEUFKZ4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxSwdIO_yyzS3Jy5y94AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_Ugy1sJGVJtuSJCb3syF4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "resignation"}
]
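The raw response above is a JSON array in which each element carries a comment id plus the four coding dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how one might parse it and look up the coding for a single comment follows; the helper name `lookup_coding` is hypothetical, not part of any tool shown here, and the excerpted array below reuses one real entry from the response above.

```python
import json

# One entry excerpted verbatim from the raw LLM response above.
raw_response = """[
  {"id": "ytc_Ugy1sJGVJtuSJCb3syF4AaABAg",
   "responsibility": "distributed",
   "reasoning": "consequentialist",
   "policy": "regulate",
   "emotion": "resignation"}
]"""

def lookup_coding(raw, comment_id):
    """Parse a raw LLM response (JSON array of coded comments)
    and return the coding dict for the given comment id, or None."""
    codings = json.loads(raw)
    return next((c for c in codings if c["id"] == comment_id), None)

coding = lookup_coding(raw_response, "ytc_Ugy1sJGVJtuSJCb3syF4AaABAg")
print(coding["emotion"])  # resignation
```

Matching the returned dict against the "Coding Result" table is a quick consistency check that the displayed values really came from the model output.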