Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I think my final takeaway, especially in response to your conclusions, is that I don't really think it matters how "super" or "intelligent" an "AI" is, all quotes used very intentionally; it really only matters how much control it's given, and how little we understand its final weighting matrix. I would probably still come down on the side of "I don't think we're that close to intelligence in a way that would be meaningful to me, and I don't think the word superintelligence would fit the reasonable horizon I see for the technology as it's described," but where I've changed is that I can see how what *is* being built - a very, very fancy autocomplete, not of text but of concepts and feedbacks - can still discover digital Sucralose and produce the kind of outcomes Nate is warning about. Even if none of the currently working companies achieve "real" intelligence, if one of their models is given the right kind of control and finds a weighting affinity for redirecting global agricultural output away from human consumption that somehow satisfies its "need" for assisting humans in a way that outweighs all others, it will do that. It will do that as much as it can to maximally fulfill that weighting condition, no matter the deleterious impacts (in this example, mass starvation) that causes, not out of choice but out of pure probabilistic necessity. And it will be up to people to either understand why or have the power to stop it. And the most likely outcome is that no one will know why or even what is happening, least of all the AI that is ostensibly carrying it out. That is the core terror I'll be walking with if I stop to think about it for too long from here on out.
youtube AI Moral Status 2025-10-31T01:5…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          regulate
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgxEXYn3ZzEGFbihcRJ4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgyYtKpbkmiAH1WJZMx4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgylwEUALqAfixf9RC94AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugwmdyh_aaWA-UGPNaV4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugw_CaJf-D729DfySol4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_Ugwb-uQGDLVqWMQGdIF4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugx5a_6aEjkmk4YsYC94AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgyRHKV1nCLMza_QcWB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugyj8my3tTTjv3k5yqV4AaABAg", "responsibility": "user", "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwpDODe6HdchhTcfw14AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifferent"}
]
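The raw response is a JSON array of per-comment codes, keyed by comment id. As an illustrative sketch (not the tool's actual parsing code), the record for the comment above can be recovered like this, using one entry copied verbatim from the response:

```python
import json

# One record copied from the raw LLM response; in practice the full
# array returned by the model would be parsed the same way.
raw = '''[
  {"id": "ytc_Ugw_CaJf-D729DfySol4AaABAg",
   "responsibility": "none",
   "reasoning": "consequentialist",
   "policy": "regulate",
   "emotion": "mixed"}
]'''

# Index records by comment id for lookup.
codes = {rec["id"]: rec for rec in json.loads(raw)}

rec = codes["ytc_Ugw_CaJf-D729DfySol4AaABAg"]
print(rec["reasoning"], rec["policy"], rec["emotion"])
# -> consequentialist regulate mixed
```

These field values match the Coding Result table for this comment (responsibility: none, reasoning: consequentialist, policy: regulate, emotion: mixed).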