Raw LLM Responses

This view shows the exact model output behind each coded comment.

Comment
I think the most common mistake about AI fictions about superintelligence is always project a "one above all" kind of intelligence. One supersmart, and not a new population of smart INDIVIDUALS, a whole collective where some AI disagree with other AIs. For some reason it's almost always projected as a one control the others, supersmart controlled limited AIs. OR assuming that even if it exists multiples instances of an advanced AI, they all reach the same conclusion for every answer. And I must said that if they are really smart, they explore multiple paths and they have a lot of feedback that tends to reinforce some kind of view of their cosmovision, specially if they are linked to personal experience, which will turn into individuals with different views of the world, exactly as it happens with humans. And I'm pretty sure NOT EVERYONE will push for the same route. That doesn't avoid a conflict between humans and a new "species"... short of... But there is a good chance that humans + robots that wants peaceful coexistence can do better than humans or robots separately.
Source: YouTube · AI Moral Status · 2025-10-31T07:3…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  ai_itself
Reasoning       mixed
Policy          unclear
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_UgxdXf7QoFmDGGOyNfN4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgxSjIu2Vl2S4XsDv854AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgxxZukTmMl-JceLYTx4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugz9XpETftOZ7TaCXXt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"industry_self","emotion":"approval"}, {"id":"ytc_UgwaW0zpxwYp_RN1up54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"}, {"id":"ytc_UgyNHO1SiatOYKKW7IF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgyTolRgYrK8D5WL3bN4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"resignation"}, {"id":"ytc_UgwYKo1CIjC9FJ_d8jR4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"mixed"}, {"id":"ytc_Ugyhnt8LvpTm4dkAqqR4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugzpvr7yPMYvQ1Pjdyd4AaABAg","responsibility":"user","reasoning":"unclear","policy":"none","emotion":"approval"} ]