Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
well, I don't think the really useful tools of Ai would be available for the vast majority of people. For multiple reasons, due to social gap, due to mindset gap. It's almost giving the grenade to a monkey. Humanity should be focused on improving their mindset and intelligence BEFORE using technologies to improve the quality of life. So here we see the dependency. Only those who really use technologies wisely, should use them. Let's say, the waitress from Kanzas, which has never had an intention or passion to become more educated and have another level and quality of life (not just getting rich from nothing) should not use Ai at all, cause she should start from the bottom, she should start to build a different type of mindset, where she wants to be a more valuable part of society not for someone but for her self. But the absurd is that she will never do. Cause it's her level and some people just not meant to be profesors or medical specialists.Most people want simple life, and self creation is just not what they value. That's why society is highly controllable. Cause people like to be under control and constantly blame the world of being not fair, yet still not being in power to change, cause only intelligence can do change and they are not in need of intelligence but in need of invaluable existence))
YouTube · AI Governance · 2025-10-01T09:3…
Coding Result
Dimension       Value
Responsibility  user
Reasoning       deontological
Policy          none
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgyQoErSk4OcMpkCqrx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugxzd5j_unxsQ8sb_5p4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
  {"id":"ytc_Ugzf2f8hYQH2U0jICqx4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzUUtqK3amlCS8WkfJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugxft6tc2TZnsVY-bA54AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwAipkQLRu4cI6vc7d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzhOHHYVmfhcqbLP-B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwxJprc_Porir0ZkRR4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxaeUL-j2mbj_m_UVR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwLEwpTxaZuAMrC2214AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
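Since the raw LLM response is a JSON array of per-comment codes, the coding result for any one comment can be recovered by indexing the array on the `id` field. A minimal sketch (assuming the response parses as valid JSON; the array below is abridged to two of the entries shown above):

```python
import json

# Raw LLM response: a JSON array of per-comment codes (abridged here).
raw = '''[
  {"id": "ytc_Ugxft6tc2TZnsVY-bA54AaABAg", "responsibility": "user",
   "reasoning": "deontological", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyQoErSk4OcMpkCqrx4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]'''

# Index the codes by comment id for direct lookup.
codes = {row["id"]: row for row in json.loads(raw)}

# Retrieve the coding for the comment inspected on this page.
code = codes["ytc_Ugxft6tc2TZnsVY-bA54AaABAg"]
print(code["responsibility"], code["emotion"])  # user fear
```

This mirrors how the "Coding Result" table above is derived from the raw response: the displayed dimensions are exactly the fields of the matching array entry.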