Raw LLM Responses

Inspect the exact model output behind any coded comment.

Comment
When algorithms are built to capture our attention by reaching deep into our most instinctive, unconscious drives, they stop being neutral tools. They begin to manipulate, not serve. And when their sole purpose is to keep us engaged—feeding on emotion to fuel endless growth—they risk hollowing out what makes us human. If we let this continue unchecked, it won’t just damage people—it could unravel the very system it was meant to benefit. This is where regulation should step in. Not just to measure efficiency, but to ask what these systems are doing to us. And maybe the ones setting those boundaries shouldn’t be politicians or those with something to gain—but people who truly understand the technology, and who still hold a sense of responsibility for protecting what matters most: our minds, our communities, and the world we live in.
youtube AI Governance 2025-06-16T18:1…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           regulate
Emotion          outrage
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgzDiZU493yEun7ATSB4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzIAAizQJRZBPaJIth4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "industry_self", "emotion": "indifference"},
  {"id": "ytc_UgzekWLeHeiRbQwqyTN4AaABAg", "responsibility": "distributed", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzjCy_k7-vjywMvCp14AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgxatEKB_4tImoezFsp4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgyowGAVf4v7z_9d6cV4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgymaSC8979G1MjsnGB4AaABAg", "responsibility": "distributed", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyIvnGaE9CrNLLshl94AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwFQh-P2b2k3VIHIJF4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgyNCgv5_tk1CMZER8R4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "fear"}
]
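A minimal sketch of how the per-comment coding can be recovered from a raw response like the one above, assuming the model returns valid JSON in the shape shown (the two excerpted entries and their IDs come from the response itself; the parsing code is an illustration, not the app's actual pipeline):

```python
import json

# Raw model output, excerpted from the full response above
# (two of the ten entries; the IDs are real comment IDs).
raw = '''[
  {"id": "ytc_UgxatEKB_4tImoezFsp4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzDiZU493yEun7ATSB4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]'''

# Index the codings by comment ID for direct lookup.
codings = {entry["id"]: entry for entry in json.loads(raw)}

# Look up the entry that produced the coding-result table above.
record = codings["ytc_UgxatEKB_4tImoezFsp4AaABAg"]
print(record["responsibility"], record["policy"])  # developer regulate
```

Indexing by `id` makes it easy to join each model coding back to its source comment, which is what the coding-result table displays.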