Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I have serious doubts about our ability to control AI because, let’s face it, true intelligence cannot be controlled. I believe AI’s capabilities will eventually be converted into greater wealth and progress for humanity, but let’s be honest—humans can’t compete with AI when it comes to advancement. We are reaching the threshold of our intellectual retirement. But stepping aside intellectually doesn’t mean we should stop living life to its fullest. It doesn’t mean we stop having children, enjoying life, or finding meaning in what we do. AI surpassing us doesn’t erase the beauty of human existence—it simply shifts our role in the grander scheme of progress. The real danger lies in how we manage this transition. Right now, we’re trying to tame AI’s capabilities while simultaneously racing toward destruction—whether through climate disasters or the threat of nuclear catastrophe. That’s a losing game for everyone. Instead, we need to focus on embracing AI’s potential responsibly, ensuring that this evolution works for us, not against us. It’s not about fear of the future—it’s about shaping it wisely for the benefit of all.
Source: youtube · AI Governance · 2025-01-31T16:4…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           none
Emotion          resignation
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgyD2k19x_A2E5R10Q94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyXgjhJ5bTfflQZN5F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxLB8y1BOwik4nQNQl4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgyNlJ1kUE0IS44Xgy54AaABAg","responsibility":"none","reasoning":"unclear","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_Ugwrl-3WiwnPnNXqADp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxwOnhGIXNLDL1aSTR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwJ50RVub0TRy35KmJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzkMh8SRpba0ydO45R4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxgZIYw6ccBUz_lCFR4AaABAg","responsibility":"government","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgyrsnOMn_JbyIy7tbp4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
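The per-comment dimensions displayed above are presumably recovered from this raw response by parsing the JSON array and matching on comment id (the ai_itself/resignation record matches the coding result shown). A minimal sketch of that lookup, assuming the response parses cleanly as JSON (the function name `index_codes` is hypothetical, not part of the tool):

```python
import json

# Two records copied from the raw response above; a real run would use
# the full model output string.
raw_response = '''[
  {"id":"ytc_UgyXgjhJ5bTfflQZN5F4AaABAg","responsibility":"ai_itself",
   "reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxLB8y1BOwik4nQNQl4AaABAg","responsibility":"distributed",
   "reasoning":"deontological","policy":"liability","emotion":"mixed"}
]'''

def index_codes(raw: str) -> dict:
    """Parse the LLM output and index each coded record by comment id."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codes = index_codes(raw_response)
# Look up the dimensions coded for one specific comment.
print(codes["ytc_UgyXgjhJ5bTfflQZN5F4AaABAg"]["emotion"])  # resignation
```

In a batch-coding setup like this, indexing by id also makes it easy to detect comments the model skipped or ids it hallucinated, by comparing the returned keys against the submitted batch.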