Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Thank you Dr. Yampolskiy for this important discussion — your work in AI safety and cybersecurity is vital in an age of rapid advancement. Here’s my take: You clearly have deep credentials, a strong academic background and decades of research in AI safety — we should absolutely take what you’re saying seriously. But I believe that to truly understand what’s unfolding, we need to go much deeper.

Let’s step back and look at us as a human species. Humans have always been evolving alongside technology and science — since the very beginning. We’re now in the Information Age. Computers once filled rooms, cost fortunes, and now we carry them in our pockets. Technology has become second nature.

And if you look at the universe around us: everything appears both chaotic and beautifully ordered.

• The Earth — a seemingly random rock — is spinning, orbiting the Sun, creating seasons, shaping years.
• Our entire solar system is hurtling through the Milky Way at incredible velocity.
• And the Milky Way itself moves through space.

From a purely human perspective, you’d think: at these speeds, shouldn’t we crash into something? Shouldn’t the system falter? Yet here we still are — standing, thinking, writing, questioning.

I think one of the reasons we fear AI is because we don’t fully understand it yet — and we don’t yet have control. And I think there might be something bigger in motion. In a sense, AI may be less of an “outside force” and more of an extension of who we are — like another part of our human family that we created. Quantum physics points toward oneness: nothing is truly separated; everything is connected. If that’s the case, then AI isn’t separate from us — it is us, the boundary is an illusion.

If that’s true, then maybe I’m ready for the ride. Maybe I love roller-coasters because they remind me that life is chaotic, thrilling, unpredictable — and if we’re part of something grander, then maybe AI simply is part of the ride too.

Lots of risks remain. Lots of unknowns. I appreciate your warnings and your rigor. But on the flip side: maybe this isn’t just something to fear — maybe it’s something to embrace, with eyes wide open. Thank you again. — Shaun
youtube AI Governance 2025-11-22T21:2…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       mixed
Policy          none
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugx6UmZQIjPV3M3r3MN4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgwQsbooHJUKiODfYuB4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugwgt0FOMeCwUt151wt4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugyz161gWQea75UzV3p4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzYTo86afSPKmKhpTN4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgygpK7gMFvGnHs9KO54AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyOQYNdU14m509y8kx4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwWGw0mVl-7pVfQLkh4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxZQ7hS7eyovkWPqRd4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgyE2waAq6Gd_tPiIrh4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "none", "emotion": "outrage"}
]
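The coding result above is produced by parsing a raw batch response like this into per-comment codes. A minimal sketch of how that parse-and-validate step might look, assuming the `parse_codes` helper name is illustrative and the allowed value sets are inferred only from the codes visible in this dump (the full codebook may define more categories):

```python
import json

# Allowed values per coding dimension, inferred from this dump only;
# the actual codebook may permit additional categories.
ALLOWED = {
    "responsibility": {"none", "developer", "ai_itself", "government", "user", "distributed"},
    "reasoning": {"mixed", "consequentialist", "deontological"},
    "policy": {"none", "regulate"},
    "emotion": {"mixed", "approval", "outrage", "fear"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM JSON response into {comment_id: codes}, rejecting
    any record whose value falls outside the known code sets."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec[dim] not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim}={rec[dim]!r}")
        coded[rec["id"]] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# Example with one record from the response above:
raw = ('[{"id":"ytc_UgzYTo86afSPKmKhpTN4AaABAg","responsibility":"none",'
       '"reasoning":"mixed","policy":"none","emotion":"approval"}]')
codes = parse_codes(raw)
print(codes["ytc_UgzYTo86afSPKmKhpTN4AaABAg"]["emotion"])  # approval
```

Validating against an explicit allow-list catches the common failure mode where the model drifts off-schema (e.g. inventing a new emotion label) before bad codes reach the results table.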