Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The video AI2027: Is this how AI might destroy humanity? presents a thought-provoking look at potential risks emerging from rapid advances in artificial intelligence. It explores research suggesting that if AI becomes autonomous without proper safeguards, it could pose existential threats to humans. The narration is clear and engaging, backed by expert opinions that balance optimism with caution. Visually, the video uses concise explanations and credible sources, making complex ideas accessible even to non-experts. Overall, it’s a compelling and educational piece that encourages viewers to think seriously about the future of AI and humanity’s role in guiding it.
youtube AI Governance 2026-01-31T18:1…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          regulate
Emotion         approval
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgwmCgtPNSAQkibV3UJ4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",     "emotion": "resignation"},
  {"id": "ytc_UgzwTcrWCUAefBf5Eb14AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none",     "emotion": "fear"},
  {"id": "ytc_UgzPVZag2Re52ZAzTsV4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",     "emotion": "fear"},
  {"id": "ytc_UgyLEa5RgXgpXVHyccR4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",     "emotion": "fear"},
  {"id": "ytc_Ugx9C82t0dsEa8eomwx4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgwNL7F5yUmtxmtYbsd4AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "none",     "emotion": "indifference"},
  {"id": "ytc_Ugx8CzyqM7ADzYZWX494AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugwnd_UxLTMbkGTWZ2Z4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",     "emotion": "approval"},
  {"id": "ytc_UgyCuyot3wO55C_LSuh4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none",     "emotion": "fear"},
  {"id": "ytc_Ugzn3FJryeK4OWKliW94AaABAg", "responsibility": "user",      "reasoning": "deontological",    "policy": "none",     "emotion": "approval"}
]
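A raw response like the one above can be parsed and sanity-checked before its records are stored as coding results. The following is a minimal sketch, assuming each record must carry an `id` plus the four coding dimensions shown in the table (Responsibility, Reasoning, Policy, Emotion); the function name `parse_codings` and the two-record sample are illustrative, not part of the original pipeline.

```python
import json
from collections import Counter

# Sample raw model output: the first and fifth records from the response above.
raw = """[
 {"id":"ytc_UgwmCgtPNSAQkibV3UJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
 {"id":"ytc_Ugx9C82t0dsEa8eomwx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}
]"""

# The four coding dimensions plus the comment id (from the table above).
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codings(text):
    """Parse the model's JSON array and verify every record has all required keys."""
    records = json.loads(text)
    for rec in records:
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id', '?')} missing keys: {missing}")
    return records

codings = parse_codings(raw)
policy_counts = Counter(r["policy"] for r in codings)
print(len(codings), dict(policy_counts))  # 2 {'none': 1, 'regulate': 1}
```

A check like this catches truncated or malformed model output early; a stricter version could also validate each dimension's value against the project's codebook, whose full category lists are not shown here.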