Raw LLM Responses
Inspect the exact model output for any coded comment.
Comments can be looked up by comment ID. Random samples:

- `ytc_UgzlsXsOS…` — "mixed feelings on this. i love seeing artists create more art with their own sty…"
- `ytc_UgyPLkX2s…` — "Given how AI actually works... if you want this to stop... make AI works public …"
- `ytc_UgzB6OfSt…` — "That "friend" saying, "AI isn't even that easy to use" is like ordering takeout …"
- `ytc_UgxqKG1Ll…` — "Everyone focuses on how human looking we can make them. This has been around fo…"
- `ytc_Ugz6VFLTl…` — "I predicted the AI onslaught would have an effect on the way we speak and commun…"
- `ytc_UgznuW-PW…` — "They're not going to tax the Ai companies. It's a nice thought, but we don't eve…"
- `ytc_UgyOM8-_8…` — "suppose a bomb fall on your head right now ? what if you choke right now ? eliez…"
- `rdc_erbcjrl` — "The real criminals are the buyers who are enjoying their lives in various parts …"
Comment
The video AI2027: Is this how AI might destroy humanity? presents a thought-provoking look at potential risks emerging from rapid advances in artificial intelligence. It explores research suggesting that if AI becomes autonomous without proper safeguards, it could pose existential threats to humans. The narration is clear and engaging, backed by expert opinions that balance optimism with caution. Visually, the video uses concise explanations and credible sources, making complex ideas accessible even to non-experts. Overall, it’s a compelling and educational piece that encourages viewers to think seriously about the future of AI and humanity’s role in guiding it.
Source: youtube · Topic: AI Governance · Posted: 2026-01-31T18:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwmCgtPNSAQkibV3UJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzwTcrWCUAefBf5Eb14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzPVZag2Re52ZAzTsV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyLEa5RgXgpXVHyccR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugx9C82t0dsEa8eomwx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgwNL7F5yUmtxmtYbsd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx8CzyqM7ADzYZWX494AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugwnd_UxLTMbkGTWZ2Z4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyCuyot3wO55C_LSuh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugzn3FJryeK4OWKliW94AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"}
]
```
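The raw response is a JSON array mapping each comment ID to the four coded dimensions shown in the table above. A minimal sketch of how such a response might be parsed and validated, assuming the dimension vocabularies are exactly the values seen in this sample (the real codebook may define more categories):

```python
import json

# Allowed values per dimension, inferred from the sample response above.
# Assumption: the actual codebook may include additional categories.
SCHEMA = {
    "responsibility": {"none", "ai_itself", "user"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "regulate"},
    "emotion": {"resignation", "fear", "approval", "indifference"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response into {comment_id: {dimension: value}}.

    Raises ValueError if an entry is missing a dimension or uses a value
    outside the known vocabulary.
    """
    coded = {}
    for entry in json.loads(raw):
        comment_id = entry["id"]
        codes = {}
        for dim, allowed in SCHEMA.items():
            value = entry.get(dim)
            if value not in allowed:
                raise ValueError(f"{comment_id}: bad {dim!r} value {value!r}")
            codes[dim] = value
        coded[comment_id] = codes
    return coded

# One entry from the sample response, used as a smoke test.
raw = '''[
  {"id": "ytc_Ugx9C82t0dsEa8eomwx4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"}
]'''
coded = parse_coding_response(raw)
print(coded)
```

Validating against a closed vocabulary catches the common failure mode where the model invents a label outside the codebook, so a bad batch fails loudly instead of silently skewing the coded counts.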