Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "No one will slow AI for the fear an enemy will make it. Evil of humanity is its …" (ytc_Ugws8WyhG…)
- "I don't like that it's “all in ai apps,” like what does that even mean? Why not …" (ytc_UgxfviLT2…)
- "What i dont get it people going out of their way to hate on ai art people arent …" (ytc_UgwSbtvQU…)
- "Yeah, and what you completely overlook is the fact that "web" and "apis" are a f…" (ytc_UgyeyrSR0…)
- "Good talk. No question autonomous weapons exist today. They prove their effect…" (ytc_UgzlZkSAa…)
- "Is that not what you were trying? My first assumption was "Ah, he's trying to ge…" (ytc_Ugy7jB4AF…)
- "But what tools should I use? Everyone is using AI in IDEs it seems like, but the…" (ytc_UgyKv9d5s…)
- ""hi, i'm josh, the alarm bells are ringing"... you want this? really? you want a…" (ytc_UgyiUBRoi…)
Comment
Have you ever thought about the events leading up to or directly following the emergence of a super intelligence? What could the world look like if the super intelligence were fearful and controlling or loving and hopeful? How will humanity be treated and what freedoms will they have? Could they resist the superintelligence and why would they if their basic needs are met?
https://youtu.be/PKEpiUvP9iU?si=WIXXVgZhL0_gDZiR
Through cinematic storytelling featuring:
✨ AI-generated video scenes
🎙 Multi-voice character narration
📖 Follow-along synchronized captions
🎬 Immersive visual experience
Source: youtube · Topic: AI Governance · 2025-10-26T15:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgxuFPOk_wjnOYYX6ZF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxBsiQsq7uY_MwKyRx4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugyxr9Z93198XjDKdOx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwEKnsNkrBdEpluFqd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxZ5qCfZzpCja1Z_gJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugw8hG73mhPgvrWFw_h4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxT1RhfT9IlUx1Uqhl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzWTtMUFm6_9B7Ejn14AaABAg","responsibility":"none","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugw1QjNXBKWpJy0SINJ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyAUW5LMr9rkdNUCbl4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"}
]
```
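The raw response is a JSON array with one coding object per comment ID, covering the four dimensions shown in the table above. A minimal sketch of how such a payload could be parsed and sanity-checked downstream; note that the allowed value sets here are inferred only from the responses visible on this page (an assumption, not the full codebook), and `parse_codings` is a hypothetical helper name:

```python
import json

# Allowed values per coding dimension, inferred from the responses shown
# on this page. This is an assumption; the real codebook may define more.
ALLOWED = {
    "responsibility": {"ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "contractualist",
                  "mixed", "unclear"},
    "policy": {"ban", "regulate", "none", "unclear"},
    "emotion": {"mixed", "outrage", "fear", "indifference",
                "resignation", "approval"},
}


def parse_codings(raw: str) -> dict:
    """Parse a raw LLM coding response and index it by comment ID.

    Raises ValueError if a record is missing a dimension or uses a
    value outside the expected code set.
    """
    records = json.loads(raw)
    coded = {}
    for rec in records:
        cid = rec["id"]
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim}={rec.get(dim)!r}")
        coded[cid] = {dim: rec[dim] for dim in ALLOWED}
    return coded
```

Validating against a fixed code set at ingest time is what makes a lookup-by-ID view like this one reliable: a malformed or hallucinated label fails loudly instead of silently appearing in the coding table.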