Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
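Outside the UI, the same lookup can be scripted. Below is a minimal sketch; it assumes the codings have been exported as a JSON array of records to a file named coded_comments.json, which is a hypothetical path, not part of this tool.

```python
import json

# Hypothetical export path; the tool's actual storage backend is not shown here.
CODED_RESULTS_PATH = "coded_comments.json"

def lookup_coding(comment_id: str) -> dict | None:
    """Return the coded dimensions for one comment ID, or None if it is absent."""
    with open(CODED_RESULTS_PATH, encoding="utf-8") as f:
        records = json.load(f)  # list of {"id": ..., "responsibility": ..., ...}
    return next((r for r in records if r["id"] == comment_id), None)

if __name__ == "__main__":
    # Full comment IDs (not the truncated previews shown below) are required.
    print(lookup_coding("ytc_UgyE65Df8bABbhROkZp4AaABAg"))
```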
Random samples (click to inspect):

- "Is a corporation without any accountability problem, not an AI problem AI is jus…" (ytc_UgxLZk8yL…)
- "When is AI gonna start making more jobs then it’s deleting. I’ve only seen compa…" (ytc_Ugw0lJrYT…)
- "Okay, regardless of what caused the crash, my question is why such a "modern, ad…" (ytc_Ugw77GzS8…)
- "Why are you scared of AI. . . It's not Terminator. . . It is this!!!! I get a W-…" (ytc_Ugw01YYNE…)
- "About the topic of joblessness, next generation and the leading AI companies, I …" (ytc_UgzjMeM33…)
- "I hope AI doesn't overshadow real art. It’s really fun and useful tool and here …" (ytc_UgxZFkgMQ…)
- "Ai art isn’t even that evil, he doesn’t deserve this! Oh, it’s an NFT bro? Never…" (ytc_UgyfQ3MR_…)
- "Boy he is utterly delusional. AI will cause the collapse of all civilization by …" (ytc_UgwuzMljq…)
Comment
This video feels like the "we're destroying the planet so we need to figure out how to colonize Mars instead of how to stop destroying the planet" approach to AI. Completely disconnected from the reality of what "AI" is. Utterly misguided in what needs prioritizing.
Should there be guardrails to prevent a destructive superintelligence? Sure, why not. Is this an immediate concern by ANY measure? Absolutely not. There are so many more pressing issues with AI today - the spread of mis- and disinformation, the criminal (and the "not actually criminal but ought to be") uses, the environmental impact, the impact on education, the security risks...
The big scary superintelligence coming to get us really seems like more of a boogeyman than anything, especially compares to the current, urgent issues.
Source: youtube · 2025-11-28T03:2… · ♥ 6
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
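One practical check on a coding result is that every dimension carries a known label. The sketch below validates a record against the label sets visible on this page (the table above and the raw response below); the real codebook may define additional values, so these sets are illustrative, not authoritative.

```python
# Labels observed on this page only; the full codebook may contain more.
OBSERVED_LABELS = {
    "responsibility": {"none", "ai_itself", "company", "developer"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"resignation", "outrage", "fear", "indifference", "disapproval"},
}

def check_record(record: dict) -> list[str]:
    """Return the fields whose value falls outside the observed label sets."""
    return [
        field
        for field, allowed in OBSERVED_LABELS.items()
        if record.get(field) not in allowed
    ]
```

For the coding result in the table above, check_record returns an empty list.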
Raw LLM Response
```json
[
  {"id":"ytc_UgyE65Df8bABbhROkZp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugyezeg17EFbsbdwvF14AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzDxTGlPg8vzKyIqDR4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwRWT-18Hd89OH771p4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwiDVf8j4lr_rhnAwh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwSl8stZUIEEWD_gFJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw5UAvXA8uK5e0XQ5B4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugwe0HDJsAF8rNAbG5V4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxYw9TfQf6AriJbpYx4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"disapproval"},
  {"id":"ytc_Ugx2EOTv1FzR635MsJt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"disapproval"}
]
```
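Because each raw response is a JSON array covering a whole batch, it can be summarized directly. A minimal sketch, assuming the raw text parses cleanly as JSON (as the example above does):

```python
import json
from collections import Counter

def emotion_counts(raw_text: str) -> Counter:
    """Tally the emotion labels assigned in one raw batch response."""
    return Counter(record["emotion"] for record in json.loads(raw_text))
```

On the batch above this yields outrage: 4, resignation: 2, disapproval: 2, fear: 1, indifference: 1.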