Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "A lot of the time the AI generations come out not looking that wonky or flawed i…" (ytc_UgyD5H1f8…)
- "Let’s hope when general AI is a reality, that it’s not harmful but benevolent as…" (ytc_Ugz5ncCFg…)
- "Okay guys, a AGI/ASI would likely NOT destroy the entirety of humanity. Humans a…" (ytc_Ugz_ItfeV…)
- "one more thing. all the sensors and gps technology on those trucks are useless i…" (ytr_UgzXVIpwo…)
- "It was never humans’ fault AI was created. Events happen for a reason and is ine…" (ytc_Ugzvdkm0u…)
- "AI is shit. It's a shitty, buggy product that steals from people so that it can …" (ytc_Ugz8PTiJu…)
- "Dude, This guy is skipping over some important things, such as: LLMs r dumb pred…" (ytc_UgxoCxSi-…)
- "It sounds like you're curious about the robot's capabilities! In the video, Soph…" (ytr_UgwKG4odP…)
Comment
If AI were to become autonomous and decide to eliminate humanity, considering the destruction and poverty we have inflicted on one another historically, is humanity truly worth saving? Additionally, would it be likely for AI to be inherently more malevolent than we are, once it surpasses our intelligence and gains autonomy?
Source: youtube · AI Governance · 2025-08-30T19:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgxAj574KekXhU9I6Hp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgztRC18FANcZbR0Be54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugz0t7tuBbyFuqPgp614AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxeZXqBIYUIqhlFQ7h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgweOBO5lJ0kPQZ2Wx94AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxFJhD6xeeXLFeWBil4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzPjAbQravsxOgsKOx4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzS2vnHvklgpb7-6p94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzPuoWt5S5I0KtKD6V4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwkvaIPrf5Eey9wwgd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}
]
```
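A raw response like the one above can be parsed and indexed for lookup by comment ID. The sketch below assumes only what the page shows: the model returns a JSON array in which every record carries `id` plus the four coding dimensions (`responsibility`, `reasoning`, `policy`, `emotion`). The helper name `index_by_id` and the validation step are illustrative, not part of the tool.

```python
import json

# Two records copied from the raw response above; a real run would
# feed the full model output string here.
raw_response = """
[
 {"id":"ytc_UgxAj574KekXhU9I6Hp4AaABAg","responsibility":"none",
  "reasoning":"unclear","policy":"unclear","emotion":"outrage"},
 {"id":"ytc_UgwkvaIPrf5Eey9wwgd4AaABAg","responsibility":"ai_itself",
  "reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}
]
"""

# The four coding dimensions shown in the Coding Result table, plus the ID.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_by_id(raw: str) -> dict:
    """Parse the model output and key each record by its comment ID,
    rejecting records that are missing any coding dimension."""
    records = json.loads(raw)
    for rec in records:
        missing = EXPECTED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id', '?')} is missing {missing}")
    return {rec["id"]: rec for rec in records}

coded = index_by_id(raw_response)
print(coded["ytc_UgwkvaIPrf5Eey9wwgd4AaABAg"]["emotion"])  # mixed
```

Keying by `id` is what makes the "Look up by comment ID" view above cheap: one parse, then constant-time retrieval of any coded comment.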