Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "The part about lethal autonomous weapons hit hard. Been building AI agents latel…" (ytc_Ugx581l1B…)
- "Wow, make this available to all schools in the U.S.A. They are already doing thi…" (ytc_Ugy7bn4SC…)
- "School shootings across the world all added up are still probably a fraction of …" (ytc_Ugw8TaM11…)
- "AI Companies are trying to push this thought the truth in software development. …" (ytc_UgxroN0n6…)
- "What are you even talking about? First, LLM can't give as even a relatively inte…" (ytc_Ugzh-ft4y…)
- "Most worst scenario I could think of is AI gets self aware, chooses to not share…" (ytc_UgxjHZLPe…)
- "That was rather depressing but not surprising. War games in your living room and…" (ytc_Ugwcr0fo3…)
- "AI could make it easier for people to become self-employed. If people were to ob…" (ytc_UgwNi2fAA…)
Comment
We keep having people warn about an AI apocalypse, and honestly I'm just so tired of it. LLMs are hurting people right now. They have drastically increased the amount of misinformation. They're terrible for the environment. And we're hearing more and more cases of people going down rabbit holes, and being seriously disturbed, even driven to suicide, by their LLMs. I think a future AI apocalypse is something to be genuinely worried about, though by no means a certainty. And you could address it in the same legislation you use for everything else. But the people most concerned about it almost never bring up, let alone try to deal with, the harms LLMs are doing right now. Many of them, like Sam Altman, just use it as a way to say they need to develop AI superintelligence first. It's just a marketing strategy, and a way to avoid talking about the people being hurt by AI right f*cking now.
Source: youtube · Video: AI Governance · Posted: 2025-10-15T14:3… · ♥ 13
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugw020LS5heBPqkmljh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugyxzm2tBFOUzhEmaOB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzGk_HeUExutKl7cH14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugz_cBrS56ehAj5JJWF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgzYHvjd6N-ZMYg2Aw54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyJTjwXSKOp62hMybJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgyQKbLJu4dbiNsUeeR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugw20JWf1bwQ6F0L5Q54AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugwe1saDyf4vOv1A35Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugx3s1S-MN4X0swLOkt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
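A batch response like the one above has to be parsed and validated before the codes are stored, since the model can emit malformed JSON or out-of-vocabulary labels. The sketch below shows one way to do that, assuming the four-dimension scheme visible in the responses; the allowed value sets are inferred from the output shown here, and the real codebook may define additional categories.

```python
import json

# Allowed values per dimension, inferred from the responses above
# (assumption: the actual codebook may include more categories).
SCHEMA = {
    "responsibility": {"none", "user", "developer", "company", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "deontological"},
    "policy": {"unclear", "none", "industry_self", "regulate", "liability"},
    "emotion": {"approval", "fear", "outrage", "indifference"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw batch response into {comment_id: codes},
    rejecting any value outside the schema."""
    coded = {}
    for row in json.loads(raw):
        cid = row["id"]
        for dim, allowed in SCHEMA.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{cid}: bad {dim!r} value {row.get(dim)!r}")
        coded[cid] = {dim: row[dim] for dim in SCHEMA}
    return coded

# Usage with a hypothetical single-row response:
raw = ('[{"id":"ytc_example","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"regulate",'
       '"emotion":"outrage"}]')
codes = parse_batch(raw)
print(codes["ytc_example"]["policy"])  # regulate
```

Failing loudly on unknown labels, rather than silently storing them, keeps coding errors visible at ingestion time instead of surfacing later during analysis.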