Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "I'm not kidding, I got an ad for an AI chat bot before watching this. What the f…" (ytc_UgyWh9r3G…)
- "I don't even allow ai to expand my ideas, it just feels like I'm to lazy to eve…" (ytc_Ugw5Y7bP9…)
- "Eric schmidt is frightening. His cronies are gonna unleash this monster woth no…" (ytc_UgwZJkUYa…)
- "It's so insidious. How much AI did the film producers use to create this video? …" (ytc_UgwFTlTim…)
- "@LukeEricSavageSr Yes, I was talking to it yesterday, and it said something to …" (ytr_UgxRphiAo…)
- "@brendondevilliers1350 because the simulation is a controlled environment catere…" (ytr_UgzVkFeOb…)
- "Trying to make us feel more comfortable around robots . As they slowly take all…" (ytc_Ugyeo2Oxl…)
- "That's amazing and the redrawn pictures look beautiful, but wouldn't they not ex…" (ytc_UgyNEaqlg…)
Comment
This is a bit of a tangent, but I feel like there needs to be some discussion on how AI was presented here.
I really appreciate what Chubbyemu does as a public-facing medical educator; using these stories to explore both the practice and theory of medicine is a valuable service. Having said that, it REALLY irks me when people use thought-terminating clichés about AI like "it's just a tool", as if every tool is inherently neutral until the moment it is used.
I want to address a couple of key points, the first being the ease of misuse. There is a difference between a tool like a spoon, which you really have to go out of your way to harm yourself or others with, and a knife. This is not to say that just because a tool can easily cause harm we need to impose extreme restrictions on it, but it does mean that it cannot be treated the same way as a tool that cannot easily cause harm. In the case of AI, it encourages some of the worst instincts of the human mind. Contrary to the ease with which you can generally avoid stabbing yourself or others with a knife so long as you exercise reasonable caution, identifying, much less combating, these psychological pitfalls is significantly harder.
Second is what I will refer to as "build quality". Did the creators of the tool take reasonable steps to ensure their tool is as safe as possible when used for its intended purpose in its intended fashion? If a blacksmith is using a brand-new hammer, and the head flies off and hits a customer, the blame falls on the manufacturer. Negligence in the manufacturing of a tool also influences how the tool itself should be viewed, as consumers must be made aware that tools from certain sources cannot be used safely even when handled properly. AI companies have constantly resisted taking responsibility for harm done by their tools, despite frequently having been warned or otherwise been aware of concerning actions made by their product.
Lastly, the accessibility of the tool is a factor. If a tool has inherent risks even when used properly, certain precautions need to be taken to ensure that those who cannot use it safely do not have access to it. If an infant injures itself with a knife, the parents are the responsible party for not making it inaccessible. I don't think many words are necessary to explain why flooding the world with a novel and highly controversial technology, such that even those who want nothing to do with it have difficulty fully avoiding it, is not a wise or ethical thing to do.
These are all very, VERY simple points that should be considered any time AI is brought up, but statements that disingenuously present it as perfectly neutral outside of how it is used by individuals are intentionally used to prevent these conversations. This only benefits those attempting to launder AI as a solution to imagined or poorly-conceived problems, while spreading confusion among a wider audience who generally don't have a grasp on what AI really is, much less the harms it can cause.
Source: youtube · AI Harm Incident · 2025-12-19T19:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id": "ytc_Ugz-lqQezSt27jJzmH54AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwdCUxC1bVIF9ifwhR4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgxMgnxLPfP2dw5QTUd4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyTlqMp2w4-tphQ-1t4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugy2PMa9cOJEbcLvwUR4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgzO0tD5wObEKn73hwp4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyyJ8RkP9TiWm8wt7l4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgzNWJ3A41GI70h8S9R4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwntIJQBkdpnyJwhNF4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzXQZ0ysYXO5Z7S0AB4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"}
]
```
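A response like the one above is only usable if every record carries valid values for each coding dimension. The sketch below shows one way to parse and validate such a response in Python. The allowed value sets in `CODEBOOK` are assumptions inferred from the values visible in this output (the real codebook may include other labels), and the record IDs in the usage example are made up for illustration.

```python
import json

# Assumed allowed values per coding dimension, inferred from the values
# seen in this project's output; the actual codebook may differ.
CODEBOOK = {
    "responsibility": {"developer", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "ban", "unclear"},
    "emotion": {"outrage", "fear", "approval", "indifference", "mixed"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response (a JSON array of coded records) and keep
    only records whose dimensions all match the codebook; malformed
    records are reported and skipped."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        bad = [dim for dim, allowed in CODEBOOK.items()
               if rec.get(dim) not in allowed]
        if bad:
            print(f"skipping {rec.get('id')}: invalid dimensions {bad}")
        else:
            valid.append(rec)
    return valid

# Hypothetical two-record response: the first is well-formed, the second
# uses an out-of-codebook responsibility label and should be dropped.
raw = (
    '[{"id":"ytc_example1","responsibility":"developer","reasoning":"mixed",'
    '"policy":"unclear","emotion":"outrage"},'
    '{"id":"ytc_example2","responsibility":"robot","reasoning":"mixed",'
    '"policy":"none","emotion":"fear"}]'
)
coded = validate_codings(raw)
print(len(coded))  # 1
```

Validating before ingest matters here because a single hallucinated label from the model would otherwise flow silently into the coded-results table.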