Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
It seems like there's another big definitional difference here. Most people here…
ytc_UgzdYdHXo…
"We've been asking the wrong question. Not 'how do we make AI safer?' — but 'who…
ytc_UgzHc0Swm…
this was over 2 years ago and from what I know he didn't show it on stream. His …
ytr_UgzSiqCg5…
you reeducate yourself. There is no other choice.
If America out of all develope…
ytr_UgxRKNQn-…
Okay chatgpt, are you willing to put your life at line if you actually believe t…
ytc_Ugwwl8RJa…
In Trumpism and Putinism, only the very few rich people will have first tier acc…
ytc_UgyJP131J…
@lepidoptera9337 I’m sure someone would pay for an expensive worker if it out pe…
ytr_UgzaaAe59…
If YouTube can use its AI censorship technology systems to delete "offensive" co…
ytc_UgwV8lpgv…
Comment
It wasn't the AI that got him arrested. Multiple humans also looked at the two pictures and agreed it was the same guy. This is not the same as "AI makes a comparison no human would make and causes issues," like with the Doritos bag. The two men look extremely similar, so they got mixed up. This is not the slam dunk the AI haters think it is.
Also, a lot of AI problems are just human problems. Humans make mistakes or don't check things and pretend it's the AI's fault. Any system, even any human, can have false alarms. In the case of the Doritos, when a system flags something as a gun, humans have to check that before actually calling the police. Even if it wasn't a system, say they had a human watching the cameras, any sensible person, upon being told that someone saw a kid with a gun, would ask "Are you sure?" "Show me." "Let's take another look to be sure." etc.
Whenever you hear "AI caused this issue," think: what if it was a human? That helps you figure out whether it's the AI's fault or the human's fault. If a system that can take drastic action, like calling the police, is controlled by a single entity that nothing else checks before the alarm goes through, that's a bad system, and the issues that arise are the fault of the system, not of the checker, whether the checker is a human-designed algorithm, an AI agent, or a human. The problem isn't the thing doing the check; it's only having one check.
IBM once said, "A computer cannot be held responsible, so a computer must never make a management decision." If an AI makes a mistake, the human is responsible for not checking it, because the human is the "boss" of the AI. For example, there was a similar "it's the AI's fault" episode with commits to the Linux kernel. Some people would just commit AI-generated code and go "If it doesn't work, it's the AI's fault." No: YOU are committing the code. YOUR name is on it. Even if you had help from an AI, or from a friend, or from a co-worker, YOU'RE putting YOUR name on that commit. YOU are responsible for it.
Remember when CrowdStrike broke the Internet? CROWDSTRIKE broke the Internet. "The intern writing the code" didn't break the Internet, and "AI" didn't break the Internet. CROWDSTRIKE broke the Internet. Because the code was released by CrowdStrike, they're responsible for it. Whether it was written by a human or an AI doesn't matter. AI is not the problem. Organizations and individuals not taking responsibility for their actions is the problem.
youtube
2026-01-03T19:0…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
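To make the four dimensions above concrete, here is a minimal Python sketch of one coding record. The category vocabularies are assumptions read off the sample response below (the real codebook may define more values), and `CodingResult` and the vocabulary sets are hypothetical names, not part of the actual pipeline.

```python
from dataclasses import dataclass

# Vocabularies as observed in the sample response below (assumption:
# the full codebook may allow additional values).
RESPONSIBILITY = {"user", "developer", "company", "ai_itself", "distributed", "unclear"}
REASONING = {"deontological", "consequentialist", "virtue"}
POLICY = {"regulate", "liability", "industry_self", "ban", "none", "unclear"}
EMOTION = {"outrage", "fear", "resignation", "indifference", "mixed"}

@dataclass(frozen=True)
class CodingResult:
    """One coded comment, mirroring the dimension table above."""
    id: str              # comment ID, e.g. "ytc_..." or "ytr_..."
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def __post_init__(self) -> None:
        # Reject values outside the observed vocabularies.
        if self.responsibility not in RESPONSIBILITY:
            raise ValueError(f"bad responsibility: {self.responsibility!r}")
        if self.reasoning not in REASONING:
            raise ValueError(f"bad reasoning: {self.reasoning!r}")
        if self.policy not in POLICY:
            raise ValueError(f"bad policy: {self.policy!r}")
        if self.emotion not in EMOTION:
            raise ValueError(f"bad emotion: {self.emotion!r}")
```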
Raw LLM Response
```json
[
{"id":"ytc_Ugz-6Wb6HjtpcYo2KgN4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugzw5PWwUZIo9xUoUMZ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_UgzbL_itqY2wBYoTWHh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzmouhjcgxQ7Aw2S5d4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz5U9NR3h0cGjSAWJx4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzitZMaPRABI5Vm4Ft4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwOCCqvTmwrPAQCaWB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
{"id":"ytc_UgxfTZs2Mb6hiUBjB3F4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwQelHaUMwfPUxikxV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwnXuCUIVJhmGnIseB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
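A raw response like the one above is just a JSON array, so the "look up by comment ID" flow at the top of this page can be sketched in a few lines of Python. `index_llm_response` is a hypothetical helper, not the page's actual code; the sample row in the usage example is copied from the array above.

```python
import json

def index_llm_response(raw: str) -> dict[str, dict]:
    """Parse one raw LLM response (a JSON array of coding objects,
    as shown above) and index the rows by comment ID."""
    return {row["id"]: row for row in json.loads(raw)}

# Minimal usage with one row copied from the response above.
raw = '[{"id":"ytc_UgzbL_itqY2wBYoTWHh4AaABAg","responsibility":"distributed",' \
      '"reasoning":"consequentialist","policy":"none","emotion":"indifference"}]'
coding = index_llm_response(raw)["ytc_UgzbL_itqY2wBYoTWHh4AaABAg"]
assert coding["policy"] == "none"
```

Indexing by ID up front makes every subsequent lookup constant-time, which matters when a single response codes many comments in one batch.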