Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by ID, or browse the random samples below.
- "Lol, I have taught middle school for 31 years. Good luck to that AI robot. My …" (ytc_UgwsVxiwM…)
- "Ah men don't see real women at all😂 that's why that ai generated image looks lik…" (ytc_UgzjvrDEL…)
- "When your trying to watch a serious video and you get a flash frame of a robot g…" (ytc_UgxazTWjl…)
- "If you're using AI for degenerate schlock, which is perfectly fine, then don't g…" (ytr_UgzZU64oL…)
- "In statistics, the term "bias" means that "the model (or statistic) will tend to…" (ytr_Ugw8-jz4m…)
- "From the quality of movies and tv these days they might as well just let AI do t…" (ytc_Ugz46IRhH…)
- "Artists are practicing and perfecting their craft for years… what are they talki…" (ytc_UgyiQCGNE…)
- "Again, has no one ever watched i Robot... Loved the im alive tho most human reac…" (ytc_Ugz0U5A9I…)
Comment
AI Hallucination is not about intent. The LLM is not being duplicitous. The problem is that all Machine Learning or AI has a confidence metric. If you look at image analysis, language to text, text to language, OCR, etc. the models will output a confidence level for everything, at an API layer, on a scale of 0.00 to 100. The LLMs are basically the same. The problem is that the LLMs are not allowed to tell you that they are never 100% confident. They are being branded as all-powerful and certain by the companies that make them. A Google search has always felt like it's 100%. It indexes the web, you input words, and it finds those words 100% of the time, and based on the index of sites and links between sites it would find the most relevant sites for those words 100% of the time. That's what we expect as consumers of things we type words into to search for answers. So Gemini, ChatGPT, Claude, etc. also have to present as 100% certain. But when it comes to machine learning it is basically never 100% certain in practice. This is why when it gives an answer to anything you can say "that's not right, try again" and it will give you another definitive answer. So if it's incredibly not certain it will just predict-the-next-word for a legal brief and cite things that do not exist. Because it has to give you an output no matter what. It literally cannot say "I do not know". It should say "I do not know", but it is not capable of it...because money.
Source: youtube · Topic: AI Moral Status · Posted: 2025-11-01T11:4… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugyrzk2cQfXt5FFliip4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwgXZH2zAXLl3HFHa94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxsEy1n1ttwFI4RnfZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyiZD7RbpOnRQA7kvl4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzBK6MosUUXVxtHRH14AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwOK9HzyWjRRH3J_mZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxkFozAiYR9ktL-9dl4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzgKH-TnpEJqF54-gV4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyHmsJ0t-pi058GgB54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyfOMyatL6h5Cb-A754AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}
]
```
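The raw response is a JSON array with one object per coded comment, each carrying the four coding dimensions shown in the table above. A minimal sketch of how such output could be parsed and validated before use; the allowed value sets below are assumptions inferred from the values visible on this page, not the project's full codebook:

```python
import json

# Allowed values per coding dimension. ASSUMPTION: inferred from the
# values visible on this page, not taken from an official codebook.
SCHEMA = {
    "responsibility": {"ai_itself", "company", "government", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"indifference", "resignation", "mixed", "approval", "outrage"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coded comments) into a
    dict keyed by comment ID, rejecting any out-of-schema value."""
    coded = {}
    for rec in json.loads(raw):
        comment_id = rec["id"]
        for dim, allowed in SCHEMA.items():
            if rec[dim] not in allowed:
                raise ValueError(f"{comment_id}: unexpected {dim} value {rec[dim]!r}")
        coded[comment_id] = {dim: rec[dim] for dim in SCHEMA}
    return coded

# Example using the first record from the response above.
raw = ('[{"id":"ytc_Ugyrzk2cQfXt5FFliip4AaABAg","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"none","emotion":"indifference"}]')
codes = parse_codes(raw)
print(codes["ytc_Ugyrzk2cQfXt5FFliip4AaABAg"]["emotion"])  # indifference
```

Validating against a fixed schema catches the common failure mode where the model invents a category label that was never in the prompt, so bad rows fail loudly instead of silently skewing counts.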