Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI hallucination is not about intent. The LLM is not being duplicitous. The problem is that all machine learning or AI has a confidence metric. If you look at image analysis, speech-to-text, text-to-speech, OCR, etc., the models output a confidence level for everything at the API layer, on a scale of 0.00 to 100. The LLMs are basically the same.

The problem is that the LLMs are not allowed to tell you that they are never 100% confident. They are being branded as all-powerful and certain by the companies that make them. A Google search has always felt like it's 100%: it indexes the web, you input words, and it finds those words 100% of the time, and based on the index of sites and the links between sites it finds the most relevant sites for those words 100% of the time. That's what we expect as consumers of things we type words into to search for answers. So Gemini, ChatGPT, Claude, etc. also have to present as 100% certain.

But machine learning is basically never 100% certain in practice. This is why, when it gives an answer to anything, you can say "that's not right, try again" and it will give you another definitive answer. So if it's incredibly uncertain it will just predict-the-next-word for a legal brief and cite things that do not exist, because it has to give you an output no matter what. It literally cannot say "I do not know". It should say "I do not know", but it is not capable of it... because money.
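The comment's core mechanical claim, that a model attaches a probability to every output and that this probability is essentially never exactly 100%, is easy to illustrate. Below is a minimal, self-contained Python sketch of that idea: a softmax turns raw next-token scores into a probability distribution whose top entry is the model's "confidence". The token scores are invented for illustration; no real model or API is involved.

    # Minimal sketch of the confidence idea in the comment above: a language
    # model scores every candidate next token, and a softmax turns those
    # scores into probabilities. The top probability is the model's
    # "confidence", and it is essentially never 1.0. The logits below are
    # made-up illustrative values, not output from any real model.
    import math

    def softmax(logits):
        """Convert raw scores into a probability distribution."""
        m = max(logits.values())
        exps = {tok: math.exp(v - m) for tok, v in logits.items()}
        total = sum(exps.values())
        return {tok: e / total for tok, e in exps.items()}

    # Hypothetical next-token scores after "The capital of France is"
    logits = {"Paris": 9.1, "Lyon": 4.3, "London": 3.8, "I don't know": 0.2}
    probs = softmax(logits)
    best = max(probs, key=probs.get)
    print(f"{best}: {probs[best]:.3f}")  # Paris: ~0.987 -- confident, not 100%

Note that sampling still emits a token even when the whole distribution is flat, which is the comment's point: the decoding step always produces an output, regardless of how low the top probability is.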
Source: YouTube · Video: AI Moral Status · Posted: 2025-11-01T11:4… · ♥ 1
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
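The four coding dimensions above, plus the comment id, form a small fixed schema, and records like this one can be checked mechanically. Here is a hypothetical Python validator; the permitted values are inferred from the labels visible on this page and in the raw response below, not from any published codebook, so treat them as assumptions.

    # Hypothetical validator for one coded record. The allowed values are
    # inferred from labels visible on this page (e.g. "distributed",
    # "deontological", "regulate", "outrage"); the real codebook may differ.
    ALLOWED = {
        "responsibility": {"ai_itself", "company", "government", "distributed", "none"},
        "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
        "policy": {"regulate", "liability", "none"},
        "emotion": {"indifference", "resignation", "mixed", "approval", "outrage"},
    }

    def validate(record: dict) -> list[str]:
        """Return a list of problems; an empty list means the record passes."""
        problems = []
        if not record.get("id", "").startswith("ytc_"):
            problems.append("id missing or not a YouTube comment id")
        for dim, allowed in ALLOWED.items():
            if record.get(dim) not in allowed:
                problems.append(f"{dim}={record.get(dim)!r} not in {sorted(allowed)}")
        return problems

    print(validate({"id": "ytc_Ugyrzk2cQfXt5FFliip4AaABAg",
                    "responsibility": "ai_itself",
                    "reasoning": "consequentialist",
                    "policy": "none",
                    "emotion": "indifference"}))  # -> []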
Raw LLM Response
[ {"id":"ytc_Ugyrzk2cQfXt5FFliip4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwgXZH2zAXLl3HFHa94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"}, {"id":"ytc_UgxsEy1n1ttwFI4RnfZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyiZD7RbpOnRQA7kvl4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_UgzBK6MosUUXVxtHRH14AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"approval"}, {"id":"ytc_UgwOK9HzyWjRRH3J_mZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgxkFozAiYR9ktL-9dl4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgzgKH-TnpEJqF54-gV4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgyHmsJ0t-pi058GgB54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyfOMyatL6h5Cb-A754AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"} ]