Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
There are many ways to catch ai hallucinations. The way I use AI, I'm always testing for hallucinations regularly. It's just the way I use it.  It might hallucinate on the first prompt, but if it sounds off and you want to double check, it'll usually correct itself on the second prompt. And if you don't catch it on the second, it should become obvious by the fourth or fifth.  The more important it is, the easier it is to consult a second AI model. You can even arrange an agentic array of experts to find a consensus, but I think that's what already basically ChatGPT and Gemini do behind the scenes.  And that's how they have already been able to decrease their frequency of hallucinations. I feel like the concern over hallucinations are people who simply do not know how to use Ai well.  The limits of AI are with the users. You get out what you put in. So if you're putting in slop, you get slop.  I'm not an expert on this though.
reddit · AI Moral Status · 1765317918.0 · ♥ 6
Coding Result
Dimension        Value
Responsibility   user
Reasoning        consequentialist
Policy           none
Emotion          indifference

Coded at: 2026-04-25T08:33:43.502452
Raw LLM Response
[ {"id":"rdc_nt6usbo","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"rdc_nt6njvp","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"rdc_nt6wlv2","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"rdc_nt6qx0h","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"rdc_nt6jk1j","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"} ]