Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Not a bad idea actually. We could replace all the actors in movies with AI too s…
ytc_UgxJcBKbi…
My limited intelligence tells me this debate is a bit 'hi faluten' as to what AI…
ytc_Ugz-V0oVp…
If Ai is allowed to recite exact texts, then photocopying a book from the librar…
ytc_UgyLh4QLw…
I'm convinced that AI art will not dare attack Ethan.
Simply because he has a kn…
ytc_UgyDPd1qP…
This stems from a poor understanding of how LLMs work. It generalizes at a resol…
ytc_UgxdyzhnP…
This is so delusional, as if one white collar job is the same as the next. No, m…
ytc_UgwbLDRyj…
If there is no user, ChatGPT doesn't exist. Language, thoughts, etc are not cons…
rdc_j5vqkk8
"Papa Franku would not be proud of you"
Idk does Joji even have a public stance…
ytc_Ugzhk_ku0…
Comment
There are many ways to catch ai hallucinations. The way I use AI, I'm always testing for hallucinations regularly. It's just the way I use it.
It might hallucinate on the first prompt, but if it sounds off and you want to double check, it'll usually correct itself on the second prompt. And if you don't catch it on the second, it should become obvious by the fourth or fifth.
The more important it is, the easier it is to consult a second AI model. You can even arrange an agentic array of experts to find a consensus, but I think that's what already basically ChatGPT and Gemini do behind the scenes.
And that's how they have already been able to decrease their frequency of hallucinations.
I feel like the concern over hallucinations are people who simply do not know how to use Ai well.
The limits of AI are with the users. You get out what you put in. So if you're putting in slop, you get slop.
I'm not an expert on this though.
| Source | Topic | Timestamp (Unix) | Score |
|---|---|---|---|
| reddit | AI Moral Status | 1765317918.0 | ♥ 6 |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_nt6usbo","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"rdc_nt6njvp","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"rdc_nt6wlv2","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"rdc_nt6qx0h","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"rdc_nt6jk1j","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}
]
```
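The raw model output is a JSON array of per-comment codings keyed by comment ID, so the "look up by comment ID" step reduces to parsing the array and scanning for a match. A minimal sketch (the function name `lookup_coding` is hypothetical, and the embedded data is abbreviated from the response shown above):

```python
import json

# Abbreviated sample of a raw LLM response, as shown above: a JSON array of
# coding objects, one per comment, each carrying the four coded dimensions.
RAW_RESPONSE = """
[
  {"id": "rdc_nt6usbo", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "none",
   "emotion": "indifference"},
  {"id": "rdc_nt6njvp", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none",
   "emotion": "approval"}
]
"""


def lookup_coding(raw: str, comment_id: str):
    """Parse the raw model output and return the coding dict for one
    comment ID, or None if that ID is not in this batch."""
    codings = json.loads(raw)
    return next((c for c in codings if c["id"] == comment_id), None)


coding = lookup_coding(RAW_RESPONSE, "rdc_nt6usbo")
print(coding["responsibility"], coding["emotion"])  # user indifference
```

Returning `None` for an unknown ID (rather than raising) mirrors how a lookup widget can simply show "no result" when an ID belongs to a different batch.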