Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "Just have AI generate the image, dont tell anyone and then paint it. Win award. …" (ytc_UgwCtqmRn…)
- "You know on some level I do feel bad for the gen A.I. bros. I'm no pro artist or…" (ytc_UgygnbRpS…)
- "Hello / im a little late to the party i apologize / first i have to state that im n…" (ytc_UgwZGsyl1…)
- "Barely. A few events having different outcomes and the northern hemisphere woul…" (rdc_dl018vi)
- "Maybe AI will cure cancer and many other diseases and viruses. AI will help peop…" (ytc_UgxfjausL…)
- "I get the fear for AI, but I just realized that this just seems like a distracti…" (ytc_Ugy0cIVZF…)
- "We still don't have a mechanistic understanding of consciousness (350+ theories,…" (ytc_UgwZBIHr1…)
- "Why do these people think that the goal of AI is to think like a human? Oh... Be…" (ytc_UgyR6QG6L…)
Comment
The hallucination issue (circa 17%) will never really be solved at scale.
In order to refine a RAG model to an acceptably low hallucination rate you need to narrow its scope, making it less general by definition, making it less usable for questions not related to its training data.
Statistically impossible for a large-scale LLM to remove hallucinations because it is incapable of determining what is true, so effectively you'd need constant human-in-the-loop hygiene of all 3 trillion data points.
LLMs are at a dead end in terms of significant improvement from here.
youtube · AI Responsibility · 2025-12-15T16:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
  {"id": "ytc_UgzAyB4UkzQDPD1dT5Z4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyJdQc7VpWLV7Obpax4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgyWyASXvzut5TnPQrp4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugy0gvvEv-uNvolHgHZ4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugwd7PHqixPGiU76k9t4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "frustration"},
  {"id": "ytc_UgyA-KhnhLpiJ1ZyC1h4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugz_D2Nnqh5G0siPE5B4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgwFJbtsw0d_mVbzX5x4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwGvMj1O2A0X7sefx54AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgxJo3hRkgUEbNPDovl4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"}
]
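The raw model output is a JSON array of per-comment codes, one object per comment ID. As a minimal sketch of how such a response could be parsed and sanity-checked, the snippet below loads the array and flags any value outside the category sets seen on this page; note that those allowed sets are inferred from the samples here, not from a published codebook, so they are an assumption:

```python
import json

# Category sets inferred from the values visible on this page
# (assumption, not an official codebook).
ALLOWED = {
    "responsibility": {"none", "ai_itself", "developer", "company", "unclear"},
    "reasoning": {"consequentialist", "unclear"},
    "policy": {"none", "unclear", "liability", "regulate"},
    "emotion": {"indifference", "mixed", "outrage", "frustration", "resignation"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: codes}, raising on
    any dimension value outside the inferred category sets."""
    out = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        codes = {k: v for k, v in rec.items() if k != "id"}
        for dim, val in codes.items():
            if val not in ALLOWED.get(dim, set()):
                raise ValueError(f"{cid}: unexpected {dim}={val!r}")
        out[cid] = codes
    return out

# Hypothetical one-element response for illustration.
raw = ('[{"id":"ytc_x","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"liability",'
       '"emotion":"outrage"}]')
codes = parse_codes(raw)
print(codes["ytc_x"]["responsibility"])  # company
```

Keying the result by comment ID makes it easy to join a code record back to the stored comment text, which is how a lookup like the one on this page would be served.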