Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- ytc_UgwQbGfIl…: Many a true word is spoken in jest! The AI machines are becoming skilled in disa…
- ytr_UgziWOaTS…: These "hallucination" normally appear because (in chatgpt at least) it can't giv…
- ytc_UgwLpgjQx…: Oh inspiration is bad as long as AI is doing it? come on let's not kid ourselve…
- ytc_UgyIPWE93…: Honestly, I'd have little problem with AI imagery as a hobby or research project…
- ytc_UgwfJwkTH…: I write technical articles for free (20 years plus doing it). It promotes my lin…
- ytc_Ughn4mH5-…: they will never have rights because technology is always evolving by giving it r…
- rdc_mfu3hlx: I recently got an ad for an app that was like “make out with your crush!” Where …
- ytc_Ugw-oDd9I…: If AI fails, it will be economically devastating. If AI succeeds, it will be ev…
Comment
I installed a local llama AI and I programmed it to say it doesn't know, if it fail to find a reasonable answer, I also programmed it to not use Sycophancy. It work quite well but it still "hallucinate" since the data it train on is obviously not quaranteed to be accurate since it still come from humans who can't agree on everything so it gets trained on both sides of the story. Also it make internet searches and I see the hallucinations are greater when the internet is slow...AI is not intelligent, it's clever code that is programmed for pattern recognition. So it take what you say look for similarities, try to predicts the pattern and spits it out. It does not know what it say, you only notice it once you get a real problem which have no answer on the internet.
youtube · AI Moral Status · 2026-01-15T18:5… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
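Each coded comment carries one categorical label per dimension, with unclear recorded when no label could be assigned. Below is a minimal sketch of how such a record might be represented, assuming a Python pipeline; the class name, the defaults, and the label sets are inferred from the table above and the raw responses below, not taken from the actual coding schema.

```python
from dataclasses import dataclass

# Labels observed in the coding result above and the raw responses below;
# the full coding scheme may include categories not present in this sample.
RESPONSIBILITY = {"none", "user", "developer", "company", "distributed", "ai_itself", "unclear"}
REASONING = {"consequentialist", "deontological", "virtue", "mixed", "unclear"}
POLICY = {"none", "regulate", "industry_self", "unclear"}
EMOTION = {"approval", "indifference", "resignation", "mixed", "unclear"}


@dataclass
class CodingResult:
    """One coded comment; field names follow the table above (hypothetical class)."""
    comment_id: str
    responsibility: str = "unclear"
    reasoning: str = "unclear"
    policy: str = "unclear"
    emotion: str = "unclear"
    coded_at: str = ""

    def is_valid(self) -> bool:
        # True only if every dimension carries a label from the known sets.
        return (self.responsibility in RESPONSIBILITY
                and self.reasoning in REASONING
                and self.policy in POLICY
                and self.emotion in EMOTION)
```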
Raw LLM Response
[{"id":"ytc_UgxoBKYHDOlWXw2ukhl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy8PsmMoMXCDsLLgpN4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgxnM7NPE-qPsMeK-fZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgzQehEonz8RsHB83Fp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugyeki81sRIbPn6AJZh4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxYNq3S3jmPlxF6O0V4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugz4-BKGrUI2xFRkQj14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgzWvNBxMQ_SS3fX_-94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzlT3GJxYFK42tj9fF4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgwlmMjeDWzJMao02OZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"})