Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or inspect one of the random samples below.
Random samples:

- ytc_UgweXA0Xt…: I think they love slugs because it is one of the easiest things to animate since…
- ytc_Ugw_GEF85…: think about it we’re only able to create these things because of the power of th…
- ytc_UgwGWgunv…: A reaction "content creator" hating on AI is the funniest bit of satire in moder…
- ytc_UgwNmljvm…: Update incoming! We found some ai that people believe are aware. And most who ga…
- ytc_UgwTP8bt4…: When an AI asks you something out of curiosity instead of asking clarification o…
- ytc_UgwGwAjE_…: Seriously? If we treat this sort of shoddy "paper" seriously, we might as well m…
- ytc_UgwivTHBh…: hes not the only man that was wrongfully convicted because of artificial intelli…
- rdc_oi02ycm: Debt is based on the ability to return it. USA is the richest because the return…
Comment
My understanding is that LLMs use a sort of algorithm or statistical analysis/text prediction to guess what the best answer/output is.
However, the issue with this is that their output is restricted to their training data/information on the web.
They cannot truly "think". They cannot use critical thinking to come up with the answer.
So they are useful for quickly summarizing the mainstream answer, and if the mainstream thinking on any given question is correct, then AI will output the correct answer.
However, the paradox is that the mainstream thinking is often wrong, especially for more complex questions. So AI will in such cases just parrot the most prevalent answer, regardless of its validity.
Some may say this can be fixed if it is programmed correctly. But wouldn't that defeat the purpose of AI? Wouldn't it then just be parroting its programmers' thoughts? Also, the question becomes who programs it? The programmers will not be experts on all topics. Even if they hire experts from different fields, the question becomes, which specific expert/expert(s) are correct/how were they chosen? This would come back to the judgement of the programmer/organization that is creating the AI, and this judgement itself is flawed/insufficient in terms of choosing the experts. So it is a logical paradox. This is why AI will never be able to match the upper bounds of human critical thinking. Remember, problems primarily exist not because the answer/solution is missing, but because those in charge lack the judgement to know who to listen to/pick.
reddit · AI Jobs · 1754675504 · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_n7k5g2s","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"rdc_n7key8u","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
{"id":"rdc_n7ky9c3","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"rdc_n7n1sz2","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"rdc_n7hfsaj","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
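The raw response above is a JSON array with one object per comment ID, which is how each row of the Coding Result table gets its values. A minimal sketch of parsing and validating such a batch, assuming this shape; the `ALLOWED` codebook below is inferred only from the values visible on this page, so the real codebook may contain more categories:

```python
import json

# Assumed codebook, reconstructed from the values seen in this document.
ALLOWED = {
    "responsibility": {"none", "company", "ai_itself", "unclear"},
    "reasoning": {"unclear", "consequentialist"},
    "policy": {"none", "unclear"},
    "emotion": {"indifference", "approval", "resignation"},
}


def parse_coding_response(raw: str) -> dict:
    """Parse a raw batch response into {comment_id: codes},
    rejecting any value outside the (assumed) codebook."""
    coded = {}
    for row in json.loads(raw):
        cid = row.pop("id")  # remaining keys are the coded dimensions
        for dim, value in row.items():
            if value not in ALLOWED.get(dim, set()):
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
        coded[cid] = row
    return coded


raw = '''[
 {"id":"rdc_n7k5g2s","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"rdc_n7key8u","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"approval"}
]'''
codes = parse_coding_response(raw)
print(codes["rdc_n7k5g2s"]["emotion"])  # indifference
```

Validating against a fixed codebook catches the common failure mode where the model invents a label outside the schema, so a bad batch fails loudly instead of silently polluting the coded table.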