Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Chatgpt isn't smart, Chatgpt/LLM's are nonsense-by-default, useful output is a SIDE EFFECT.
"may make mistakes" in the disclaimers is because MISTAKES ARE THE MAIN FEATURE.
You cannot get determinism from a probabilistic system.
It doesn't even really do all the "smart" things they hype it to do.
"smart" isn't even a good statement because it's still baking in the rhetorical/sentimental idea that the only tool we have or should use is one that compares math to a human in order to replace humans.
Even devolving into analogies of training LLMs being like "growing an organism" plants a very wrong, insidious idea.
And that's the dumb af rhetoric game being played, whose main goal has become to boost overvalued stocks while the floor falls for a long line of reasons, not just "AI".
Useful OUTPUT is a side-effect, output != smart,
Chatgpt/LLM's are probabilistic nonsense-by-default.
Nonsense is the core feature; most everything else is illusory, something we have to force to happen.
nonsense is NOT the side effect, cohesive useful output is a side effect.
The biggest lie is "AI" (probabilistic LLMs) have understanding, or are "reasoning" or the bevy of other anthropomorphic sentiments, to hype services by slapping words in the UI and the marketing; and yes even stemming from researchers because they need marketable paper titles to get funding.
It's perverse how bad our language is in helping us mislead ourselves.
The illusion of useful outputs is because a ton of money and human time is burned to minimize the nonsense default.
Probabilistic and determinism are different words for a reason.
Saying an LLM is "smart" because it randomly pulls from a corpus of human knowledge is like saying a pile of shit is delicious because it's carbon atoms shaped & textured like a cake.
youtube
AI Moral Status
2025-10-30T21:3…
♥ 3
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgzrmdAGaBxHu3fE2od4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugyh9VyDP4iVV4TeNBB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz0Re-k0YctHhspmCR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyU_k2lO_vHRhcHj_h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzmjL-k5k3XIV8Io2x4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyS1AlKfeyyTFQg8YN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwiJD32RVEZUWYMVH14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxMf-EdlaHrsKhZwep4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzmiJxClhPU4ivMYwp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugyi0OVPnLvo5kXdA8B4AaABAg","responsibility":"company","reasoning":"mixed","policy":"regulate","emotion":"outrage"}
]
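A raw batch response like the one above can be parsed and validated before the codes are stored. The sketch below is a minimal, hypothetical consumer of that JSON: the allowed value sets are inferred only from the records visible here, not from the actual codebook, so they are assumptions to adjust.

```python
import json

# Allowed values per dimension, inferred from the visible records.
# ASSUMPTION: the real codebook may define more categories.
SCHEMA = {
    "responsibility": {"developer", "company", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"none", "regulate"},
    "emotion": {"outrage", "indifference", "fear", "approval", "resignation"},
}

def parse_raw_response(raw: str) -> dict:
    """Parse a raw batch response into {comment_id: codes}, rejecting
    any record whose value falls outside the expected coding scheme."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        cid = rec["id"]
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim}={rec.get(dim)!r}")
        coded[cid] = {dim: rec[dim] for dim in SCHEMA}
    return coded
```

Keyed by comment ID, the result supports the same lookup the page offers: `parse_raw_response(raw)["ytc_UgzrmdAGaBxHu3fE2od4AaABAg"]` would return that comment's four coded dimensions.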