Raw LLM Responses

Inspect the exact model output for each coded comment.

Comment
Chatgpt isn't smart, Chatgpt/LLM's are nonsense-by-default, useful output is a SIDE EFFECT. "may make mistakes" in the disclaimers is because MISTAKES ARE THE MAIN FEATURE. You cannot get determinism from a probabilistic system. It doesn't even really do the all the "smart" things they hype it to do. "smart" isn't even a good statement because it's still baking in the rhetorical/sentimental idea that the only tool we have or should use is by: comparing math to a humans in order to replace humans.

Even devolving into analogies of training LLMs being like "growing an organism", plants a very wrong insidious idea. And that's the dumb af rhetoric game being played whose main goal has become to boost overvalued stocks while the floor falls for a long line of reasons not just "AI".

Useful OUTPUT is a side-effect, output != smart, Chatgpt/LLM's are probabilistic nonsense-by-default. nonsense is the core feature most everything else is illusory we have to force to happen. nonsense is NOT the side effect, cohesive useful output is a side effect. The biggest lie is "AI" (probabilistic LLMs) have understanding, or are "reasoning" or the bevy of other anthropomorphic sentiments, to hype services by slapping words in the UI and the marketing; and yes even stemming from researchers because they need marketable paper titles to get funding.

It's perverse how bad our language is in helping us mislead ourselves. The illusion of useful outputs is because a ton of money and human time is burned to minimize the nonsense default. Probabilistic and determinism are different words for a reason. Saying an LLM is "smart" because it randomly pulls from a corpus of human knowledge is like saying a pile of shit is delicious because it's carbon atoms shaped & textured like a cake.
Source: youtube · AI Moral Status · 2025-10-30T21:3… · ♥ 3
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           none
Emotion          outrage
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgzrmdAGaBxHu3fE2od4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugyh9VyDP4iVV4TeNBB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz0Re-k0YctHhspmCR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyU_k2lO_vHRhcHj_h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzmjL-k5k3XIV8Io2x4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyS1AlKfeyyTFQg8YN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwiJD32RVEZUWYMVH14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxMf-EdlaHrsKhZwep4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzmiJxClhPU4ivMYwp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugyi0OVPnLvo5kXdA8B4AaABAg","responsibility":"company","reasoning":"mixed","policy":"regulate","emotion":"outrage"}
]
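A raw response like the one above can be parsed and checked before the codes are trusted. The sketch below is a minimal example of that step, assuming the allowed values for each dimension are exactly those visible in this view (the tool's real codebook may define more); `validate_raw_response` and `ALLOWED` are hypothetical names, not part of the tool.

```python
import json

# Hypothetical allowed-value sets, inferred only from the codes shown in
# this view; the actual coding schema may include additional categories.
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"none", "regulate"},
    "emotion": {"outrage", "indifference", "fear", "approval", "resignation"},
}

def validate_raw_response(raw: str) -> list:
    """Parse a raw LLM response and reject records with unknown codes."""
    records = json.loads(raw)  # raises ValueError if the model emitted non-JSON
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(
                    f"{rec.get('id')}: unexpected {dim} value {rec.get(dim)!r}"
                )
    return records

# Usage with the first record from the response above:
raw = (
    '[{"id":"ytc_UgzrmdAGaBxHu3fE2od4AaABAg","responsibility":"developer",'
    '"reasoning":"consequentialist","policy":"none","emotion":"outrage"}]'
)
records = validate_raw_response(raw)
print(records[0]["emotion"])  # outrage
```

Validating against a fixed value set at parse time catches the most common failure mode of LLM coders, namely an invented or misspelled category, before it silently enters the coded dataset.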