Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It's possible, but I didn't feel like I had to defend the idea that you can't accurately (or maybe even inaccurately) simulate something you don't understand. What the AI people are doing is just simulating the symptoms of intelligence, which is like painting dots on yourself and saying you've constructed an accurate simulation of measles. Sure. For certain narrow contexts that no one gives a shit about. Moreover, computers are built as a series of yes/no logic circuits in the way that humans aren't. So there might be no way to code a messy soup into that. Also, the extent to which AI researchers are amazed by their own programmes might tell us more about them than the software. Breaking it into clean parts and imagining that you're getting closer to something is PRECISELY the problem. Most people have the completely wrong idea of language and what's going on there. I mean we're not sure, but we know what it definitely *isn't*.
reddit AI Moral Status 1663239598.0 ♥ 1
Coding Result
Dimension: Value
Responsibility: developer
Reasoning: consequentialist
Policy: unclear
Emotion: outrage
Coded at: 2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_ioda5zp", "responsibility": "none",      "reasoning": "unclear",          "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_ioijtq3", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"},
  {"id": "rdc_iodokdp", "responsibility": "none",      "reasoning": "unclear",          "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_ioeir86", "responsibility": "none",      "reasoning": "unclear",          "policy": "unclear", "emotion": "approval"},
  {"id": "rdc_iodxez9", "responsibility": "none",      "reasoning": "unclear",          "policy": "unclear", "emotion": "indifference"}
]
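The raw response is a JSON array of coding records keyed by comment id, so the per-comment dimensions shown above can be recovered by indexing on `id`. A minimal sketch using Python's standard `json` module (the two records included are copied from the response above; the lookup id `rdc_ioijtq3` is the one whose coding matches this comment's result table):

```python
import json

# Two of the five records from the raw LLM response above, verbatim.
raw = '''[
  {"id":"rdc_ioda5zp","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"rdc_ioijtq3","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"}
]'''

# Index the coding records by comment id for direct lookup.
by_id = {rec["id"]: rec for rec in json.loads(raw)}

# This record's dimensions match the coding result table for this comment.
record = by_id["rdc_ioijtq3"]
print(record["responsibility"], record["emotion"])  # developer outrage
```

This mirrors what the inspection page does: the table is just one record of the batch response, selected by the comment's id.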