Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
> I know ChatGPT is basically just an illusion, a large language model that gives the impression of understanding. Stupid is as stupid does.

I argue that the model does contain lots of language understanding. It's pretty obvious to me. People will say that there is no "real understanding." But they seem to define "real understanding" as understanding like humans do. OK, then that's true by definition, since it doesn't mimic a human exactly! It's like saying, sure, dogs can understand some things, but there is no "real understanding" since they don't understand the way a human does.

> consciousness is just an illusion and our brains are doing something similar with a huge language model.

(Assuming consciousness is just the brain system and consists of unconscious parts, parts such as an LLM.) How is it an illusion? Why does understanding how it works mean it is somehow less? Do you think a rainbow is "just an illusion" since we know what causes it?
Source: reddit · AI Moral Status · 1674700619.0 · ♥ 2
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[{"id":"rdc_j5wgh5r","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
 {"id":"rdc_j5wgtpz","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"rdc_j5wkj73","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
 {"id":"rdc_j5x2hec","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
 {"id":"rdc_j5wt5oo","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}]
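The raw response is a JSON array of per-comment codes keyed by "rdc_" ids. A minimal sketch of how such a batched response could be parsed and looked up by id (the helper name `index_codes` is hypothetical, not part of the original pipeline; note the model's own output ended with ")" instead of "]", which is corrected here so the string is valid JSON):

```python
import json

# Excerpt of the raw response above, with the closing ")" fixed to "]"
# so that json.loads() can parse it (an assumption about the intended format).
raw = (
    '[{"id":"rdc_j5wgh5r","responsibility":"none","reasoning":"unclear",'
    '"policy":"none","emotion":"approval"},'
    '{"id":"rdc_j5wgtpz","responsibility":"none","reasoning":"unclear",'
    '"policy":"none","emotion":"indifference"},'
    '{"id":"rdc_j5wkj73","responsibility":"none","reasoning":"unclear",'
    '"policy":"none","emotion":"fear"}]'
)

def index_codes(raw_json: str) -> dict:
    """Map each coded comment id to its dimension/value pairs."""
    records = json.loads(raw_json)
    return {r["id"]: {k: v for k, v in r.items() if k != "id"} for r in records}

codes = index_codes(raw)
print(codes["rdc_j5wgh5r"]["emotion"])  # approval
```

Indexing by id makes it easy to join a model's batch output back onto the individual comments it coded, and a `json.JSONDecodeError` from `json.loads` flags malformed responses like the unbalanced bracket above.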