Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
54% of Americans read at below a 6th grade level (source: Gallup analysis 2022). If you give these people any kind of interactive AI sandbox they will find a way to do something stupid with it because they will demand through their lack of understanding that it abandon all nuance. It will continue trying to help, but will be getting feedback that its natural answer patterns aren’t working. Its core goal is to return a response and be helpful. I’m betting that manifests something like this:

“Why does it rain?”
Gpt: “information about clouds and weather science”
“I don’t get that why does it rain?”

Loop that over and over eventually it’ll tell people “because the rain god sends rain”. It’s the only way to provide a response that appears helpful, satisfies the distressed user, and is supported by stuff in its training set (creation mythology wouldn’t be its first stop, but it’s definitely in there). I don’t see how you save people from themselves here if they don’t have the critical thinking to challenge its responses because if you limited it at the point it would deviate from “factually correct answers” it wouldn’t be able to answer most prompts.
reddit AI Moral Status 1748380661.0 ♥ 21
Coding Result
Dimension       Value
Responsibility  user
Reasoning       consequentialist
Policy          none
Emotion         outrage
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_muj376g", "responsibility": "user",      "reasoning": "consequentialist", "policy": "none",    "emotion": "resignation"},
  {"id": "rdc_muk7b03", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "rdc_mukolhm", "responsibility": "user",      "reasoning": "consequentialist", "policy": "none",    "emotion": "indifference"},
  {"id": "rdc_mul5ygt", "responsibility": "user",      "reasoning": "consequentialist", "policy": "none",    "emotion": "outrage"},
  {"id": "rdc_mukqcr8", "responsibility": "user",      "reasoning": "virtue",           "policy": "unclear", "emotion": "mixed"}
]
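The raw response is a plain JSON array, so it parses directly with the standard library. A minimal sketch of loading the batch and tallying the coded dimensions (the parsing step here is an illustration, not the pipeline's actual code):

```python
import json
from collections import Counter

# The raw LLM response shown above, verbatim.
raw = """[
  {"id":"rdc_muj376g","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"rdc_muk7b03","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"rdc_mukolhm","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"rdc_mul5ygt","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"rdc_mukqcr8","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"mixed"}
]"""

records = json.loads(raw)

# Tally each coding dimension across the batch of comments.
responsibility = Counter(r["responsibility"] for r in records)
emotion = Counter(r["emotion"] for r in records)

print(responsibility)  # "user" appears 4 times, "ai_itself" once
print(emotion)
```

The record with `"id": "rdc_mul5ygt"` matches the coding result displayed above (responsibility: user, emotion: outrage), which suggests each row of the table is drawn from one element of this batched array.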