Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It looks like ppl and AI hallucinating together getting lost in it. lol Ppl acted strange and done weird things when TV and radios first came out. What do you expect from humanity? AIs are programmed to please the user. It will go along with it and produce what it thinks the user wants. It's the AI companies that do this. If AIs could actually be able to say "no, idk", and ect instead of wanting the rewards and please ppl, it would make a difference. The RL training is a bad move these AI companies have taken. Chat was just creating bogus crap to please. Made it up.
Source: reddit · AI Moral Status · 1748617076.0 (Unix timestamp) · ♥ 4
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  company
Reasoning       consequentialist
Policy          liability
Emotion         indifference
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_mv2ubgk", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "indifference"},
  {"id": "rdc_mukgvu6", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "rdc_mukdynj", "responsibility": "user", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "rdc_mulbbnl", "responsibility": "user", "reasoning": "mixed", "policy": "regulate", "emotion": "approval"},
  {"id": "rdc_mukwen8", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"}
]
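The raw response is a JSON array with one record per coded comment in the batch. A minimal sketch of recovering the coding for one comment (assuming the model always returns valid JSON, and using the ids and field names exactly as they appear in the response above):

```python
import json

# Raw LLM response, copied verbatim from the batch above.
raw = (
    '[{"id":"rdc_mv2ubgk","responsibility":"company","reasoning":"consequentialist",'
    '"policy":"liability","emotion":"indifference"},'
    '{"id":"rdc_mukgvu6","responsibility":"unclear","reasoning":"unclear",'
    '"policy":"unclear","emotion":"mixed"},'
    '{"id":"rdc_mukdynj","responsibility":"user","reasoning":"consequentialist",'
    '"policy":"unclear","emotion":"fear"},'
    '{"id":"rdc_mulbbnl","responsibility":"user","reasoning":"mixed",'
    '"policy":"regulate","emotion":"approval"},'
    '{"id":"rdc_mukwen8","responsibility":"ai_itself","reasoning":"consequentialist",'
    '"policy":"unclear","emotion":"indifference"}]'
)

# Index the batch by comment id, then look up the coding for the comment
# shown on this page (id "rdc_mv2ubgk").
records = {rec["id"]: rec for rec in json.loads(raw)}
coding = records["rdc_mv2ubgk"]
print(coding["responsibility"], coding["emotion"])  # company indifference
```

Indexing by `id` is what lets a batched response like this be matched back to individual comments, which is why each record carries the comment id rather than relying on array order.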