Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
No, what you are saying makes no sense for many reasons, so I will get straight to the issue. As an AI platform grows in user count, there is mounting pressure from the company to minimize the amount of compute spent on inference. How does this look? Well, it takes the form of smaller, quantized models being served to the masses that masquerade as their predecessor. Whatever name the AI company uses is NOT what they give you after the first phase of the model's rollout. It's a basic bait and switch: roll out your SOTA model, get everyone using and talking about it to generate good PR, then after a few weeks or a month or two, swap out that model for a smaller quantized version. It's literally that simple, no conspiracy theories or any other nonsense. For more evidence of this pattern, just look around the various AI subreddits like /bard for the Gemini 2.5 Pro swap-out, or any number of other bait-and-switch shenanigans throughout history...
Source: reddit · AI Harm Incident · 1747010904.0 · ♥ 8
Coding Result
Dimension        Value
Responsibility   company
Reasoning        consequentialist
Policy           unclear
Emotion          indifference
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_mrv267f", "responsibility": "company", "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"},
  {"id": "rdc_mrut4mz", "responsibility": "company", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "rdc_mru7bs2", "responsibility": "company", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_mrum80h", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_mrvvwd5", "responsibility": "company", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"}
]
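The Coding Result table above is one record from this batch response. A minimal sketch of how the raw JSON could be parsed and a single comment's codes looked up by id; note that which id belongs to this particular comment is not stated in the output, so pairing it with "rdc_mru7bs2" (whose values match the table) is an assumption:

```python
import json

# Raw LLM response for this coding batch, copied verbatim from above.
raw = """[
  {"id": "rdc_mrv267f", "responsibility": "company", "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"},
  {"id": "rdc_mrut4mz", "responsibility": "company", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "rdc_mru7bs2", "responsibility": "company", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_mrum80h", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_mrvvwd5", "responsibility": "company", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"}
]"""

records = json.loads(raw)

# Index the batch by comment id so one comment's codes can be looked up directly.
by_id = {r["id"]: r for r in records}

# Assumed id for this comment: its values match the Coding Result table above.
rec = by_id["rdc_mru7bs2"]
for dim in ("responsibility", "reasoning", "policy", "emotion"):
    print(f"{dim}: {rec[dim]}")
```

This prints the four coded dimensions for the selected record, mirroring the table layout.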