Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Respectfully, I don’t think it’s a big deal. How many people do you think actually cross reference tested multiple models on any sort of consistent basis? .01% of all users if that? Also, spoiler alert, this is a product design and UX decision. And it’s the correct decision. Their naming nomenclature, user education, etc was absolutely abhorrent. For 99% of users this is 110% the correct move. You have to understand that ChatGPT is primarily a wide user net product. It’s NOT built strictly for engineers, etc. exactly the opposite actually. It seems like they are positioning themselves to be the AI for the mom prepping meals for her kids, etc. and to those users having 7 different models with confusing names is completely non-intuitive. I would not be shocked if internal data at OpenAi showed that 95% of active monthly users exclusively used 4o with most users never even trying another model.

EDIT: Most people are shocked when they see actual user data.. it’s kind of like when you play a video game and it gives you a trophy for reaching level 2 and it shows the percentage of players that also achieved it: 28%. Like you’re telling me 72% of players that paid 60$ for this game didn’t even continue through level 2?! Now imagine the scale of users that ChatGPT has, their user adoption rate for their non-4o models has to be absolutely pitiful. Not because the models are bad, but because their product design and onboarding and continual user education is just terrible. Not only that, but it just feels bad to constantly switch models. I use LLMs all the time and even I have to remember which model does what sometimes. Now imagine someone that hardly uses AI. They might accidentally use o3 and think “Wow this must be the super old model, it’s taking so long! Back to 4o I go!”
Source: reddit · AI Responsibility · 1754629161.0 · ♥ 718
Coding Result
Dimension        Value
Responsibility   company
Reasoning        utilitarian
Policy           none
Emotion          approval
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_n7kx9iu", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_n7khauf", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "rdc_n7ke749", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "rdc_n7lskp5", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_n7jrln1", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
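When inspecting raw model output like the array above, it helps to parse it and confirm every record carries the expected coding dimensions before trusting the coded values. Below is a minimal sketch of such a check; the key names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) are taken from the response shown here, and the assumption that the output is always a well-formed JSON array is just that — an assumption, since a model can also emit malformed JSON.

```python
import json

# Required coding dimensions, taken from the raw response shown above.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}


def parse_codes(raw_response: str) -> list[dict]:
    """Parse a raw LLM coding response and verify each record's keys.

    Assumes the response is a JSON array of objects; raises if any
    record is missing a required dimension.
    """
    records = json.loads(raw_response)
    for rec in records:
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')!r} missing keys: {missing}")
    return records


# Example with a one-record response (same shape as the first record above).
raw = ('[{"id":"rdc_n7kx9iu","responsibility":"none","reasoning":"unclear",'
       '"policy":"none","emotion":"indifference"}]')
codes = parse_codes(raw)
print(codes[0]["responsibility"])  # → none
```

A check like this catches the common failure modes early: truncated JSON raises in `json.loads`, while a record that dropped a dimension is flagged by id rather than silently producing an incomplete coding row.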