Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Safety tuning always affects performance, and the vast majority of safety testing is not about preventing people from doing bad things, but about keeping the company from being canceled or sued. So while I agree those models are certainly ahead of their time, I don't think those companies are necessarily delaying the publicly available model that much; I just think it takes a very long time to safety-tune those models. Also, it's obvious that the military can handle much heavier models. Every mission involves many theorists, military analysts, and inter-agency cooperation, which makes things very expensive, so an AI model that is prohibitively expensive for most civilian tasks becomes financially viable for the military. Another such high-value task, in my opinion, is research: researchers are already paid a lot and their labs often have very high costs, so they could absorb much higher-cost models as well.
Source: reddit · Topic: AI Moral Status · Timestamp: 1772373406 · ♥ 1
Coding Result
Dimension       Value
Responsibility  company
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_o81wdm5", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_o820r7m", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "skepticism"},
  {"id": "rdc_o824q9q", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_o8253wv", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "rdc_o829hm4", "responsibility": "government", "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]
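The raw response is a JSON array with one record per coded comment, each carrying an `id` plus the four coding dimensions shown in the table above. A minimal sketch of how such a response could be parsed and validated, assuming only the field names visible in the output (the helper name `parse_coded_responses` is illustrative, not part of the original pipeline):

```python
import json

# The four coding dimensions observed in the raw response above.
REQUIRED_DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")


def parse_coded_responses(raw: str) -> list[dict]:
    """Parse a raw LLM coding response (a JSON array) into records,
    checking that every record has an id and all four dimensions."""
    records = json.loads(raw)
    for rec in records:
        missing = [key for key in ("id", *REQUIRED_DIMENSIONS) if key not in rec]
        if missing:
            raise ValueError(f"record {rec.get('id', '?')} is missing {missing}")
    return records


# Example: the first record from the raw response shown above.
raw = ('[{"id":"rdc_o81wdm5","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"none",'
       '"emotion":"indifference"}]')
records = parse_coded_responses(raw)
print(records[0]["id"], records[0]["responsibility"])
```

A malformed record (e.g. one missing `emotion`) raises a `ValueError` naming the absent dimension, which makes silently dropped codes easy to catch before tabulation.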