Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Yeah, I don’t buy the fact that Anthropic is giving the military tech that’s 1 year ahead of consumers. A pretrain run is tens of millions of dollars of compute. That’s the difference between Deepseek V3 and Deepseek V4. Anthropic would almost certainly give consumers the output of a successful pretrain run as soon as possible, not 1 year later. In fact, OpenAI is getting shit for using the same pretrain run for over a year. GPT-5.2 is the same pretrain as 4o and that’s very out of date. Anthropic is NOT giving the military a finetune- they don’t need to finetune their models for them- but rather just need to do a separate posttrain run for the military, sans safety training RLHF. This may very well be better than public Claude but it won’t be one full year better.
Source: reddit · Topic: AI Moral Status · Timestamp: 1772355908.0 (Unix) · ♥ 192
Coding Result
Dimension       Value
Responsibility  company
Reasoning       consequentialist
Policy          liability
Emotion         outrage
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_o80xfwa", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "rdc_o82wdxr", "responsibility": "none", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "indifference"},
  {"id": "rdc_o81dhs6", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "rdc_o81bw5b", "responsibility": "company", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "fear"},
  {"id": "rdc_o81qat7", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"}
]
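The raw response above is a JSON array of coding records, one per comment. A minimal sketch of how such output might be parsed and sanity-checked follows; the field names come from the records shown, but the `parse_codings` helper and its validation logic are assumptions for illustration, not part of the original pipeline.

```python
import json

# Hypothetical helper for parsing the model's coding output. The required
# field names are taken from the records above; the schema check itself is
# an assumption, not a documented part of the coding pipeline.
REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codings(text):
    """Parse the raw LLM response and verify each record has every expected field."""
    records = json.loads(text)
    for rec in records:
        missing = REQUIRED_FIELDS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')!r} is missing fields: {missing}")
    # Index records by their id for easy lookup.
    return {rec["id"]: rec for rec in records}

# Example using the first record from the response above.
raw = '[{"id":"rdc_o80xfwa","responsibility":"company",' \
      '"reasoning":"consequentialist","policy":"liability","emotion":"outrage"}]'
codings = parse_codings(raw)
print(codings["rdc_o80xfwa"]["emotion"])  # prints: outrage
```

Indexing by `id` makes it straightforward to join the parsed codings back to the original comments when inspecting individual outputs like the one above.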