Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
One reason the US military is so successful is that every soldier knows that if shit hits the fan, they just need to hang tight because the cavalry is coming. 50 guys and millions of dollars in equipment will be put at risk to save one wounded. An AI would advise against these rescue missions because it’s not worth the risk on paper. But then you have to ask yourself, what happens when your soldiers know that nobody is coming for them if something goes wrong? Will they be as driven? Will they abort the mission the minute something isn’t quite according to plan?
reddit AI Responsibility 1648685378.0 ♥ -1
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       deontological
Policy          unclear
Emotion         fear
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_i2urib5", "responsibility": "developer",  "reasoning": "deontological",    "policy": "none",    "emotion": "approval"},
  {"id": "rdc_i2rxbq2", "responsibility": "company",    "reasoning": "mixed",            "policy": "unclear", "emotion": "mixed"},
  {"id": "rdc_i2s77xm", "responsibility": "government", "reasoning": "consequentialist", "policy": "ban",     "emotion": "outrage"},
  {"id": "rdc_i2uz7ty", "responsibility": "none",       "reasoning": "consequentialist", "policy": "none",    "emotion": "resignation"},
  {"id": "rdc_i2s95y3", "responsibility": "ai_itself",  "reasoning": "deontological",    "policy": "unclear", "emotion": "fear"}
]
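A raw response like the one above can be parsed and checked before its codes are stored. The following is a minimal Python sketch, assuming the response is a JSON array of per-comment records; the allowed value sets are inferred only from the values visible above, not from a full codebook, and the function name `parse_codes` is hypothetical.

```python
import json

# Assumed coding scheme, inferred from the values shown in the raw response
# above; a real codebook may allow more values per dimension.
ALLOWED = {
    "responsibility": {"developer", "company", "government", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "mixed"},
    "policy": {"none", "unclear", "ban"},
    "emotion": {"approval", "mixed", "outrage", "resignation", "fear"},
}

def parse_codes(raw: str) -> dict:
    """Return {comment_id: codes} for each record in a raw LLM response."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        cid = rec.get("id")
        if cid is None:
            continue  # skip records the model emitted without an id
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: bad value {rec.get(dim)!r} for {dim}")
        coded[cid] = {dim: rec[dim] for dim in ALLOWED}
    return coded

# Example with the record that produced the coding result shown above:
raw = ('[{"id":"rdc_i2s95y3","responsibility":"ai_itself",'
       '"reasoning":"deontological","policy":"unclear","emotion":"fear"}]')
codes = parse_codes(raw)
```

A strict check like this surfaces malformed or off-scheme model output immediately, instead of letting an unexpected label slip silently into the coded dataset.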