Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This truly is the parallel. Among the lessons we should have learned by now . . .

- In war, often one side loses. Even the best-made cleanup plans don't become actions when you no longer have control over the trapped territory.
- In war, generals are routinely dishonest about consequences in search of approval for immediate actions. The viability of undoing a mine field or a killer robot swarm may be nothing like what is purported on deployment.
- Warzones often generate regional poverty. Minefields are especially tragic and deadly when area scavengers attempt to salvage scrap from the munitions. Killer robots would pose the same problem.

Land mines seem like a good idea at first glance, and under fire it is understandable that an ordinary soldier would act without thinking about long term consequences. Officers and civilian overseers should be held to a higher standard. Given even a little bit of actual competence, they would uniformly oppose the manufacture of autonomous killing machines while also joining the effort to rid the world of land mines.
reddit AI Governance 1438030818.0 ♥ 3
Coding Result
Dimension        Value
Responsibility   government
Reasoning        deontological
Policy           regulate
Emotion          outrage
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id":"rdc_cthpwz8","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"rdc_cthvac3","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"rdc_cti7g1z","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"rdc_cthrdb6","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"approval"},
  {"id":"rdc_cthqeqw","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
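Because the model returns one JSON array covering a whole batch of comments, inspecting the coding for a single comment means looking up its entry by `id`. A minimal sketch of that lookup, using two entries taken verbatim from the raw response above (the variable names are illustrative, not part of any real pipeline):

```python
import json

# Raw LLM response: a JSON array with one coding object per comment id.
raw = '''[
  {"id":"rdc_cthpwz8","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"rdc_cti7g1z","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]'''

# Index the batch by id, then pull out the coding for the comment shown above.
codings = {entry["id"]: entry for entry in json.loads(raw)}
coding = codings["rdc_cti7g1z"]
print(coding["policy"])   # -> regulate
print(coding["emotion"])  # -> outrage
```

This mirrors the "Coding Result" table: the `rdc_cti7g1z` entry in the raw array is the source of the government / deontological / regulate / outrage values displayed for this comment.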