Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Except way, way dumber. The AI in WarGames was an example of traditional reinforcement learning taken to the extreme: it could discover inconsistencies in its own understanding, design tests, acquire new knowledge, and extrapolate that knowledge to other scenarios, while operating with an overarching goal to focus its actions. A transformer model (what LLMs are based on) is fundamentally incapable of this kind of learning, no matter how big you make it. The fact that the military wants to use LLMs to decide who to kill is fucking terrifying, not least because it shows that the people running the show have no fucking idea how the technology they're using works and what its limitations are.
reddit · AI Responsibility · 1771981333.0 · ♥ 1585
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        unclear
Policy           unclear
Emotion          indifference
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_jkrf68b", "responsibility": "company",     "reasoning": "unclear",          "policy": "unclear",   "emotion": "resignation"},
  {"id": "rdc_jksdu2y", "responsibility": "unclear",     "reasoning": "unclear",          "policy": "unclear",   "emotion": "unclear"},
  {"id": "rdc_jksupl6", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate",  "emotion": "fear"},
  {"id": "rdc_o788tt3", "responsibility": "government",  "reasoning": "deontological",    "policy": "liability", "emotion": "outrage"},
  {"id": "rdc_o78s7wc", "responsibility": "unclear",     "reasoning": "unclear",          "policy": "unclear",   "emotion": "indifference"}
]
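
To cross-check a coding result against the raw response, the JSON array above can be parsed and indexed by record id. The snippet below is a minimal sketch using only the ids and field names visible in this dump; any surrounding tooling (how the raw string is fetched, which record maps to this comment) is assumed.

```python
import json

# Raw LLM response as captured above: a JSON array of per-comment codes.
raw = """[
  {"id": "rdc_jkrf68b", "responsibility": "company", "reasoning": "unclear", "policy": "unclear", "emotion": "resignation"},
  {"id": "rdc_jksdu2y", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "unclear"},
  {"id": "rdc_jksupl6", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "rdc_o788tt3", "responsibility": "government", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "rdc_o78s7wc", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"}
]"""

# Index records by id so a single comment's codes can be looked up.
codes = {row["id"]: row for row in json.loads(raw)}

# The coding result shown above (emotion: indifference) matches the
# last record in the raw response.
record = codes["rdc_o78s7wc"]
print(record["emotion"])   # indifference
print(record["policy"])    # unclear
```

This is the check the "Raw LLM Response" view exists for: confirming that the parsed dimensions in the coding table actually came from the model's output rather than a parsing fallback.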