Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
That PR campaign will be the next serious war the US is involved in, particularly if it's a war against a near-peer adversary and not going great (the most likely case being a US-China conflict over Taiwan). I think the US will develop and distribute fully autonomous weapons, but not turn them on until some crisis happens which massively swings public opinion (similar to the post-9/11 fervor) and legitimizes their use both for that conflict and all future ones. In the meantime, the goal is to make them available for use at a moment's notice, similar to how we already have thousands of nukes waiting on standby. The inability to prove or disprove whether an adversary uses similar weapons (compared to, say, WMD use) will make it easier still to claim the enemy has used them first, whether it's true or not, so we are only responding "proportionally". Faced with a choice between a hypothetical future loss to AI and a likely, imminent loss to an enemy in an ongoing war, the public will support their use.
reddit · AI Responsibility · 1700987835.0 · ♥ 3
Coding Result
Dimension       Value
Responsibility  government
Reasoning       consequentialist
Policy          none
Emotion         fear
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_katap8i", "responsibility": "government", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "rdc_kars0gr", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "rdc_kasm8un", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "rdc_kar76r1", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "mixed"},
  {"id": "rdc_kardxxk", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
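A minimal sketch of turning the raw LLM response into the per-dimension coding result shown above. The field names and record ids come from the response itself; the `coding_for` helper is hypothetical, and the raw string is truncated to two entries for brevity.

```python
import json

# Raw LLM response, verbatim from the record above (truncated to two entries).
raw = (
    '[{"id":"rdc_katap8i","responsibility":"government",'
    '"reasoning":"consequentialist","policy":"none","emotion":"fear"},'
    '{"id":"rdc_kars0gr","responsibility":"ai_itself",'
    '"reasoning":"deontological","policy":"ban","emotion":"outrage"}]'
)

def coding_for(raw_response: str, record_id: str) -> dict:
    """Parse the JSON array the model returned and look up one record by id."""
    records = json.loads(raw_response)
    return next(r for r in records if r["id"] == record_id)

result = coding_for(raw, "rdc_katap8i")
print(result["responsibility"], result["emotion"])  # government fear
```

The model codes a batch of comments per call, so the response is an array; matching on `id` recovers the single record displayed in the Coding Result table.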