Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Actually, I see plenty of ways they could be exploited in favor of those protesting. Computer programs follow rigid rules: if our hypothetical robocops were programmed to become hostile only when action X took place, people could do whatever else they wanted and not get touched, because the police program would only react to that specific action. The same goes for military drones in combat zones - I can see an ISIS bomber casually walking in front of an automated tank, placing a disguised IED on the ground, walking away, and blowing everything up. As long as you know what will trigger a hostile reaction in a war machine that is not set to simply kill on sight, you can work around that trigger with 100% certainty that you won't be attacked until you make that particular move. By knowing its engagement rules, you can literally read the machine's "mind" and know what decision it will take. Imagine if the average jihadist could read US soldiers' minds.
reddit · AI Governance · 2015-04-09 (Unix 1428583849) · ♥ 6
Coding Result
Dimension       Value
--------------  ---------------------------
Responsibility  none
Reasoning       mixed
Policy          none
Emotion         approval
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_cq6fu0h", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_cq6gz59", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_cq6hhhi", "responsibility": "none", "reasoning": "deontological", "policy": "ban", "emotion": "outrage"},
  {"id": "rdc_cq6faj5", "responsibility": "none", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "rdc_cq6grck", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"}
]
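The raw response is a batch: one JSON object per coded comment, keyed by `id`, and the table above reflects the entry whose `id` matches this comment. A minimal Python sketch of extracting one comment's coding from such a batch (the helper name `coding_for` and the truncated example batch are illustrative, not part of the pipeline):

```python
import json

# Abbreviated batch response in the same shape as the raw output above.
raw = '''[
  {"id": "rdc_cq6fu0h", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "rdc_cq6grck", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"}
]'''

def coding_for(batch_json, comment_id):
    """Return the coding dict for one comment id, or None if absent."""
    for entry in json.loads(batch_json):
        if entry.get("id") == comment_id:
            return entry
    return None

coding = coding_for(raw, "rdc_cq6grck")
print(coding["emotion"])  # approval
```

A linear scan is enough at batch sizes like this; for large batches you would build a dict keyed by `id` once and look up from that.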