Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Good points, but...

> They're not talking about a general AI, they're talking about automated weapons systems

Say I develop an AI that can target and kill a specific pest for a farmer. Like, say those invasive species beetles that are killing trees or something. Its not a stretch that you could just plug it into an armed quadcopter and tell it to target humans. My point Im trying to make is that this, like any technology, can be used for good OR bad, and that AI thats capable of this is completely inevitable and un-stoppable from being developed.

I disagree about it changing the morality of warfare. War is already as low as you can get, morally.

> If the West developed and sold technology like this, dictatorial governments would be almost impossible to overthrow

Im not so sure. As an example: Lets say Blizzards World of Warcraft servers represent 'bad guys'. The bad guys, while they may have many of the brightest people who know the system very well since they built it, still fall to the occasional hacker. The point Im trying to make here is that 'no entity can compete with the masses when the masses put their mind to it'. Google, paypal, Microsoft, etc... masters of their domain, yet have still been bent over occasionally by anonymous. Resistance will always be possible.

> with automated sentries around their palaces

hehe these already exist and chances are youve thrown some money at the companies that build these. Samsung for example has a really cool automated anti-personnel turret system.
reddit · AI Governance · 1438006956.0 · ♥ 5
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          none
Emotion         fear
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_cthqepc", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_cthrhts", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "rdc_cthsho5", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_cti2e2i", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_cthq57j", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "approval"}
]
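To verify which entry in the batched response supplied the coded values shown in the table, the raw JSON can be parsed and indexed by `id`. A minimal sketch, assuming the batch format above (the entry `rdc_cthrhts` is the one whose four codes match this comment's result):

```python
import json

# The raw batched LLM response: one coding object per comment id,
# copied verbatim from the output above.
raw = """[
  {"id": "rdc_cthqepc", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_cthrhts", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "rdc_cthsho5", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_cti2e2i", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_cthq57j", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "approval"}
]"""

# Index the batch by comment id for direct lookup.
by_id = {rec["id"]: rec for rec in json.loads(raw)}

codes = by_id["rdc_cthrhts"]
print(codes["responsibility"], codes["reasoning"], codes["policy"], codes["emotion"])
# → developer consequentialist none fear
```

This matches the coding-result table row for row, which is how a record's final codes can be traced back to the exact model output.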