Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
They can’t, but it kinda gets tricky if an autonomous drone actually makes a mistake and, e.g., targets an American ship or something like that. Now the Chinese couldn’t say „oopsie, coding error, sorry”; they would have to lie that this was a rogue pilot, but that’s kinda tricky if the pilot doesn’t exist and there’s no one to prosecute. So having or even testing these weapons would be an unnecessary liability to the owners - those in power don’t want any stupid robot to create a major international incident by mistake, so I think this agreement will actually achieve its goals. Keep in mind that world leaders are almost exclusively narcissistic control freaks (why else would you want to become a president?), so it kinda makes sense to not offload thinking to machines. If an international incident is to happen, they want to make sure it was because _they_ ordered it, not an accident.
reddit · AI Governance · 1699783757.0 (Unix timestamp) · ♥ 23
Coding Result
Dimension      | Value
---------------|-----------------
Responsibility | company
Reasoning      | consequentialist
Policy         | liability
Emotion        | fear
Coded at: 2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_k8woe3m", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "resignation"},
  {"id": "rdc_k8wtmg7", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "rdc_k8y4f22", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "resignation"},
  {"id": "rdc_k8wopbc", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_k8wmgld", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]