Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The problem with holding the user responsible is that there are so few controls on AI to predict what it will do. It is essentially automated software for most applications. It would be like making a hammer where the hammer head could disconnect and fly at anything during normal use and expecting the user to be accountable for that. We already have laws that punish malice (which do need refinement and better enforcement with AI). We need to stop pretending industries that seem to be designed to break these laws aren't an accessory to them.
reddit · AI Responsibility · 1724512947.0 · ♥ 5
Coding Result
Dimension       Value
--------------  ---------------------------
Responsibility  developer
Reasoning       consequentialist
Policy          liability
Emotion         fear
Coded at        2026-04-25T08:13:13.233606
Raw LLM Response
[
  {"id": "rdc_liyju5z", "responsibility": "unclear",    "reasoning": "unclear",          "policy": "unclear",   "emotion": "outrage"},
  {"id": "rdc_lj9148h", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate",  "emotion": "outrage"},
  {"id": "rdc_lj9vb52", "responsibility": "none",       "reasoning": "unclear",          "policy": "unclear",   "emotion": "resignation"},
  {"id": "rdc_ljps9we", "responsibility": "developer",  "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "rdc_lkzhu0u", "responsibility": "unclear",    "reasoning": "unclear",          "policy": "unclear",   "emotion": "indifference"}
]
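Since the model returns one JSON array covering a whole batch of comments, looking up the coding for a single comment means indexing the array by `id`. A minimal sketch (the variable names are illustrative; only the response format above is assumed):

```python
import json

# Raw model output: a JSON array with one coding object per comment id.
raw = '''[
  {"id": "rdc_ljps9we", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "rdc_lkzhu0u", "responsibility": "unclear",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"}
]'''

# Index the codings by comment id for constant-time lookup.
codings = {entry["id"]: entry for entry in json.loads(raw)}

coding = codings["rdc_ljps9we"]
print(coding["responsibility"])  # developer
print(coding["emotion"])         # fear
```

The same lookup yields the "Coding Result" table above for the displayed comment (`rdc_ljps9we`); any id missing from the batch would raise a `KeyError`, which is a useful signal that the model dropped a comment from its response.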