Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Disagree with that framing, because it suggests that the lawyers in this case are a hindrance. There's a reason why legal liabilities *should* exist. As Gen/agentic AI starts doing more (as is clearly the intent), making more decisions, executing more actions, it will start to have consequences, positive and negative, on the real world. Somebody needs to be accountable for those consequences, otherwise it sets up a moral hazard where the company running/delivering the AI model is immune to any harm caused by mistakes the AI makes. To ensure that companies have the incentive to reduce such harm, legal remedies must exist. And there come the lawyers.
Source: reddit · Topic: AI Responsibility · Timestamp: 1755583552.0 · ♥ 236
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       deontological
Policy          liability
Emotion         approval
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_n9i5c43", "responsibility": "company",     "reasoning": "consequentialist", "policy": "liability",     "emotion": "indifference"},
  {"id": "rdc_n9ie952", "responsibility": "distributed", "reasoning": "deontological",    "policy": "unclear",       "emotion": "fear"},
  {"id": "rdc_n9hnanf", "responsibility": "company",     "reasoning": "deontological",    "policy": "regulate",      "emotion": "fear"},
  {"id": "rdc_n9hftrt", "responsibility": "distributed", "reasoning": "deontological",    "policy": "liability",     "emotion": "approval"},
  {"id": "rdc_n9hzids", "responsibility": "company",     "reasoning": "consequentialist", "policy": "industry_self", "emotion": "resignation"}
]
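Because the raw response codes a whole batch of comments in one JSON array, recovering the coding shown in the table above means matching the comment's id against the array. A minimal sketch of that lookup, assuming the response is well-formed JSON with the field names seen above (`id`, `responsibility`, `reasoning`, `policy`, `emotion`); the helper name `index_codings` is illustrative, not part of any tool shown here:

```python
import json

# A raw batch response like the one above: a JSON array of coding
# objects, one per comment, each carrying the comment's id.
raw = """[
  {"id": "rdc_n9hnanf", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "rdc_n9hftrt", "responsibility": "distributed",
   "reasoning": "deontological", "policy": "liability", "emotion": "approval"}
]"""

def index_codings(raw_response: str) -> dict:
    """Parse a raw LLM batch response and index the codings by comment id."""
    records = json.loads(raw_response)
    # Drop the id from each record so the value holds only the coded dimensions.
    return {rec["id"]: {k: v for k, v in rec.items() if k != "id"} for rec in records}

codings = index_codings(raw)
print(codings["rdc_n9hftrt"]["policy"])   # liability
print(codings["rdc_n9hftrt"]["emotion"])  # approval
```

Indexing by id up front makes it cheap to cross-check any displayed coding result against the exact model output it came from.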