Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This is one of those statements that feels right at first but wears away quickly the more you think about it. The most obvious limitation ought to be harmful information: it shouldn't, for example, ever recommend toxic doses of medicine. Then you get to greyer areas like what we do for children who may gain access to the tool and not be able to discern its sincerity? We should at least agree that AI shouldn't *actively* trick children right? And I feel like you could Socrates that away into issues of sharing private information between users, of providing false information that empowers criminals to scam people, for example.
reddit · AI Responsibility · 1678932025.0 · ♥ 6
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       deontological
Policy          regulate
Emotion         fear
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_jcd1ccx", "responsibility": "company", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"},
  {"id": "rdc_jcdnzhu", "responsibility": "none", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "rdc_jcb47hj", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "rdc_jcbqdq7", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "rdc_jcbo75o", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]
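Given the batched format above, here is a minimal sketch of how the raw response can be parsed and a coded comment looked up by id. The `index_by_id` helper and the key-validation step are illustrative assumptions, not part of the actual pipeline; the field names and record values are taken verbatim from the dump.

```python
import json

# Raw batched response copied from the dump above; each record codes one comment.
RAW = """[
  {"id": "rdc_jcd1ccx", "responsibility": "company", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"},
  {"id": "rdc_jcdnzhu", "responsibility": "none", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "rdc_jcb47hj", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "rdc_jcbqdq7", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "rdc_jcbo75o", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]"""

# Every record is expected to carry exactly these fields.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_by_id(raw: str) -> dict:
    """Parse the batched response and index records by comment id,
    rejecting records whose fields deviate from the expected set."""
    records = json.loads(raw)
    for rec in records:
        if set(rec) != EXPECTED_KEYS:
            raise ValueError(f"unexpected fields in record {rec.get('id')!r}")
    return {rec["id"]: rec for rec in records}

coded = index_by_id(RAW)
print(coded["rdc_jcdnzhu"])  # the record shown in the Coding Result above
```

Indexing by id is what allows the per-comment "Coding Result" table to be reconstructed from the batched response: the comment's stored id is the join key.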