Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This is (yet another) massive security breach in the making.  LLMs are plagiarism machines, inputs into the system can be "memorized" by future models.  This can easily lead to sensitive information being leaked later.
Source: reddit · AI Responsibility · 1740432232.0 · ♥ 9
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_melasmv", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "none",      "emotion": "indifference"},
  {"id": "rdc_melzqo2", "responsibility": "company",     "reasoning": "deontological",    "policy": "liability", "emotion": "outrage"},
  {"id": "rdc_mel7mmw", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate",  "emotion": "fear"},
  {"id": "rdc_melamkk", "responsibility": "government",  "reasoning": "deontological",    "policy": "regulate",  "emotion": "outrage"},
  {"id": "rdc_melas4k", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",      "emotion": "approval"}
]
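The coding result is recovered from the raw response by matching the comment's record id in the returned JSON array. A minimal sketch of that extraction step, assuming the response shape shown above (the helper name `extract_coding` is hypothetical, not part of the actual pipeline):

```python
import json

# Abbreviated copy of the raw model output above: a JSON array of
# per-comment records, each keyed by a record id.
RAW_RESPONSE = """[
  {"id": "rdc_melasmv", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_mel7mmw", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]"""

def extract_coding(raw: str, record_id: str) -> dict:
    """Parse a raw LLM response and return the coding dimensions for one record."""
    records = json.loads(raw)
    for record in records:
        if record.get("id") == record_id:
            # Drop the id itself; the remaining keys are the coded dimensions.
            return {k: v for k, v in record.items() if k != "id"}
    raise KeyError(f"no record with id {record_id!r} in response")

coding = extract_coding(RAW_RESPONSE, "rdc_mel7mmw")
print(coding)
# {'responsibility': 'distributed', 'reasoning': 'consequentialist', 'policy': 'regulate', 'emotion': 'fear'}
```

For the comment shown here, matching id `rdc_mel7mmw` yields exactly the Dimension/Value pairs in the coding result above; an unknown id raises `KeyError` rather than silently returning an empty coding.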