Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Only if you are convinced that AGI is dangerous. Sam Altman is a human and so shares the incentive to not build a human exterminating machine. There is a real chance that the leadership is seeing that AI safety is coming along fine but the safety team is convinced it isn't. One is tempted to say that the safety team would know best, but remember that anyone on the team is self selected to believe that AI is unsafe by default.
reddit AI Responsibility 1716616774.0 ♥ -20
Coding Result
Dimension       Value
Responsibility  none
Reasoning       mixed
Policy          none
Emotion         indifference
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id":"rdc_l5ml63o","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
  {"id":"rdc_l5kupjr","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"rdc_l5mjn7g","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"rdc_l5kz5sx","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"fear"},
  {"id":"rdc_l5kktle","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}
]
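The raw response is a JSON array of coded records, one per comment, keyed by a record id. A minimal sketch of how such a response could be parsed and matched back to a single comment's coding result (the function name `parse_coded_records` is an illustration, not part of the pipeline; the two records below are copied from the raw response above):

```python
import json

# Two of the coded records copied verbatim from the raw LLM response above.
raw = (
    '[{"id":"rdc_l5ml63o","responsibility":"company","reasoning":"consequentialist",'
    '"policy":"liability","emotion":"mixed"},'
    '{"id":"rdc_l5kupjr","responsibility":"none","reasoning":"mixed",'
    '"policy":"none","emotion":"indifference"}]'
)

def parse_coded_records(raw: str) -> dict:
    """Parse the raw model output and index the coded records by id."""
    return {rec["id"]: rec for rec in json.loads(raw)}

records = parse_coded_records(raw)
# The record for rdc_l5kupjr matches the Coding Result table above.
print(records["rdc_l5kupjr"]["emotion"])  # indifference
```

In batch mode the model returns several records at once, so indexing by id is what lets each comment's row in the Coding Result table be recovered from the combined response.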