Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I read this as the absolute reverse. Sam Altman saw the safety team as what was holding him back from going for AGI. The board tasked with keeping AI safe tried to fire him (the assumption is that he manufactured a confrontation to bring it to a head), and he won the corporate battle. Now, having won that battle, he can get rid of other impediments. This is him getting rid of guardrails, not him being convinced they are nowhere near AGI.
reddit · AI Governance · 1716139476.0 · ♥ 3
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       deontological
Policy          liability
Emotion         fear
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_l4tad0f", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",      "emotion": "indifference"},
  {"id": "rdc_l4qlm3c", "responsibility": "company",   "reasoning": "contractualist",   "policy": "regulate",  "emotion": "mixed"},
  {"id": "rdc_l4qn2sy", "responsibility": "developer", "reasoning": "virtue",           "policy": "unclear",   "emotion": "outrage"},
  {"id": "rdc_l4rdt6d", "responsibility": "developer", "reasoning": "deontological",    "policy": "liability", "emotion": "fear"},
  {"id": "rdc_l4p8b79", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",      "emotion": "indifference"}
]
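To inspect the model output for one coded comment, the raw response can be parsed as a JSON array and filtered by record id. The sketch below is a minimal example, assuming (as the coded result above suggests) that the record with id "rdc_l4rdt6d" corresponds to the comment shown; the id-to-comment mapping is an assumption, not something the tool documents here.

```python
import json

# Raw LLM response, copied verbatim from the output above: one record per
# comment in the coding batch.
raw = """[
  {"id":"rdc_l4tad0f","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"rdc_l4qlm3c","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"mixed"},
  {"id":"rdc_l4qn2sy","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"rdc_l4rdt6d","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"rdc_l4p8b79","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]"""

records = json.loads(raw)

# Pick out the record whose id matches the coded comment (assumed mapping).
coded = next(r for r in records if r["id"] == "rdc_l4rdt6d")
print(coded["responsibility"], coded["reasoning"], coded["policy"], coded["emotion"])
# → developer deontological liability fear
```

Keeping the raw response alongside the tabulated result makes it easy to audit whether the parsed dimensions actually came from the model output rather than from post-processing.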