Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
To me, solving alignment means the birth of Corporate-Slave-AGIs. And the weight of alignment will thus fall on the corporations themselves. What I'm getting at is that if you align the AI but don't align the controller of the AI, it might as well not be aligned. Sure the chance of human extinction goes down in the corporate-slave-agi route... But some fates can be worse than extinction...
Source: reddit · Topic: AI Moral Status · Timestamp: 1738005642.0 · ♥ 412
Coding Result
Dimension       Value
Responsibility  company
Reasoning       deontological
Policy          regulate
Emotion         fear
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[{"id":"rdc_m9j33ec","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},{"id":"rdc_m9i4odk","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},{"id":"rdc_m9im9g4","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},{"id":"rdc_m9jphet","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"},{"id":"rdc_m9ihrce","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]
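The model codes comments in batches and returns one JSON array for all of them, so the entry for a given comment has to be looked up by its id. A minimal sketch of that lookup in Python, using the raw response above (the id rdc_m9i4odk is assumed to belong to this comment, since its fields match the coding result shown):

```python
import json

# The raw batch response exactly as returned by the model (copied from above).
raw = (
    '[{"id":"rdc_m9j33ec","responsibility":"none","reasoning":"consequentialist",'
    '"policy":"none","emotion":"indifference"},'
    '{"id":"rdc_m9i4odk","responsibility":"company","reasoning":"deontological",'
    '"policy":"regulate","emotion":"fear"},'
    '{"id":"rdc_m9im9g4","responsibility":"none","reasoning":"unclear",'
    '"policy":"none","emotion":"outrage"},'
    '{"id":"rdc_m9jphet","responsibility":"company","reasoning":"virtue",'
    '"policy":"regulate","emotion":"outrage"},'
    '{"id":"rdc_m9ihrce","responsibility":"ai_itself","reasoning":"consequentialist",'
    '"policy":"unclear","emotion":"fear"}]'
)

records = json.loads(raw)

# Find the record for this comment by id; raises StopIteration if absent.
coded = next(r for r in records if r["id"] == "rdc_m9i4odk")
print(coded["responsibility"], coded["reasoning"], coded["policy"], coded["emotion"])
```

Running this prints the same four dimension values shown in the Coding Result table (company deontological regulate fear).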