Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The most important part about R1 (which people are ignoring) is that it essentially outlines a straight and clear path towards AGI. They used R1-zero to train R1. Providing a way for AI to recursively make better new versions of itself. They show that it hasn't reached the sealing yet and it's way more cost effective than training from scratch. The mainstream has picked this up as "China is catching up in AI and leaving us in the dust" instead of what it should have been "AGI is not an engineering problem anymore, it's simply now a matter of implementation".
reddit · AI Moral Status · 1737826603.0 · ♥ 24
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           unclear
Emotion          approval
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_m967qai", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_m94ba1f", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_m95dgnh", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "rdc_m953yyo", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_m94f9d2", "responsibility": "none", "reasoning": "consequentialist", "policy": "unclear", "emotion": "approval"}
]
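The raw response is a JSON array of coded records, one per comment, keyed by a record id. A minimal sketch of how such a response could be parsed and a single comment's coding looked up is shown below; the helper name `coding_for` is an assumption, not part of the original pipeline, and the two records in `raw` are copied from the response above.

```python
import json

# Two records copied verbatim from the raw LLM response above,
# used here as a stand-in for the full five-record array.
raw = (
    '[{"id":"rdc_m967qai","responsibility":"none","reasoning":"unclear",'
    '"policy":"unclear","emotion":"indifference"},'
    '{"id":"rdc_m94f9d2","responsibility":"none","reasoning":"consequentialist",'
    '"policy":"unclear","emotion":"approval"}]'
)

def coding_for(raw_response: str, comment_id: str):
    """Return the coded dimensions for one comment id, or None if absent."""
    for record in json.loads(raw_response):
        if record["id"] == comment_id:
            # Drop the id so only the coding dimensions remain.
            return {k: v for k, v in record.items() if k != "id"}
    return None

print(coding_for(raw, "rdc_m94f9d2"))
# → {'responsibility': 'none', 'reasoning': 'consequentialist', 'policy': 'unclear', 'emotion': 'approval'}
```

In practice a parser like this would also validate that each dimension takes one of the allowed values (e.g. `emotion` ∈ {indifference, approval, outrage, …}) before the record is stored.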