Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This is of course a valid question, but it’s important to remember that we currently have no way to ensure that AI systems do/not do any specific thing at all. We have techniques that kind of work a little bit (which is why these systems are useful at all), but nothing that ensures specific behavior. This is the technical challenge that the superalignment team and similar efforts at other orgs are trying to solve. If we solve that, then we need to discuss what those goals and values should be, but we are so far off solving the technical problem, that it’s really not a super relevant question yet.
Source: reddit · Topic: AI Responsibility · Timestamp: 1716621429.0 · ♥ 10
Coding Result
Responsibility: none
Reasoning: unclear
Policy: none
Emotion: resignation
Coded at: 2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_nc3017r", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_nc3qyko", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "rdc_l5lkwen", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "rdc_l5l18h7", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_l5lkn1r", "responsibility": "distributed", "reasoning": "virtue", "policy": "unclear", "emotion": "mixed"}
]
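When inspecting raw responses like the one above, it can help to parse the JSON array and index the coded records by their `id`. A minimal sketch in Python, assuming the response is a well-formed JSON array with exactly the keys shown (the variable names here are illustrative, not part of any coding pipeline):

```python
import json

# A subset of the raw LLM response shown above, as a JSON string.
raw = (
    '[{"id":"rdc_nc3017r","responsibility":"none","reasoning":"unclear",'
    '"policy":"unclear","emotion":"indifference"},'
    '{"id":"rdc_l5l18h7","responsibility":"none","reasoning":"consequentialist",'
    '"policy":"none","emotion":"resignation"}]'
)

# Parse the array and build an id -> record lookup for inspection.
records = json.loads(raw)
by_id = {r["id"]: r for r in records}

print(by_id["rdc_l5l18h7"]["emotion"])  # prints "resignation"
```

Indexing by `id` makes it easy to cross-check a single comment's coded dimensions against the table view above.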