Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Every one of these stories has the same explanation: these things do what you train them to do. If they do something bad, it's because you gave them a training program that made that thing look like the best option. If you want them to do something else, change how you're training them. Every AI scare headline can be rephrased as "humans have their true colours exposed by a software algorithm and don't like it".
Source: reddit · AI Jobs · 1707119509.0 · ♥ 2
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           none
Emotion          approval
Coded at         2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_kozs17t", "responsibility": "ai_itself", "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_koztotq", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_kozybf6", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_kp04who", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_kp0dp30", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "outrage"}
]
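A raw response in this shape can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal Python example; the allowed values per dimension are inferred from the fields visible in this one response, not from a documented schema, so treat them as assumptions.

```python
import json

# Allowed values per coding dimension; assumed from the values observed in
# this response, not from any official codebook.
SCHEMA = {
    "responsibility": {"developer", "ai_itself", "none"},
    "reasoning": {"consequentialist", "mixed"},
    "policy": {"none"},
    "emotion": {"approval", "resignation", "indifference", "outrage"},
}

def parse_coding(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and validate each record's dimensions."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in SCHEMA.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim}: {rec.get(dim)!r}")
    return records

raw = ('[{"id":"rdc_kozybf6","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"none","emotion":"approval"}]')
records = parse_coding(raw)
print(records[0]["emotion"])  # approval
```

Validating at parse time catches the common failure mode where the model invents a label outside the codebook, so bad codes fail loudly instead of silently entering the dataset.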