Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Not really. I remember having an assignment of programming a NN to recognize some symbols. And if you drew a hundred times three symbols or whatever else and if you asked it something it wasn't trained to show, it SHOULD show *I didn't recognize it*. Because the recognition error was too high, as in nothing in its memory resembled the user's query.
Source: reddit · Cross-Cultural · 1539185301.0 · ♥ 30
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          unclear
Emotion         indifference
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[{"id":"rdc_e7il629","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"rdc_e7iwd75","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},{"id":"rdc_e7jafpr","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"rdc_e7iwp81","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},{"id":"rdc_e7in6ji","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}]
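The raw response above is a JSON array of per-comment code records. A minimal sketch of how such a batch response might be parsed and sanity-checked (a Python illustration using the exact payload shown; the helper name `parse_codes` and the key check are assumptions, not part of the original pipeline):

```python
import json

# Verbatim raw LLM response from the record above.
raw = (
    '[{"id":"rdc_e7il629","responsibility":"none","reasoning":"unclear",'
    '"policy":"unclear","emotion":"indifference"},'
    '{"id":"rdc_e7iwd75","responsibility":"ai_itself","reasoning":"consequentialist",'
    '"policy":"unclear","emotion":"fear"},'
    '{"id":"rdc_e7jafpr","responsibility":"developer","reasoning":"unclear",'
    '"policy":"unclear","emotion":"indifference"},'
    '{"id":"rdc_e7iwp81","responsibility":"none","reasoning":"unclear",'
    '"policy":"unclear","emotion":"mixed"},'
    '{"id":"rdc_e7in6ji","responsibility":"none","reasoning":"unclear",'
    '"policy":"unclear","emotion":"indifference"}]'
)

# The four coding dimensions shown in the table above, plus the record id.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codes(raw_text):
    """Parse a batch coding response and index the records by comment id,
    raising if any record is missing a required dimension."""
    records = json.loads(raw_text)
    for rec in records:
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')!r} missing keys: {missing}")
    return {rec["id"]: rec for rec in records}

codes = parse_codes(raw)
print(codes["rdc_e7il629"]["emotion"])  # coded emotion for the comment shown above
```

The first record (`rdc_e7il629`) matches the coding-result table for this comment; the remaining four belong to other comments coded in the same batch.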