Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
5% are using the tech correctly; LLMs are fantastic at *transformative* work. "Give me a one-page summary of this project proposal, the audience is C Suite so be light on the technical details." "Rewrite this email so I don't sound like an asshole, but try to stick to the original vocabulary and writing style." "Analyze each customer review and flag the ones that include swearing, threats (both veiled and open), and names of people. These will be reviewed manually, so it's better to be overly cautious." "What can be made more efficient about this code/database design? Implement those improvements." As a software engineer, I have investigated this tech in depth and find it occasionally useful (mostly the auto-complete). For smaller generative tasks (here are the requirements, make feature X), it can do pretty well too, but people tend to be overconfident in the "all knowing" machine and feed it a large number of requirements. It'll shit the bed, and unless you already know what you're doing, you won't catch its mistakes.
reddit · AI Responsibility · 1755606140.0 · ♥ 10
Coding Result
Dimension      | Value
Responsibility | unclear
Reasoning      | unclear
Policy         | unclear
Emotion        | unclear
Coded at       | 2026-04-25T08:33:43.502452
Raw LLM Response
[{"id":"rdc_n9hx9gf","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"rdc_n9inv0l","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"rdc_n9hp64x","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"}, {"id":"rdc_n9imk09","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"rdc_n9ighu8","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"resignation"})
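The Coding Result above reads "unclear" on every dimension even though the raw response contains codes. One plausible cause: the response is not valid JSON — it opens with `[` but closes with `)`, so a strict parser rejects it outright. A minimal sketch of that failure mode, assuming the pipeline discards unparseable responses (`parse_coding_response` and `DIMENSIONS` are hypothetical names, not the tool's actual code):

```python
import json

# Dimensions a coding record is expected to carry
# (names taken from the Coding Result table above).
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response into per-comment records.

    Returns an empty list when the response is not valid JSON; a
    downstream step could then surface every dimension as 'unclear'.
    """
    try:
        records = json.loads(raw)
    except json.JSONDecodeError:
        return []
    # Keep only records that carry an id plus all expected dimensions.
    return [r for r in records
            if isinstance(r, dict) and "id" in r
            and all(d in r for d in DIMENSIONS)]

# A response ending in ')' instead of ']' is not valid JSON,
# so it parses to nothing:
malformed = '[{"id":"rdc_n9hx9gf","responsibility":"none"})'
assert parse_coding_response(malformed) == []

# A well-formed variant parses normally:
wellformed = ('[{"id":"rdc_n9hx9gf","responsibility":"none",'
              '"reasoning":"consequentialist","policy":"none",'
              '"emotion":"fear"}]')
assert parse_coding_response(wellformed)[0]["emotion"] == "fear"
```

A stricter pipeline might instead repair the trailing delimiter before parsing, but failing closed to "unclear" keeps bad codes out of the dataset.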