Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It's actually worse in automated workflows. When your data pipeline's AI step synthesizes from multiple sources and passes that synthesis as input to the next step, there's no human doing a spot-check if the synthesis was wrong — errors cascade forward silently. At least individual users sometimes notice when something feels off.
reddit · Viral AI Reaction · 1776734528.0 · ♥ 2
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         fear
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_ohd65dx", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "rdc_oh2ey69", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "rdc_oh2g0dp", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_oh2gyo7", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "rdc_oh2slxl", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]
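The coding result shown above is one record extracted from this batch response. A minimal sketch of that extraction, assuming the raw response is valid JSON with one object per coded comment (the helper name `codes_for` and the fixed dimension list are illustrative, not from the source tool):

```python
import json

# Raw batch response as returned by the model (abbreviated to two records).
raw = (
    '[ {"id":"rdc_ohd65dx","responsibility":"none","reasoning":"consequentialist",'
    '"policy":"none","emotion":"fear"},'
    ' {"id":"rdc_oh2ey69","responsibility":"none","reasoning":"consequentialist",'
    '"policy":"none","emotion":"resignation"} ]'
)

# The four coding dimensions displayed in the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def codes_for(raw_response: str, comment_id: str) -> dict:
    """Return the dimension -> value codes for one comment id."""
    for record in json.loads(raw_response):
        if record.get("id") == comment_id:
            return {d: record.get(d) for d in DIMENSIONS}
    raise KeyError(comment_id)

print(codes_for(raw, "rdc_ohd65dx"))
```

Matching on `id` rather than list position keeps the lookup correct even if the model reorders or drops records in the batch.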