Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It’s not surprising — this isn’t a matter of brain damage but of effort. The issue with AI quality is that it tends to produce a mix of excellent work and subpar content. When you correct the AI, it often becomes less coherent instead of improving. AI has a tendency to make specific mistakes and then, when corrected, fails to remove them properly. Instead of omitting the error entirely, it rewrites the same incorrect material with a disclaimer attached.

For example: the AI writes, “Then Barbara grabbed her purse from the car.” You correct it, saying, “This does not happen.” The AI responds by rewriting: “Barbara did not grab her purse from the car.”

The problem isn’t that it misunderstood — it simply can’t let go of the false idea it generated. Instead of removing it, it tries to justify or reframe it. This behavior makes AI editing difficult, as it introduces hallucinated elements that can conflict with an essay’s plot or intent. Additionally, when prompted to correct a single sentence, AI often rewrites entire passages unless explicitly told not to. This leads to multiple, conflicting drafts and unnecessary manual editing to restore accuracy and coherence.
youtube 2025-10-27T10:3…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       consequentialist
Policy          unclear
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgxiPpVelVheSIY1oFh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxlOWrIUIhTwwQ4ra54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx5Zaphh6dm45sPQi54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgybHuI2y2o3IzkndE54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxdpNaJZXy6Q6Ac08V4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzTSVnrbjwSXCG152l4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyzfGWhZM0FjRj1gGF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgyaqpYMyZNSj6LiNJN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxWQR1qOcRS1nLkngF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgynXH3bAyHIaUJoUMR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
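The raw response is a JSON array with one object per comment id, each carrying the four coding dimensions. A minimal sketch of how such output might be parsed and tallied (the tally logic here is an assumption for illustration, not part of the actual coding pipeline; the two entries are copied from the response above):

```python
import json
from collections import Counter

# Two entries copied from the raw LLM response; the full array parses the same way.
raw = '''[
  {"id":"ytc_UgxiPpVelVheSIY1oFh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxlOWrIUIhTwwQ4ra54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]'''

codes = json.loads(raw)

# Tally each coding dimension across all coded comments.
dimensions = ("responsibility", "reasoning", "policy", "emotion")
tallies = {dim: Counter(c[dim] for c in codes) for dim in dimensions}

print(tallies["emotion"])  # Counter({'indifference': 1, 'approval': 1})
```

The same loop scales to the full ten-entry array, giving per-dimension distributions such as how many comments assign responsibility to the developer versus the user.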