Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
>Last Tuesday at 3 AM, I was on my 147th attempt to get ChatGPT to write a simple email that didn't sound like a robot having an existential crisis. >I snapped. >**"Why can't YOU just ASK ME what you need to know?" I typed in frustration.** >Wait. >What if it could? >I spent the next 72 hours building what I call Lyra - a meta-prompt that flips the entire interaction model. **Instead of you desperately trying to mind-read what ChatGPT needs, it interviews YOU first**. Bolded emphasis is mine. It honestly sounds like * you made zero effort, zero context demands of ChatGPT to "write me an email" 147 times, and furthermore * expected ChatGPT to read *your* mind and somehow just know all the things you didn't tell it, as evidenced by the fact that you suddenly had an eureka moment in which you realized that there were things it needed to know, and should ask you about. The whole "Lyra prompt" is unnecessary. Literally this whole problem is solved if you tell it what it needs to know before you make your request. Seriously, what the fuck.
reddit · AI Harm Incident · 1751221398.0 · ♥ 133
Coding Result
| Dimension      | Value                      |
| -------------- | -------------------------- |
| Responsibility | unclear                    |
| Reasoning      | unclear                    |
| Policy         | unclear                    |
| Emotion        | unclear                    |
| Coded at       | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[{"id":"rdc_n0gozgp","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},{"id":"rdc_n0fcv0j","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},{"id":"rdc_n0g3rd8","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},{"id":"rdc_n0itavb","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},{"id":"rdc_n0f7idf","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"]}