Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It reads like panic to me. I think they're starting to realize their limitations are... huge. There is a fundamental aspect to AI (whatever) assistance that is going unchecked until it *has* to be - there is no accountability. Will it work for order screens and basic customer service tools? Of course. It can absolutely replace the checker at Wendy's or the greeter at Wal-Mart. But when someone needs to be *accountable* for the decision it makes - it becomes an incredible liability to take on AI as a means of producing much of anything. It can be a huge benefit in review. Perhaps editing. Organizational processing is something it will most likely have a huge presence in. But it won't replace or take away those jobs. Because at the end of the day - when something goes wrong - I'm going to sue someone. And even the AI is *owned* by someone. So that person has the liability. You going to tell me a bunch of major industries are going to replace accountability and financial security for... a slightly lower bottom line? That's a fiscal sidestep, not a net benefit. And if it is - it's razor thin. Any adoption isn't going to come with AI 'advancement,' it's going to come with legal battles and the courts assigning liability so anyone who touches that stuff knows where to park their Brinks truck.
reddit AI Harm Incident 1684258593.0 ♥ 80
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       deontological
Policy          regulate
Emotion         fear
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_jke7wyr", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_jke8g54", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "rdc_jkfg7rw", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "rdc_jke1biv", "responsibility": "company", "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "rdc_jke4fc1", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"}
]
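A minimal sketch of how a raw response like the one above could be turned into a per-comment coding result. This assumes the model returns a JSON array of objects keyed by `id` with the four schema dimensions; the function name `parse_coding` and the lookup-by-id behavior are illustrative assumptions, not part of any real pipeline shown here.

```python
import json

def parse_coding(raw: str, comment_id: str) -> dict:
    """Hypothetical helper: parse a raw LLM response (a JSON array of
    coding objects) and return the coding for one comment id.
    Field names mirror the example payload above."""
    codings = json.loads(raw)
    by_id = {c["id"]: c for c in codings}
    return by_id[comment_id]

raw = (
    '[{"id":"rdc_jke4fc1","responsibility":"ai_itself",'
    '"reasoning":"deontological","policy":"regulate","emotion":"fear"}]'
)
coding = parse_coding(raw, "rdc_jke4fc1")
# coding["responsibility"] is "ai_itself", coding["policy"] is "regulate"
```

Keying by `id` makes the lookup robust to the model returning codings in an arbitrary order, which matches how the five entries above carry their own ids.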