Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This will happen with an AI too. Except the person on the stand will be the hospital that chose to replace the radiologist with an AI, or the creator of the AI itself. Since an AI can't be legally liable for anything. And then the AI will be adjusted to reduce that risk for the hospital. Because ultimately, hospitals don't actually care about accuracy of diagnosis. They care about profit, and false negatives (i.e. missed cancer) eat into that profit in the form of lawsuits. False positives (i.e. the falsely flagged items to avoid being sued) do not eat into that profit and thus are acceptable mistakes. In fact they likely increase the profit by leading to bigger scans, more referrals, etc.
reddit Cross-Cultural 1577923911.0 ♥ 173
Coding Result
Dimension       Value
Responsibility  company
Reasoning       consequentialist
Policy          liability
Emotion         outrage
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_fcsgjxh", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_fcssdy9", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "rdc_fcss70u", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "indifference"},
  {"id": "rdc_fct0byw", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_fcsqe3w", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "indifference"}
]
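A batch response like the one above can be parsed and matched back to an individual comment by its id. A minimal sketch in Python, assuming only the JSON shape shown; the helper name `coding_for` and the truncated sample payload are illustrative, not part of the tool:

```python
import json

# A trimmed sample batch response with the same shape as the raw output above.
raw = '''[
  {"id": "rdc_fcsgjxh", "responsibility": "none", "reasoning": "unclear",
   "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_fcssdy9", "responsibility": "company", "reasoning": "consequentialist",
   "policy": "liability", "emotion": "outrage"}
]'''

def coding_for(comment_id, raw_response):
    """Return the coding dict for one comment id, or None if it is absent."""
    for entry in json.loads(raw_response):
        if entry.get("id") == comment_id:
            return entry
    return None

result = coding_for("rdc_fcssdy9", raw)
print(result["emotion"])  # -> outrage
```

Looking up by id rather than by position keeps the match correct even if the model reorders or drops items in its response.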