Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Part of my job involves figuring out how we can use these systems within the NHS in the UK. The real potential of these systems revolves around the issue of who owns the clinical risk. Basically, a medical AI has no "skin in the game", so it does not care if it makes a mistake. Consequently, all medical decisions that involve clinical risk must have a human in the loop to take legal responsibility for the decision. This currently limits the possible use cases, and the potential cost savings. Personally, I believe that these systems will initially be of most benefit in developing countries that are short of specialists and less litigious. For example: you go to your family doctor, who is supported by AI, and they send you to test facilities such as imaging, which provide basic AI-generated diagnostic reports that the family doctor can understand.
reddit AI Responsibility 1692046165.0 ♥ 1
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       deontological
Policy          regulate
Emotion         indifference
Coded at        2026-04-25T08:06:44.921194
Raw LLM Response
[
  {"id": "rdc_jtfvnns", "responsibility": "company", "reasoning": "virtue", "policy": "liability", "emotion": "outrage"},
  {"id": "rdc_jupqgk3", "responsibility": "company", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "rdc_jvvw5ig", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "rdc_jw4ydrq", "responsibility": "company", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "rdc_jw6xqzk", "responsibility": "distributed", "reasoning": "deontological", "policy": "regulate", "emotion": "indifference"}
]
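The raw response is a JSON array covering a batch of comments, while the coding table above shows a single comment's result. A minimal sketch of how the per-comment result can be recovered from the batch output is below; the pairing of this comment with id "rdc_jw6xqzk" is inferred from the matching dimension values, and the variable names are illustrative, not part of the tool.

```python
import json

# Batch response exactly as returned by the model (copied from above).
raw = (
    '[ {"id":"rdc_jtfvnns","responsibility":"company","reasoning":"virtue",'
    '"policy":"liability","emotion":"outrage"},'
    ' {"id":"rdc_jupqgk3","responsibility":"company","reasoning":"unclear",'
    '"policy":"unclear","emotion":"mixed"},'
    ' {"id":"rdc_jvvw5ig","responsibility":"unclear","reasoning":"unclear",'
    '"policy":"unclear","emotion":"approval"},'
    ' {"id":"rdc_jw4ydrq","responsibility":"company","reasoning":"consequentialist",'
    '"policy":"unclear","emotion":"mixed"},'
    ' {"id":"rdc_jw6xqzk","responsibility":"distributed","reasoning":"deontological",'
    '"policy":"regulate","emotion":"indifference"} ]'
)

# Index the batch by comment id so each coded result can be looked up directly.
codings = {item["id"]: item for item in json.loads(raw)}

# The NHS comment's coding appears to correspond to the last entry.
result = codings["rdc_jw6xqzk"]
print(result["responsibility"])  # distributed
print(result["emotion"])         # indifference
```

Indexing by id rather than by array position keeps the lookup robust if the model returns the batch in a different order than the comments were submitted.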