Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I feel like this could be a survivorship-bias problem. We don't know how many people are being helped by AI medical advice, but the cases that get attention are always the worst ones, so people naturally assume AI is incapable of giving good medical advice. The latter may be true, but one needs to prove it before claiming so. Anecdotally, I've seen AI give me answers very similar to the ones doctors gave, and it can also look at my textual medical data and reach the same conclusions doctors did, so it seems reasonably good at this kind of thing. I don't think this replaces the need for a doctor, though; there's a social component to visiting a hospital. You get asked questions, doctors know cases similar to yours from their everyday experience, and they know how to test you quickly and check for adjacent warnings and clues. Having that kind of contact, I believe, makes doctors more effective than AI. But AI is still useful here in the sense that it can make pretty good guesses about problems. Again, this is anecdotal, and I don't think I can argue for "use it" or "don't use it" right now, but I also fear people might simply ignore this potential use case and rush to regulate the thing without first testing it accurately and fairly, just because the bad cases are louder than the good ones.
Source: youtube · AI Harm Incident · 2025-11-25T21:1…
Coding Result
Dimension: Value
Responsibility: none
Reasoning: consequentialist
Policy: none
Emotion: mixed
Coded at: 2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgyQFwJofeROUYFIFY14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugyn4WHzlVJAaf5y5td4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzCculZMPJm8jqjmgR4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"horror"},
  {"id":"ytc_UgyYim_e5RzUSX1D5Md4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyAwtg_zNBLnwPmExJ4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzWV6JwlWSJfXNc1ax4AaABAg","responsibility":"none","reasoning":"unclear","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgzCzMh9KybB48SI8n94AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugy948MZ8WHsevwfZFd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugyg0x6_iJkTCNqax6h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzvwazGxmlVOgyGnzN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
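Since the raw LLM response is a JSON array of per-comment codings keyed by id, it can be parsed into a lookup table for inspection. The sketch below is illustrative (the helper name `parse_codings` is our own, not part of any tool shown here) and uses two entries copied verbatim from the response above:

```python
import json


def parse_codings(raw: str) -> dict:
    """Hypothetical helper: turn a raw LLM response (a JSON array of
    per-comment codings) into a dict keyed by comment id."""
    rows = json.loads(raw)
    return {row["id"]: {k: v for k, v in row.items() if k != "id"}
            for row in rows}


# Two entries taken verbatim from the raw response above.
raw = (
    '[{"id":"ytc_UgyQFwJofeROUYFIFY14AaABAg","responsibility":"none",'
    '"reasoning":"consequentialist","policy":"ban","emotion":"outrage"},'
    '{"id":"ytc_UgzWV6JwlWSJfXNc1ax4AaABAg","responsibility":"none",'
    '"reasoning":"unclear","policy":"regulate","emotion":"approval"}]'
)

codings = parse_codings(raw)
print(codings["ytc_UgzWV6JwlVJAaf5y5td4AaABAg"]
      if False else codings["ytc_UgzWV6JwlWSJfXNc1ax4AaABAg"]["policy"])
# prints "regulate"
```

This kind of lookup makes it easy to check whether the coding stored for a given comment matches what the model actually emitted in its raw response.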