Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Imma be so fr, I’m not going to any doctor who uses AI. Not because it could get whatever diagnosis wrong, but people get lazy, even doctors. There’s high chance every other doctor who uses ai would just start listening to it without a second thought. Maybe it works 80% of the time. But they could get complacent and not catch some shit that the ai also doesn’t catch. So I rather get a second or third opinion from other doctors. Humans get complacent and lazy even doctors.
Source: youtube · AI Harm Incident · 2024-06-06T10:0…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  user
Reasoning       deontological
Policy          none
Emotion         fear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugxtz9aQhjYazuNXjEt4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwFJcwVy2dUwJMc5Rh4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugz94zc2UqgcdpQhOyp4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzInO3aa22WG2kT4lp4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_Ugy-opD22pSUFAe6SUF4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyNwLW8z0BYGY4V1ph4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugz2qSLzMUTJz-3kE514AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgyGN_Fa0YCtYbDg3yV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgznQruYukPmWw0N_MV4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugws7x2GSvvwwC-1dL54AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "outrage"}
]
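Before accepting a batch like the one above, the raw response can be parsed and each record checked against the coding dimensions. A minimal sketch in Python; the allowed-value sets below are inferred only from the labels visible in this response and may be a subset of the full codebook.

```python
import json

# One record from the raw response above (the comment coded user/deontological/none/fear).
raw = '''[
  {"id": "ytc_Ugz2qSLzMUTJz-3kE514AaABAg",
   "responsibility": "user", "reasoning": "deontological",
   "policy": "none", "emotion": "fear"}
]'''

# Value sets observed in this batch; the full codebook may define more labels.
DIMENSIONS = {
    "responsibility": {"none", "user", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue"},
    "policy": {"none", "ban"},
    "emotion": {"resignation", "approval", "indifference", "fear", "mixed", "outrage"},
}

def validate(codes):
    """Return (id, dimension, value) triples whose value falls outside the observed sets."""
    bad = []
    for row in codes:
        for dim, allowed in DIMENSIONS.items():
            if row.get(dim) not in allowed:
                bad.append((row.get("id"), dim, row.get(dim)))
    return bad

codes = json.loads(raw)
print(validate(codes))  # → [] : every value is in an observed set
```

Records flagged by `validate` can then be routed back for re-coding rather than silently stored with an out-of-vocabulary label.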