Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
No, I can absolutely not imagine a world where doctors use an AI that was trained to give plausible-sounding responses instead of factual responses.
Source: YouTube — AI Harm Incident, 2024-06-07T02:4…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           none
Emotion          outrage
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_Ugw5dtyb1_GqWBI_qmV4AaABAg", "responsibility": "none",      "reasoning": "mixed",            "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugy_7frqWwvNHN-k3OF4AaABAg", "responsibility": "user",      "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwRR2X2P3KQbQYoWb14AaABAg", "responsibility": "developer", "reasoning": "deontological",    "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgydHdIV9kKt6g-6ZrV4AaABAg", "responsibility": "none",      "reasoning": "mixed",            "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugw2vZYVzmGloxMPytd4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugz9rHSvqUMzVdRmQHd4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwnKdsv_Z--Y0orNKd4AaABAg", "responsibility": "none",      "reasoning": "mixed",            "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzLaF51EZMRtGGA6XB4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxEY20n_jzyk2fwcQ94AaABAg", "responsibility": "company",   "reasoning": "deontological",    "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwB9QpcNHgQjVrR6qB4AaABAg", "responsibility": "none",      "reasoning": "mixed",            "policy": "none", "emotion": "fear"}
]
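The coding result shown above for this comment is one entry in the batched JSON the model returned. A minimal sketch of how such a batch can be parsed and a single comment's coding looked up by id; the helper name `coding_for` is hypothetical, and `raw` holds an excerpt of the response on this page:

```python
import json

# Excerpt of the raw LLM response (two entries from the batch above).
raw = '''[
  {"id": "ytc_UgwRR2X2P3KQbQYoWb14AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugw5dtyb1_GqWBI_qmV4AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "indifference"}
]'''

def coding_for(raw_response: str, comment_id: str):
    """Return the coding dict for one comment id, or None if it is absent."""
    for entry in json.loads(raw_response):
        if entry.get("id") == comment_id:
            return entry
    return None

result = coding_for(raw, "ytc_UgwRR2X2P3KQbQYoWb14AaABAg")
print(result["responsibility"], result["emotion"])  # developer outrage
```

Looking up the id of the comment displayed on this page recovers exactly the dimension values shown in its coding result (developer / deontological / none / outrage).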