Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This study actually demonstrates an important thing about AI models that people do not appreciate. The AI can accurately predict the correct words to output based on a given input. But the AI does not *understand* what it's doing, it's just recognising a pattern with no understanding of the material. It does not understand how to apply that knowledge, or the implications that it has for a human patient.
Source: youtube — AI Harm Incident, 2024-06-02T15:1…
Coding Result
Dimension        Value
---------------  --------------------------
Responsibility   developer
Reasoning        deontological
Policy           none
Emotion          mixed
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_Ugx_Pxirv-wLas1e5-l4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugyyv_JsgKrDTBohFGB4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgybDrhg2YFIpXri87x4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgyiJotdDcnXwpOhnId4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugxp_kZOgmu7-cQv0QV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyEreGVMfSDYcopFct4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugy86s2zPEDQGLYC7nZ4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyXET68cBD77o7Fhf54AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgzV9SR28a3i3oVb9IZ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxcSMECF73rEBT04_B4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
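The raw response is a JSON array with one coding object per comment, keyed by comment id. A minimal sketch of how such a payload might be parsed to recover the dimensions for one comment (ids and values taken from the response above; the field names are assumed to match the batch schema exactly):

```python
import json

# Abbreviated raw LLM response: two of the ten coding objects shown above.
raw = """[
  {"id": "ytc_Ugy86s2zPEDQGLYC7nZ4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyXET68cBD77o7Fhf54AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"}
]"""

# Index the batch by comment id for O(1) lookup.
codings = {row["id"]: row for row in json.loads(raw)}

# Look up the coding for the comment displayed on this page.
coded = codings["ytc_Ugy86s2zPEDQGLYC7nZ4AaABAg"]
print(coded["responsibility"], coded["reasoning"])  # developer deontological
```

Because the model codes comments in batches, the per-comment record shown in the table is just the array entry whose `id` matches the displayed comment.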