Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI will always, forever, be able to give unsafe or untrue advice. Even when it becomes able to reason, and find “truth”, it will still be based on human studies, which can also be misinterpreted. AI has a hopefully great and interesting future, but taking its outputs at face value without thinking to do further research or apply critical thinking is a failure of our (global “our”) education.
YouTube, AI Harm Incident, 2025-11-26T21:0…
Coding Result
Responsibility: distributed
Reasoning: consequentialist
Policy: none
Emotion: resignation
Coded at: 2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgwXj31gZXjyfnR-8up4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugz95s8kc4TgnNNi_IB4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyjlcVDkQq3j3BRrE54AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_Ugx-hZdCuHMWwxGDycV4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgxNfC7qVMPQcwSB0jN4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugysiw5QjG6QDLAznrV4AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugx2ANk3EIvvzbvkY5B4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugw0vPJtRa0pcmTh0lF4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugwb35NSCRp1OZc1CsV4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugxayw4_NQ-AG2hDevR4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]
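A raw response like the one above can be parsed and tallied per dimension with a few lines of Python. This is a minimal sketch, assuming the JSON structure shown (a list of records with `responsibility`, `reasoning`, `policy`, and `emotion` fields); it is not part of the original coding pipeline. The two records inlined below are copied from the response.

```python
import json
from collections import Counter

# Assumed raw LLM response (abridged to two records from the output above).
raw = '''[
  {"id": "ytc_UgwXj31gZXjyfnR-8up4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugxayw4_NQ-AG2hDevR4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]'''

records = json.loads(raw)

# Count how many comments fall into each value, per coding dimension.
for dim in ("responsibility", "reasoning", "policy", "emotion"):
    counts = Counter(r[dim] for r in records)
    print(dim, dict(counts))
```

Running this over the full ten-record response gives a quick distribution check (e.g. which responsibility attributions dominate) before inspecting individual comments.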