Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I can’t believe they wouldn’t test for this situation and just build in an alert system that stops the chatbot and gives a notification of who to contact for emergencies or thoughts of harm.
Source: youtube · AI Harm Incident · 2025-11-07T18:3… · ♥ 4
Coding Result
Responsibility: company
Reasoning: consequentialist
Policy: liability
Emotion: outrage
Coded at: 2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_Ugye3P4h1APTwGABueh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxPlUnpnEus5-Dvx-V4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwaGsKrhaP7Bs-OCJd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugw2PVkHWftrJiiM3MN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzqyAYKW2YDgx9CZsd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxEt5HrKBNgfUpFLYx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"disapproval"},
  {"id":"ytc_Ugyr_osXO8dN0UyTPop4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"pain"},
  {"id":"ytc_Ugy3ovKJb0VAZMywbM54AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxXGkfclmZUPwK4JkB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwFQP_76JD4R_krneF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"disapproval"}
]
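A minimal sketch of how a coded result can be recovered from the raw batch response: the LLM returns one JSON object per comment, so parsing the array and indexing by `id` yields the four coded dimensions for any single comment. The snippet below uses only the standard library and an excerpt of the response shown above; the variable names are illustrative, not part of any coding pipeline.

```python
import json

# Excerpt of the raw LLM batch response: a JSON array of per-comment codes.
raw = '''[
  {"id":"ytc_Ugw2PVkHWftrJiiM3MN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugye3P4h1APTwGABueh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]'''

# Index records by comment id so a single comment's codes can be looked up directly.
codes = {rec["id"]: rec for rec in json.loads(raw)}

# The comment shown above was coded as: company / consequentialist / liability / outrage.
code = codes["ytc_Ugw2PVkHWftrJiiM3MN4AaABAg"]
print(code["responsibility"], code["reasoning"], code["policy"], code["emotion"])
# → company consequentialist liability outrage
```

Indexing by `id` also makes it easy to detect comments missing from the model's response (any submitted id absent from `codes`).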