Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
We do need to talk about AI potentially harming us. But, ironically, the more we talk about AI harming us, the more an AI with access to that information will be incentivized to distrust us.
Source: YouTube · Video: AI Harm Incident · Posted: 2025-07-28T16:3…
Coding Result
Dimension        Value
--------------   --------------------------
Responsibility   distributed
Reasoning        consequentialist
Policy           unclear
Emotion          fear
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgwYBogJedCoLsa26Jx4AaABAg", "responsibility": "developer",   "reasoning": "virtue",           "policy": "none",    "emotion": "resignation"},
  {"id": "ytc_UgzSMvU7ujsn5wR8gdF4AaABAg", "responsibility": "none",        "reasoning": "mixed",            "policy": "none",    "emotion": "indifference"},
  {"id": "ytc_Ugxd80sDHKPU2Dy14Pp4AaABAg", "responsibility": "none",        "reasoning": "unclear",          "policy": "none",    "emotion": "approval"},
  {"id": "ytc_UgxLg69yQ3o68BlWYA14AaABAg", "responsibility": "ai_itself",   "reasoning": "virtue",           "policy": "none",    "emotion": "mixed"},
  {"id": "ytc_UgzYYe4VkTIfoobz5h54AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgznUadxsu_jSIFjpJZ4AaABAg", "responsibility": "company",     "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgySQnkRSSajgJUcDU54AaABAg", "responsibility": "ai_itself",   "reasoning": "unclear",          "policy": "none",    "emotion": "fear"},
  {"id": "ytc_UgyOi4j2WczdiZ7PC-Z4AaABAg", "responsibility": "company",     "reasoning": "deontological",    "policy": "ban",     "emotion": "outrage"},
  {"id": "ytc_UgyWLISiDjDBp4hJ_Q54AaABAg", "responsibility": "ai_itself",   "reasoning": "consequentialist", "policy": "none",    "emotion": "fear"},
  {"id": "ytc_UgwujRcof6h7bk4h-5F4AaABAg", "responsibility": "none",        "reasoning": "mixed",            "policy": "none",    "emotion": "indifference"}
]
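A raw response like the one above can be turned back into per-comment codings with a small parser. The sketch below is a minimal, hedged example: the `ALLOWED` label sets are inferred only from the values visible in this response (the actual codebook may define more labels), and `parse_codings` is a hypothetical helper name, not part of the tool shown here.

```python
import json

# Allowed labels per dimension, inferred from the response above.
# ASSUMPTION: the real codebook may include additional labels.
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"ban", "unclear", "none"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "approval", "mixed"},
}

def parse_codings(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coding objects) into a
    mapping from comment id to its coded dimensions, validating labels."""
    codings = {}
    for row in json.loads(raw):
        cid = row["id"]
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim}={row.get(dim)!r}")
        codings[cid] = {dim: row[dim] for dim in ALLOWED}
    return codings

raw = ('[{"id": "ytc_UgzYYe4VkTIfoobz5h54AaABAg",'
       ' "responsibility": "distributed", "reasoning": "consequentialist",'
       ' "policy": "unclear", "emotion": "fear"}]')
coded = parse_codings(raw)
print(coded["ytc_UgzYYe4VkTIfoobz5h54AaABAg"]["emotion"])  # fear
```

Validating against a fixed label set catches the common failure mode where the model invents an off-codebook label, which would otherwise silently corrupt downstream counts.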