Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
getting AI answering questions doesn't prove anything. it's just a bunch of statistics figuring out what the most likely the next word is going to be. so it doesn't care at all if it's true or not, just what word is likely to follow. also, tests like these don't test how good someone actually is at their job, so just because an AI scored well, it doesn't mean that it's a good doctor now.
youtube AI Harm Incident 2024-06-05T21:3…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         mixed
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgyaITIVyzseFYrW4w54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwrO7Sn9FUs3ZwwfS94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugyg_2xTpqgKIsBZ7fx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxyLKrWPFGqWeB20Q94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzTUe2zk4g5f2PJH3p4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwUEKEFOGSLBdIpB5F4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyZGp2sDipo-gZw7LB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzxHrRW5nMHqZ12Yth4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzlzU17_msEcuz3auJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugy99HCxAU72DndsyH54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
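The raw response above is a JSON array, one record per coded comment, keyed by comment id. A minimal sketch of turning it into per-comment lookups (field names are taken directly from the JSON; the two embedded records are copied from the response above, and the lookup itself is illustrative):

```python
import json

# Raw LLM response: a JSON array of per-comment codes.
# Two records copied verbatim from the batch response above.
raw = '''[
  {"id":"ytc_Ugyg_2xTpqgKIsBZ7fx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzTUe2zk4g5f2PJH3p4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"}
]'''

# Index the records by comment id for direct lookup.
codes = {rec["id"]: rec for rec in json.loads(raw)}

# Retrieve the codes for the comment shown on this page.
rec = codes["ytc_Ugyg_2xTpqgKIsBZ7fx4AaABAg"]
print(rec["responsibility"], rec["reasoning"], rec["policy"], rec["emotion"])
# → none consequentialist none mixed
```

This is how the "Coding Result" table above can be derived from the raw response: the record whose id matches the comment supplies the Responsibility, Reasoning, Policy, and Emotion values.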