Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
One annoying thing that AI does: it will agree with you even when you're going down the wrong path. Not always but it sometimes can reinforce an idea or belief that you have that may not be based in science, like in this patient's case,
Source: youtube · AI Harm Incident · 2026-01-06T13:5… · ♥ 12
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          unclear
Emotion         fear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgywgxD8g-0HC_gELQV4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyWK3ZgD0IhTghCgod4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwE3nt2Xlu_MlY10LJ4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzFY7qnnK_1bcIhImN4AaABAg", "responsibility": "distributed", "reasoning": "mixed", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_Ugxvke82GR-FO7AM7FB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxN5iqpXwHx82m3_S14AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugwi5xWb_iabQbWR9pF4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgztHNi6qdL-XxEtWP54AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugx6EXZBSfP_ztP_oY14AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugwbv5k96NVNJOLX5al4AaABAg", "responsibility": "user", "reasoning": "mixed", "policy": "none", "emotion": "resignation"}
]
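A raw response like the one above can be checked against the coding schema before it is stored. The sketch below is a minimal example, not the project's actual pipeline; the allowed value sets are inferred from the values visible on this page, so the real codebook may define additional categories.

```python
import json

# Allowed values per coding dimension, inferred from the output shown on this
# page (assumption: the real codebook may include more categories).
ALLOWED = {
    "responsibility": {"ai_itself", "distributed", "user", "none", "unclear"},
    "reasoning": {"consequentialist", "mixed", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "approval", "resignation"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and return any out-of-schema or missing values."""
    records = json.loads(raw)
    problems = []
    for rec in records:
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)  # None if the model omitted the dimension
            if value not in allowed:
                problems.append({"id": rec.get("id"), "dimension": dim, "value": value})
    return problems

raw = ('[{"id":"ytc_UgyWK3ZgD0IhTghCgod4AaABAg","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]')
print(validate_codes(raw))  # an empty list means every value is in-schema
```

Returning the offending record ids, rather than raising on the first bad value, makes it easy to re-prompt the model for just the comments whose codes failed validation.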