Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I just asked chatGPT about this, and it says it's due to human error. Which is a bit unnerving, because that's *exactly* what HAL said in 2001!
youtube · AI Moral Status · 2025-06-05T23:1…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           unclear
Emotion          fear
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[ {"id":"ytc_UgynjWtTiZkvAL5LcKZ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"mixed"}, {"id":"ytc_Ugwg9ApgYk57prrgiRZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugzkl-6r93SwWXM1VD14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgyqNJ99RImroJhe9Y94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_UgwAOUEU1aZwpBTsbDJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzJuHO_e4oOn3kyZ_B4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugwmmme_7H-BA_T0NC5Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugxq5FqAJ4Awq4oE-cZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"}, {"id":"ytc_UgzqI2W1wZd5ZH2EY9p4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_Ugz1f6Cr0hT0cFFTP714AaABAg","responsibility":"government","reasoning":"mixed","policy":"unclear","emotion":"mixed"} ]