Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Incorrect, some aspects in coding you simply cannot rely on AI. Once the code handles something that involves the potential to kill someone (defense, medical, etc) then it becomes a logistics nightmare to prove the AI can be sufficient to support/end human well being.
youtube 2023-09-03T17:5…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       deontological
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_Ugys1c71pyFnu4YEPBt4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyGfr7xNTRAnhEGKCl4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgxhfhMeZGYqOk4vTCR4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgwdUNbBJfJUNDnzUYh4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxzPpuYjG7yOuxOtMN4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_Ugz8NAAbyRL7zp0X2Ml4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgwND1aH9Mcs6mre7ox4AaABAg", "responsibility": "user", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugyr3joc1vvHEUmqOrp4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "mixed"},
  {"id": "ytc_UgxHB4k65Qx51VQgMd94AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgxEsM1Rg_Frr2E856Z4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
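Inspecting the raw output for one coded comment amounts to finding the JSON entry whose id matches the comment and reading its four dimensions. A minimal sketch of that lookup, assuming the raw response is a valid JSON array like the one above (`code_for` is an illustrative helper name, not part of any tool shown here; only two entries from the response are included for brevity):

```python
import json

# Two entries copied from the raw LLM response above; the second matches
# the coded comment shown in the Coding Result table.
raw_response = """[
  {"id": "ytc_UgwdUNbBJfJUNDnzUYh4AaABAg", "responsibility": "ai_itself",
   "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgxHB4k65Qx51VQgMd94AaABAg", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "regulate", "emotion": "fear"}
]"""

def code_for(comment_id, raw):
    """Return the coded dimensions for one comment id, or None if absent."""
    for entry in json.loads(raw):
        if entry["id"] == comment_id:
            return entry
    return None

result = code_for("ytc_UgxHB4k65Qx51VQgMd94AaABAg", raw_response)
print(result["reasoning"])  # deontological
print(result["emotion"])    # fear
```

Note that the dimensions recovered this way (ai_itself, deontological, regulate, fear) agree with the Coding Result table, which is the check this view is meant to support.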