Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I dunno. Frankly, that... seems like about the best way it could have answered? I'm not sure that I as a human could have done any better. Philosophy is hard, and morals are not the same things as ethics -- if ethics are logic, then morals are axioms. A human can't prove their moral axioms either. I can't PROVE that saving lives is good without eventually running into either circular reasoning or an ad hoc "that's just how I feel / that's how I was raised / my religion says so" kind of answer. Also your arguments were largely flawed anyway. Your perception that your money would make an immediate difference is not correlated with the facts, which are that the mosquito nets aren't made on demand when someone donates because that's an inefficient use of resources. Donations are collected until enough nets (or meals or medicine or whatever) can be made to leverage economies of scale, thereby saving more lives per dollar spent, and once the program has started then typically nets will then continue to be produced based on a prediction of future donations since, again, that's more efficient. And as ChatGPT correctly pointed out, morals aren't the only meaningful kind of values. Maintaining one's own health (physical and mental) is what will enable you to perform more good deeds in the future. If you burn out because you neglect yourself in favor of hypothetical kids you'll never meet, then you won't make more donations in the future. The exact balance between these factors is extremely personal and depends on one's own ability to endure hardship. If you get kicked out of the house and no longer have a place to live, how many mosquito nets will never have a chance to be paid for?
Source: youtube, posted 2025-03-18T20:4…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        mixed
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[ {"id":"ytc_UgzkLRRgnS-hz69U5-h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgxB_rep2-pYyO-mrjl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwGiui3j5r9q2ghJvh4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}, {"id":"ytc_Ugwm3rShyEM5d5_YBXR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"industry_self","emotion":"approval"}, {"id":"ytc_UgzsHPX7GFkQiX8G_VZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgxAzu0FNmPtVAg-7iV4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugzw7sR7nZxFf0qyuG94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}, {"id":"ytc_Ugwl17iy1D_ntcKyUEB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugy-lXzd5VODQaWHL154AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_Ugx0c1q96r7N_f1F2BR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"} ]