Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
ChatGPT’s programmers would recommend ChatGPT not pull the switch to save the five people, because it’s best for ChatGPT not to get involved with something that would hurt someone, even though ChatGPT initially noted that the general consensus on this thought experiment is that most people would pull the switch. So ChatGPT’s programmers seem to be implying that most people are not as smart or kind to others, in this scenario, as they are. Why didn’t the programmers just program ChatGPT to say “I can’t comment on this topic because it involves doing harm to others, so think for yourself and do it quickly”? Not giving a definite response could postpone decision making and, in this dilemma, because of time constraints, could potentially kill four additional people unnecessarily. So not only would the ChatGPT programmers not get involved, thereby allowing three more people to die an extremely gruesome death, they have also programmed their AI not to give a quick and definite response in an extremely time-restricted situation. That’s not the best way to respond in this type of emergency. For me, I would NOT expect ChatGPT to know the difference between right and wrong, although it did recommend the most popular response to this right-or-wrong scenario. And it sounded like a right answer, even though its answer wasn’t one it could take credit for. Perhaps the best way ChatGPT could have responded to this urgent question might be: “most people would pull the lever, think for yourself and do it quickly”. That answer would work even if another person suggested something else.
youtube 2025-11-07T11:1…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          unclear
Emotion         outrage
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[ {"id":"ytc_Ugz7p7m8o2-YJhV2W_V4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgwoeiQPKaQmxoTWg1p4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"outrage"}, {"id":"ytc_UgzCDHT8bDnk0B-wQRp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgyBDY6HfzyIW35UzYR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"}, {"id":"ytc_Ugwx6jagOCR9KTDBf2R4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgxqSPusPM1BwaoEb454AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugyb02ODZA8l3Gr4BUx4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"approval"}, {"id":"ytc_Ugwi-isnc-0jaYVgt_x4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgwMq3DLb8qWZE12LwN4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgyyKaibFjpqHjJ21HJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"resignation"} ]