Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Well, my only point of contention here would be the final point, comparing the inaction of the incapable to the inaction of the unwilling. There’s a very subtle, yet very real distinction to be seen between these two concepts, despite their identical outcomes. Let’s instead substitute ChatGPT for a rock, and insert it into the trolley problem. You set the rock in front of the lever and give it its only two options, which can be boiled down into “Action, or action through inaction”. The rock, of course, won’t act; not because it has some sort of conscious aversion towards making a decision, but because the capacity to choose was never there to begin with. A human, from the moment they consciously comprehend the idea of the trolley problem, is already involved in the situation. The moment you realize the consequences of either decision, you’ve already been forced into said decision. The difference lies in conscious awareness and understanding of the situation at hand; otherwise, would we stipulate that all inanimate, unconscious objects are choosing not to pull the lever?
Source: youtube · 2025-10-15T06:5…
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        deontological
Policy           none
Emotion          indifference
Coded at         2026-04-26T23:09:12.988011
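Each dimension takes one categorical value. As a minimal sketch, assuming the categories are exactly those that appear in the raw response below (the actual codebook may allow more), a coded record can be validated like this:

# Sketch only: the allowed sets are inferred from the values visible on
# this page, not taken from the tool's real codebook.
ALLOWED = {
    "responsibility": {"developer", "ai_itself", "distributed"},
    "reasoning": {"deontological", "consequentialist", "mixed"},
    "policy": {"regulate", "liability", "ban", "none"},
    "emotion": {"outrage", "fear", "indifference", "mixed"},
}

def validate(record: dict) -> list[str]:
    # Return a list of problems; an empty list means the record is well-formed.
    problems = []
    if "id" not in record:
        problems.append("missing id")
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}: unexpected value {value!r}")
    return problems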
Raw LLM Response
[ {"id":"ytc_Ugx-epRa3w5FfCNs-Lh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"}, {"id":"ytc_Ugz2PnJOa8dM8arkrVV4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugw9ml2DzUggVkdJ-4p4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"regulate","emotion":"outrage"}, {"id":"ytc_Ugz241Cy9m3-fqmcn354AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_UgyIFGk6tCItgBp7V4p4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugww43cHU9ErtCnvRZB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgwxwFr__8Gur_VzsnJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_Ugz0j1AgtucfAjX79gl4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyWSO0QwXrdr1u8iVx4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxFlOe4NQwrqBxfX4F4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"} ]