Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
*In general* this moral exercise is premised on being capable of acting at minimal/zero cost to oneself and then deciding to do nothing. If you are in such a situation and are *incapable* due to a fear/fight-or-flight response, or if doing something is *costly*/risky to yourself/others (because, say, being under time pressure to pull someone from a burning car might give you a panic attack halfway, and then you're stuck there too, compounding the problem for rescuers), or you cannot *freely decide* (mental episode, drug trance, being extorted, I dunno), then *this does not apply to you*. Obviously nobody here is going to read anything but the headline (even fewer than usual -- OP had to post a goddamn video), but [Philosophy Bro's article on the Drowning Child Argument](https://www.philosophybro.com/archive/peter-singers-drowning-child-argument) is really funny and illustrates these premises of inaction in action very well.
reddit · AI Responsibility · 1684508539.0 · ♥ 19
Coding Result
| Dimension      | Value                      |
|----------------|----------------------------|
| Responsibility | user                       |
| Reasoning      | consequentialist           |
| Policy         | unclear                    |
| Emotion        | mixed                      |
| Coded at       | 2026-04-25T08:06:44.921194 |
Raw LLM Response
```json
[
  {"id": "rdc_jkhh4to", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
  {"id": "rdc_jkhsvlk", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "unclear"},
  {"id": "rdc_jkibjtt", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "rdc_jkn3xdy", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "rdc_jks3kc8", "responsibility": "user", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"}
]
```
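A raw response like the one above can be turned into per-comment coding rows before inspection. The sketch below is a hypothetical helper, not part of the original pipeline; the allowed value sets per dimension are inferred only from values visible in this response, and any out-of-vocabulary value is coerced to "unclear":

```python
import json

# Allowed values per coding dimension, inferred from the responses shown above.
# "unclear" serves as the fallback in every dimension. (Assumption: the real
# codebook may include more values than appear in this one batch.)
ALLOWED = {
    "responsibility": {"company", "user", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"approval", "outrage", "indifference", "mixed", "unclear"},
}

def parse_coding(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array of coded comments) into
    {comment_id: {dimension: value}}, coercing unknown values to "unclear"."""
    coded = {}
    for item in json.loads(raw):
        coded[item["id"]] = {
            dim: item.get(dim) if item.get(dim) in allowed else "unclear"
            for dim, allowed in ALLOWED.items()
        }
    return coded

# The entry for the comment shown on this page:
raw = ('[{"id":"rdc_jks3kc8","responsibility":"user",'
       '"reasoning":"consequentialist","policy":"unclear","emotion":"mixed"}]')
row = parse_coding(raw)["rdc_jks3kc8"]
print(row["responsibility"])  # prints "user"
print(row["emotion"])         # prints "mixed"
```

Coercing rather than rejecting keeps a malformed model answer from dropping a comment from the batch; the "unclear" rows then surface naturally in the coded table.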