Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
As cold as it seems, I see this as nothing more than using morality to slow technology. This issue will only be relevant for the short time between when the first self-driving cars go up for sale, and when they are ubiquitous. For the specific example in this video, it is even more moot. The self-driving car shouldn't be following so closely that it cannot stop in time (neither should the human driver, but computers are much better at following instructions). Once self-driving cars are starting to become ubiquitous, we end up with the scenario where the crates fall (because some human screwed up), then the following vehicle senses it, then transmits this information to the network, alerting all other self-driving cars in the vicinity that it has to make an emergency maneuver. At this point, other cars will move out of the way to make room (there's no need for hard lanes with self-driving cars) and move around it. Subsequent cars also maneuver around it in kind. The human in the truck that lost crates is notified as the truck moves to the nearest shoulder or exit, and necessary authorities are notified as well. Meanwhile, traffic continues to safely and efficiently avoid the crates in the middle of the road without the passengers even noticing, unless they happen to be looking outside.
Source: youtube · AI Harm Incident · 2015-12-15T01:4…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          indifference

Coded at: 2026-04-27T06:24:59.937377
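Each record carries exactly one label per dimension. As a minimal sketch of how such a record might be represented and sanity-checked, the Python below is hypothetical: the `CodedComment` name is invented here, and the allowed-value sets include only the labels observed in this batch, not necessarily the full codebook.

```python
from dataclasses import dataclass

# Label sets observed in this batch (an assumption: the full codebook may allow more).
ALLOWED = {
    "responsibility": {"none", "company", "ai_itself"},
    "reasoning": {"consequentialist", "deontological"},
    "policy": {"none", "liability", "regulate"},
    "emotion": {"indifference", "resignation", "approval", "outrage"},
}

@dataclass
class CodedComment:
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def invalid_labels(self) -> list:
        """Return (dimension, value) pairs that fall outside the known label sets."""
        return [(dim, getattr(self, dim))
                for dim in ALLOWED
                if getattr(self, dim) not in ALLOWED[dim]]
```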
Raw LLM Response
[ {"id":"ytc_UgjaTrm3xjVlsXgCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"}, {"id":"ytc_UgiVJWa_Y6bmRXgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UghRt6TFpVDC0HgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgjWo4JZkIB25ngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UggQIWXW0Sjhu3gCoAEC","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_Ugh4kvsWRmbvVXgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UghHt-JHGMzZ1XgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"ytc_UgiJ_L1RWjzFSHgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_Ugiygo6Qdg1iq3gCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UggxWJ27f-_UIngCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"} ]