Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Your hypothetical situation is easily avoided by actual safe driving, which the auto car will do. You are, by law, supposed to remain a safe distance behind any vehicle in front of you. If you are in a self-driving car that is too close to stop, then you are in a self-driving car that is not following the law. One has to question why we should consider such foolish hypothetical situations that necessarily require the car to do what the car won't do. Furthermore, you are not accounting for reaction time in this safe distance.

Why is it that people always feel compelled to force dichotomies (or trichotomies, in this case) on situations? You don't think that a self-driving car that can react in a thousandth of the time of a human operator would be able to hit the brakes, begin to slow down (given that it, by law, must remain a safe distance behind the truck in front), and then turn into another lane, weaving behind or around the SUV or motorcycle? The problem with hypothetical problems is that we can continue to invent hypothetical problems about a fraction of a percent of the outcomes... not to mention the ability to deal with the ethical complications down the road. Here, it is illegal to operate a motorcycle without a helmet, and the laws will keep expanding to promote safety.

The more we sit around and ponder pointless moral dilemmas about whether a computer making a decision to take an action that results in collateral damage is somehow semantically different than if a human did the same thing, the more people die on the streets from preventable accidents that self-driving cars would have eliminated. Rational thought is only rational when we don't irrationally let our fear of change raise pointless moral dilemmas that ultimately result in the more irrational outcome of letting people die at higher rates because we are threatened by change and innovation, and justify it by arguing the semantics of whether it is a decision or a reaction on the computer's part.

The real moral dilemma with the future of self-driving cars is this: if the safest outcome the car decides on involves subjecting the passengers to injury instead of avoiding it and causing harm to someone else, will that affect sales compared to normal cars? Will people buy cars that may put their lives in danger for the overall safety of humanity as a whole, reducing accidental deaths on the road? Or will people opt to keep driving for themselves if it means not having to take that personal risk? It's one thing to apply reason and logic to yourself and say, yes, I am willing to increase my risk factor ever so slightly in order to protect society as a whole, but am I willing to do so for my children? Fundamentally, that goes against our most basic biology. And if this is the case, then will automakers produce cars that will deflect the threat of injury from the driver in order to improve sales, and what will the legality be around this?

This is still a hypothetical question, and one that needn't be of concern to launching these vehicles on the roads en masse, since, like all innovations and entrepreneurial ventures, taking action first is always the best course for success in the industry and for capital growth. The issues raised and the changes needed can be addressed on the journey to implementing these cars. It isn't an excuse to let our irrational fear of change hold us back from taking action.
Source: youtube · Incident: AI Harm Incident · Posted: 2017-02-03T17:2…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       deontological
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[ {"id":"ytc_UggAC0mV8oC9jngCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_UgjpcS32Uc2yJngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UggfHBar3vbNengCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgiwvgjYZIffAngCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgjxDEIZXTjr23gCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UghcPoA1NFGlengCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytc_UgitDjAIO4MRV3gCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgjY90-a_EZ8FHgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_Ugiew_Ebk3iMfngCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_UgiX854HF1O3sHgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"} ]