Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This is a good video, generally, but there are a number of disingenuous moments that make this simply appear like a smear campaign. First, you make a big deal of the two motorcyclists' deaths (as you should, these are tragedies) but then entirely ignore the fact that autopilot is virtually always safer than not in the graph you show, even if that difference isn't as big as Elon's grandiose claim. In fact, by your own graph there were only 2 quarters where autopilot was involved in more crashes, 3 where it was the same, and 8 where it was safer - including the most recent 5 quarters that you demonstrate. Do those lives not matter in the calculus? Secondly, the autopilot turning off one second before isn't a means to avoid liability. If I throw a knife but my hand isn't on it for the final second of its trajectory and it hits and injures someone, I'm liable notwithstanding the fact I was no longer controlling the knife. This is a basic principle of law. No court would say "well they turned off autopilot one second before so they can't be responsible Duhr-Hur!" In reality, autopilot turning off is a last-ditch effort to provide a chance to avoid an accident by removing itself from the equation entirely when it is clearly failing. In fact, it would be entirely irresponsible and negligent to allow a failing system to remain a variable for any longer than necessary. Moreover, where is the blame for the drivers in your video for these deaths? It is made BEYOND clear that the driver is ALWAYS required to monitor and be ready to react. Autopilot is a convenience, nothing more. If people are irresponsibly treating it as a fully autonomous thing, that is 100% on them. It is not advertised, nor intended, to be used like that. The two cyclists' deaths are exclusively on the human drivers involved.
Imagine being the pilot of a commercial airliner and the plane crashes because the Captain decided to take a nap in the cabin trusting the plane's autopilot would be fine, and then something fails. Whose fault is that when it is made clear that the pilot is responsible for control of the aircraft and should always be ready to take back control at a moment's notice? Lastly, the caked-on implication that Tesla is just a greedy company that doesn't care about deaths, as allegedly indicated by them removing radar and refusing to use expensive LIDAR, is grossly disingenuous. At the end of the day, anything produced on a mass scale necessarily requires a cost/safety tradeoff. Should ranchers build 15-foot-high steel walls with guards stationed on them to prevent cattle from getting out? Or should they use barbed wire fencing knowing that this will generally be sufficient but inevitably less safe, cattle will escape, and cause accidents on highways? Some things just are not cost effective. But Teslas are generally safer than standard vehicles despite their limitations, and they must be made affordable so that more people can own them. If more people can own them as opposed to owning less-safe traditional vehicles, doesn't this save more lives overall even if Teslas aren't supremely and perfectly safe?
youtube AI Harm Incident 2022-09-26T23:5…
Coding Result
Dimension       Value
Responsibility  company
Reasoning       mixed
Policy          unclear
Emotion         mixed
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgzWThy_31yWJGPWy0R4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyG1VHogRheYoD7aV94AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzfaddCTVV7jHOcCch4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxMVrDvmyCCWG29Px94AaABAg","responsibility":"government","reasoning":"mixed","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugyq9vdbdEM8g6b2wy14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgxPpGM4YiM6wBY3aRR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugw9GQhsuOjt5NpnpaR4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwMJ6PtPy5wuF4C__N4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugy63s3lvA0De1gsCO94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgyJJWwjKUTeZ3n_SKN4AaABAg","responsibility":"company","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
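A raw response like the one above can be turned back into per-comment codings and batch-level counts with a few lines of Python. This is a minimal sketch, not the pipeline's actual code: it assumes the response is a well-formed JSON array keyed by comment `id` (the three records shown are copied from the response above; the full batch has ten).

```python
import json
from collections import Counter

# Truncated sample of the raw LLM response (three of the ten records,
# copied verbatim from the response above).
raw = """
[
  {"id":"ytc_UgzWThy_31yWJGPWy0R4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzfaddCTVV7jHOcCch4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyJJWwjKUTeZ3n_SKN4AaABAg","responsibility":"company","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
"""

records = json.loads(raw)

# Index codings by comment id so one comment's labels can be looked up
# directly, as the inspection view does.
by_id = {r["id"]: r for r in records}

# Aggregate a coding dimension across the batch.
responsibility_counts = Counter(r["responsibility"] for r in records)

print(by_id["ytc_UgyJJWwjKUTeZ3n_SKN4AaABAg"]["policy"])  # unclear
print(responsibility_counts)  # Counter({'company': 2, 'user': 1})
```

On the full ten-record response the same tally would show `company` as the dominant responsibility label, which matches the `Responsibility: company` row in the coding result above.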