Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Overview: In a nutshell, the video argues that Autopilot is a flawed system that is falsely promoted as safe by Elon / Tesla, and a so-called "expert" says their technology will never be safe or capable without LiDAR. There are several selected videos of Tesla crashes, often at very high speeds. One such story highlights a now-widow whose husband "trusted autopilot with his life," which is understandably tragic and heartbreaking.

Here are the facts: Tesla FSD is safer than a human driver, judging by average miles driven per crash. The technology is not released to substitute for a human driver, who remains ultimately responsible (this is clearly labeled when you agree to use the tech, and every time you turn it on). Tesla FSD is improving over time, not declining. Finally, Tesla's crash safety scores outperform all other car companies by a huge margin, saving the lives of those in crashes (whether caused by FSD, the driver, or other drivers).

My opinion, for whatever it's worth: To say the technology can never replace a human driver is frankly dumb. It's already safer than a human driver as it stands (the data shows this), and we are still at the stage where the driver is ultimately responsible. Also, the "expert" says that relying on vision alone, without LiDAR, is the inherent fatal flaw of Tesla's tech... but human drivers rely almost entirely on vision to drive, which makes us no different from Tesla's technology... and humans cause far more deaths than this tech does (obviously we can also use hearing, but I'd argue that is not the reason behind the crashes seen in the videos). The video should highlight that this technology still isn't perfect, and that people should remember they are the ones ultimately responsible. Nevertheless, I'd like to see this technology be allowed to improve and ultimately replace human drivers, because I don't know about you, but I'm sick of hearing about drunk drivers killing others.

A robot like this can't get drunk, can't have bad days, and can't fall asleep at the wheel... why would you want to cherry-pick reasons not to responsibly allow this technology to advance? The producers are clearly biased and want to drive a narrative that this technology cannot be trusted, and I say shame on you for doing so. Use your platform better.
Source: youtube · Tag: AI Harm Incident · Posted: 2024-12-13T22:1… · Likes: 51
Coding Result
Dimension       Value
Responsibility  company
Reasoning       consequentialist
Policy          unclear
Emotion         outrage
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[{"id":"ytc_UgwpNDOGRitQ--mJc154AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugy6-D02lPdiI_9xe-B4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyWSnMxYTPhCixlU-l4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"}, {"id":"ytc_UgzJBeuzcor2yQgLs-94AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"}, {"id":"ytc_UgyL3ci_G78yrFmvXvZ4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgyaTJh9fLc7B3kzggZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzurZP0dELbeeJ4ZcZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgyzGO8EeBJry5B06wt4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugx6JcHb_lekKJdf2kh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"resignation"}, {"id":"ytc_UgxzRliabjDpFggjxEx4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"}]