Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
I use ai to give an image to my characters. I don't ever play it off as my own. …
ytc_UgyDIV5nb…
Oh god someone in my class saw my sans au ai chat and never forgot and won’t let…
ytc_UgwrEV_ck…
relying just on AI is mental, this is what happens when you don't know , seriou…
ytc_UgyTc76z2…
The difference is that human styles are formed through both reference (which AI …
ytr_UgxFaJ_UJ…
We are so worried about something we create destroying us that I think we need t…
ytc_UgwxJprc_…
The AI doesn't make quality content. It steals quality content from talented peo…
ytr_UgyOkvzW7…
There’s something profoundly stupid about purposely creating Robots/A.I. smart e…
ytc_UgwNyiggG…
Deleted my account this morning, but got a warning "your records may be retained…
rdc_o7w8aax
Comment
Honestly, this is video displays the best and worst of reporting. All in one video. Like having a former employee refer to 'FSD' as 'Autopilot'. How does he maintain credibility when they are entirely different technologies? One is C++ code the other is ML/AI. They both have different predictable and unpredictable failure points.
Thank you for keeping the driver accountable. But that ending: "Having the car do most of the driving for you, and requiring you to pay attention to make sure nothing bad happens, I don't think this is going to be a long-term technology that we're gonna keep in the cars" is so bad, its ridiculous.
Tesla's are going more miles per accident using this technology than people, why wouldn't it be a tech we keep? She looks like she couldn't export a pdf if you asked her to, she absolutely is not someone I want driving a computer on wheels. She won't be diligent enough to understand the techs strengths and weaknesses. I've never once recommended to an older person a Tesla. Legit asked this woman that wanted to know if I recommend my Tesla, "Are you slow or quick on your tablet? Because this is an iPad on wheels and if you don't feel tech savvy, you shouldn't drive one." She looked at me like I was rude, but I think there wasn't a more honest way I could have put it.
I feel safer using FSD and Autopilot, but NOT SAFE. As my car passes other cars on the road, and I'm in autopilot/FSD, I am staring into their car. If I see a phone in their hand (which has to be 1 in 8 cars, at least), I save the video of them and keep track of their car. I make sure if I am going faster than them, that I am not in their lane if we suddenly come upon stopped/slowed traffic. If they are passing me, then I will slow my car's speed to the speed limit - or 5 above - to make sure the idiot passing me on their phone gets as far down the highway as possible and that I never see them again. I couldn't do that in another car and nobody can honestly say that isn't me being safer because of this tech.
We need to make sure this tech doesn't pre-maturely allow hands off driving and distracted passengering, which could be prone with Level 3 & 4 autonomy. Not take tech that is helpful and useful away from drivers because some amount of people develop a false sense of trust.
Y'all didn't even cover the eye-tracking and steering wheel nags that Tesla's do to make sure you are paying attention. This is just straight lame reporting from either bad, lazy, or biased reporters and it is sad. I own two Tesla's and know our drivers need more accountability. But this silly goose reporting has no place at a reputable firm. Most Tesla drivers have an example of the car avoiding an accident. Why not even acknowledge that fact? There are so many videos online of it. I have two videos personally where FSD started to go and then braked at a green light and someone I didn't notice through the trees ran their red light. One of those would've hit driver side at 55+mph and probably killed me. The other would've hit passenger side at 40+mph, which thankfully had no one in it.
youtube · AI Harm Incident · 2024-12-14T23:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | mixed |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgzqEV4yT0aVnKybUuV4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy05XBUcoOZd3pAL1p4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugxbiph27soTmH5kRfR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugy2slWTY2HWoO4mp9d4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzQ8D0k3UaIP4qECEd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyU0Idc7l0q4HbBwch4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwBMko-9JWHDnXlS514AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzGlFU3Pw7VlBtSnCJ4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgwQTz9powzUITbYhwh4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyMQ80iEicXIiufX6N4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"mixed"}
]
```
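The raw response is a JSON array with one record per comment ID, coded on four dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such output might be parsed and validated before it reaches the coding table — note the allowed value sets below are inferred only from this sample, not from the project's actual codebook, so treat them as assumptions:

```python
import json

# Assumed value sets, inferred from the sample response above -- NOT the real codebook.
ALLOWED = {
    "responsibility": {"user", "company", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "liability", "regulate", "industry_self", "unclear"},
    "emotion": {"fear", "outrage", "approval", "resignation",
                "indifference", "mixed", "unclear"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only records whose every
    dimension takes a known value; malformed records are dropped."""
    records = json.loads(raw)
    return [
        rec for rec in records
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items())
    ]

good = '[{"id":"ytc_x","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"mixed"}]'
bad = '[{"id":"ytc_y","responsibility":"robot","reasoning":"mixed","policy":"none","emotion":"mixed"}]'
print(len(validate_codes(good)))  # record kept: all four dimensions are in ALLOWED
print(len(validate_codes(bad)))   # record dropped: "robot" is not an allowed responsibility value
```

Validating before ingest is what lets a dashboard like this surface an "unclear" code honestly instead of silently storing whatever string the model emitted.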