Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
@Furiends GPT 3 was trained on 45 terabytes. In 2021, the overall amount of data…
ytr_UgxAafv6R…
Running these things costs a ton of money. They’ve been getting investors to fu…
rdc_n82fopg
@thedarter That's what they are trying to say. They are saying that even if AI c…
ytr_UgydLqvxV…
These are super intelligent human beings talking about AI personhood. These are …
ytc_UgyICTlyH…
Yea, no shit. Very few people have the knowledge, the data, and the specialized …
rdc_ohwribt
It seems eerily stupid that tech billionaires think AI is somehow going to go to…
ytc_Ugx_Y1KR2…
This sounds less like an "AI stole my job" story and more of a "I trusted the wr…
ytc_UgzlzyJCE…
This stupid robot Sophia should be eliminated like right now 🔥 because I see som…
ytc_UgymcybfH…
Comment
both Tesla and the driver are responsible. Driver should not have been pressing on the accelerator, overriding full self drive so he wasn't allowing the car to attempt to drive itself. And the Tesla should never let a human override the self driving computer like that to blow through a stop sign and strike a vehicle without even applying emergency brakes. All of this is tragic and avoidable and stupid.
Edit - and he got THAT MANY strikeouts? Omg. This guy was an accident waiting to happen. That is absurd. I've gotten 2 strikeouts total over the span of 2 years. It's hard to get one if you're operating the car sensibly. Tesla should perma-ban people like this from using full self drive. And this driver in particular has absolutely no right to say "I didn't know it couldn't drive itself" because the car literally screams at you that you have to pay attention and monitor it when you get a strikeout. He knew it was dangerous. Tesla knew it was dangerous.
youtube
AI Harm Incident
2025-08-15T19:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_UgwuRx9UpPhP587tdo14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
{"id":"ytc_UgzSjj9Tp60Cr89I_tZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_Ugy495fkc9ChMossIzB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugyk1L8QXTLa0ZHUmjh4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugw6Jio8EXR8fpft5eR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwRgotJplF-O_rekRx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxwOlyGLFQVbRUGKN54AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwY0Ozrgn3a-9CpDex4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx9UWRQflY_Lol5RJp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwKsTeKHXeqYl2q-kd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"resignation"}]
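A raw response like the one above is a JSON array of per-comment code objects, and it has to be parsed and validated before the per-dimension values can populate the result table. A minimal Python sketch, assuming the record shape shown above; the helper name `parse_codes` and the empty-mapping fallback (which a caller could render as all-"unclear", as in the table) are illustrative, not part of the tool:

```python
import json

# Two records copied verbatim from the raw response above, used as sample input.
RAW = '''[
{"id":"ytc_UgwuRx9UpPhP587tdo14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
{"id":"ytc_UgwY0Ozrgn3a-9CpDex4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]'''

# Keys every record must carry, per the dump above.
REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}


def parse_codes(raw: str) -> dict:
    """Parse a raw LLM coding response into {comment_id: codes}.

    Returns an empty mapping when the payload is not valid JSON, so a
    caller can fall back to coding every dimension as "unclear".
    """
    try:
        records = json.loads(raw)
    except json.JSONDecodeError:
        return {}
    out = {}
    for rec in records:
        # Skip malformed records rather than guessing at missing dimensions.
        if not isinstance(rec, dict) or not REQUIRED_KEYS <= rec.keys():
            continue
        out[rec["id"]] = {k: rec[k] for k in REQUIRED_KEYS - {"id"}}
    return out


codes = parse_codes(RAW)
print(codes["ytc_UgwY0Ozrgn3a-9CpDex4AaABAg"]["policy"])  # none
```

Skipping malformed records instead of raising keeps one bad object (or a truncated response) from discarding the whole batch of codes.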