Raw LLM Responses
Inspect the exact model output for any coded comment. Look a comment up by its ID, or browse the random samples in the table below.
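For readers scripting against an export of these records, a minimal lookup sketch follows; the file name `coded_comments.json` and the flat record layout are assumptions for illustration, not the tool's actual storage.

```python
import json

def lookup(comment_id: str, path: str = "coded_comments.json"):
    """Return the coded record for one comment ID, or None if absent."""
    # Assumes a flat JSON array of records, each carrying an "id" field.
    with open(path) as f:
        records = json.load(f)
    return next((r for r in records if r["id"] == comment_id), None)

# Example (ID taken from the record shown further down this page):
# record = lookup("ytc_UgyUpz-ikMpA4Z4aS3x4AaABAg")
```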
| Sample (truncated) | Comment ID |
|---|---|
| This person is probably part of the AI revolution. You can't believe anything d… | ytc_UgwCsTe9O… |
| This is missing the deeper aspects of a relationship. Although I suppose in this… | rdc_lzalstu |
| The whole world should live like the Amish people do. Get rid of all computers, … | ytc_UgwGQDJf_… |
| humans drawing something in the style of others: "That's how art works, we are a… | ytc_UgwOAQuTN… |
| seems like sm just letting it happen they must have people in sm that perv for I… | ytr_UgwT2FH3v… |
| CNN...bow your head in shame ... listen to yourself .. how untruthful and unjust… | ytc_Ugz6B9dTX… |
| Plumbers will not even be needed in the future, im a 3d printer/designer and hav… | ytc_UgyR2wK9P… |
| My usual writing level is about 11th grade unless I tone it down. I don’t think … | rdc_nsic6ce |
Comment
Your point about radar is well taken, but I also think there is a more fundamental problem.
I keep thinking about those "who do you kill" trolley problems. On one track there is a 21 year old new parent, on another there are two 96-year-olds. You're the switch operator. Who do you kill and who do you save? I've always thought that the purpose of those problems isn't to help people make clearer ethical decisions, but to make elites more comfortable with killing people for a putative greater good. By necessity, engineers working on an autopilot system for a car are doing trolley problems. Because there is a baseline of people who will die in traffic accidents. Tesla's engineers, knowingly or unknowingly, are in the business of deciding who is going to live and who is going to die. This is necessarily going to be based on far less information than a live driver has. A system that results in an odd dead motorcyclist (statistically very few) is preferable to a system that no one uses because it brakes for apparently no reason on the highway.
When you think of night driving on a dark highway in a car (less so a motorcycle, where riders tend to hyperfocus), your mind makes so many mistakes of the kind that you identify. You'll think that you're seeing car that's far away, until it quickly becomes a motorcycle that is closer, and you immediately respond to that. Highway lines play tricks on you until you get closer to them, etc. The mind (hopefully) corrects itself, because it's more complex than an algorithm, more experienced, and because it sets its priorities differently. An autopilot reacts to probabilities, and if you were to make it err too far on the side of caution, it would become impossible to use. When driving, you can make a decision to tap on the brake when there's something in front of you and you don't know what it is. An algorithm doesn't really know what anything is. The risk management decisions are being made by engineers in an office with various career and economic considerations, not someone who is actually in a moving vehicle. Is it worth tapping on the brake and not approaching whatever is in front of you? Probably. But your mind probably has better pictures than Tesla does. What if you were riding in a car with someone who stepped on the brake 10 times as often? You'd probably be annoyed.
| Field | Value |
|---|---|
| Platform | youtube |
| Category | AI Harm Incident |
| Posted | 2022-09-03T18:3… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | contractualist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_Ugwmo6lY9J1q_QZl8gd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgzwPTtD2Zjw1LEcwwt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgzVLtwrHKaqztXd0nd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyfFgVDeFmF8-78RJN4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugz97HBu64simFQ0ubV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugym076uQoHtFNwRHrF4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugzg6JxTemOhQIqOedR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyUpz-ikMpA4Z4aS3x4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxkQ420yOHFabLDkKt4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwBFe3HYK6GShFM1vV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}
]
```
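One plausible downstream step (a sketch, not the project's actual pipeline) is to parse this array and drop rows whose labels fall outside the codebook; the allowed value sets below are inferred from the samples on this page and may be incomplete.

```python
import json

# Assumed label sets, inferred from the sample output above; the real
# codebook may define additional values.
ALLOWED = {
    "responsibility": {"company", "user", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "contractualist"},
    "policy": {"regulate", "ban", "liability", "industry_self", "none", "unclear"},
    "emotion": {"approval", "outrage", "mixed"},
}

def parse_response(raw: str) -> list[dict]:
    """Parse one raw LLM response, keeping only rows that fit the schema."""
    rows = json.loads(raw)  # the model is asked to return a JSON array
    valid = []
    for row in rows:
        if not isinstance(row, dict) or "id" not in row:
            continue  # malformed row: skip rather than guess
        if all(row.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(row)
    return valid

# Example using the coded row that matches the record displayed above.
raw = ('[{"id":"ytc_UgyUpz-ikMpA4Z4aS3x4AaABAg","responsibility":"distributed",'
       '"reasoning":"contractualist","policy":"unclear","emotion":"mixed"}]')
for row in parse_response(raw):
    print(row["id"], {k: v for k, v in row.items() if k != "id"})
```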