Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I think Tesla's biggest fault was that they kept letting a user with that many automatic-disable events use the feature. You could argue that McGee should never have been allowed to use AutoPilot after so many strikes.
Back when FSD was first being released to Model 3 owners, you had to opt into Tesla's Safety Score system to even get a chance at access. If I remember correctly, you had to have around a 90%+ safety score to have a likelihood of getting access. That system sounds like it would've helped here, but it also had the problem of being incredibly restrictive for people who paid for FSD. I paid for it on my old Model 3, and it took me a while to get access. I think I had an average of around 85%, and I've had zero accidents and zero tickets in over 20 years of driving. The problem was that if you braked or accelerated harder than what Tesla deemed acceptable, the system would ding you for it. It was so easy to get hit for things that were somewhat out of your control that people were gaming it by signing up and going out in the middle of the night to drive on the barren highways. The idea was that you could set it to use AutoPilot for around 100 miles, and any hard braking due to AutoPilot would *not* count against you. So you just drove enough for the system to get enough information about you, and you'd likely end up with a perfect 100% score.
Source: youtube · Incident: AI Harm Incident · Posted: 2025-08-15T18:5… · ♥ 46
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | industry_self |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytr_Ugxv1KqFmv4nmdmGM5F4AaABAg.ALrCgWHmHROALrHOVnSTJr","responsibility":"distributed","reasoning":"mixed","policy":"liability","emotion":"outrage"},
{"id":"ytr_UgwK9BAnYJiStXntL0l4AaABAg.ALrCKN4hqx7ALrK2GZsLio","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytr_Ugy-wc-6U2A1Jl0St4F4AaABAg.ALrCFYlp7hUALreadi6edv","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytr_UgxZ2O-k-Jju9XGNmZt4AaABAg.ALrCCJsSnifALrF1Tmd7mm","responsibility":"distributed","reasoning":"mixed","policy":"liability","emotion":"approval"},
{"id":"ytr_UgxZ2O-k-Jju9XGNmZt4AaABAg.ALrCCJsSnifALrFPHs7Wh3","responsibility":"company","reasoning":"deontological","policy":"industry_self","emotion":"mixed"},
{"id":"ytr_UgwRryc8nBP0waW8VAV4AaABAg.ALrBo0rFly3ALrLfGT7u4s","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytr_UgxPgSpdpfppnKSF_OB4AaABAg.ALrBjX9f_WsALrCspgJHwT","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytr_UgxPgSpdpfppnKSF_OB4AaABAg.ALrBjX9f_WsALrGLXizNfN","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytr_UgzmoUT3xNeLGltDtPh4AaABAg.ALrBVRvYljKALrDznthT9u","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytr_UgzmoUT3xNeLGltDtPh4AaABAg.ALrBVRvYljKALrEntAcpZY","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
```
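A response like the one above has to be parsed and validated before its codes can be trusted: each record must carry a comment `id` and a value for every coding dimension. A minimal sketch of that validation step in Python is below. The allowed value sets are inferred only from the codes visible on this page (the actual codebook may define more categories), and `parse_llm_response` is a hypothetical helper, not part of any real pipeline.

```python
import json

# Allowed values per coding dimension. These sets are assumptions inferred
# from the codes observed in the table and JSON above; the real codebook
# may contain additional categories.
SCHEMA = {
    "responsibility": {"company", "user", "government", "distributed", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"outrage", "approval", "resignation", "mixed", "unclear"},
}

def parse_llm_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # Every code must be tied back to a comment ID to be usable.
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        # Keep the record only if every dimension has an allowed value.
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(rec)
    return valid

example = '[{"id":"ytr_example","responsibility":"company",' \
          '"reasoning":"deontological","policy":"regulate","emotion":"outrage"}]'
print(parse_llm_response(example))
```

Records with a misspelled or out-of-vocabulary code are silently dropped here; a production version would more likely log them for re-coding instead.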