# Raw LLM Responses
Inspect the exact model output for any coded comment.
## Random samples

- "My phone can’t even do voice to text properly…miss me with “AI is going to end t…" (ytc_Ugw250M2R…)
- "I too would like reasonable people to have so or more words on how tech impacts …" (ytc_UgzCxDqgN…)
- "A computer isn't \"the police,\" it doesn't have human biases, that's specifically…" (rdc_jg05s7n)
- "People forget we already went through automatization of manual labor in the past…" (ytc_Ugw14JAMQ…)
- "AI won't replace the guy who's going to remodel my bathroom and he's going to ma…" (ytc_UgyLgy8be…)
- "Nothing about AI is normal. We will really have screwed up if AI does not elimin…" (ytc_UgwCw5ZtY…)
- "What you said is nonsense, and has nothing to do with anything. 1. Copyright exi…" (ytr_UgxF1psXp…)
- "Maybe the fear of AI modeling/deceiving verifiers misses the point. The very act …" (ytr_UgwePVVbM…)
## Comment
FSD, at least for the last 12 to 18 months, when properly supervised by a competent driver is many times safer than most human drivers. Period.
The unsafe operation of any car can result in injury and death. Period.
With the advent of version 14.2.1 of FSD around Thanksgiving, it seems to me that FSD alone, even without supervision, drives like a patient, courteous, confident and attentive human driver, probably safer than most human drivers even were it not being supervised.
Anything you are paid with which to pad your shyster wallet that delays Tesla bringing more of this quality of autonomy and safety to American roads is blood money.
And btw, is that the Florida case where a fellow was digging around on the floorboard to get the phone he dropped, while he had Autosteer (which was not FSD) holding his car in the lane and a pedestrian was killed?
He was not properly operating the vehicle, of course bad things happened. It was sad. Tragic. Human.
It is with the hope of eliminating such human error on our roads that Tesla and other companies are attempting to develop and deploy autonomous driving.
youtube · AI Harm Incident · 2025-12-12T03:1…
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
## Raw LLM Response

```json
[
{"id":"ytc_UgyhSztiv_TZd8uEF-54AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzBoxYsnuaGB591nnJ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugy6-JWMgh2ppA5iRkp4AaABAg","responsibility":"government","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugx17gZzW9FDASpf7bJ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgxnfmUUKxNGe89WeWN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugzl60eMyvccDhvnZ0J4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgydO5vV4t_8xi3GQ-B4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"ytc_UgzqkAHOZVQtHRWETu14AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzZ9gg-G7XknLeq6iF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgzCkxR1f3mz0QAMItN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"}
]
```
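Each entry in a raw response batch carries the same four coding dimensions shown in the table above. As a minimal sketch of how such a batch could be sanity-checked before use, the snippet below validates entries against per-dimension vocabularies; note that these value sets are inferred only from the entries visible on this page and are assumptions, not the project's full codebook.

```python
import json

# Allowed values per coding dimension (hypothetical: inferred from the
# entries shown on this page, not from the project's actual codebook).
SCHEMA = {
    "responsibility": {"government", "user", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue"},
    "policy": {"ban", "regulate", "liability", "industry_self", "unclear", "none"},
    "emotion": {"outrage", "resignation", "approval", "indifference", "mixed"},
}

def validate(raw: str) -> list[str]:
    """Return a list of problems found in one raw LLM response batch."""
    problems = []
    for i, entry in enumerate(json.loads(raw)):
        if "id" not in entry:
            problems.append(f"entry {i}: missing id")
            continue
        for dim, allowed in SCHEMA.items():
            value = entry.get(dim)
            if value not in allowed:
                problems.append(f"{entry['id']}: bad {dim} value {value!r}")
    return problems

# A well-formed single-entry batch produces no problems.
raw = '[{"id":"ytc_x","responsibility":"government","reasoning":"virtue","policy":"ban","emotion":"outrage"}]'
print(validate(raw))  # []
```

A check like this catches the usual failure modes of structured LLM output (missing keys, invented labels) before the values are written into the coded dataset.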