Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Do people actually love ai? Soulless ai slop? ... Uh every time I think people c…
ytc_Ugydfr2HC…
Why are these thing always hot woman they even gsve this one a set of tits like …
ytc_UgwF2Yc2T…
If you make AI "art" you officially qualify for the title of... proompter. Nothi…
ytc_Ugy0b1IIP…
Bruh if AI can write what you write, I have news for you about your writing skil…
ytc_UgyFjXpqB…
@bleachedout805 you know maybe AI will revive traditional art somewhat, and we'l…
ytr_Ugyj3WzG9…
This is a very misleading video. Saw no proof of driverless trucks causing probl…
ytc_Ugx_a0bby…
Yeah WTF happened?
People defensive about their own AI use?
Understandable if …
rdc_odiax5n
if AI art was trained from artworks where artists were fairly compensated, I thi…
ytc_UgzXlXpYB…
Comment
Tesla’s Full Self-Driving (FSD) is safer than the average human driver on a per-mile basis, according to Tesla’s internal data and independent analyses of crash rates. As of Q2 2025, FSD achieves 1 crash per 7.63 million miles driven, compared to the U.S. national average of 1 crash per 670,000 miles—a ~11x safety improvement. For fatalities, FSD’s rate is ~1 per 10–15 million miles, versus the U.S. average of 1 per 94 million miles (still ~6–9x safer).
youtube
AI Harm Incident
2025-10-22T14:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[{"id":"ytc_UgwvPLhlRk0qSqQjXrx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzuAZSgaG7ls2Mw37Z4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxcURLOJPFcbmfwhCp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
{"id":"ytc_UgwxewlIiwb4oT14LY14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyejCoE2dQxEafIaJB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugw6EC-bQnwDazllkEp4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzuNtJaaFEwatvsQ5x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugyux2RlLLKNjE0v8ON4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxKm5qAFE2OiEgF0QR4AaABAg","responsibility":"user","reasoning":"unclear","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxSzEh_OTKgKQmy2fZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}]
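The raw response above is a JSON array of per-comment codes, one object per comment ID with the four dimensions from the coding table. A minimal sketch of how such a response might be parsed and validated before use — the allowed category values below are inferred from the codes visible in this dump, not from an official codebook:

```python
import json

# Allowed values per dimension, inferred from the codes shown above
# (assumption — substitute the project's actual codebook).
ALLOWED = {
    "responsibility": {"company", "user", "government", "none"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "unclear"},
    "policy": {"regulate", "liability", "ban", "none"},
    "emotion": {"fear", "outrage", "approval", "indifference"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and keep only records whose
    values fall inside the expected codebook for every dimension."""
    records = json.loads(raw)
    return [
        rec for rec in records
        if all(rec.get(dim) in allowed for dim, allowed in ALLOWED.items())
    ]

# Example: one in-codebook record and one with an out-of-codebook value.
good = '[{"id":"ytc_x","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]'
bad = '[{"id":"ytc_y","responsibility":"alien","reasoning":"unclear","policy":"none","emotion":"fear"}]'
print(len(validate_codes(good)), len(validate_codes(bad)))
```

Records failing validation are dropped rather than corrected; in practice one might instead flag them for re-prompting or manual review.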