Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Expression is sacred". Exactly, so why are you the one using AI?. AI is not sac… (ytc_UgyBP0iGq…)
- I would also point out, because they've read everything and they are good at rol… (ytc_Ugx4fI-uF…)
- @matthew-z5l Appreciate your reply here’s the thing. Most people don’t realize … (ytr_UgzGWhgzP…)
- Looks garbage. Ai could do better Also, this trend was 100% fueled by Ghibli mar… (ytc_UgwBZhGXt…)
- Neil! I want the Jetsons cartoon life, not this silly AI silly thing that turns … (ytc_Ugy_FTJu0…)
- I just can't seem to reveal what i made my ai endure, AND NONE OF YOU WILL KNOW… (ytc_UgwgUhCuu…)
- I love this ai and i believe the decision of the AI that they train for somethin… (ytc_UgxpIna_m…)
- We each have our own experience of being self-aware, of experiencing colors, tha… (ytc_UgytREZkZ…)
Comment

> What a tragedy the loss of life. The context is over 40,000 people in the US every year die in car accidents. I own a 2025 Tesla model three I've done 17,000 miles in full self driving and yes, you need to supervise the car the technology driving 100%. I would like to point out that my car has kept me out of several car accidents because it could respond faster than I could. The data is already showing that FSD supervised has far fewer accidents than the average driver on the road. The affect is fewer people are getting hurt or dying when this technology is used. The accident avoidance system works extraordinarily well. I would like to see 60 Minutes do another piece where they look at the entire spectrum of the technology because the technology is already saving lives. This technology has been completely redone since 2019. Yes, it still has issues and it still needs to be supervised. With that said on that it is saving injuries and lives.

youtube · AI Harm Incident · 2025-10-22T18:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[{"id":"ytc_UgwvPLhlRk0qSqQjXrx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzuAZSgaG7ls2Mw37Z4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxcURLOJPFcbmfwhCp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
{"id":"ytc_UgwxewlIiwb4oT14LY14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyejCoE2dQxEafIaJB4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugw6EC-bQnwDazllkEp4AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzuNtJaaFEwatvsQ5x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugyux2RlLLKNjE0v8ON4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxKm5qAFE2OiEgF0QR4AaABAg","responsibility":"user","reasoning":"unclear","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxSzEh_OTKgKQmy2fZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}]