Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect:
- "Seems your issue is less about AI and more about, you know, plagiarism, lying, e…" (ytc_UgxNED3TO…)
- "I remember someone arguing that art was "something to profit off of" and "anythi…" (ytc_UgwLXsQPv…)
- "This time, it´s really the begining of the end, for the world! People are NOT re…" (ytc_Ugwe1eyLB…)
- "AI prompted people to build stronger more intelligent AI. It does not care about…" (ytc_Ugw0O2kTX…)
- "AI needs a human organization regulating its creators and its uses, just like a …" (ytc_UgwaLTEVB…)
- ""What if AI goes rogue, and undermines global stability ?" We don`t need Artific…" (ytc_UgzqhJgwo…)
- "You are correct advance Ai will diagnose a much better than human And about som…" (ytr_UgyagXBBf…)
- "It was flooding in San Diego because of rain. Think maybe you can tone down the …" (rdc_fn5nv5d)
Comment
6:40 While I'm willing to trust you if you gave medium-quality evidence that Tesla's cars are overall more dangerous than humans, your "two cyclists are dead" argument is a huge no-no.
If 10'000 drivers kill with a 3% chance and 10'000 AI kill with a 2% chance, that's still 30, and 20 kills respectively. One is clearly better than the other. Such statements without comparison (a.k.a. a Baseline) are highly irritating to me.
By demonstrating that you do not consider ratios, you weaken your whole argument and your personal.
Hospitals kill many people every day; it's one too many; let's remove hospitals! >:(
You get my point...
Don't make bad arguments, especially not when you cover them with cream of emotional manipulation, it makes you look bad.
I personally believe that Teslas might be worse than humans, but I don't have the will nor time to check that out. I have better to do. I wished your video would have enlightened me on that point and given evidence.
youtube
AI Harm Incident
2022-10-02T22:5…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgwnBYHLKEqXh_e-0mB4AaABAg","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwpSsLV-ro0_E5CHLt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugxo4o9yo2hLxj-msV94AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwxZIr0zAr7vk6nDop4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgwnldreHGeXyT2ZQKh4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgzLw2H82mWnX4OXMqJ4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwBPxs3sJxGmbQlV3t4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugyls7BwIgpDs0M93ih4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwN8uZBJjWtqseMS114AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgwlPDycl5TdNN0xkEl4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}
]
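The raw response above is a JSON array with one object per comment, each carrying the four coding dimensions (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such a batch response might be parsed and schema-checked before it lands in a coding-result table — the allowed values below are inferred from the codes visible in this dump, not from the project's actual codebook:

```python
import json

# Hypothetical schema: allowed codes per dimension, inferred from this dump.
SCHEMA = {
    "responsibility": {"company", "developer", "user", "government", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"ban", "regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"outrage", "fear", "indifference", "mixed", "unclear"},
}

def parse_llm_response(raw: str) -> list[dict]:
    """Parse a raw LLM batch response, keeping only rows whose codes
    are all within the allowed schema values."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if not isinstance(row.get("id"), str):
            continue  # every row must carry a string comment ID
        if all(row.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(row)
    return valid

# Illustrative input: one in-schema row, one with an out-of-schema code.
raw = '''[
 {"id":"ytc_a","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
 {"id":"ytc_b","responsibility":"alien","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]'''
coded = parse_llm_response(raw)
print(len(coded))  # → 1 (only the in-schema row survives)
```

Rows that fail validation would typically be queued for re-prompting or manual coding rather than silently dropped; the filter here is just the simplest possible policy.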