Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
There is one question that needs to be fully solved before I will support robotaxis: responsibility
The first step should be a set of ethics guidelines that answer questions like: if the AI has to decide between killing 2 pedestrians or one person in the vehicle what should it do?
Then there needs to be a definition of responsibility based on the ethics guidelines, so if the vehicle followed ethical driving standards and someone died then it's an accident but if the vehicle didn't follow the "rules" then it's the company's responsibility.. but here it gets murky. If a human drives badly and someone dies then the driver is held responsible.. so who should be responsible when a Tesla does it? Elon Musk?
youtube
2026-04-02T06:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | mixed |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
[
{"id":"ytc_UgzxPq5sv8g1dPzz-Kd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxF-jMFDU627cs7o594AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugw6kggfjF_NX0mcGml4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugwz9sVn15HrMLlfrvF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzgyf1-OXl2x5PcfHt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
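The raw response above is a JSON array of per-comment codings, each keyed by comment ID with one value per dimension (responsibility, reasoning, policy, emotion) — the third entry is the one rendered in the Coding Result table. A minimal sketch of how such a batch response could be parsed and validated is below; the dimension names come from this page, but the allowed value sets and the `parse_codings` helper are assumptions for illustration, not the tool's actual code.

```python
import json

# Assumed value sets per dimension; only the values visible on this page
# are guaranteed to exist in the real coding scheme.
ALLOWED = {
    "responsibility": {"none", "company", "driver", "distributed"},
    "reasoning": {"consequentialist", "deontological", "none"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"approval", "outrage", "mixed", "indifference", "fear"},
}

def parse_codings(raw: str) -> dict:
    """Index a raw LLM batch response by comment ID, validating each dimension."""
    codings = {}
    for entry in json.loads(raw):
        comment_id = entry["id"]
        for dim, allowed in ALLOWED.items():
            if entry.get(dim) not in allowed:
                raise ValueError(f"{comment_id}: bad {dim!r} value: {entry.get(dim)!r}")
        codings[comment_id] = {dim: entry[dim] for dim in ALLOWED}
    return codings

# One entry from the response above, as a smoke test.
raw = ('[{"id":"ytc_Ugw6kggfjF_NX0mcGml4AaABAg",'
       '"responsibility":"distributed","reasoning":"deontological",'
       '"policy":"regulate","emotion":"mixed"}]')
coded = parse_codings(raw)
print(coded["ytc_Ugw6kggfjF_NX0mcGml4AaABAg"]["policy"])  # regulate
```

Validating against a closed value set at parse time catches the common failure mode where the model invents an off-scheme label, so a bad coding fails loudly instead of being stored silently.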