Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Yes ai Is destroying art but if you are using it to have an idea for art I under…" (ytc_Ugz2Me2uv…)
- "What I love most is that all the actual art looks significantly better than the …" (ytc_Ugzx5Z7pu…)
- "So...now they are saying that artist are royalty. I can't ser the issue there, I…" (ytc_UgwP4UN83…)
- "Okkkk when these robot turn on you , don't say oh shit Buddy . keep playing with…" (ytc_UgxUzy2R6…)
- "It‘s not enough to talk about the guard rails for AI, we need plans that can be …" (ytc_Ugw6L3zGs…)
- "Thanks for doing this AMA. I am a biologist. Your fear of AI appears to stem fro…" (rdc_cthuvw9)
- "This is totally possible! But I believe the base programming is why this is occu…" (ytc_UgzkWH-8-…)
- "What is this road? Its dark, there are no sidewalks to be seen nearby, no buildi…" (ytc_Ugx23zX0c…)
Comment
> +Thomas Smith There's another big ethical dilemma that this video doesn't address: the potential for self-driving cars to be hacked. Currently, it's fairly simple and easy for new models to be hacked and controlled, even when not self-driving. This could be a simple, horrifying way for people and organizations to kill anyone they don't like, and often without a trace. You hack the car, slam it into a building or off a cliff, and voila, victims killed. I imagine if we ever get a majority of vehicles to be self-driving, we will still need cars with human drivers for important politicians, heads of companies, etc. Anyone who might be threatened with assassination.
>
> Meanwhile, another dilemma, less deadly, but far more common will be passenger road rage at their own driverless systems, for being unacceptably slow, safe, and polite. I live in Slovakia where 70% of drivers will not stop at a crosswalk, and damn the consequences. How would these people react if their car stopped for them? And on a regular basis? If we all had self-driving cars, they would save our lives, and we would hate them for it.
| Field | Value |
|---|---|
| Source | youtube |
| Category | AI Harm Incident |
| Posted | 2015-12-10T08:3… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytr_Ugi-ra97OFAYf3gCoAEC.8A2x-6Y9iR39_jA1MigRw-","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytr_UggcjG7wPcXM-ngCoAEC.87ksLSYwmAW87lRqc_5nOt","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytr_UgjbGUooE19fn3gCoAEC.87ae9OwYcWP87aeSIvS21k","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"resignation"},
{"id":"ytr_UgibjtNUDEehjngCoAEC.87_AnhDBK0Q87_DW-CiC2P","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytr_Uggm5BdzwhyWVngCoAEC.87ZJkl4btdC87ZRqdCDAIY","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytr_Ugj-Xh3Fxwz1RXgCoAEC.87YwkNlHcCU87Zv0jNj-Ag","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytr_Ugj-Xh3Fxwz1RXgCoAEC.87YwkNlHcCU87Zx3NNhY6U","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytr_Ugj-Xh3Fxwz1RXgCoAEC.87YwkNlHcCU87ZxquYYaiQ","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytr_UggP-iFt14eaaHgCoAEC.87YkvCWMel-87Zi3ixQABR","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytr_UghURWjOQRHtGHgCoAEC.87XLJSTRT9v87clu7Fezdn","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
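A response like the one above can be turned into a lookup-by-ID structure with a few lines of Python. This is a minimal sketch, not the project's actual pipeline: the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) are taken from the sample response, while the short record IDs in the example data are hypothetical placeholders.

```python
import json

# Hypothetical raw model output in the same shape as the response shown
# above (the real IDs are much longer ytr_... strings).
raw_response = """[
 {"id":"ytr_abc","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytr_def","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]"""

# The four coding dimensions plus the comment ID, as seen in the sample.
REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_codes(raw: str) -> dict:
    """Parse the model output and index each record's codes by comment ID."""
    records = json.loads(raw)
    index = {}
    for rec in records:
        missing = REQUIRED_FIELDS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')} is missing: {missing}")
        index[rec["id"]] = {k: v for k, v in rec.items() if k != "id"}
    return index

codes = index_codes(raw_response)
print(codes["ytr_def"]["policy"])  # regulate
```

Validating the field set before indexing catches truncated or malformed model output early, which matters when the response is machine-generated JSON rather than hand-written data.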