Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- Thanks, Shiv—you've finally hit on the mother of all truths, the real news we've… (ytc_Ugw7usYqP…)
- You bring up an interesting point! In the video, Sophia mentions striving to emb… (ytr_UgzzUC_r8…)
- "lets automate everything except for the higher ups making all the dough" "oh sh… (ytc_Ugw69yiVg…)
- So deepfakes are shitty I know but my question is where is the line drawn? AI de… (ytc_Ugyvb8j5B…)
- I'm a student mathematician doing my final year research project and it SCARES m… (ytc_Ugzf2HmYF…)
- Can you try Gemini with deep research and compare this? I really like what Goog… (rdc_mbobknq)
- I get the reference! It’s interesting to think about how AI can evolve, but in o… (ytr_UgwBSudWL…)
- I think that what these scientists forget is that to be intelligent means to hav… (ytc_UgzoFtm1c…)
Comment
@TED-Ed You seem to misunderstand one thing: unavoidable accidents aren't an ethical question at all. All that counts is minimizing harm. A programmer would never let the system decide to kill anybody, so your idea of kicking others off the road to protect the car's own passengers is a typical human error. If you cannot think completely rationally in such situations, you should not judge the programming, because you don't seem to understand how decisions in programming are made. [This is not meant disrespectfully. I am a programmer, and I would say your fears are not going to play any role in such software.]
The solution for your scenario, from the point of view of a well-programmed self-driving car, would be to try short braking, changing lanes into the best-matching gap on the left or right, and accelerating so as not to cause a rear-end collision. When software can check the environment 10+ times per second, it is at least 3 to 5 times faster and more accurate (at every one of the several actions needed here) than a highly attentive and experienced human driver. You cannot imagine what opportunities for avoiding accidents this creates...
Your scenario in particular should really *never* end in an accident with a self-driving car, because a car which keeps a safe distance at all times, and which has no significant response time, *can stop in time* for dropped cargo under nearly any conditions. It is simply human response time which regularly leads to such accidents. So in the end, even if changing lanes is not possible, or the risk of confusing other drivers by doing so is too high, it would be extremely safe to stop the car by emergency braking; maybe this would be the best solution in all cases.
BUT, you are right, there *will* be unavoidable accidents, just as there are for human drivers. You may blame a human driver for not having done everything possible in such a situation. But I would never think of blaming the programmer or producer of a car who can prove it does everything possible to reduce harm. Why? Because that means NO human driver could have reacted any better. There is only one exception: a software failure. That would be very tragic, but even then, the recorded data is a chance to guarantee(!!!) that such a failure never occurs again, anywhere. And that chance is, again, something no human being can ensure.
*Conclusion:* _It is ethical to avoid accidents. It is ethical to minimize harm when accidents are unavoidable. But it is never a question of ethics to decide between potential victims._
youtube · AI Harm Incident · 2016-10-18T20:1… · ♥ 15
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:59.937377 |
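For reference, the coding dimensions and values that appear in this tool can be summarized as a small schema. The Python sketch below is an assumption inferred from this page alone: the value sets are just those visible here (the four dimensions in the table plus the codes in the raw response below), with "unclear" as the fallback shown when no definite value was recorded. The actual codebook may define more categories, and the class name `CodedComment` is hypothetical.

```python
from dataclasses import dataclass

# Value sets observed on this page; the real codebook may allow more.
# "unclear" is what the Coding Result table shows when no definite
# value was recorded for a comment.
RESPONSIBILITY = {"none", "developer", "company", "user", "unclear"}
REASONING = {"consequentialist", "deontological", "mixed", "unclear"}
POLICY = {"none", "ban", "liability", "unclear"}
EMOTION = {"indifference", "approval", "outrage", "mixed", "unclear"}

@dataclass
class CodedComment:
    comment_id: str  # prefixed IDs as seen above: "ytc_...", "ytr_...", "rdc_..."
    responsibility: str = "unclear"
    reasoning: str = "unclear"
    policy: str = "unclear"
    emotion: str = "unclear"
```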
Raw LLM Response
[{"id":"ytc_UgjGy_ree2B0EHgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugj96NpyN-f2BXgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UghqMvbGky59jHgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugi_k_2d8FQ3c3gCoAEC","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugg_qQYiL1e7ZngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UggUGDnRAEQYy3gCoAEC","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UggfRtqOpBkxgHgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UggNnXWdPpcRW3gCoAEC","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugjqog_GKULDRHgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UghYlkS6IWtLL3gCoAEC","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"})