Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This is radically over-simplified; understandably so, since it's a short video. But honestly, I feel this video is doing more harm than good by fear-mongering. Maybe that's not the intention, but that's the inference I made. The problem is that the programmers are not hard-coding in "ok, take out the dude without the helmet because it's safer." That's not something you'll find in the code... on any level whatsoever. If such an outcome occurred (which it most likely wouldn't if both the car and motorcycle were self-driving), it would be because the car was attempting to avoid the accident altogether (i.e., it swerved toward the motorcyclist because it's smaller and more likely to be missed). You're not going to find moral decisions in self-driving cars, only code whose singular purpose is to avoid the accident, period, regardless of whether the accident could or couldn't be avoided. That's no different than a human being with perfect (or close to perfect) reaction time.
youtube AI Harm Incident 2015-12-09T02:1… ♥ 1
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          none
Emotion         outrage
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UggTAra7ykO18HgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgiiIRzPV-PDJngCoAEC","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"fear"},
  {"id":"ytc_UghNHFfbScHAI3gCoAEC","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_Uggs6xSxQV1idHgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugh555atHjwB23gCoAEC","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgjGZiL-RQWZh3gCoAEC","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugj_T2kb-3J5iHgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgglL4SDgYq70ngCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UghQrXYx4XEWV3gCoAEC","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Uggozw99vhiuyngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
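A raw response like the one above can be parsed and sanity-checked before loading it into the coding table. The sketch below is a minimal example, not part of the original pipeline; the sets of allowed values are inferred only from the responses shown here and may be incomplete.

```python
import json

# Allowed values per coding dimension, inferred from the sample response above
# (assumption: the real codebook may define more categories).
ALLOWED = {
    "responsibility": {"none", "user", "developer", "company"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "ban"},
    "emotion": {"approval", "fear", "indifference", "mixed", "outrage"},
}


def parse_codings(raw: str) -> list:
    """Parse a raw LLM coding response and validate each record."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim}={value!r}")
    return records


# Hypothetical single-record example for illustration.
raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"none","emotion":"outrage"}]')
records = parse_codings(raw)
print(records[0]["emotion"])  # outrage
```

A record whose dimension value falls outside the inferred codebook raises a `ValueError` naming the offending comment id, which makes malformed model output easy to spot before it reaches the results table.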