Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "AI would be used best to replace manager and administrators, you know, repetitiv…" (`ytc_UgxyXPeCH…`)
- "How is it exactly destroying ai? Would have been better if peoplemade dtiys for …" (`ytc_Ugyvllis8…`)
- "At the end, I agree with you Hank. It is very easy to be convinced when listenin…" (`ytr_UgwAR-miK…`)
- "Its so shocking to see so many people falling for this stuff. Ai is a marketing …" (`ytc_UgyyzgN4O…`)
- "Answer is: YES IT IS.Especially with AI improving everyday,everybody will be abl…" (`ytc_UgyPBZJk8…`)
- "No clue on what you guys talk - if no jobs no consumers … then for whom are you …" (`ytc_UgwN0qW_s…`)
- "Are you generalizing artists fearing ai art, or are you mentioning those \"artist…" (`ytr_Ugx_IacPU…`)
- "This is the reason, why I loath the stupid argument of \"women cannot assess you …" (`ytc_UgzeoF5Re…`)
Comment
> This is radically over-simplified; understandably so, since its a short video. But honestly, I feel this video is doing more harm than good by fear-mongering. Maybe thats not the intention, but thats the inference I made.
>
> The problem is that the programmers are not hard-coding in "ok, take out the dude without the helmet because its safer." Thats not something youll find in the code...on any level whatsoever. If such an outcome occurred (which it most likely wouldnt if both the car and motorcycle were self-driving), it would be done on the conditions that the car was attempting to avoid the accident all together (ie it swerved toward the motorcyclist because its smaller and more likely to be missed). Youre not going to find moral decisions in self-driving cars, only code whose singular purpose of existence is to avoid the accident, period; regardless if the accident could or couldnt be avoided. Thats no different than a human being with perfect (or close to perfect) reaction time.
youtube · AI Harm Incident · 2015-12-09T02:1… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
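The four coding dimensions in the table above can be captured in a small record type. A minimal sketch in Python; the class name is an invention, and the example values listed in comments are only those observed elsewhere on this page, not necessarily the full codebook:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CodedComment:
    """Hypothetical record mirroring one row set of the Coding Result table."""
    comment_id: str
    responsibility: str  # observed values: "developer", "user", "company", "none"
    reasoning: str       # observed values: "consequentialist", "deontological", "virtue", "mixed"
    policy: str          # observed values: "none", "ban"
    emotion: str         # observed values: "outrage", "approval", "fear", "indifference", "mixed"

# The coded comment shown above, as a record:
example = CodedComment(
    comment_id="ytc_UggTAra7ykO18HgCoAEC",
    responsibility="developer",
    reasoning="consequentialist",
    policy="none",
    emotion="outrage",
)
```

A frozen dataclass keeps each code immutable once assigned, which is convenient when building lookup tables over many coded comments.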
Raw LLM Response
```json
[
{"id":"ytc_UggTAra7ykO18HgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgiiIRzPV-PDJngCoAEC","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"fear"},
{"id":"ytc_UghNHFfbScHAI3gCoAEC","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_Uggs6xSxQV1idHgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugh555atHjwB23gCoAEC","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgjGZiL-RQWZh3gCoAEC","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugj_T2kb-3J5iHgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgglL4SDgYq70ngCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UghQrXYx4XEWV3gCoAEC","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_Uggozw99vhiuyngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
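The raw response is a JSON array of per-comment codes with a fixed set of keys. A minimal sketch of parsing such a batch and indexing it by comment ID, assuming the field names shown in the response above; the key check that drops malformed records is my own validation choice, not something the tool is known to do:

```python
import json

# Two records copied verbatim from the raw response above.
RAW = """[
 {"id":"ytc_UggTAra7ykO18HgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_UghQrXYx4XEWV3gCoAEC","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]"""

EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codes(raw: str) -> list[dict]:
    """Parse an LLM coding response, keeping only records with the expected keys."""
    records = json.loads(raw)
    return [r for r in records if isinstance(r, dict) and set(r) == EXPECTED_KEYS]

codes = parse_codes(RAW)
# Index by comment ID, supporting per-comment lookup as on this page.
by_id = {r["id"]: r for r in codes}
```

Indexing by `id` makes retrieving any coded comment an O(1) dictionary lookup.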