Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- ytc_Ugwuc8tQU…: "If you truly are deluded to think chatgpt is good enough for taking over jobs at…"
- ytc_Ugy4bfTO6…: "In a totally atheistic worldview, intelligence itself causes consciousness. The …"
- ytc_UgxDTmJJD…: "Why create ai ,people have made it this far without help ,why create a monster t…"
- ytc_Ugx3QptkN…: "Why do you refuse workers rights and not giving them jobs? "Money!" Why do you k…"
- ytc_UgxUpucl4…: "as an artist who has used AI art generators before I have gotten so angry at the…"
- ytc_UgwwXqMmJ…: "I understand using AI for something like descriptions or something but seriously…"
- ytc_Ugz1PipYz…: "I'm someone who likes AI and I'm interested in what it can do for humanity, and …"
- rdc_o8bfszt: "honestly same. if the robot wants to sit in my 3pm standup and explain why sprin…"
Comment
10 years ago I wrote a paper in university concerning the ethical implications and liabilities in the event of a crash involving an autonomous vehicle. In this paper I attempted to explain how difficult it is to definitively point a finger at who's responsible. Is the "driver" responsible? Is it the manufacturer who is at fault? Maybe even the programmer who designed the self-driving algorithm? It's a whole can of worms that is decidedly complex and ethically challenging to answer. My prof dismissed the thesis of my paper as being a dumb idea as it's not relevant or applicable. I failed that assignment.
Well here we are! Hate to say I told you so... But I told you so.
Writing software that can 100% make an informed decision is incredibly hard. I can't for the life of me understand why having more data available through additional sensors could ever make it worse at making an informed decision. Harder? Absolutely. More processing and higher costs will be incurred without a doubt. As long as you have quality data, which in this case with expensive equipment is to be expected, then more of it will always (well, usually) help you make a better informed decision. As mentioned in the video, the self-driving software at any given moment needs to make a "yes" or "no" decision (over-simplification but good enough as an example). The more data you have, the more variables can contribute to making a safer decision.
youtube
AI Harm Incident
2022-09-03T17:3…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | mixed |
| Policy | liability |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgxLKj91_yUZcmtzKP54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwfuReyxukU6hInDXB4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgzCvmcpbSIrarILhTl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwVN0ZCKCao6_Zjh414AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxkerxGD8MNCT62-Vl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwhqePNsfSq-qGeLD54AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgxgKW9FglOYf1rTYZt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_Ugw3hys9fA5p-pXqASB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyUQtiA2BozN-OGUjN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugwuu7-b-u-guWrgrxl4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
```
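Each record in a raw response carries the same four coding dimensions shown in the result table above (responsibility, reasoning, policy, emotion). A minimal sketch of validating such a response is below; the allowed value sets are inferred only from the sample output visible here, so the real codebook may define more categories than these.

```python
import json

# Allowed values per coding dimension, inferred from the sample
# response above (an assumption; the actual codebook may be larger).
ALLOWED = {
    "responsibility": {"none", "distributed", "ai_itself", "company", "developer"},
    "reasoning": {"mixed", "consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "liability", "regulate"},
    "emotion": {"indifference", "mixed", "approval", "fear", "outrage", "resignation"},
}

def validate(raw: str) -> list[dict]:
    """Parse a raw LLM response and check every record's coded values."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim} value {rec.get(dim)!r}")
    return records

# Hypothetical single-record response for illustration.
raw = ('[{"id":"ytc_example","responsibility":"company",'
       '"reasoning":"virtue","policy":"none","emotion":"outrage"}]')
print(len(validate(raw)))  # → 1
```

A check like this is useful before storing coded results, since an LLM can emit a value outside the codebook and silently corrupt downstream tallies.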