Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
10 years ago I wrote a paper in university concerning the ethical implications and liabilities in the event of a crash involving an autonomous vehicle. In this paper I attempted to explain how difficult it is to definitively point a finger at who's responsible. Is the "driver" responsible? Is it the manufacturer who is at fault? Maybe even the programmer who designed the self-driving algorithm? It's a whole can of worms that is decidedly complex and ethically challenging to answer. My prof dismissed the thesis of my paper as being a dumb idea as it's not relevant or applicable. I failed that assignment. Well here we are! Hate to say I told you so... But I told you so. Writing software that can 100% make an informed decision is incredibly hard. I can't for the life of me understand why having more data available through additional sensors could ever make it worse at making an informed decision. Harder? Absolutely. More processing and higher costs will be incurred without a doubt. As long as you have quality data, which in this case with expensive equipment is to be expected, then more of it will always (well, usually) help you make a better informed decision. As mentioned in the video, the self-driving software at any given moment needs to make a "yes" or "no" decision (over-simplification but good enough as an example). The more data you have, the more variables can contribute to making a safer decision.
youtube · AI Harm Incident · 2022-09-03T17:3… · ♥ 1
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        mixed
Policy           liability
Emotion          mixed
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgxLKj91_yUZcmtzKP54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwfuReyxukU6hInDXB4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgzCvmcpbSIrarILhTl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwVN0ZCKCao6_Zjh414AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxkerxGD8MNCT62-Vl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwhqePNsfSq-qGeLD54AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxgKW9FglOYf1rTYZt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugw3hys9fA5p-pXqASB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgyUQtiA2BozN-OGUjN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugwuu7-b-u-guWrgrxl4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
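The raw response above is a JSON array with one object per coded comment; the per-comment coding result displayed earlier is just the record matching that comment's id. A minimal sketch of how such a response could be parsed back into a lookup (the helper name `code_for` is hypothetical, and only two of the ten records are reproduced here for brevity):

```python
import json

# Abridged copy of the raw LLM response: a JSON array of per-comment codes.
raw_response = """[
  {"id":"ytc_UgxLKj91_yUZcmtzKP54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwfuReyxukU6hInDXB4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"liability","emotion":"mixed"}
]"""

def code_for(comment_id: str, raw: str):
    """Parse the raw model output and return the code dict for one comment id.

    Returns None if the model produced no record for that id.
    """
    records = json.loads(raw)
    by_id = {r["id"]: r for r in records}
    return by_id.get(comment_id)

code = code_for("ytc_UgwfuReyxukU6hInDXB4AaABAg", raw_response)
print(code["responsibility"])  # distributed
print(code["policy"])          # liability
```

In a real pipeline one would also want to guard the `json.loads` call, since the model output may not always be valid JSON; this sketch assumes a well-formed response like the one shown.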