Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The solution is easy... you program in ALL the possible decisions that the car can make in a situation like that and if an accident happens you make it choose *randomly* so that there's no "moral blame" to the programmers. The number of lives that self-driving cars will save *FAR* outweighs these nitpicky moral dilemmas. The fact that you're using a safer device overall that reduces the chances of hurting people justifies the rare crashes that will happen.
Source: youtube · AI Harm Incident · 2015-12-16T01:0…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           none
Emotion          approval

Coded at: 2026-04-27T06:24:59.937377
Raw LLM Response
[ {"id":"ytc_Ugh8rhSAIlTrjHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugj8lU9CWdFWf3gCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UghFWNaMvDiVGngCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytc_UgggdoqiWWgg1HgCoAEC","responsibility":"none","reasoning":"unclear","policy":"industry_self","emotion":"approval"}, {"id":"ytc_UggYKBs14QZPoHgCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugi9xnzyNGEqq3gCoAEC","responsibility":"society","reasoning":"contractualist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugim4SKNBlRtfHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"industry_self","emotion":"approval"}, {"id":"ytc_UgjXjm0R3slUzXgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UghxbQR1FcrERngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"ytc_UgipNWetGSuz7ngCoAEC","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"} ]