Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
There is no real answer to this question. If this were the only thing to be done, I would deploy fuzzy logic and leave the car to make a randomized decision. However, the better thing to do would be this:

1. Why should the car be so close to the vehicle in front? It should always maintain a distance from which it can come to a halt without hitting the vehicle in front, even if that vehicle were to come to a halt instantaneously (physically impossible, but the assumption can save a lot of lives). Also, all self-driving cars should be equipped with ABS.

2. Why don't we reimagine the interiors of a self-driving car? There is no reason for a self-driving car to look like the cars of today. By removing obstacles and making a smoother interior, complete with reusable airbags (remember, the computers can inflate these, so they don't have to rely on the crash and can be inflated in anticipation, thereby making them reusable), we can make it so that hitting the object in front won't risk the lives of the occupants of the car. Minor injuries may be sustained, but that's much better than jeopardising neighbouring vehicles.

3. The computers could even alter the seating positions of the occupants to make them safer. This is harder to implement, but hey, if we're building self-driving cars, it should be possible.

In short, we should stop thinking of self-driving cars as cars where the steering wheel is turned by a computer. It's something much more radical, and the solution goes beyond deciding whom to hit.
Source: youtube · AI Harm Incident · 2019-07-10T05:0…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:24:59.937377
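
Each comment is coded along four dimensions. A minimal sketch of the record schema in Python, assuming the label sets are exactly the values that appear in the responses on this page (the authoritative codebook may define additional values):

from dataclasses import dataclass
from typing import Literal

# Label sets inferred from the visible records on this page;
# treat them as an assumption, not the full codebook.
Responsibility = Literal["ai_itself", "company", "government", "user", "none"]
Reasoning = Literal["consequentialist", "deontological", "mixed"]
Policy = Literal["none", "regulate", "liability"]
Emotion = Literal["indifference", "approval", "outrage", "mixed"]

@dataclass
class CodedComment:
    id: str                          # YouTube comment id, e.g. "ytc_..."
    responsibility: Responsibility   # who the commenter holds responsible
    reasoning: Reasoning             # moral-reasoning style of the comment
    policy: Policy                   # policy remedy the commenter endorses
    emotion: Emotion                 # dominant emotional tone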
Raw LLM Response
[ {"id":"ytc_UgyY8hea89mmyVR35pN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwY7v7cjww7oOQlHrt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyB0_b6TK8PT7vISdx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyufpnIhu4nx08cBkN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgydFUcos-c2u9kplcx4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}, {"id":"ytc_UgyP5A_agca7B0tqbB54AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgwfznTscFqr9M1BaA14AaABAg","responsibility":"company","reasoning":"mixed","policy":"liability","emotion":"indifference"}, {"id":"ytc_UgzjIPWV9pMw8zPPp914AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugx4-0RV-1R8m98VkZ54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyD53mqEZwoHOxpHE94AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"} ]