Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
So the basic question posed is: should you crash into something at random, or let the car take the action that leads to minimal damage? Yeah, I know, it's a hard one. Sometimes there are no good choices. One outcome may be that you go straight for the things falling. It minimizes casualties, so what's wrong with that? When you're in the car you try to compute the casualties and minimize them yourself. The difference is that you only think about yourself, because you don't have time for complicated thoughts. Sometimes, in your attempt to avoid the accident, you drive off the side of the mountain road. Would you rather have the car roll the dice and decide like we do? We already try for the fewest casualties. E.g. pedestrians ought to be avoided. But nobody frames it as: "But if you avoid the pedestrian who jumps like a fool in front of the car, you will hit another car, in which case you punish its occupants for not dying as easily." And I have one last note. Self-driving cars are going to follow driving rules. One of them is to keep proper distance so you can hit the brakes in time. But I guess there could be a scenario where the decision above must be made, so while not dismissing the experiment, I would like a better example.
YouTube AI Harm Incident 2015-12-13T16:3…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          none
Emotion         resignation
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgjvJ6NnfmEbp3gCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UggXZfa6C2KKR3gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UghNp5BGhWiGfHgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugicy25a1_k_VngCoAEC","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UggVXqoniLKpUngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_Ugi6d151MupypngCoAEC","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ughowk26EsgP-ngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgjPUX-D7spJtngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgiXjYc4IT9HpHgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugh_gj4KR5_ORngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
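Since the raw response is a JSON array keyed by comment id, the coded dimensions for any single comment can be recovered by parsing the batch and indexing on `id`. A minimal sketch (the schema is taken from the response above; the two-element sample here is an abbreviated stand-in for the full batch):

```python
import json

# Abbreviated stand-in for the raw batch response shown above.
raw = '''[
  {"id":"ytc_UgjvJ6NnfmEbp3gCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UggXZfa6C2KKR3gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]'''

# Index every coded record by its comment id for O(1) lookup.
codings = {row["id"]: row for row in json.loads(raw)}

# Look up the record that matches the comment inspected on this page.
record = codings["ytc_UggXZfa6C2KKR3gCoAEC"]
print(record["responsibility"], record["emotion"])  # ai_itself resignation
```

Keying on `id` also makes it easy to check that the LLM returned exactly one coding per submitted comment before accepting the batch.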