Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The car will follow the traffic rules. It will minimize harm to the driver while still obeying the rules of traffic. It will not start an internal AI-based ethical-dilemma debate in the tenth of a second it has to react to the data. So if a crash can't be avoided, it will not kill an old lady on the curb because she seems to be worth less to society than the young driver of the car (unless she is the CEO of a big company and important for keeping a thousand jobs... better use face-recognition software and Google to check her profile in that split second...). It will slow down as much as possible, with a quicker reaction time than any human has. So in reality the system will prevent a crash from happening in the first place by reacting to events you can't even see with your eyes but the radar detects.
youtube AI Harm Incident 2017-01-19T18:3… ♥ 4
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       deontological
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UggAC0mV8oC9jngCoAEC", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgjpcS32Uc2yJngCoAEC", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UggfHBar3vbNengCoAEC", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgiwvgjYZIffAngCoAEC", "responsibility": "developer", "reasoning": "consequentialist", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgjxDEIZXTjr23gCoAEC", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UghcPoA1NFGlengCoAEC", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgitDjAIO4MRV3gCoAEC", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgjY90-a_EZ8FHgCoAEC", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugiew_Ebk3iMfngCoAEC", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgiX854HF1O3sHgCoAEC", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"}
]
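The raw response above is a JSON array of per-comment codes, and the dimension/value table for a single comment is just the matching array entry with its `id` stripped. A minimal sketch of how that lookup might work, assuming the field names shown in the response (the `codes_for` helper name is hypothetical; the array here is abbreviated to two entries for brevity):

```python
import json

# Abbreviated raw LLM response: a JSON array of per-comment codes.
raw = """[
  {"id": "ytc_UggAC0mV8oC9jngCoAEC", "responsibility": "none",
   "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugiew_Ebk3iMfngCoAEC", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "none", "emotion": "indifference"}
]"""

def codes_for(raw_json: str, comment_id: str) -> dict:
    """Return the coded dimensions for one comment id, or {} if absent."""
    for entry in json.loads(raw_json):
        if entry.get("id") == comment_id:
            # Drop the id so only dimension/value pairs remain.
            return {k: v for k, v in entry.items() if k != "id"}
    return {}

codes = codes_for(raw, "ytc_Ugiew_Ebk3iMfngCoAEC")
for dimension, value in codes.items():
    print(f"{dimension}: {value}")
```

Running this prints the same pairs shown in the table above (responsibility: ai_itself, reasoning: deontological, and so on), which is how a coding UI could render any comment's record from the raw model output.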