Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
A car on a busy road has nothing to do with an airliner flying in cruise at 35,000 ft. Any automated driving system will obviously greatly reduce the "driver's" focus and attention to what is happening at any time. So the driver is put in a position where he will be mostly unable to react properly to any autopilot error that occurs, even more so if these errors are rare. Putting in an autopilot and asking the driver to react in case it malfunctions is totally hypocritical and incoherent from a safety standpoint. This should be banned. This is as stupid and stubborn as OceanGate's Titan submarine design. No surprise Elon Musk has a mindset very similar to Mr. Stockton, who was the OceanGate CEO. Libertarian, dismissive of any regulatory body or safety regulations, authoritarian, stubborn, intolerant of internal critics… Such a system might be used to monitor the driver's driving and alert him in case of dangerous behavior such as lack of vigilance, lane crossing, or entering a one-way road in the wrong direction. It might handle emergency situations in case the driver becomes unresponsive or engages in extremely dangerous situations.
youtube AI Harm Incident 2025-01-01T11:3…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       consequentialist
Policy          unclear
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgwFJ3r-mSz8Ki5EjE14AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyP3UC7ZIe6OlWhkKh4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzuHMuke_HsCydo7z14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugxhjb7hY40MJRZfsAx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzcMDePjjJlmhur5Yx4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgznYUo6T9_ZFN9Gg7F4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugx6Kr_QY32ZjXnf2SJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgxC4VrzZAmnwYqeLvx4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwYNvqCIMcsIJJWdhZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgyWb6ulwrgqkWzlzpd4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
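The raw response above is a JSON array of per-comment codes, keyed by comment id. A minimal sketch of how such a response could be parsed and validated against the coding dimensions shown in the table: the allowed values are inferred only from the codes visible in this dump (the actual codebook may define more categories), and `parse_coding` is a hypothetical helper name, not part of the pipeline.

```python
import json

# Allowed values per coding dimension, inferred from the values seen in
# this dump; the real codebook may include additional categories.
SCHEMA = {
    "responsibility": {"user", "developer", "company", "distributed", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"none", "unclear", "regulate", "liability", "ban"},
    "emotion": {"indifference", "mixed", "fear", "approval", "outrage"},
}

def parse_coding(raw: str) -> dict:
    """Parse a raw LLM response into {comment_id: row}, dropping any
    row that carries an out-of-schema value for some dimension."""
    out = {}
    for row in json.loads(raw):
        if all(row.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            out[row["id"]] = row
    return out

# The row shown in the table above, extracted from the raw response.
raw = ('[{"id":"ytc_UgzuHMuke_HsCydo7z14AaABAg","responsibility":"distributed",'
       '"reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]')
coded = parse_coding(raw)
print(coded["ytc_UgzuHMuke_HsCydo7z14AaABAg"]["emotion"])  # prints fear
```

Indexing by id makes it straightforward to join the model's codes back to the original comments, and the schema check surfaces hallucinated category labels instead of silently storing them.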