Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
+Thomas Smith There's another big ethical dilemma that this video doesn't address: the potential for self-driving cars to be hacked. Currently, it's fairly simple and easy for new models to be hacked and controlled, even when not self-driving. This could be a simple, horrifying way for people and organizations to kill anyone they don't like, and often without a trace. You hack the car, slam it into a building or off a cliff, and voila, victims killed. I imagine if we ever get a majority of vehicles to be self-driving, we will still need cars with human drivers for important politicians, heads of companies, etc. Anyone who might be threatened with assassination. Meanwhile, another dilemma, less deadly, but far more common will be passenger road rage at their own driverless systems, for being unacceptably slow, safe, and polite. I live in Slovakia where 70% of drivers will not stop at a crosswalk, and damn the consequences. How would these people react if their car stopped for them? And on a regular basis? If we all had self-driving cars, they would save our lives, and we would hate them for it.
Source: youtube · AI Harm Incident · 2015-12-10T08:3…
Coding Result
Dimension        Value
---------        -----
Responsibility   none
Reasoning        consequentialist
Policy           regulate
Emotion          fear

Coded at: 2026-04-27T06:24:59.937377
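Each coding result is a single record over four fixed dimensions. A minimal sketch of sanity-checking such a record is below; the label sets listed are only the values that appear in this section's raw output, so the full codebook may define more labels (an assumption), and the function name is illustrative.

```python
# Sanity-check one coding record. The label sets below contain only the
# values observed in this section's raw output; the full codebook may
# define more labels (assumption).
OBSERVED_LABELS = {
    "responsibility": {"none", "company", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "regulate", "unclear"},
    "emotion": {"approval", "fear", "mixed", "resignation", "indifference"},
}

def check_coding(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record looks well-formed."""
    problems = []
    for dimension, labels in OBSERVED_LABELS.items():
        value = record.get(dimension)
        if value is None:
            problems.append(f"missing dimension: {dimension}")
        elif value not in labels:
            problems.append(f"unexpected {dimension} label: {value!r}")
    return problems
```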
Raw LLM Response
[ {"id":"ytr_Ugi-ra97OFAYf3gCoAEC.8A2x-6Y9iR39_jA1MigRw-","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytr_UggcjG7wPcXM-ngCoAEC.87ksLSYwmAW87lRqc_5nOt","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytr_UgjbGUooE19fn3gCoAEC.87ae9OwYcWP87aeSIvS21k","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"resignation"}, {"id":"ytr_UgibjtNUDEehjngCoAEC.87_AnhDBK0Q87_DW-CiC2P","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytr_Uggm5BdzwhyWVngCoAEC.87ZJkl4btdC87ZRqdCDAIY","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytr_Ugj-Xh3Fxwz1RXgCoAEC.87YwkNlHcCU87Zv0jNj-Ag","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytr_Ugj-Xh3Fxwz1RXgCoAEC.87YwkNlHcCU87Zx3NNhY6U","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"}, {"id":"ytr_Ugj-Xh3Fxwz1RXgCoAEC.87YwkNlHcCU87ZxquYYaiQ","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytr_UggP-iFt14eaaHgCoAEC.87YkvCWMel-87Zi3ixQABR","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"}, {"id":"ytr_UghURWjOQRHtGHgCoAEC.87XLJSTRT9v87clu7Fezdn","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"} ]