Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Clearly the self driving car of today works like a savant. AI does not think like a human at all. In kit kat's case, why wasn't the radar able to detect the cat? Why wasn't the system able to surmise from its on board memory that if an object enters its FOV then disappears, the object could have gone under the car? I don't think the answer lies in employing more remote operators. It would be better to hire more neurodivergent people to think out of the box and hardcode the system rather than relying on AI learning as a all-or-nothing strategy.
youtube 2026-04-02T14:2…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          none
Emotion         mixed
Coded at        2026-04-26T19:39:26.816318
Raw LLM Response
[
  {"id": "ytc_Ugw0Rkt5dXIS6TpX8xp4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwPzwohvDOgCIUJVp14AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyKmscu31j6bp-LSwZ4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugw-9b4T33_dvAyB37B4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgysTxlacpUJVRLqOZ14AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"}
]
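The raw response is a JSON array with one coded entry per comment. A minimal sketch of how such output could be parsed and checked against the coding dimensions, assuming the allowed label sets are limited to the values seen in this response (the real scheme may include more):

```python
import json

# Allowed values per coding dimension. These sets are an assumption
# inferred from the values observed in the raw response above; the
# actual codebook may define additional labels.
ALLOWED = {
    "responsibility": {"none", "developer", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "ban"},
    "emotion": {"indifference", "mixed", "approval", "outrage"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and validate each coded comment."""
    codes = json.loads(raw)
    for entry in codes:
        # Every entry should carry a YouTube comment id.
        if not entry.get("id", "").startswith("ytc_"):
            raise ValueError(f"unexpected id: {entry.get('id')!r}")
        # Each dimension must hold one of the known label values.
        for dim, allowed in ALLOWED.items():
            if entry.get(dim) not in allowed:
                raise ValueError(f"{entry['id']}: bad {dim}={entry.get(dim)!r}")
    return codes

raw = ('[{"id":"ytc_UgwPzwohvDOgCIUJVp14AaABAg",'
       '"responsibility":"developer","reasoning":"deontological",'
       '"policy":"none","emotion":"mixed"}]')
codes = parse_codes(raw)
print(codes[0]["emotion"])  # mixed
```

A check like this catches malformed model output (missing dimensions, invented labels) before the codes are written back to the results table.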