Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
It seems a lot more likely that "driverless" trucks will become a pilot-autopilot combination in the future, just as aircraft have been for the past 40-some years. You see, technically aircraft have been able to take off, fly point-to-point, and even land themselves safely in normal conditions since somewhere in the 1980s -- simply because the open sky is a much less complex space for an AI than a congested highway. (Autopilots are AIs designed manually by engineers, rather than trained by machine learning as today's systems are.) Yet precisely the "first mile, last mile" problem is exactly the same in flight -- because the sky above airports is by nature congested, and local weather conditions matter far more during landing, when you are low to the ground. That is, even today we keep bumping into the outer limit where you still need human intelligence: when local surroundings fall outside the system specifications of the AI. As counterintuitive as it may sound, even if/when the Aurora autopilot is capable of taking care of 99% of the journey safely -- it will always be that mile at the beginning and end, as well as (importantly) ANY unforeseen situation during the 99% part of the journey, from the emergency to the surprisingly trivial (e.g. a small bird flying close to one of the tracking cameras), that will need the human ability to improvise and thereby handle unforeseen events. And in current AI, we still have little to no idea of how human intelligence is able to do that with such speed and flexibility. Even a human intelligence with a shaggy beard, a gruff voice, a lack of manners, and a 40-a-day cigarette habit. Current AI simply cannot improvise. The closer AI research pushes to copying human capability, the quicker the technical difficulty grows to "pinch" that last capability. Exponentially quick. And the simplest way to skip that last "pinch" is still to put a human supervisor in the cockpit.
youtube AI Jobs 2025-09-14T08:1…
Coding Result
Dimension: Value
Responsibility: none
Reasoning: mixed
Policy: none
Emotion: indifference
Coded at: 2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgzQzcamF4ZzPcyGrxN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwDgpfcV4qYz11q2ct4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugx5WpOQtThVAQz2F914AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
  {"id":"ytc_UgxGDiwvacuqrwn0VJt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy6zT1it5JgmJhzI4R4AaABAg","responsibility":"user","reasoning":"mixed","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_Ugw6ajiTwuRRsrbmM4t4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzAln_Zq9Vp3t6p9IZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_Ugxg1aBw6PMa3AhywQt4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzIRsEJvTITVA5YA2p4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugxrx2lO8VVQ_rRdY194AaABAg","responsibility":"company","reasoning":"virtue","policy":"regulate","emotion":"outrage"}
]
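The Coding Result shown above is obtained from this raw response by matching the comment's id in the JSON array. A minimal sketch of that lookup, assuming (because its dimensions match the coded values none / mixed / none / indifference) that the id ytc_UgzIRsEJvTITVA5YA2p4AaABAg belongs to the comment shown; the helper name coding_for is illustrative, not part of the tool:

```python
import json

# Raw model output: a JSON array with one coding object per comment.
# The two entries below are copied verbatim from the response above.
raw = '''[
  {"id":"ytc_UgzQzcamF4ZzPcyGrxN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzIRsEJvTITVA5YA2p4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}
]'''

def coding_for(raw_response: str, comment_id: str) -> dict:
    """Parse the raw LLM response and return the coding for one comment id."""
    return {entry["id"]: entry for entry in json.loads(raw_response)}[comment_id]

coding = coding_for(raw, "ytc_UgzIRsEJvTITVA5YA2p4AaABAg")
print(coding["reasoning"], coding["emotion"])  # mixed indifference
```

Raising a KeyError on a missing id (rather than returning a default) surfaces cases where the model dropped a comment from its response.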