Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I don't see AI taking over any time soon. I see just as much progress as I see utter failure, and an industry that is over-promising and cheating. Waymo had remote drivers in India, and when there was a power outage, the Waymos stopped functioning. The robot delivery carts are an entertainment feature: they destroy property because they do not recognize glass, and land upside down because they don't see staircases. It seems to me the service is not really up to the task, and so far they are running only in mapped urban centers. Lawyers have lost cases because AI invented laws and court decisions.

What has advanced consistently is robotics, which reduced assembly lines by 90%. But that happened long before the AI hype. Automated warehouses are not new either, but they depend on ISO-standard packaging and undamaged bar/QR codes. Elon Musk faked his robots' capabilities. And while Boston Dynamics' mastery of bipedal balance is remarkable, their robots with actual tasks in the real world have four legs, and their routes are pre-programmed. There is no decision making or creativity involved.

AI in warfare: pattern recognition is astounding, but AI in missiles and drones is still human-assisted. No fully automated robots or drones are deployed; the technology isn't there. They are just as unreliable as those little delivery bots in Asia, thus there is a soldier with AI assist when it comes to drone deployment. Palantir: its assistance in ICE raids produced a lot of false positives. They sent a hairdresser to a South American torture camp because he had a tattoo honoring his mom. Just like the dot-com bubble, AI tech bros are hopelessly over-promising.
youtube AI Governance 2026-04-23T21:3…
Coding Result
Dimension       Value
Responsibility  company
Reasoning       consequentialist
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgyhIx9X7vMqxe-bWV94AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugy0fot9G79C59Qcn8p4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgwP346b2FQqZIbuLYp4AaABAg", "responsibility": "distributed", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugx4tQXleN-MIyZ3iuF4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwYDU1snkqAP0IDRCd4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzLLGvjtXdupYLvzip4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgyPtibdkddoZaXc8Vd4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugy827AYrlJNuzaM7Mx4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgxzCTjPLk7VW4vcITN4AaABAg", "responsibility": "company", "reasoning": "virtue", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgyNsPCqLicTM8zqGbZ4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]
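A raw response like the one above can be turned into per-comment coding rows by parsing the JSON array and indexing it by comment id. The sketch below is a minimal illustration, not the tool's actual code; the comment ids and dimension names are taken from the records above, but the function name and validation logic are assumptions.

```python
import json

# Hypothetical parsing sketch: turn a raw LLM response (a JSON array of
# per-comment codings) into a lookup of comment id -> coded dimensions.
raw = '''[
  {"id": "ytc_UgwYDU1snkqAP0IDRCd4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugx4tQXleN-MIyZ3iuF4AaABAg", "responsibility": "company",
   "reasoning": "virtue", "policy": "none", "emotion": "outrage"}
]'''

# The four coding dimensions shown in the result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(text):
    """Map comment id -> {dimension: value}, skipping malformed records."""
    out = {}
    for rec in json.loads(text):
        # Keep only records that carry an id and every expected dimension.
        if "id" in rec and all(d in rec for d in DIMENSIONS):
            out[rec["id"]] = {d: rec[d] for d in DIMENSIONS}
    return out

codings = index_codings(raw)
print(codings["ytc_UgwYDU1snkqAP0IDRCd4AaABAg"]["emotion"])  # indifference
```

Looking up the id of the comment shown above then yields exactly the row in the coding-result table (responsibility: company, reasoning: consequentialist, policy: none, emotion: indifference).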