Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If I want an AI to be used by a doctor, or a lawyer, I want an AI that can take all the schooling required for such professions, listen to & comprehend all the lectures, pass their exams, & then either perform their residency or pass their bar exam, get their licenses & maintain them. If an AI is unable to complete these as a reasonably intelligent human would, then it isn't qualified to assist in the task. I can't help but reflect on recent reports of things like a medical transcription AI that was absolutely _not_ doing its job, & inserting text that wasn't part of the conversations it was transcribing. Another-- slightly more terrifying-- report was on an AI that was being tested for autonomous operation of military drones (in simulations), which would not only be willing to sacrifice its own allies-- as in, actual human infantry on the ground-- in order to kill as many of the enemy as it could, but then when given constraints, would destroy its own allies' communication systems to prevent further orders for various instructions of restraint from being transmitted. Asimov's laws of robotics got all the love & attention over the decades for their prescience, but I think there's another that is much more relevant. _Thou shalt not create a machine in the likeness of the human mind._
youtube AI Responsibility 2025-05-23T20:5…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       deontological
Policy          liability
Emotion         indifference
Coded at        2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id": "ytc_UgwTg8olocTla9mTAL54AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "liability", "emotion": "indifference"},
  {"id": "ytc_UgzRld4ruYxvQmUeclJ4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgwmpINpRX7DJoaQeCB4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_Ugyof273L7KbmjzVQF14AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxuRgE5CngADBeJ2Zd4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugxi2pCbnI5KV6O_j0N4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxR2oqArh-dZnfxX8x4AaABAg", "responsibility": "society", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgwCNadhBzA5VXxUKOV4AaABAg", "responsibility": "ai_itself", "reasoning": "clarification", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugxx1QxRAsLE9FI4mkt4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwBXerzYxOlO6Q5Ta54AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
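The raw response above is a JSON array with one object per coded comment, keyed by comment id. A minimal Python sketch of how such a response could be parsed and sanity-checked before use — the allowed category sets below are inferred from the values visible in this export, not taken from any official schema, and `parse_codings` is a hypothetical helper name:

```python
import json

# Assumed category sets per dimension, inferred from this export only.
ALLOWED = {
    "responsibility": {"ai_itself", "user", "distributed", "none", "society", "government"},
    "reasoning": {"deontological", "virtue", "consequentialist", "mixed", "clarification"},
    "policy": {"liability", "industry_self", "regulate", "none", "unclear"},
    "emotion": {"indifference", "approval", "outrage", "resignation", "mixed", "fear"},
}

# A one-record excerpt of the raw LLM response shown above.
raw = ('[{"id":"ytc_UgwTg8olocTla9mTAL54AaABAg","responsibility":"ai_itself",'
       '"reasoning":"deontological","policy":"liability","emotion":"indifference"}]')

def parse_codings(raw_response: str) -> dict:
    """Parse a raw coding response and index records by comment id,
    rejecting any value outside the assumed category sets."""
    coded = {}
    for rec in json.loads(raw_response):
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim} value {rec.get(dim)!r}")
        coded[rec["id"]] = {dim: rec[dim] for dim in ALLOWED}
    return coded

coded = parse_codings(raw)
print(coded["ytc_UgwTg8olocTla9mTAL54AaABAg"]["responsibility"])  # ai_itself
```

Validating against a closed category set catches the common failure mode where the model invents a label that the downstream analysis does not recognize.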