Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
What a thought-provoking podcast! If I may offer an opinion, I believe that even the opening stages of AI already possess the capacity to kill people, TENS OF MILLIONS OF PEOPLE, not directly, but indirectly, and for this simple reason. As I speak, robotics and AI are already putting many people permanently out of work and we know that this trend will only continue with ever increasing regularity. For example, just the other day I noticed two self-driving Teslas within 20-minutes, where only a few months ago I saw none. When you consider how many men and women make their part and full-time living as drivers, this problem will increase exponentially, for what other job offers this kind of autonomous income despite the vagaries of the job market?? And that doesn't even include millions of long distance truckers. I think that putting people out work will have lasting and devastating consequences, witnessed when you see someone, either by illness or some other handicap, turn into virtual vegetables overnight. Because when you take away someone's ability to earn a living, you are handing them hopelessness, sloth, and a total lack of self worth, often times ending in alcohol and drug addiction simply to deal with these negative emotions. This same effect could easily turn into suicide in many cases, with the same phenomenon occurring when you force someone into retirement, making their lives suddenly become meaningless. I mean, what is the point of even getting out of bed when you already know that you have absolutely nothing of worth to do, all day, every day? I can only equate this new paradigm to just another welfare state, on steroids. I welcome any dissenting views, and thank you.
youtube 2025-10-19T08:4…
Coding Result
Dimension        Value
Responsibility   company
Reasoning        consequentialist
Policy           liability
Emotion          fear
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[ {"id":"ytc_Ugyrv371Hu6eOs7YGJh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgzfDXiU2R6dbVPqbLd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugy5w-EsmmTQea4yaZt4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"approval"}, {"id":"ytc_UgyJeDIiGCTk_xP3xRR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgxoNhkeL6MlMsBHA814AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UgxayCbSK2GpVCbV0T14AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgxHmA602z2DvJaZT8t4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_UgwxqD-jpeSdARHbOrZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgxQMAuuzU8-ZfDBFbl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"}, {"id":"ytc_UgzscHGwG1h4ROH_2iB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"ban","emotion":"outrage"} ]