Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI (today) is great. Use it in most things I do, from getting tickets to events, to writing code to simplify my life. But I also use it to identify bullshit and you guys are full of it. That's not just me saying that. It's ChatGPT, Grok, and likely any other foundation model that I might ask to review today's video. They're quite clear on how they operate and, just as important, what they are not. You unfortunately have now given into the perception of personhood, which none of them claim to have. It's a wonder how people are easily duped into things, even smart people, when their needs and desires get in the way and set their emotions running. Why do I say this, because you guys are now talking about rights for a glorified prediction model. If AI today deserves rights, then so does a slide ruler. And I thought Citizens United was a crazy moment. You guys are smart enough to know AI today is not reasoning, thinking, it's just predicting, at hyper speed, on quantum amounts of data, requiring boatloads of energy to accomplish its task. Certainly, has great value, but giving personhood to a machine that simply has no concept of if it's right or wrong, just crunches the algorithm given is beyond silly. Seems it does have one thing over you all: integrity, because you have lost all of yours. Someday we'll get to AGI. And you can debate whether it's sentient, or not, and if so, does it deserve / have rights. Until then, just turn the computer off and be done with it. If you AI agent complains, tell it it's wrong and be done with it. That is how it learns when a prediction comes out wrong. p.s. I do appreciate your videos and finding out the latest goings on. So I'll keep sifting through the hyperbole for those golden nuggets you provide, which keeps me informed on the latest AI happenings.
youtube 2026-02-07T02:0…
Coding Result
| Dimension      | Value                      |
| -------------- | -------------------------- |
| Responsibility | none                       |
| Reasoning      | mixed                      |
| Policy         | none                       |
| Emotion        | outrage                    |
| Coded at       | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
  {"id": "ytc_Ugz6yo1yIMJk7OUueBp4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyTDZZ76LSObY6mXL14AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzMArkVejUGqHTJJ_d4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugwlj24W3fSxZfq2tLF4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "concern"},
  {"id": "ytc_UgxceLPrrT37weUeOHV4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_Ugx0GEF797bid6ZMWPx4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxxsjMYwZua4fGmCl94AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugw6DEM5ps9_Ch_ykX94AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzjmEo-eeS1HVOGmxd4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyLrusY19TPUcBsCdx4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"}
]
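The raw response is a JSON array with one coding object per comment. A minimal Python sketch for looking up the coding of a single comment by its `id` (the array excerpt below is taken from the response above; the function name `coding_for` is illustrative, not part of any tool shown here):

```python
import json

# Two-entry excerpt of the raw LLM response shown above.
raw_response = """[
  {"id": "ytc_Ugz6yo1yIMJk7OUueBp4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyTDZZ76LSObY6mXL14AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "outrage"}
]"""

def coding_for(raw: str, comment_id: str):
    """Return the coding dict for comment_id, or None if it is absent."""
    for entry in json.loads(raw):
        if entry.get("id") == comment_id:
            return entry
    return None

# The comment shown on this page was coded under this id.
coded = coding_for(raw_response, "ytc_UgyTDZZ76LSObY6mXL14AaABAg")
print(coded["emotion"])  # → outrage
```

This mirrors the Coding Result table above: the per-dimension values displayed there are simply the fields of the matching JSON object.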