Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
LoL, the two critical terms thrown around are “intelligence” and “learning.” This “Ai” is and does neither. It isn’t intelligent and it doesn’t learn. It’s a quicker Googler than you, similar to how a calculator is quicker at math. But the calculator is constrained by the arithmetic logic unit, so they can only perform inputted math calculations to the extent it’s capable of. Likewise, this fake “Ai” that we wrongly call Ai isn’t actually intelligent nor does it learn. It’s siphoning information on the Web and regurgitating it. It’s not creating new information consciously because it cannot. And it will never reach the point that it can. Just as a calculator won’t spontaneously start performing auto-calculations because it isn’t actually intelligent. Nor does the calculator learn. It has the inputs it has. So no, this fake “Ai” isn’t what people think nor say it is. The biggest problem is most people don’t realize this, cannot discern fake “Ai” from actual learning and intelligence, and will become reliant upon a gimmicky Google search bot—confusing it for what it is not. That’s the real risk is humanity becoming dominated by fake “Ai” by our own willingness to surrender our autonomy to the fake “Ai” rather than the fake “Ai” seizing it like a Terminator or Skynet or any futuristic ridiculous scenario. We’re basically right back where we started where humans see something supernatural occur and mistake it for magic when in reality it’s sleight of hand and never existed to begin with.
youtube AI Governance 2025-08-10T12:4…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgxUPM_EQ-dUk8vGXMR4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugzb-WGUU3rT6_U5noV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
  {"id":"ytc_UgwEQjcxNJNFb0saYlR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyUj-2PzgtsWLXjEjN4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgydDBABp2FFUZ3FPeV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugw0W8U7GJko1luV-pB4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugwf2WYeWbMH70mSLZJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyvRHRKYC-6OFKKkP94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgzWKsBa-9By7sXyjDl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgztzY_jyUutLmlPPPN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
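A minimal sketch of how a raw batch response like the one above can be mapped back to a per-comment coding result, assuming the pipeline matches records by comment `id` (an assumption; the actual lookup logic is not shown on this page). The excerpt below uses two records copied from the response for brevity.

```python
import json

# Excerpt of the raw LLM response above (two of the ten records, for brevity).
raw = '''[
  {"id":"ytc_UgxUPM_EQ-dUk8vGXMR4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwEQjcxNJNFb0saYlR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]'''

# Index the batch by comment id so any single comment's codes can be looked up.
records = {rec["id"]: rec for rec in json.loads(raw)}

# Hypothetical lookup for one comment id taken from the response.
rec = records["ytc_UgwEQjcxNJNFb0saYlR4AaABAg"]
print(rec["responsibility"], rec["reasoning"], rec["policy"], rec["emotion"])
# → none unclear none indifference
```

Note that the coding table above ("none / unclear / none / indifference") matches three records in the batch, so the id alone is what disambiguates which record belongs to the displayed comment.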