Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This is such a grounded take on where AI actually stands versus the hype. Love how you break down the DIKW pyramid—it perfectly illustrates what today's AI can and can't do. The point about hallucinations being "shockingly wrong" really resonates, especially for businesses trying to deploy AI systems reliably. At GYB, we see the same pattern: AI excels at execution and pattern recognition, but humans still need to define the "why" and catch the edge cases. Your framework on macro vs. micro goals hits hard—that's where most AI projects fail, not because the tech isn't smart enough, but because judgment, taste, and strategic direction still require human intelligence. Curious how you'd apply this to AI agents in enterprise workflows—do you think the solution is better human oversight or smarter prompt design? Would love to dive deeper into this on The GYB Show. Great reality check for anyone building with AI.
Source: youtube · AI Responsibility · 2025-11-06T20:5… · ♥ 5
Coding Result
Responsibility: none
Reasoning: unclear
Policy: unclear
Emotion: approval
Coded at: 2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgzH89X6bUBv4wCZTgF4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyqGjPNwJ6QA0Fz-4F4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyxQO09TTCNTIZLiEV4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_Ugy4NUAksRfnApvRwk14AaABAg", "responsibility": "distributed", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgypZknkEThfR3Qywtx4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgwOHQ0pyTPcxci4HiF4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyB20yVDFKkceDmmgp4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzDRtwqT8lqpKvDHax4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgwKB3YK0w4etax9s254AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzprmYLK9plq4KKxah4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "ban", "emotion": "fear"}
]
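A minimal sketch of how a raw response like the one above could be parsed and validated before use. The field names (id, responsibility, reasoning, policy, emotion) come from the response itself; the allowed category values are inferred from this one batch and the real codebook may define more, and the function name parse_codes is hypothetical.

```python
import json

# Two entries copied from the raw LLM response above, as a sample payload.
raw = '''[
  {"id": "ytc_UgyxQO09TTCNTIZLiEV4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgypZknkEThfR3Qywtx4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "regulate", "emotion": "fear"}
]'''

# Allowed values inferred from the codes present in this batch alone;
# the actual codebook may include additional categories.
ALLOWED = {
    "responsibility": {"none", "distributed", "developer", "company", "ai_itself"},
    "reasoning": {"unclear", "deontological", "consequentialist"},
    "policy": {"unclear", "regulate", "ban"},
    "emotion": {"indifference", "fear", "approval", "outrage"},
}

def parse_codes(text):
    """Parse one raw LLM response into {comment_id: codes}, rejecting unknown values."""
    out = {}
    for row in json.loads(text):
        codes = {dim: row[dim] for dim in ALLOWED}
        for dim, value in codes.items():
            if value not in ALLOWED[dim]:
                raise ValueError(f"{row['id']}: unexpected {dim}={value!r}")
        out[row["id"]] = codes
    return out

codes = parse_codes(raw)
print(codes["ytc_UgyxQO09TTCNTIZLiEV4AaABAg"]["emotion"])  # prints "approval"
```

Validating against a closed set like this catches the most common failure mode of coded LLM output, a value outside the codebook, before it silently enters the results table.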