Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Right quick, part of Shad's argument seems to be that instant gratification is always better. To anyone willing to read this, here's why it's not. Say I walked up to you and offered you candy (and no, this isn't supposed to be stranger danger, I'm 100 percent legit in this scenario). You agree and hold out your hand before I can explain any further. I try to explain, but you just want the candy, so I give you a single fun-size candy bar. As you eat it, I tell you I was going to give you much more if you would answer a simple question, but you've made your choice and missed out on a potentially greater reward for the sake of one in this very moment that didn't last very long. This... this is what instant gratification does with your dopamine: you get things done quickly, then you have no choice but to do so again, because you didn't get a much larger payoff. So claiming it's fine to do things that feel good because they happen fast isn't an argument for the benefit of AI, but against it, in terms of how it helps destroy attention spans and fulfilling projects. Thanks for reading if you did, and feel free to disagree if you'd like; I'd like to see other views on this.
youtube Viral AI Reaction 2025-09-01T22:0…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  unclear
Reasoning       mixed
Policy          unclear
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgyWo7XHESQJH08lx714AaABAg", "responsibility": "company",  "reasoning": "deontological",    "policy": "regulate",  "emotion": "outrage"},
  {"id": "ytc_Ugw969mwcPtKIYZIHcl4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_Ugx2Vl9jnVD6hQRJBzV4AaABAg", "responsibility": "unclear",   "reasoning": "mixed",            "policy": "unclear",   "emotion": "mixed"},
  {"id": "ytc_UgzlUv6d6jIYbBGFdIt4AaABAg", "responsibility": "company",   "reasoning": "deontological",    "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgyEv3xqf0T5qpNvIQZ4AaABAg", "responsibility": "user",      "reasoning": "virtue",           "policy": "none",      "emotion": "mixed"},
  {"id": "ytc_Ugy3AmHjB_LaIbvwHlN4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",      "emotion": "approval"},
  {"id": "ytc_Ugx0RnKauEgbT89y56N4AaABAg", "responsibility": "company",   "reasoning": "deontological",    "policy": "regulate",  "emotion": "outrage"},
  {"id": "ytc_UgzPMpf7eT3IoPEJ8o94AaABAg", "responsibility": "ai_itself", "reasoning": "deontological",    "policy": "none",      "emotion": "indifference"},
  {"id": "ytc_UgyKav5L7CcVDe_FXf94AaABAg", "responsibility": "user",      "reasoning": "virtue",           "policy": "none",      "emotion": "outrage"},
  {"id": "ytc_UgzOzQ17V_uEbD-sfPJ4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",      "emotion": "approval"}
]
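To look up the coded dimensions for a specific comment, the raw response can be parsed and indexed by comment id. The sketch below is a minimal, assumed approach: it takes the response to be a valid JSON array of objects shaped like the ones above, and the `index_codes` function and `DIMENSIONS` tuple are illustrative names, not part of the original tooling.

```python
import json

# Illustrative excerpt of a raw LLM coding response; the real response
# is a JSON array with one object per comment, as shown above.
raw = '''[
  {"id": "ytc_Ugx2Vl9jnVD6hQRJBzV4AaABAg",
   "responsibility": "unclear", "reasoning": "mixed",
   "policy": "unclear", "emotion": "mixed"}
]'''

# The four coded dimensions used in the results table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codes(response_text):
    """Parse a raw coding response and index the codes by comment id."""
    records = json.loads(response_text)
    coded = {}
    for rec in records:
        # Fall back to "unclear" for any dimension the model omitted.
        coded[rec["id"]] = {dim: rec.get(dim, "unclear") for dim in DIMENSIONS}
    return coded

codes = index_codes(raw)
print(codes["ytc_Ugx2Vl9jnVD6hQRJBzV4AaABAg"]["emotion"])  # mixed
```

Indexing by id makes it straightforward to join a comment's codes back to the comment text when inspecting individual results.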