Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
8:50 No offense, but it doesn't. I was at a distance from my monitor and it still looked underwhelming, to be as charitable as possible. I keep hearing people (even people extremely anti-AI) say that AI is getting 'good', but I've yet to see it and it frankly seems to be getting worse.

14:56 'Inevitable' seems like such a weird argument in any situation. Nobody is an oracle, it's very clear from seeing history that lots of things labelled 'inevitable' or 'guaranteed' were dropped while things that seemed largely unknown suddenly became ubiquitous. How many times have we heard 'it's just around the corner' about all sorts of technology for decades? It's also not always clear what 'inevitable' means in this argument. Will computers continue to get better? Millions of people are pushing the limits, but it's all over the place which things get better at which rate. Gen AI could theoretically be dropped when a strictly better AI system comes out and that could be soon. And that's ignoring any of the fanatical feeling of the word 'inevitable' (I don't want to call gen AI worship a religion, but we saw what happened with things like NFTs...)

I would say there's two great things that gen AI did cause; I am personally able to find flaws, problems, and just details in general better because gen AI did such a good job of making its own flaws so obvious and it seems like loads of people are going through the same thing. And after trying out gen AI, it finally pushed me to start drawing myself. The only way I'm going to get some ideas in my head out is to make them myself, no artist online has drawn anything even remotely similar and AI doesn't even understand the prompt.
Source: YouTube — "Viral AI Reaction" (2025-04-11T07:5…)
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         resignation
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgwB3lb4ZsuePuMXLV14AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "unclear",       "emotion": "indifference"},
  {"id": "ytc_UgyZeZ4xg4oy26arzjN4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological",    "policy": "ban",           "emotion": "outrage"},
  {"id": "ytc_UgxJmtQunE4ozC-CbWx4AaABAg", "responsibility": "none",      "reasoning": "mixed",            "policy": "none",          "emotion": "approval"},
  {"id": "ytc_Ugyy8J6LfJBwAcHr2rt4AaABAg", "responsibility": "user",      "reasoning": "contractualist",   "policy": "unclear",       "emotion": "mixed"},
  {"id": "ytc_Ugy4ohxVvwEzYGmqxQ94AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgxvXT6p2kRjgTE_l0B4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological",    "policy": "ban",           "emotion": "outrage"},
  {"id": "ytc_UgyvdxYgkYv7ozTtIwp4AaABAg", "responsibility": "company",   "reasoning": "deontological",    "policy": "regulate",      "emotion": "fear"},
  {"id": "ytc_Ugx0Ayf94v1SCyGHYFB4AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "none",          "emotion": "indifference"},
  {"id": "ytc_UgwSUbxCepN6BBDDUvR4AaABAg", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",          "emotion": "resignation"},
  {"id": "ytc_UgxczRpGATlVUFm4EbB4AaABAg", "responsibility": "none",      "reasoning": "virtue",           "policy": "none",          "emotion": "mixed"}
]
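A minimal sketch of how one might cross-check the coding table against the raw model output: parse the JSON array, index the records by comment id, and look up a single comment. The id `ytc_UgwSUbxCepN6BBDDUvR4AaABAg` is an assumption here — it is the one record whose values (none / consequentialist / none / resignation) match the coding table above, but the log does not state the mapping explicitly.

```python
import json

# Raw LLM response, copied verbatim from the log above.
raw = '''[
  {"id":"ytc_UgwB3lb4ZsuePuMXLV14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyZeZ4xg4oy26arzjN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxJmtQunE4ozC-CbWx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugyy8J6LfJBwAcHr2rt4AaABAg","responsibility":"user","reasoning":"contractualist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugy4ohxVvwEzYGmqxQ94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgxvXT6p2kRjgTE_l0B4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyvdxYgkYv7ozTtIwp4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugx0Ayf94v1SCyGHYFB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwSUbxCepN6BBDDUvR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxczRpGATlVUFm4EbB4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"mixed"}
]'''

# Index the coded records by comment id for O(1) lookup.
codes = {rec["id"]: rec for rec in json.loads(raw)}

# Hypothetical id for the comment shown above (inferred from matching values).
rec = codes["ytc_UgwSUbxCepN6BBDDUvR4AaABAg"]
print(rec["emotion"])  # → resignation
```

Indexing by id rather than scanning the list each time keeps the check cheap when a batch contains many coded comments.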