Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I'm a software engineer and a musician. I've been very interested in AI (machine learning, neural networks) technology for over a decade now, wrote my first own algorithms long before chatgpt was even a thing. And this video is now bookmarked because it is spot on and I will be sharing it forward because it puts some things into words better than I ever could.

As this video doesn't really cover anything except LLMs and massive image generators, I'd like to remind everyone that beyond the constant media coverage of those, there are small AI tools out there that are ethical in both used training data and resource usage. In fact, if you have used Blender between 2019 and now, there is a good chance you've already used one of these tools, specifically the denoiser for Cycles rendering. Nobody just talks about those because tech bros don't care (they can't make money off those) and most anti-AI people aren't even aware they exist.

Large AI models are a cancer for many reasons. They need so much training data that they cannot be ethically trained. They take up so much resources that there have been shortages and price hikes in computer parts and even electricity. And because of their massive scope, they are very prone to errors and hallucinations. They shouldn't exist, but the technologies they are based on are still useful. Just not useful for people who want an AI to answer their every queston or make an image based on any prompt.
Source: youtube · Viral AI Reaction · 2025-12-22T13:3… · ♥ 1
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           none
Emotion          approval
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgxvvRMq9b-uBlUkMO54AaABAg", "responsibility": "ai_itself", "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugz9ytQVMzLpesUTZO94AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwrTBGfm4i5_WVJQn94AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "regulate", "emotion": "mixed"},
  {"id": "ytc_UgzvKeMkeYYjFVfa-nd4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugw1IiwjSYHGYGmFvMt4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugxm-IjuCCWkMRcYDxx4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugxi2cj3wiRRGH-faYt4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"},
  {"id": "ytc_UgySrZAv1oXC8EBpIl54AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "disapproval"},
  {"id": "ytc_UgzOEyNf5zWvJniLBHB4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "ban", "emotion": "mixed"},
  {"id": "ytc_UgzQl_1WN9Z8SG-g_RF4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "outrage"}
]
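The coding result shown above is recovered from the raw response by parsing the JSON array and looking up the record whose id matches the comment. A minimal sketch in Python, assuming the raw response is valid JSON in exactly the shape shown (the array is truncated here to the single record matching this comment):

```python
import json

# Raw LLM response: a JSON array of coding records, one per comment id.
# Truncated to the record that matches the comment on this page.
raw_response = '''[
  {"id": "ytc_Ugw1IiwjSYHGYGmFvMt4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "approval"}
]'''

records = json.loads(raw_response)

# Index codings by comment id so a single comment's coding can be pulled out.
by_id = {rec["id"]: rec for rec in records}

coding = by_id["ytc_Ugw1IiwjSYHGYGmFvMt4AaABAg"]
for dimension in ("responsibility", "reasoning", "policy", "emotion"):
    print(f"{dimension}: {coding[dimension]}")
```

Printing the four dimensions for that id reproduces the Coding Result table above (responsibility none, reasoning unclear, policy none, emotion approval).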