Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I rant because I feel like people here would understand: I have a deep hate of AI as a not-so-cool-middle-artist. Once I spend two hours of my incredibly precious time and effort to draw a meme sketch with no colors, after what my friend just said "LOL" and generated same sht rendered in ultra graphic quality TWO DUCKING MINUTES LATER. It had artifacts, it had strange unlogical pieces, character was only a little bit recognizable, but guess which one everyone in the chat liked more? OF COURSE that nicer colored AI image. My other friend is admin and he deleted the AI one just to make me happy, but if not this, my hours of time would be wasted... I never felt so pathetic ;( Another coolstory happened when I was commissioned to draw a character, I did it, and customer said something like: "I dont like how it looks, why you can't make it like this?" And then he proceeded to feed my art in chatgpt, ask him to enhance the picture and show that to me. The worst part is that art really started to look better, because AI corrected some of my beginner-mistakes. But how I could compete with these monster machines? Why he couldn't just ask Chatgpt to do everything instead of commissioning me? Oh well, he had an explanation: "AI can't give me exactly what I want unless I feed it with some precursors."
Source: YouTube — "Viral AI Reaction" — 2025-12-23T22:5…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           none
Emotion          outrage
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_Ugwa2mXH0m30-elOV3J4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzXRJKw8D9x0NY4xCB4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzgYEQiGl3Uz6jXTUJ4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyuFVkJy8dPT6HzwVt4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxFKAAxCr6LIWuTImF4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugy758dpqmt-3gWxyTV4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwaNHzlHMhDIiC-c0d4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugz7PmJlgrAiHrXn9i94AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugz8KJA4osF1J9Dh0tR4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyiYNsCQ-BCLhpdDFd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]
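A raw response like the one above can be turned into per-comment coding records by parsing the JSON array and indexing by comment id. This is a minimal sketch, not the pipeline's actual code: the `parse_codes` helper and the `REQUIRED` field set are assumptions based on the five dimensions visible in the output, and the sample uses two ids taken from the response above.

```python
import json

# Sample of the raw LLM response: a JSON array of per-comment codes.
# (Two records copied from the response shown above.)
raw = """[
  {"id":"ytc_UgzgYEQiGl3Uz6jXTUJ4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyiYNsCQ-BCLhpdDFd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"}
]"""

# The five coding dimensions every record must carry (assumed schema).
REQUIRED = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codes(text: str) -> dict[str, dict]:
    """Parse model output and index records by comment id,
    rejecting any record missing a coding dimension."""
    records = json.loads(text)
    indexed = {}
    for rec in records:
        if not REQUIRED <= rec.keys():
            raise ValueError(f"incomplete record: {rec}")
        indexed[rec["id"]] = rec
    return indexed

codes = parse_codes(raw)
print(codes["ytc_UgyiYNsCQ-BCLhpdDFd4AaABAg"]["reasoning"])  # consequentialist
```

Indexing by id makes it easy to look up the code assigned to the comment displayed above; a stricter version could also validate each dimension against its allowed label set.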