Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Lavender, you may not want to read any of this—it's just noise that you don't need to see...but if you're at a low point, then please take a look.

It's not easy to pick your own goals/fights in life when you constantly have others treating you like you're stupid and useless. We don't write essays to make more essays, but to train our thinking skills and to better voice our own ideas regardless of how similar or different they may be, so when you have someone like Elon Musk saying that students are better off using algorithms to correct accelerated algorithms directly in their exams/studies, how does that affect their learning?... Using models to check after other models doesn't solve the issue because THE STUDENT IS NOT THE FOCUS, BUT RATHER WHETHER OR NOT THE ALGORITHMS ARE DOING SOMETHING SATISFACTORY; at that point, you've already determined which one will be used/hired in the actual workplace, so when people say that "it'll just create more jobs", where does that opinion come from? The history books and older technologies are effectively a strawman argument posing as an answer because, as I will keep repeating over and over, YOU HAVE TO VERIFY EVERY LITTLE THING THE ALGORITHMS DO AND CORRECT THEM IF NECESSARY, which is IMPOSSIBLE, and you would also have to know what to look out for anyway! It's why I believe it's a predatory waste of time to be optimizing these things on important duties that humans can do!

Humanoid robots are nothing more than what you engineer them to do. The question is: is it something we really need, or is it something that is used to take advantage of us? That's a rhetorical question. Another question: do we want more PERSONALLY FULFILLING jobs/work, for that matter? If so, why are people using accelerated algorithms to do things they "love" doing? Is it really just me, or are people deceiving everyone and themselves? Go figure.

["A.I." bro:] "How much meaningful work do we already do?
To some, if you automate more than 0% your work is meaningless; to others, having some automation is fine. The variance of this answer is way too large to judge how much you think is enough. Even if you keep saying that it's not enough, someone said to me that what you do is not enough. When I came onto the scene, they already had those choices taken away, if "living in a world where AI doesn't exist or work" is that choice. They still have choices; I gave them one very specific one: adapt to the tool to prepare for a future in which people will be required to use it to be more effective, and carry on with your career. I'm not the force pushing them in that direction; I'm just making the transition easier and smoother."

Why be surprised when people treat you like garbage, when this is how they see things?... This particular individual is dangerous when it comes to meaningful things; all he can see is the superficial act of drawing on a surface or mechanically pressing a few buttons and embedding code for tokens. All I've ever been hearing from people in tech is how efficient and different or similar you should be. Never once is it about self-expression and true self-improvement. Never once does anyone try to make it clear what automation there is. (We are always left to our own devices to figure out what it is, only...most often we are undermined in that endeavour for the "interests" of others. And this is how they label exploitation as "ADAPTING", among other vile things that were mentioned in this video. "Aw, can't make any revenue anymore? Stop being whiny, because you can still express yourself! You shouldn't even be paid to do what you love, because money has nothing to do with it! Isn't that right?" ... Why, yes. Money has nothing to do with expression. Now, pay me or get lost. ... As if I'd sell my soul to scum like this...though I'm aware that this sentence could be interpreted in many ways.)
While you didn't see the 25 comments we've exchanged, I'll keep telling them off, because meaningful work is a gaping hole that society won't ever try to fix, as it mostly relies on the person themselves; it's one thing when you don't know what's better for you, but don't waste your time trying to make me believe that there's nothing I can do with regard to how you use "A.I." that is meaningful for me. This guy calls it abuse? Unjust? Pretentious? Selfish? Ungrateful? Pointless? Being uneducated? Maybe I'm just a greedy technophobe "preventing people from having access to information" in the digital age? (And who's a technophobe when I'm using Glaze and Nightshade? Gee, I wonder why it's a problem for him. Maybe I should apologize because I'm too "stupid" to figure it out...) OR––maybe I'm worried over nothing, and it just makes people believe that I don't really have anything worthy of attention in the first place? I...don't care—it's my work. GET. LOST.

"But you could always reference it," one might say, until you realize that you've been referencing too many things, to the point where it hinders what you did and you've made poor choices because of it (or accepted stuff as something reflective of your choices), like ErgoJosh on YouTube did with MidJourney, as far as composition and time spent are concerned...and that you don't really have control over what you're using...which segues into this next point: "Knowing how A.I. works", the aforementioned "A.I." bro said to me... It's a black box, optimized with our illustrations, your videos, your average underpaid worker in a third-world country—and him as well.
YouTube · Viral AI Reaction · 2024-12-17T01:0…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         mixed
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgxM5GLO_NV9N91oJ-14AaABAg", "responsibility": "ai_itself",   "reasoning": "virtue",          "policy": "none",          "emotion": "resignation"},
  {"id": "ytc_Ugx9mfyQZYHoIp8AbTR4AaABAg", "responsibility": "none",        "reasoning": "unclear",         "policy": "none",          "emotion": "mixed"},
  {"id": "ytc_Ugwf-DT2JfN2i54LAeF4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist","policy": "industry_self", "emotion": "indifference"},
  {"id": "ytc_UgwnFDsbXXFzMeLlgHd4AaABAg", "responsibility": "none",        "reasoning": "unclear",         "policy": "none",          "emotion": "outrage"},
  {"id": "ytc_Ugx6V_2PfYoOmCV49bJ4AaABAg", "responsibility": "company",     "reasoning": "deontological",   "policy": "regulate",      "emotion": "outrage"},
  {"id": "ytc_UgzhnLq7qsfdxOzasL14AaABAg", "responsibility": "none",        "reasoning": "unclear",         "policy": "none",          "emotion": "indifference"},
  {"id": "ytc_UgzuOOwPxYZFDUavLSB4AaABAg", "responsibility": "none",        "reasoning": "virtue",          "policy": "none",          "emotion": "approval"},
  {"id": "ytc_UgxuQPfsMtC-YMwfkb94AaABAg", "responsibility": "ai_itself",   "reasoning": "deontological",   "policy": "none",          "emotion": "mixed"},
  {"id": "ytc_UgwQ9pCaCvvh8h--Oy14AaABAg", "responsibility": "ai_itself",   "reasoning": "virtue",          "policy": "none",          "emotion": "resignation"},
  {"id": "ytc_UgxMlp3xBQD2GpUnlf14AaABAg", "responsibility": "none",        "reasoning": "unclear",         "policy": "ban",           "emotion": "approval"}
]
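The raw response above is a JSON array of per-comment codes with four dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch could be sanity-checked and aggregated before use — the dimension names come from the response above, but the validation logic itself is my own assumption, not part of the coding tool:

```python
import json
from collections import Counter

# Two records copied verbatim from the raw response above (abbreviated batch).
raw = '''[
  {"id": "ytc_UgxM5GLO_NV9N91oJ-14AaABAg", "responsibility": "ai_itself",
   "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugx9mfyQZYHoIp8AbTR4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "mixed"}
]'''

DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

codes = json.loads(raw)

# Every record must carry an id plus all four coding dimensions.
for record in codes:
    missing = [d for d in DIMENSIONS if d not in record]
    if "id" not in record or missing:
        raise ValueError(f"bad record {record.get('id')}: missing {missing}")

# Tally one dimension across the batch, e.g. emotion.
emotion_counts = Counter(r["emotion"] for r in codes)
print(emotion_counts)
```

On the full ten-record batch above, the same tally would show, for instance, three `none`-responsibility/`unclear`-reasoning records; keeping the check separate from the tally makes a malformed model response fail loudly instead of skewing the counts.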