Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Nice burn bro. These fanboys are really the biggest hypocrits in history, can't say it any nicer way. But I fear that we are already way past the point of any meaningful discussion on the topic of gen AI. The people with brains pretty much all see the obvious: it's the biggest heist ever and the most heinous global-scale plagiarism imagineable. And the fanboys have gone over the edge, they can't be saved. They see a way now to profit from the works of others with no real effort, and that's what they want to keep. Period. And they want to somehow lessen the feelings of guilt at least some of them still have by rationalizing the shit out of it. You also made the connection to the crypto bros: yes, it's exactly the same crowd. I see this on forums all the time. And it's the same motive: free cash, no effort. "It's da future bro". At least with cryptos they do not steal other people's money to make Bitcoin out of it. Everyone is still free to put their cash into that giant slot machine, or not. But that's not true with genAI. Both groups make up the same ridiculous lame pseudo-arguments to defend their cause, stopping at nothing to do so. It is kinda scary tbh, because all this negativity gets channeled and harvested by these corporations to be used for their own interests.
Source: youtube · Viral AI Reaction · 2025-11-09T11:5…
Coding Result
Dimension       Value
Responsibility  company
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgxBegnrihII3VmBObh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugz-OGEUltenqswhOZx4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgyJ1uEFU8wCY405G_d4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugy2SDU3rAzeWF96Q4x4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzyxbtNOf2y9v6DLJJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzpUjCuDhw1xZwURYZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
  {"id":"ytc_UgzrDQhusDGbxcVurTp4AaABAg","responsibility":"user","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugwu1ApWNF-C8ffmpwh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzGna4LSUNETBZ_6cJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugxh-_uSzHVCanTJEpd4AaABAg","responsibility":"none","reasoning":"virtue","policy":"unclear","emotion":"resignation"}
]
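To inspect a coded comment against the raw model output, the JSON array can be parsed and indexed by comment id. A minimal Python sketch, using only the id and field names visible in the raw response above (the single-record sample string here is copied from that array; any batch returned in the same shape would work the same way):

```python
import json

# A one-record excerpt of the raw LLM response shown above.
raw = (
    '[{"id":"ytc_UgyJ1uEFU8wCY405G_d4AaABAg",'
    '"responsibility":"company","reasoning":"consequentialist",'
    '"policy":"regulate","emotion":"fear"}]'
)

# Parse the batch and build an id -> record index for lookup.
records = json.loads(raw)
by_id = {r["id"]: r for r in records}

# Pull the coding for the comment displayed on this page.
rec = by_id["ytc_UgyJ1uEFU8wCY405G_d4AaABAg"]
print(rec["responsibility"], rec["policy"])  # → company regulate
```

Comparing `rec` field by field with the Coding Result table above is a quick way to confirm the stored dimensions match what the model actually emitted.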