Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I'm afraid of AI. Not because it's "slop" or about the "soulless" aspect of it. But because we, as humans, are not the theoretical pinnacle of intelligence and creativity. At some point, AI will surpass us. And while most people are still in denial about this, insisting that nothing could ever replace human creativity and intelligence, I'm glad some are realizing it now. There will come a point when machines get too good. Too good at making art, too good at making you feel, too good at hitting you with the right thing at the right moment. And when we get to that point, who can honestly claim they won't go back to the AI for another dose of this "AI Art"? That is the point when people will, whether they're conscious of it or not, actively give their creativity to AI. And isn't it faster? Who would want to wait weeks? Days? Hours? Minutes? And if we let AI replace our brains, what would make humans "more" than just animals? AI is a threat to humanity, not simply because of some Terminator-esque scenario (although there is indeed the problem of misalignment, etc.) but because, if everything goes well and, by some miracle, society still goes on and we reach an age of abundance, we will be living in a world where every single one of our hopes and dreams will be handed to AI. Because it is easier, and humans are lazy. Because it is smarter, and humans can be dumb. Because it is better, and humans make mistakes. And us? We would just be wandering from one thing to another, always wanting to consume more to forget our now meaningless existence. It is a pact with the devil.
Source: YouTube — "Viral AI Reaction" — posted 2025-09-01T21:0… — ♥ 1
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           none
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[{"id":"ytc_UgyTrYVzYp3bYvwteAJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
 {"id":"ytc_UgzI5KamvfZlyw65TFV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
 {"id":"ytc_UgzuosZVCq34l06F8GF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
 {"id":"ytc_Ugwa7ED2tA1VbHJWE0x4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"resignation"},
 {"id":"ytc_UgxMGwh1gA1LZHy2xMt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
 {"id":"ytc_Ugx7ajD7rfthfP9EkV54AaABAg","responsibility":"none","reasoning":"mixed","policy":"industry_self","emotion":"mixed"},
 {"id":"ytc_Ugx8fCyMf3FsMtRAOsp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_UgxReQhMy9GoaXOwIa14AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
 {"id":"ytc_UgwrDLzi2BVT2ZdvBnB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
 {"id":"ytc_UgxTk39zSmKmRQnazIx4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"}]
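The raw response above is a JSON array of coding records, one per comment id, each carrying the four dimensions shown in the table (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed and indexed by comment id — the field names are taken from the output above, but `parse_codes` is a hypothetical helper, not part of any pipeline described here:

```python
import json

# Two records excerpted from the raw LLM response above, for illustration.
raw = (
    '[{"id":"ytc_UgyTrYVzYp3bYvwteAJ4AaABAg","responsibility":"none",'
    '"reasoning":"unclear","policy":"none","emotion":"indifference"},'
    '{"id":"ytc_UgzI5KamvfZlyw65TFV4AaABAg","responsibility":"ai_itself",'
    '"reasoning":"consequentialist","policy":"none","emotion":"fear"}]'
)

# The four coding dimensions plus the comment id, as seen in the output above.
EXPECTED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_codes(raw_text):
    """Parse the model's JSON array and index records by comment id,
    dropping any record that is missing one of the expected keys."""
    records = json.loads(raw_text)
    return {r["id"]: r for r in records if EXPECTED_KEYS <= r.keys()}

codes = parse_codes(raw)
print(codes["ytc_UgzI5KamvfZlyw65TFV4AaABAg"]["emotion"])  # fear
```

Indexing by id makes it easy to look up the record that corresponds to the comment displayed above and compare it against the table.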