Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
On a social level one thing that I think is absolutely needed right now is laws policing malicious and/or deceptive AI generated content. All of the big social media companies have the capability to integrate AI detection software into their platforms, but do we see youtube, tiktok, facebook, reddit, twitter, etc flagging content as "probably generated by AI"? No, because there's no consequences for them if they allow people to be deceived. Did you see the Neil Degrasse Tyson video where in it he claims he believes the world is flat? It's 100% realistic and completely believable, even Tyson himself in a response said that it was completely believable as him, EXCEPT for the fact that he does not believe that the world is flat. When a person can't even tell that the video of themselves isn't really them, we have a serious problem. To be clear, there is nothing wrong with fiction, we all consume fiction on a nearly daily basis, it's entertaining! But there is a world of difference between fiction, and fiction that we don't know is fiction. We need major governments to enforce laws that fine social media companies for allowing AI content that is unlabeled be posted on their platform. Sorry Google (the owners of the platform I'm typing this into) you know this needs to be done, our world is dancing on the knifes edge of not being able to know what is real and what isn't, you and other platforms like you are the only ones who can install an appropriate safeguard against it.
youtube 2025-11-07T04:5… ♥ 1
Coding Result
Dimension       Value
Responsibility  company
Reasoning       consequentialist
Policy          regulate
Emotion         outrage
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgzSp4WU0iS8uWcukaZ4AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "none",    "emotion": "approval"},
  {"id": "ytc_UgzY-qG9ChlDMoOmNSt4AaABAg", "responsibility": "unclear",   "reasoning": "unclear",          "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugw0Ep9q05sw4hUkEUd4AaABAg", "responsibility": "company",   "reasoning": "consequentialist", "policy": "regulate","emotion": "outrage"},
  {"id": "ytc_UgxVYU8LGl6ZTLyJEv54AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "none",    "emotion": "indifference"},
  {"id": "ytc_Ugz7r5wWsptOr4kzMhN4AaABAg", "responsibility": "user",      "reasoning": "unclear",          "policy": "none",    "emotion": "mixed"},
  {"id": "ytc_UgysGux8p9k_TVUHQit4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzzJpELAaMZSYNph0x4AaABAg", "responsibility": "none",      "reasoning": "unclear",          "policy": "none",    "emotion": "approval"},
  {"id": "ytc_UgyV_coD2-F297O0JKB4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear",          "policy": "none",    "emotion": "mixed"},
  {"id": "ytc_UgxaYiWbGSyEPUQYCYB4AaABAg", "responsibility": "company",   "reasoning": "deontological",    "policy": "none",    "emotion": "outrage"},
  {"id": "ytc_UgwEgowmtxjwyNSPyCt4AaABAg", "responsibility": "government","reasoning": "consequentialist", "policy": "ban",     "emotion": "fear"}
]
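The raw response above is a JSON array of per-comment coding records, each keyed by a comment id and carrying the four coding dimensions. A minimal sketch of how such a response could be parsed and the record for one comment looked up (plain Python with only the standard library; the `code_for` helper is hypothetical, not part of the coding tool, and the `raw` string below is an abbreviated excerpt of the response for illustration):

```python
import json

# Abbreviated excerpt of a raw LLM coding response (two of the ten records).
raw = """[
  {"id": "ytc_Ugw0Ep9q05sw4hUkEUd4AaABAg",
   "responsibility": "company", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgzSp4WU0iS8uWcukaZ4AaABAg",
   "responsibility": "none", "reasoning": "unclear",
   "policy": "none", "emotion": "approval"}
]"""

def code_for(records, comment_id):
    """Return the coding record for one comment id, or None if absent."""
    return next((r for r in records if r["id"] == comment_id), None)

records = json.loads(raw)
result = code_for(records, "ytc_Ugw0Ep9q05sw4hUkEUd4AaABAg")
print(result["policy"], result["emotion"])  # regulate outrage
```

A lookup like this is what lets the "Coding Result" table above be cross-checked against the exact model output: the displayed values should match the record whose id corresponds to the displayed comment.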