Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
who cares? if you want historically accurate pictures you dont go to an AI generator. these early AI tools have all kinds of issues. can we fix these before we start playing the social media buzzword racial finger pointing game. if you are frustrated that asking for a doctor gives you a male white guy most of the time then fix education so more women and people of color have equal chances in education and highly competitive fields like STEM, medicine and law. The training data for those AI systems comes from reality. AI is the mirror reflecting the real world. If you dont like what you see in the mirror then what you need to fix is reality. Not the mirror. Of course its a different story if the bias is inserted by the system designer. But the reality is that we mostly doing the opposite. We are already forcing diversity on training data that is not diverse. Which in turn creates more bias. Thats like trying to fix reality by bending the mirror that reflects it. Its just a pointless waste of everyones time. Just so a few people that have nothing to do but spend their whole day on social media to do their social justice warring have something to do with their time. Why do we care? If you ask the AI for fictional pictures then dont be surprised if fiction is what you get.
youtube 2024-02-29T16:2…
Coding Result
Dimension       Value
Responsibility  user
Reasoning       consequentialist
Policy          none
Emotion         resignation
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_Ugw1oFVqltjMo3InbNp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzblzMawLOljy4xvf14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugzn9wsopp5wJ_5US5N4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_Ugzet26WgxIeMlyJREl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgxdXyRCyFbYd0TqgRN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugxm70ACixh466sXOdF4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwL2-NkluacSvx4nSZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgzXjCPQ4ojyZYYwKUl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgxV5d7eCgdinOJ7ZxV4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugwr7eg239H3vZUOKoJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}
]
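The raw response is a JSON array of coding records, one per comment. A minimal sketch (in Python, assuming only the field names visible above; `find_record` is a hypothetical helper, not part of any pipeline shown here) of parsing the array and pulling out the record for a given comment id:

```python
import json

# A two-record excerpt of the raw LLM response shown above.
raw = """[
  {"id": "ytc_Ugxm70ACixh466sXOdF4AaABAg", "responsibility": "user",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxdXyRCyFbYd0TqgRN4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "indifference"}
]"""

records = json.loads(raw)

def find_record(records, comment_id):
    """Return the coding record whose 'id' matches comment_id, or None."""
    return next((r for r in records if r["id"] == comment_id), None)

rec = find_record(records, "ytc_Ugxm70ACixh466sXOdF4AaABAg")
print(rec["responsibility"], rec["emotion"])  # user resignation
```

Matching on the `id` field is what links each coded record back to the displayed comment and its Coding Result table.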