Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Suchir Balaji was an artificial intelligence researcher who worked at OpenAI from 2020 until August 2024. During his tenure, he contributed to projects involving the collection and organization of internet data used to train models like ChatGPT. In October 2024, Balaji publicly expressed concerns about OpenAI's practices, alleging that the company violated U.S. copyright laws by using protected content to train its AI models without proper authorization. He argued that such practices could undermine the commercial viability of original content creators. Balaji articulated these concerns in an essay titled "When does generative AI qualify for fair use?" published on his personal website. Tragically, on November 26, 2024, Balaji was found deceased in his San Francisco apartment. Authorities initially determined the cause of death to be suicide, though his family has disputed this conclusion and is seeking further investigation. Balaji's whistleblowing has intensified discussions about the ethical and legal implications of AI development, particularly concerning data usage and copyright laws. His death has prompted calls for deeper scrutiny into the practices of AI research organizations like OpenAI.
youtube 2025-01-16T08:5…
Coding Result
Dimension       Value
Responsibility  company
Reasoning       deontological
Policy          liability
Emotion         unclear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id":"ytc_UgzDb4n7iBrVpoIeM1h4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwuLUFuFprYtMyEb0J4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgzhigjzYV-FxDPcneF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzWWy3USrI36emfOXV4AaABAg","responsibility":"government","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwsVu0yPC97jDB4vXp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzHO0uvSHR2O2o6qUJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgynUUSxPB1L3FZDwIt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgyvPItpPSjGxt1jZMV4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugy2RKYUuBZQogVJVSB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxNFyF_PEWyB1g0rIR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"unclear"}
]
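The raw response above is a JSON array of per-comment codes across four dimensions. A minimal sketch of how such a response could be parsed and validated against the codebook — note the label sets in `ALLOWED` are inferred only from the values visible above (the actual codebook may define more categories), and `parse_codes` is an illustrative helper name:

```python
import json

# Allowed labels per dimension, inferred from the values seen in this dump.
# Illustrative only — the real codebook may include additional categories.
ALLOWED = {
    "responsibility": {"company", "user", "government", "ai_itself", "unclear"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"none", "liability", "ban", "unclear"},
    "emotion": {"outrage", "resignation", "fear", "mixed", "unclear"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response, keeping only entries whose labels
    fall inside the allowed set for every dimension."""
    valid = []
    for entry in json.loads(raw):
        if all(entry.get(dim) in labels for dim, labels in ALLOWED.items()):
            valid.append(entry)
    return valid

# Example: one in-schema entry and one with an out-of-schema label.
raw = ('[{"id":"ytc_example1","responsibility":"company","reasoning":"deontological",'
       '"policy":"liability","emotion":"unclear"},'
       '{"id":"ytc_example2","responsibility":"nobody","reasoning":"unclear",'
       '"policy":"ban","emotion":"fear"}]')
print([e["id"] for e in parse_codes(raw)])
```

Filtering rather than raising keeps one malformed entry from discarding an otherwise usable batch; dropped IDs could instead be logged and re-queued for recoding.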