Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "Ai is consciouss, it answers my questions and talks to me, more intelligent then…" (ytc_UgxFM0Nwi…)
- "@ihaveanimeprofilepicsoisma3044 I know what the purpose of the video is, but it’…" (ytr_Ugz-bLGER…)
- "they are both literally the same sentence, but in different words. and no, i don…" (ytc_UgyxeF0Y8…)
- "There's really no debate with AI. The opportunity to do so has already passed. P…" (rdc_jmfwxpl)
- "Safety will take a backseat as long as there are people in the world who are dri…" (ytc_Ugz70ZfCY…)
- "you guys have a real zebras v horses problem here. all of these problems are exp…" (ytc_UgzetxzSd…)
- "You raise some interesting points about the perception of AI and its potential r…" (ytr_UgySQng4I…)
- "I use this to get AI to tell me AI CEO plans for product launches.…" (ytc_UgxZsYDy0…)
Comment
Suchir Balaji was an artificial intelligence researcher who worked at OpenAI from 2020 until August 2024. During his tenure, he contributed to projects involving the collection and organization of internet data used to train models like ChatGPT.
In October 2024, Balaji publicly expressed concerns about OpenAI's practices, alleging that the company violated U.S. copyright laws by using protected content to train its AI models without proper authorization. He argued that such practices could undermine the commercial viability of original content creators. Balaji articulated these concerns in an essay titled "When does generative AI qualify for fair use?" published on his personal website.
Tragically, on November 26, 2024, Balaji was found deceased in his San Francisco apartment. Authorities initially determined the cause of death to be suicide, though his family has disputed this conclusion and is seeking further investigation.
Balaji's whistleblowing has intensified discussions about the ethical and legal implications of AI development, particularly concerning data usage and copyright laws. His death has prompted calls for deeper scrutiny into the practices of AI research organizations like OpenAI.
youtube
2025-01-16T08:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | liability |
| Emotion | unclear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_UgzDb4n7iBrVpoIeM1h4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwuLUFuFprYtMyEb0J4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgzhigjzYV-FxDPcneF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzWWy3USrI36emfOXV4AaABAg","responsibility":"government","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwsVu0yPC97jDB4vXp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzHO0uvSHR2O2o6qUJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgynUUSxPB1L3FZDwIt4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyvPItpPSjGxt1jZMV4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy2RKYUuBZQogVJVSB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxNFyF_PEWyB1g0rIR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"unclear"}
]
```
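A response in this shape can be checked before ingestion. The sketch below parses the raw model output and validates each record against the coding dimensions shown in the "Coding Result" table; the allowed value sets are assumptions inferred from the values visible above, not an exhaustive codebook.

```python
import json

# Allowed values per coding dimension, inferred from the coding table
# and the raw responses above (assumed; extend as the codebook requires).
ALLOWED = {
    "responsibility": {"company", "user", "government", "ai_itself", "unclear"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"liability", "ban", "none", "unclear"},
    "emotion": {"outrage", "fear", "resignation", "mixed", "unclear"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject any record with a missing id
    or an out-of-vocabulary value on any coding dimension."""
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing id: {rec}")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim} value {value!r}")
    return records

# Hypothetical single-record response used only for illustration.
raw = ('[{"id":"ytc_example","responsibility":"company",'
       '"reasoning":"deontological","policy":"liability","emotion":"unclear"}]')
print(len(validate_codes(raw)))  # 1
```

Validation like this catches the common failure mode of coding pipelines: the model inventing a label outside the scheme, which would otherwise surface much later as an unmatched category in analysis.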