Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
> you're crazy if you think OpenAI and other LLM's aren't already using all of reddit scraped (for free). GPT2 was almost entirely trained off of Reddit data. "Instead, we created a new web scrape which emphasizes document quality. To do this we only scraped web pages which have been curated/filtered by humans. Manually filtering a full web scrape would be exceptionally expensive so as a starting point, we scraped all outbound links from Reddit, a social media platform, which received at least 3 karma. This can be thought of as a heuristic indicator for whether other users found the link interesting, educational, or just funny" https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf I still despise how popular chatGPT got, because OpenAI used to actually publish their damn research. The second that people realized how good their tech was getting and how much money was available, they closed everything up. I want to read about how they did Sora so badly but nope, those are secrets now and we're turning this into a black box. Sorry.
Posted to reddit (AI Responsibility) · 1708362813.0 (2024-02-19) · ♥ 21
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[{"id":"rdc_kr52u1i","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"rdc_kr58d86","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
 {"id":"rdc_kr5tqmo","responsibility":"company","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
 {"id":"rdc_kr526z0","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
 {"id":"rdc_kr5cfsh","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"approval"}]
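The raw response is a JSON array, one coding object per comment id, so the per-comment values in the table above can be recovered by parsing the batch and indexing on `id`. A minimal sketch (note: the original dump had its closing `]}` brackets transposed, which is corrected here so the string parses; the field names are taken from the response itself):

```python
import json

# Two entries from the batch above, with the bracket order fixed ("}]" not "]}").
raw = (
    '[{"id":"rdc_kr52u1i","responsibility":"none","reasoning":"unclear",'
    '"policy":"unclear","emotion":"indifference"},'
    '{"id":"rdc_kr58d86","responsibility":"company","reasoning":"consequentialist",'
    '"policy":"industry_self","emotion":"indifference"}]'
)

codings = json.loads(raw)

# Index the batch by comment id to look up any one comment's coded dimensions.
by_id = {c["id"]: c for c in codings}

print(by_id["rdc_kr58d86"]["policy"])  # -> industry_self
print(by_id["rdc_kr52u1i"]["emotion"])  # -> indifference
```

If a response fails to parse (as the uncorrected dump would), `json.loads` raises `json.JSONDecodeError`, which is a quick way to flag malformed model output before coding values are stored.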