Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
ytc_Ugz7u3qZG…: We are doing it with blinders on no matter what. We don’t understand artificial …
ytc_Ugy-FGQlN…: As someone who has an interest in IT, AI is one of those fields I want to dabble…
ytc_Ugwj-BcZn…: I'm seeing history repeat itself here on real time. These is the EXACT same lam…
ytc_Ugz510j8p…: we're toast, we're cooked. is their subconscious mind envisioning ai making huma…
ytc_Ugz6Z6RME…: Damn dude this raises some questions about the training sets used for facial rec…
ytc_UgwuRDqE4…: In 30 years when the AI OverLords take over, they are definitely gonna poison yo…
ytc_Ugz-DVjs9…: Matt Kaufman is so naturally disfigured like one of those british trolls in book…
ytc_Ugwsnfr6J…: I'm an AI/ML engineer and researcher, however I do not share the "big AI" sort o…
Comment
No what you are saying makes no sense for many reasons, so I will get straight at the issue. As an Ai platform grows in user count there is mounting pressure from the company to minimize the amount of compute spent on inference. how does this look? Well, it takes the form of smaller quantized models being served to the masses that masquerade as its predecessor. Whatever name the AI company uses is NOT what they give you after the first phase of the models roll out. Its a basic bait and switch. Roll out your SOTA model, get everyone using and talking about it to generate good PR. Then after a few weeks or a month or 2, swap out that model with a smaller quantized version. Its literally that simple, no conspiracy theories or any other nonsense. For more evidence of this interaction just look around the various AI subreddits like /bard for Gemini 2.5 pro swap out or any number of other bait and switch shenanigans throughout history...
Source: reddit
Tag: AI Harm Incident
Timestamp: 1747010904.0 (Unix epoch seconds)
Score: ♥ 8
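The record's timestamp is stored as raw Unix epoch seconds rather than a readable date. A minimal conversion sketch (not part of the original dump; uses only the standard library):

```python
from datetime import datetime, timezone

# Raw timestamp from the record above, in Unix epoch seconds.
raw_ts = 1747010904.0

# Convert to an aware UTC datetime for display.
ts = datetime.fromtimestamp(raw_ts, tz=timezone.utc)
print(ts.isoformat())  # → 2025-05-12T00:48:24+00:00
```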
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_mrv267f","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"rdc_mrut4mz","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"rdc_mru7bs2","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"rdc_mrum80h","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"rdc_mrvvwd5","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"}
]
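The raw response above is a JSON array with one object per coded comment. A minimal sketch of looking up one comment's coded dimensions from such a batch response (the `lookup` helper is hypothetical, not part of the tool; the data is copied from the response above):

```python
import json

# Raw batch response, copied verbatim from the model output above.
raw = """[
{"id":"rdc_mrv267f","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"rdc_mrut4mz","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"rdc_mru7bs2","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"rdc_mrum80h","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"rdc_mrvvwd5","responsibility":"company","reasoning":"deontological","policy":"unclear","emotion":"outrage"}
]"""

def lookup(raw_json: str, comment_id: str) -> dict:
    """Return the coded dimensions for one comment ID; raises KeyError if absent."""
    by_id = {row["id"]: row for row in json.loads(raw_json)}
    return by_id[comment_id]

coded = lookup(raw, "rdc_mru7bs2")
print(coded["reasoning"], coded["emotion"])  # → consequentialist indifference
```

This matches the Coding Result table above: the comment coded as `consequentialist` / `indifference` is the third entry in the batch.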