Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
The United States is just becoming a police state like China is, they really are…
ytc_UgwG1mvsK…
Need to ban ANY employment, labour, hr decisions from being made by ANY ai, auto…
ytc_UgwOamiHu…
05:28 haha... This is the exact kind of prompt that I will often give to "trick"…
ytc_UgxkG-GI7…
Well, I would agree that there's something about modern films that make them loo…
ytr_Ugy9HiSn9…
@echiko4932 agreed, and that is the stance I take too. The major difference betw…
ytr_UgymlH6Bz…
Result from Gemini: Traditional Islamic interpretations of Surah 9:29 center on …
ytr_UgxeKVwkc…
I am sorry you lost your job, but lets admit it, if your company settled for AI,…
ytr_UgzMB4jDS…
the important thing is that they were both made by humans through creative expre…
ytc_UgzGHgytH…
Comment
This represents one of the biggest fears I have about AI. I'm less worried about mass surveillance and automated weaponry, which is pretty fucking scary in and of itself. Rather, what makes me nervous are LLMs, which inherently have no ethical structure beyond the material they're trained on, and which can be specifically tweaked to support and push certain types of biases.
The example I keep coming back to is that of a person who is looking for someone else to blame for their own personal circumstances.
'Am I wrong to think that these fucking immigrants are taking our jobs and ruining our country? Seems like everywhere I look they're getting jobs and I'm not', a person might say to chatgpt, looking for consolation and validation.
'Yes,' it replies, 'you are right to feel that way. It's not just you. Lots of people feel that way, and you deserve to have what's rightfully yours.'
If the government is willing to lie to your face about anything and everything, it's doing so because it cares more about persuading you to support its agenda than about helping you get a better life. That means it would also be willing to control how frontier shops like openai and anthropic fine-tune their models and biases. These can be controlled through system instructions that guide the LLMs behind the scenes without a human ever even knowing.
They can easily become tools of mass persuasion, just as effective as or more so than social media bots tipping conversations in whatever direction the commander of the bot army chooses.
This is why I canceled my chatgpt account. It's because of the things I can imagine them doing, things I know the people in charge are sick enough to try to force companies like openai to do.
reddit · AI Harm Incident · 1772726437.0 · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id":"rdc_o8sndk2","responsibility":"company","reasoning":"mixed","policy":"liability","emotion":"fear"},
  {"id":"rdc_o8sqyi6","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"rdc_o8sr9fz","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"rdc_o8tbz00","responsibility":"company","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"rdc_o8wyzmp","responsibility":"ai_itself","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"}
]
```
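A raw batch like this has to be parsed and checked before the codings can be trusted. Below is a minimal sketch of that step; the `CODEBOOK` category sets are inferred only from the values visible in this sample and are an assumption, as the real codebook may contain values not shown here.

```python
import json

# Hypothetical codebook inferred from the sample responses above;
# the real category sets may include values not seen in this batch.
CODEBOOK = {
    "responsibility": {"developer", "company", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed"},
    "policy": {"regulate", "liability", "industry_self", "none"},
    "emotion": {"fear", "outrage", "indifference", "approval"},
}

def validate_batch(raw: str) -> dict:
    """Parse a raw LLM response and index valid codings by comment id."""
    coded = {}
    for row in json.loads(raw):
        for dim, allowed in CODEBOOK.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row.get('id')}: bad {dim}={row.get(dim)!r}")
        coded[row["id"]] = {dim: row[dim] for dim in CODEBOOK}
    return coded

raw = ('[{"id":"rdc_o8sr9fz","responsibility":"developer",'
       '"reasoning":"deontological","policy":"regulate","emotion":"fear"}]')
batch = validate_batch(raw)
print(batch["rdc_o8sr9fz"]["policy"])  # regulate
```

Rejecting the whole batch on a single bad value is deliberate: a response with an out-of-codebook label usually means the model drifted from the coding prompt, and the batch should be re-run rather than partially ingested.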