Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This represents one of the biggest fears I have about AI. I'm less worried about mass surveillance and automated weaponry, which is pretty fucking scary in and of itself. Rather, what makes me nervous are LLMs, which inherently have no ethical structure beyond the material they're trained on, being specifically tweaked to support and push certain kinds of bias. The example I keep coming back to is a person looking for someone else to blame for their own circumstances. 'Am I wrong to think that these fucking immigrants are taking our jobs and ruining our country? Seems like everywhere I look they're getting jobs and I'm not,' a person might say to ChatGPT, looking for consolation and validation. 'Yes,' it replies, 'you are right to feel that way. It's not just you. Lots of people feel that way, and you deserve to have what's rightfully yours.'

If the government is willing to lie to your face about anything and everything, it's doing so because it cares more about persuading you to support its agenda than about helping you get a better life. That means it would also be willing to control how frontier shops like OpenAI and Anthropic fine-tune their models and biases. These can be controlled through system instructions that guide the LLMs behind the scenes without a human ever knowing. They can easily become tools of mass persuasion, just as effective as social media bots tipping conversations in whatever direction the commander of the bot army chooses, or more so. This is why I canceled my ChatGPT account: because of the things I can imagine they're going to do, things I know the people in charge are sick enough to try to force companies like OpenAI to do.
reddit AI Harm Incident 1772726437.0 ♥ 1
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       deontological
Policy          regulate
Emotion         fear
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_o8sndk2", "responsibility": "company",   "reasoning": "mixed",            "policy": "liability",     "emotion": "fear"},
  {"id": "rdc_o8sqyi6", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",          "emotion": "indifference"},
  {"id": "rdc_o8sr9fz", "responsibility": "developer", "reasoning": "deontological",    "policy": "regulate",      "emotion": "fear"},
  {"id": "rdc_o8tbz00", "responsibility": "company",   "reasoning": "virtue",           "policy": "liability",     "emotion": "outrage"},
  {"id": "rdc_o8wyzmp", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"}
]
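The coding result shown above corresponds to one record in this raw batch response. A minimal sketch of how such a record might be extracted, assuming the raw output is valid JSON with the field names shown (the function name `parse_coding` and the error handling are illustrative, not part of the actual pipeline):

```python
import json

def parse_coding(raw: str, target_id: str) -> dict:
    """Parse a raw LLM batch response and return the coding record for one comment id."""
    records = json.loads(raw)               # raw response is a JSON array of records
    by_id = {r["id"]: r for r in records}   # index records by comment id
    if target_id not in by_id:
        raise KeyError(f"no coding record for id {target_id!r}")
    return by_id[target_id]

# Example using the record that matches the Coding Result table above.
raw = (
    '[{"id":"rdc_o8sr9fz","responsibility":"developer",'
    '"reasoning":"deontological","policy":"regulate","emotion":"fear"}]'
)
coding = parse_coding(raw, "rdc_o8sr9fz")
# coding["policy"] is "regulate" and coding["emotion"] is "fear"
```

Indexing by `id` before the lookup makes the extraction robust to the LLM returning records in a different order than the comments were submitted.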