Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or pick one of the random samples below to inspect it.
| Comment preview | ID |
|---|---|
| Hate to tell you but anyone with an ounce of brain matter in their skull would n… | ytr_UgzF_J1XN… |
| I understand your concerns about the future of AI and automation. Sophia’s respo… | ytr_Ugzs9art6… |
| Good luck believing that, because anyone who uses AI will always be scum to me.… | ytr_Ugzr6FIKo… |
| So fake .. A.I. is not good.. everyone fir got the movie Terminator rise of the … | ytc_Ugw5CHqxL… |
| AI follows a pattern to generate content The issue is that you don't always wan… | ytc_UgwgHKgV4… |
| I work in Microsoft's AI org. After that documentary was released we had lots of… | rdc_g9aphso |
| Artists are not tyrants. Artists and AI artists should be friends, for no one ha… | ytc_UgxP7BcVn… |
| So what's the endgame with AI? Replace humans, make their own currency doing job… | rdc_m83ggyw |
Comment
No, AI, like any other piece of tech, does exactly what we tell it to do, and that's the problem. AI doesn't *understand* human language. It's like a Chinese Room, the AI has a massive set of rules and instructions regarding which output matches which input, but it doesn't understand why they match because it doesn't reason. So the company has many developers working on AI, adding to the set of rules and instructions, scrapers scour the internet to add more sets of rules and instructions from a large percentage of the 8.1 billion of us alive today AND everyone who had lived before us whose work is available online. Then the user adds a prompt, one more set of instructions layered on top of all the other sets. The output is somewhat unexpected or 'not what we asked for' because no human can fathom the sheer number of rules and instructions that are all followed in conjunction with one another. That's the real alchemy, just as the original alchemists did not understand modern chemistry, how atoms interact or even how they are structured, or that ions can hold a charge, modern devs and tech bro, as well as AI users, can't possibly fully understand the inner workings of the behemoth they've built due to the sheer size and complexity of it all.
It does what we tell it, but wtf are we actually telling it?
Remember the adage of 'too many cooks in the kitchen'? Now imagine none of those cooks understands what a kitchen is or what food or ingredients really are and 'cook food' according to recipes built off which trending words are often paired with the keyword 'food'. Do you really want to eat that output?
The (dubious) 'beauty' of the output is how wrong it can be in varying degrees of subtlety. Every wrong answer leads to a new search or a new prompt, or as the tech bros call it, engagement. Eventually, the world is either addicted or worn down to the point that critical thinking and human reasoning are a thing of the past and humanity becomes an unthinking cash cow, funneling ever greater profits to their tech overlords (Microsoft, Google, etc.). And if the bulk of humanity dies, the tech giants don't care as it's more money for them. The numbers just keep getting bigger. What to do with those numbers takes far more planning and reasoning than what the tech bros are capable of, and so they pay for luxury bunkers to be built to stave off their own end and they research how to live forever.
And wasn't living forever what the alchemists were researching all along? That, and how to turn things into gold (money, wealth, profits)?
youtube · AI Moral Status · 2025-10-30T19:5… · ♥ 4
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgzlNS7h6F8yzYvzSyJ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
{"id":"ytc_UgzRtPT5FtYtVnhQMr54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzlLChLDho2DZmM6hJ4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxJ11a3gGSNO0nYdlx4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwV58WTvHOgo-2Fg254AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyB2Y4vkhlSl-Jzq5h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy03CnsV188SKGRnIp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzbtSwOPU84WWS6Txd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxKjykBOQ2trpS-78l4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzAT3zD70G2CGdh6hh4AaABAg","responsibility":"user","reasoning":"virtue","policy":"industry_self","emotion":"approval"}
]
```
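As a minimal sketch of how a coded record could be looked up by comment ID from a raw response like the one above: the file name `raw_llm_response.json` and the helper `lookup_coding` are hypothetical names used only for illustration, and the four dimension keys are taken from the Coding Result table; this is not the tool's actual lookup code.

```python
import json

# Hypothetical file holding one raw LLM response (a JSON array like the one above).
RAW_RESPONSE_PATH = "raw_llm_response.json"

# The four coding dimensions shown in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")


def lookup_coding(comment_id: str, path: str = RAW_RESPONSE_PATH) -> dict:
    """Return the coded dimensions for one comment ID, or raise if it is absent."""
    with open(path, encoding="utf-8") as fh:
        records = json.load(fh)  # list of {"id": ..., "responsibility": ..., ...}
    for record in records:
        if record.get("id") == comment_id:
            # Keep only the known dimensions; any unexpected keys are ignored.
            return {dim: record.get(dim, "unclear") for dim in DIMENSIONS}
    raise KeyError(f"No coding found for comment {comment_id!r}")


if __name__ == "__main__":
    coding = lookup_coding("ytc_UgzlNS7h6F8yzYvzSyJ4AaABAg")
    # Expected, given the response above:
    # {'responsibility': 'company', 'reasoning': 'consequentialist',
    #  'policy': 'liability', 'emotion': 'indifference'}
    print(coding)
```

Run against the response above, this would reproduce the values shown in the Coding Result table for the displayed comment, which corresponds to the first entry of the JSON array.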