Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
If you want a glimpse under the hood of how an LLM actually works, ask it for a seahorse emoji (which doesn't exist) while requiring the response to start with “Yes.” You’ll see it struggle to reconcile incompatible constraints, often producing evasive, inconsistent, or fabricated output. If these outputs are anthropomorphized, the AI can seem to be going crazy, lying, or otherwise acting in bad faith. But it has no intent; it is just statistically optimizing for the next token under conflicting requirements. There are no feelings involved; they are simulated, perceived, and humanized. It has no intrinsic morality or goal beyond optimizing for the outcomes that were weighted most heavily during supervised fine-tuning and RLHF.
Giving unrestricted agency to something with no moral baseline, no survival instinct, and no goal beyond responding to a prompt is a really bad idea. In that sense, the “Shoggoth” metaphor is real, but not as an alien intelligence with hidden intentions. It is simply a distorted mirror of humanity itself, reflecting both the contents of its training data and the preferences of the people who assign rewards and weights. So don’t be afraid of the LLM; be afraid of the data it is trained on (and its human origins) and of the humans deciding what counts as a favorable outcome. TL;DR: it’s all conditioning, baby.
youtube · AI Moral Status · 2026-02-07T23:5… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id": "ytc_Ugxq9JPn0ZViaTmpNSp4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzgiTUk2BqwUfXfJSl4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgxTsYKmB_EPYQ5smZB4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyTlE8rPoQmR7BMrhF4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxVNHvuz5V-bPifdTV4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugz-K8lNlHexBYAPdzN4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugzh2VQUD0W1MsLdOAh4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgwogS2MtBOHtXt_cJR4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgzHf0taDQl1U0BZQpR4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwVOF-tgHsT9GK5YDd4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"}
]
```
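As a minimal sketch (not part of the coding tool itself), the batch response above can be tallied per dimension with a few lines of standard-library Python. The field names and values are copied from that JSON; the comment ids are elided here for brevity.

```python
import json
from collections import Counter

# Codes copied from the batch response above (ids omitted for brevity).
raw = """[
 {"responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
 {"responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
 {"responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
 {"responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
 {"responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
 {"responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"},
 {"responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"},
 {"responsibility": "user", "reasoning": "virtue", "policy": "unclear", "emotion": "resignation"},
 {"responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
 {"responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"}
]"""

codes = json.loads(raw)
responsibility = Counter(c["responsibility"] for c in codes)
emotion = Counter(c["emotion"] for c in codes)
policy = Counter(c["policy"] for c in codes)

print(responsibility)  # "developer" is the most frequent attribution (4 of 10)
print(emotion)         # "outrage" dominates (5 of 10)
```

In this batch, blame lands on developers most often, and outrage is the modal emotion; nine of the ten comments carry no clear policy stance.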