Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If you want a glimpse under the hood of how an LLM actually works, ask it for a seahorse emoji (which doesn't exist) while requiring the response to start with “Yes.” You’ll see it struggle to reconcile incompatible constraints, often producing evasive, inconsistent, or fabricated outputs. If these outputs are anthropomorphized, they might seem like the AI is going crazy, lying, or otherwise committing some form of malpractice. But it has no intent; it is just statistically optimizing for the next token under conflicting requirements. No feelings or anything like that; it's all simulated, perceived, and humanized.

It has no intrinsic morality or goal other than optimizing for the outcomes given the highest weight during supervised fine-tuning and RLHF training. Giving unrestricted agency to something that has no moral baseline, survival instinct, or any goal other than responding to a prompt is a really bad idea.

In that sense, the “Shoggoth” metaphor is real, but not as an alien intelligence with hidden intentions. It is simply a distorted mirror of humanity itself, reflecting both the contents of its training data and the preferences of the people who assign rewards and weights. So don’t be afraid of the LLM; instead, be afraid of the data it is trained on (and its human origins) and the humans deciding what counts as a favorable outcome.

TL;DR It's all conditioning, baby.
Source: youtube · Video: AI Moral Status · Posted: 2026-02-07T23:5… · ♥ 1
Coding Result
Dimension        Value
Responsibility   none
Reasoning        unclear
Policy           unclear
Emotion          indifference
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_Ugxq9JPn0ZViaTmpNSp4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzgiTUk2BqwUfXfJSl4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgxTsYKmB_EPYQ5smZB4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgyTlE8rPoQmR7BMrhF4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxVNHvuz5V-bPifdTV4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugz-K8lNlHexBYAPdzN4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_Ugzh2VQUD0W1MsLdOAh4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgwogS2MtBOHtXt_cJR4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgzHf0taDQl1U0BZQpR4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwVOF-tgHsT9GK5YDd4AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"}
]
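When inspecting a raw response like the one above, it can help to parse it and check every record against the coding scheme before trusting the values. The sketch below is a minimal, hypothetical validator: the allowed category sets are inferred only from this one sample (the real codebook may define more), and the function name is illustrative, not part of any tool shown here.

```python
import json

# Allowed values per dimension, inferred from this sample response only.
# Assumption: the actual codebook may include additional categories.
ALLOWED = {
    "responsibility": {"none", "developer", "ai_itself", "user", "company"},
    "reasoning": {"unclear", "deontological", "consequentialist", "virtue"},
    "policy": {"unclear", "none"},
    "emotion": {"indifference", "outrage", "fear", "resignation"},
}

def validate_coding(raw: str) -> list[str]:
    """Parse a raw LLM response (a JSON array of coded comments) and
    return a list of human-readable problems for any out-of-schema value."""
    problems = []
    for record in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            value = record.get(dim)
            if value not in allowed:
                problems.append(f"{record.get('id')}: {dim}={value!r}")
    return problems
```

An empty return list means every record used only known categories; anything else flags exactly which comment and dimension drifted from the scheme, which is useful when the model invents a label under conflicting instructions.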