Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- ytc_Ugz3QrshX… — ChatGPT isn't the character it portrays: all ChatGPT does is attempt to write th…
- ytc_UgyAHoHL8… — I worked for a traditional centralized company for over 35 years. NONE of the c…
- ytc_UgxsuTswI… — It's so funny that AI companies complain about AI poisoning tools, when they sho…
- ytc_Ugz9aBtSk… — N.B. this "woman" says nothing about whether it is just, moral, correct, fitting…
- ytc_Ugy9G8lsg… — If those AI bros can call themselves artists, can I call myself a game designer …
- ytc_Ugy_dnXYn… — Imagine having an AI agent who acts like your customer support agent who interac…
- ytc_UgyL5Vca0… — The conversation completely ignores the actual functioning of current AI systems…
- ytr_Ugw2doqyr… — My issue is that it IS faster than us. Art takes time. My compositions take time…
Comment
LLM AIs don't lie. They use probability to find the word/token that works best in the context of your prompt(s) and what the model has already written. It's basically a huge database in a hyperdimensional space, with each dimension representing a different context in which words relate to each other. Words that are closer together have a greater probability of coming up in the same sentence, and the inverse also holds: words farther apart are less likely to co-occur. These LLMs are really good at communicating because they learn the best way to communicate, generalized over a lot of words and a lot of contexts (over 12,000 dimensions; try to visualize that spatially).
I study Computer Science, and to me it's really important to de-abstract this kind of jargon, because it often misrepresents what the models actually do. The same goes for "hallucinations": the term gives a misaligned abstraction of what the technology actually does. For the general public to be better informed, we should get better at talking about it as a piece of technology rather than as a living creature.
I get why we want to use human terms to describe the models: if we judged the output as if it came from a human being, it would be lying. But pick that apart a bit. The model is finding the style and combinations of words it has previously been more successful with. You cannot lie without intent. If I say something that isn't true, but I believe it to be true, am I lying? Take away the belief; am I lying then? I don't believe so, but I could be lying...
The most abstract way I would explain how these LLMs work is: it's AI that imitates our perception of what AGI (Artificial General Intelligence) is, because that is what we're training it to do. Whether that will lead us to true AGI is not something any expert knows or has proven.
youtube · AI Governance · 2025-11-26T21:1… · ♥ 33
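The comment above describes next-token choice in terms of embedding proximity: words closer together in the embedding space are more likely to co-occur. A minimal, hypothetical sketch of that idea, using toy 3-dimensional vectors and cosine similarity (real models use thousands of dimensions and far more machinery than this):

```python
import math

# Toy 3-d "embeddings" -- purely illustrative values, not from any real model.
# The comment mentions models with over 12,000 dimensions.
embeddings = {
    "cat":        [0.9, 0.1, 0.0],
    "dog":        [0.8, 0.2, 0.1],
    "carburetor": [0.0, 0.1, 0.9],
}

def cosine(a, b):
    """Cosine similarity: 1.0 for identical directions, ~0 for unrelated ones."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Words closer together in the space score as more similar, which (loosely)
# corresponds to a higher probability of appearing in the same context.
print(cosine(embeddings["cat"], embeddings["dog"]) >
      cosine(embeddings["cat"], embeddings["carburetor"]))  # True
```

This is only the geometric intuition; an actual LLM produces a probability distribution over its whole vocabulary at each step rather than comparing pairs of words directly.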
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_UgxPo0hIRTQ921Jnled4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxWn8b4A-EuABGyNtF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwnZGMbZUNe0u8S-nR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyhMfYyGyYnU31qgN14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwXNLVUBTKgKhC_aSF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy0gkY9YKs4-WClrbF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxcDfCD4b3wfcrWK_p4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxgCU7bY8hnLnIUD8t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwZTecmqJLoPT5ORGZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxobctoSh9O1yYWn6l4AaABAg","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
```
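A batch response like the one above can be parsed and checked against the codebook before the values reach the results table. The sketch below is hypothetical: the allowed-value sets are inferred from the values visible on this page, not from an official schema, and `validate_batch` is an illustrative helper, not part of the actual pipeline.

```python
import json

# Allowed values per coding dimension, inferred from this page's coding
# results (hypothetical -- the real codebook may define more categories).
ALLOWED = {
    "responsibility": {"none", "developer", "user", "distributed", "ai_itself"},
    "reasoning":      {"unclear", "deontological", "consequentialist"},
    "policy":         {"unclear", "regulate", "none"},
    "emotion":        {"indifference", "outrage", "fear", "mixed", "approval"},
}

def validate_batch(raw: str) -> dict:
    """Parse a raw model response and index valid rows by comment id.

    Rows with a missing id or any out-of-vocabulary value are dropped,
    so a malformed model response cannot silently corrupt the results.
    """
    coded = {}
    for row in json.loads(raw):
        cid = row.get("id", "")
        bad = [dim for dim in ALLOWED if row.get(dim) not in ALLOWED[dim]]
        if cid and not bad:
            coded[cid] = {dim: row[dim] for dim in ALLOWED}
    return coded

raw = ('[{"id":"ytc_example","responsibility":"none","reasoning":"unclear",'
       '"policy":"unclear","emotion":"indifference"}]')
print(len(validate_batch(raw)))  # 1
```

Keying the output by comment id also makes the "look up by comment ID" view a plain dictionary lookup.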