Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
LLM AIs don't lie. They use probability to find the word or token that works best in the context of your prompts and what they have already written. It's basically a huge database in a hyperdimensional space, with each dimension representing a different context in which words relate to one another. The closer two words sit, the more likely they are to come up together in a sentence, and the farther apart, the less likely. These LLMs are really good at communicating because they learn the most effective ways to communicate and have generalized over a huge number of words and contexts (over 12,000 dimensions; try to visualize that spatially).

I study Computer Science, and to me it's really important to de-abstract this jargon, because the terms often misrepresent what the models actually do. The same goes for "hallucinations": it's a misaligned abstraction of what the technology actually does. For the general public to be better informed, we should talk about these models as pieces of technology rather than living creatures. I get why we want to use human terms to describe them: if we judged what a model does as if it were a human being, it would be lying. But pick it apart a bit. It's finding the styles and combinations of words it has previously been most successful with.

You cannot lie without intent. If I say something that isn't true but believe that it is, am I lying? Take away the belief: am I lying then? I don't believe so, but I could be lying... The most abstract way I would explain how LLMs work is this: it's AI that imitates our perception of what AGI (Artificial General Intelligence) is, because that is what we're training it to do. Whether that will lead us to true AGI is not something any expert knows or has proven.
YouTube · AI Governance · 2025-11-26T21:1… · ♥ 33
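The spatial picture in the comment (nearby embeddings mean a higher chance of co-occurring) can be made concrete. Below is a minimal sketch with made-up four-dimensional vectors and a toy three-word vocabulary, nothing from any real model: it scores candidates by cosine similarity to a context vector and normalizes with a softmax, so nearer vectors receive more probability mass.

```python
import numpy as np

# Hypothetical 4-dimensional embeddings; real models use thousands of
# dimensions (the comment cites over 12,000).
vocab = {
    "king":   np.array([0.90, 0.80, 0.10, 0.00]),
    "queen":  np.array([0.85, 0.75, 0.20, 0.05]),
    "banana": np.array([0.00, 0.10, 0.90, 0.80]),
}

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity: 1.0 for identical directions, near 0 for unrelated.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def next_token_probs(context_vec: np.ndarray) -> dict[str, float]:
    # Score every vocabulary item by similarity to the context, then
    # normalize with a softmax to get a probability distribution.
    words = list(vocab)
    scores = np.array([cosine(context_vec, vocab[w]) for w in words])
    exp = np.exp(scores - scores.max())
    return dict(zip(words, exp / exp.sum()))

print(next_token_probs(vocab["king"]))
```

Running it shows "queen" (a nearby vector) getting more probability mass than "banana" (a distant one), which is exactly the distance-to-probability intuition the comment describes.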
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          unclear
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[ {"id":"ytc_UgxPo0hIRTQ921Jnled4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgxWn8b4A-EuABGyNtF4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"}, {"id":"ytc_UgwnZGMbZUNe0u8S-nR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgyhMfYyGyYnU31qgN14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"}, {"id":"ytc_UgwXNLVUBTKgKhC_aSF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugy0gkY9YKs4-WClrbF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgxcDfCD4b3wfcrWK_p4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgxgCU7bY8hnLnIUD8t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgwZTecmqJLoPT5ORGZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgxobctoSh9O1yYWn6l4AaABAg","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"mixed"} ]