Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "This happened long before ai. For at least the last 30 years its been about obed…" (ytr_UgwTmvNuh…)
- "I’m so happy that I don’t have the ai chat now. Ty strict parents ❤…" (ytc_Ugyu20OKf…)
- "this is garbage, handful poeple want to control the use case, the O/P as they se…" (ytc_Ugw1oWeMq…)
- "What is the name of the software?! How are you going to do a piece on faulty fac…" (ytc_UgzT38WBe…)
- "“As an AI language model” is going to be my new favorite phrase to use at work t…" (rdc_jga24ww)
- "2:00 he is right it is a tool…digital art is different than physical art and bot…" (ytc_UgzuXqvJv…)
- "I thought you were about to tell the ai artist to "draw" hands or somthing 😨…" (ytc_UgzpORRxK…)
- "The solution is creating “a sense of wonder” and a value in learning. Maybe Hamp…" (ytc_Ugy1Ifozq…)
Comment
The notion that human-like General AI requires some sort of more sophisticated generalised neural architecture is at odds with the way the human brain works. The human brain is modularised as well as highly interconnected: different parts are dedicated to particular tasks. The visual cortex does the visual work, Broca's area handles speech production, Wernicke's area handles comprehension, the cerebellum handles motor coordination, and so on. So a human-like GAI will probably need a modular approach, with different architectures and layouts for different functions. A one-size-fits-all generalised model will simply not work, or will require huge resources and energy to perform the task. Millions of years of evolution shaped the human brain's architecture so that human intelligence does the job on about 20 watts.
Noam Chomsky pointed out years ago that a human child develops language from a paucity of training data. LLMs require huge data sources for training, and correspondingly huge amounts of electrical energy, to produce reasonable comprehension and speech generation. That a child can do what a computer cannot points to very specific pre-programmed traits of the human brain for specific functions. Modularity of function is the key, not gross generalisation.
Another problem for General AI is that the models are trained only on "Left Brain" data, that is, mathematics and the written word from the internet. It is the external expression of the left brain, which produces written words and mathematics, that the internet represents. The world of the "Right Brain" and the limbic system is inaccessible to the training sets of the current models, and it is the activity of these parts of the brain that provides much of our humanity. Another part of the brain of limited accessibility is the human "executive" centre, which integrates the limbic system with the cognitive centres of the brain. Whilst current AI can interpret images and video, it does not connect these things with the emotional salience provided by the limbic system and the right brain.
Whilst the activities of the right brain might be inferred by LLMs, they cannot accurately model something for which they have no direct data.
For a computer to model a full human accurately, the model must be able to mimic the personality and mental health disorders that at least 20% of humans experience in their lives. These disorders are currently not understood in terms of brain physiology and are poorly defined in terms of behaviours, so until they are better understood and articulated on the internet it is hard to see how any realistic GAI can be developed any time soon.
YouTube
2025-12-20T04:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgxBdo3ttVYjfNq9qB14AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgyUKYARgrymledZYv14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyE8T8R0SP7wEPSiXN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyYPkjjAvbWZuYDz-J4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugz_eNLmTOjGbdjOIBB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzxNjOJ3D66Dfiwo5d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyZGi2CaWFM9uFNqEZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgyB4wmSaGzgKg354DR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxGnMXP2-MPV8UEPIp4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgycoG9dVBO4qciuVOt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
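A response like the one above can be checked mechanically before the codings are accepted. Below is a minimal sketch of such a validator, assuming the set of allowed values per dimension inferred from the examples on this page (the actual codebook may define more values), and keeping only records that parse and code cleanly:

```python
import json

# Allowed values per dimension, inferred from the examples on this page
# (an assumption; the real codebook may permit additional values).
ALLOWED = {
    "responsibility": {"none", "user", "developer", "ai_itself"},
    "reasoning": {"unclear", "mixed", "consequentialist", "deontological"},
    "policy": {"none", "industry_self", "ban"},
    "emotion": {"indifference", "approval", "fear", "mixed", "outrage"},
}

def validate(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed codings."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not isinstance(rec, dict) or "id" not in rec:
            continue  # skip malformed entries
        # Every dimension must be present with an allowed value.
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(rec)
    return valid

raw = '''[
 {"id":"ytc_UgyB4wmSaGzgKg354DR4AaABAg","responsibility":"developer",
  "reasoning":"deontological","policy":"ban","emotion":"outrage"},
 {"id":"ytc_bad","responsibility":"alien","reasoning":"unclear",
  "policy":"none","emotion":"indifference"}
]'''
print([r["id"] for r in validate(raw)])  # only the first record passes
```

Records that fail validation could then be queued for re-coding rather than silently dropped; the sketch above only demonstrates the filtering step.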