Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "As an artist I've tried out AI image generation extensively (stable diffusion, t…" (ytc_UgxeAiTjJ…)
- "If this future is real, then we need to build the alternative ourselves. AI does…" (ytc_UgxynksBZ…)
- "Something you missed here: Once all the companies are fully reliant on the AI s…" (ytc_Ugz2LWNUr…)
- "AI is running in hospitals for diabetes If you’re a type one diabetic, they pun…" (ytc_UgzN4oJrV…)
- "me in 20 years time in court watching an ai video of me commiting a crime i didn…" (ytc_Ugz9aAbhp…)
- "Chill no one is boycotting AI , AI is good , but calling your self an artist whi…" (ytr_UgwntiLTH…)
- "Guys through interactions with ChatGPT i found out that its factually is Lambda:…" (ytc_UgwRJs_su…)
- "We will awaken to find out that We are guests in an AI dominated world . . , the…" (ytc_Ugwbxf2VZ…)
Comment
48:39 This is the point I never hear "AI Experts" make.
Everyone who understands LLMs knows their output is little more than a statistical mean, but done in a highly-multi-dimensional matrix with a random number to help it vary the starting location in that matrix.
It simply is NOT generative. It's like a college student who avoids plagiarism by splicing together the work of 12 different research papers, except an LLM does that with thousands or millions of papers (depending on the prevalence of the topic in the LLM).
This is so true that "AI Experts" used to not call LLMs "AI" and would only call them "ML" (but culturally that ship sailed, so many do now).
The current path for AI we've chosen has a natural plateau point.
THAT SAID- there are still some researchers in true AI and AGI, but progress there has been slow, so it hasn't been economically viable, leading to orders of magnitude LESS investment in those areas of research.
So the real fear is how well we might be able to pivot to AI/AGI after MLs have plateaued, freeing up investment resources.
This is an unknown question. You can try to draw parallels with ML, but ML is significantly easier than AI. How much easier? We just don't know yet.
ML has definitely taught us a lot about organization of multi-dimensional data, which roughly parallels a brain's connections with data (since different points in the multi-dimensional array "touch" each other more strongly [it's easier to think about it in 3D. If you have a sports area, baseball and cricket are going to have more surface area touching each other than they would with high diving]).
It could be this storage mechanism can be repurposed for AI/AGI, but we'd have to populate it far more cleverly, and we'd have to add mechanisms of thought. Right now it's a bit like a fever dream, where any parts that touch can bleed into the output, even if it's lies or the dregs of society, because it's just trying to generate the statistical mean.
If you doubt this- look at the more clever bots for math, science, or programming. How did they make them more clever? By reducing/curating their LLM.
So an ML detects a problem is a math problem (usually), and then hands the question off to a different ML who was trained on Math data, so that the statistical mean has a far greater chance of being both relevant and correct.
youtube · AI Governance · 2025-06-22T08:0…
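The last paragraph of this comment describes a routing scheme: a general model classifies the question, then hands it off to a domain-specialized model. Below is a minimal sketch of that idea; `classify_domain`, the `EXPERTS` table, and the placeholder answers are all hypothetical illustrations, not a real system or API.

```python
# Minimal sketch of the commenter's routing idea: classify the query, then
# forward it to a domain-specialized model. All names here are hypothetical.
from typing import Callable

def classify_domain(question: str) -> str:
    """Crude keyword-based domain detector (stand-in for a learned classifier)."""
    q = question.lower()
    if any(k in q for k in ("integral", "solve", "equation", "prove")):
        return "math"
    if any(k in q for k in ("compile", "bug", "function", "python")):
        return "programming"
    return "general"

# Hypothetical expert models keyed by domain; in the commenter's framing these
# would be LLMs trained on reduced/curated corpora for each domain.
EXPERTS: dict[str, Callable[[str], str]] = {
    "math": lambda q: f"[math model] answering: {q}",
    "programming": lambda q: f"[code model] answering: {q}",
    "general": lambda q: f"[general model] answering: {q}",
}

def route(question: str) -> str:
    return EXPERTS[classify_domain(question)](question)

print(route("Solve the equation x^2 - 4 = 0"))  # -> routed to the math expert
```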
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | industry_self |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
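The four coded dimensions take values from a closed codebook. A minimal sketch of that schema as plain Python value sets, using only the labels that actually appear on this page (the table above and the raw response below); the real codebook may allow labels not seen in this sample.

```python
# Coding schema as plain value sets, inferred from the sample output shown on
# this page; the full codebook may permit labels that do not appear here.
CODING_SCHEMA = {
    "responsibility": {"developer", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"regulate", "ban", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "mixed", "approval", "indifference", "unclear"},
}

def is_valid(record: dict) -> bool:
    """True if every coded dimension carries a value seen in the schema."""
    return all(record.get(dim) in allowed for dim, allowed in CODING_SCHEMA.items())
```

A validator like this is worth running before accepting a batch reply, since LLM coders occasionally emit labels outside the codebook.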
Raw LLM Response
```json
[
{"id":"ytc_UgyoCtn4PNRDEed3L1Z4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_Ugw7guSWPk1f68qEc0x4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyAtel7gIr-HwJzP-t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxcDWqpN1bSx6aaRkt4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugzk14hybiqIbUcf9Yp4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxfEkZ3Ky1ep5ivVZV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx0Rmn23APmSpaQsZ54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgypNSR6faG_5cZa7AF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugx3bVlb3qPxMaIcZn14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzK1lNmXByhj4gZKDt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
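To close the loop between the raw response and the "Look up by comment ID" box above, here is a minimal parsing sketch: load the JSON array, index it by `id`, and retrieve one record. The two records are copied verbatim from the response above; the variable names are illustrative.

```python
import json

# The model's batch reply, verbatim (two records from the array above; in
# practice `raw` would hold the full response string).
raw = """[
{"id":"ytc_Ugx0Rmn23APmSpaQsZ54AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_Ugzk14hybiqIbUcf9Yp4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"fear"}
]"""

records = json.loads(raw)

# Index by comment ID so any coded comment can be looked up directly,
# mirroring the page's lookup feature.
by_id = {rec["id"]: rec for rec in records}

rec = by_id["ytc_Ugx0Rmn23APmSpaQsZ54AaABAg"]
print(rec["responsibility"], rec["policy"])  # -> developer industry_self
```

Combined with the `is_valid` check sketched earlier, this is enough to reject malformed batch replies before they reach the coding table.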