Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Here are my questions for those who think that ai is considered human 1. Does ai…" (ytc_UgyUDl9l8…)
- "you better look again, one robot wears backpack, gets knocked down and gets b…" (ytr_UgzqlAYVc…)
- "If teachers use AI to check papers, why shouldn't students use the AI instead of…" (ytc_UgymwVkge…)
- "It's already being used for good. And I definitely want a self-driving truck. Wh…" (ytr_UgzwLL7z3…)
- "You should get some AI knowledge. Take an AI course, then you'll find out how AI…" (ytc_Ugyuho-CJ…)
- "I noticed at 15:36 that the speakers are calling AI "they". For me it is an it…" (ytc_Ugx3Zjw9T…)
- "You don't need 50 pages to explain how this goes bad. At the point at which AI n…" (ytc_UgyBtDptm…)
- "This is the first video ive seen of yours. I just read your YT handle... I burst…" (ytc_UgzWs1pba…)
Comment
There's a big difference between the kind of model that suffices for agentic use vs frontier models. The US focuses on frontier models with the highest intelligence, which is why the US leads in cutting edge models like Mythos. The AI majors and hyperscalers have been focused on training and inference which can only run on high end expensive hardware. The models literally cannot fit in the memory of a commodity GPU.
But a commodity video card can run Deepseek or Qwen with a modest parameter count. They're not as smart, they get more things wrong. But we don't need Einstein for 99% of tasks, and it would be inefficient to pay professors to moderate comment sections. Agents need to do simple tasks like evaluate the content or tone of a message and categorize it or trigger an action. US hardware would be wasted on this task and we've ceded this low margin business because it is entirely contingent on the cost of energy.
The real metric is watts per token, and China's national energy policy, executed over decades, is paying dividends now. Cheap power lets their less efficient token generation still compete on watts per token, and that's why they're dominating the agentic inference market.
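The commenter's "watts per token" metric can be made concrete: watts are joules per second, so dividing power draw by throughput gives energy per token. A minimal sketch of that arithmetic, with every number a made-up assumption rather than a measurement of any real hardware:

```python
# Illustrative only: energy cost per generated token for two hypothetical setups.
# All figures below are invented assumptions, not benchmarks.

def joules_per_token(power_watts: float, tokens_per_second: float) -> float:
    """Watts = joules/second, so dividing by tokens/second yields joules/token."""
    return power_watts / tokens_per_second

# Hypothetical high-end accelerator: 700 W draw, 50 tokens/s
frontier = joules_per_token(700.0, 50.0)    # 14.0 J/token

# Hypothetical commodity GPU running a small model: 200 W draw, 40 tokens/s
commodity = joules_per_token(200.0, 40.0)   # 5.0 J/token

print(f"frontier:  {frontier:.1f} J/token")
print(f"commodity: {commodity:.1f} J/token")
```

On these invented numbers, the slower commodity card still wins on energy per token, which is the comparison the comment is making.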
youtube
AI Governance
2026-04-21T15:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[{"id":"ytc_UgzfSnzWst02KsznDNp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzY8mdUvNIObd0uSLN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugy0AO0fNWQwyXR23b54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgzZ-nltPK1eP9yln694AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgzlI_R6I6jh169VVr94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgyuU8SMkG6McRhNgaF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"outrage"},
{"id":"ytc_UgyUA_DQcDFZ6VUdb214AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzHcRRdHIF518tDVil4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwwPhpUcnxHfa8b2cF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgzHy8eKwczW26pQIG54AaABAg","responsibility":"none","reasoning":"unclear","policy":"ban","emotion":"outrage"}]
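The "Coding Result" table above presumably comes from matching one entry in this raw JSON array by comment ID. A minimal sketch of that lookup, assuming the model returned valid JSON (the two records below are copied from the array above; a real response may need error handling for malformed output):

```python
import json

# Two records copied verbatim from the raw LLM response above.
raw = '''[{"id":"ytc_UgzfSnzWst02KsznDNp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzHy8eKwczW26pQIG54AaABAg","responsibility":"none","reasoning":"unclear","policy":"ban","emotion":"outrage"}]'''

def lookup(raw_response: str, comment_id: str) -> dict:
    """Parse the model's JSON array and return the coding for one comment ID."""
    records = json.loads(raw_response)
    by_id = {r["id"]: r for r in records}
    return by_id[comment_id]

coding = lookup(raw, "ytc_UgzfSnzWst02KsznDNp4AaABAg")
print(coding["emotion"])  # indifference
```

The dimensions in the returned dict (`responsibility`, `reasoning`, `policy`, `emotion`) line up with the rows of the Coding Result table.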