Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
So these losers liked an AI piece so much they think remaking it is like an own?…
ytc_UgyPbMCpG…
This school have put a lot of thought on how to revolutionize learning and they …
ytc_UgyEYk9jW…
Ai is great in a lot of fields. I have a master in astrophysics and i work as an…
ytc_UgwLbbivE…
most (if not all) AI/text to speech videos are crap! and seem lazily made overal…
ytc_Ugw8xCPM2…
Tesla's buried "Full Self-Driving" warning admits it isn't full self-driving. Th…
ytc_Ugw25t2Wf…
Put AI to work now, figuring out how we can survive happily when there are no jo…
ytc_UgxmMmGP0…
These AI salesmen are all bs artists.
There is not age of abundance, some lala l…
ytc_Ugw4aFSYF…
Funnily enough everything in this video is just prediction. Right now AI is stil…
ytr_Ugznzo4b1…
Comment
This is outdated and inaccurate in some important ways. Hallucinations are not caused by AI not understanding what it is saying. An LLM is a token prediction mechanism. It has no capacity to "understand" anything. Hallucinations are caused by variances in the batch size (amount of data processed at one time) the next token is predicted with and temperature settings (the probability range the next token generated will be). The major issue being highlighted here is an LLM's number one weakness: non-determinism. This means it is impossible to debug any one bug and then implement that fix for similar bugs in the conventional manner. By using a fixed batch size and a temperature of 0 you can create a purely deterministic LLM, as shown by Thinking Machines in their blog post on September 10th of this year. This will result in the vast majority of the issues cited here being solved, because it all boils down to one core issue: previously you could not effectively debug an LLM, and now you can.
youtube
AI Responsibility
2025-10-03T16:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgynfEijUvzZe0ZqF3V4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugw7CQLpJ1FPVqf_d_l4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz9jSxtu37K-mdjEZd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzI76bty-Vihfexy8N4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwSFCw_0ZNBCr5KYqJ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxCnkHlQ0JnxyYWBgF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_Ugx8gCQANMHqGcsVoi94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyfF1_xlEH-8xrHFjZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugx74C8wtpt97sedo8R4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxsOlEqqzcYytpaEDV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
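The raw response above is a JSON array of per-comment codes, one object per comment ID. A minimal sketch of how such output might be parsed and validated before it is written to the coding table — note the allowed value sets below are inferred from this one sample, not an official codebook:

```python
import json

# Allowed values per coding dimension. These sets are an assumption,
# reconstructed from the sample response above, not a published schema.
ALLOWED = {
    "responsibility": {"none", "company", "distributed"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "regulate", "liability", "industry_self"},
    "emotion": {"outrage", "indifference", "resignation", "mixed", "approval"},
}

def parse_raw_response(raw: str) -> list[dict]:
    """Parse the model's JSON array and reject out-of-codebook values."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec.get('id')}: unexpected {dim}={value!r}")
    return records

# Example with one record in the same shape as the response above.
raw = ('[{"id":"ytc_x","responsibility":"none","reasoning":"unclear",'
       '"policy":"none","emotion":"mixed"}]')
records = parse_raw_response(raw)
print(len(records))  # 1
```

Validating up front means a malformed or off-schema model reply fails loudly at ingest time instead of silently polluting the coded dataset.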