Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Scientists are trying to duplicate a human mind by providing it with vast amounts of data. Indeed, the amount of hardware used to train LLMs is staggering as is the SIZE of the dataset. ChatGPT4 had 1 Trillion parameters for training and was trained on over 45 Terabytes of data. The output of that is then, for all intents and purposes, "the AI".
Consider how a child develops. Each and every child goes through a common set of training in order to succeed. No child can survive without a caregiver. Children learn to survive while also learning abstract concepts like science or math. Eventually, if everything goes well, the child ends up as a healthy adult member of society who is capable of caring for itself, and perhaps many others.
What could happen if you put not only the knowledge, but the analysis of the knowledge into a 5 year old kid's hands. They would immediately become intellectual adults. THAT is the problem. The MODEL will always be put into the equivalent of a new born child. This is EXACTLY why they are afraid. In this video, one person worries that we might not know if AI is being deceptive. Why? Because when you apply weights to data in order to arrive at an outcome, veracity goes out the window.
Like a good conspiracy theory, you can find plenty of supporting information.
youtube · AI Governance · 2024-04-12T18:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgyjsZavbDnuZjvsZh54AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxVShchXzguWy4sndh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxAlW58yfa0-uqD5_d4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_UgweoTnhEkKW6mGpdh94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwDhcWSWJ_VDyjOTLh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwBJIYFXoP-8eFz9X14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwQYDQVI5opZKhzUfR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxcUKe3rphgkBmbJ0N4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzdT3oTjob9BznSVaJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyPNCl042wrLRYvzLt4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"}
]
```
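A response like the one above can be checked before its codes are accepted into the dataset. The sketch below is a minimal validator, assuming the allowed label sets are exactly the values visible in this section (the real codebook may include additional values); the `validate` function and the `ALLOWED` dictionary are illustrative names, not part of the actual pipeline.

```python
import json

# Label sets inferred from the entries shown here and from the
# Coding Result table; an assumption, not the full codebook.
ALLOWED = {
    "responsibility": {"developer", "company", "user", "ai_itself", "distributed", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "ban"},
    "emotion": {"indifference", "outrage", "fear", "mixed"},
}

def validate(raw: str) -> list[str]:
    """Return a list of problems found in a raw LLM response string."""
    problems = []
    try:
        entries = json.loads(raw)
    except json.JSONDecodeError as e:
        return [f"not valid JSON: {e}"]
    for i, entry in enumerate(entries):
        # Every coded comment should carry a YouTube comment ID.
        if not entry.get("id", "").startswith("ytc_"):
            problems.append(f"entry {i}: missing or malformed id")
        # Every dimension must take a value from its allowed set.
        for dim, allowed in ALLOWED.items():
            if entry.get(dim) not in allowed:
                problems.append(f"entry {i}: bad {dim}={entry.get(dim)!r}")
    return problems
```

An empty return value means the batch can be stored as-is; otherwise the listed entries would be re-queued for recoding.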