Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- ytc_UgxIUwOjN…: Very good. Those who had Absolutely no idea abou what is AI this video instill …
- ytc_UgxuwoR34…: Video Title: Why artist don't like AI. Pikat: Literally uses an purple haired A…
- rdc_n0nagpf: Ya gotta communicate with ai in q certain way, just remember that they have almo…
- ytc_UgxlPIFgn…: If AI is increasing productivity, then we should shorten the work week. That wou…
- ytc_UgylHfIwl…: If AI really ends up “doing everything,” the real question isn’t how we make mon…
- ytc_UgyEwQtzj…: One day we become slave to AI, the same way we are to mobile phones, but much bi…
- ytc_UgwF4FQr1…: If ai can do all of this then why are humans subsidizing it so much? The data c…
- ytr_UgxWJXeUA…: Why are you even watching the short ? Just let AI watch shorts for you…
Comment
Between 2:14 and 4:33 - Here, he made it clear that billionaires who invest in artificial intelligence development are a global threat, just like terrorists who don't care about other people, fact!
Between 6:05 and 7:25 - Anyone who talks about super artificial intelligence and doesn't mention this, be sure that you are facing an enemy of the human species, your enemy (unless they're ignorant, of course).
Between 8:15 and 33:41 - Basically, he summarizes absolutely everything about the topic of super artificial intelligence and the problems arising from its insertion into our society. I repeat, anyone who talks about super artificial intelligence and doesn't mention this, be sure that you are facing an enemy of the human species, your enemy (unless they're ignorant, of course).
Between 42:32 and 45:52 - Here he mentions psychopaths in power. Psychopaths disguised as technology visionaries. That's why capitalism needs to be controlled, that's why!
Between 46:24 and 52:47 - Here he again talks about the real danger of super artificial intelligence, but makes a mistake in thinking it's okay for individuals to earn billions of dollars (billionaires want to have their own army of robots). A single individual earning billions of dollars, in a world like ours, is equivalent to encouraging the development of super artificial intelligence. Other real problems of our species are: conspiracies, lies, disinformation agents disguised as digital influencers (YouTubers), manipulation of the masses, sabotage, financial crises, misery, poverty, hunger, wars, diseases created in laboratories, the Hayflick limit, suppression of disease cures, pacts between traitors of the human species and non-humans, the alien (non-human) agenda... and so on.
Between 1:21:29 and 1:22:01 - He repeats that it's impossible to control a super artificial intelligence, and also questions the moral standards of billionaires and their respective companies.
*All countries in the world should treat the development of super artificial intelligence as a potential genocide crime. The only way to test this, and other dangerous things that our species needs to face in the future, would be to build a laboratory on an uninhabitable planet, absurdly far from Earth, with protection systems linked to a thermonuclear detonation, in case something gets out of control. Before that, we need to solve several more important problems here on Earth, including becoming a space civilization, only then to test dangerous and unpredictable technologies.
youtube · AI Governance · 2026-02-05T10:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugw6HSSJYJ55MOTekXZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxKcp7BVdbh2_80zZ54AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzqRmYGBd7z-LHtmdZ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgwUDwSpJE4iQfjD7bh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzLsE4soWgbpqFRhad4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugzuhz5n1Z-VUdNDhBV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugxbjv5wFvlFUSGo7ql4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugw_x1d3lQJ9UGeK7DV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzyngvoQRqQOC4hqMR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwgDghXWxlqsSMgwh54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"}
]
```
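The raw response is a JSON array with one object per comment, each carrying the four coded dimensions from the table above. A minimal sketch of how such a batch might be parsed and indexed by comment ID for lookup; the `ALLOWED` sets are only inferred from the values visible on this page, not an official codebook:

```python
import json

# Assumed category sets, inferred from the batch shown above.
ALLOWED = {
    "responsibility": {"developer", "company", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "liability", "none"},
    "emotion": {"indifference", "outrage", "approval", "resignation", "fear"},
}

def parse_batch(raw_response: str) -> dict:
    """Parse a raw LLM batch response into {comment_id: codes},
    rejecting any value outside the assumed category sets."""
    coded = {}
    for row in json.loads(raw_response):
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row['id']}: invalid {dim}: {row.get(dim)!r}")
        coded[row["id"]] = {dim: row[dim] for dim in ALLOWED}
    return coded

raw = """[
  {"id": "ytc_Ugw6HSSJYJ55MOTekXZ4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]"""
coded = parse_batch(raw)
print(coded["ytc_Ugw6HSSJYJ55MOTekXZ4AaABAg"]["responsibility"])  # developer
```

Validating against a closed set at parse time catches the common failure mode where the model invents a label that is not in the codebook, so bad rows fail loudly instead of silently entering the coded dataset.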