Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "The weakest player in the whole eco system is human that can be replaced by AI. …" (ytc_UgxMwaYug…)
- "This exactly. Everyone is angry AWS or whomever is selling facial recognition to…" (rdc_gvaj9l5)
- "Robots will be superior to humans. 1. Our biological bodies are only 20% energy …" (ytc_UgiMAV2WU…)
- "Did you know that a vector of numbers can make an AI like sheep? And those same …" (ytc_UgyF97tmI…)
- "They are already using LLM's to create training datasets that will ingrain their…" (ytr_UgycTSlwA…)
- "Money speaks . Previously it took western so called ally 50 years to provide po…" (rdc_lubo9gw)
- "I am death / The Destroyer of worlds / Oppenheimer after the first nuke test. An…" (ytc_UgzHadEsK…)
- "The infantilization of disabled people by using them as a defence for AI art is …" (ytc_Ugz3lOsvi…)
Comment

> OK, hear me out on this one. You know how everyone talks about the elite that run the world, the powers that be so to speak. Do we really think that they would allow something that would eventually lead to their financial ruin, a global plague that would kill them, their family members, their friends? Or, what if the whole narrative that AI taking over the world and no one will be able to stop it is exactly what they want us to believe. What if they use it as an excuse to do just that, end the world as we know it? Bill Gates believes the Earth is overpopulated, he’s said that for years. What if they use AI to cull the population and then turn around and blame it on AI and act like they had no control over it? I think they’re going to use this as an excuse. If they truly believed that this was out of their control and that they would be annihilated along with everyone else they would put a stop to this right away. Anyway, just a thought.

| Source | Topic | Posted | Likes |
|---|---|---|---|
| youtube | AI Governance | 2026-01-08T20:5… | ♥ 6 |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgwWW7uu4faWK9YiBix4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgylhNzUTbe6R23Felx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwhQJMxegBs4FaMsGd4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgwZ5flsnCggMu-ZEEd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugy5UWVLQEfdaQyUGfp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugwk8QPLX6US6-kI4Fl4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgweL7zioowZ3BH9kRJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugy7zkiLKGtmn93mwQd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxETHT0nuGAvImuQoF4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwGNzHryhVgzCMhfjR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
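A batch response like the one above can be checked before its rows are written into coding results. The sketch below is a minimal validator, not the pipeline's actual code; the allowed value sets are inferred only from the labels that appear in this response (e.g. `company`, `ai_itself`, `deontological`, `ban`) and the real codebook may define others.

```python
import json

# Allowed labels per dimension, inferred from the values visible in the
# raw response above -- an assumption, not the authoritative codebook.
ALLOWED = {
    "responsibility": {"company", "developer", "ai_itself", "unclear"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"ban", "unclear"},
    "emotion": {"outrage", "indifference", "approval", "fear"},
}

def parse_raw_response(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed coding rows."""
    entries = json.loads(raw)
    valid = []
    for entry in entries:
        if not isinstance(entry, dict) or "id" not in entry:
            continue  # skip rows with no comment ID
        # Keep the row only if every dimension carries an allowed label.
        if all(entry.get(dim) in vals for dim, vals in ALLOWED.items()):
            valid.append(entry)
    return valid

# The first row is copied from the response above; the second is a
# deliberately malformed example with an out-of-codebook label.
raw = '''[
 {"id":"ytc_UgwWW7uu4faWK9YiBix4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
 {"id":"example_bad_row","responsibility":"nobody","reasoning":"unclear","policy":"unclear","emotion":"fear"}
]'''
coded = parse_raw_response(raw)
print([e["id"] for e in coded])  # only the well-formed first row survives
```

Validating at parse time keeps a single hallucinated label from silently entering the coded dataset; rejected rows can be queued for re-coding instead.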