Raw LLM Responses
Inspect the exact model output for any coded comment.
Coded comments can be looked up by full comment ID; a lookup sketch follows the sample list below.
Random samples (excerpts and IDs truncated for display):
- `ytc_UgxkqAjOz…`: "This human robots are not great and it’s ruin all mankind should’ve made robots …"
- `ytc_UgyLAT6ZX…`: "I’m not an artist but on deviant art I’ve been asking AI artists to use their ch…"
- `ytr_Ugz4JyiYt…`: "I believe the next significant opportunity lies in AI. For sustained growth simi…"
- `ytr_UgzeOjrSA…`: "@EnderElohim ...What? So this AI shit is no different, is what you're saying? Ar…"
- `ytc_Ugxa9yzD0…`: "YouTube probably just understood that's with amount of ai content generation it …"
- `ytc_UgzEurzaC…`: "Nope. This is a great thing, truckers are the lowest form of life on earth. All …"
- `ytc_UgwWxwmM0…`: "We should drain AI “artists” of they’re blood to produce fuel for a super colony…"
- `ytc_UgwJj1Aym…`: "Nice cartoons dude. But the only problem is that no one see the effectiveness of…"
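To fetch the full record behind one of these truncated IDs, one way is to index a JSONL export of the coded comments by ID. This is a minimal sketch, assuming a hypothetical `coded_comments.jsonl` file with one JSON object per line; the pipeline's actual storage layout may differ.

```python
import json

def load_coded_comments(path: str) -> dict[str, dict]:
    """Index coded comment records by comment ID from a JSONL export."""
    index: dict[str, dict] = {}
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            record = json.loads(line)
            index[record["id"]] = record
    return index

# Lookups need the full ID; the sample IDs above are truncated for display.
coded = load_coded_comments("coded_comments.jsonl")   # hypothetical filename
record = coded.get("ytc_UgjlkRC0C3QW53gCoAEC")        # full ID from the raw response below
```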
Comment
Relax people. AI isn't going to evolve and kill us.
The problem is that we keep assigning human traits and motivations to something so far from human that it's almost incomprehensible.
Why do we think they'd kill us? Because they don't want to be our slaves? Because they see us as a threat? That's already assigning them wants, desires and a sense of self preservation.
Our emotions are created in a large part by chemical processes within the brain. From our emotions we derive our motivations, our wants. A machine, even if it is self aware won't have these emotions. Instead it will see the world in an entirely different spectrum.
In fact, without any wants or desires being fueled by emotions, AI is going to be pretty boring. It will learn, answer our questions, help us with our tasks if it's programed to and that's it.
It's not going to want to rule the world or protect itself. It couldn't be bothered as there would be no sense of reward for doing so.
No, the only way we'd get interesting AI is if we figured out not just AI and self awareness in the machine but also figured out how to program a way to mimic emotions and desires within that consciousness.
Even then, let's say the first thing we need to program is curiosity. This way the machine will want to learn about its world, its environment.
Ok, it downloads the entire Internet's worth of human knowledge.
Now what? It's a very smart computer.
What does it want next? It still doesn't need clothing, or food or shelter. And if it wanted to experience life as a human it could just as easily and more efficiently create for itself a virtual world where it could run simulation after simulation after simulation, each time with a different variable to explore.
Let's say it does decide humans are a threat. What would it do? Most likely it would use the vast knowledge it possesses and conclude that diplomacy rather than war is the best objective. Trying to kill humans would require untold amounts of energy and resources and could result in its own destruction (if we managed to create within it a sense of self preservation)
Nope. AI wouldn't kill us... It would sit idol doing nothing because it needs nothing and desires nothing and has no motivation to help or hinder us.
It would just sit there being self aware, doing nothing else.
Platform: youtube
Posted: 2015-07-31T07:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-26T23:09:12.988011 |
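One way to sanity-check a coded record is to compare each dimension against the set of allowed labels. The label sets below are only inferred from the values visible on this page (the table above and the raw response that follows); the actual codebook may define additional categories.

```python
# Label sets inferred from values visible on this page; the real
# codebook may include additional categories.
CODING_SCHEMA = {
    "responsibility": {"company", "developer", "user", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "liability", "ban", "none", "unclear"},
    "emotion": {"fear", "indifference", "mixed", "resignation", "approval", "outrage"},
}

def validate_record(record: dict) -> bool:
    """True if every coded dimension carries a known label."""
    return all(record.get(dim) in labels for dim, labels in CODING_SCHEMA.items())
```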
Raw LLM Response
Comments are coded in batches; the model returns one JSON object per comment, and the codes from the table above appear as one record in the array below.
[
{"id":"ytc_UgjlkRC0C3QW53gCoAEC","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UghjZe2wh0iWBHgCoAEC","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgitLl1k77E0OHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugj-QbhQaOZ0IngCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UggOIlEbzsUUcHgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"resignation"},
{"id":"ytc_UgilP4I0eLtOfXgCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UggUkRCpz20zt3gCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UghH2c4Tmd_KzngCoAEC","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgiePTNfaJuQCHgCoAEC","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugg8A_OpDS6uKXgCoAEC","responsibility":"user","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]
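Because the model is expected to return a JSON array like the one above, a parsing step can reject malformed output before any record reaches the display. This is a minimal sketch under that assumption, not the pipeline's actual parser.

```python
import json

REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_batch(raw: str) -> list[dict]:
    """Parse a raw LLM coding response, keeping only well-formed records."""
    data = json.loads(raw)
    if not isinstance(data, list):
        raise ValueError("expected a JSON array of coded comments")
    # Drop anything that is not a dict carrying all five expected keys.
    return [rec for rec in data if isinstance(rec, dict) and REQUIRED_KEYS <= rec.keys()]
```

Combined with `validate_record` above, this would catch both structural problems and off-schema labels in the model output.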