Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
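The lookup itself is simple once the model's output parses as JSON. A minimal sketch in Python, assuming records shaped like the raw LLM response shown at the bottom of this page (the `index_by_id` helper name is my own, not the tool's API):

```python
import json

# Hypothetical sample mirroring the shape of the raw LLM response
# shown on this page (one record, full comment ID).
RAW_RESPONSE = """
[
 {"id": "ytc_Ugxfabomi6WpLluDaoh4AaABAg",
  "responsibility": "distributed", "reasoning": "consequentialist",
  "policy": "unclear", "emotion": "fear"}
]
"""

def index_by_id(raw: str) -> dict:
    """Parse the model's JSON array and index records by comment ID."""
    return {rec["id"]: rec for rec in json.loads(raw)}

codes = index_by_id(RAW_RESPONSE)
print(codes["ytc_Ugxfabomi6WpLluDaoh4AaABAg"]["emotion"])  # fear
```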
Random samples
This is what i mean when i say ai art is bad! I am not commenting on just what t…
ytc_Ugz4EAiSH…
CVS is using AI to answer the phone, it is now more difficult to get refills. It…
ytc_Ugx-0QChq…
You sound like a boomer. Ai native dont care. And also 99% of ai cases the ai wo…
ytc_UgyZi549A…
This pseudo AI that they are presenting is more of a library of prewritten respo…
ytc_UgyY3ZVE-…
Obviously not a human. Anyways, the uncanny valley is there for a reason. Also,…
ytc_UgzLkOsfT…
Nice one jack asses teach AI how to use our weapons NO harm in that right WOW LM…
ytc_UgziZdOSW…
Hahaha... Nice sell speech... AI will take millions of jobs and create just a fe…
ytc_UgzyXSAE8…
The horror of AI is not if it finds consciouness, it is if it doesn't.…
ytc_Ugx2-gUvL…
Comment
Tldr: This is about "The ALIGNMENT PROBLEM" !!!
And @ Hank, I understand why you want to "sometimes believe" that "super intelligence is impossible" and that there is a "difference in kind rather than in degree" between our "intelligence" and the "machines"...
BUT PLEASE realise that even if both these things are true/correct, that really ((unfortunately)) does not "save us". Because as long as the "machines" are given the "opportunity and the means", it is a "moot point" how they really "work inside their black boxes"... All that matters is what they can "effectuate" and what goals they are "working towards" (remember THE ALIGNMENT PROBLEM). And neither of these two factors is dependent on the outcome of your quandary... A "mindless" and "unintelligent" AI is just as much a threat as a "sentient" one with "human-like intelligence"...
And as much as I liked Hank doing this video and "putting a spotlight" on this issue, it also provokes my frustration over how many "influencers" ((and other people)) who are "generally very knowledgeable and reasonable" are still basically TOTALLY "naive" when it comes to THIS absolutely basic-principle problem that APPLIES TO THE VERY PRINCIPLE OF ""AI"". I.e. we CANNOT "code it away"...
And to make a possibly even longer rant somewhat shorter the absolute CRUX of this issue is :
"THE ALIGNMENT PROBLEM" !!!!!!! And this problem starts already at the "concept of AI" !!! (as soon as we have stopped programming it basically as just "complex multiple-choice chains"...)
And what imo makes it a serious problem when these "generally knowledgeable people" are "naive" in this realm is how they often, by lacking this basic knowledge, downplay the problem instead of "spreading understanding" of it. And in some cases they even actively oppose the very notion that ""AI"" could be any kind of ""threat"". And they often do so by attacking some possibly over-exaggerated "doomsday scenario", thereby unknowingly spreading misinformation by (again unknowingly) basically "strawmanning" what imo is a real threat. And a threat that, regardless of our personal beliefs, values and biases, at the very least has a very real and rigorous scientific/philosophical basis, and that at the very least should be treated as such...
</rant>
Best regards
PS (edit: evidently I had not ranted enough ;)
It is also my opinion that many people who believe that ""AI"" is only "fancy auto-complete" are imho suffering from 2 serious misconceptions. The first is that they believe that just because it's "only a machine" it cannot "be dangerous/harmful", which I think is a notion that it really doesn't take much thinking to realise is basically a non sequitur... After all, it doesn't do us any good whether we are "harmed" (or even "killed") by a ""stupid machine"" or by a "sentient computer" ((one could say that the difference is only of academic interest ;))
The second (imo) misconception these people have is that they grossly overestimate how much more ""intelligent/advanced"" they ((and we all)) are in our "general cognitive processes" when compared to "these machines"... In short, they are underestimating the ""thinking machines"" AND being overconfident about our own exceptionality.
youtube
AI Moral Status
2025-10-31T05:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugw3NIxdxGErPKlR4_J4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugxfabomi6WpLluDaoh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugyw6X-4WhvYZJGB9cd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzcGhShbTjpI9iCxVJ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_Ugyj3Zm1SkHxhm2Dxjp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugw6esCnwN3wojGUgMd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugw4cHE980ABQW3habl4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgzHPF2C-NLGa2-1A-V4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugxi-ISDYLVIFj8xpxd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugyof_8V4WozEsKCDaN4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"}
]
```
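Since downstream analysis depends on the model emitting only values from a closed code set, a cheap validation pass over raw responses like the one above guards against drift. A sketch, assuming the allowed sets are exactly the values visible on this page (the project's actual codebook may define more):

```python
import json

# Allowed code values inferred from this page; an assumption, not
# the project's actual codebook.
ALLOWED = {
    "responsibility": {"ai_itself", "distributed", "none", "user",
                       "unclear", "developer", "company"},
    "reasoning": {"mixed", "consequentialist", "unclear", "virtue",
                  "deontological"},
    "policy": {"unclear", "none", "industry_self", "liability", "regulate"},
    "emotion": {"mixed", "fear", "approval", "indifference", "outrage"},
}

def validate(raw: str) -> list:
    """Return (id, field, value) triples for out-of-codebook values."""
    errors = []
    for rec in json.loads(raw):
        for field, allowed in ALLOWED.items():
            if rec.get(field) not in allowed:
                errors.append((rec.get("id"), field, rec.get(field)))
    return errors

good = '[{"id":"ytc_UgzHPF2C-NLGa2-1A-V4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"}]'
bad = '[{"id":"x","responsibility":"robot","reasoning":"mixed","policy":"none","emotion":"fear"}]'
print(validate(good))  # []
print(validate(bad))   # [('x', 'responsibility', 'robot')]
```

An empty return means every record used only known codes; anything else names the record and field to re-code by hand.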