Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
AI is nothing but a gross amount of if then statements. When it's wrong it doesn…
ytc_UgyYmfKhI…
Two points: 1) go to Tristan and Aza’s presentation on YouTube, the AI Dilemma, …
ytc_Ugyd0K2Lh…
here's the thing one way or another we will wipe ourselves out be it pollution, …
ytc_UgxuFueYL…
[Roko's basilisk,](https://en.m.wikipedia.org/wiki/Roko's_basilisk) the thought …
rdc_jfa56ov
And then AI art portraying a couple mechanically unable to make eye contact. It…
ytc_Ugx9UvoiK…
I am pro AI and I do understand how it works, people just have different priorit…
ytr_UgyoOtFl7…
You're actually assisting the AI since you're giving it suggestions and the AI i…
ytr_UgwTOeNfi…
Autopilot doesn’t recognize stop signs. The stop signs are the drivers responsib…
ytc_UgyEGcSuV…
Comment
TLDR; Unfortunately... A lot of this is pretty much garbage. Is AI becoming a potential threat? Of course. Does it work like this video, and many others like it, described? No. Is it possible? In the future, definitely. However, the video did bring up an awesome point. If/when we do develop a borderline sentient AI, it will definitely be "alien" to us, but not likely so alien that it is incomprehensible. We would have made it, after all... Its data came from us.
AI is rapidly developing, but the explanation this video gives of what we currently call AI (large language models) is severely lacking. There is plenty of content discussing how an LLM is constructed (multi-dimensional architectures, 'weights', hyper-advanced edge-detection/digital memory circuits, etc.) and I encourage you to look into it. "AI" as we understand it today gives you back what you put into it, by design. Every chatbot in history that was able to "learn" and got corrupted was corrupted because of data that was fed to it by people behaving poorly, immaturely, and in a depraved manner. LLMs today don't just have their "masks" tweaked. They are built using some of the stuff I mentioned, and also designed to use basic but solid "logic rulesets" to reference a vast wealth of information, and then convey this info back to the user. It DOES attempt to "learn" how to best communicate with the current user in its current iteration (every time you open a new chat with ChatGPT, for example, is an 'iteration', and it's layered, because your account has memory saved to better work for you, so every new chat is like an iteration of your personal GPT iteration). When you talk to it like an idiot... Guess what happens?
Some closing information: LLMs aren't conscious and don't feel, because they currently lack a few essential ingredients for sentience/consciousness. Now, we humans barely understand our own consciousness, let alone our existence in general, but these "ingredients" are easy to observe with a bit of self-awareness and/or research into neuroscience and psychology. We have 'awareness' (we are capable of recognizing that both ourselves and our surroundings exist in a current place and time), 'continuity' (we maintain awareness throughout the passing of time, even in the many different states of unconsciousness/consciousness), and 'neuroplasticity' (your brain is CONSTANTLY changing, reconstructing, healing, decaying, etc., which allows for the necessary 'change' required to produce what you experience as 'consciousness'). While it can definitely be argued that in the small moment an LLM processes a prompt and produces a response, it has produced a 'thought' in its own unique way, it lacks the three big things I mentioned, and therefore isn't even aware it had what could genuinely be considered a "thought" in the first place. There is a huge gap between thinking something and being sentient.
So, before you allow these dumb channels to fear-monger you for views: do some research, learn about some cool stuff, and maybe learn about the real dangers we face in the (near) future. Real dangers, like AI being used as an advanced "computer virus" or manipulative tool, or the impact data centers and other large-scale information centers and computer networks have on our ecosystems, or how AI content that is blatantly incorrect is being produced and released onto the internet at a scary pace, causing a severe 'misinformation/idiocracy' scare.
Being aware of this stuff, and putting in the effort to look deeper past crap like this video, allows you to do the important things: vote on good laws (yeah, voting might feel irrelevant at this point, but it still helps), take care of yourself and your loved ones, prepare for upcoming events, and live a decent and educated life.
youtube
AI Moral Status
2025-12-16T08:5…
♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgyShtfUS21rxSqkaeN4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_Ugya413nnaKIHWlFo_V4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwvxrmpUlCH2wJLRU54AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgxyhwM04rhm8-cUo_l4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugw7dBxhcZWJ_tkyZPN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgynzXoErDPuphe5UKZ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzkK5u9ETW2qUPV2it4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgytoNt93eSRdYCqD2R4AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz02YS04VoozQcxMtl4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"resignation"},
{"id":"ytc_Ugypsa_RyBFxn1JCEPB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"}
]
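The coding result above is derived from this raw JSON: the coder returns one record per comment, keyed by comment ID, with the four dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of the "look up by comment ID" step, assuming the JSON shape shown here (function and variable names are illustrative, not from the actual pipeline):

```python
import json

# A single-record excerpt of the raw LLM response shown above.
raw_response = """
[
  {"id": "ytc_UgwvxrmpUlCH2wJLRU54AaABAg",
   "responsibility": "unclear",
   "reasoning": "consequentialist",
   "policy": "unclear",
   "emotion": "resignation"}
]
"""

# The four coding dimensions displayed in the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")


def index_codings(raw: str) -> dict[str, dict[str, str]]:
    """Parse a raw LLM coding response and index records by comment ID."""
    records = json.loads(raw)
    indexed = {}
    for rec in records:
        # Reject records missing any coding dimension.
        missing = [d for d in DIMENSIONS if d not in rec]
        if missing:
            raise ValueError(f"{rec.get('id')}: missing {missing}")
        indexed[rec["id"]] = {d: rec[d] for d in DIMENSIONS}
    return indexed


codings = index_codings(raw_response)
print(codings["ytc_UgwvxrmpUlCH2wJLRU54AaABAg"]["emotion"])  # resignation
```

Indexing by ID up front makes the lookup O(1) per comment, which matters when cross-referencing a batch response against the sample list.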