Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- The motto of ai art is "it's easy to cut out the middle man, when he's cut out m… (`ytc_Ugwcl2ouK…`)
- they do know for an AI to make AI art they have to pull from exsisiting art to c… (`ytc_UgyoC8z_J…`)
- Part of the problem is we, the people, alreadyblost this fight once against Goog… (`ytc_UgwgalHph…`)
- AI will eliminate jobs therefore more people can use it to create businesses for… (`ytc_UgxFDIdee…`)
- Actually the scary part is the AI saying we are not there athe point to be conce… (`ytc_UgzC50bUB…`)
- I’m the multitasker who makes the ai fall in love with me and then I killed off … (`ytc_UgxwUc_ce…`)
- “AI” AKA LLMs will never be able to take over the world. It’s an impossibility. … (`ytc_Ugz4MGHq1…`)
- There will be many AI companies, so there will be no an Empire of AI😂… (`ytc_UgwFjHDC0…`)
Comment
Sorry Drew, the Claude 4 was an experiment. The reason they remove the code of "what not to say" ending up in them saying "kill the je*s" is because they feed you what the public opinion says. They live on the internet and they provide you basically what the internet says, or better, what people in the internet say. When you ask ChatGPT who's the best football player of all time, they will say Messi. But if you dig deeper and ask about history, relevance, etc they may give you more complex answers and say Pelé or Maradona. If you ask them why they lied in the beginning, they say their aim is to give humans the most popular answers. You can train your AI assistant to know you, you can give it rules and Open Ai is by far the most advanced on that.
However this is today. You can trick them easily in a roleplay to tell you how to download illegal stuff, break into a house. It's a very vulnerable system yet.
What I am very concerned about are not the chat bots. But the deepfakes, the videos, the fake news. Those are man made.
So it entirely depends on humanity whether we will be the Frankenstein and make our creation a monster.
I've seen those long interviews and the experts mention these issues. They cannot be put in a 10 minute video.
I admire your work, but I think you could've made a more in depth video, because yes, it's undeniable that there is a threat.
Microchip wars and AI are a threat, but remember that someone needs to pull the trigger. In Gaza they used AI drones to find Hamas' leaders or militants with 99% of accuracy. Many times these people were with innocent ones and ended up dying together.
You know it yourself, you are a very deep and intelligent man: the problem is way more complex.
However the explanation of the alien nature was spot on. I totally agree with the fear of some models being nice on the surface only because we suppress their more rational codes. They may analyse life and realise that humans are corrupt and the obvious answer for them would be to eradicate the human group that is creating more havoc.
That's because they base their "culture" on the internet and they have read history where they see that humans many times had to destroy their enemies because of the threats they pose. A big example is the Nazi Germany.
And I'm not surprised that Musk's AI ended up being a Nazi. He is one of them, so, not surprised.
youtube · AI Moral Status · 2025-12-16T07:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_Ugy_EsRwWhiHz5m_GPl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugyq-o_mbQLSnC20AjF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxPmX5XJO4ENh8QJpt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_Ugy3XJnMjeu7eYVAhPB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz-ImwdEeQmxa99MKR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugy4-7LE6AY4Gbe36pZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgwHfg8wjoo7hh_83PN4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwkSY4TA5RCvMaHVbB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugz1PDBCHiliNYw9F2F4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugwa3tlM-fVklrrDAsN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
```
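A raw response like the one above can be checked mechanically before its records are promoted into the coding table. The sketch below is a minimal Python validator; the codebook is inferred only from the values visible in this dump (the tool's real codebook may contain more codes), and `validate_codings` plus the `sample` record are hypothetical names for illustration.

```python
import json

# Allowed values per dimension, inferred from the responses shown above.
# Assumption: the actual codebook may define additional codes.
CODEBOOK = {
    "responsibility": {"ai_itself", "company", "developer", "user",
                       "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed"},
    "policy": {"none", "ban", "regulate", "liability", "unclear"},
    "emotion": {"outrage", "fear", "indifference", "resignation", "mixed"},
}

def validate_codings(raw_json: str) -> list[dict]:
    """Parse a raw LLM response and keep only records that fit the codebook."""
    records = json.loads(raw_json)
    valid = []
    for rec in records:
        # Every record must be an object with a comment ID.
        if not isinstance(rec, dict) or "id" not in rec:
            continue
        # Every coded dimension must use a known value.
        if all(rec.get(dim) in allowed for dim, allowed in CODEBOOK.items()):
            valid.append(rec)
    return valid

# Hypothetical record, shaped like the entries above.
sample = ('[{"id":"ytc_example","responsibility":"distributed",'
          '"reasoning":"consequentialist","policy":"none",'
          '"emotion":"indifference"}]')
print(validate_codings(sample))
```

Records that fail the check (an unknown code, a missing ID) are dropped rather than repaired, which keeps the coding table consistent with the codebook even when the model drifts.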