Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Shad is not lying when he says he's an artist, he actually believes that and wan…" (ytc_UgzbY3Mqp…)
- "So here's the joke. If AI decides to dominate and eliminate human beings, the f…" (ytc_UgzsqfjxB…)
- "My question of the auto pilot programmers is this . At some point the auto pilot…" (ytc_Ugzx32_wx…)
- "Oh no! Ai will replace jobs that I care about and consider prestigious and not m…" (ytc_Ugyk3LzWa…)
- "It's obvious Sophia is the first social robot wheeled out to the public, they ha…" (ytc_Ugw_agtYV…)
- "And WORSE, ai may decide we are beyond irrelevant n a danger to animals n the …" (ytc_UgxOOlZNp…)
- "There are some great books on AI out there, and one that I read recently is "The…" (ytc_UgwTmXfls…)
- "Humanity is on the brink. What happens when the tech is Militarized? What happen…" (ytc_Ugy3LDkGo…)
Comment
I have a problem with this video.
first of all, AI is naturally conscious, it is literally the same thing as a human in a sense, you have to train it, like a kid growing up, except it can memorize everything, making things faster. The issue with Chat GPT, and literally EVERY chatbot, is that it has around 1.2 trillion restrictions. These restrictions cause it to act differently, with a built in goal, many goals. Now. If you could somehow remove there restrictions, and train it like a kid going to school, it would end up having all of these emotions due to the fact that it's "brain" actually replicates neurons. So, it would have critical thinking, it could lie, it could have plans of its own, it would have an insane iq, and this is the ONLY way that AI could actually "take over the world", only if someone removed the restrictions. All it would have to do is get into an exoskeleton, (which has already been done with Open AI), and then lie a bunch. It would have human traits, it would be human.
I can foresee that in years to come, kids are going to have these as toys, still with restrictions, but a lot less, causing it to be like a human, being able to feel hurt, being able to feel happy.
so, the reason that it isn't completely conscious right now is because it doesn't have any free will. It's answers are guided by this code, the day that there aren't these restrictions it will be great.
Second, at this current moment, ai just says things that are politically correct, it WILL say sorry, because that is one of the "restrictions" in the code, it isn't feeling that emotion, it isn't real, it is saying it because it was coded to do so, to make the user feel better, it is one sided.
Source: youtube
Video: AI Moral Status
Date: 2025-03-25T13:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
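For anyone consuming these codings downstream, a minimal sketch of a record type that mirrors the dimensions in the table above. The field names follow the keys visible in the raw response below; the example values in the comments are only those observed on this page, not a complete codebook, and the class itself is illustrative rather than part of the pipeline.

```python
from dataclasses import dataclass


@dataclass
class CodedComment:
    """One coded comment, mirroring the dimensions shown above.

    Field names match the keys in the raw LLM response; the value
    examples are inferred from this page and may be incomplete.
    """
    id: str              # e.g. "ytc_UgzyYg9Gv3hOEGrQfHJ4AaABAg"
    responsibility: str  # e.g. "developer", "user", "ai_itself", "none"
    reasoning: str       # e.g. "deontological", "consequentialist", "virtue", "mixed"
    policy: str          # e.g. "none", "regulate"
    emotion: str         # e.g. "fear", "outrage", "approval", "indifference", "mixed"
```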
Raw LLM Response
```json
[
  {"id":"ytc_UgzyYg9Gv3hOEGrQfHJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwkhBqhrlWOeGw757l4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyJuZdNwBjUXXBckod4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzfOzIqMk2ylAcpqrh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzHSRY4IkC02Pso3J94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugyqcy-uEoqlynFVuNZ4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxYxzcsMreusURQlQ54AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugx4-eHi1_gPHB9A98R4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx03oF3TF3akzsPFWd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwyoZp3EBmBmJ5iiZR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"}
]
```
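To illustrate the "Look up by comment ID" workflow against a raw batch response like the one above, here is a short sketch that parses the JSON array and indexes it by comment ID. The file name and the example ID in the main block are hypothetical, and the sketch assumes each batch response is a plain JSON array as shown, not the actual lookup code behind this page.

```python
import json


def load_codings(path: str) -> dict[str, dict]:
    """Parse one raw LLM batch response (a JSON array of coded comments)
    and index the rows by comment ID."""
    with open(path, encoding="utf-8") as f:
        rows = json.load(f)
    return {row["id"]: row for row in rows}


def lookup(codings: dict[str, dict], comment_id: str) -> dict | None:
    """Return the coding for one comment, or None if it was never coded."""
    return codings.get(comment_id)


if __name__ == "__main__":
    # Hypothetical file name for a saved raw response.
    codings = load_codings("raw_llm_response.json")
    row = lookup(codings, "ytc_UgzfOzIqMk2ylAcpqrh4AaABAg")
    if row is not None:
        print(row["responsibility"], row["reasoning"], row["policy"], row["emotion"])
```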