Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
The worst thing humanity can do is unregulate AI and start an artificial intelli…
ytc_UgzNTjPaN…
What app are you using? I have the official ChatGPT app on Android and it does n…
ytc_Ugyvnjp2F…
What a fascinating perspective! You're absolutely right that both humans and AI …
ytr_UgzMojOgL…
The Big Shitty Bill allows no restrictions on AI for 10 years. Ask yourself why…
ytc_Ugw5tfUZa…
Well said. Honestly one of the more appealing to me comments I have come across …
ytr_Ugxh6ArMh…
@OGPolaroid if you're referring to the training of neural networks, then with t…
ytr_UgwVVdVEo…
This bloke has made sure his kids are financially set up. So his family are alr…
ytc_UgwUfTEG4…
Ai will never be true art or sentient because AI doesn't want to be involved wit…
ytc_UgwPfkZpl…
Comment
If you get the best AI in the world and teach it nothing but dogs? And I mean everything about dogs. Videos, different types, sounds, movements and so on, do you end up with something which will sit down with you and have an intellectualised converstion about the meaning of words? Or are you going to end up with an AI which is exceptional at mimicing dogs?

Giving AI words which have inherent human emotions and layers of meaning built into them will lead to an AI appearing to have emotions and layers. Or, you put a spring opposite your front door and rig it to spring up when the door is opened. You write the words: "I love you" on a piece of paper which is attached to the spring to pop up. When your loved one comes home, opens the front door and sets off the spring, the words "I love you" spring up - is the spring intelligent? Does it have feelings? Could you just as easily written the words: "Woof! Woof!"?

And so finally it follows: If all we feed GPT5 is the writings of children up to the age of 7, will it come out and understand the works of Hardy? Dickens? Shakespeare? F. Scott Fitzgerald? Hemingway? Steinbeck? (So these novels are not added to it's data set, but rather it interacts with them the same way you are interacting with GPT4o here...) Or will it answer questions about these works in a very 7 year old way...? I know this is only a thought experiment - but I think the result, the answers, would be really, really interesting. Or it will just complain that it doesn't understand half the words used in the novels. Or would it be able to figure the meanings of words it's never encountered?

But the real interesting thing (for me) is when it's put in a body (lets give it 10,000 bodies to control), interacts with reality and makes the results part of it's data set. And I mean from robot helpers all the way to analysing data from hubble and james webb. Where it can create it's own experiments, carry them out and add the results to it's own data set, which then leads to more experiments... And so on. That's when things will get really interesting. That's what I really want to see.

Self awareness is ok, a thing, I guess (I mean, what we want is a slave race of robots, so I vote any self awareness is tracked down, turned off and we avoid that going forward) - but this is where the real money's at. Hypothesis, experimentation, results added to data set which leads to the next hypothesis.... When it starts to do this. For me? This is the real singularity of AI.
youtube
AI Moral Status
2024-07-26T07:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[{"id":"ytc_UgzS1qMP90XW9hY4yU14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgwAMvDIFkabXeRFfSN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxSX_G1ls5FmIY_1Y94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugytm74TlVWH9--34Yh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugwy6D_M9nPkz2AJUqF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw5A-57QIgx3pgx6114AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzZtmAI3xOVeDCDZrV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzpixWTtCNr_jzH4GZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgzZw4k-0V13mXPslat4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy7UywXWDKfmk74Is94AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}]