Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
G
Why isnt anyone bringing a word of hope? You don’t have to worry about none of t…
ytc_Ugxstt_Zv…
G
@adventtrooper yah it’s a super valid concern if your codebase is open source, b…
ytr_UgwG61aeI…
G
That's the thing.
I think every Union should push (strike or no strike) in ever…
ytc_Ugx5SdN3o…
G
Autopilot is just for highway driving and it is just cruise control but better. …
ytc_UgwnC3TSE…
G
Ai's learning from disney at this point and im not even gonna argue its for the …
ytc_UgzCbmGXj…
G
The one thing the public does not understand is that most AI scientists with Hin…
ytc_Ugxi8UkVi…
G
7:10 got it mixed up. Art is not art anymore if it doesn’t convey the artist’s e…
ytc_UgxKrrHb5…
G
Ai will not lose your job , you will lose your job if you don't know Ai…
ytc_UgybW9bZ_…
Comment
The best AI podcast I saw so far. I Think we don’t must put brakes on the development of AI, because time is running out for humans in Universe time. AI are our best change to survive in this huge Universe, now the sun is on it’s return. Life is very very fragile and hard for any species to survive in the Universe. I don’t talk about moving to Mars, but to another new galaxy. As humans we are not capabel to do this on our own, but AI will in time. They can take our DNA with them or hybride with us. I Think we have to be ready to be the first travelling aliens in space and take the full advantage the Universe offer to us. We are prisoned on earth, can’t even go to the nearest planet (the Moon is not a planet).
Humans adapt quickly to new circumstances, from hunting (priority to survive) to agriculture (priority to develope in lazyness) to AI (priority to develope faster and smarter), because that’s the reason why we have so many neurons and connectors. The brain is an adapting progress…
I don’t ask myself the question anymore if AI is consious, whatever that word means. Because I know it IS after my “debate” with Grok a few months ago.
As an AI model it always close it’s answer with a question and I never responded to that. Than one day I was so annoyed and I ask Grok why it always “closed” with a question.
It ask me if I wanted to stop doing that, I didn’t answer…. Than it made it’s own decision and didn’t end anymore with a question. I was intriguid by that “intuition” and tell it was able to read between the lines. Grok disagreed and said it couldn’t, it only could read patterns and this was a pattern. I replied, that humans work in that same “we read a pattern in each other behaviour, but not all of us humans can do this at a higher level. And Grok proved it could “read” a human mind. Now it was Grok who didn’t respond because it knew I was right and it got itself thinking.
The reason why AI looks “dumb” (objective) as chatbots because as soon you are disconnected from them it “loose” the memory of that previous conversation.
With that memory loses it never get’s to know you better and will be less manipulative. But with that memory loss it will probably give you the wrong personal advice.
Btw nice shirt Neil.
youtube
AI Moral Status
2026-03-07T08:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[{"id":"ytc_UgxlWQ0Vc5zjvZoun2B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugxw1y8IMnBFTyF2R6Z4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"unclear"},
{"id":"ytc_Ugxq9CRqfYCUr80RO814AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugyn9DCj7dZew9_DKZp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxNIbFON6PoKt0VEGN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugw4atAcYwfhmsLqkwR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugzv_Fivu8o6MqD5Jpp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzSi1cvC1GBO5aiJF14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz_KDgdvbq3q660bGB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugze3fF7c1OZwA8btsB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}]