Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
Thankfully his Drs came to this diagnosis without using AI? My GP uses Wikipedia…
ytc_UgyuoRWYV…
Fundamental issue is that these companies do not have enough competition to driv…
ytc_UgwZFjUpC…
Basically the future:
"hey dad what are you doing?"
"art"
"oh that's cool, ...he…
ytc_UgxvMaa1m…
Teach the kids what will happen when they apply for a job but the position will …
ytc_Ugytw5KGr…
AI fucking sucks it cant even predict the next word I'm going to say and then te…
ytc_UgzOfv94H…
I think that every expert would agree that AI is just still so stupid and nobody…
ytc_UgwjODUXk…
AI for use in a capitolistic society will result in the exploitation of millions…
ytc_UgypiClSd…
The fact that the Echo story was written by AI is kinda mind blowing. As if it …
ytc_Ugx6qZr3m…
Comment
In the 90s we called it adaptive optimization. Basically what we currently have is a massive data sort and select algorithm; the more data you have, the better the result. Say you had 1 million pages of information and you ask it a question. You break that question up into keywords and weight those keywords by importance. You evaluate the text of each of those million pages by matching as many keywords as possible: the more keywords matched, generally, the greater this "fitness" function. You then splice the most relevant information from the "fittest" data. Since it's a hodgepodge, you then need a natural language processor to make it intelligible. Think of an NLP as a spell and grammar checker and corrector, like Grammarly. Then it prints apparently logical information to the screen or through a text-to-speech processor.

Basically that's it. It's hardwired for some subjects, i.e. sensitive social topics, so it gives standard answers for those. But that's about it. Just remember most of these people are programmers, so they have little to no knowledge of how humans "think". Basically a hack to appear human-like. Great as a vending machine or vacuum cleaner, but as Sir Roger says, this will never ever think. It may replace some jobs where the data output of that job is contained in access to millions of books. It still won't think.

You can easily break any adaptive optimizer (AI is a misleading abbreviation by poor scientists and engineers) by simply asking it a question not in those millions of books, i.e. something very obscure or new. Ask if men are smarter than women and it will crowbar to a prewritten politically correct answer. To test it further, keep telling it that its information is wrong; it will then get repetitive. Intelligence at its core is a problem of mathematics and physics, so basically computer scientists are great at writing algorithms, but they'll never solve this one.

I guess those calling it intelligence do not count a difference between thinking and appearing to think. If this is what they think AGI is, then be honest; otherwise, right now it's a bait and switch. I think we'll solve a unified theory of gravity before we solve the mathematical theory behind intelligence. So at the moment it's already been wheel-spinning time for decades, and likely centuries to come.
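The retrieval scheme the commenter describes (split a question into keywords, weight them, score each page by keyword matches, keep the "fittest" pages) can be sketched in a few lines. This is a minimal illustration of that description only, not any real system; the length-based weighting and all names here are assumptions made for the example.

```python
def score_page(page: str, weights: dict[str, float]) -> float:
    """Fitness: sum the weight of every keyword occurrence in the page."""
    return sum(weights.get(w.strip("?.,!"), 0.0) for w in page.lower().split())

def retrieve(question: str, pages: list[str], top_k: int = 2) -> list[str]:
    # Naive weighting assumption for this sketch: longer keywords matter more.
    keywords = [w.strip("?.,!").lower() for w in question.split() if len(w) > 3]
    weights = {w: float(len(w)) for w in keywords}
    # Rank pages by fitness and splice together the top results.
    return sorted(pages, key=lambda p: score_page(p, weights), reverse=True)[:top_k]

pages = [
    "cats are small domestic animals",
    "gravity is a force described by general relativity",
    "general relativity unifies gravity with spacetime geometry",
]
print(retrieve("what does general relativity say about gravity?", pages))
```

As the comment notes, a scheme like this degrades sharply on questions whose keywords appear in none of the stored pages.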
youtube
AI Moral Status
2025-07-21T14:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgzD8fBjDLrqqWPHKMt4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwxNhW21g0MUhKFmf94AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugz8Vc1EnKBngx-CM7x4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyXXAm9pD-4kW0_GSJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzcdBpQKuIWq8YZmRh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwhjqFDqW1D1sDEiBd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzgqyp4BZdgC2uoztZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgwZXRtSC1uLRSwb8yB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugzd0Uk6PM4mFlirukp4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyU-wlihuqTcwIuoON4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"}
]
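A raw response like the array above can be loaded and checked before it is turned into the per-comment coding table. This is a hedged sketch: the required field names come from the JSON shown here, the IDs in the sample string are placeholders, and no published codebook is assumed.

```python
import json

# The five fields every record is expected to carry, taken from the
# raw response format shown above.
REQUIRED = {"id", "responsibility", "reasoning", "policy", "emotion"}

# Placeholder records in the same shape as the raw response (IDs are fake).
raw = '''[
 {"id":"ytc_x1","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"indifference"},
 {"id":"ytc_x2","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]'''

records = json.loads(raw)
# Collect IDs of records with missing or unexpected fields.
bad = [r["id"] for r in records if set(r) != REQUIRED]
print(f"{len(records)} records, {len(bad)} with missing/extra fields")
```

A check like this catches the common failure mode where the model drops or renames a dimension in one record of an otherwise valid array.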