Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- `ytc_UgyYL6QOK…`: "21:58 But this will set a precedent that AI models have to have a license to use…"
- `ytc_UgzMI4x4N…`: "Alex yes, 100 Trillion low bar, my AI dream team told me my co could be worth 32…"
- `ytr_UgxZ2O-k-…`: "I think Tesla's biggest fault was that they continued to let a user with many au…"
- `rdc_kcpjcal`: "GPT-4 is leagues beyond the competition, and the competition is never catching u…"
- `ytc_UgxITjv_o…`: "There is no such thing as sentient AI. In fact, we have no idea how to even get…"
- `ytc_Ugx4Tt6GT…`: "if human mind is so slow and limited by speach why we dont use ai to create soft…"
- `ytc_UgxwPJtAq…`: "I prefer the AI version and probably didnt cost a stupid amount either. Artists …"
- `ytc_UgynA2g85…`: "Here's the positive outcome we should target. AI will provide freedom to humans.…"
Comment
This is somewhat silly. This guy is supposed to be "smart." Yet, he is comparing a randomly scraped comment from reddit by some random Star Wars superfan that it was "trained" on (i.e., sucking in vast amounts of data) and thinking that somehow this deterministic network of flowing bits somehow "knew" it was a trick question and thus posed a "humorous" answer. That's just utter nonsense. What actually happened is that the trained algorithm found conflicting information, had few to no good options to select a single religion, and simply opted for some joke random answer that probably got a lot of upvotes as by SW nerds on some social posting site as something of a "hail Mary pass." That's not actual intelligence or careful sentient reasoning, its just picking something someone might be inclined to say as a cop out.
To be clear, it's decision to return that answer was no different than if you asked ChatGPT, or some other AI algorithm "what is the the best possible scenario for winning a game of bilateral nuclear war" and then being amazed to find that the AI's response is that nuclear war is “a strange game” and that it concludes that “the only winning move is not to play.” When gullible people (like Blake) would say "WOW! That's real intelligence!!" Only to be later corrected that it was just the scripted response to the same question in the 1983 movie WarGames, where a NORAD supercomputer runs through all possible scenarios only to find they all lead to global annihilation. It's just copying and pasting random crap it was trained on that has links/relations to the questions being asked.
This is the same reason why Getty Images and numerous individual artists are suing the creators of AI art generation sites like Mid Journey, Stable Diffusion, Deviant Art and others that vacuum up their copyrighted, along with images libraries from ShutterStock and others and morph them into derivative work, without any attribution or licensing. And the hilarious laughter one has when they see the "Getty Images" banner blended and intertwined with these "generative AI" images. That is not "intelligence." It just an algorithm for ingesting, summarizing, blending, and outing the most likely result. If it sounds like a more sophisticated version of Google search where a summary results is shown, you would not be far off. So whatever. Let this guy live in his fantasy land where he wants to make bogus claims. But to those less gullible. It's still just bits of data who's flow is controlled by programmers output results deterministically, based on the data it has been provided.
Source: youtube
Video: AI Moral Status
Posted: 2023-04-23T22:4…
♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
  {"id": "ytc_Ugwk4dIWchsyYYVJg_J4AaABAg", "responsibility": "user", "reasoning": "deontological", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgzSpKr3wbhZ4TyXXBt4AaABAg", "responsibility": "unclear", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgxtOc5_k3y1NVYmPdZ4AaABAg", "responsibility": "ai_itself", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwCaaCN3jnjfbNtxap4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgxnaPFfGv0388XE4mF4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "regulate", "emotion": "fear"}
]
```
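The raw response above is a JSON array with one object per coded comment, using the field names `id`, `responsibility`, `reasoning`, `policy`, and `emotion`. A minimal sketch of how such a response could be parsed and indexed for lookup by comment ID (the `lookup` helper and the truncated sample data are illustrative assumptions, not part of the tool):

```python
import json

# Hypothetical parsing sketch: the raw LLM response is a JSON array of
# per-comment codings. Two entries from the response above are reused here;
# the indexing/lookup helpers are assumptions for illustration.
raw = '''[
  {"id": "ytc_Ugwk4dIWchsyYYVJg_J4AaABAg", "responsibility": "user",
   "reasoning": "deontological", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgwCaaCN3jnjfbNtxap4AaABAg", "responsibility": "developer",
   "reasoning": "consequentialist", "policy": "unclear", "emotion": "outrage"}
]'''

# Index the codings by comment ID so a single comment can be inspected.
codings = {row["id"]: row for row in json.loads(raw)}

def lookup(comment_id: str) -> dict:
    """Return the coded dimensions for one comment ID."""
    return codings[comment_id]

print(lookup("ytc_UgwCaaCN3jnjfbNtxap4AaABAg")["emotion"])  # outrage
```

Indexing by `id` mirrors the "look up by comment ID" workflow: the coding-result table shown earlier is just one of these dictionaries rendered as rows.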