Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "Oh no a racist got to the coding tool and now all AI is racist…" (ytc_UgxZ0bbis…)
- "literally every single one of these "real" art pieces look worse than the AI pie…" (ytc_Ugwq7XFTr…)
- "@j9dz2sf Programmers are required to put algorithms into a robot. Robots migh…" (ytr_UgyYcvlck…)
- "Piracy is piracy, be it done by a human or AI. We need laws to control the use a…" (ytc_UgxxOyWmu…)
- "homework was the thing that ruined my life when I was a kid. teachers a bit but …" (ytc_UgzuvXbrd…)
- "As I understand, there is no real AI to this day. What we DO have are algorithms…" (ytc_UgxvutnYv…)
- "If we accept that it's going to take over, why don't we use the current versions…" (ytc_UgxipV-23…)
- "Hmmm... Like the scammer who showed me an advert on this video for a solana aird…" (ytc_UgxseMZl1…)
Comment
The Turing test is a terrible way of testing for what it wants to test. It was conceived over 80 years ago in a time where the ONLY conceivable way that anyone imagined having a "normal" conversation with a machine was if the machine was sentient. Of course, 80 years later and with computers a gazillion times more powerful than anything imaginable at that time and with access to hundreds of millions of examples of human communication, a complex data model can emulate (and very accurately) normal human response. NONE of that means that the algorithm is sentient and this guys knows it (or should know it). Either he does and this is all a personal publicity stunt (he seems to be launching into a "speaker" career) or he's just naive and fell for the AI (not the first time this has happened). There's no way a data model springs into consciousness without it being explicitly built into the system. And we are nowhere near today to even understand how that would work, so....no...pretty sure the AI is not self-aware.
youtube · AI Moral Status · 2022-07-01T09:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
  {
    "id": "ytc_Ugy--nBrbwfUY0dBkOV4AaABAg",
    "responsibility": "none",
    "reasoning": "unclear",
    "policy": "unclear",
    "emotion": "indifference"
  },
  {
    "id": "ytc_Ugwzqpzm30HX4C3wyNN4AaABAg",
    "responsibility": "user",
    "reasoning": "deontological",
    "policy": "none",
    "emotion": "outrage"
  },
  {
    "id": "ytc_UgyuIMACQCCGvmzg0pt4AaABAg",
    "responsibility": "none",
    "reasoning": "deontological",
    "policy": "unclear",
    "emotion": "indifference"
  },
  {
    "id": "ytc_UgyC6viuep8ppeuEP094AaABAg",
    "responsibility": "company",
    "reasoning": "consequentialist",
    "policy": "regulate",
    "emotion": "fear"
  },
  {
    "id": "ytc_UgzpHU5Gxc7f97ZvyU54AaABAg",
    "responsibility": "developer",
    "reasoning": "deontological",
    "policy": "unclear",
    "emotion": "outrage"
  }
]
```