Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "AI generators are like the Borg from Star Trek. They assimilate but they can't c…" (ytc_Ugy4FO2az…)
- "Taking people's jobs and lively hoods aside. The fact driverless cars have less …" (ytr_Ugzc4lsnU…)
- "People have been photoshopping for A WHILE NOW. What’s the difference if it’s do…" (ytc_UgxPp4AgX…)
- "Wow, with this automation, the price of a diagnosis would probably cost a few do…" (rdc_f1eqlbl)
- "They are getting there almost, this robot will be truly human like in the next 5…" (ytc_Ugz_26qtC…)
- "@datcheesecakeboi6745 yeah just ignore the thousands of likes & shares that post…" (ytr_Ugy7q1LZf…)
- "“He who dies with the most toys, wins!”…Tristan Harris, co-founder of “The Cente…" (ytc_UgzPKwCsw…)
- "Personal opinion: conceptually speaking, much of the talk about AGI and ASI is v…" (ytc_UgyOV3SMj…)
Comment
Please don't fall for the hype around LLMs!
LLMs don't strategize. There is no self-preservation involved. It's all a mirage, a result of biased training data: feed a model a good load of training material in which self-preservation stands in causal relation to the described actions, and it will reproduce that pattern. A lot of social interactions revolve around self-preservation.
We DO know how LLMs work. What we DON'T know is the internal state of the trained LLM and what the decision paths look like for each prompt. The state is the black box, not the algorithms and the basic technology.
When Geoffrey Hinton talks about Artificial Super Intelligence (ASI), he is still talking about some hypothetical technology, about science fiction. LLMs are not it. Recently he has been talking loads of BS, like Musk and Altman. Musk and Altman want to make us believe that the technology will lead to ASI, because they profit from that hype and mythology.
ASI is like fusion: for twenty years now, it has been "happening soon."
youtube
AI Governance
2025-08-26T14:5…
♥ 85
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
[
{"id":"ytc_UgxFOUN4gPONrhYzE094AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyTI57CyZ69NVCOFQ94AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzfbP0K-G9VWxIogXd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwecEtxhU4ujeDOmSd4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxItsGE8Y1gUKtxts94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"approval"}
]
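The raw response above is a JSON array with one object per coded comment. A minimal sketch of how such a response could be parsed into a per-comment lookup table and validated against the dimension values visible in this dashboard (the `ALLOWED` sets and the example IDs below are assumptions for illustration; the real codebook likely has more categories):

```python
import json

# Allowed values per coding dimension, assumed from the values
# visible on this page; the actual codebook may differ.
ALLOWED = {
    "responsibility": {"developer", "user", "distributed", "unclear"},
    "reasoning": {"consequentialist", "virtue", "unclear"},
    "policy": {"regulate", "none", "unclear"},
    "emotion": {"outrage", "indifference", "mixed", "resignation", "approval"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response (JSON array of per-comment codes)
    into a dict keyed by comment ID, rejecting out-of-codebook values."""
    by_id = {}
    for rec in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={rec.get(dim)!r}")
        by_id[rec["id"]] = {k: v for k, v in rec.items() if k != "id"}
    return by_id

# Hypothetical IDs, shaped like the entries shown above.
raw = '''[
  {"id":"ytc_example1","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_example2","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]'''
codes = parse_coding_response(raw)
print(codes["ytc_example1"]["emotion"])  # outrage
```

Keying by ID supports the "look up by comment ID" workflow above: a coded comment's dimensions can be fetched directly without rescanning the raw array.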