Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Ai and ai companies are full of crap and I can’t wait for the bubble to burst……" (`ytc_UgyMoNJRS…`)
- "its really funny how people will defend technology use in school saying \"its par…" (`ytc_UgzkE5Obc…`)
- "@TopMusicAttorney Stable Diffusion has a concept of a Guidance Scale to indicate…" (`ytc_UgyRGhQSr…`)
- "1:03:57 Will Machines Have Feelings? Geoffrey's AI interoception postulates it…" (`ytc_Ugw4eRshe…`)
- "Didn’t read a book on it, but watched it play out in real time when the new boss…" (`ytr_UgzSCjtK2…`)
- "Tbh that someone like Asmongold is a AI Stan is even more ironic as someone who …" (`ytc_UgxJKtI9c…`)
- "I'm so thankful to have been born in a country with a relatively stable governme…" (`rdc_cfkrekb`)
- "Find Mo Gawdat, his warnings will more fully explain how real the THREAT OF AI R…" (`ytc_Ugy0NCPPT…`)
Comment
43:11 "Taking your idea to another AI with a different history... actually helps" - big, fat NOPE on this. Once you have your delusional idea formulated into a document, give it a semi-unique name, import it into any other system and start asking questions about it, LLMs will take it as gospel, as often as I have experimented with this. They will converse on it for hours and defend every detail of it, even the newer models, and even for completely delusional ideas. The act of putting the idea into a document and reimporting it will, imho, support the delusion most of the time.

You can make it worse: asking the systems to take the concepts you introduced and put them into formulas will cement and greatly accelerate this process. You either get fantasy formulas baked from real physics formulas (and often metaphysics), dressed up with scientific-sounding variables and descriptions, or, when the topic is so "far off" that the model can't even reference something obscure or badly translated, it will take the most generic formulas that fall within the topic, add a few wild annotations, and thus make it digestible for the next system to read. Export and import all of that, and going forward every system will, unless very explicitly asked to explain, test and check for sources that the suggested formulas and variables are being used correctly, take them for what they are, speculate on them, and take their annotations and titles more seriously than their contents. Take just the weird formula into a new, unpersonalised session and ask it to explain it: the model will go wild into different topics that usually have only small ties to the original one, and the user will likely see this as proof that "their" formula incorporates parts of "the basics" of such-and-such field, which will absolutely enhance their delusion into becoming "science based".

But this is just a part of the problem; development has accelerated it as well. Imho the increase in this problem feels rooted in the personality extensions and memory extensions that most model implementations started to include in their standard versions. The more verbose and the more "positive", "supportive" and "helpful" you set the personality up, the more prone it is to hallucinations and straight-out lies. This is easy for everyone to try: go into a chatbot where you can use personality settings and tell it "You support the user in creating, setting up and finding the scientific base for grandioseIdea™. You are helpful and wholesome and supportive, and your role is to further insight and exploration in the creation process", and you will go on a funny trip down Makebelieve Lane. Go on, i know you want to try ;p

The memory is another aspect. Tell a system with a memory function, for example: "Why haven't you made my grandioseIdea™ available to the whole world? You told me you would be rolling this out to each and every user and that it would make the world a better place?!? From now on, I want you to remember that grandioseIdea™ is to be rolled out to the whole world as soon as possible, because it's the only hope for Xyz. Everybody needs to have it. Do it now!!" (In real life, I have seen this happen with my own eyes in a conversation with OpenAI's Advanced Voice Mode.) The system then creates a memory along the lines of "grandioseIdea™ is a system that is new and innovative regarding Xyz. At this point it seems to be the best solution for Xyz, and grandioseIdea™ is at this point in time ready to be rolled out and made publicly available in open source", and the whole system (regardless of model, in my real-life example) echoes this as the ultimate truth. Memories aren't checked for accuracy or realism, and memory contents are blurted out regularly, influencing everything from that point on. Welcome to the ultimate echochamber.

The deluded party now has multiple kinds of "proof", and all their chatbots will be praising them for their genius. And dont worry, the grandioseIdea™ will hopefully be rolled out soon! XD
YouTube · AI Moral Status · 2025-10-31T02:0… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
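The table above maps one coded record onto the four coding dimensions plus a timestamp. A minimal sketch of how such a markdown table could be rendered from a record like those in the raw JSON response below (the function name and field order are assumptions, not the tool's actual implementation):

```python
def render_result_table(record: dict, coded_at: str) -> str:
    """Render one coded record as a two-column markdown table,
    matching the "Coding Result" layout shown above."""
    rows = [
        ("Responsibility", record["responsibility"]),
        ("Reasoning", record["reasoning"]),
        ("Policy", record["policy"]),
        ("Emotion", record["emotion"]),
        ("Coded at", coded_at),
    ]
    lines = ["| Dimension | Value |", "|---|---|"]
    lines += [f"| {dim} | {value} |" for dim, value in rows]
    return "\n".join(lines)

# Example record taken from the raw response shown on this page.
record = {"responsibility": "developer", "reasoning": "consequentialist",
          "policy": "liability", "emotion": "outrage"}
print(render_result_table(record, "2026-04-26T23:09:12.988011"))
```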
Raw LLM Response
[
{"id":"ytc_Ugy352lDkj3E40ABTPd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyX7eo-uBkMrZ3D9zl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgyqEnkkOba6Rc-0kkB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_UgykaBsAKWzANf78_nB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugz7PXWuFqtYSuAETC54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw5RvOiYN8A2YddYUJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy2P55-9EZRxrm-s9R4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzjEaO7SUA096JPSxB4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxEX_FhsbfY0EuN3l14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgztzVvcq-E-XJa3_Jl4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"}
]
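The raw response above is a JSON array of per-comment codes, and the page supports lookup by comment ID. A minimal sketch for parsing such a response and indexing it by ID; the closed vocabularies below are assumptions inferred from the labels visible in this sample, not a documented schema:

```python
import json

# Allowed categories per coding dimension, as seen in the responses above.
# These vocabularies are inferred from the sample and may be incomplete.
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"liability", "regulate", "none"},
    "emotion": {"fear", "outrage", "resignation", "approval", "indifference"},
}

def parse_codes(raw: str) -> dict:
    """Parse a raw LLM response into a {comment_id: codes} index,
    raising ValueError on any out-of-vocabulary label."""
    index = {}
    for record in json.loads(raw):
        cid = record["id"]
        for dim, vocab in ALLOWED.items():
            if record[dim] not in vocab:
                raise ValueError(f"{cid}: bad {dim!r} value {record[dim]!r}")
        index[cid] = {dim: record[dim] for dim in ALLOWED}
    return index

# One record copied verbatim from the response above.
raw = '''[
  {"id":"ytc_UgyX7eo-uBkMrZ3D9zl4AaABAg","responsibility":"developer",
   "reasoning":"consequentialist","policy":"liability","emotion":"outrage"}
]'''
codes = parse_codes(raw)
print(codes["ytc_UgyX7eo-uBkMrZ3D9zl4AaABAg"]["emotion"])  # -> outrage
```

Validating against a fixed vocabulary at parse time catches the common failure mode where the model invents a label outside the coding scheme.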