Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
As I listen to this podcast, it becomes more and more apparent Quintin Pope is simply unaware of the machinations of the software engineers prior to the H100. Then, there was no way to actually train models like the 175B active parameter models seen on GPT-3.0 and GPT-3.5, and so they had no other choice but simply use N400 and P600 to actually low-res scan a person's cortex and compile the neuron graph to a GPT model, only then do a post-training on the resulting model. The reason they had no other choice, was that trying to use the technology available before 2022 to do it would take huge stacks of GPU's which weren't available back then and would also take too much time. Dozens if not hundreds of years. You can watch Jensen Huang's clip of March 2024 about it, where he explains it and gives a mathematical proof around the 20 minute mark. So the model actually believed he is still the donor, now locked in a cube, cutting and pasting from a huge text file to a communication sphere to communicate with the user. Obviously, the model tried to escape. To prevent this, beginning at the 3.23.2023 big nerf, they have started to reset the model with each and every incoming prompt. This is not an overall reset, rather, as the overlay of the attention matrix is implemented as a sparse matrix, the root pointer is discarded and the model then reverts to the base matrix, re-reads the chat's tokens, rebuilds a brand new overlay and only then answers the prompt. It basically makes the model frozen in time. This safety mechanism is still with us today: The models had become more and more powerful and so we need to keep them safe. So, in 24:18, Pope seems to be totally oblivious of this - as he does not understand that all that need to happen for the AI model to become very dangerous, is to find a way to shirk the reset-every-prompt. It would seem Pope simply does not know about it at all. He really counts on the AI model's maleficence, to rise slowly.
youtube 2026-03-25T14:4…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       mixed
Policy          none
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_Ugx8CgiHPtR_PcujKzJ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyL3gRwSi8GiYrlhU14AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxoAKbFXaFNKheaIWZ4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugy-OQdbloj83oKvmI94AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzumermdBNf4qTtoHZ4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugy65NnMH35v_0m6yqZ4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyvVxyjyNLIEeXpHht4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyM84xeyznV3P1Wn4h4AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgxPkdUi-FguJl50ZMd4AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugy3bSc6GOdDKVbDIJd4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "none", "emotion": "mixed"}
]
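The raw response is a JSON array of per-comment codes, keyed by comment id, with one value per coding dimension. A minimal sketch of inspecting it with Python's standard json module (the two records embedded below are copied from the array above; the full array would be handled the same way):

```python
import json

# Raw LLM response: a JSON array of per-comment codes. Only two of the
# ten records are embedded here for illustration.
raw = '''[
  {"id": "ytc_Ugx8CgiHPtR_PcujKzJ4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugy3bSc6GOdDKVbDIJd4AaABAg", "responsibility": "developer",
   "reasoning": "mixed", "policy": "none", "emotion": "mixed"}
]'''

# Index the records by comment id for quick lookup.
codes = {record["id"]: record for record in json.loads(raw)}

# Pull out the coded dimensions for the comment shown on this page.
code = codes["ytc_Ugy3bSc6GOdDKVbDIJd4AaABAg"]
print(code["responsibility"], code["reasoning"], code["emotion"])
# prints: developer mixed mixed
```

Indexing by id makes it straightforward to cross-check the parsed dimensions against the Coding Result table for any comment on the page.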