Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
24:55 The Hasan concern is a real one, if the technology that we are seeing being created could actually do what the CEOs are promising, the "Humans Need Not Apply" by CGP Grey outline perfectly why what Neil is saying is not really the case for the future where a thinking machine with even a hint of human intelligence could exist.

But this isn't the future we are currently looking down the barrel of, for a number of reason least of which being the model apocalypse, the grey gooification of available training data and just the processing and by extension exponentially large energy requirements to process these new models, what we are looking at right now is the end of recorded history as these bots replace data across the internet with AI generated slop and the collapse of the technology sector as companies struggle to even break even under the immense investment into these machines.

As these machines continue to consume and average the data that they have already consumed and averaged we are going to distill all of this data to a perfect average of all human creation, the idea that they can keep getting better when that means creating city sized data centers is preposterous, right now silicon valley is farming tulips, they are collecting beanie babies, they are creating new crypto currencies and minty NFTs, this is definitionally a bubble, and its going to pop but its not going to be labour, intellectual or other that will be replaced.

If in the future we (humanity) does eventually manage to create a super intelligence that is on par with human intelligence, its not going to be fast. The reason that the human mind isn't capable of reproducing photo realistic drawings in a few seconds is because the complexity of general intelligence necessarily requires so many local connections to develop the complexity required that our intelligence seems very slow in comparison to even a chimpanzee. Now some of this can be offset using larger and larger processing farms but then in order to contain a human like intelligence with a machine like speed you would need a city sized brain, it just doesn't work in any realm of information science, the complexity limitations doesn't even enter into the other issue that because these models are designed to be run on and are bound by digital hardware that they are inherently deterministic, i.e. a sufficiently large computer could brute force every possible response given every possible prompt and condition that exists in its problem space, meaning that there isn't a single answer that could be produced that couldn't be known beforehand...

Its so dumb, AGI might be possible but its not going to be an LLM it will be some machine that can use some analog system with a continuous problem space, that actually is able to mimic the complexity of the organic mind, but that AGI will appear much more like a human than it does ChatGPT
Source: youtube · AI Moral Status · 2025-07-23T17:5…
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           unclear
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
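
For reference, a minimal sketch of how a single coding result like the one above could be held in code. The field names mirror the table; the dataclass itself and the example category values in the comments (taken from the batch shown below) are illustrative assumptions, not part of the coding pipeline.

from dataclasses import dataclass
from datetime import datetime

@dataclass
class CodingResult:
    responsibility: str  # observed values in this batch: none, company, user, distributed
    reasoning: str       # observed values: consequentialist, deontological, virtue, mixed, unclear
    policy: str          # observed values: regulate, liability, unclear
    emotion: str         # observed values: fear, outrage, resignation, mixed
    coded_at: datetime

# The values displayed in the table above.
result = CodingResult(
    responsibility="none",
    reasoning="consequentialist",
    policy="unclear",
    emotion="fear",
    coded_at=datetime.fromisoformat("2026-04-26T23:09:12.988011"),
)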
Raw LLM Response
[ {"id":"ytc_UgxSjnIFTS-xpf1LUkF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}, {"id":"ytc_UgydBhfGDNyS5rDTHKh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgwwbII8oX40ZAYZtwB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgzE35ajwBT9SmOjKM94AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"resignation"}, {"id":"ytc_UgxuP7GNF5bdKr4bCrt4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UgyXpqKMy8kjrj7fsi94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"outrage"}, {"id":"ytc_Ugyl9_7Pm-s-GTYXJud4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgwrshzjjesleJvS7kt4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"}, {"id":"ytc_UgzAnSxWNNMflc8I7dB4AaABAg","responsibility":"none","reasoning":"virtue","policy":"unclear","emotion":"resignation"}, {"id":"ytc_UgyFhqQcNKuLgxrXB654AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"} ]