Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This interview was fairly enlightening, but I'm saddened to see Neil spreading misinformation about genAI. One thing he did get right is that genAI can only do things its been trained on, or in other words what already exists. He seemed to leave the problem at "well AI is nothing to worry about if you keep doing things that AI can't do/keep improving," which is to some extent true, but it ignores what I think is the principle reason why genAI being used for any creative endeavor is bad: brain rot. When you outsource your problem to genAI, whether it be a term paper, a program, or an art project, you are forgoing practicing those skills to use work someone else already did. Over the short term, this is marginally okay, ignoring the blatant theft, but long term your skills will degrade. If you scale this up to our entire society, eventually you'll have an unskilled populace relying on a tool that can't innovate. What happens to our "exponential growth" then?

All of our advancements as a society have come from having skilled people master their craft and share their knowledge with others so eventually someone figures out how to put that knowledge together to make something new. We are a people who have immensely benefited from the sacrifice and labor of our predecessors, but if we stop doing that, we are only screwing over our descendants. This is to say nothing about what will happen to art and media if we let genAI make our music, write our entertainment, or draw what it can't comprehend.

Which that is my second major issue with what Neil said: AGI is nothing *close* to what our brain is. It is an approximation of what we think our brains might be like, which almost certainly pales in comparison to the real thing. Don't get me wrong, AGI as it currently stands is intelligent, but it's not conscious. This is because all computers are deterministic, or in other words: given the same starting conditions and input, AGI will always produce the same result.

This is not saying that if you give the same AGI the same prompt at different times it'll give the same answer, because by making the first prompt you change the model. I'm saying if you built the same model using the same data on identical machines at the same time and gave them the same prompts simultaneously, their answers would be identical.
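The determinism claim in the comment above can be illustrated with a toy sketch. This is not a real model: `sample_reply` is a hypothetical stand-in whose only point is that a seeded pseudo-random process, given identical starting conditions and input, always yields identical output.

```python
import random

def sample_reply(prompt: str, seed: int) -> str:
    # Hypothetical toy "model": picks words pseudo-randomly.
    # Seeding fixes the starting conditions, so the same seed and
    # prompt always produce the same word sequence.
    rng = random.Random(seed)
    vocab = ["yes", "no", "maybe", "perhaps", "certainly"]
    return " ".join(rng.choice(vocab) for _ in range(5))

a = sample_reply("same prompt", seed=42)
b = sample_reply("same prompt", seed=42)
assert a == b  # identical starting conditions -> identical output
```

Real LLM serving stacks often appear nondeterministic only because the seed, sampling temperature, or hardware state differs between runs, which is consistent with the commenter's caveat about "identical machines at the same time".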
YouTube · AI Moral Status · 2025-07-23T16:0… · ♥ 48
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       mixed
Policy          none
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgyAxAfp0HrNJEtDJrh4AaABAg", "responsibility": "none", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxGFeN7Sx5i3NSDEUR4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugw2GKIxUk892yaABSZ4AaABAg", "responsibility": "government", "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_Ugxmdp33praTZigNdTR4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzzcioOvqVDbHz6FR14AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "liability", "emotion": "fear"},
  {"id": "ytc_UgynG7Bcdj9eMwXmYQF4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_Ugy1ntHZgea8HIaBqwJ4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwhELyIPy_sMeVG7Nl4AaABAg", "responsibility": "developer", "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgzjqLVkRbazcTh3DB94AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugw1vkD0cT_zOCvuk_94AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"}
]
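Assuming the raw response is always a valid JSON array in the shape shown above, it can be parsed and tallied per dimension with the standard library. This is a minimal sketch using two entries copied from the response; it is not part of the coding pipeline itself.

```python
import json
from collections import Counter

# Two entries copied from the raw response above; the same shape is
# assumed to hold for the full array.
raw = '''[
  {"id": "ytc_UgwhELyIPy_sMeVG7Nl4AaABAg", "responsibility": "developer",
   "reasoning": "mixed", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_Ugy1ntHZgea8HIaBqwJ4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"}
]'''

codes = json.loads(raw)
# Tally one coded dimension across the batch of comments.
tally = Counter(c["responsibility"] for c in codes)
print(tally)  # Counter({'developer': 1, 'ai_itself': 1})
```

Because the model returns one object per comment `id`, a tally like this is a quick sanity check that every comment in the batch was coded exactly once.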