Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID

Random samples — click to inspect:

- "If anyone has listened to a song written in the last 10 years and thought it was…" (ytc_UgyMaaQ4N…)
- "Here's the text of that GPT's instructions if you want to see how it's doing it:…" (rdc_lb307qy)
- "We MUST have watermarks indicating that this is AI, a sign before the video, and…" (ytc_UgznrLXvx…)
- "Haha, that's a funny way to put it! It seems like relationships can definitely c…" (ytr_Ugxk-A8YH…)
- "AI is taking data from another artwork. Data that codes for colour, saturation, …" (ytc_UgzuAU9wz…)
- "@MaraStore-r7u 🤣🤣🤣 Triggered much? We are replacing folks like you with L1 agent…" (ytr_UgxIc5Uz_…)
- "I see artistic jobs do very well. Of course AI is potentially also a rival to th…" (ytc_UgzQU57r-…)
- "True to her reputation, Elise Lucet once again completely misses …" (ytc_UgxRBEEb6…)
Comment
This interview was fairly enlightening, but I'm saddened to see Neil spreading misinformation about genAI. One thing he did get right is that genAI can only do things it's been trained on, or in other words what already exists. He seemed to leave the problem at "well AI is nothing to worry about if you keep doing things that AI can't do/keep improving," which is to some extent true, but it ignores what I think is the principal reason why genAI being used for any creative endeavor is bad: brain rot.
When you outsource your problem to genAI, whether it be a term paper, a program, or an art project, you are forgoing practicing those skills to use work someone else already did. Over the short term, this is marginally okay, ignoring the blatant theft, but long term your skills will degrade. If you scale this up to our entire society, eventually you'll have an unskilled populace relying on a tool that can't innovate. What happens to our "exponential growth" then? All of our advancements as a society have come from having skilled people master their craft and share their knowledge with others so eventually someone figures out how to put that knowledge together to make something new. We are a people who have immensely benefited from the sacrifice and labor of our predecessors, but if we stop doing that, we are only screwing over our descendants.
This is to say nothing of what will happen to art and media if we let genAI make our music, write our entertainment, or draw what it can't comprehend. That is my second major issue with what Neil said: AGI is nothing *close* to what our brain is. It is an approximation of what we think our brains might be like, which almost certainly pales in comparison to the real thing. Don't get me wrong, AGI as it currently stands is intelligent, but it's not conscious. This is because all computers are deterministic, or in other words: given the same starting conditions and input, AGI will always produce the same result. This is not to say that if you give the same AGI the same prompt at different times it'll give the same answer, because by making the first prompt you change the model. I'm saying that if you built the same model using the same data on identical machines at the same time and gave them the same prompts simultaneously, their answers would be identical.
Source: youtube · AI Moral Status · 2025-07-23T16:0… · ♥ 48
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | mixed |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgyAxAfp0HrNJEtDJrh4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxGFeN7Sx5i3NSDEUR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugw2GKIxUk892yaABSZ4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugxmdp33praTZigNdTR4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzzcioOvqVDbHz6FR14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgynG7Bcdj9eMwXmYQF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy1ntHZgea8HIaBqwJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwhELyIPy_sMeVG7Nl4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzjqLVkRbazcTh3DB94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugw1vkD0cT_zOCvuk_94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
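As a minimal sketch of how the "look up by comment ID" view might resolve a coding from a raw response like the one above: the batch is a JSON array of objects, each carrying an `id` plus the four dimensions shown in the Coding Result table. The `index_codings` helper and the abridged `raw_response` below are illustrative assumptions, not the tool's actual code.

```python
import json

# Abridged copy of a raw LLM batch response (two entries from the array above).
raw_response = """[
  {"id":"ytc_UgwhELyIPy_sMeVG7Nl4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugw1vkD0cT_zOCvuk_94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]"""

# The four coding dimensions shown in the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw):
    """Parse a batch coding response and index it by comment ID,
    skipping any entry that is missing one of the four dimensions."""
    codings = {}
    for entry in json.loads(raw):
        if all(dim in entry for dim in DIMENSIONS):
            codings[entry["id"]] = {dim: entry[dim] for dim in DIMENSIONS}
    return codings

codings = index_codings(raw_response)
print(codings["ytc_UgwhELyIPy_sMeVG7Nl4AaABAg"]["responsibility"])  # developer
```

Validating that every entry has all four dimensions before indexing is one simple guard against malformed model output; a real pipeline would likely also restrict each dimension to its allowed labels.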