Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples

- "Making a doll repeat sentences created by humans isn't ai, ai has to learn as a …" (ytc_UgwqLbBeY…)
- "It's takes me fucking 2hours to draw a single frame of animation like that / AI ju…" (ytc_Ugw4Ca5X3…)
- "If i ever used ai art, it would probably be used as reference. Like the rumor of…" (ytc_UgwA7nJyg…)
- "@LeFlammm what i am saying is that if someone that wants to be an artist trains …" (ytr_UgxRKxl7X…)
- "In order to train an AI programmer model, You need data from real programmers. I…" (ytc_UgxrZv4Pe…)
- "AI moguls and other tech billionaires/millionaires bought favors from Trump and …" (ytc_UgwfGVHKw…)
- "She is very genuine. She is very intelligent. But she seems to be just wrong in …" (ytr_UgyrFkP0V…)
- "God gave us a brain so we can think for ourselves. So why do we create a machine…" (ytc_Ugxt0TuoT…)
Comment
Hank, I'm grateful to you for having this conversation. I have worked in this field for the last decade, and one thing I'd love to see in your further discussion of these topics is a little more discretion about using anthropomorphizing language. I know, personally, it's REALLY HARD, but I think you could use your voice in a very constructive way here.
Something I think can be a helpful analogy is to talk about corporations or even the economy are other "intelligent" systems that are, often, misaligned with humanity's long-term interests. I think that this parallel can help people to see this less as an intentional "evil machine" but rather a resulting effect of complexity, where optimizing for things that might sound good (employment, shareholder value, “engagement,” etc etc) actually lead to harmful impacts (climate change, war, fake news, etc). When you talk about "superintelligence" or the dangers of "advanced" AI, I would love to see you draw on these parallels and try to use language around "feeling" and "thinking" a bit less. You touch on this a bit, when talking about corporate behavior, but I think you should treat AI systems THEMSELVES like that, instead of just the builders.
AI is an important mirror to ourselves and our societies. I think we can learn a lot from these systems, and we can learn a lot about how to adapt and approach their construction from our own complex systems.
Essentially: a system doesn’t HAVE to be super intelligent for it to be extremely dangerous. Part of what makes it dangerous is ubiquity, complexity/lack of interpretability, and an inability (or our unwillingness) to stop it.
Source: youtube · Video: AI Moral Status · Posted: 2025-11-02T18:1… · Likes: 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgyjZyTJQdV33bw0vop4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwCMEtyTtZwynwkXrV4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugxh3riF0-4UK4etQ0d4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"industry_self","emotion":"resignation"},
{"id":"ytc_Ugw7UPSqMIu1xFiIUSl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzrp8HbL5oyccS7tDh4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyJZ5WYBWtWhye6KXN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw7_T-EMPRxzTRgF_N4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugy2ds2xE56wcAnbRrZ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwRM1UtUh06iVVjG654AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw22W8hz_3dOr8fC7h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
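A raw response like the one above is a JSON array of per-comment codes, one object per coded comment. A minimal sketch of how such a batch could be parsed and validated is below; the allowed value sets are inferred only from the samples shown on this page (the real codebook may define more categories), and the function name `parse_batch` is a hypothetical helper, not part of the actual pipeline.

```python
import json

# Allowed values per coding dimension, inferred from the sample output above.
# Assumption: the actual codebook may include additional categories.
ALLOWED = {
    "responsibility": {"ai_itself", "user", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"none", "regulate", "industry_self", "liability", "unclear"},
    "emotion": {"approval", "fear", "resignation", "indifference", "mixed"},
}

def parse_batch(raw: str) -> list[dict]:
    """Parse one raw LLM response (a JSON array of coded comments),
    rejecting rows with a missing id or out-of-codebook values."""
    rows = json.loads(raw)
    coded = []
    for row in rows:
        if "id" not in row:
            raise ValueError(f"row missing comment id: {row}")
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row['id']}: bad {dim} value {row.get(dim)!r}")
        coded.append(row)
    return coded
```

Failing the whole batch on one bad row is a deliberately strict choice; a production coder might instead log and skip invalid rows so one malformed LLM output does not discard the rest of the batch.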