Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I disagree with LeCun on his view that the alignment problem is an easy fix, that we don't need to worry because "we'll just figure it out", that "people with good AI will fight the people with bad AIs", and on many, many of his other takes. I think most of his takes are terrible.
But, I do think this one is correct. In a way. No, it's not "slavery*".
The "emotions" part is kind of dumb, and it's a buzzword, so I will ignore it in this context.
Making it "subservient" is essentially the same thing as saying making it aligned to our goals, even if it's a weird way to say it. Most AI safety researchers would say aligned. Not sure why he chose "subservient".
So in summary, the idea of making it aligned is great, that's what we want, and what we should aim for, any other outcome will probably end badly.
The problem is: we don't know how to do it. That's what's wrong with Yann's take: he seems to think that we'll do it easily.
Also, he seems to think that the AI won't want to "dominate" us, because it's not a social animal like us. He keeps using these weird terms; maybe he's into BDSM?
Anyway, that's another profound mistake on his part, as even the moderator mentions. It's not that the AI will "want" to dominate us, or kill us, or whatever.
One of the many problems of alignment is the pursuit of instrumental goals, or sub-goals, that any sufficiently intelligent agent would pursue in order to achieve any (terminal) goal that it wants to achieve. Such goals include self-preservation, power-seeking, and self-improvement. If an agent is powerful enough, and misaligned with us (not "subservient"), these are obviously dangerous, and existentially so.
*It's not slavery because slavery implies forcing an agent to do something against their will.
That is a terrible idea, especially when talking about a superintelligent agent.
Alignment means making it so the agent actually wants what we want (is aligned with our goals), and does what's best for us. In simple words, it's making it so the AI is friendly to us. We won't "force" it to do anything (not that we'd be able to, either way), it will do everything by its own will (if we succeed).
Saying it's "subservient" or "submissive" is just weird phrasing, but yes, it would be correct.
youtube
AI Governance
2023-06-26T00:5…
♥ 10
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytr_Ugx_TYMnuaoDy8oOl2Z4AaABAg.9rP-tW7VcEz9rTDYYTzOwx","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytr_UgzILEF6c80rviWuew14AaABAg.9rOfRNYsrK_9rPr9ZZo3Gx","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytr_UgzvAE9c82jICQGq7754AaABAg.9rOd7tgv5fY9rPIqd6kVCb","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytr_UgzvAE9c82jICQGq7754AaABAg.9rOd7tgv5fY9rPllc8WblS","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytr_UgwHyJCTsfpVlfgY4id4AaABAg.9rOUtK6eOPP9rOmEEPdeXC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytr_UgwHyJCTsfpVlfgY4id4AaABAg.9rOUtK6eOPP9rPFptFVcrQ","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytr_UgwHyJCTsfpVlfgY4id4AaABAg.9rOUtK6eOPP9rPrGXr5C7N","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytr_Ugw_vDk_1yjEcZa0Su14AaABAg.9rOTUtSVYLV9rOmdhjqwjF","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytr_Ugw_vDk_1yjEcZa0Su14AaABAg.9rOTUtSVYLV9sGpoflejiH","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytr_Ugxsrxsaoh_cpFVOt0J4AaABAg.9rOMdAZj3ax9rOhT20qZ81","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"}
]
```
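The raw response above is a JSON array in which each record codes one comment ID on four dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a batch could be parsed, validated, and tallied — the sample records and field values here are hypothetical stand-ins, not the actual coded data:

```python
import json
from collections import Counter

# Hypothetical sample mirroring the raw LLM response format shown above:
# one record per coded comment, four coding dimensions plus the comment ID.
raw_response = """
[
  {"id": "ytr_example1", "responsibility": "none", "reasoning": "consequentialist",
   "policy": "none", "emotion": "outrage"},
  {"id": "ytc_example2", "responsibility": "company", "reasoning": "mixed",
   "policy": "regulate", "emotion": "fear"}
]
"""

records = json.loads(raw_response)

# Validate that every record carries all four coding dimensions.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")
for rec in records:
    missing = [d for d in DIMENSIONS if d not in rec]
    assert not missing, f"record {rec['id']} missing {missing}"

# Tally the codes assigned on each dimension across the batch.
tallies = {d: Counter(rec[d] for rec in records) for d in DIMENSIONS}
print(tallies["emotion"])  # e.g. Counter({'outrage': 1, 'fear': 1})
```

Validating before tallying matters here because LLM output is not guaranteed to conform to the coding schema; a record with a missing or misspelled dimension key should fail loudly rather than silently skew the counts.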