Raw LLM Responses
Inspect the exact model output for any coded comment, looked up by its comment ID.
Random samples (truncated previews):
- ytr_Ugw1l0oQL…: "The idea is AI will do the majority of the work for us and leave us to pursue wh…"
- ytc_UgzIbaHhe…: "I like how they added a mannequins bust section to really add that little extra …"
- ytc_UgxdMxfP9…: "Ai isn’t our replacement. Don’t give up on ourselves and to let a robot replace …"
- ytc_Ugy1iS156…: "If someone can come of with a better AI model that is more effecient that dont n…"
- ytc_UgyKe63c3…: "Wait until they start making deep fakes of a dictators political opponents in or…"
- ytc_UgzGugWB1…: "Well that makes me fully understand ai is a projection of the humans who created…"
- ytc_Ugz0D3x9A…: "What you are talking about is already happening/happened. Where they are going w…"
- ytc_UgzaFF-kX…: "Well, I am pretty sure when a self learning level AI comes out. An AI that follo…"
Comment
Yann LeCun has argued that the only way to achieve human-level intelligence is by hardcoding emotions into AI systems, and that if we can encode them with emotions, we will be capable of making them subservient. I disagree with both of these claims.
First of all, the debate is two years old, and I believe recent advancements suggest that emotionally grounded AI is unlikely to be the path we take to AGI. For instance, OpenAI’s unreleased model recently won the International Mathematical Olympiad, and another model placed second in the AtCoder World Tour—the most prestigious coding competition in the world. This essentially means that there is no human alive that is better at math than that new experimental AI model. That AI was not taught to feel anything.
However, the larger issue I take with LeCun’s position is the assumption that we will not only be able to encode subservience into future superintelligent systems, but that the people building them will choose to do so. Elon Musk has stated that xAI’s goal is to build a model designed to maximize knowledge and curiosity. While these are fascinating objectives, they do not necessarily align with the preservation of humanity.
In fact, an AI designed not around subservience, but around efficiently achieving its own goals, is likely to be far more capable and therefore more appealing to build. If that’s the case, do we really think we won’t build it?
youtube · AI Governance · 2025-07-27T09:5… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
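The values in this table come from a closed codebook, one label per dimension. A minimal validation sketch in Python, assuming the label sets below; they are inferred only from the sample response that follows, and the actual codebook may define additional values:

```python
# Hypothetical validator for one coded row. The label sets are inferred
# from the sample raw response on this page; the real codebook may
# include values not seen here.
CODEBOOK = {
    "responsibility": {"none", "developer", "distributed", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "unclear"},
    "emotion": {"indifference", "outrage", "approval", "fear", "mixed", "unclear"},
}

def validate_row(row: dict) -> list[str]:
    """Return problems found in one coded comment; an empty list means valid."""
    problems = []
    for dimension, allowed in CODEBOOK.items():
        value = row.get(dimension)
        if value not in allowed:
            problems.append(f"{row.get('id', '?')}: unexpected {dimension}={value!r}")
    return problems
```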
Raw LLM Response
```json
[
{"id":"ytc_UgzKfivHWSk0Dwdd_1d4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz5oVq_dnYTOV5GvwJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy_I89CCuX4lbHiYwl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugy1FZyaz01FfMHXs0Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgxfMrz-hb4gOV-2xKd4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgzGp1kL-p4RD6ag_VB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugznxqv7m5QAnbnbj2Z4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwB7BdoPtrsLRjoXJ54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwwBc0zawLvxRx60Tp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgwrQIzG6DaFPe13nzN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
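Because each raw response is a JSON array of objects keyed by comment ID, the lookup described at the top of this page reduces to indexing the parsed array. A minimal sketch, with illustrative variable names rather than the tool's actual API, assuming the response text is available as a string:

```python
import json

# One row from the raw response above; in practice, paste the full array.
raw_response = """
[
  {"id": "ytc_UgzKfivHWSk0Dwdd_1d4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"}
]
"""

# Index the rows by comment ID so any coded comment can be looked up directly.
codings = {row["id"]: row for row in json.loads(raw_response)}

row = codings.get("ytc_UgzKfivHWSk0Dwdd_1d4AaABAg")
if row is not None:
    print(row["reasoning"], row["emotion"])  # -> consequentialist indifference
```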