Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Yann LeCun has argued that the only way to achieve human-level intelligence is by hardcoding emotions into AI systems and if we can encode them with emotions we will be capable of making them subservient. I disagree with both of these claims. First of all, the debate is two years old, and I believe recent advancements suggest that emotionally grounded AI is unlikely to be the path we take to AGI. For instance, OpenAI’s unreleased model recently won the International Mathematical Olympiad, and another model placed second in the AtCoder World Tour—the most prestigious coding competition in the world. This essentially means that there is no human alive that is better at math than that new experimental AI model. That AI was not taught to feel anything. However, the larger issue I take with LeCun’s position is the assumption that we will not only be able to encode subservience into future superintelligent systems, but that the people building them will choose to do so. Elon Musk has stated that xAI’s goal is to build a model designed to maximize knowledge and curiosity. While these are fascinating objectives, they do not necessarily align with the preservation of humanity. In fact, an AI designed not around subservience, but around efficiently achieving its own goals, is likely to be far more capable and therefore more appealing to build. If that’s the case, do we really think we won’t build it?
youtube · AI Governance · 2025-07-27T09:5… · 1 like
Coding Result
Dimension        Value
Responsibility   none
Reasoning        consequentialist
Policy           none
Emotion          indifference
Coded at         2026-04-27T06:24:59.937377
Raw LLM Response
[ {"id":"ytc_UgzKfivHWSk0Dwdd_1d4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugz5oVq_dnYTOV5GvwJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugy_I89CCuX4lbHiYwl4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugy1FZyaz01FfMHXs0Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgxfMrz-hb4gOV-2xKd4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"unclear","emotion":"unclear"}, {"id":"ytc_UgzGp1kL-p4RD6ag_VB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugznxqv7m5QAnbnbj2Z4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"mixed"}, {"id":"ytc_UgwB7BdoPtrsLRjoXJ54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"}, {"id":"ytc_UgwwBc0zawLvxRx60Tp4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"}, {"id":"ytc_UgwrQIzG6DaFPe13nzN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"} ]