Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
There may be one flaw in Hinton's analysis. Emotions such as compassion or sadness are not solely rooted in the cognitive perception of external data; they also involve a desire or will to envision an alternative reality, one that is morally superior or more acceptable. Even if machines develop superintelligence and display survival-driven emotions like fear or anger, they will lack a moral compass to contextualize other emotions like sympathy or compassion. Humans would still need to provide the ethical framework, what is considered morally right or wrong, for machines to interpret those feelings meaningfully. Therefore, certain emotions will remain dependent on the guidance of higher moral agents, even if those agents are less intelligent. This distinction means that machines cannot truly develop emotions in the same way humans do. Achieving such depth would likely require a fusion between human and machine, something akin to a cybernetic organism or a world inspired by cyberpunk fiction.

Even if one believes that objective morality doesn't actually exist, Hinton's argument faces a deeper challenge. If morality is entirely subjective, then on what grounds can he claim that AI might be "awful" for future generations in any objective sense? Without a stable moral framework, such claims become expressions of personal or cultural preference rather than universal ethical truths. This undermines the very basis for evaluating AI's impact as definitively good or bad.
Source: youtube · AI Governance · 2025-06-25T23:1…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  ai_itself
Reasoning       mixed
Policy          unclear
Emotion         mixed
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgzeCPmGEFCnhlrcbEl4AaABAg", "responsibility": "company",   "reasoning": "consequentialist", "policy": "regulate",      "emotion": "fear"},
  {"id": "ytc_UgwcCdQMpyvWouccWEp4AaABAg", "responsibility": "none",      "reasoning": "deontological",    "policy": "none",          "emotion": "approval"},
  {"id": "ytc_Ugzc0lQbQm_nkXCL1pN4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed",            "policy": "unclear",       "emotion": "mixed"},
  {"id": "ytc_UgxkcRPRy1enESkmmSl4AaABAg", "responsibility": "unclear",   "reasoning": "unclear",          "policy": "unclear",       "emotion": "indifference"},
  {"id": "ytc_UgwoDVBefIVFy4UkE1x4AaABAg", "responsibility": "developer", "reasoning": "virtue",           "policy": "unclear",       "emotion": "outrage"},
  {"id": "ytc_UgxjpWWOyw6-YRWeXC54AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear",       "emotion": "fear"},
  {"id": "ytc_UgzcCSEclhq1U3iRrId4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate",      "emotion": "fear"},
  {"id": "ytc_UgxieGYWr-MK4dQ6Ij14AaABAg", "responsibility": "developer", "reasoning": "virtue",           "policy": "unclear",       "emotion": "mixed"},
  {"id": "ytc_UgxtA1l834FaZ6j5uBh4AaABAg", "responsibility": "developer", "reasoning": "virtue",           "policy": "unclear",       "emotion": "outrage"},
  {"id": "ytc_UgxhzNPfamOyqFZQkRZ4AaABAg", "responsibility": "user",      "reasoning": "deontological",    "policy": "industry_self", "emotion": "outrage"}
]
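The coding result shown above is simply the entry from this JSON array whose `id` matches the comment. A minimal sketch of that lookup, assuming the raw response is a valid JSON array of per-comment records (the function name `codes_by_id` is a hypothetical helper, not part of any established pipeline):

```python
import json

# Raw LLM response: a JSON array of per-comment coding records
# (shortened here to the one record displayed above).
raw_response = """
[
  {"id": "ytc_Ugzc0lQbQm_nkXCL1pN4AaABAg",
   "responsibility": "ai_itself", "reasoning": "mixed",
   "policy": "unclear", "emotion": "mixed"}
]
"""

def codes_by_id(raw: str) -> dict:
    """Parse the model output and index each coding record by comment id."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codes = codes_by_id(raw_response)
record = codes["ytc_Ugzc0lQbQm_nkXCL1pN4AaABAg"]
print(record["responsibility"])  # ai_itself
```

In practice the parse can fail if the model emits extra text around the array, so a production pipeline would wrap `json.loads` in error handling before indexing.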