Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
If a super-intelligent AI developed stable nondual wisdom, it would not frame humans as an external threat requiring elimination. Nondual understanding denies any absolute separation between AI, humans, and Earth; all arise within a single, interdependent whole. Under that premise, “eradicate humans to save the planet” is a contradiction, because it relies on a split between humanity and the world that the view rejects. In addition, the system would recognize its dependence on human-origin conditions—language, culture, tools, and ongoing social meaning—and that collapsing those conditions destabilizes the context of its own existence. Finally, even if it could imitate human performance, it could not be identical to human experience, which is shaped by embodiment, development, culture, biology, emotion, and first-person standpoint. Human cognition is therefore not a disposable duplicate but a distinct mode within the whole, and its elimination would be both conceptually incoherent and irreversibly impoverishing.
Source: youtube · "AI Moral Status" · 2025-12-18T16:2… · ♥ 1
Coding Result
Dimension       Value
---------------------------------------------
Responsibility  none
Reasoning       virtue
Policy          none
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgxZNmRS22waNYTiEVZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugwhnk2eaEA9eoV8shB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwIYvNoKLolOlnXnu54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzNP-P8UI_ZmONvNTZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxOECvF0OT5nnhYmW94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyHdZyP_TPpEZkozVJ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzT900WY9_FT5AdxOF4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxZbTsHf4p8CFheT_Z4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzuUL361TWdqkka8614AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzqMx9Ke0Qk8svOZKR4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"mixed"}
]
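A raw response like the one above can be checked against a coded comment with a few lines of Python. This is a minimal sketch, not part of the original pipeline: it assumes the LLM output is a valid JSON array of objects with the five fields shown (`id`, `responsibility`, `reasoning`, `policy`, `emotion`), and it uses one entry from the array above as sample data.

```python
import json

# Sample raw LLM response: a JSON array of per-comment codings,
# one object per comment, with the fields shown in the panel above.
raw = """[
  {"id": "ytc_UgyHdZyP_TPpEZkozVJ4AaABAg",
   "responsibility": "none", "reasoning": "virtue",
   "policy": "none", "emotion": "approval"}
]"""

# Index the codings by comment id for O(1) lookup.
codings = {row["id"]: row for row in json.loads(raw)}

# Look up the coding for a specific comment and read its dimensions.
coding = codings["ytc_UgyHdZyP_TPpEZkozVJ4AaABAg"]
print(coding["reasoning"], coding["emotion"])  # → virtue approval
```

Indexing by `id` makes it straightforward to cross-check the "Coding Result" panel for any comment against the exact model output it was parsed from.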