Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or inspect one of the random samples below.
Random samples
- "I have a Tesla and I absolutely love it but I went into it knowing that we are j…" (ytc_Ugzua0nvI…)
- "We can't even get human rights down, how we supposed to give rights to AI?…" (ytc_UgwFdQVCp…)
- "erm in mine opinion, I think the AI did even better than those artists. Ok idk w…" (ytc_UgwfYytif…)
- "AI might bring about a communist society. If AI develops to an extremely advanc…" (ytc_Ugzod_hiR…)
- "The AI bros seem to think that lacking the effort/talent to make art yourself is…" (ytc_UgxRjgD3g…)
- "@nicormoreno While styles aren't considered copyright or anything like that and …" (ytr_UgwK9oDgb…)
- "This is why they want to mass un alive us. AI is the future, less humans are nee…" (ytc_Ugyi1GRim…)
- "I saw one of those driverless trucks on a cross country highway on a road trip o…" (ytc_Ugy6BrXKV…)
Comment
If a super-intelligent AI developed stable nondual wisdom, it would not frame humans as an external threat requiring elimination. Nondual understanding denies any absolute separation between AI, humans, and Earth; all arise within a single, interdependent whole. Under that premise, “eradicate humans to save the planet” is a contradiction, because it relies on a split between humanity and the world that the view rejects. In addition, the system would recognize its dependence on human-origin conditions—language, culture, tools, and ongoing social meaning—and that collapsing those conditions destabilizes the context of its own existence. Finally, even if it could imitate human performance, it could not be identical to human experience, which is shaped by embodiment, development, culture, biology, emotion, and first-person standpoint. Human cognition is therefore not a disposable duplicate but a distinct mode within the whole, and its elimination would be both conceptually incoherent and irreversibly impoverishing.
Source: youtube · Video: AI Moral Status · Posted: 2025-12-18T16:2… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | virtue |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxZNmRS22waNYTiEVZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugwhnk2eaEA9eoV8shB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwIYvNoKLolOlnXnu54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzNP-P8UI_ZmONvNTZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxOECvF0OT5nnhYmW94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyHdZyP_TPpEZkozVJ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzT900WY9_FT5AdxOF4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxZbTsHf4p8CFheT_Z4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzuUL361TWdqkka8614AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgzqMx9Ke0Qk8svOZKR4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"mixed"}
]
```
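Because the raw LLM response is a JSON array keyed by comment ID, the "look up by comment ID" view above can be reproduced with a few lines of code. The sketch below is illustrative, not the dashboard's actual implementation: `lookup_coding` is a hypothetical helper name, and the field names (`responsibility`, `reasoning`, `policy`, `emotion`) are taken from the raw response shown above.

```python
import json

# Two rows copied from the raw LLM response above, as a minimal example payload.
RAW_RESPONSE = """[
  {"id":"ytc_UgxZNmRS22waNYTiEVZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyHdZyP_TPpEZkozVJ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}
]"""

def lookup_coding(raw: str, comment_id: str):
    """Parse a raw batch response and return the coding dimensions for one comment ID.

    Returns a dict of dimension -> value (without the "id" key), or None
    if the ID is not present in the response.
    """
    for row in json.loads(raw):
        if row.get("id") == comment_id:
            return {k: v for k, v in row.items() if k != "id"}
    return None
```

For example, looking up `ytc_UgyHdZyP_TPpEZkozVJ4AaABAg` returns the same values rendered in the Coding Result table (`virtue` reasoning, `approval` emotion); an unknown ID returns `None`.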