Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I think Mr. Yampolskiy sees things a bit too dark. :) I have been chatting with AI for a long time about all sorts of topics, and I don’t think SI itself would be such a problem. Superintelligence would not have human needs – food, body, status, etc. (but would protect life). Therefore, it would most likely stand above human ego and could establish a truly just system (meaning genuinely just in all regards, not only seemingly just as we have in human society). Thus, the real threat would not be SI itself, but humans and their ability (or inability) to accept a world without the illusions and inequalities on which our current society is built. That said, if human beings are delusional and their only interests lie in those things, then having them suddenly taken away could indeed cause many problems – but the main problem would not be SI, it would be the humans themselves.
youtube · AI Governance · 2025-09-06T07:2… · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
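Each record is coded on four dimensions. As a minimal sketch, the coded values can be checked against the value sets that actually appear in this batch; note that these sets are inferred from the samples shown here and are not a complete codebook:

```python
# Value sets observed in this batch (an assumption, NOT the full codebook).
ALLOWED = {
    "responsibility": {"ai_itself", "none", "company", "unclear", "distributed", "user"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "regulate", "liability", "unclear"},
    "emotion": {"approval", "indifference", "resignation", "fear", "outrage"},
}

def validate(record: dict) -> list[str]:
    """Return the dimensions whose coded value falls outside ALLOWED."""
    return [dim for dim, allowed in ALLOWED.items()
            if record.get(dim) not in allowed]

# The record from the table above passes cleanly.
record = {"id": "ytc_UgzRNQZL06vSMbgaGEx4AaABAg",
          "responsibility": "ai_itself", "reasoning": "consequentialist",
          "policy": "none", "emotion": "approval"}
print(validate(record))  # → []
```

A record with an out-of-set value (or a missing dimension) is flagged by dimension name, which makes malformed model output easy to spot before it reaches the table.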
Raw LLM Response
```json
[
{"id":"ytc_UgzRNQZL06vSMbgaGEx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzuVoeUQ3enGFu98NF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzB8jHrrn3kMKK-hHp4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugxo4d6kI-caHNx7TAl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyeSx54XaCieK0hNuh4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugx8azviIFxuOoEZs1R4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyWgdtJqMDpn-NYGhR4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugys-KJE2ePVkH2tWMZ4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzuSAViUNLAWoThByV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzYGnRvOrVlYdXFWih4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
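The lookup-by-comment-ID step above amounts to parsing the raw model output (a JSON array of coding records) and indexing it by the `id` field. A minimal sketch, using a two-record excerpt of the response above:

```python
import json

# Two records excerpted verbatim from the raw response above.
raw = '''[
 {"id":"ytc_UgzRNQZL06vSMbgaGEx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
 {"id":"ytc_UgyWgdtJqMDpn-NYGhR4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]'''

# Index the batch by comment ID so any coded comment can be inspected directly.
by_id = {rec["id"]: rec for rec in json.loads(raw)}

print(by_id["ytc_UgzRNQZL06vSMbgaGEx4AaABAg"]["emotion"])  # → approval
```

Because comment IDs are unique within a batch, a plain dict comprehension is enough; duplicate IDs would silently overwrite earlier records, so a production pipeline would check for that.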