Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
On the problem of alignment, a complication that the video didn't address is that perfect, universal alignment is impossible. We all know that if you get ten people in a room to make a decision, you will have at least eleven different opinions on the ideal outcome. And that already incorporates the fact that people tend to live and associate with people who are like them, limiting the scope and severity of conflicts. How could we develop a general AI and expect it to be able to equally please and protect everyone on Earth? How would it be able to act with the knowledge that helping one human could be viewed as hurting ten (or thousands of) others, no matter what decision it makes? To even be able to approach an answer, the AI would need to be able to accurately gauge how many people would be positively and negatively affected by an action and to what degree (thus requiring perfect prediction ability), and then somehow determine which action will produce the least bad outcome for the most people. Even this may not be good enough, because many times short term benefits result in long term detriments, or decisions that only slightly negatively affect others when multiplied millions of times can destroy the world (think pollution). Would we be able to live with the result if the AI actively kills one person to save everyone else? What if it kills ten? Or one million?
Source: youtube · Video: AI Moral Status · Posted: 2023-08-23T20:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | contractualist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyGg80879tSinqUEGh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugxaq5imjzfeg4LzHex4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugww8PygUF6gH1xGBJZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugy49W2J2jI-BEIc3lB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwkO75hqpFmuChVihp4AaABAg","responsibility":"none","reasoning":"contractualist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugz6h_ojuzSRfw1NxTF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugy0twynLZjyyLbmnWJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugz6U3BWhSsVninLaBZ4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgyCXx-5OHFr_wfWGbN4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgweHJH9Rn7KXfji8KZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"indifference"}
]
```
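The raw response is a JSON array with one coding record per comment, keyed by comment ID across four dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of loading such a payload and looking up a record by its comment ID, assuming only the schema shown above (the two inlined records are copied from the response; the variable names are illustrative):

```python
import json

# Raw LLM response: a JSON array of coding records, one per comment.
# Two records copied from the payload above, for illustration.
raw = """[
  {"id": "ytc_UgyGg80879tSinqUEGh4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgweHJH9Rn7KXfji8KZ4AaABAg", "responsibility": "ai_itself",
   "reasoning": "consequentialist", "policy": "liability", "emotion": "indifference"}
]"""

# Index the records by comment ID so a single record can be fetched directly.
codes = {rec["id"]: rec for rec in json.loads(raw)}

record = codes["ytc_UgweHJH9Rn7KXfji8KZ4AaABAg"]
print(record["responsibility"])  # -> ai_itself
print(record["policy"])          # -> liability
```

Indexing by `id` mirrors how the coded values in the result table are matched back to the comment being inspected.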