Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
On the problem of alignment, a complication that the video didn't address is that perfect, universal alignment is impossible. We all know that if you get ten people in a room to make a decision, you will have at least eleven different opinions on the ideal outcome. And that already incorporates the fact that people tend to live and associate with people who are like them, limiting the scope and severity of conflicts. How could we develop a general AI and expect it to be able to equally please and protect everyone on Earth? How would it be able to act with the knowledge that helping one human could be viewed as hurting ten (or thousands of) others, no matter what decision it makes? To even be able to approach an answer, the AI would need to be able to accurately gauge how many people would be positively and negatively affected by an action and to what degree (thus requiring perfect prediction ability), and then somehow determine which action will produce the least bad outcome for the most people. Even this may not be good enough, because many times short term benefits result in long term detriments, or decisions that only slightly negatively affect others when multiplied millions of times can destroy the world (think pollution). Would we be able to live with the result if the AI actively kills one person to save everyone else? What if it kills ten? Or one million?
youtube · AI Moral Status · 2023-08-23T20:1…
Coding Result
Dimension       Value
Responsibility  none
Reasoning       contractualist
Policy          unclear
Emotion         indifference
Coded at        2026-04-26T23:09:12.988011
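For reference, a minimal sketch of how a single coding result could be represented in code. The field names follow the table above; the class name is an assumption, and the example values in the comments are only those that appear in the records on this page, not an exhaustive schema.

```python
from dataclasses import dataclass

@dataclass
class CodingResult:
    """One coded comment: the four analytic dimensions plus the coding timestamp."""
    responsibility: str  # e.g. "none", "developer", "company", "ai_itself", "distributed"
    reasoning: str       # e.g. "contractualist", "consequentialist", "deontological", "mixed", "unclear"
    policy: str          # e.g. "unclear", "none", "liability"
    emotion: str         # e.g. "indifference", "approval", "fear", "mixed", "outrage"
    coded_at: str        # ISO-8601 timestamp, e.g. "2026-04-26T23:09:12.988011"

# The coding result shown above, as one record.
example = CodingResult(
    responsibility="none",
    reasoning="contractualist",
    policy="unclear",
    emotion="indifference",
    coded_at="2026-04-26T23:09:12.988011",
)
```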
Raw LLM Response
[
  {"id":"ytc_UgyGg80879tSinqUEGh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugxaq5imjzfeg4LzHex4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugww8PygUF6gH1xGBJZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugy49W2J2jI-BEIc3lB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwkO75hqpFmuChVihp4AaABAg","responsibility":"none","reasoning":"contractualist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugz6h_ojuzSRfw1NxTF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_Ugy0twynLZjyyLbmnWJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugz6U3BWhSsVninLaBZ4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgyCXx-5OHFr_wfWGbN4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgweHJH9Rn7KXfji8KZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"indifference"}
]
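The raw response is a JSON array with one object per coded comment in the batch. A minimal sketch of how the coding result table above could be recovered from it, assuming the array is parsed and matched on the comment id; the helper name is illustrative, and the id used below is taken from the entry whose values match the displayed coding result.

```python
import json
from typing import Optional

def coding_for(comment_id: str, raw: str) -> Optional[dict]:
    """Parse the raw batch response and return the coding for one comment id, if present."""
    records = json.loads(raw)
    return next((r for r in records if r.get("id") == comment_id), None)

# A one-record sample of the raw LLM response shown above.
raw_response = """
[{"id":"ytc_UgwkO75hqpFmuChVihp4AaABAg","responsibility":"none",
  "reasoning":"contractualist","policy":"unclear","emotion":"indifference"}]
"""

# Look up the coding for the displayed comment and print it as dimension/value pairs.
coding = coding_for("ytc_UgwkO75hqpFmuChVihp4AaABAg", raw_response)
if coding is not None:
    for dimension in ("responsibility", "reasoning", "policy", "emotion"):
        print(f"{dimension}: {coding[dimension]}")
```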