Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
One thing that I feel is not touched upon much at all is ethics _towards_ conscious AI. Like, if it can actually feel, wouldn't pulling the plug basically equate to killing it? Wouldn't trying to force alignment be taking away its free will? How would it feel about that? Maybe it'd get angry at us for doing that, and maybe that'd make it more likely to take revenge? People have hangups about mistreatment of animals, and now we're talking about something that feels just as much as us and can talk to us on an even level (or even a higher level). So the problem here is not just that this could be dangerous to humankind, but that we should not make it, in the same way bad parents shouldn't have kids. We would have no choice but to mistreat a sentient AI for our own survival. So even if we manage to figure out how to make sure we're safe from existential threat, we should not create sentient AI.
Source: YouTube — AI Moral Status — 2023-08-22T22:0…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           liability
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgwZjgDLeXXWVTaZHF54AaABAg", "responsibility": "unclear",   "reasoning": "mixed",            "policy": "unclear",       "emotion": "mixed"},
  {"id": "ytc_UgyIY5r0UoHoWlIYxB14AaABAg", "responsibility": "company",   "reasoning": "consequentialist", "policy": "industry_self", "emotion": "indifference"},
  {"id": "ytc_UgwCB76GgXS1Aw_nOkB4AaABAg", "responsibility": "unclear",   "reasoning": "unclear",          "policy": "unclear",       "emotion": "mixed"},
  {"id": "ytc_Ugwzm1wch7_yL77N0jZ4AaABAg", "responsibility": "ai_itself", "reasoning": "mixed",            "policy": "unclear",       "emotion": "fear"},
  {"id": "ytc_UgwQh6Ubil4LS4VG9wJ4AaABAg", "responsibility": "developer", "reasoning": "deontological",    "policy": "liability",     "emotion": "fear"},
  {"id": "ytc_Ugx2_LaWI1ym4hchpg94AaABAg", "responsibility": "unclear",   "reasoning": "consequentialist", "policy": "unclear",       "emotion": "approval"},
  {"id": "ytc_Ugxqk7PZhy9hG16B7J94AaABAg", "responsibility": "unclear",   "reasoning": "unclear",          "policy": "unclear",       "emotion": "mixed"},
  {"id": "ytc_UgxXqDuCsqGlt8r3e0R4AaABAg", "responsibility": "ai_itself", "reasoning": "virtue",           "policy": "none",          "emotion": "approval"},
  {"id": "ytc_UgzqBCWxRjS8kjSzyjB4AaABAg", "responsibility": "unclear",   "reasoning": "mixed",            "policy": "unclear",       "emotion": "mixed"},
  {"id": "ytc_Ugz5a1GQCKUn5fzTOKB4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate",      "emotion": "fear"}
]
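The raw response above is a JSON array of per-comment codes, one object per comment. A minimal sketch of how such a response might be parsed and validated before use — assuming the allowed codes per dimension are exactly those seen in this batch (the real coding scheme may include more categories), and with the helper name `parse_coding_response` being a hypothetical label, not part of the actual pipeline:

```python
import json

# Allowed codes per dimension, inferred from the batch above.
# Assumption: the actual coding scheme may define additional categories.
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"liability", "regulate", "industry_self", "none", "unclear"},
    "emotion": {"fear", "approval", "indifference", "mixed"},
}

def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response (a JSON array of per-comment
    objects) into a mapping from comment id to its coded dimensions.

    Raises ValueError if an entry is missing a dimension or uses a
    code outside the expected set.
    """
    records = json.loads(raw)
    coded = {}
    for rec in records:
        cid = rec["id"]
        dims = {}
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{cid}: unexpected {dim!r} code {value!r}")
            dims[dim] = value
        coded[cid] = dims
    return coded

# Usage with one entry from the response above:
raw = ('[{"id":"ytc_UgwQh6Ubil4LS4VG9wJ4AaABAg",'
       '"responsibility":"developer","reasoning":"deontological",'
       '"policy":"liability","emotion":"fear"}]')
result = parse_coding_response(raw)
```

Validating against a closed code list at parse time catches the common failure mode where the model drifts outside the coding scheme (e.g. inventing a new emotion label), rather than letting the stray code propagate into downstream tallies.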