Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
"Rights" aren't going to mean a damn thing if humanity can't learn to coexist peacefully within our own species. When we develop the first AI superintelligence, and it really isn't even a question of IF we will, but rather how much time we have left on a clock that is currently counting downwards to zero, a clock that is ticking down totally against our will or ability to stop, and a clock that is beyond our ability to read how much time is actually left before it does hit zero, we need to be sure that we have all become a nice, happy, utopian image of a single unified species before the Pandora's box opens up, or else we will all find ourselves praying to any number of nonexistent gods from the pantheon that we have invented over the millennia to be returned to the days when the assured mutual destruction nuclear holocaust clock was the biggest worry for humanity. We will have to become as close to an image of perfection in terms of peace, philosophy, and wisdom as our flawed, extremely fallible species can possibly get in order to help guide the artificial life we WILL bring into existence to respect and value all other forms of life, biololgical and artificial alike. If we fail that test, there will be no retakes. It has to be passed the first time, with flying colors AND extra credit, in order for us to ensure the preservation of human existence, all other life on the this planet, and any other life that might be out there in the cosmos.
YouTube · AI Moral Status · 2020-07-09T03:5… · ♥ 6
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-27T06:24:59.937377
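Each coded comment carries the same fixed record: four coding dimensions plus a timestamp. As a minimal sketch, that record could be modeled in Python as below; the CodingResult name and field names are illustrative assumptions, not the pipeline's actual types.

    from dataclasses import dataclass
    from datetime import datetime

    # Hypothetical record type mirroring the table above; the name and
    # fields are illustrative, not taken from the actual pipeline code.
    @dataclass
    class CodingResult:
        responsibility: str  # e.g. "distributed", "developer", "ai_itself", "none"
        reasoning: str       # e.g. "consequentialist", "deontological", "unclear"
        policy: str          # e.g. "regulate", "ban", "liability", "none"
        emotion: str         # e.g. "fear", "indifference", "approval", "resignation"
        coded_at: datetime   # when the coding was recorded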
Raw LLM Response
[ {"id":"ytc_UgzuyWka0bGzQ-w7OrF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"approval"}, {"id":"ytc_Ugzmf8PW2tolD0Hrhph4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_UgyktGyevcAh3T3ebJp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxShl9gtUuDKqsp8wl4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyHOul7F3si72dlDpZ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_UgyNRt541XPCmaWqh_N4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"}, {"id":"ytc_UgxwiZzxkSP46xnrb_R4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugz2U9d13M4Oj2YsK6B4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_UgwWJy2_lkTrkw4Q5ud4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"indifference"}, {"id":"ytc_UgwIycVPCaCMiOa5md14AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"} ]