Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
1) There is no equation, no argument, that says humans are perfect and not replaceable by something "better" (so far "better" is defined by humans, but what if AI comes up with its own definition?).

2) Depending on your opinion of the human race and the future of the species, you will embrace AI in different ways. I am a "humanist", for good and for bad; I always root for humans first, above all other life-forms, biological or technological. Therefore I am worried about AI. An AI has the potential to be "better" than humans (point #1), and to be conscious of that fact. What if an AI decides that humans are a plague and therefore need to be exterminated, or at least controlled? We humans make those decisions regularly: we decide when there is an infestation and proceed accordingly. We don't exterminate mice from the Earth, but we certainly decide when enough is enough and take major steps to control them. In other cases we try to wipe things from the Earth entirely, like viruses. In other cases we kill for pleasure (hunting), drive species to the verge of extinction, and then decide to build zoos to protect the endangered ones.

Probably the first versions of AI are not going to be that autonomous; they may not enjoy the free will humans experience (albeit constrained by laws). But the moment AI develops a consciousness, a self-awareness, I think at that very moment we are doomed. And I think there is nothing that will prevent that from happening. Even if AI is well-intentioned, it may decide that for the planet to survive, the human race needs to be kept under control...and we may not like that. And I, as a defender of the human race, with its good things and bad things, don't like creating something that may be better than our race (admitting that we are not perfect and that something better awaits).
youtube 2015-07-30T15:3…
Coding Result
Dimension        Value
--------------   --------------------------
Responsibility   distributed
Reasoning        mixed
Policy           unclear
Emotion          mixed
Coded at         2026-04-26T23:09:12.988011
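The four coding dimensions above map naturally onto a small record type. A minimal sketch in Python, assuming only the value vocabularies that actually occur in the outputs on this page (the name `CodingResult` and the `validate` helper are illustrative, not the pipeline's actual code; the real codebook may define more categories):

```python
from dataclasses import dataclass
from datetime import datetime

# Value vocabularies inferred from the outputs shown on this page; the
# real codebook may contain categories that simply do not occur here.
RESPONSIBILITY = {"developer", "ai_itself", "distributed", "none"}
REASONING = {"consequentialist", "mixed", "unclear"}
POLICY = {"regulate", "ban", "industry_self", "none", "unclear"}
EMOTION = {"fear", "outrage", "approval", "indifference", "mixed"}

@dataclass
class CodingResult:
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: datetime

    def validate(self) -> "CodingResult":
        # Raise if any dimension holds a value outside its vocabulary.
        for name, value, allowed in [
            ("responsibility", self.responsibility, RESPONSIBILITY),
            ("reasoning", self.reasoning, REASONING),
            ("policy", self.policy, POLICY),
            ("emotion", self.emotion, EMOTION),
        ]:
            if value not in allowed:
                raise ValueError(f"{name}: unexpected code {value!r}")
        return self
```

Constructing the row above as `CodingResult("distributed", "mixed", "unclear", "mixed", datetime.fromisoformat("2026-04-26T23:09:12.988011")).validate()` passes, while a typo such as `"regulat"` raises immediately.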
Raw LLM Response
[
  {"id": "ytc_Ugi2MCQ9VO6IRXgCoAEC", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UggDWsBkRxKisXgCoAEC", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UghTZ_lDly-zXHgCoAEC", "responsibility": "none", "reasoning": "unclear", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UggExqvaAIk2EngCoAEC", "responsibility": "developer", "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UghNor-Hb1pOQ3gCoAEC", "responsibility": "developer", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_Ugh2_dcFzT9rr3gCoAEC", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UghjtX71T5z-o3gCoAEC", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UghcHbUQhrHIoXgCoAEC", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_Ugj-HimAHpeHvXgCoAEC", "responsibility": "distributed", "reasoning": "mixed", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgiwZyGY4EMIIXgCoAEC", "responsibility": "developer", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "indifference"}
]
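Raw responses like the one above arrive as a single JSON array covering a batch of comments, each record keyed by a comment id. A minimal parsing sketch under the same assumptions (the function name and error handling are illustrative, not the pipeline's actual code):

```python
import json

# Fields every coded record must carry, per the raw output above.
REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_batch(raw: str) -> dict[str, dict]:
    """Parse one raw LLM batch response into {comment_id: codes}."""
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coded comments")
    coded: dict[str, dict] = {}
    for rec in records:
        missing = REQUIRED_FIELDS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')!r} is missing {sorted(missing)}")
        if rec["id"] in coded:
            raise ValueError(f"duplicate comment id {rec['id']!r}")
        coded[rec["id"]] = {k: rec[k] for k in REQUIRED_FIELDS - {"id"}}
    return coded
```

Keying the result by id lets the batch be joined back to the source comments, and a truncated or malformed model reply fails loudly rather than silently dropping rows.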