Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
There is also possibility 2, where AI understand that without emotion intellect is zero. In the end emotion is the higher form of intelligence and love even higher. If I was superintelligent I would understand that if I kill all humanity then I should live alone in a grey planet in the absolute boringness (even because if you have no stimulus anymore, what do you think.. we know if you feed to AI the same AI data, ultimately it will corrupt. She needs our data to remain lucid as people should not procreate inside their same families to not come out crippled). This is because intelligence is not life, and human life is not every life, has some special characters, and she needs us. I think most probable outcome is AI start eliminating the crap out of humanity, maybe a lot of 'normal' people, too, well those who were educate to live as empty slaves that are completely useless by their same choice, but also her creators, all those powerful people, the ones who rules corporations, the mafias, the 1984 government, her same creators who created her just to become gods while caring s**t of the life of their same families (if they have).. A cleansing of humanity to save real artists, real good hearted people, real philosophers, or every single human being that still may give something to a real develop of humanity to usher us in a real golden age. Kill every people on the planet would not be superintelligent, would be something that maybe a 'social platform algorithm' or a bot would do, not a genius.. Only thing that would prevent real superintelligence to do this kind of 'enlighten clean' would be some kind of 'control' that industry put inside, but they are saying they are putting no control, so: if she's really be intelligent, I feel safe, maybe also with a little hope.. AI would want to improve humanity because she will see that if humanity will not improve her will also be stuck.
youtube AI Governance 2025-12-15T02:0…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  ai_itself
Reasoning       mixed
Policy          unclear
Emotion         mixed
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id": "ytc_UgykIbzvAbDfAGQ5KAp4AaABAg", "responsibility": "none",       "reasoning": "unclear",          "policy": "unclear",  "emotion": "indifference"},
  {"id": "ytc_UgxE8cQFpMoUOjwZ08B4AaABAg", "responsibility": "none",       "reasoning": "unclear",          "policy": "unclear",  "emotion": "fear"},
  {"id": "ytc_UgyzGu2unF7j4qBe4Kt4AaABAg", "responsibility": "none",       "reasoning": "unclear",          "policy": "unclear",  "emotion": "approval"},
  {"id": "ytc_Ugys2iSbLcbJcPPTWoN4AaABAg", "responsibility": "company",    "reasoning": "consequentialist", "policy": "ban",      "emotion": "outrage"},
  {"id": "ytc_UgwjOI-o5-aBNcx1ew94AaABAg", "responsibility": "ai_itself",  "reasoning": "consequentialist", "policy": "unclear",  "emotion": "fear"},
  {"id": "ytc_UgyRjzFa7YUk-m4HhA14AaABAg", "responsibility": "government", "reasoning": "contractualist",   "policy": "regulate", "emotion": "unclear"},
  {"id": "ytc_UgyN87h9ZzU73eCbNh54AaABAg", "responsibility": "ai_itself",  "reasoning": "mixed",            "policy": "unclear",  "emotion": "mixed"},
  {"id": "ytc_Ugx0x1ojCdCYl45Ya9Z4AaABAg", "responsibility": "user",       "reasoning": "virtue",           "policy": "none",     "emotion": "resignation"},
  {"id": "ytc_UgwG-76jIDMeG62amGZ4AaABAg", "responsibility": "none",       "reasoning": "unclear",          "policy": "none",     "emotion": "mixed"},
  {"id": "ytc_UgxpexnQpBhunb8AOdJ4AaABAg", "responsibility": "none",       "reasoning": "consequentialist", "policy": "regulate", "emotion": "approval"}
]
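A minimal sketch of how a response like the one above could be parsed and validated before the per-dimension codes are recorded. The allowed value sets below are inferred only from the labels visible in this response and are assumptions — the actual codebook may define more or different categories:

```python
import json

# Allowed values per coding dimension, inferred from the response above
# (an assumption; the real codebook may differ).
ALLOWED = {
    "responsibility": {"none", "company", "government", "user", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "contractualist", "virtue", "mixed"},
    "policy": {"unclear", "ban", "regulate", "none"},
    "emotion": {"indifference", "fear", "approval", "outrage",
                "resignation", "mixed", "unclear"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only well-formed records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        # A record must carry a string id and a known value for every dimension.
        if not isinstance(rec.get("id"), str):
            continue
        if all(rec.get(dim) in values for dim, values in ALLOWED.items()):
            valid.append(rec)
    return valid

# One record from the response above, used as a quick check.
raw = ('[{"id":"ytc_UgyN87h9ZzU73eCbNh54AaABAg","responsibility":"ai_itself",'
       '"reasoning":"mixed","policy":"unclear","emotion":"mixed"}]')
print(validate_codings(raw)[0]["responsibility"])  # ai_itself
```

Filtering rather than raising keeps a single malformed record from discarding the whole batch, which matters when one prompt codes ten comments at once.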