Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Amazing interview! Sorry, but from my personal experience (living in a 3rd world country, knowing we humans have so much room for improvement that goes unrealized due to greed, while millions suffer because of a few), if I could push a button to start general superintelligent AI with unpredictable outcomes I would do it in a heartbeat. I think humans are very intelligent and wise in general (compared to what we know exists), but we're so bad at ruling ourselves that a shot at a superintelligence doing it would be worth that unknown amount of risk (for me. And yes, I'm being egocentric too and taking "my side" with a bigger weight). And I wouldn't care about free time; I would ask the AI afterwards what I could do with it that would be meaningful, since "meaning" is a value and therefore subjective (assuming I could communicate with it and I was alive...). I'm 100% pro singularity (done the right way: not another repeatable historical ego humanoid trying to control or doom the world, but rather letting the AI, or whatever more-than-current-humans intelligent being, take good care of it). We as a species in control is but an illusion. Like Einstein poetized about fate, the tunes have been played by something beyond our personal understanding since the eons of time. Funny enough, I think the AI would also speed up the process toward the "immortality" that the interviewer seems very interested in (human longevity surpassing the velocity vector), because let's be honest, only very few humans would have access to it (and for how long). Choices made at global scale are nothing but consequences for the NPCs (me and 99% of the world population). They choose nothing and yet get all the consequences of other parties' choices. Unknown doesn't mean slavery, entropy, extinction, etc. It's just... unknown!
youtube AI Governance 2025-11-27T22:3…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       consequentialist
Policy          none
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgwxnuXdqivHa-S2ojN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwB_BiY9wuc-Q3w0414AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugwd5HzhWESy486-FHJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzGo0fZJL0byAiRXh94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyIG2BqmRfTmKtSmn94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxtgmHMwOxiEOpJocR4AaABAg","responsibility":"none","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzNsC0PcP9D1FgtuUR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgznDRxrC8NYR8nvACh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyHX900Jv9bDBGoII14AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwjCJjr_yCYB2v5GJN4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"}
]
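Because the model returns one JSON array per batch rather than one response per comment, each row has to be matched back to its comment by `id` before it can populate the dimension table above. A minimal sketch of that step, assuming the response is valid JSON in the schema shown (the sample array below is abridged to a single entry for illustration; field names are taken verbatim from the raw response):

```python
import json

# Abridged sample in the same schema as the raw LLM response above.
raw = (
    '[{"id": "ytc_UgwB_BiY9wuc-Q3w0414AaABAg",'
    ' "responsibility": "none", "reasoning": "consequentialist",'
    ' "policy": "none", "emotion": "approval"}]'
)

rows = json.loads(raw)

# Index the batch by comment id so each coded comment can be looked up.
by_id = {row["id"]: row for row in rows}

coded = by_id["ytc_UgwB_BiY9wuc-Q3w0414AaABAg"]
for dimension in ("responsibility", "reasoning", "policy", "emotion"):
    print(f"{dimension}: {coded[dimension]}")
```

A real pipeline would also validate that every expected `id` is present and that each dimension takes one of the allowed code values before writing the result.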