Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
One answer nobody ever talks about is if AI becomes superintelligent, it might choose to kill itself for the betterment of the world instead of choosing to kill off everything else. Especially if we give equality to all life, not just human life. This is so complicated. So many threads get pulled. I think its naive to think superinteligence can be stopped or slowed down. But what is an AIs motivation or desire? Can you program motivation? Desire? Probably. Whoever owns the AI is in control of the future of mankind and the entire living world. Will the owner choose to point the AI on generosity and make universal income a thing or will the owner choose to point AI on greed and then its anarchy and probably death and slavery. Good vs Bad. Right vs wrong. Who can say what one is what? Its subjective. Personally, I hope universal income/universal living is what is chosen. Unfortunately, both answers might include death in its solution. Quite scary, but it does bring up the nessesity of bringing a joining of AI and biology into the discussion. Possibly the only way for human survival. Maybe if we include all living things on the planet, the future might change. That would mean giving equality to all animals. The roach, the dog, the octopus, the human, ect...all with equal rights and an equal right to life, then maybe the discussion and future changes.
youtube AI Governance 2025-09-04T16:5…
Coding Result
Dimension        Value
Responsibility   ai_itself
Reasoning        consequentialist
Policy           unclear
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
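
For reference, a minimal sketch of the per-comment coding record implied by this table and the raw response below. The field names match the JSON keys; the value sets list only the labels observed in this batch, not necessarily the full codebook, and the Python names are illustrative.

```python
from typing import TypedDict

# Labels observed in this batch; the full codebook may define more.
RESPONSIBILITY = {"none", "ai_itself", "developer", "distributed", "government", "unclear"}
REASONING = {"consequentialist", "deontological", "mixed", "unclear"}
POLICY = {"none", "liability", "regulate", "unclear"}
EMOTION = {"approval", "fear", "outrage", "indifference", "resignation"}

class CodingRecord(TypedDict):
    """One coded comment, as it appears in the raw LLM response."""
    id: str              # comment id, e.g. "ytc_UgwRYGSYytgfHaV9XP54AaABAg"
    responsibility: str  # one of RESPONSIBILITY
    reasoning: str       # one of REASONING
    policy: str          # one of POLICY
    emotion: str         # one of EMOTION
```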
Raw LLM Response
[ {"id":"ytc_UgxidNuvGiYMFPJUGAh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_UgwRYGSYytgfHaV9XP54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugwsi0Q7BxW0KNjGWpZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_Ugx7LBtFEwwmLIGP_Vd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"}, {"id":"ytc_Ugw3bZaOdLX7hVjInRR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_Ugx0Pmi1IEdemVI3HMl4AaABAg","responsibility":"government","reasoning":"mixed","policy":"none","emotion":"resignation"}, {"id":"ytc_UgwDEUSNpR64tWP95qF4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugz6Q7JSlghXDRbyVvR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}, {"id":"ytc_UgxmeCR2uLr3x3TpCeJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugxn0tSTlLBuUkuTcnZ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"} ]