Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Good topic, i love it! The only thing that is missing is mentioning alternative solutions for this problem. In my opinion the biggest problem with our technology is that, it hurts the human kind in different level. Unfortunately this is true to electricity and to magnetic fields (with cell phones) and also to nuclear energy. We sacrifice our health and our future to technology and to comfort. So it's obvious that, if something hurts us (even if just a little bit) than if we continue in that direction it will kill us slowly (so never heal us). Our technology is electric-energy based but without morality. I mean you can develop any electric device to hurt other people. With ai we would never ever solve this problem because of it is not a human so it has no emotions and so no morality. Ai can speak about morality and mimic emotions but never can reach this level nor SuperAI. But i believe there is a way to survive ,which is quite different and unusual approach. We have to rethink our science and also ourself in a ground level, and we have to put humanity and health before comfort and current technology. That means we have to start thinking about a new kind of technology which grounded on this principle. This kind of technology is the moral technology which is opposite of the current one ( which is based on material ideology). I mean in the future this machines only works when the user has a good purpose (high morality level) for using it. It is in the far future but we have to start plant the seeds. There were scientist (like Keely with his machines)and supporters who believing in this idea and started to research this field (Paul Emberson and the Anthro-Tech Research Lab) with some groundbreaking results but the research is still in very early stage...I wish we put globally more energy (money) to this projects and we hear this as our hope to survive (not a hoax) and start to realize the weight of this too! Best Wishes!
YouTube · AI Governance · 2025-09-04T20:4…
Coding Result
Dimension       Value
Responsibility  distributed
Reasoning       consequentialist
Policy          regulate
Emotion         approval
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgzMJ4uzotr6iGwIrch4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugy3MgeOl5mZ6VKs70F4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgypUAYa1xgakk-w3rR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugw7pgD4z8LE9Xo0qCV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgyKGAh__2mlZzzE-vd4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugxf1RA-76HjwswHCNZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
  {"id":"ytc_Ugx9uj0CbgFOfZl4Df94AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxI91PeburlTiCumGR4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzI-_8gIdL8nKSNJOF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwaMgvWATiydwqoBN94AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"}
]
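To verify a displayed coding against the raw model output, one can parse the JSON array and look up the record by comment id. A minimal sketch (the ids and dimension values below are taken from the response above; the lookup helper itself is illustrative, not part of the tool):

```python
import json

# Excerpt of the raw LLM response: a JSON array of per-comment codings.
# The first record below matches the Coding Result shown on this page.
raw = '''[
  {"id":"ytc_Ugw7pgD4z8LE9Xo0qCV4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgzMJ4uzotr6iGwIrch4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"outrage"}
]'''

records = json.loads(raw)
# Index codings by comment id for O(1) lookup.
by_id = {r["id"]: r for r in records}

rec = by_id["ytc_Ugw7pgD4z8LE9Xo0qCV4AaABAg"]
print(rec["policy"])  # regulate
```

Each dimension in the Coding Result table (Responsibility, Reasoning, Policy, Emotion) should equal the corresponding field of the matching JSON record; a mismatch would indicate a parsing or display bug in the tool.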