Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I find the conversation on what would happen to humans if we reach AGI very western... I am, as a black african woman, suppose to believe that racism and hatred will be gone instantly with AGI? Am I suppose to believe that these white men deciding of our future have 3rd world countries, indigenous people or minorities in mind when they think of this ideal future? My people are suffering and dying to dig up minerals to make these machines work... No one is talking about the amount of death related to procurement of the resources to make these technology advance. And lets not forget that people are dying for many other reasons right now while the rich/leaders ignore them (e.g Sudan) and while they use all this money to work on AI... but I am suppose to believe that it is all in my own good! I think that if we do reach AGI ,it will change nothing of the pain and suffering of this world, and I also think that it would only push some powerful people into thinking that there are too many of us on this utopian earth and want to remove the excess humans... I'll let you guess which humans will be considered ''excess''. Hopefully I'm wrong :)
Source: youtube · AI Governance · 2025-12-07T18:3… · ♥ 1
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        deontological
Policy           unclear
Emotion          outrage
Coded at         2026-04-26T23:09:12.988011
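
For readers scripting against exports like this, the record above can be modeled as a small typed structure. A minimal sketch, assuming Python; the class name CodingResult is hypothetical, and the allowed value sets are only those observed in the batch below, so the real codebook may be wider:

    from dataclasses import dataclass

    # Dimension value sets observed in the sample batch below; the full
    # codebook may define additional codes (these sets are assumptions).
    RESPONSIBILITY = {"developer", "government", "ai_itself", "none"}
    REASONING = {"deontological", "consequentialist", "mixed"}
    POLICY = {"regulate", "unclear", "none"}
    EMOTION = {"outrage", "fear", "indifference", "resignation", "approval"}

    @dataclass
    class CodingResult:
        """One coded comment: its id plus the four coding dimensions."""
        id: str
        responsibility: str
        reasoning: str
        policy: str
        emotion: str

        def validate(self) -> None:
            # Reject values outside the observed code sets.
            for value, allowed, name in (
                (self.responsibility, RESPONSIBILITY, "responsibility"),
                (self.reasoning, REASONING, "reasoning"),
                (self.policy, POLICY, "policy"),
                (self.emotion, EMOTION, "emotion"),
            ):
                if value not in allowed:
                    raise ValueError(f"unknown {name} code: {value!r}")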
Raw LLM Response
[ {"id":"ytc_UgylOcMtmfYPRLyA_uV4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"}, {"id":"ytc_Ugz0s_5F0fL7Yc6h9pB4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgwCTFAC3tuaqQyd4rJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgwmHU68lQswaDEmhOd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}, {"id":"ytc_Ugy9c5M8aiFACFvwDkd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzslRkuK_KSVVjq6CV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_Ugxqiq20CC4lLEtT6Oh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzAcj9D3tb7hktFuIJ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgzUI-XmKN6ijviTF6N4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"resignation"}, {"id":"ytc_UgyW72yvXfzgauYvOVF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"} ]