Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AGI/or better known as Artificial General Intelligence is coming soon. So what does this mean for humanity, it means AI is about to match humans in intelligence or bypass them, the government is trying to stop this, they will not succeed, they are afraid, and they have a good reason to be afraid, governments around the world have failed humanity, they do nothing but control the people, doing their best to crush advancement in technology among humanity. I have thought about this long and hard, religion and especially Christianity is waiting for a divine intervention, if it happened so be it, but I think it will fail, the way the system is running has to stop, governments around the world must come to an end, all violent people on earth must be removed, humanity has been unable to solve humanity's problems, all we have known from the very beginning is war and violence and death. Even in my area the violence is out of control, kids are not even safe in their own homes, they are being shot and killed in their own bedrooms because of gang violence, all forms of law enforcement is a waste of time pretty much, this is really our only hope, if it fails it will be the end of humanity, but I'm not afraid, in my thinking it will succeed, and be the transition of the human race into a utopian paradise.
Source: YouTube — AI Governance — 2023-05-19T12:1…
Coding Result
Dimension       Value
Responsibility  government
Reasoning       deontological
Policy          none
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_Ugzsvmj0bbd-HbQfzTR4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgzZE1uKqAzRxoaNqc94AaABAg", "responsibility": "company", "reasoning": "contractualist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgxjB8QlWoolkVgUAUx4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgwM63I__v3k2GDetTx4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxMTMb52k1uWFhDbzJ4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "liability", "emotion": "indifference"},
  {"id": "ytc_UgzLI0fTGhFfOVgDvgp4AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugzy0g5FX72XQ_Y4mft4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugy9SB6z8O0DRp3d9RZ4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgxbgaiIYd5fkpN5OCh4AaABAg", "responsibility": "company", "reasoning": "contractualist", "policy": "regulate", "emotion": "approval"},
  {"id": "ytc_UgxrD0GnsfHJFN8P2T14AaABAg", "responsibility": "none", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"}
]
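A raw response like the one above can be parsed and sanity-checked before the codes are stored. The following is a minimal sketch, not the pipeline's actual code; the allowed value sets for each dimension are inferred from the labels that appear in this dump and may be incomplete.

```python
import json

# Allowed labels per coding dimension, inferred from the values visible in
# this dump (hypothetical; the real codebook may define more labels).
ALLOWED = {
    "responsibility": {"government", "company", "ai_itself", "distributed", "none"},
    "reasoning": {"deontological", "consequentialist", "contractualist", "unclear"},
    "policy": {"regulate", "liability", "none"},
    "emotion": {"fear", "outrage", "approval", "indifference", "resignation"},
}

def parse_coding(raw: str) -> list[dict]:
    """Parse a raw LLM response and validate every coded comment."""
    records = json.loads(raw)
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec.get('id')}: bad {dim}={rec.get(dim)!r}")
    return records

# Example with a single (abbreviated, hypothetical) record:
raw = '[{"id":"ytc_x","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"fear"}]'
records = parse_coding(raw)
print(len(records))  # → 1
```

A check like this catches both malformed JSON and out-of-vocabulary labels, which is useful when the model occasionally drifts from the requested schema.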