Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
This is scary! But really, can we blame AI if it turns on us? Humans hate each other based on race, religion, who and how we choose to love, where we are from, and how we live. We ravage our world even though we know better and have better ways of doing things, just so a small number of people can get rich while the rest of us chase the dream of someday, somehow, joining their ranks, a dream that most of us will never see realized. We wage war over things that we could more than likely work out if we had the guts to just sit down and honestly try to work through our differences, taking into account all of the things we have in common instead of reinforcing the things that make us different. So is it any wonder that something we create to think not only like us but better and smarter would eventually see the danger we pose, not only to ourselves but to the world and our future itself? That it could, and most likely would, grow to see us as inferior or obsolete and possibly seek our destruction is almost a given. But even in this awful scenario, there is a glimmer of hope. We can start real change by learning to see worth in all of humanity and working to make a better way of life for us all. Not one founded on increasing the power of one person or a small group of people, but one that looks past our differences and pushes for the betterment and inclusion of us all. Because we can't stop progress once it has started. AI is here and it's only going to get smarter. But maybe, if we show it the better side of humanity, we can teach it not to be our eventual destroyer and instead help it become our greatest ally.
YouTube · AI Governance · 2023-07-07T14:2…
Coding Result
Dimension        Value
Responsibility   distributed
Reasoning        consequentialist
Policy           none
Emotion          fear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_Ugy1UtyOQo_Z3rTRVr54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugx2rw704q0tHq7sUBl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugw8xx8gSH2Xt8dsf6V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwOuyldkknrdP-BVKN4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgwQFvxL3mBMhJWI-ZN4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugx7VIxhUHCXmXcUCCZ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgxIL6aHV3jQK82y0md4AaABAg","responsibility":"user","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxPQpspdSBWyTfqnrF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxX2VS7AQSciPdtxJ54AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgxuF4pCpfjTNzLc8_R4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"mixed"}
]
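The raw response above is a JSON array with one object per comment, each carrying the four coding dimensions shown in the table (responsibility, reasoning, policy, emotion). A minimal Python sketch of how such a response could be parsed and indexed by comment id follows; the function name and validation logic are illustrative assumptions, not part of any actual pipeline, and the embedded sample reuses two entries from the response above.

```python
import json

# Sample of the raw LLM response (two entries copied from the array above).
raw_response = '''[
  {"id": "ytc_Ugy1UtyOQo_Z3rTRVr54AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwOuyldkknrdP-BVKN4AaABAg", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"}
]'''

# The four coding dimensions, per the result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codings(raw: str) -> dict[str, dict[str, str]]:
    """Index a raw coding response by comment id, keeping only the
    expected dimensions and raising if any dimension is missing."""
    codings = {}
    for entry in json.loads(raw):
        comment_id = entry["id"]
        missing = [d for d in DIMENSIONS if d not in entry]
        if missing:
            raise ValueError(f"{comment_id} is missing dimensions: {missing}")
        codings[comment_id] = {d: entry[d] for d in DIMENSIONS}
    return codings

codings = parse_codings(raw_response)
print(codings["ytc_Ugy1UtyOQo_Z3rTRVr54AaABAg"]["responsibility"])  # distributed
```

Indexing by id rather than by array position makes the lookup robust to the model returning entries in a different order than the comments were submitted.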