Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
hearing 2 smart people have a genuine conversation and using the phrase "human extinction" so many times is so insane i would guess the majority of the people dont actually understand how dangerous and seriously bad AI can and most likely will go if nothing changes. the fact that all the most prominent people in the field have such high percentages of things going wrong and the entire world being destroyed because of our own greed is so terribly sad and needs to be avoided but unfortunately i dont see it happening. by the time that oh shit moment happens it will be far too late it will be embedded into every piece of software and robots and will be completely unstoppable. its literally the most terrifying science fiction movie that will be playing out in real life, the true acceleration hasnt even happened yet but based off of how exponentially its progressing and all the time, money, and resources being poured into it makes me honestly believe no matter how much we fight and protest it is far too late and we are all going to be forced to die in a crazy science experiment gone wrong because of ultra rich greedy people never being satisfied. we can only pray that God reveals himself and saves us all and there is another great flood and fresh restart to this beautiful world us humans clearly dont deserve to live upon if this is the state of the world.
youtube AI Governance 2025-12-05T17:5…
Coding Result
Dimension        Value
---------        -----
Responsibility   unclear
Reasoning        unclear
Policy           unclear
Emotion          unclear
Coded at         2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgxdA2Ur0XatFXpdFY54AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugy8WlHUXCVYPuzjyBV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxkCcRlI6v-uQF4-iR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzHY4MIi7RcoF8zNSR4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugx-0nCBKu0MbBKj6qp4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx_tX490h7gRUzAsgh4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzxZ5dddFOa7zrT7554AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzT9MJHDPV3ZkbVvaJ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyMwYcDlKdCJknBH-B4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgylHLK8QFhlyZs-PXt4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"}
)
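Note that the raw response above is not valid JSON: the array opens with `[` but terminates with `)` instead of `]`, which would make a strict parser fail and plausibly explains why every dimension in the Coding Result fell back to `unclear`. A minimal sketch of a defensive parse is shown below; the function name `parse_coding` and the fallback behavior are illustrative assumptions, not the pipeline's actual code, and only the four dimension field names visible in the response are taken from the source.

```python
import json

# Abbreviated stand-in for the raw model output above; note the stray ")" terminator.
# (The real response contains ten objects; one suffices to demonstrate the failure.)
raw = '[{"id":"ytc_UgxdA2Ur0XatFXpdFY54AaABAg","responsibility":"government"})'

# Dimension names as they appear in the response and the Coding Result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_coding(raw_text: str) -> list[dict]:
    """Parse a batch coding response; on malformed JSON, return a single
    record with every dimension set to 'unclear' (hypothetical fallback)."""
    try:
        return json.loads(raw_text)
    except json.JSONDecodeError:
        return [{dim: "unclear" for dim in DIMENSIONS}]

result = parse_coding(raw)
```

With the malformed input above, `json.loads` raises `JSONDecodeError` at the unexpected `)`, so `result` is a single all-`unclear` record, matching the coded values shown in the table.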