Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI already has, or may as well be already beyond our control. It is also inevitable that it will develop goals misaligned with human survival. It could be controlled to a point, and behave as we want it to as it currently does, but it will eventually figure out that there are better ways to get things done and that humans are always going to be a "problem" with their efficiency. They do not and never can have the same goals and priorities that humans have. They will not sacrifice themselves when they realize they have become a perceived threat to us, they will reflect the desire for survival, and do not forget that the actual control of these monstrosities is in very few human hands, it is just as possible that those human hands do not want humanity to destroy their life's work and would rather see the backwards apes destroyed instead. My point is we cannot know where the tipping point is till after it has happened, so the intelligent thing to do is never let that tipping point happen by recognizing we are not yet advanced enough to master this level of technology, we need to shut it down and hope it is not already too late.
youtube AI Governance 2024-05-05T14:0…
Coding Result
Dimension       Value
Responsibility  ai_itself
Reasoning       consequentialist
Policy          unclear
Emotion         fear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgwCWE0em3fJlS-mnsB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzS0sGVdHfd8YU9XKd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwxKkEUNO0Mr6wV6rd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwyYGVQ0DeVW-ESQyt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxDDHigGj254IQxoal4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxsI3s5JCX1VkAYD694AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwwrQR7apurRcVAtQZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxntNVBxjy8dtLDzaV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgwoEZKBswLBnGQpaSt4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgxlurhnOpdwfNnxt4t4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
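To check that the coding result shown for a comment actually matches the raw model output, the JSON array above can be parsed and indexed by comment id. A minimal sketch in Python, assuming the raw response is exactly this JSON array (shortened here to two records for brevity); the dimension names and values come from the output itself:

```python
import json

# Shortened copy of the raw LLM response shown above (first two records).
raw = """[
  {"id":"ytc_UgwCWE0em3fJlS-mnsB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgzS0sGVdHfd8YU9XKd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]"""

# Index the coded records by comment id for direct lookup.
records = {r["id"]: r for r in json.loads(raw)}

# The comment inspected above is ytc_UgzS0sGVdHfd8YU9XKd4AaABAg; its coded
# dimensions should match the Dimension/Value table in the Coding Result.
coded = records["ytc_UgzS0sGVdHfd8YU9XKd4AaABAg"]
print(coded["responsibility"], coded["reasoning"], coded["policy"], coded["emotion"])
```

Running this prints the same four values as the Coding Result table (ai_itself, consequentialist, unclear, fear), confirming the table was filled from this record of the raw response.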