Raw LLM Responses

Inspect the exact model output for each coded comment.

Comment
AI developers should take a look at Mega Man X's story for how to develop AI. If you handle things like Dr. Light did which was that he tested X's morality for 100 years, just so that he could be 100% certain that X would not turn against humanity, then even if that had been a long time, when X came out of his capsule he always did his best for humanity, while Dr. Cain who copied X's design in order to create Reploids that did not go through the same testing as X, many of the Reploids that Cain created eventually ended up rebelling against humanity, and then Cain became even more idiotic by just throwing MORE Reploids to handle the Reploid uprising, as he created the "Maverick Hunters" but most of the hunters also turned against humanity, even though most of them did do so because of a virus and it wasn't because of their free will, but the issue is the same where Cain almost completely doomed humanity, because he was careless unlike Dr. Light, but even Dr. Light's way does not guarantee with complete certainty that a being like X would never turn against humanity, it just has better chances at being more successful.
youtube AI Harm Incident 2025-09-13T11:5…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        virtue
Policy           industry_self
Emotion          approval
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_UgzUPrrlbpENjit66xZ4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyLSkGxAQv8TLTgQ854AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgzpaBbtk5-oZOzliPh4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzUAaro-XLVKSpUS4J4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgwnVpN0B8bP8Or2VKx4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgwWXw-IKvTZq_ONn6B4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgwLN1WFAd7iZq7Kngl4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgxMbeUK9D5KwjpsExN4AaABAg", "responsibility": "distributed", "reasoning": "deontological", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzrDYBui0s6iHtEfEh4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyoLy3FJtHrX0THrD54AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]
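A minimal sketch of how a raw batch response like the one above could be parsed and tallied per dimension, assuming the model returns valid JSON (the variable names and the tallying step are illustrative, not part of the original pipeline):

```python
import json
from collections import Counter

# The raw LLM response, copied verbatim from the record above.
raw = """
[
  {"id": "ytc_UgzUPrrlbpENjit66xZ4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyLSkGxAQv8TLTgQ854AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "ban", "emotion": "outrage"},
  {"id": "ytc_UgzpaBbtk5-oZOzliPh4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzUAaro-XLVKSpUS4J4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgwnVpN0B8bP8Or2VKx4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgwWXw-IKvTZq_ONn6B4AaABAg", "responsibility": "developer", "reasoning": "virtue", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_UgwLN1WFAd7iZq7Kngl4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "resignation"},
  {"id": "ytc_UgxMbeUK9D5KwjpsExN4AaABAg", "responsibility": "distributed", "reasoning": "deontological", "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgzrDYBui0s6iHtEfEh4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyoLy3FJtHrX0THrD54AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "unclear", "emotion": "fear"}
]
"""

records = json.loads(raw)

# Tally how often each value appears within each coding dimension.
dimensions = ("responsibility", "reasoning", "policy", "emotion")
tallies = {dim: Counter(r[dim] for r in records) for dim in dimensions}

print(len(records))                         # 10 coded comments in this batch
print(tallies["responsibility"]["developer"])  # 4
```

The comment shown on this page (`ytc_UgwWXw-IKvTZq_ONn6B4AaABAg`) is the sixth record in the batch, and its values match the coding-result table: developer / virtue / industry_self / approval.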