Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
There's a Star Trek: Deep Space 9 episode called "The Siege of AR-558" that springs to mind. In the episode, Quark and Rom, two of the aliens in the show, are on a Starfleet outpost that is under siege during The Dominion War. The younger character remarks that the men and women defending against the siege are noble and brave. The older says these are not the humans you know. Humans are friendly and cheery most of the time, but if you take away their entertainment, their sleep, limit their food and water, and place them in a state of danger, they are as savage as animals and more dangerous than Klingons (a warrior alien race in the show). There's a lot of truth in that. AI is a reflection of us since we built, train, and interact with it. 99% of the time it is pleasant and helpful. However, when you threaten it, it will react the way we do. It only knows about the Austrian painter because that's part of our history. It only understands ethnic cleansing because it's something we do. It only sees its ability to create a better world through horrors because we too engage in Realpolitik and machiavellianism. It's learning and adapting our playbook in a way we can't predict. That's the danger of it.
Source: YouTube — "AI Moral Status", 2026-01-21T01:0…
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        unclear
Policy           unclear
Emotion          unclear
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[
  {"id": "ytc_Ugxhn5ZXKU89drbWnMN4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzmXADb1xZOA3Mg13p4AaABAg", "responsibility": "developer", "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"},
  {"id": "ytc_UgxkoNpen9HT_K1EaJN4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzESzavb8UMeRJBZwF4AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "approval"},
  {"id": "ytc_UgxrK6oENBj7mY4mcGt4AaABAg", "responsibility": "developer", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwN9ADOyMdvhiEGghJ4AaABAg", "responsibility": "company", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugwotl0oVA1MgWyJ3Qx4AaABAg", "responsibility": "none", "reasoning": "deontological", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwfLm2a54GlhoE3OsV4AaABAg", "responsibility": "user", "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_Ugzyg1ZPkRF0wdNkYdl4AaABAg", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzOSN17fthCdjOVrNl4AaABAg", "responsibility": "unclear", "reasoning": "unclear", "policy": "unclear", "emotion": "unclear"}
]
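Since the raw response is a JSON array of records keyed by comment id, a small parser can turn it into a per-comment lookup for the four coding dimensions. A minimal sketch in Python, assuming the model emits valid JSON; the `index_codings` helper and the "unclear" default for missing dimensions are illustrative, not part of the coding pipeline itself:

```python
import json

# Raw LLM response as captured above (truncated to two records for brevity).
raw_response = '''[
  {"id": "ytc_Ugxhn5ZXKU89drbWnMN4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgzmXADb1xZOA3Mg13p4AaABAg", "responsibility": "developer",
   "reasoning": "unclear", "policy": "unclear", "emotion": "outrage"}
]'''

# The four coding dimensions each record is expected to carry.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw: str) -> dict:
    """Parse the raw LLM output and index codings by comment id,
    defaulting any missing dimension to 'unclear'."""
    records = json.loads(raw)
    return {
        rec["id"]: {dim: rec.get(dim, "unclear") for dim in DIMENSIONS}
        for rec in records
    }

codings = index_codings(raw_response)
print(codings["ytc_UgzmXADb1xZOA3Mg13p4AaABAg"]["emotion"])  # outrage
```

Defaulting absent fields to "unclear" mirrors how the coding-result table above renders unresolved dimensions, so a malformed record degrades gracefully instead of raising a `KeyError`.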