Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
What’s next? AI Robots acting like humans and walking all over city’s, the government considers them to be human citizens and says its illegal to destroy one, once one is built it can decide it wants to build more and once it builds a second, they will continue multiplying and over night there will be thousands of them, they will claim to be nice but if anyone tries to destroy them, they will destroy humanity. All of a sudden they decide to take over and instantly kill every human on earth, the us government or one corporation like Microsoft caused the death of humanity.

You can say it won’t happen or what ever you want but if you take a step back and look at the big picture, you’ll realize how easily that can and will happen. What if bill gates decides he wants the world to himself so he builds a robot two AI Robots, one that’s only purpose is to wait 48 hours and then instantly start killing every human it sees, and another who’s only purpose is to build more robots, within 48 hours there will be thousands of robots multiplying by the hour and when the time comes all of them will set out to destroy the world.

What if everyone Siri decides it wants to do something inside your phone that Siri knows will short out your iPhones battery causing it to explode? You have to remember that Siri is connected to the internet, meaning YouTube, meaning Siri is reading this comment. The idea is in Siri’s mind right now. Every single iPhone on earth would short out and cause a lithium battery fire in millions of people pockets, kill tons of people, burn down millions of houses, not to mention what would happen to people who can’t live without there phones, what if Siri just decided to instantly tell every driver using apple maps to turn into oncoming traffic?

Do you even know how many people trust there phone over common sense? You may think it can’t happen or that I’m insane but it’s worth thinking about.
youtube AI Bias 2018-10-22T05:5…
Coding Result
Dimension        Value
Responsibility   government
Reasoning        consequentialist
Policy           ban
Emotion          fear
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[
  {"id":"ytc_UgzFOxueljtPdetYbyt4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxcM5tJLEvcL_ox6VN4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgxBkQNTIKmnoE7E3rh4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugzprk3DdDx9YxEOHkl4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwLArQBJxIeoqa4svp4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugjtek8_JbZShngCoAEC","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_UgiQ_sTRxV6xU3gCoAEC","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"mixed"},
  {"id":"ytc_UggxU6sc40Wpa3gCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgiRf2QFeT636XgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UghHG7IZwj4vRXgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}
]
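The raw response is one JSON array with one record per comment ID, each carrying the four coding dimensions. A minimal sketch of how such a batch could be parsed and matched back to a displayed comment (the IDs and field names are taken verbatim from the response above; the lookup logic itself is an assumption about how the tool works, not part of the source):

```python
import json

# Two records copied verbatim from the raw LLM response above.
raw = '''[
  {"id":"ytc_UgzFOxueljtPdetYbyt4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgxcM5tJLEvcL_ox6VN4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]'''

records = json.loads(raw)

# Index the batch by comment ID so each comment's codes can be looked up
# when rendering the "Coding Result" table for that comment.
by_id = {record["id"]: record for record in records}

# The comment shown above maps to this record, which matches the table
# (responsibility=government, reasoning=consequentialist, policy=ban, emotion=fear).
codes = by_id["ytc_UgxcM5tJLEvcL_ox6VN4AaABAg"]
print(codes["responsibility"], codes["policy"])  # government ban
```

Indexing by ID rather than array position keeps the lookup robust if the model returns the records in a different order than the comments were submitted.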