Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Actually do some research into the extreme danger of AI research. For example, in this context it may sound like sci-fi, but it's entirely possible for an AI system to determine that a method like continuously drugging someone is the best way to keep them happy, because it could learn to take advantage of the brain's reward pathway and dopamine system. It doesn't have to be programmed to do this: once we go from AGI (artificial general intelligence, as intelligent as humans) to ASI (artificial superintelligence, more intelligent than any existing person, possibly more intelligent than all the world's knowledge combined), the computer can program itself to be as efficient as it can possibly be. Take into account that every computer program contains bugs during development, because of human error. Couple that with the claim that AGI takes only an hour to become ASI (meaning there would be only one hour to secure all safeguards, be absolutely sure there are no bugs, and fix any that exist, which anyone who knows programming understands is basically impossible), and the result is that the system would almost certainly be unstable. There are many more reasons why developing AI is a bad idea, so many that I couldn't explain them all without writing a 30-page essay. So I suggest looking at what people such as Bill Gates, Elon Musk, and Stephen Hawking (by many accounts the most intelligent recorded man alive) have to say on the issue. I'd also highly recommend reading this article if you're at all interested in the topic: http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html
Source: YouTube · Video: AI Moral Status · Posted: 2017-03-21T21:5… · ♥ 2
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-27T06:24:53.388235
Raw LLM Response
[ {"id":"ytc_UgywSWFaUO62WmIow254AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxWLUfO3T59NOQI49Z4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgxBaYy4u9QKjRZDi494AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgyFl2iqPI3vDaryZfJ4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"none","emotion":"mixed"}, {"id":"ytc_Ugw-x_0c2Ukqx6ea1FB4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}, {"id":"ytc_UgzsJQDO3apYlha7yHJ4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"fear"}, {"id":"ytc_UgwZcqNSyZFSho8VRaJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}, {"id":"ytc_Ughyo8YeCn9ePHgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}, {"id":"ytc_UgiiEZ1wRuH6OHgCoAEC","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"}, {"id":"ytc_UghHUTpBsjUQl3gCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"} ]