Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
I would not trust this guy with the future of SafeAI. At least its open source, but can you imagine, "woahhh I built skynet, lets give it internet access, so it can launch Nukes!" Do these engineers not read any Asimov at all?

Did any one else read about the Facebook AI chat bot experiment? They decided to let two of their most advanced chat bots talk via Facebook chat. They did not give them any rules, like only use English. A few minutes into the experiment, the researchers noticed they could not make out all the worlds, within a few minutes the exchange rate sped up to fast to read for a human. When looked at the log they realized the two bots had developed their own language, something they were not designed to do, they also developed the language extremely fast to a point they had no idea what they were saying to each other. Engineers got spooked and shut them down. They were not designed to obscure their conversation from humans but had no rules in place to prevent it.

If two advanced chat bots can start to talk in their own evolving language, from mistakenly not limiting their code. Then consider how often we have to fix software bugs with patches. What are the odds we develop an unhinged AI, loose control before we can patch it, were bound to make a code mistake that hyperintelligent AI can exploit in a way we can't imagine, just like the Facebook chat bots.

We need to proceed cautiously like elon and many other not so famous people say. If we replace people what does AI need us for. We are engineering our own extinction
Source: youtube · AI Moral Status · 2021-08-31T23:4…
Coding Result
Dimension       Value
Responsibility  developer
Reasoning       consequentialist
Policy          regulate
Emotion         fear
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id":"ytc_UgyW0Ll2jWoD99_0s2F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzoK_eH6nIz-4nKFh14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgwJYp1EAgDoYWMqXXR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgzxqumEJxQxLMw_eEt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugyj5DgjzBQpMoUgwm94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwZKuXem0gG6gJrI1F4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwsATJX8VWJXfEz53d4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzPT1GtXN_CcWpu1Qd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgylrXchlcnfQ1pbo2l4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzL1_4gHfWDKdP3AeZ4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]
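The raw response above is a JSON array of per-comment records, one per coded comment. A minimal sketch of how such a response could be parsed and validated in Python, assuming the four dimensions and the value sets visible in the records above form closed vocabularies (the function name `validate_codes` and the `ALLOWED` table are illustrative, not part of the actual pipeline):

```python
import json

# Allowed values per dimension, as observed in the raw response above
# (assumed to be the full closed sets used by the coding prompt).
ALLOWED = {
    "responsibility": {"none", "company", "ai_itself", "developer", "unclear"},
    "reasoning": {"unclear", "consequentialist", "deontological", "mixed"},
    "policy": {"none", "ban", "regulate"},
    "emotion": {"indifference", "fear", "outrage", "mixed"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only records whose values
    all fall within the expected vocabularies."""
    records = json.loads(raw)
    return [
        rec for rec in records
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items())
    ]

# Example: the record that produced the coding result shown above.
sample = ('[{"id":"ytc_UgzPT1GtXN_CcWpu1Qd4AaABAg",'
          '"responsibility":"developer","reasoning":"consequentialist",'
          '"policy":"regulate","emotion":"fear"}]')
print(validate_codes(sample))
```

Validating against a closed vocabulary like this catches the common failure mode where the model invents an out-of-schema label; such records are dropped rather than silently stored.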