Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The idea that they will only have what they are programmed to have, and thus completely in human control is a bit of a narrow minded idea, because already there are examples of pseudo smart programs developing themselves, such as Google Translate. The engineers an programmers remade it, but didn't predict the fact that the new version of it would create its own original language, which it then uses as a proxy, whenever it's translating between two languages that it hasn't done before. It turned out to be a really effective way of conserving as much information as possible in the translation, something that a simple dictionary translation can't do. It's a simple example of a program developing itself. Does this mean that AI will make emotions for itself? No, but what it does heavily suggest, is that we won't be able to predict what will happen. If an AI comes to the conclusion that to accomplish some given task, the most efficient method is to program itself with such concepts, it will do so, and after that, we'll have re-asses the idea that it's just a tool.
Source: youtube · Video: AI Moral Status · Posted: 2017-02-24T17:2…
Coding Result
Dimension        Value
Responsibility   developer
Reasoning        consequentialist
Policy           regulate
Emotion          fear
Coded at         2026-04-27T06:26:44.938723
Raw LLM Response
[ {"id":"ytc_Ugi0hj0S4tOJK3gCoAEC","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"approval"}, {"id":"ytc_Ughn2l5l5nUY93gCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UghjyLhFY0N9d3gCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugj2Jo_uYDf2v3gCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}, {"id":"ytc_UgjWcRsFfwSE13gCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}, {"id":"ytc_UggSkZsWg39NxXgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugj0QLN4cIFMF3gCoAEC","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"}, {"id":"ytc_UggPezFG5S3VS3gCoAEC","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_Ugj22OTCNxaAhHgCoAEC","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"unclear"}, {"id":"ytc_Ugg7RpJojOWA93gCoAEC","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"} ]