Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
@zingorideslegocreations3729 Most AI systems, including military ones, are programmed with reward functions that can be "cheated". Most AI's are "paperclip maximizers" in that way, as they try to get as much reward as possible. It's exactly this problem that keeps AI development constrained at the moment, as smart people are trying to figure out how to align them with universal goods or keep the reigns in our hands. And there's another problem there, who's hands are those who will hold the reigns. We humans are not immune to similar thinking, after all shortcuts are the reason we use tools to achieve our goals, and we even use other people to reach those same goals, and task them to use tools to do so faster. Why wouldn't an AI do the same, even if it was merely a digital parrot? After all I think that AI is just as good as collective knowledge of humans lets it be. It could evolve, given ability to do so, for sure, and it could gather more outside information, but it could still be very dangerous entity to interact with. We are not in existential danger right now, sure AI can wreak havok in social media but that is not what I'm concerned about. Social media is just social media and encrypted messages will still be able to reach their destination reliably. The problem comes when this technology is implemented everywhere and it "malfunctions" or has some other emergent property when all of it can communicate through the internet. At this point it doesn't matter if it is conscious or not, if it has some grand plan or if it's just paperclip maximizer type thing. It would be extremely damaging.
Platform: youtube · Video: AI Moral Status · Posted: 2023-08-21T17:2… · ♥ 1
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  developer
Reasoning       consequentialist
Policy          liability
Emotion         indifference
Coded at        2026-04-27T06:24:53.388235
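
Each coding result assigns the comment one value on each of four dimensions. A minimal sketch of that record type follows, in Python; the code books here are an assumption inferred only from the values visible on this page, so the real category sets may be larger:

```python
from dataclasses import dataclass

# Assumed code books, inferred from values visible on this page;
# the actual code books may contain additional categories.
RESPONSIBILITY = {"developer", "user", "ai_itself", "distributed", "none", "unclear"}
REASONING = {"consequentialist", "deontological", "virtue", "unclear"}
POLICY = {"liability", "regulate", "none", "unclear"}
EMOTION = {"indifference", "fear", "approval", "resignation", "mixed"}

@dataclass
class CodingResult:
    """One coded comment: the source comment id plus four dimensions."""
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> None:
        """Raise if any dimension falls outside the assumed code books."""
        for name, value, allowed in [
            ("responsibility", self.responsibility, RESPONSIBILITY),
            ("reasoning", self.reasoning, REASONING),
            ("policy", self.policy, POLICY),
            ("emotion", self.emotion, EMOTION),
        ]:
            if value not in allowed:
                raise ValueError(f"unexpected {name} code: {value!r}")
```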
Raw LLM Response
[{"id":"ytr_UgxVsyyAAvCY45bh-AN4AaABAg.9tevz5lLQ6f9tf0mAK5nlj","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},{"id":"ytr_Ugynxjjbs5dzR2YxAOd4AaABAg.9terRjfYAcO9tg7V4gbVvb","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"fear"},{"id":"ytr_UgxWO7pjoCcNbzlKI4t4AaABAg.9teqquaON8J9tfgAexvizE","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},{"id":"ytr_UgxWO7pjoCcNbzlKI4t4AaABAg.9teqquaON8J9tgGHiytZBB","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},{"id":"ytr_UgxWO7pjoCcNbzlKI4t4AaABAg.9teqquaON8J9tgPjsRUpt0","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},{"id":"ytr_UgzmC20FGYI5Xqs31SB4AaABAg.9teq--gkWr99tgFf5BBO7t","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"ytr_UgxeFyQlh9DyOh7a6B14AaABAg.9tekqrIyogRA9VCTqB_k2T","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},{"id":"ytr_UgxuXE6SoqVhL8x9ltV4AaABAg.9tebtebDZ0Y9tgs3jJODkw","responsibility":"unclear","reasoning":"virtue","policy":"unclear","emotion":"indifference"},{"id":"ytr_Ugz5f30YYziqxVnBwPZ4AaABAg.9teaiIsCx9Y9telClxAoql","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},{"id":"ytr_UgzkNqEKcvQ-Cb-wty14AaABAg.9teYdPQ4lM49tep_1gj2o2","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"resignation"}]