Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Agree. Op makes the mistake of thinking that the AI we have today, is similar to what we would see in popculture, and that the AI we have today is capable of making choices. Most of what people call AI nowadays is simply just an algorithm in which you can give certain inputs to get certain outcomes.

For example youtube could have a censoring algorithm (i'm not sure this is the case but for the purpose of the example), that is searching for a given set of parameters, e.g. if a person comments a curseword, and the comment is heavily downvoted, the AI will delete the comment. It's clear to see that the AI isn't making any choices here, the choices where all made beforehand by the creator of said AI. It is my belief that if you don't like the results of an AI's operations, what you don't like is the creator of the AI's opinion on how this AI should operate.

Edit: Sometimes AI can be too complex to be predictable simply because we don’t have enough computing power to brute force run through every input you can give the AI in order to be able to see every single possible outcome of these inputs. Therefore it’s not completely fair to say that the outcomes of the AI is what the programmer intended it to do, since there is cases where you simply can’t 100% know. The point is that there is no choice anywhere in any current AI system, given enough computing power, the inputs to the system and the framework of the system you can always 100% predict the outcome.
reddit · AI Moral Status · 1597022711.0 · ♥ 5
Coding Result
Dimension       Value
Responsibility  none
Reasoning       unclear
Policy          none
Emotion         indifference
Coded at        2026-04-25T08:33:43.502452
Raw LLM Response
[
  {"id": "rdc_kykw5yc", "responsibility": "company",   "reasoning": "consequentialist", "policy": "regulate",  "emotion": "approval"},
  {"id": "rdc_kyltinv", "responsibility": "developer", "reasoning": "deontological",    "policy": "liability", "emotion": "outrage"},
  {"id": "rdc_g0y7v05", "responsibility": "none",      "reasoning": "consequentialist", "policy": "none",      "emotion": "indifference"},
  {"id": "rdc_g10p5cs", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "regulate",  "emotion": "fear"},
  {"id": "rdc_g0ys5vt", "responsibility": "developer", "reasoning": "consequentialist", "policy": "regulate",  "emotion": "fear"}
]
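A raw response like the one above is a JSON array of per-comment codes, one object per comment id. A minimal sketch of how such a response might be parsed and indexed by id — the function name `codes_by_id`, the `DIMENSIONS` tuple, and the single-record sample string are illustrative assumptions, not the pipeline's actual code:

```python
import json

# One record from a raw LLM coding response (sample values only).
raw = '''[
  {"id": "rdc_g0y7v05", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none",
   "emotion": "indifference"}
]'''

# The four coding dimensions shown in the result table above.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def codes_by_id(raw_response: str) -> dict:
    """Parse a raw JSON response and index each record's codes by comment id."""
    records = json.loads(raw_response)
    return {
        rec["id"]: {dim: rec.get(dim) for dim in DIMENSIONS}
        for rec in records
    }

codes = codes_by_id(raw)
print(codes["rdc_g0y7v05"]["emotion"])  # indifference
```

Indexing by id lets the displayed Coding Result for a comment be looked up directly from the batch response; `rec.get(dim)` returns `None` rather than raising if the model omits a dimension.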