Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
You know what’s scary, the more we dive into A.I sentience, the more we get confused about ourselves. These emotions that we feel, they feel ‘real’ right? Yet they are simply chemical processes that occur in our brains, nothing more nothing less. Even scarier, the way our whole brain works, through the firing of neurons, can exist in two states: either a neuron fires or not. That is awfully similar to the binary system that computers employ: 1 or 0. Let’s imagine a scenario where we have a sentient and sapient A.I, and we ask them, “How do I know that your emotions are real?” what if their response was, “How do you know yours are?” what the hell do we say back... “I know because I just do.” Moreover, how do we know sentience is not something exclusive to humans, rather a stage in the evolution of organisms. Instead of thinking that consciousness is a supernatural material (perhaps a soul), we should follow what is most logical and most scientific: that consciousness is a result, perhaps even a complication, of a natural process. It’s also important to note that before the ‘me’ writes this, my brain has already thought of it... am I simply following a ‘programmed’, for lack of better word, path laid out before me... or is this brain that we think of merely ‘us’ trying to sensationalize our sentience, and elevate it to a degree where we look at our real selves, the brain, as nothing but a shell we possess. We can’t even try to participate in a debate about A.I sentience, since we barely possess information about our own. Most people would simply ignore this comment because I have asked them to doubt their own sentience, but you know what’s weird: this whole passage was written by a machine... a biological machine that we have come to name... “Human”.
youtube · AI Moral Status · 2018-07-14T23:3… · ♥ 165
Coding Result
Dimension        Value
Responsibility   unclear
Reasoning        unclear
Policy           unclear
Emotion          unclear
Coded at         2026-04-27T06:24:59.937377
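
Each row of this table corresponds to one field of a single record in the batch response below. For anyone reproducing the pipeline, the following is a minimal, hypothetical Python model of such a record; the allowed value sets are inferred from the raw responses shown on this page, not taken from a published codebook, so treat them as assumptions.

```python
from dataclasses import dataclass

# Value sets inferred from the batch response on this page; the full
# codebook may contain additional categories.
RESPONSIBILITY = {"none", "developer", "ai_itself", "unclear"}
REASONING = {"deontological", "consequentialist", "contractualist", "unclear"}
POLICY = {"regulate", "ban", "none", "unclear"}
EMOTION = {"fear", "indifference", "approval", "outrage", "unclear"}

@dataclass
class CodingResult:
    """One coded comment, mirroring the Dimension/Value table above."""
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def is_valid(self) -> bool:
        """Check every dimension against its inferred value set."""
        return (
            self.responsibility in RESPONSIBILITY
            and self.reasoning in REASONING
            and self.policy in POLICY
            and self.emotion in EMOTION
        )
```

Note that "unclear" is modeled as an ordinary category rather than a null, matching how it appears both in the table above and in the raw response.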
Raw LLM Response
[{"id":"ytc_Ugx7YznFYEUKkMe1iBd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}, {"id":"ytc_UgzahW5WKawAqoKCB7t4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"}, {"id":"ytc_UgwRHWKvJT8IhKO-_qF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxbgNKJMW57e2gSy1B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"}, {"id":"ytc_UgzYCJpRzmrEA7SN_ll4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"}, {"id":"ytc_Ugw7dI6ViiYSCEbnzft4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgzinrD6hweefSHzu-x4AaABAg","responsibility":"none","reasoning":"contractualist","policy":"regulate","emotion":"approval"}, {"id":"ytc_Ugy9MR1jF5P4ZT51IHR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"}, {"id":"ytc_Ugy75Vkh-6d8zWFeqFZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}, {"id":"ytc_UgxnZ11_1Tt2abQ2lgh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"})