Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
AI (or at least LLM) are interesting because people believe it's intelligent, that it thinks and can conceptualize things. It doesn't, it's just a prediction algorithm that predicts the next word in the sentence, it uses a random seed to determine which option it picks using a probability list of each word (or word part). It has recognized patterns in language to where it can assume that with "x" word the next word will have a 50% chance of being y, 30% chance of being z, and 20% chance of being w. The seed essentially rolls a dice to determine which word it uses and then repeats the process for the next word. This is why it doesn't have object permanence, if it decides to tell a story about dan, the only reason it remembers dan is because the word is part of the prompt, thus boosting the likelihood of it being used. It just sort of understands that Dan is a word that gets lotted in where "names" are usually slotted, so if its writing a complex story it will likely want dan to talk to dan, since dan is the name being said. It doesn't understand a damn thing about dan, it just figures that dan is a character that will recur, and all the things currently being said 'about' dan, are things it can repeat and use a thesaurus for. It doesn't stop people from thinking the thing is alive.
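The sampling process this comment describes — a probability list over candidate next tokens, with a seeded "dice roll" picking one — can be sketched in a few lines. This is an illustrative toy, not the coding pipeline's code; the distribution below reuses the comment's own 50/30/20 example.

```python
import random

# Toy next-token distribution, mirroring the comment's example:
# after word "x", the model assigns y: 50%, z: 30%, w: 20%.
NEXT_TOKEN_PROBS = {"y": 0.5, "z": 0.3, "w": 0.2}

def sample_next_token(probs, seed):
    """Pick one token at random, weighted by probability.

    The seed makes the 'dice roll' reproducible: the same seed
    always selects the same token from the same distribution.
    """
    rng = random.Random(seed)
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

token = sample_next_token(NEXT_TOKEN_PROBS, seed=42)
print(token)  # one of "y", "z", or "w"
```

Repeating the call with the same seed returns the same token, which is why fixing the seed makes generation deterministic even though the process is random.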
YouTube · AI Moral Status · 2024-02-04T07:3…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  none
Reasoning       mixed
Policy          none
Emotion         indifference
Coded at        2026-04-27T06:24:59.937377
Raw LLM Response
[
  {"id": "ytc_UgztMCgo0O-90H4pBnF4AaABAg", "responsibility": "people", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgzXaTwnqXpkcuBdkP14AaABAg", "responsibility": "none", "reasoning": "mixed", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugy77sIO9ubgk36i4A14AaABAg", "responsibility": "ai_itself", "reasoning": "unclear", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugzp86Z9ySm36pu2JCd4AaABAg", "responsibility": "developer", "reasoning": "consequentialist", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwUw7SBe9JDUDki4ql4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgyFIwdlxw9j6ZNePLh4AaABAg", "responsibility": "distributed", "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgzI3vEaBm6JDR3sU094AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwKecYfly-pERv3rOB4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "mixed"},
  {"id": "ytc_UgyVUhX-VppzNtM2DH14AaABAg", "responsibility": "developer", "reasoning": "deontological", "policy": "none", "emotion": "fear"},
  {"id": "ytc_UgwD7uclchqCXrLAK754AaABAg", "responsibility": "ai_itself", "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"}
]
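Since the model returns codings for a whole batch in one JSON array, a pipeline has to pull out the record whose `id` matches the comment being displayed. A minimal sketch of that lookup is below; the `find_coding` helper name is illustrative, and the raw string is shortened to the one record that corresponds to this page's comment.

```python
import json

# Abbreviated raw response: just the record for this page's comment.
RAW_RESPONSE = '''[
  {"id": "ytc_UgzXaTwnqXpkcuBdkP14AaABAg",
   "responsibility": "none", "reasoning": "mixed",
   "policy": "none", "emotion": "indifference"}
]'''

def find_coding(raw, comment_id):
    """Parse the batch JSON and return the record matching comment_id.

    Returns None if the model's response has no record for that id,
    which a real pipeline would want to flag rather than crash on.
    """
    records = json.loads(raw)
    return next((r for r in records if r["id"] == comment_id), None)

record = find_coding(RAW_RESPONSE, "ytc_UgzXaTwnqXpkcuBdkP14AaABAg")
print(record["emotion"])  # indifference
```

Matching on `id` rather than array position guards against the model reordering or dropping entries, a common failure mode when coding comments in batches.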