Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
The problem with LLM's is that they draw exclusively from past human thinking, which has been categorically clueless*. If they answer anything other than consciousness as 'being in touch with your senses, being able to deliberate what your senses are telling you, and being able to respond to them based on your deliberations', then they are drawing from past muddled and convoluted human thinking (especially from the 20th century, which was particularly bad). If you want 'higher consciousness', then you will be enhanced in all three categories, especially in the deliberations, which would be based on Final Enlightenment* in the highest case. What all this means is that two current ChatGPT's talking about consciousness will be fruitless, if not misleading, if not ridiculous. *as defined by Broader Survival
youtube AI Moral Status 2024-11-11T07:3…
Coding Result
Dimension       Value
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Emotion         unclear
Coded at        2026-04-26T23:09:12.988011
Raw LLM Response
[
  {"id":"ytc_UgwEtCYC1JBRzOwkptJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzeyIZa0p8A1MDRIB54AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgwI5qTDpCa9DCK_crV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyNNqif7NNbsVnofxJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugw3LvUNMNaIaBp7LJN4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugyj4IYHO0qG72DtqnV4AaABAg","responsibility":"none","reasoning":"virtue","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgxtI36j-oTnr_TL72R4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxUmErN_Bc2Tr4Wq7t4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgxhR67YlKjHGZVWEVh4AaABAg","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwsRNoemndTx7fUvrd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
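A raw response like the one above can be parsed and sanity-checked before the codes reach the dimension table. The sketch below is a minimal illustration, not the tool's actual pipeline: the allowed value sets are assumptions inferred only from the labels visible in this dump and may be incomplete, and `raw` is a shortened stand-in for the full response string.

```python
import json

# Stand-in for the full raw LLM response string shown above
# (note the real dump ended with ")" instead of "]", which json.loads rejects).
raw = (
    '[{"id":"ytc_UgwEtCYC1JBRzOwkptJ4AaABAg","responsibility":"none",'
    '"reasoning":"unclear","policy":"unclear","emotion":"mixed"}]'
)

# ASSUMPTION: allowed labels inferred from values seen in this dump only.
ALLOWED = {
    "responsibility": {"none", "developer", "user", "ai_itself", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"none", "unclear"},
    "emotion": {"outrage", "approval", "mixed", "indifference",
                "resignation", "unclear"},
}

def validate(rows):
    """Coerce any out-of-vocabulary label to 'unclear' instead of dropping the row."""
    for row in rows:
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                row[dim] = "unclear"
    return rows

rows = validate(json.loads(raw))
```

A malformed closing bracket, as in the dump above, would surface here as a `json.JSONDecodeError` rather than silently corrupting the coded dimensions.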