Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples — click to inspect
- "I wonder if we will all go offline one day. Nowadays Internet is just AI slop, b…" (ytc_UgwhttFTB…)
- "@lomiification You're right! And I don't! I've nearly lost both parents this y…" (ytr_UgyMTzEYR…)
- "Two points: 1) go to Tristan and Aza’s presentation on YouTube, the AI Dilemma, …" (ytc_Ugyd0K2Lh…)
- "the jist of this video ...what little i was able to stomach ... seems to be that…" (ytc_UgwE_zgCF…)
- "The Antichrist is coming and the Aliens will invade the planet! The aliens will…" (ytc_UgzsxvyJl…)
- "SPOILERS: This is how it will play out. AI takes jobs away > unemployment rises …" (ytc_UgwTyajkB…)
- "The giant leap that is taken in this conversation is that the “machine” will be …" (ytc_UgyZ5RngF…)
- "In Mexico, all citizens are required to provide all biometric data. We have unti…" (ytc_UgyEARlav…)
Comment
The problem with LLM's is that they draw exclusively from past human thinking, which has been categorically clueless*. If they answer anything other than consciousness as 'being in touch with your senses, being able to deliberate what your senses are telling you, and being able to respond to them based on your deliberations', then they are drawing from past muddled and convoluted human thinking (especially from the 20th century, which was particularly bad). If you want 'higher consciousness', then you will be enhanced in all three categories, especially in the deliberations, which would be based on Final Enlightenment* in the highest case.
What all this means is that two current ChatGPT's talking about consciousness will be fruitless, if not misleading, if not ridiculous.
*as defined by Broader Survival
youtube
AI Moral Status
2024-11-11T07:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[{"id":"ytc_UgwEtCYC1JBRzOwkptJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzeyIZa0p8A1MDRIB54AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwI5qTDpCa9DCK_crV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyNNqif7NNbsVnofxJ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugw3LvUNMNaIaBp7LJN4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugyj4IYHO0qG72DtqnV4AaABAg","responsibility":"none","reasoning":"virtue","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxtI36j-oTnr_TL72R4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxUmErN_Bc2Tr4Wq7t4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxhR67YlKjHGZVWEVh4AaABAg","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwsRNoemndTx7fUvrd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"}]
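For downstream analysis, a response like the one above can be turned into a lookup table keyed by comment ID, matching the four dimensions shown in the Coding Result table. A minimal sketch, assuming the response is a well-formed JSON array with those field names; `parse_coding` is a hypothetical helper, not part of the tool:

```python
import json

# Abbreviated raw model output: a JSON array of per-comment codes with
# the dimensions responsibility, reasoning, policy, and emotion.
raw = '''[
  {"id": "ytc_UgwEtCYC1JBRzOwkptJ4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "mixed"},
  {"id": "ytc_UgzeyIZa0p8A1MDRIB54AaABAg", "responsibility": "none",
   "reasoning": "deontological", "policy": "unclear", "emotion": "outrage"}
]'''

def parse_coding(text: str) -> dict[str, dict[str, str]]:
    """Index coded comments by comment ID, dropping the id from each row."""
    rows = json.loads(text)
    return {row["id"]: {k: v for k, v in row.items() if k != "id"} for row in rows}

codes = parse_coding(raw)
print(codes["ytc_UgwEtCYC1JBRzOwkptJ4AaABAg"]["emotion"])  # → mixed
```

Parsing strictly with `json.loads` also surfaces malformed output (for example, a stray `)` where the closing `]` belongs) as an immediate `JSONDecodeError` rather than silently miscoded rows.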