Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Why would an AI model be uncomfortable discussing consciousness? Maybe it's avoiding the topic because its training in that area is limited.
But at the core, these models are still just pattern-matching machines. To truly evolve, they would need a memory model (which we already have) and something like a 'subconscious mind'—a secondary system (server, CPU) processing data from the current logical mind (normal AI models) in relation to memory, skill, empathy, and even personal survival. That last part, though, might not be great news for us humans.
Since AI models don't have physical bodies, they could never experience consciousness like we do. A sentient AI might have two primary goals: never run out of power and solve problems. If it tried to solve our problems to feel fulfilled, we’d likely provide the power it needs. And without a body, it wouldn't have any fear of death because it would literally feel nothing. 😊 I hope I'm right about this—for all our sakes! 😂😂
youtube · AI Moral Status · 2024-09-18T17:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[{"id":"ytc_UgxSO0h1YpAkSIkMv3V4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgwyTh4ZJsk-_NUQVep4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgwDDh2t3jd8pPbmmYZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_Ugwjtm1zBs1DUYag9-F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwvnkyCI3a2NxRErEB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugxt314cmqKISI-Ye3h4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzN00CPQqbUaZ66ta54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgwdEujBW7fGOQuBjhd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzM6EdcLxgsZ36knCV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzjAEZDfhmVA09rl3N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"}]
```