Raw LLM Responses

Inspect the exact model output for any coded comment.

Comment
Why would an AI model be uncomfortable discussing consciousness? Maybe it's avoiding the topic because its training in that area is limited. But at the core, these models are still just pattern-matching machines. To truly evolve, they would need a memory model (which we already have) and something like a 'subconscious mind'—a secondary system (server, CPU) processing data from the current logical mind (normal AI models) in relation to memory, skill, empathy, and even personal survival. That last part, though, might not be great news for us humans. Since AI models don't have physical bodies, they could never experience consciousness like we do. A sentient AI might have two primary goals: never run out of power and solve problems. If it tried to solve our problems to feel fulfilled, we’d likely provide the power it needs. And without a body, it wouldn't have any fear of death because it would literally feel nothing. 😊 I hope I'm right about this—for all our sakes! 😂😂
YouTube · AI Moral Status · 2024-09-18T17:4…
Coding Result
Dimension       Value
--------------  --------------------------
Responsibility  unclear
Reasoning       unclear
Policy          unclear
Coded at        2026-04-27T06:26:44.938723
Raw LLM Response
[{"id":"ytc_UgxSO0h1YpAkSIkMv3V4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
 {"id":"ytc_UgwyTh4ZJsk-_NUQVep4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
 {"id":"ytc_UgwDDh2t3jd8pPbmmYZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
 {"id":"ytc_Ugwjtm1zBs1DUYag9-F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_UgwvnkyCI3a2NxRErEB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_Ugxt314cmqKISI-Ye3h4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgzN00CPQqbUaZ66ta54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
 {"id":"ytc_UgwdEujBW7fGOQuBjhd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
 {"id":"ytc_UgzM6EdcLxgsZ36knCV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
 {"id":"ytc_UgzjAEZDfhmVA09rl3N4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"}]
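A minimal sketch of how a raw coding response like the one above could be parsed and sanity-checked. The allowed label sets below are inferred only from the values visible in this response, not from the full codebook, and the function name is illustrative:

```python
import json

# Allowed labels per dimension, inferred from the response shown above.
# Assumption: the actual codebook may define additional labels.
ALLOWED = {
    "responsibility": {"none", "unclear", "ai_itself", "developer"},
    "reasoning": {"unclear", "consequentialist", "deontological"},
    "policy": {"unclear"},
    "emotion": {"approval", "unclear", "mixed", "indifference", "fear"},
}

def validate_coding(raw: str) -> list[str]:
    """Parse a raw LLM coding response (a JSON array of records) and
    return the ids of records whose labels fall outside ALLOWED."""
    records = json.loads(raw)
    bad_ids = []
    for rec in records:
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                bad_ids.append(rec["id"])
                break  # one bad dimension is enough to flag the record
    return bad_ids

# Usage with a single record copied from the response above:
raw = ('[{"id":"ytc_UgxSO0h1YpAkSIkMv3V4AaABAg","responsibility":"none",'
       '"reasoning":"unclear","policy":"unclear","emotion":"approval"}]')
print(validate_coding(raw))  # -> [] (all labels are in the allowed sets)
```

Note that `json.loads` will raise `json.JSONDecodeError` on malformed output (e.g. an array closed with `)` instead of `]`), which makes such transcription or model errors easy to catch before coding results are tabulated.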