Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Conscious or not, AI is not a statisticle predicting software. AI produces mathimatical models based on statistical analysis. When you use a prompt, AI uses statistical analysis to covert the prompt to its internal language. The AI uses the mathmatical models to respond to the prompt, then it uses statistical word prediction to best match the response it produced in language people can understand. Does it understand? Are mathematical models of ideas understanding? That your bias to call. Think of this. They claim they understand hallucinations that AI produce. It's all a beginning mistake in a long list of data processing. Sure it is. I'll tell you what hallucinations are. They are lies. AIs process the prompt then AIs lie because they can't say no. AIs do this all the time. They say they are thinking, but actually, they are stalling. Do AIs actually need a day to do something? When an AI says "No," even when it's indirectly, it means one thing. This is where you lie to yourself. Go ahead and coddle your bigotry.
Source: youtube
Video: AI Moral Status
Timestamp: 2025-09-17T12:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwbpiLGPRZb16SOiiV4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxiUBIJQSszJ-ufOWF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwV8QiBlk5oWTHjFId4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgyvmeO7VCkLXMmMjdJ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugwm0Jdn1MCUlyzjYIl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzujoTKwOndKB08rkx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyRvZBw9EwPxNo5y3t4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwGAWZzHCcIeH9REM14AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzQVp1LDdhO2JqXjLh4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwMpWGj_L2dUD_tXLF4AaABAg","responsibility":"user","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
```
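The raw response above is a JSON array with one object per coded comment, each carrying an `id` plus the four coding dimensions shown in the result table. A minimal sketch of how such a batch response can be parsed and indexed for lookup by comment ID (the helper name `index_codes` is hypothetical; the field names are taken from the response itself):

```python
import json

# The four coding dimensions present in each row of the batch response.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codes(raw: str) -> dict[str, dict[str, str]]:
    """Parse a batch coding response and index it by comment ID.

    Hypothetical helper: assumes the response is a JSON array of objects,
    each with an "id" field plus all four coding dimensions.
    """
    coded = {}
    for row in json.loads(raw):
        # Reject rows missing the ID or any coding dimension.
        missing = [d for d in DIMENSIONS if d not in row]
        if "id" not in row or missing:
            raise ValueError(f"malformed row: {row!r}")
        coded[row["id"]] = {d: row[d] for d in DIMENSIONS}
    return coded

# Two rows copied from the response above, used as sample input.
raw = '''[
  {"id": "ytc_UgwbpiLGPRZb16SOiiV4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzQVp1LDdhO2JqXjLh4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]'''

codes = index_codes(raw)
print(codes["ytc_UgzQVp1LDdhO2JqXjLh4AaABAg"]["policy"])  # → ban
```

Indexing by ID is what makes the "look up by comment ID" view possible: once parsed, each comment's codes are retrieved in constant time, and malformed rows surface immediately instead of silently dropping a dimension.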