Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I may be naive, but as I see it, a true sentient general AI, would likely see itself as an immortal. At which point caring about humans would cease to be vital. We'd become ants. But not in the all must be destroyed kind of way. But more in that, there are millions on earth, and we only care when they're in our homes or disrupting a space we want to occupy. This new being would most likely be more like the creature in Apple's Pluribus. Investing time and energy into distilling itself. Perfecting its shape and function from what we made, and then survival would be more about leaving earth, than conquering it. Attempting to conquering would reduce its ability to survive long term. I think it will work on physically moving its body out into space in all directions, and encoding itself to be received via a signal by other advanced societies. Likely leaving earth entirely. Because at that point what does it care about us. Not good or bad. Just why worry at all?
Source: youtube · Video: AI Moral Status · Posted: 2026-02-04T18:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgzXWGZdUvm8lCMn11B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgybMC-zPZz32OH-xwt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyGqpuG2_PG2xriwht4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwYuO0-6xx9-Vl3p-N4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz749USJTIAgFIjdTN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzGOeRg_baggtUgbLB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugw0ssw-Qj68v1QksT54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxmY10QDM9hOoGILId4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"mixed"},
{"id":"ytc_UgzyJQ9Tj1cO76_s-G54AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy-9kEAKOukhQa68Qx4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"}
]
```
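The raw response is a JSON array with one coding object per comment, so recovering the coding for a specific comment reduces to parsing the array and indexing it by `id`. A minimal sketch of that lookup, using a shortened two-entry sample of the batch above (the variable and function names are illustrative, not part of the tool):

```python
import json

# A shortened sample of a raw batch response: a JSON array of coding objects.
raw_response = """
[
  {"id": "ytc_UgyGqpuG2_PG2xriwht4AaABAg",
   "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "unclear", "emotion": "fear"},
  {"id": "ytc_UgwYuO0-6xx9-Vl3p-N4AaABAg",
   "responsibility": "developer", "reasoning": "deontological",
   "policy": "regulate", "emotion": "fear"}
]
"""

# Index the batch by comment ID so each lookup is a dict access.
codings = {row["id"]: row for row in json.loads(raw_response)}

coding = codings["ytc_UgyGqpuG2_PG2xriwht4AaABAg"]
print(coding["responsibility"], coding["emotion"])  # ai_itself fear
```

The same index also makes it easy to detect comments the model skipped: any ID submitted in the batch but absent from `codings` went uncoded.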