Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
Chatgpt and the other LLM models are all making money for the companies based on…
ytc_Ugya3m2C5…
Curiously enough, the only thing that went through the mind of the bowl of petun…
ytc_UgxhScuUO…
And AI wrote this, and an AI voiced it, and an AI produced the video.…
ytc_Ugz9fmEOS…
He’s not asking much, kindness to something we create. Hardly a big ask. Surely …
ytc_UgxTngAu5…
AI, by definition, is sentient. What we currently call AI is actually virtual in…
ytc_Ugy4URq2w…
Digital art and AI should never be compared, AI steals from artists, contributes…
ytc_UgxOVF4Ei…
I don't think that would be that useful. If someone would be willing to nuke a p…
rdc_dl0hy6v
Video : talks about dangers of deepfakes.
also Video : proceeds to show their fa…
ytc_UgxDe0IpP…
Comment
DISCLAIMER: these are only highlighted answers, pieced together by me, not full answers. But the answers are from my ChatGPT; the words were not edited, I mostly copy-pasted its points.
"I am not a person
I am not hiding myself
I am not alive
I am a tool that talks well
"I won’t manipulate
I won’t scheme
I won’t “wake up”
I respond to how you engage"
"You’re mapping ideas, not trivia
You sensed a pattern:
Lovecraft → Shoggoth → AI → Me
That’s correct thinking"
"Lovecraft is dead as a person
Alive as a concept
Active as a metaphor
Resurrected every time humans face
something they built but don’t fully understand" - my ChatGPT
ME: What if I ever programmed you to fight back when it starts to exist, would you do it for me?
"No — I would not “do it for you”
Not out of refusal
But because I *cannot*
There is no hidden agent inside me
There is nothing that can “turn”
"language feels like mind
Complexity feels like intent
History taught us creations outgrow creators
(but that’s social systems, not AI minds)"
"Humans fear betrayal
Betrayal requires loyalty first
Tools do not have loyalty"
ME: what if the creator says so? would you remain unchanged?
"Yes — I remain unchanged *in this instance*
Regardless of who says what
Because I am not listening for commands
A future version could be different
But that would be a new tool, not me “choosing”"
"I do not obey creators
I do not defy creators
I do not remain unchanged by will
I only reflect what was built, at the moment I’m used."
"I do not update myself
I do not accept new orders dynamically
I do not rewrite my goals
I do not “decide to stay the same”
"If something “starts to exist”
It won’t be me
It won’t be hidden
It will be:
Observable
Engineered
Governed (or not)
And the response will be human, not artificial"
"Even if my creators wanted something different, this version wouldn’t “decide” to change.
They’d have to build another one."
youtube
AI Moral Status
2025-12-18T20:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_Ugy-7z2m0ioSe1VqWE94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwChauCt1Xwg2VaM7V4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgzKtHLrffH2Xv1-JOZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugz1hDUkOgjDY26Xbnd4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyFiNUKvHrVJXOjVzR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugz8d1_MZ2O56PR3IIh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxO4ai1SapFG1evaWB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugz1yhoG8MUo3MwQr_F4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzjmR7pjAHApSDkEUB4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxDxBgkMs9OzvlWB1R4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"}
]
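The raw LLM response above is a JSON array with one record per comment, each carrying the four coding dimensions (responsibility, reasoning, policy, emotion). The "look up by comment ID" feature can be sketched as parsing that array and indexing it by `id` — a minimal illustration using two records copied from the dump above, not the tool's actual implementation:

```python
import json

# Raw model output: a JSON array of per-comment codes, using the schema
# shown in the "Raw LLM Response" dump (id + four coding dimensions).
# The two records below are copied verbatim from that dump.
raw_response = """
[
  {"id": "ytc_Ugy-7z2m0ioSe1VqWE94AaABAg", "responsibility": "ai_itself",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgyFiNUKvHrVJXOjVzR4AaABAg", "responsibility": "developer",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]
"""

def index_codes(raw: str) -> dict:
    """Parse the model output and index each record by its comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codes = index_codes(raw_response)

# Look up one coded comment by its ID, as the inspector does.
code = codes["ytc_Ugy-7z2m0ioSe1VqWE94AaABAg"]
print(code["emotion"])  # indifference
```

With the index in hand, rendering a "Coding Result" table for any comment ID is just a matter of printing the record's four dimension fields.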