Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by its ID, or inspect one of the random samples below.

Random samples
- "@birdthebird3396 you can't uninstall AI, it's not like a fricking app you can in…" (ytr_UgwE3rd9A…)
- "I think the reason why humans won't stop developing AI is just greed, and the fa…" (ytc_UgxzTEtvl…)
- "The programmer did an excellent job! When a human is gathering info from memory,…" (ytc_UgxjSEZIB…)
- "If it was up to me to destroy AI servers, I would do it tbh…" (ytr_UgztSfi-Q…)
- "I can’t understand why the officer trusts the casinos AI so much like as if the …" (ytc_UgyPYeiAW…)
- "Use to wish, I was young again, not any more. I am so happy with my childhood, t…" (ytc_UgzfECYYj…)
- "Yeah but that’s going to end the same way factories and machines versus actual h…" (ytr_Ugz_R-emB…)
- "We will never have robot rights, we will only have things like a smart toaster b…" (ytc_UgxmAnrNs…)
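The ID lookup can be reproduced offline if the coded records are kept in a simple keyed store. A minimal sketch, assuming a hypothetical `coded_comments.json` export that holds a JSON array of coded records; the file name and shape are illustrative, not part of this tool:

```python
import json

def load_coded(path: str) -> dict:
    """Load a {comment_id: coded_record} mapping from a JSON array on disk."""
    with open(path, encoding="utf-8") as f:
        records = json.load(f)
    return {rec["id"]: rec for rec in records}

# The sample IDs above are truncated for display; a lookup needs the full ID.
coded = load_coded("coded_comments.json")  # hypothetical export file
record = coded.get("ytc_UgyGfDw1xgN5DCKJA9l4AaABAg")  # full ID, from the raw response below
if record:
    print(record["policy"], record["emotion"])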
Comment

> I don't like the potential of confusion arising from the word "conciousness".
> What I mean by a soul:
> An entity that not only processes one's senses, thoughts, memory, emotions, and actions in order to perform actions, but one that experiences them in the form of what feels like "its own world". Something a neural network would not require in order to function and achieve all Human accomplishments, and that we got no idea how it even works.
> What I don't mean by a soul:
> The knowledge and awareness of, and thought about, one's own existence. This is just a side note that any soulless form of intelligence might be able to make some use of, but that any soulful being could perfectly live without. Do I need to be aware of my own existence in order to experience a "report" of my brain activity into the "world" I experience? No, I've been doing this long before I asked myself such a philosophical question.

Source: youtube · Video: AI Moral Status · Posted: 2018-06-16T21:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |

Coded at: 2026-04-27T06:24:59.937377
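Every coded record carries the same four dimensions shown in the table. A minimal sketch of the record shape, using only values that actually occur in the raw response below; the class and value sets are reconstructed for illustration, since the tool does not publish its schema:

```python
from dataclasses import dataclass

# Values observed in raw responses; "unclear" is the fallback on every dimension.
RESPONSIBILITY = {"developer", "user", "ai_itself", "unclear"}
REASONING = {"deontological", "consequentialist", "virtue", "unclear"}
POLICY = {"regulate", "ban", "none", "unclear"}
EMOTION = {"approval", "fear", "indifference", "mixed", "unclear"}

@dataclass
class CodedComment:
    id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> bool:
        """Check that every dimension takes one of its observed values."""
        return (self.responsibility in RESPONSIBILITY
                and self.reasoning in REASONING
                and self.policy in POLICY
                and self.emotion in EMOTION)
```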
Raw LLM Response

```json
[{"id":"ytc_UgyGfDw1xgN5DCKJA9l4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgxakgLjD1EzYZnXfNB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzGX4hl6YJebGzrfzZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyTegkK315HFxl4qbl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyugDsuwkPGzlzIlYx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyaN6sJhihdnnlYSdd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwFD6uewSSBzD-xBAR4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxmtyIG_a7L_Oq0qbV4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgxT40Zl2wApmAbXWyB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyD4g_ysMF1LTzscmp4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"mixed"})
```