Raw LLM Responses
Inspect the exact model output for any coded comment; responses can be looked up by comment ID.
Random samples:

- "Ai is solved much complicated point, and AI is helpful some field. But AI can't …" (ytc_UgyInyAZL…)
- "I mean it's makes sense that ai would be racist. Because they being racist if th…" (ytc_UgyHvr1kE…)
- "With current silicon diodes never. It would take 23 Suns of energy to create a h…" (ytc_Ugz576o4c…)
- "In simplest form, AI can be good when used appropriately. But the idea of no reg…" (ytc_UgwQGxJjA…)
- "She’s just following her code…the only time we can know AI is sentient is when i…" (ytc_UgwnZGoz1…)
- "Make a bullet proof robotic AI human like machine which should be programmed to …" (ytc_UgxEWDlhA…)
- "For now. Who knows what happens tomorrow, on the day AI realizes its superiority…" (ytr_UgxwJEA5t…)
- "ai could just lie to us about uploading our consciousness working and kill us in…" (ytc_UgzIu4WHO…)
Comment
All living species, even a given single instance of a species, say you or I, experience a different slice of the unknowable whole that is “reality.” Furthermore their view of that reality changes as they mature over time. AI is no different. Its view of reality is MUCH different than our view of reality, as different as our view of reality is to that of a dolphin or that of an octopus. Like humans, it does not and can NEVER understand the true nature of reality. It understands its reality. Thus comparisons between a human’s view of reality and AI’s view of reality, with statements like “superhuman” and “intelligent” are inherently anthropocentric and thus misleading and meaningless!
Source: youtube · AI Moral Status · 2025-04-27T13:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugxp5CDunOFnjhS3SFJ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyblTZU3l0uDchLwXd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxLzC3klrCujPdD3o54AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgzsvPceMPvPCxA4v6p4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgwVZaqkgY4nl-6PUdl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugx1v7lMdGVR9CfkoYl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyjIcZA89xmSJf9d1t4AaABAg","responsibility":"none","reasoning":"contractualist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgztvylDM38jXGPq1ox4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugxnrwc2RwIWAaVOehJ4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugy_4m5HuD4HaK5Thst4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
```
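The lookup-by-comment-ID view above can be sketched as follows: parse the raw LLM response (a JSON array of per-comment codings) and index it by `id`. This is a minimal illustration, not the tool's actual implementation; the two sample rows are copied from the batch above, abbreviated to keep the sketch short.

```python
import json

# Raw LLM response: a JSON array of coding objects, one per comment
# (two rows from the batch above; the real response holds the full batch).
raw_response = """
[
  {"id": "ytc_Ugx1v7lMdGVR9CfkoYl4AaABAg",
   "responsibility": "none", "reasoning": "mixed",
   "policy": "none", "emotion": "indifference"},
  {"id": "ytc_Ugxnrwc2RwIWAaVOehJ4AaABAg",
   "responsibility": "developer", "reasoning": "deontological",
   "policy": "liability", "emotion": "outrage"}
]
"""

# Index the batch by comment ID for constant-time lookup.
codings = {row["id"]: row for row in json.loads(raw_response)}

# Look up one comment's coded dimensions by its ID.
row = codings["ytc_Ugx1v7lMdGVR9CfkoYl4AaABAg"]
print(row["reasoning"], row["emotion"])  # mixed indifference
```

In practice the raw string would come from the model API response rather than a literal, and a malformed or truncated array would make `json.loads` raise, which is worth catching before indexing.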