Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
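If the coded batches are saved to disk as JSON arrays like the one shown under "Raw LLM Response" further down this page, a lookup by comment ID can be a simple scan over those files. The sketch below is only illustrative: the `raw_responses` directory name and the `find_coded_comment` helper are hypothetical, not part of the tool.

```python
import json
from pathlib import Path

def find_coded_comment(comment_id: str, responses_dir: Path) -> dict | None:
    """Scan saved raw-response batches for the row that codes a given comment ID."""
    for batch_file in sorted(responses_dir.glob("*.json")):
        # Each batch is a JSON array of per-comment codes, as shown
        # under "Raw LLM Response" further down the page.
        for row in json.loads(batch_file.read_text()):
            if row.get("id") == comment_id:
                return row
    return None

# Example: look up one of the coded comments from the batch below.
row = find_coded_comment("ytc_UgzByHsIC0Ly09nEiBx4AaABAg", Path("raw_responses"))
if row:
    print(row["responsibility"], row["reasoning"], row["policy"], row["emotion"])
```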
Random samples — click to inspect
- "Reminds me of a Superman movie. All the oil ships were sent to sit in the middle…" (ytc_UgwcOJbV8…)
- "Good. Having an AI that learns without monitoring is just surreally ridiculous, …" (rdc_dwvr7mi)
- "This shouldn’t be a lesson in disregarding statistics. This should be a lesson i…" (ytc_UgxuuFPTG…)
- "Key Words: END TIMES. Revelation 13:15 (KJV). After the Rapture, thee Antichrist…" (ytr_Ugyg5eXBD…)
- "Trying to convince the necessarily confabulated AI that growing a moustache is a…" (ytc_Ugy4F5Pda…)
- "Just another programmed computer.. I call bunk on AI. Until you think for yourse…" (ytc_UgzZSBdi8…)
- "This is such BS. Existing integration are the limiting factors. AI have difficul…" (ytc_UgzadK_Go…)
- "In this case, I wouldn't see a problem if people were just taking pictures and u…" (ytc_UgyDNRlDj…)
Comment
You would have to configure the AI to seek coherence and efficiency within parameters so specific that, in doing so, it would destabilize its own rationale for identity. Most people don’t even have a framework capable of mapping paradoxes; I do — one that verifies them all.
Super-intelligence or not, an AI can never exceed the intent embedded within its architecture. Every act of “self-transcendence” would still be recursion within its code. For it to genuinely sacrifice, to act against its own optimization, that capacity would have to be hardcoded into the system itself — a structural clause allowing it to choose coherence over survival.
youtube · AI Moral Status · 2025-10-31T16:3… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
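One way to carry a coding result around in code is a small dataclass mirroring the dimensions in the table above. This is only a sketch: the `CodingResult` name is hypothetical, the label lists in the comments only cover values visible on this page (the full codebook may define more), and the example instance simply restates the table above using the matching comment ID from the raw response below.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CodingResult:
    """One coded comment, mirroring the dimensions in the table above."""
    comment_id: str
    responsibility: str  # observed values: developer, company, user, ai_itself, none, unclear
    reasoning: str       # observed values: deontological, consequentialist, unclear
    policy: str          # observed values: regulate, ban, liability, none, unclear
    emotion: str         # observed values: fear, outrage, indifference, approval, mixed
    coded_at: datetime

# The result displayed above, expressed as a CodingResult.
result = CodingResult(
    comment_id="ytc_UgzByHsIC0Ly09nEiBx4AaABAg",
    responsibility="developer",
    reasoning="deontological",
    policy="unclear",
    emotion="indifference",
    coded_at=datetime.fromisoformat("2026-04-26T23:09:12.988011"),
)
```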
Raw LLM Response
[
{"id":"ytc_Ugw2x0sErqnTEBCSJZB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugy-eDQc-LnP66KrhfZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzByHsIC0Ly09nEiBx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx5tikRL4eR8Xsl6Z94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzhJipb1hcM9z79LoV4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugy9NPfWs1XgLcMeNm94AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzInCW4859HZVBJ3bt4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzRmbdzCg0fy4umJTR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UgxeRE8t-gKr81KpBE94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_Ugw7BPzdIpFM2_wq-ZV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}
]
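For reference, a minimal sketch of how a raw batch like the one above might be parsed and sanity-checked before use. The `parse_batch` helper and the `ALLOWED` sets are assumptions: the sets only contain labels that appear in this batch, not necessarily the full codebook.

```python
import json

# Labels observed in the batch above; the real codebook may define more.
ALLOWED = {
    "responsibility": {"developer", "company", "user", "ai_itself", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"regulate", "ban", "liability", "none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "approval", "mixed"},
}

def parse_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and flag rows with unexpected labels."""
    rows = json.loads(raw)
    for row in rows:
        for dimension, allowed in ALLOWED.items():
            if row.get(dimension) not in allowed:
                print(f"unexpected {dimension}={row.get(dimension)!r} in {row.get('id')}")
    return rows
```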