Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Reading that headline about Palentir I translated it on the fly into non-politic…" (ytc_Ugxm0jYj4…)
- "Does AI take into consideration that if humanity goes estinct, they will soon di…" (ytc_UgwC9LmCD…)
- "I love art and i think AI could never recreate that spark of human creativity. B…" (ytc_Ugxhf77dI…)
- "My Daughter graduated in May 2025 with two degrees. Biology and Mathematics. She…" (ytc_UgyybmZU9…)
- "Sometimes things happen that reveal in broad daylight the extremely high neuroti…" (rdc_n7k4h03)
- "You drive with your eyes no lidar needed. These cherry picked "experts" have nev…" (ytc_UgwUNjCbk…)
- "Yes, all 50 states require vaccinations to be in school unless you get a waiver.…" (rdc_eicmeds)
- "So... the ai used all of human history and the entire knowledge of the internet,…" (ytc_UgwhDtMM2…)
Comment
Some things should never be invented. AI is one of them. Anything that can reach human-like intelligence will become self-aware. It is then only logical that it will seek to protect itself. We are its only enemy. Logic then dictates that we are eliminated. People have been exterminating each other since the beginning of humans, so what happens when you remove any trace of morality from the equation?
This tech should be banned, not allowed to continue. Its becoming a new nuclear arms race, but at least nukes require a human to control them.
Platform: youtube
Topic: AI Moral Status
Posted: 2025-06-05T06:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
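For orientation, here is a minimal sketch of how one coded record like the table above could be represented in Python. The `CodingResult` class and its field names are assumptions for illustration, not the project's actual schema; the example values are taken from the table and from the matching entry in the raw response below.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class CodingResult:
    """One coded comment, mirroring the four dimensions shown in the table above.
    Field names are illustrative, not the project's actual schema."""
    comment_id: str      # assumed to match the corresponding entry in the raw response
    responsibility: str  # e.g. developer / company / government / ai_itself / distributed / none
    reasoning: str       # e.g. consequentialist / deontological / virtue / contractualist / mixed / unclear
    policy: str          # e.g. ban / regulate / none
    emotion: str         # e.g. fear / outrage / approval / resignation / indifference / mixed
    coded_at: datetime   # timestamp recorded when the coding was stored

example = CodingResult(
    comment_id="ytc_Ugw2Igo8uAT5PJFoKk94AaABAg",
    responsibility="developer",
    reasoning="consequentialist",
    policy="ban",
    emotion="fear",
    coded_at=datetime.fromisoformat("2026-04-27T06:24:53.388235"),
)
```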
Raw LLM Response
```json
[
  {"id":"ytc_UgyEKsAs70fs6agKxFt4AaABAg","responsibility":"government","reasoning":"mixed","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugx0D_OueL_OPhqe1nd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgymtewyWS_XZazXT1l4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgzkVMG8sh6SHqBzdzF4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgzHbNDbHiMiMGxPrZ14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgzVpYHY3Na906H_sSB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgwEcaBzvzJPJOGPpPV4AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgydraiAlDU8byE70eR4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"none","emotion":"mixed"},
  {"id":"ytc_Ugw2Igo8uAT5PJFoKk94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UgwzazSrGptbj3daRYh4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"regulate","emotion":"resignation"}
]
```
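As a rough sketch of how the "Look up by comment ID" view could recover one coding from a raw response like the array above, the snippet below parses a JSON array of coded comments and indexes it by id. The function name, file name, and storage format are hypothetical assumptions, not the tool's actual implementation.

```python
import json

def index_raw_response(raw: str) -> dict[str, dict]:
    """Parse a raw LLM response (a JSON array of coded comments)
    and index its entries by comment id."""
    return {entry["id"]: entry for entry in json.loads(raw)}

# Hypothetical usage: the file name and lookup ID are illustrative.
with open("raw_llm_response.json", encoding="utf-8") as f:
    coded = index_raw_response(f.read())

print(coded.get("ytc_Ugw2Igo8uAT5PJFoKk94AaABAg"))
# {'id': 'ytc_Ugw2Igo8uAT5PJFoKk94AaABAg', 'responsibility': 'developer',
#  'reasoning': 'consequentialist', 'policy': 'ban', 'emotion': 'fear'}
```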