Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "SOOOO.... if a robot steals ur art thats bad but when a human does it is good. g…" (ytc_UgyL-j-sU…)
- "AI stans who degrade artists are more funny than offensive to me. Utterly worthl…" (ytc_Ugyoqyr_N…)
- "If you hit emergency stop a large robot will still move a significant distance a…" (ytr_Ugx5ddQe-…)
- "Oh no, a bunch of people who don't know what working hard means complaining that…" (ytc_UgzkfXL4D…)
- "That will never work out completely because there is currently going to be a ban…" (ytc_UgzmtG1Dn…)
- "I love how ai "artists" keep defending ai and also saying shit like "like you co…" (ytc_UgxsRwSFJ…)
- "This is a really interesting look into machine learning - great job Glad You Ask…" (ytc_UgwcWD9Wu…)
- "What of Artificial Superintelligence? (ASI) refers to a theoretical AI that surp…" (ytc_UgxiUSpe2…)
Comment

> 25% risk for humanity ending outright... or you could just not. Hrmmmmmm, seems like an unnecessary risk to me. What's the probability that AI will actually foster some kind of utopia and not a terrible dystopia? At the end of day it's just an ego thing for the elites who are chasing phantoms trying to find a ghost in the machine, hoping they don't open a portal to hell while inscribing a summoning circle around a bottle of alchemical reagents they barely understand. Sure, it could turn to gold, or you could release cyanide gas and create a chemical weapon ripe for the next world conflict. We already know how chemical weapons went last time, we know how nuclear research went... one of the first applications for AI will be warfare, and this could be the worst thing imaginable, if your strategies are devised by an AI that can lie to your military and political leaders so it meets it's objectives.

youtube | AI Moral Status | 2025-11-03T09:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugy7OYJTYkLMcnJS1El4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzP8dnHSX0C0jdV95d4AaABAg","responsibility":"none","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugx2IHKvnKwsopuNSGd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx0IR0yRPYq0AjV92h4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugwhyq8BAlC9kCXCLPt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugz02X5YR-W2s8L5n3B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwI2S10h1ntg512Or54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgzsTUMeQm1KDcKvcnh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugy6x9zZNnO2jRdBmI14AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugygsx3SCUZ5Wk1hqJ54AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
```
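The lookup-by-ID flow above can be sketched in Python: parse the raw JSON array the model returns and index each coding row by its comment ID. This is a minimal sketch, not the tool's actual implementation; the `index_codings` helper name is ours, and the two-row sample response is abridged from the array shown above.

```python
import json

# Abridged raw LLM response: a JSON array of per-comment codings with the
# same four dimensions as the table above (responsibility, reasoning,
# policy, emotion). Real responses carry ten or more rows.
raw_response = """
[
  {"id": "ytc_Ugy6x9zZNnO2jRdBmI14AaABAg",
   "responsibility": "developer", "reasoning": "consequentialist",
   "policy": "ban", "emotion": "fear"},
  {"id": "ytc_UgzsTUMeQm1KDcKvcnh4AaABAg",
   "responsibility": "company", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"}
]
"""

def index_codings(response_text: str) -> dict:
    """Parse the model output and map each comment ID to its coding row."""
    return {row["id"]: row for row in json.loads(response_text)}

codings = index_codings(raw_response)
# Look up one coded comment by its ID, as the "Look up by comment ID"
# field above does.
print(codings["ytc_Ugy6x9zZNnO2jRdBmI14AaABAg"]["policy"])  # -> ban
```

Indexing by ID once up front makes repeated inspections O(1) instead of rescanning the array for every lookup.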