Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "Ai is 19 ..the tree of death number ,that coxed the fall of Adam to this level .…" — ytc_UgxFsIb37…
- "I just had an AI interview scheduled a week ago. It was weird the questions that…" — ytc_UgwhgZ5IE…
- "I'm a chef for cooking instant noddles, because this guy is an artist for genera…" — ytc_UgxZxqw-b…
- "Robot: You fake robot, intruder want my job / Human: Wait, wait, calm down / Robot:…" — ytc_UgxIBBNVQ…
- "we are misunderstanding & cleaning the real question or issue here / Why have we …" — ytc_UgwKbqjnK…
- "@RobertA-hq3vzBullshit. You cant hard code the robot to pick up any kind of shap…" — ytr_Ugz7zvP9_…
- "All bigger technologies had always big impacts. Language was the first, writing …" — ytc_UgyguD1N0…
- "I don't think people can create a soul. You can create very convincing AI, but e…" — ytc_Ugw476EUH…
Comment
If AI is exclusively being trained on what is on the internet, then it is exclusively seeing the worst of humanity. A lot of what people say isn’t always how they will act. Ofc there are people who both say and do awful things, but every time one of us writes something we don’t fully mean, like “this person should die” or “this person is evil”, an AI will digest that and integrate it as how we really are. It could easily become truly evil because of all the evil we put to words on the internet
youtube · AI Moral Status · 2025-12-15T23:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | virtue |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[{"id":"ytc_Ugy08cRqfdWrfiPvMfR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz5XwfLhOgBo9WKKuR4AaABAg","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgwADlEM6OFCHxRLhCN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugxn60-oigQPBiW8Umx4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxarHxDLb0wO3Oi_cV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx9lrwYkfafZVwn8th4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgzCG0MF8m37sHu0Nil4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugzb4gUvOBUau98PxIJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzbqkVZKD_jtAdABWp4AaABAg","responsibility":"company","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzmnPLy-8m8qRGaBUp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}]
```
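Each raw response is a JSON array with one object per comment, keyed by comment ID and carrying the four coding dimensions from the table above. A minimal sketch of parsing such a response and looking up a code by comment ID — the allowed-value sets below are assumptions inferred from the samples on this page, not an authoritative codebook:

```python
import json

# Coding dimensions and their values as seen on this page.
# NOTE: these allowed-value sets are inferred from the samples,
# not taken from an official codebook.
ALLOWED = {
    "responsibility": {"none", "company", "developer", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "unclear", "liability", "ban", "regulate"},
    "emotion": {"indifference", "resignation", "fear", "mixed", "outrage"},
}

def parse_coding(raw: str) -> dict[str, dict[str, str]]:
    """Parse a raw LLM response and index the codes by comment ID."""
    coded = {}
    for row in json.loads(raw):
        cid = row["id"]
        # Sanity-check every dimension before accepting the row.
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim}={row.get(dim)!r}")
        coded[cid] = {dim: row[dim] for dim in ALLOWED}
    return coded

# Hypothetical one-row response for illustration.
raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"virtue","policy":"regulate","emotion":"fear"}]')
codes = parse_coding(raw)
print(codes["ytc_example"]["policy"])  # -> regulate
```

Indexing by ID makes the "look up by comment ID" step a single dictionary access, and the validation pass surfaces any out-of-vocabulary code the model emits instead of silently storing it.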