Raw LLM Responses
Inspect the exact model output for any coded comment, looked up by its comment ID.
Comment
They're not aware of when they're being tested, it's a different tip off, that they're trained in a certain way for tested tasks. You'll never get caring with them, they don't have that in the model. What is being hand-waved as philosophy here is really important. It's a substantial distinction whether or not something is thinking or not. If something "thinks", then it's responsible as it's own entity in the world, cognito, ergo sum. These models don't have agency, they don't have thought. Everything they say or do is as a software with fault fully attributable to their authors and users. A latent space isn't a memory space, it's a lossy compression. Yes, you can sort of take out hallucinations, then you have to do a lossless compression instead (much more expensive) and then you're not going any generation, just look up. You're doing search rather than "AI". So you know... not exactly something that's going to attract investor hype these days.
Source: youtube · Video: AI Moral Status · Posted: 2025-11-02T13:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | deontological |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
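The coding-result table above is a per-dimension rendering of a single record from the raw LLM response. A minimal sketch of that mapping, assuming the record shape shown in this page (the values below are copied from the first record of the raw response):

```python
# One coding record, as emitted by the LLM (copied from the raw response below).
record = {
    "id": "ytc_UgwdAOIw0vC2w_SXVel4AaABAg",
    "responsibility": "ai_itself",
    "reasoning": "deontological",
    "policy": "unclear",
    "emotion": "mixed",
}

# Render the four coded dimensions as the markdown table shown above.
rows = ["| Dimension | Value |", "|---|---|"]
for dim in ("responsibility", "reasoning", "policy", "emotion"):
    rows.append(f"| {dim.capitalize()} | {record[dim]} |")

table = "\n".join(rows)
print(table)
```

The "Coded at" row is page metadata (the timestamp of the coding run), not part of the model's output, so it is not derived from the record here.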
Raw LLM Response
```json
[
{"id":"ytc_UgwdAOIw0vC2w_SXVel4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugyu7U3JsjE2Z72cTRh4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugw4AGe2FeVh54njl494AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugz10iK1QouyETqmQR14AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwD-S2aY4BOf3U2Exh4AaABAg","responsibility":"company","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugx-4HfaBMiOAiRX6yx4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugxbns0VwxHsfe7e4fJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugzmsr9lspFLoWJm5gJ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxnO6auS0yaYgzQgPB4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgztsmRIcAbBSheo4-l4AaABAg","responsibility":"company","reasoning":"mixed","policy":"unclear","emotion":"indifference"}
]
```
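The raw response is a JSON array with one record per comment, which makes the look-up-by-comment-ID workflow a simple dictionary build. A minimal sketch, assuming records have the shape shown above (the two example records are copied verbatim from the raw response):

```python
import json

# Raw LLM response: a JSON array of coding records, one per comment.
# These two records are taken from the response shown above.
raw = '''[
{"id":"ytc_UgwdAOIw0vC2w_SXVel4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugz10iK1QouyETqmQR14AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]'''

records = json.loads(raw)

# Index by comment ID so any coded comment can be inspected directly.
by_id = {rec["id"]: rec for rec in records}

coding = by_id["ytc_Ugz10iK1QouyETqmQR14AaABAg"]
print(coding["responsibility"], coding["policy"])  # company ban
```

In practice the model output may also arrive with surrounding prose or a markdown fence, in which case the JSON array would need to be extracted before `json.loads`; the sketch assumes a clean array as displayed here.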