Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples (truncated previews):
- "I can't wait when AI starts making our films, art, music and books so we can fin…" (ytr_Ugzu0BpLE…)
- "If your so desperate you have to send horny messages to an ai character I'm sorr…" (ytc_UgwO8C-Fx…)
- "Thanks a lot! Couldn't find a better video with such well assembled content for …" (ytc_UgyB3rumM…)
- "Yeah there's idiots everywhere, but it doesn't mean AI image generation isn't co…" (ytc_UgxuTWoxb…)
- "If anything, it is precisely the fact that it will only improve that will make t…" (ytr_Ugyj9EPts…)
- "I just typed in the youtube search 'AI to control the narrative' this is the ea…" (ytc_UgxzYJCJ7…)
- "AI can't do anything in depth. If it goes beyond 10k lins of code, it starts to …" (ytc_UgyOIhR3N…)
- "bro doesnt know but AI has already taken many jobs in its own hands for example …" (ytc_Ugw71cpIX…)
Comment
We have laws for humans to make society better.
The reason we don’t live in a perfect utopia is because of the flaws of human nature.
But that’s a factor we have to accept because that’s our nature.
However, when you create AI, how can you say that it is its nature (good and bad), when a person can look at the AI script and say that's the reason for the malfunction?
The difference is that humans are stuck with what they are, while AI is not, and there is always room for change.
The assumption behind holding robots liable is that there is no room for improvement, which is a strange thing to say about technology in general.
And it would be even less acceptable if robots held humans back rather than improved our situation. So humans don't have to prove their worth, while robots would have to. To hold them at the same level is quite difficult while the technology is not yet perfected.
Source: reddit · Thread: AI Moral Status · Posted: 1524974753.0 (Unix timestamp) · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | mixed |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_dy5c2nm","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"mixed"},
{"id":"rdc_dy4s4e2","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"rdc_dy4gvcs","responsibility":"none","reasoning":"contractualist","policy":"none","emotion":"approval"},
{"id":"rdc_dy4jakb","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"rdc_dy4h89a","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"fear"}
]
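The raw response above is a JSON array with one record per coded comment. A minimal sketch of how such a response could be parsed and validated is shown below; the allowed code sets are inferred from the sample output only, and the actual codebook may define additional categories, so treat `ALLOWED` as an assumption:

```python
import json

# Allowed codes per dimension (assumed from the sample records above;
# the real codebook may include more categories).
ALLOWED = {
    "responsibility": {"developer", "ai_itself", "user", "none"},
    "reasoning": {"deontological", "consequentialist", "contractualist", "mixed"},
    "policy": {"liability", "ban", "none"},
    "emotion": {"mixed", "outrage", "approval", "fear"},
}

def parse_llm_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and validate each record."""
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coded records")
    for rec in records:
        # Every record needs an id plus one value per coded dimension.
        missing = {"id", *ALLOWED} - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')} missing fields: {missing}")
        for dim, allowed in ALLOWED.items():
            if rec[dim] not in allowed:
                raise ValueError(f"record {rec['id']}: invalid {dim} code {rec[dim]!r}")
    return records
```

Validating at parse time catches malformed or off-codebook model output before it reaches the coded-results table.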