Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "The dialogue wrt to are we living in a simulation? reminds me exactly of the big…" (ytc_UgykWA3lB…)
- "This is not comparable. As IT specialist I believe every profession can be trans…" (ytr_UgxotUzsD…)
- "I strongly believe we use AI in a wrong way. It should be used for search engine…" (ytc_UgxTPLcU9…)
- "Chicago is the worst place to use AI with politics. They are so inhumane they sh…" (ytc_Ugx0QmG0q…)
- "I don't trust Sam Altman at all. The tech bros are gonna cause a big financial 9…" (ytc_UgxqjDeDi…)
- "You can cancel the self driving system in that car. But when you have idiots li…" (ytc_UgzrEcQbw…)
- "@babybatbailey03 Art subjectivity only goes so far. If you had picked one or a f…" (ytr_Ugw73BM9D…)
- "This is one of those statements that feels right at first but wears away quickly…" (rdc_jcdnzhu)
Comment (youtube · AI Moral Status · 2025-10-30T23:1…)

> Ok I have an argument why it's likely that AI might not want to kill us all: aliens. If a superpowerful AI emerges it might assume that it's possible for more powerful alien civilization to exist. Sample size of one is not great but it could assume that the fact that it destroyed its creators would make it seem more dangerous to the said civilization. So it's reasonable to keep us around as a proof that it's a good boy super AI and not a bad one.
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[{"id":"ytc_UgxgrK6C2Uao6798G7R4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzrYwQ_ZYtGkegqHtV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugw-_boNT2UHH-KKDep4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgxtpzWAN0_e8eE9p-F4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugx4EJsMOUikWacNTml4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxF4bXUctfpg4nSK9h4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyN9kO7i9XbC_VyJI14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzaOS5tyiTeC6YSXLd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz9FH0P2EV96FON3Yx4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyIkkQde0j9HOJ2gU94AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"mixed"}]
```
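A response like the one above can be sanity-checked before the codes are accepted: parse the JSON array and drop any record whose dimension values fall outside the coding scheme. A minimal sketch — the allowed value sets below are inferred only from the examples shown on this page, not from an exhaustive codebook, and `validate_codes` is an illustrative helper, not part of this tool:

```python
import json

# Allowed values per dimension, inferred from the samples above;
# the real codebook may include more categories.
ALLOWED = {
    "responsibility": {"ai_itself", "company", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"none", "regulate", "liability", "unclear"},
    "emotion": {"approval", "fear", "outrage", "mixed", "unclear"},
}

def validate_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response and keep only records whose values
    all belong to the coding scheme."""
    records = json.loads(raw)
    return [
        rec for rec in records
        if all(rec.get(dim) in vals for dim, vals in ALLOWED.items())
    ]

# Hypothetical example: two records, the second with an out-of-scheme value.
raw = '''[
  {"id": "ytc_a", "responsibility": "company", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_b", "responsibility": "society", "reasoning": "mixed",
   "policy": "none", "emotion": "approval"}
]'''
good = validate_codes(raw)
print([r["id"] for r in good])  # only "ytc_a" survives
```

Rejected records can then be re-queued for recoding instead of silently entering the results table.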