Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "It was a lie. ChatGPT won't suggest a specific brand. This advertisement had bee…" (ytr_UgwQz3dbY…)
- "I think some Serive jobs will still be hot in this AI hellscape. Work on a/c uni…" (ytc_UgxLfOA5k…)
- "How the hell is care at 19 percent 😅 Your not gonna tell me a robot is gonna cu…" (ytc_UgyZlqT59…)
- "Frankly, I would trust a robot before anyone with power today. They are psychoti…" (ytc_UgxDDHigG…)
- "@opensocietyenjoyer also the fact that you are asking me that question shows me …" (ytr_UgxoAKcoy…)
- "The answer as to why AI can't create art is very simple, it lacks imagination. A…" (ytc_UgyGSETNy…)
- "Good grief... So-called self driving technology is programmed by incel geeks ba…" (ytc_Ugz_JNBu0…)
- "I agree with everything he said, but he missed a vital part. #UBI UBI has been r…" (ytc_UgzP-MwhN…)
Comment

> So in most sci fi having a robot almost indistinguishable from human beings was like the next to last stage of the AI's plan to defeat the humans. But in reality humans have gone ahead and moved our machine overlords or exterminators to that almost check mate to humanity move long before that stage of the machines actually launching their first strike. Like surrendering to the enemy combatants before they start basic training. OK that's a better analogy. Maybe our machine overlords will be better to us than we've been to each other or ourselves. By Your Command

youtube | AI Moral Status | 2023-09-21T05:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxYQQ3rcIAU37-WnPp4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyUB2BsF952OdXvf054AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx4NXVM-bRkjXNcriB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyznBa8FPSHQlFQluV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgzxRD-OuYWN0-319N94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyyvlwcerF0mFoVrst4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugzx2a0QLbeOlDxMqFF4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugwft6-WPn4Wda4bGQl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"disapproval"},
  {"id":"ytc_UgwCih1PweL40W7TXtl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyIfYMESlQgKqnNfip4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"}
]
```
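The raw response is a JSON array of per-comment coding records, each carrying an `id` plus the four dimensions shown in the table above. A minimal sketch of how such a batch could be parsed and indexed for lookup by comment ID (the `index_codings` helper and its validation are illustrative, not part of this tool; the two records are taken verbatim from the response above):

```python
import json

# Two records copied from the raw batch response shown above.
RAW_RESPONSE = """[
  {"id": "ytc_UgxYQQ3rcIAU37-WnPp4AaABAg", "responsibility": "distributed",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgyznBa8FPSHQlFQluV4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]"""

# Every record is expected to carry these keys (assumption based on the
# visible output; the actual coding schema may include more fields).
REQUIRED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_codings(raw: str) -> dict:
    """Parse a batch response and index the codings by comment ID.

    Raises ValueError if the payload is not a list of complete records,
    so malformed model output fails loudly instead of silently.
    """
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coding records")
    by_id = {}
    for rec in records:
        missing = REQUIRED_FIELDS - rec.keys()
        if missing:
            raise ValueError(f"record {rec.get('id')!r} missing {sorted(missing)}")
        by_id[rec["id"]] = {k: rec[k] for k in REQUIRED_FIELDS if k != "id"}
    return by_id

codings = index_codings(RAW_RESPONSE)
print(codings["ytc_UgxYQQ3rcIAU37-WnPp4AaABAg"]["emotion"])  # fear
```

Indexing by ID is what makes the "Look up by comment ID" view above cheap: one parse per batch, then dictionary lookups per comment.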