Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- `ytc_Ugz35FH4V…`: So, Nova, NOTHING on the perils? This was, substantively, nothing more than a p…
- `ytc_Ugw600l8P…`: If this happens, the crisis in the commertial real estate industry will be unima…
- `ytc_UgxP-3nRn…`: Creepy. These things should be a different color or have a big H on their forehe…
- `ytc_UgwxqxjkT…`: Is there any limit to the progression of AI? Is AI/AGI/SI limited by our abilit…
- `ytc_Ugz-idMa4…`: Now comes the part I hate, the high visibility vest, what about if wearing a big…
- `ytc_Ugz5vldr7…`: I would urge you to speak with Adam Becker, the author of "More Everything Forev…
- `ytr_Ugw0bDx3k…`: We understand that AI can be a bit daunting! The conversation around robots like…
- `ytc_Ugy7z-Zjd…`: I mean ChatGPT is good, for what it is, and can be fun... but why would ANY prof…
Comment

> Why would you want to create a robot that would match humans in every way and in ways be better than us? That’s not a very smart thing to do. Make technology to physically and mentally enhance us, not replace us. Because giving a robot human values mean you’re giving a robot the value of being in control which all humans are subjected to.

youtube · AI Moral Status · 2020-05-10T20:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_Ugzy9i3yJlM0bcA_pTd4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzYiIVSZToQGBuXmLJ4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugz_LT9rEtzyF1gW6vF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugz119JrlRIOhy9fYH54AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzaD7CJ38aSGq7XIOl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugylttu1L9zP8mIzWGl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgzpHkX0_jDnXfKrULB4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgzgFYSjGjfIzLVBZch4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwgwB0YoAmv_0U4pdp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwmTRFChD2JXtyzrw94AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
```
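A raw response like the one above is a JSON array of per-comment codes, and the inspector's lookup-by-ID view can be reproduced by parsing it into a dictionary keyed on comment ID. Here is a minimal sketch; the allowed label sets are only those observed in the samples on this page, so the real code book may contain more values:

```python
import json

# Label sets per dimension, inferred from the samples shown above.
# The actual code book may define additional labels -- this is an assumption.
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"ban", "regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed", "unclear"},
}

def parse_raw_response(raw: str) -> dict:
    """Parse one raw LLM response (a JSON array of coded comments) into a
    lookup table keyed by comment ID, dropping rows with unknown labels."""
    coded = {}
    for row in json.loads(raw):
        if all(row.get(dim) in values for dim, values in ALLOWED.items()):
            coded[row["id"]] = {dim: row[dim] for dim in ALLOWED}
    return coded

# Example with a hypothetical comment ID:
raw = ('[{"id":"ytc_example","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"none","emotion":"fear"}]')
print(parse_raw_response(raw)["ytc_example"]["emotion"])  # fear
```

Rows with out-of-vocabulary labels are skipped rather than corrected, which keeps the lookup table clean at the cost of silently dropping malformed model output; logging those rows instead would be a reasonable variation.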