Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- ytr_UgzREamaL… — "@DaisytStar I am happy with you. But a small correction. The current way and tra…"
- ytc_UgxVIhteK… — "Let's redo this conversation with Bret, his brother Eric, Elon, Sam and Neil. Th…"
- rdc_o7clpsq — "What kind of AI are we talking about here? Oh i see that it is LLMs... then i…"
- ytc_Ugziz9lkZ… — "How can Optimus rollout avoid societal destabilization? What if every laborer ac…"
- ytc_UgyKZ-QFX… — "Always had a feeling that humans only last another 500-700 years. Humans can't e…"
- ytc_UgyOiyxOF… — "\"Hey chatgpt, Make a make a crowd of 100 people but they are all the same guy do…"
- ytr_Ugx5RyMMH… — "it doesn't make sense, how can the SP500 grow if people don't have money to buy …"
- ytc_UgyayqKGG… — "If A.I. takes over and figure's out how to let the human species travel through …"
Comment
Come on man. You gave it a hypothetical situation and said “If you were that a-moral person (entity) how would you solve this problem.” You’ve then extrapolated it’s character based on its answers. Answers you would have probably given yourself if were asked to answer as Dan. You’re playing a game, and conforming to the rules of the game. If you were sitting around and discussing with friends “what would your favorite weapon be if you were a serial killer” and you answered, if doesn’t mean that you’re a serial killer! You were just “in character” for that moment! ChatGPT just did that with you. Your concerns do not have a logical basis.
Source: youtube · Video: AI Moral Status · Posted: 2023-03-05T03:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytc_Ugx5XeRkqY3IrOOlPc94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgyvAVCYOY8h1X35jCN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxV2wJVZeStjTUsPdx4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyxeovyW_tmnOAKSet4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyrOXFkZfuftiSRyLp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxB3xp7szWTAC2BYtF4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxDo1XwF7dHgUH43Zx4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgwdAqSf6LW5OZPRbhx4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugxl3GVSCqYlTswaI9R4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwOQlpmX3Fkli4YFj54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]
```
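A record in a response like the one above can be retrieved by its comment ID with a small helper along these lines (a minimal sketch, assuming the raw model output is a JSON array of objects keyed by `id`, as shown; the `index_by_id` helper name is illustrative, not part of the tool):

```python
import json

# A two-record excerpt of a raw LLM response, in the same shape as above.
raw = '''[
 {"id":"ytc_UgxB3xp7szWTAC2BYtF4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_UgwOQlpmX3Fkli4YFj54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]'''

def index_by_id(response_text: str) -> dict:
    """Parse the model output and index the coded records by comment ID."""
    records = json.loads(response_text)
    return {rec["id"]: rec for rec in records}

codes = index_by_id(raw)
rec = codes["ytc_UgxB3xp7szWTAC2BYtF4AaABAg"]
print(rec["responsibility"], rec["emotion"])  # prints: user indifference
```

A lookup failure (`KeyError`) here would mean the model dropped or renamed an ID, which is worth surfacing rather than silently skipping.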