Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- `ytc_UgxqLUsQc…` — "AI has a limit: it may outperform any particular individual, yet it cannot go be…"
- `ytc_Ugy_8PwpP…` — "P.S I think it is wise to develop AI. One day it might save us all.…"
- `ytc_UgwnrvMWX…` — "We can't hope to understand an AI or AGI when push comes to shove, because we re…"
- `ytc_Ugzbz6IBI…` — "There has been cases where bots learn from each other by accident, and bots give…"
- `rdc_n3pcd2v` — "It’s all just marketing. “Look how scary AI we don’t even understand it”, is j…"
- `ytc_UgxInE3xn…` — "Art organizations tell themselves they’re effectively fighting this, but I think…"
- `ytc_UgikKCXfu…` — "Robot says it will destroy humans... or guy programmed her to say that? Or he fo…"
- `ytc_UgxYeEyZm…` — "As long as we don’t develop sentience for AI we should be fine, if we do somehow…"
Comment
"Am I the only one that finds it hilarious that we finally have to answer some serious questions about AI and sentience after seeing so many movies and books about it?"
reddit · AI Moral Status · 1691691558.0 (Unix timestamp) · ♥ 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
[
{"id":"rdc_jvlpouq","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"rdc_jvnkqbp","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"rdc_jvmawyn","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"rdc_jvmjcwt","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"rdc_jvnfmfo","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"}
]
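The raw response above can be tied back to the Coding Result table by looking up a comment ID in the returned batch and validating each dimension. A minimal sketch in Python, assuming the allowed values per dimension are exactly the ones visible on this page (the real codebook may define more):

```python
import json

# A subset of the raw LLM response shown above.
raw = """
[
  {"id":"rdc_jvlpouq","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"rdc_jvmawyn","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"}
]
"""

# Allowed values per dimension, inferred from values visible on this page;
# this is an assumption, not the tool's actual codebook.
ALLOWED = {
    "responsibility": {"none"},
    "reasoning": {"unclear", "consequentialist"},
    "policy": {"none", "unclear"},
    "emotion": {"approval", "indifference", "mixed"},
}

def lookup(codings, comment_id):
    """Return the coded dimensions for one comment ID, validating each value."""
    for row in codings:
        if row["id"] != comment_id:
            continue
        for dim, allowed in ALLOWED.items():
            if row[dim] not in allowed:
                raise ValueError(f"unexpected {dim}={row[dim]!r} for {comment_id}")
        return {dim: row[dim] for dim in ALLOWED}
    raise KeyError(comment_id)

coding = lookup(json.loads(raw), "rdc_jvmawyn")
print(coding)
# {'responsibility': 'none', 'reasoning': 'unclear', 'policy': 'unclear', 'emotion': 'mixed'}
```

Unknown IDs raise `KeyError` and out-of-codebook values raise `ValueError`, so a bad batch fails loudly instead of silently populating the table.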