Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "The AI is used to avoid and minimize civilian casualties when their coward Hamas…" (ytr_UgyGICWuO…)
- "@EXPSanity a lora acts like a filter you train with a set of images be it a pers…" (ytr_Ugy9gBcTZ…)
- "checked it rn, chatgpt is still racist to whites. I asked it for such advices …" (ytc_Ugzcxjcug…)
- "You understand they don't just put direct reddit comments in the training data a…" (rdc_l9vm3g0)
- "Imagine what this world would have become if Bernie had been elected president. …" (ytc_UgxHntUuQ…)
- "It's insane to me that people argue that AI "art" requires talent or skill. Real…" (ytc_Ugzxjqfid…)
- "Already too late. The singularity occurred 6000 years ago. We now live in a AI g…" (ytr_UgyRPOfrF…)
- "You’re gonna make it go Rogue 😂 jesus thats so dangerous lol Literally an I-Robo…" (ytc_UgyN0EJbv…)
Comment

> I don’t think this will be the immediate focus of our attention with regards to automation. It is better to focus on how simple, non conscious machines can be used to exploit humans, and who controls them. We are closer to machines that can model millions of people at once than those that can argue for rights. AGI is quite a ways away, and building AGI that is aligned with humanity’s goals is more important than worrying about whether it will feel bad.

Source: youtube, "AI Moral Status", 2018-05-14T20:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[{"id":"ytc_Ugzh1wdOOHKk7AEvEOZ4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyMQpRfgepJs_b43f14AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzZ3yd0xcUfATtKCuh4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxoH3CHkZZ3Q4iGr-F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzagFJ9PYFKvkOqdEF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"fear"},
{"id":"ytc_UgzQLosjNyCDhYjepBR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzbqgiIjw8rlWcmH2t4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugw6GL-VcARNPVudB354AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy4f-Au3qIAvy45JPt4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgymqWaWzHC3NxHIoU54AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"}]
```
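Because the raw response is a JSON array keyed by comment ID, looking up the coding for a single comment is a parse-and-index operation. A minimal sketch (the `lookup_by_id` helper and the inlined two-entry excerpt are illustrative, not part of the tool):

```python
import json

# Illustrative excerpt of a raw model response: a JSON array of coded comments.
raw_response = """[
  {"id": "ytc_UgzZ3yd0xcUfATtKCuh4AaABAg",
   "responsibility": "company", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzQLosjNyCDhYjepBR4AaABAg",
   "responsibility": "developer", "reasoning": "consequentialist",
   "policy": "none", "emotion": "mixed"}
]"""

def lookup_by_id(raw: str, comment_id: str):
    """Parse a raw response and return the coding dict for one comment ID,
    or None if that ID was not coded in this response."""
    codings = {row["id"]: row for row in json.loads(raw)}
    return codings.get(comment_id)

coding = lookup_by_id(raw_response, "ytc_UgzZ3yd0xcUfATtKCuh4AaABAg")
print(coding["emotion"])  # fear
```

Indexing the array into a dict once makes repeated lookups O(1), which matters when cross-referencing many coded comments against one batch response.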