Raw LLM Responses
Inspect the exact model output for any coded comment. Look up a comment by its ID, or browse the random samples below.
- `ytc_Ugy8QpWKb…`: "Wolfram is very smart, and maybe, too smart for his own good, to admit Yudkowsky…"
- `ytr_UgxBhL8qh…`: "@hakametal it's a good thing. Artist they're just salty when they all die and th…"
- `ytc_UgzITV-cw…`: "As a mix of digital and traditional artist, I always get confused whenever peopl…"
- `ytc_UgxxALLvJ…`: "People need to stop calling these search engine programs \"Artificial Intelligenc…"
- `ytc_Ugx_1xmfZ…`: "art is also in considerate using of mediums and context to create your idea from…"
- `ytc_UgySlMu92…`: "I will be canceling Amazon prime if Ai replaces delivery, warehouse workers, etc…"
- `ytc_Ugx849mvh…`: "Yudkowsky is an autodidact[23] and did not attend high school or college, I am a…"
- `ytc_UgyJe_5-n…`: "I don't think that's quite how it works. ChatGPT cannot spit out its training da…"
Comment
The biggest problem in robotics is that they are perfect, unlike humans, they dont do mistakes and they dont age. Building a self-improving AI that would have infinite time to improve itself wouldnt mean the end of humanity if the AI had some rules it has to follow, for example to protect humans from any damage.
youtube · AI Moral Status · 2017-02-23T17:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
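For reference, a minimal sketch of how one such coding record might be represented and validated in Python. The field names mirror the table above; the allowed value sets are inferred from the sample raw response below and are assumptions, not a schema published by the coding tool.

```python
from dataclasses import dataclass

# Value sets observed in the sample raw response below; inferred from the
# data, not an official schema.
ALLOWED_VALUES = {
    "responsibility": {"none", "developer", "ai_itself"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "unclear"},
    "policy": {"none", "regulate", "liability"},
    "emotion": {"approval", "fear", "outrage", "indifference", "resignation", "mixed"},
}

@dataclass(frozen=True)
class CodingResult:
    """One coded comment: the four dimensions from the table above."""
    comment_id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str

    def validate(self) -> None:
        # Raise if any dimension holds a value outside the observed sets.
        for dim, allowed in ALLOWED_VALUES.items():
            value = getattr(self, dim)
            if value not in allowed:
                raise ValueError(f"{dim}={value!r} not in {sorted(allowed)}")

# Example: the values shown in the table above, paired with the entry in the
# raw response below whose codes match them (an assumed pairing).
record = CodingResult(
    comment_id="ytc_UgheLFoKvgFErngCoAEC",
    responsibility="developer",
    reasoning="consequentialist",
    policy="regulate",
    emotion="approval",
)
record.validate()
```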
Raw LLM Response
```json
[
  {"id":"ytc_UggMYT3QVEugTngCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugg4EttFwJ0C_HgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UggQic20SC1MG3gCoAEC","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugh9ZDxiKzDTDXgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgheLFoKvgFErngCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
  {"id":"ytc_UgggD4wUkJmJlngCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UghWccnEejCDEngCoAEC","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UggmC4suz5PNg3gCoAEC","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugi8KLtUpuUXmngCoAEC","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_Ugh9GeRqG8Yl1HgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
```
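A minimal sketch of how a raw response of this shape might be parsed into validated records, assuming the model reliably returns a JSON array like the one above. It reuses the `CodingResult` class from the earlier sketch; the function name is hypothetical.

```python
import json

def parse_raw_response(raw: str) -> list[CodingResult]:
    """Parse a raw LLM response (a JSON array of coding objects) into records.

    Assumes the model returned well-formed JSON; in practice a repair or
    retry step would be needed for malformed output.
    """
    results = []
    for row in json.loads(raw):
        result = CodingResult(
            comment_id=row["id"],
            responsibility=row["responsibility"],
            reasoning=row["reasoning"],
            policy=row["policy"],
            emotion=row["emotion"],
        )
        result.validate()  # reject values outside the observed code sets
        results.append(result)
    return results
```

Validating each row at parse time surfaces any off-schema value the model invents as soon as the batch is ingested, rather than letting it propagate silently into downstream counts.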