Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- 2 things: 1: AI is doing a great job 2. Men don't look for the details as like … (`ytc_UgyX8--oD…`)
- @lukec5623 I think he might be refering to a feedback loop. As this system is a … (`ytr_UgzGWYrB3…`)
- That would be incredibly hard to do. A good amount of people here are going to … (`rdc_g9sz7lq`)
- Immoral, ungodly, reductionist to the point of occluding obvious future possibil… (`ytc_UgwnVSuzS…`)
- Thanks for sharing your thoughts! It really is a mix of fascinating and eerie, i… (`ytr_Ugz1imU_j…`)
- Since YT won't let me post a links, please search: "'Lavender’: The AI machine… (`ytc_Ugw2LjoLg…`)
- I wonder what would happen if that robot turn that machine gun on 🤔 haha… (`ytc_UgztLdpJD…`)
- I would happily have AI replace CEO’s. Seems like the job has long hours, high s… (`rdc_m2arrl0`)
Comment

> Any robot that can become smarter due to the more interaction they have, can AND WILL eventually outsmart human beings. I mean are we seriously going to ignore every single movie that shows how this shit ends???

youtube · AI Moral Status · 2021-05-21T05:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | unclear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[{"id":"ytc_Ugyy4i3cJRPMjLX6hWJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgzQuyUZK7iqsN-s87V4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzgYkSZYtFH-MHUOEl4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy2Qyf0rsTF07188rt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwB0XaKzX9IODLRVRR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxF_UhhLA7LKsOBmyZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxBI9E7xSTzfTrXgeJ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugw_agtYV5fW7-ImDxh4AaABAg","responsibility":"company","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgwTPnQXbfmc-2BvFel4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugy9v0WTjzOYTnHNIIJ4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"mixed"})
```
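Note that the Coding Result table shows every dimension as "unclear" even though the raw response does contain coded values. One plausible explanation is that the response is not valid JSON: it closes with `)` rather than `]`, so a strict parse would fail. A minimal sketch of a tolerant loader, assuming the field names shown in the table above; the function names and the fall-back-to-"unclear" behavior are hypothetical illustrations, not the project's actual code:

```python
import json

# Dimension names as they appear in the Coding Result table above.
DIMENSIONS = ["responsibility", "reasoning", "policy", "emotion"]


def parse_coding_response(raw: str) -> dict:
    """Parse a raw LLM coding response into {comment_id: {dimension: value}}.

    The model is asked for a JSON array of objects, each with an "id" plus
    one field per dimension. If the text is not valid JSON (e.g. it ends
    with ")" instead of "]"), nothing is recovered and every comment later
    resolves to all-"unclear", consistent with the table above.
    """
    try:
        records = json.loads(raw)
    except json.JSONDecodeError:
        return {}  # hypothetical fallback: treat the whole batch as unparsed
    return {
        rec["id"]: {d: rec.get(d, "unclear") for d in DIMENSIONS}
        for rec in records
    }


def lookup(parsed: dict, comment_id: str) -> dict:
    """Return coded values for one comment, defaulting every dimension to "unclear"."""
    return parsed.get(comment_id, {d: "unclear" for d in DIMENSIONS})
```

For example, a well-formed batch yields the coded values directly, while the malformed batch above would resolve to "unclear" on every dimension.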