Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
I'm not really believing all this hype. They haven't even been able to build a s…
ytr_UgyupbznF…
AI pose no risk to human survival alone, but if someone program them to wipe us.…
ytc_Ugymf1lyk…
If the jobs that are going away involve "rote memorization", is it really fair t…
ytc_Ugx9W_wIk…
Is ai artist a thing now? I wish social medias will label what ai arts and not!…
ytc_Ugzai376k…
What is the best way to utilize AI in programming? Is there Udemy courses or You…
ytc_UgzJlB83J…
What are you talking about?
China is a formidable military force, at home. They…
rdc_ohyjd2r
I believe people will end up super lazy and stupid going forward in time because…
ytc_UgwhPjVhU…
Krystal is overstating how hard AI is to understand by developers in its current…
ytc_Ugw0OA77g…
Comment
we're are not that close to life-like artificial intelligence. Our computers and their type of programing(binary) does not allow for anything close to human thought. As far as I can tell we won't even be capable of that level of thought until we develop and integrate quantum physics into our computers. Even still it would take clever programming to even achieve any sense of morality. Im doubtful that people will want to create artificial beings capable of feeling pain and loss. How evil would it be just to conceive their design. I would call it a crime against nature. Robots are tools and achievement, something we humans can understand the pain of. Why would we let them feel it as well, and how do we benefit from such endeavors?
youtube
AI Moral Status
2017-02-23T19:2…
♥ 35
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_Ugj9uA4E2qdNfHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgjnOffaiIS5qHgCoAEC","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugj98md7zFOMrHgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgitNrH9VLI5X3gCoAEC","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"indifference"},
{"id":"ytc_Uggkdf3AcQC3ZHgCoAEC","responsibility":"developer","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UggqumG_AwEw_ngCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UggJ3-NtmsdA4ngCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UghacdqQa_8JXXgCoAEC","responsibility":"none","reasoning":"deontological","policy":"regulate","emotion":"indifference"},
{"id":"ytc_UggZ2aPEfECZoXgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgiBCDn6kZ0PaHgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"}
]
```
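A batch response like the one above has to be parsed and sanity-checked before its rows can be joined back to comments by ID. The sketch below is one minimal way to do that in Python; the allowed label sets are inferred from the values visible on this page and are an assumption, not the project's actual codebook.

```python
import json

# Allowed values per coding dimension -- ASSUMED from labels seen in this
# dashboard (e.g. "resignation", "ai_itself"); the real codebook may differ.
ALLOWED = {
    "responsibility": {"none", "developer", "government", "ai_itself"},
    "reasoning": {"unclear", "consequentialist", "deontological"},
    "policy": {"none", "unclear", "regulate", "liability"},
    "emotion": {"mixed", "outrage", "fear", "indifference", "resignation"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response, keeping only well-formed rows.

    A row is kept if it has an "id" and every dimension carries a known label.
    """
    rows = json.loads(raw)
    valid = []
    for row in rows:
        if not isinstance(row, dict) or "id" not in row:
            continue  # malformed row: skip rather than crash downstream joins
        if all(row.get(dim) in labels for dim, labels in ALLOWED.items()):
            valid.append(row)
    return valid

# Example: one valid row, one row with an unknown emotion label.
raw = (
    '[{"id":"ytc_a","responsibility":"none","reasoning":"unclear",'
    '"policy":"none","emotion":"resignation"},'
    '{"id":"ytc_b","responsibility":"none","reasoning":"unclear",'
    '"policy":"none","emotion":"joyful"}]'
)
kept = validate_batch(raw)
print([row["id"] for row in kept])  # only the row with known labels survives
```

Dropping rather than repairing bad rows keeps the pipeline honest: a row whose labels fall outside the codebook can be re-queued for re-coding instead of silently coerced.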