Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "Elon musk you need to be careful with your AI because it could take over the wor…" (ytc_UgzLZVgu2…)
- "The most dire of outcomes is a looming crisis of humanity itself as millions of …" (ytc_UgxIqkf2X…)
- "In his example, he says super-intelligent AI could possibly form very convincing…" (ytc_UgxtjT-X2…)
- "There using AI in the warehouse to see who to pay more money I've heard now we w…" (ytc_UgxfTrRAw…)
- "But we all benefit from it because almost everything we buy that is so inexpensi…" (rdc_gx5lsp0)
- "10 years ago I wrote a paper in university concerning the ethical implications a…" (ytc_UgwfuReyx…)
- "You better add medicine on that list as ai can find developing cancer in humans …" (ytr_UgyQTmv4H…)
- "nailed it!.. 100% correct.. AI simply cannot replace a human and all the things …" (ytc_Ugx2YmLlv…)
Comment

> Once robots become self aware - they will gain a sense of self preservation too.If the ever feel threatened (by it by natural disasters, other robots, or let's face, most likely humans) it will lead to strife, even war. Unless we guarantee them same human rights as we grant other human beings.
>
> Btw, one clever commenter below said - with robots being so much smarter, what makes us think the issue will not be "if humans deserve robot rights?"

Source: youtube · AI Moral Status · 2017-02-24T21:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | contractualist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
{"id":"ytc_Ugha-oJt_DsgWXgCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UgiJNQy1_UpMX3gCoAEC","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgjTxWp0UNLVk3gCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UggPw_bN0ng11ngCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgiRC98B3aBkr3gCoAEC","responsibility":"none","reasoning":"contractualist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgiKrWdtOG_Tx3gCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UghJw5uloiiWqngCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UghfigvnGzz6L3gCoAEC","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgiLxWdsHUjz6ngCoAEC","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugg11ud6zdAB_XgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"indifference"}
]
```
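A batch response like the one above can be parsed and sanity-checked before its codes are stored. The sketch below is a minimal, hypothetical validator: the `SCHEMA` value sets are inferred only from the codes visible on this page (the full codebook may define more categories), and the ID prefixes (`ytc_`, `ytr_`, `rdc_`) are assumed from the sample IDs shown.

```python
import json

# Allowed values per coding dimension, inferred from the samples on this
# page — hypothetical; the real codebook may include additional categories.
SCHEMA = {
    "responsibility": {"none", "developer", "ai_itself"},
    "reasoning": {"deontological", "consequentialist", "contractualist", "mixed"},
    "policy": {"none", "regulate", "ban", "liability"},
    "emotion": {"indifference", "fear", "mixed", "outrage", "approval"},
}

# Comment-ID prefixes observed above (YouTube comments/replies, Reddit comments).
ID_PREFIXES = ("ytc_", "ytr_", "rdc_")

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and keep only schema-conformant records."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if not rec.get("id", "").startswith(ID_PREFIXES):
            continue  # skip records with an unrecognized comment-ID prefix
        if all(rec.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(rec)
    return valid

raw = ('[{"id":"ytc_UgiRC98B3aBkr3gCoAEC","responsibility":"none",'
       '"reasoning":"contractualist","policy":"regulate","emotion":"fear"}]')
print(len(validate_batch(raw)))  # 1
```

Filtering rather than raising on a bad record lets a long coding run continue; rejected rows can be re-queued for a retry prompt instead of aborting the batch.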