Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Until the AI can just answer the e-mails without needing a human to prompt it. I…" — `ytr_UgwHGKUJN…`
- "God I work at my alma mater and my job is pushing so hard for us to all start us…" — `ytc_UgxNDdsQw…`
- "If AI continues to improve, the result will be something between a complete inhe…" — `ytc_Ugxzw5aj8…`
- "Kk KFC femboy Hooters and Hooters girls we understand nobody has any say in the …" — `ytc_UgyKKr_7G…`
- "Remarks by Mark Zuckerberg, the founder of Facebook and Instagram, about the fut…" — `ytc_UgwloAeUD…`
- "In 1996 I wrote some papers on AI for grad class. I gave examples and a lot has …" — `ytc_UgzOTW_b_…`
- "We have one AI podcast already in Mexico a podcaster Roberto Martínez created h…" — `ytc_UgwCDdDlm…`
- "AI cannot do hairdressing, there will be a lot of people with really slick haird…" — `ytr_UgyEIXyhp…`
Comment
I fail to see why many of these AI speculations assume that we will force a sentient machine to work. Sure, it will likely happen in some instances. But if we as human's spent centuries dehumanizing each other for labor, why would we even bother elevating robots to that level?
As to the "It'll happen before we realize" argument, I wager the machines will be set back long before they have the chance to develop consciousness. If they start making *any* decisions that don't fit their intended purpose, they'll be reprogrammed to remove that quirk.
Now, the luxury and pleasure industries. Those I concede may give rise to artificial intelligence. We already see dating sims and chatbots popping up with ever increasing detail. It may even be that marriage counselors will be the most equipped at dealing with AI. xD
youtube · AI Moral Status · 2017-02-27T01:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_Ugg3xHoUtx6gWngCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UggE8ZCLy_Y7-XgCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UghT7BJ_Jkv_q3gCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UghD0PHvZSddz3gCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
  {"id":"ytc_UghzdlAYEf702XgCoAEC","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugi0wlK0xxTZ3XgCoAEC","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UggL_n6lQWteeXgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UggLBNtGHpEtHHgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgiX-KqMqEVNV3gCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytc_UgivRlFZ-T5UaXgCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"}
]
```
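A raw response like the one above can be parsed and indexed by comment ID to support the lookup shown on this page. The sketch below is a minimal, hypothetical example: it assumes the response is a JSON array of records with the four coding dimensions, and the `ALLOWED` value sets are inferred from the responses shown here, not taken from an authoritative codebook.

```python
import json

# Two sample records copied from the raw response above (truncated to keep
# the example short).
RAW = """[
  {"id":"ytc_Ugg3xHoUtx6gWngCoAEC","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytc_UghD0PHvZSddz3gCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"}
]"""

# Allowed values per dimension, inferred from this page (assumption).
ALLOWED = {
    "responsibility": {"none", "developer", "government", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "ban", "regulate", "liability"},
    "emotion": {"fear", "outrage", "resignation", "indifference", "mixed"},
}


def index_codings(raw: str) -> dict:
    """Parse a raw LLM response and index its records by comment ID,
    rejecting any record whose value falls outside the coding scheme."""
    by_id = {}
    for rec in json.loads(raw):
        for dim, allowed in ALLOWED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim!r} value {rec.get(dim)!r}")
        by_id[rec["id"]] = rec
    return by_id


codings = index_codings(RAW)
print(codings["ytc_UghD0PHvZSddz3gCoAEC"]["policy"])  # -> ban
```

Indexing by ID makes the "Look up by comment ID" operation a single dictionary access, and validating at parse time surfaces any value the model emitted outside the scheme before it reaches the coding table.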