Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
saying we shouldnt have robots that can identify enemies because they might kill…
ytc_UgxrtkvP9…
My sense is that people who are saying to be polite with AI, actually haven’t sp…
ytc_UgzfVzHvG…
I'm not trying to jump to any conclusions but like, what would it HURT to make s…
ytc_UgxgL-VAr…
What do we need to power our world ?
What does AI need to power it's world ?
Ene…
ytc_UgxmojGQD…
When you plug ChatGPT with Robot Machine 😂
They can’t still think of themselves…
ytc_UgwU4QtTo…
Good news some peeps already experimenting trying to train AI with another AI to…
ytr_Ugw03tJTA…
Hi frostyboy500. It's certainly not an ideal setup, but since a subreddit can p…
rdc_cfkuaeb
Because you never know what political party is in next. Your bad think might be …
ytr_UgyXThsl2…
Comment
Here are just some of my personal ideas about the topic; no offense to anyone, I just want to raise some questions for further discussion:
Let's just say we can give robots rights and self-consciousness, so the AI should be able to feel and behave like a normal human being. As normal human beings, we always have curiosity and the eagerness to explore what's out there; we want to interact and expand our knowledge and vision as time goes on. So how could a machine do such things if, in the present day, we still haven't found a way to sustain the life of a robot? Every living thing needs energy to work. Just as we need to eat every day, robots need to plug themselves into a "power station" to recharge, and they also need to carry a battery strong enough to sustain them for at least 6-8 hours in order to live like a human. This, I think, leads to a problem: we can sustain a small AI like Siri for a full day on our phones now, but we still haven't been able to create a battery that is reasonably compact and economical enough for a full "human scale" robot.
So I think the bigger question we need to solve first, if we want robots to be treated like a "next-gen human society" or as equal to humans, is not whether we should be afraid of what will happen if we give them consciousness, but how we can provide them the basic needs for their survival as normal human beings, make them feel that they are being treated as our equals, and let them explore themselves as well as be educated.
I strongly believe that the main problem that could cause robots to turn against us is that people don't treat them as persons. That is really hurtful for a conscious mind, and it also teaches them to treat themselves as a race that "should be more advanced than humans," thus leading to their uprising behavior toward their creators.
youtube
AI Moral Status
2017-02-23T16:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | deontological |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[{"id":"ytc_UgiJxrcUrvuH_3gCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgiTPqoAdEgMr3gCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_UggC_jx4u5W3BXgCoAEC","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UghhWiVkOMPmungCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UghBsb6B-kdY_XgCoAEC","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugg9i1U5KLMObngCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UghODAUsQRPifngCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Uggq231mY4_ztHgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgjF3w78FAbALXgCoAEC","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgiiPSD_XyGyKXgCoAEC","responsibility":"developer","reasoning":"deontological","policy":"none","emotion":"outrage"}]
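The raw response above is a JSON array with one object per coded comment. A minimal sketch of how such an output could be parsed, validated, and indexed for the ID lookup shown above (the allowed values are inferred only from the table and JSON on this page; the actual codebook may define more, and the function names are hypothetical):

```python
import json

# Allowed values per coding dimension, inferred from the table and raw
# response above; this is an assumption, not the full codebook.
ALLOWED = {
    "responsibility": {"none", "developer", "ai_itself"},
    "reasoning": {"deontological", "consequentialist", "mixed", "unclear"},
    "policy": {"none", "ban"},
    "emotion": {"indifference", "fear", "mixed", "approval", "outrage"},
}

def validate_codings(raw: str) -> list[dict]:
    """Parse a raw LLM response and check every coded record.

    Raises ValueError on a malformed record so bad codings are not
    silently stored.
    """
    records = json.loads(raw)
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing id: {rec!r}")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim} value {value!r}")
    return records

def index_by_id(records: list[dict]) -> dict[str, dict]:
    """Build the comment-ID lookup used by the inspector."""
    return {rec["id"]: rec for rec in records}
```

With records indexed this way, the "Look up by comment ID" view reduces to a single dictionary access, and any off-schema value from the model is rejected at ingestion time rather than surfacing later in the coding table.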