Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- ytc_UgyQOJzOr…: "Trying to explain to ai users why it's wrong is like trying to explain to rude c…"
- ytc_UgzPYYLOn…: "Sorry but Blake is wrong. A robot can respond with what sounds like it has fee…"
- ytc_UgwOJJkHY…: "The way I poison the AI image generators is that I suck at drawing. My crap is s…"
- ytc_UgwSbaV7h…: "I had someone try to argue that using AI is like using any tool. That they manag…"
- ytc_UgzA-RTlS…: "I was sad to hear these guys are already linked up. What is to stop them from le…"
- ytc_UgzeoEzz3…: "Well, let me ask you this. Business is automating to save costs & increase profi…"
- ytc_UgzedWS9m…: "I used to work for Waymo, and it uses guard rail like mapping technologies with …"
- ytc_Ugw984TZc…: "I doubt the AI Is racist... most likely the creators coding. But in your video …"
Comment
Let's put some fears to bed
Will robots gain self-consciousness and take over humanity?
It is unlikely robots will achieve self-consciousness like in Hollywood blockbusters such as Terminator and I, Robot.
The reason is the massive amount of processing power needed, battery capacity that can outperform today's 1–4 hours of operational time, and the storage a robot would need to hold as much data as we do.
Therefore robots will work the way AI assistants such as Google Home and Alexa currently do: cloud-based.
This means that, to the robot, its body isn't unique to itself or essential to its survival. If a robot "died" when it lost power (i.e. a factory reset), and it was self-conscious to the point that it didn't wish to be eradicated, then yes, the desire to survive would matter to it.
But where it is stored in the cloud, the physical part of the robot, its body, is nothing more than a mere shell; its destruction by humans would mean nothing to it, as it can simply operate another shell or robot and continue being "alive".
Therefore it faces no threat from us.
And this, by the way, is a big "if": we are assuming we would reach the FLOPS required to imitate human cognitive abilities.
For now, AI simply performs Google searches, using speech recognition to turn audio into text for searching the internet, and text-to-speech to read back the results.
It's good we have face recognition, as this is a prerequisite for making a lifelike robot.
Can robots not take over the world, then?
Oh, don't get confused: they will take over, just in different, non-threatening ways. They will start to be used to care for the elderly, serve in restaurants, bars and pubs, and make their way into classrooms as teaching assistants, maybe lunchtime assistants. That means a reduced need for human staff, which ultimately means a reduced workforce and higher unemployment.
This in turn means families are less likely to have children, and we (not them) will choose to reproduce less, bringing the population down ourselves. It won't be painful or fearful; it will simply be our own decisions and preferences. In doing so, the climate may also improve: fewer people driving, less need for farming, fishing, plastics, paper and other forms of natural destruction.
We will learn to grow dependent on them in other ways: they may be driving vehicles (where autonomous vehicles have struggled to take off), or working in GP surgeries once the technology becomes trusted and advanced enough to reduce large volumes of errors, i.e. to be as good as a GP, maybe even in operating theatres (which, by the way, is a long, long way from today's tech).
But what if a robot could X-ray you or perform an MRI scan by the mere act of having you walk through a scanner, similar to an airport metal detector?
Isn't this sci-fi like Star Trek, and haven't we aspired to make that a reality over the years with things like communicators (mobile phones), an AI-assisted ship (Alexa, Google Home), and smart devices such as climate control and voice-controlled lighting (smart thermostats and lights)?
It's an exciting future, not one to be feared.
Source: youtube · AI Moral Status · 2021-05-25T10:2… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxE1Rz0nsXEXNmGDsN4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugz43h9oEILBK_e5lst4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzJmLX3ItHPviDUYoJ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz6jUv21Od8BmKZsAR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxichsBaU5Qlz_gAFp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgwzCTMlTArvA-3kj6N4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgyeaWRHjgaJSpDU9fN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwNiMcudtxGIqcVxQ94AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwLaVqxadFt-THDfT14AaABAg","responsibility":"user","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgzXm8itjx8RaCH34vt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
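The raw response is a JSON array with one object per sampled comment and four coding dimensions (`responsibility`, `reasoning`, `policy`, `emotion`). A minimal sketch of how such a response could be parsed, indexed by comment ID, and rendered as the dimension table shown above (the function names here are illustrative, not the tool's actual API):

```python
import json

# A response in the same shape as the raw LLM output above
# (two rows kept for brevity; IDs taken from the batch shown).
raw_response = """
[
  {"id": "ytc_UgzXm8itjx8RaCH34vt4AaABAg", "responsibility": "none",
   "reasoning": "consequentialist", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgxE1Rz0nsXEXNmGDsN4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"}
]
"""

def index_codings(response_text: str) -> dict:
    """Parse the JSON array and index each coding by its comment ID."""
    rows = json.loads(response_text)
    return {row["id"]: row for row in rows}

def coding_table(codings: dict, comment_id: str) -> str:
    """Render one comment's coding as a markdown dimension table."""
    row = codings[comment_id]
    lines = ["| Dimension | Value |", "|---|---|"]
    for dim in ("responsibility", "reasoning", "policy", "emotion"):
        lines.append(f"| {dim.capitalize()} | {row[dim]} |")
    return "\n".join(lines)

codings = index_codings(raw_response)
print(coding_table(codings, "ytc_UgzXm8itjx8RaCH34vt4AaABAg"))
```

Looking up `ytc_UgzXm8itjx8RaCH34vt4AaABAg` reproduces the Coding Result table above (responsibility none, reasoning consequentialist, policy none, emotion resignation).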