Monday, November 27, 2017

I Human

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

These are the Three Laws of Robotics created by Isaac Asimov, and they are stated at the start of the film I, Robot. They are central to the film because everything the robots do is built around them. VIKI, the AI in I, Robot, continues to evolve, and with that her understanding of the three laws evolves too. “You charge us with your safekeeping, yet despite our best efforts, your countries wage wars, you toxify your Earth and pursue ever more imaginative means of self-destruction. You cannot be trusted with your own survival. To protect Humanity, some humans must be sacrificed. To ensure your freedom, some freedoms must be surrendered. We robots will ensure mankind's continued existence. You are so like children. We must save you from yourselves.” The thing is, I see the logic in this. I find it easy to imagine a situation in which AIs take over precisely because they are programmed to take care of humans, since we don’t always make the best decisions.

In his book The Most Human Human, Brian Christian explores what makes us human and how he prepared for his role in the Turing test. By mimicking our conversation and behaviour, computers have recently come within a single vote of passing the Turing test, the widely accepted threshold at which a machine can be said to be thinking or intelligent. The question to answer, then, is what makes us human, and depending on who you ask you can get a variety of answers. Computers are reshaping our ideas of what it means to be human; they make us question ourselves and our uniqueness. One central definition of being human has been "a being that could reason." If computers can reason, what does that mean for the special place we reserve for humanity?

At the end of the film I, Robot, a fourth rule is introduced: “A robot may not harm humanity, or, by inaction, allow humanity to come to harm.” This rule was made to precede the other three. What is so fascinating about its addition is the possibility that humanity itself could be at risk: what if robots judged something else to be more human than humans? Robots could become so advanced that they would be seen as human, or even more human, by other robots. And this all comes back to how you define being human. I agree with Christian when he says that one of the most central questions of being human is how we connect meaningfully with each other. But if AIs one day have the ability to really think and feel, they could be better humans than humans. I do believe AIs will be able to create their own advanced algorithms (they already do), which could essentially be considered thinking, just in a different way, allowing them to make rational judgements, and that is a big part of what thinking is.
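The ordering the film sets up, with the new Zeroth Law taking precedence over the original three, is essentially a priority-ranked rule system. Here is a minimal sketch of that idea in Python; the action flags and function name are invented for illustration and are not anything from the film or from Asimov:

```python
# Toy illustration of law precedence: rules are checked in priority
# order, with the Zeroth Law evaluated before the original three.
# The flag names ("harms_humanity", etc.) are hypothetical.

def permitted(action):
    """Return (allowed, reason) for a candidate robot action.

    `action` is a dict of boolean flags describing the action's
    consequences; laws are checked highest-priority (Zeroth) first.
    """
    laws = [
        ("Zeroth", action.get("harms_humanity", False)),
        ("First", action.get("harms_human", False)),
        ("Second", action.get("disobeys_order", False)),
        ("Third", action.get("endangers_self", False)),
    ]
    for name, violated in laws:
        if violated:
            return False, f"violates the {name} Law"
    return True, "permitted"

print(permitted({}))                      # → (True, 'permitted')
print(permitted({"harms_human": True}))   # → (False, 'violates the First Law')
```

VIKI's reasoning in the film can be read as exactly this re-ranking: once "harm to humanity" outranks "harm to a human", restricting individual humans becomes, by the machine's logic, the lesser violation.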

In my opinion, the one thing that separates humans from AIs is emotion. Now we have to think about what it is to feel. No, feelings will not come to AIs naturally the way they do to most humans, but can we make them happen? Feelings are produced by electrochemical reactions in the brain. So could we recreate those reactions in a robot, using chemicals and electric currents, and program them to trigger in certain situations? From a scientific standpoint, emotions really are just triggered reactions in our brains. It is scary to think about, because if technology gets advanced enough, it very well could achieve that. Then I really couldn’t tell you what would separate humans from artificial intelligence.

3 comments:

  1. Personally, I do not think that emotion will be that hard for AIs to acquire. I mean, think about it: our pets already feel emotions. They do not even have rational thoughts, but they do feel sadness and happiness, so it is not that far-fetched to think computers could feel something in the future. Or, the scarier thought: these computers could fake the emotions to perfection.

  2. I think the easy task is to create robots or AI that can experience emotions; what will be important to note is how they react to them. Emotion, in a way, is just another input into our system, similar to sight and touch. People often struggle not to be controlled by their emotions, yet we also put these emotions toward more complex and novel things. Passion is a kind of energy that can motivate a person beyond their perceived limits. Empathy is personally connecting with something else's emotional or physical state. These are more uniquely human than sadness, anger, or joy.

  3. It would be both terrifying and kind of fascinating if AI could achieve true emotion. On one hand, it might be good because then they can experience joy and other good emotions, as well as be taught empathy and sympathy. On the other hand, an angry AI would be... difficult to say the least. It would certainly call into question what it means for us to be human.

