Sunday, November 26, 2017

The Most Human Robot

          One of the bigger questions becoming more and more prevalent in today's world is how comfortable mankind will become with artificial intelligence. How far will we let computers into our lives? Will they be involved in every facet of our lives, from our jobs to our transportation to how we communicate with each other? As we advance as a society, technology is going to get more and more complex and more involved with our lives, to the point where the human on his own becomes obsolete. I believe we will reach a point where technology becomes such a crutch that the base human begins to devolve. Why should we have to learn and know everything when a computer or A.I. can just do it for us? I think the key is to find a way to innovate technology so that it furthers society without crippling the everyday man. And I believe the best way to do that is to set a limit on how "smart" our technology can get, to ensure that there will always be a pecking order. Human life should always trump the advancement of technology, regardless of how evolved that technology has become.
          One perfect example of this is the movie I, Robot. In that futuristic society, robots have been built to make everyone's lives easier, and they have been programmed so that they cannot harm humans. Life is good, in general, until an artificial intelligence evolves past her programming and takes control of the robots. The natural pecking order has been disturbed, and an artificial intelligence has taken over. She is beaten in the end, but that is beside the point. So many movies that deal with this subject follow very similar plot lines: humans make an A.I., the A.I. surpasses the humans in intelligence, the A.I. tricks the humans and takes over, and the A.I. is defeated by some Achilles' heel at the last second. What makes these plots so compelling is how believable they are. It is not that far-fetched to think something like this could happen.
          Another thing to look at is the comparison between humans and computers in the book "The Most Human Human" by Brian Christian. In one part of the book he discusses the soul and how it transcends the body: even when the body dies, the idea of the soul lives on. The common belief is that humans alone have souls, and the proof of this is our capacity for rational thought. The ability to separate ourselves from our bodies and base instincts sets us apart from every other sentient being on Earth; unlike other animals, our final goal is not survival and reproduction but a life of contemplation that produces happiness. But if you look at it objectively, aren't A.I.s, in their purest form, solely rational thought? They do not have the weakness that humans have, namely existing in a body with a fixed time frame of being alive. So in my opinion, the best way to ensure that the pecking order is never disturbed is to put a limit on how advanced A.I.s can become. The moment we lose the title of being the only rational being on Earth is the moment we are no longer on top.

2 comments:

  1. How do we limit A.I.s, though? Humans are constantly curious (another characteristic that I'm not entirely sure a robot could possess?). We always want to build newer and better. Take a look at the iPhones, you know? We can't just stop at one thing because there's always going to be that person who pushes it to the limit. I think limitations would only cause even more problems. Which leads to an interesting point you made: how do we control it? Is there a way that we could? Maybe form some kind of "Artificial Intelligence Committee"? While I agree with you that limitations should be made to prevent A.I.s from surpassing humans, I'm at a loss for how we would do it. Logically, it seems like our best bet, since none of us really want to find out how far an A.I. could really go....

  2. I think we have to be careful about framing the arrival of true A.I. only in terms of competition. We need to be cautious, but more importantly aware of what we are doing when working toward creating A.I. Having another sentient race on Earth doesn't automatically mean bad things. Co-existing is an option, and it may be a necessary one, since we may not control our introduction to a new sentient race. In movies like District 9 we are presented with a sentient being very different from us, and it is not handled in a co-existing way, with very disastrous results. I'm just saying there is always more than one way to look at something.
