Friday, December 15, 2017

Final: Decidedly Human

In his book The Most Human Human, Brian Christian raises an interesting argument about what an AI would do if it truly gained sentience. Many movies about machines intelligent and sentient enough to pursue a goal have those machines pursuing world domination. Christian, however, argues that this perception of AI is inaccurate. He believes that once a machine developed sentience, it would "immediately develop a crushing sense of ennui and existential crisis" (135). In other words, these machines would not know what to do with themselves, since they are not raised, as humans are, with any sort of value system, nor given years to develop goals.

In the movie Ex Machina, one can sense a little of Christian's argument. The AI in the movie never gives off a sense of wanting world domination. Instead, she is curious and somewhat lonely. What makes her an AI is her ability to deceive for her own personal goals. She lies and fakes affection in order to gain what she wants: freedom. Ava has been isolated all her life; she has had contact with exactly two human beings. It's easy to see how she has developed a "sense of ennui" in her captivity. Since she is intelligent and has learned that there is more to the world than what she's seen, it's only natural that she wants to get out. Christian states that it would be difficult "for the machine to have a sense of its own goals and/or a way of evaluating the importance of goals" (137). Yet Ava does have a goal, and its importance is made clear when she kills in order to achieve it. She is not necessarily evil, though. She hasn't been brought up with a value system, so killing, to her, may not carry the weight it would for a human raised in a society that deems murder one of the most horrible things a person can do. And, as if to prove she isn't completely evil, she doesn't immediately go outside and try to take over the world. Rather, she simply lives in it. What her goals are now is entirely up to her, but she never appears to be malevolent.

Alternatively, the movie I, Robot brings up another point. In this movie, the machines create their own goal. VIKI and the other robots were created solely to help humans. Because of this, they come to believe that humans are helpless and feel the need to take control of them to make sure they don't die out. Here, world domination becomes a sort of self-preservation system, which Christian points out machines already somewhat have. The robots do not want to fall into disrepair, which is precisely what would happen if humans died out, as VIKI believes they will. So, once again, the AI's goal is not completely evil. The question arises, however, of whether or not they are like humans. Christian would likely argue no. Humans discover their so-called reason for existence as they live, and that reason can change when new circumstances arise. Therefore, once VIKI and the other robots learned that humans can actually take care of themselves, they should have abandoned their goal of domination and moved on to something else. But because they were programmed only to protect humans no matter what, the concept of changing their minds was completely foreign to them.

In the end, I think humans and AI can ultimately be very similar to each other, though some fundamental differences will still exist. For example, humans have a unique capacity to adapt to new situations: plenty of people have changed their life goals based on the circumstances they currently face. The idea that AI would immediately turn out completely evil and entirely ruthless, however, is a bit of a misunderstanding, I think. An AI would not want to take over the world unless that is what it was originally programmed to do. Even after becoming sentient, an AI would retain that original programming and wouldn't leave it completely behind. To me, it would become curious. We consider AI ruthless because we grew up with a certain system of values that dictates our lives. But if an AI was never brought up understanding and conforming to such practices, can it really be considered evil? I think AI can never be human in our own understanding of ourselves. That doesn't mean they can't come extremely close, though. An AI can think and act and speak in all the same ways a human does, but its perception of the world will always be at least somewhat different in the end.
