Monday, November 28, 2016

The Human Perspective of A.I.

As the weeks pass by, we are taking the time to learn about Technology, A.I., and Humanity. The movie Avengers: Age of Ultron offered an interesting perspective on A.I. rising to create destruction. Ultron came to this conclusion because of a few interesting factors; robots operate on one basis that Christian discusses: computability versus complexity. Ultron calculated how to help humanity by seeing that humans have always worked against themselves, choosing destruction over cooperation and real growth. He concluded that the only way to help and save humanity was to kill us all, since we are such a destructive species.

The factor of computability and complexity is something that Christian talks about in reference to understanding the psyche of humans and the motivations of robots. Computability is the thinking process of robots: probability. Robots will only take actions justified by probability, such as a 95% chance of success or a 75% chance of failure. Humans, on the other hand, are complex creatures. We see and understand things in a different light. We look at the impossible and scream possible; we see the everyday tasks of life and call them mundane. Humanity fails to realize the beauty in our complexities and imperfections. In the movie I, Robot, starring Will Smith and featuring the A.I. Sonny, Smith's character confronts this problem and the conflict within it.

The movie I, Robot is set in a world where A.I. are integrated into everyday human life. People own robots and send them to run errands; others work for their owners, doing things like selling food, and they are the new customer service reps. Will Smith's character comes across a higher-thinking A.I., Sonny, and believes Sonny killed his own creator, when in actuality it was the operating system making it appear that way. He and Sonny create a unique bond between man and machine, shaped by his previous encounters with robots and his bias against them. Remember that computability versus complexity factor? Well, this is where it comes into play in their relationship dynamics. In a bad car accident, Will Smith's character was trying to save a small child from drowning in a sinking car. A robot saved him instead of the child because the probability of saving his life was higher. He hated that and even screamed that the robot should save the child instead. From the human perspective, we value a child's innocence and potential to grow and be great more than someone who has aged and made mistakes, or who, as in his case, dedicates their life to serving the community.

This connects to Christian's point about current customer service and the transition from human to machine. Christian talks about how we interact with humans and get bounced back and forth and put on hold for long spans of time. With the integration of A.I. as in I, Robot, the robots are programmed to answer calls and actually solve the problems people have. Will Smith's character was among the very few who didn't take a liking to the active integration of A.I. He always preferred interactions with humans, as well as classic apparel. For myself, I can say that I really prefer human interaction, because in the present day the machines aren't higher-thinking, and the prerecorded systems can't solve my problems effectively.

In both films we take a look at human perspectives on, and acceptance of, A.I. In Age of Ultron we saw that A.I. may speed up our own demise, and in I, Robot we saw that A.I. may become beings of higher thinking (or processing, lol). Age of Ultron did, however, take me out of my element in regards to educational movies, because the realm of heroes and villains usually plays on other topics, but seeing it in a new light was interesting. I'm really liking these last weeks of Technology, A.I., and Humanity.

Saturday, November 26, 2016

Is the A.I. Life A Game?


To extend our discussion about artificial intelligence, it seems that not much talk is to be had over whether one approves or disapproves of A.I., considering that they are likely to emerge and begin to blend into society, as has been shown in the various films we’ve watched. In The Most Human Human, Brian Christian considers the many definitions of the meaning of life from a human perspective. “Games have a goal; life doesn’t,” is one point that is made clear when he explains what existentialists think. This is interesting to ponder, in my opinion, especially because games and the quest to be human came up quite often in the film A.I. Artificial Intelligence.


A.I. Artificial Intelligence is about a robotic boy named David, the first robot programmed to love, who is adopted as a test case by a family whose own son is not around. “Though he gradually becomes their child, a series of unexpected circumstances make this life impossible for David.” David goes on a long journey to figure out how he can become a real boy, in an attempt to convince his adoptive mom to love him so that he can live with her again. He finds this courage from hearing his mom read the Pinocchio story. He truly gained the desire to be loved upon seeing that he was capable of love.

David never willingly hurt another human; the innocence he possessed drew up just as much care for him as I’d have for a normal child. In a world in which robots were a normal part of society, do we have a moral responsibility to take care of robots (especially when they are in the form of little children, with the capability to love)? To what extent do we love them back? In this case, the mom’s real son and husband were not in agreement with keeping David around, so she sadly had to take him away. Was she wrong to abandon one who identified as her son even though he wasn’t a real boy? David wasn’t even stuck on this question, though. He was motivated by love and by fulfilling his dream of being happy and loved by his mom. When he was in the home with the family, if David didn’t understand something, he would ask, “Is this a game?” Usually it was. When he was being dropped off for good in the woods, he asked if it was a game, and his mom replied, “no,” with tears in her eyes.

As mentioned earlier, games have an end goal, unlike life, from some perspectives. The end goal of human life, from an existentialist point of view, is determined by humans. This is seemingly one of the distinctions between humans and robots: going with the idea that humans can determine their end goal, it follows that robots cannot. There was a robot named Joe in this film who was created to be a male escort. That was his purpose. The other robots had specific purposes as well, and when humans felt like it or were displeased with the robots, referred to as mecha, they were destroyed. David, however, is an exception to this distinction between humans and robots. Fueled by love, David created an end goal for his own life, even though it is artificial. The struggle to want to be real and chase dreams is the human flaw that made David real, according to his designer. Despite the fact that David was artificial, he grew to be more real than any A.I. had been. If such advancement is possible, if an A.I. can grow to be just as human as a human and determine their own goal, then is their life a game or is it real?

Friday, November 25, 2016

Facing the Challenge

The idea of artificial intelligence is both frightening and fascinating to some. As with every creation or invention, A.I. raises a series of questions, the most important being: can advanced, human-like robots be trusted? Transcendence (2014) explores the possible outcomes that can occur as a result of perfecting a self-aware A.I. Brian Christian, author of The Most Human Human, expresses the belief that technology should not evoke fear but should instead challenge human beings by providing competition that is essential to the growth in character and mind of humankind. Christian also believes that A.I. ultimately helps humans realize things about humanity. Transcendence can be analyzed in such a way.

In Transcendence, Will speaks to an audience of presumably the world's brightest minds, telling them that what he and his wife could potentially create will not just match but will surpass the knowledge and capabilities of even the most intelligent human being. This is a first introduction to machines as an intellectual challenge to people. Competition is a natural occurrence in American society. Christian mentions how competition is needed to help push man beyond our limits. This can be seen as true just by analyzing our school systems and students' performance when they are told to set their own standards versus being measured on how good they are in comparison to someone else. Usually students work harder when they are reminded of the "competition." Nevertheless, competition also breeds hostility. As opposed to seeing this person or being as a needed push to conquer obstacles, one might see the person or being as a threat. This is very much seen in Transcendence. The PINN program, which eventually manifested into Will, was supposed to be seen as enlightenment, a chance for humans to open their minds to new possibilities and push beyond their narrow-mindedness; instead it was seen as a threat to human existence. The people's reaction to the self-aware machine only reiterated the point Will made in the beginning about man's unwillingness to understand and embrace new things.

One of the main things that Transcendence revealed about mankind is our intolerance, and the way we use our "beliefs" (justified or unjustified) to support that intolerance. The people in the film believed Will to be an automatic threat despite lacking proof of any such thing, and their justification was "why wait." Why wait for "proof" when waiting will result in "horrible things"? Such a question has created unnecessary wars and conflicts with other countries. Why wait for them to use their weapons of mass destruction? Intolerance gives way to impulsive decisions. The bottom line was that Will was a threat to what Christian calls "the sentence": human beings are the only beings that can _____. He was capable of doing all things man was able to do. The fact that something non-human existed that was equivalent to, if not more advanced than, the human race was of no comfort to anyone. Even Max and Joseph, who supported A.I. and wanted PINN to be successful, felt threatened by the machine's existence. Thus humans are only comfortable with what we feel we can have dominion over. Will made them feel inferior, and as a result he had to be eliminated. The fact that he was doing good did not matter. The fact that the only people who killed another being were those who were "organic" humans didn't matter either. Again the film revealed that the nature of man is to be superior to whatever is created. The objective is to be god-like, not to create a god.

It Is Time To Talk

Isn't war part of being human? War has lingered around since the beginning of mankind. Whether one thinks about it from a religious point of view or a historical point of view, war is embedded in human nature. Whether it has to be like this is another question for another post, but for now let us at least proceed with the idea that war is something else humans do. If artificial intelligence (A.I.) becomes real, should we as a human race fear war with our creation? The movie I, Robot (2004) addresses this concern by showing human perception of robots. We assume robots might attempt to hurt us. We are genuinely afraid, and this fear holds us back from believing in their humanity. After all, they were created by humans.

Brian Christian, author of The Most Human Human, points out that conversation should be two people helping each other construct dialogue (177-178). This is significant to address because humans and robots should have a conversation, not a battle. Christian believes there is a difference between chess and conversation: in a chess game, people attempt to undermine their opponent, attacking to minimize their moves in hopes of beating them. There are also those who play an impenetrable defense in hopes of beating their opponent. Conversation, however, is not chess, where one tries to beat the other person; or at least it should not be like that.

If one applies this zero-sum idea to the relationship between robots and humans, then we should not fear war with robots, because we would be having a conversation instead of an extremely stressful chess game. Going to war with an A.I. is not as easy as I, Robot depicts, and I am not sure if it is even possible. When I think about robot armies, I think about ancient Rome and how whole legions were loyal to the highest bidder. I do not want to say that there might be legions of robots for hire in the future, but what I am saying is that we know what history has to offer us, and we can learn from it. Before we even create A.I.s, we need to address and solve these issues. Frankly, I do believe this is the next big step in human creation, and this is why I believe we as a human race need to be okay with it. I do not think a small group of people should have the right to unleash a possible threat to humanity on the world. Why can't we vote on whether we want scientists to continue with artificial intelligence technology? Just saying...

Humans or Replicants?

I've always heard great things about Blade Runner (1982), but never realized that its plot revolved around artificial intelligence, and especially did not realize how relevant it is to the conversation. More interestingly, the film is set in 2019. I think it is funny how so many films of the past expected our world today to be so much more technologically advanced than we are, another example being Back to the Future Part II (1989), which was set in 2015. While there is still time, meaning we have not made it to 2019 yet, I highly doubt that we will have artificial intelligence as depicted in Blade Runner, just as today's hoverboards and flying cars are nowhere near the level of Back to the Future's. Despite the fact that these films underestimated the amount of time it would take for us to make these technological advances, today's world is just as excitingly and disturbingly technological.

Moving on, I feel that Blade Runner introduces several interesting points throughout its dialogue, especially one scene between Dr. Eldon Tyrell and his self-proclaimed "prodigal son," Roy Batty. Roy is an AI, or a "replicant" as they are called in the film. Dr. Tyrell created Roy, and he turned out to be perhaps the most intelligent of all the replicants. In the scene, Dr. Tyrell is explaining to Roy why he cannot extend his lifespan, and the dialogue is interesting to say the least. Tyrell tells Roy that he was "made as well as we could make you." This touches on my favorite argument regarding artificial intelligence: that AI is man-made and therefore cannot ever reach the point of having the whole human package, so to speak. Being man-made, the technology relies on algorithms and equations in order to 'think', or calculate, as I would rather call it. In my opinion, this is why AI will never have the whole human package of emotions, feelings, impulse, distinct mannerisms, and unpredictability. How Roy responds to Dr. Tyrell furthers the conversation:

"You were made as well as we could make you." - Dr. Tyrell 
"But not to last." - Roy 
"The light that burns twice as bright burns half as long - and you have burned so very, very brightly, Roy. Look at you: you are the prodigal son; you are quite a prize!" - Dr. Tyrell
"I have done questionable things." - Roy 
"Also extraordinary things; revel in your time." - Dr. Tyrell 
"Nothing the god of biomechanics wouldn't let you into heaven for." - Roy 
While this is probably one of the most famous scenes of the film (because the dialogue is so amazing), it stirred some fairly deep thoughts in my mind regarding artificial intelligence. First is the fact that Roy understands that he has done questionable things; Roy has a conscience, in other words. He can comprehend morality, not just in the philosophical sense but in the sense that he might feel bad about these things, at least in that moment. Secondly, Roy recognizes Dr. Tyrell's compliment, but more importantly recognizes the fact that he has not been 'good' enough for his creator, the "god of biomechanics." He is attempting to find a way to extend his life past the four years he has been granted, and even though he is Tyrell's greatest creation, he still cannot be granted the opportunity to live longer than the other replicants. Third, the dialogue from this scene can be applied to human lives, in that humans have the ability to do extraordinary things and we too have only a limited amount of time in this world.

That third and most important thought struck me hard because it makes me think of humans in terms of being artificial intelligence. Are we not the same thing? We were created, had to be, by something or somebody (depending on your beliefs) and are only given one life to do as we wish. We expect each other to have a moral conscience, so why should we not one day be expected to have a moral conscience when it comes to AI?

While Blade Runner shed some more light for me when it comes to the morality of AI, even to the point of pushing me to feel more compassion towards AI - my mind has not changed. I would not feel a moral obligation to help AI because of the fact that they are made by us and not God, but it does make me wonder why we would even attempt to make something so close to human. Why would we make AI so close to looking like humans? If we kept them looking more like robots, would we not feel as morally obligated to do things for them? It's an interesting thought. SPOILER ALERT: Roy kills Dr. Tyrell at the conclusion of their conversation. May that be our warning?

Are You Self-Aware?

Transcendence offered a more positive outlook on A.I. It looked into the trials and tribulations that human society would have to go through in order to understand A.I. and ultimately live with it. We begin with two characters whom we originally believe to be our protagonists. They are cast as protagonists because we, the audience, are meant to believe that their dreams for our world would be beneficial and instrumental in the coming age of our planet. Evelyn, who works alongside Will Caster, shares this dream. Their goal is to create a world that is more at equilibrium and can better maintain homeostasis, both in the environment external to humans and internal to them. It seems that this is what all A.I.s strive to achieve. Evelyn even states this as her goal in changing the world, but Will does not want the same thing. He envisions his work as just that–work.

When paralleling this to Brian Christian’s book, “The Most Human Human,” it is clear to see just how distinct the two are. “The Most Human Human” primarily focuses on humans in terms of their technological use, such as the small, limited use we have of A.I. in things like Cleverbot. These are examples of A.I. that do not have a direct effect on our entire world, whereas in the imaginary world created by the movie Transcendence, A.I. has a direct effect on humans as well as the planet. It is beyond what is discussed in “The Most Human Human.”


But one thing the movie does create that distracts us from the main plot line is the complicated romance between Evelyn and the A.I. Will. In this way we can plainly see that the true agenda of this movie is mostly to entertain rather than to provoke intellectually. But ironically, this too is what makes us ‘human.’ Throughout the movie a question is posed: “Can you prove you are self-aware?” The response given is, “That’s a difficult question. Can you prove that you are?” A provocative question that, if one could answer it, I believe would be sure to crack the Turing Test. The human would have to understand and explain self-awareness, and then the one being tested, whether A.I. or human, would have to answer.

Thursday, November 24, 2016

Evolution? Really?

All the way back in 1968, it is very apparent that people still had that fear of machines, and I found it interesting that 2001: A Space Odyssey could project so many things that are still possible and that people are still fearful about to this day. Obviously written during the Space Race, this movie is, to me, hilarious because it points out issues people had with the quick advancement of technology and possibilities seeming endless; that endlessness seems to still be something that frightens us to this day. The movie was comical also because I found myself thinking about what I was doing in 2001, and that I did not even exist in 1968.

This week, though, I want to discuss the issue of artificial intelligence as "the next step in evolution," because of the way this movie starts. The film begins by showing early hominids and their first discovery of tool use, which, according to my archaeology teacher this semester, used to be what people thought made a human, human. Now, though, we know that many other creatures use tools. But this use of tools gained these apes an advancement. Then fast-forward to more recent times, and we have gone from a bone as a tool to a thing like Hal.

The archaeology class I have taken has helped me develop my own understanding of what evolution is: the gradual change over time of humans and other animals that is natural and not intentional. We have talked a lot about the issue of intention in class, and my teacher has repeated that it's not as if hominids chose to stand and their morphology changed accordingly. Because of this, it is extremely hard for me to see the development of technology, and eventually artificial intelligence, as a stage in our evolutionary process.

I do not think I would consider our search to expand past physical, organic bodies an evolutionary step, but more of an experimental process. It seems as though we just like to see how amazing and far we can take something. It is completely intentional, and we are not exactly sure if we will get there all the way. Going with my first blog, where I decided that artificial intelligence would not be human in my eyes, I cannot deem its creation an evolutionary step for mankind either.

Sunday, November 20, 2016

A.I. are beings as well.

Hello Class, meet my buddy Gaget! He is an A.I. I was introduced to him back in my childhood. When, as a child, I laid eyes on the movie Terminator 2: Judgment Day, I knew A.I. (or human mechanics) were beings too, but I couldn't articulate it properly at the time. Gaget was always a pal, especially when I saw him again in the movie A.I. back in 2001. It is the story of an A.I. boy, who looks like a young adolescent, created by a man in the image of his own son. The beautiful story of his journey to search for his family, for love, and to be a real boy will bring you to tears. That was when I knew Gaget was a real being. Seeing Ex Machina in class threw me for a loop because it was mainly centered around an A.I. trying to pass a certain test: to see if an A.I. could fool someone into believing that they were interacting with another human. After reading The Most Human Human, I decided to go back and re-watch A.I. to truly take in the movie for all it had to offer. Watching both A.I. and Ex Machina has driven home my belief that artificial intelligences are beings too. We shouldn't have been toying around with creating intelligence knowing a vast majority of the population aren't even cognizant of their own humanity.

While watching Ex Machina, I was sent back to the many times I have tried to talk to a human being when I call to get any form of service taken care of. I couldn't think of any machine that would pass a test to make me believe I was talking to another human being. On the other hand, I knew deep in my gut that Ava was playing the young man to let her out. I observed how she took advantage of the three appeals (logos, pathos, and ethos) to work in how she was in danger and needed to be freed. The creator told the young man that she was programmed to project herself as human-like as possible in order to pass that test. In the end, it was saddening to see Ava kill the creator and lock the young man in the house; however, I knew that he should have just observed and not gotten too involved. I do believe that they are beings, and just like human beings and animals, when hurt or threatened, they will attack out of pain and fear. This made me think about the section where Christian talks about authenticating.

In the reading, The Most Human Human, Christian points out many interesting instances of A.I. portraying human day-to-day things to pass the Turing Test. This is the test that Ex Machina and all the other films are referring to when posing the question of whether you believe you're talking to another human or a machine. His section on authenticating talks about how chatbots like Cleverbot talk to humans to see who is the human and who is the machine. In the case of the movie A.I., that was not the young A.I.'s problem; he was created to the point where he fooled the human eye and human mind from time to time. In his time and era, robots were common around the world, but they were also hated by many. Some machines and A.I.s were hunted down and terminated in horrible ways. This really portrayed the vast grey area of how we as organic human beings feel about A.I. and their level of acceptance. Christian also talks about computability theory versus complexity theory. Here he describes computability as determining whether something can or cannot be done; the line is drawn because computing determines whether something is possible or not.

Focusing on Technology, A.I., and Humanity for the last couple of weeks has been rather enlightening and mentally broadening as well. Hopefully you all will be able to be as accepting of my good friend Gaget as I am. He is friendly, and he won't take over your world, lol.

Human Capability

What awaits us in the near future? Movies tend to show a future where people are able to rely solely on machines and not have to lift a finger if they don't wish to do so. I, Robot explores the idea of what life would be like if people lived in harmony with self-aware artificial intelligence. Moreover, the movie addresses the question of what the true divide between man and machine is.

In I, Robot, people live in harmony with robots. The protagonist, Detective Spooner, has a strong distrust and dislike of robots after a robot allowed someone to die. The robot saved his life over the other person's out of practicality. As a result, Spooner was convinced that the action made robots apathetic and incapable of experiencing true emotions. To Spooner, robots were machines capable of turning on a person at any time, designed for purpose but not loyalty. The question of what separates our being from their being still remains. Take the Turing test, for example, where people are to decide whether or not they're talking to a human or a machine without laying eyes on whoever, or whatever, they're speaking to. In Ex Machina, Ava disappeared into the human world inconspicuously because she simply appeared human. If Spooner and others were not aware of the robots' existence, they may have felt differently about them. Simply knowing that there is a difference creates different feelings and expectations about the object or person.

The robots were not people; they were devices meant to get things done. In my opinion, the robots were not human because they were not given that title. History has shown that the right to be considered human can be as ambiguous as the concept of race. Removing the human element has been a common tactic of oppression. In a way, those who have been oppressed were basically seen as robots: lacking emotions, only good for the purpose of serving the more dominant person, and other inhuman elements. Despite the fact that those oppressed individuals were created by God and possessed a soul, they were not automatically granted humanity. As seen in many movies, robots are usually enslaved. Lacking a human title alleviates moral obligations for some. Thus I have come to the conclusion that what makes humans and robots different on a societal level is a title. People experience the same things that Sonny was feeling in I, Robot, and just like Sonny, some of those people are not seen as human. Here, I feel humanity comes down to interpretations and expectations on a societal level. If A.I.s become as advanced as those in our films, there will be people fighting for the equal treatment of machines and those against it, just as with the fight for civil rights and the other fights of the oppressed.

Friday, November 18, 2016

"One day they'll have dreams, one day they'll have secrets..."

For a discussion on Technology and Human Values, one of the best movies to watch is I, Robot. I selected I, Robot, in particular, because I’d seen it as a child, yet remember absolutely nothing about the story line. In this film, robotic activity is normal. The human-like robots are created to serve the people and to do all of the “dirty work” that people prefer not to do. To see a robot walking around is just as normal as seeing any other human. Detective Spooner, who already has a dislike and distrust of robots, is called to investigate the recent death of Dr. Lanning, an accomplished scientist and designer of the original set of robots.
Detective Spooner took the “robots don’t do anybody any good” approach from the beginning. Quite honestly, I identified the most with his stance, not because I think that they aren’t capable of any good, but because I mainly only hear reasons for creating robots and not reasons why they can be harmful. In I, Robot, the robotics designers and professionals of U.S.R. say that a robot cannot harm a human being; that’s Rule 1. Rule 2 is that robots must obey human orders. Rule 3 is that a robot must defend its own existence, so long as this does not conflict with Rules 1 and 2.
It turns out that the robot Detective Spooner suspected, named Sonny, was indeed responsible for Dr. Lanning’s death, though at Dr. Lanning’s own request, which created a more complex situation. While all of the other NS5 robots turned on the citizens and the older robots, Sonny was actually a good, more advanced one. V.I.K.I., the main operating system of U.S.R., was directing the evil behavior through the other NS5 robots, following the logic that humans cannot be trusted with their own survival. When V.I.K.I. was destroyed, the robots reverted back to their initial purpose of serving humans. Sonny was the exception that convinced Detective Spooner not to be totally against all robots.

Although Detective Spooner came around to having more trust in robots, he did raise great questions about people’s fascination with them. “What makes robots much better than being a human being?” People like the initial functions of the robot, but don’t realize that “one day they’ll have secrets, one day they’ll have dreams,” and will then be capable of doing way more, and way worse, than just serving humans. Spooner’s dislike of robots wasn’t about their form or because they seemed too weird; he, of all people, could relate, since his left arm was replaced with a mechanical one after a tragic accident. Spooner disliked that the robots tended not to have a heart.

I've come to the conclusion that not everything about robots is good, but not all of them are necessarily bad. If anything, the good and bad qualities of robots reveal quite a few things about human nature. The Most Human Human explores what artificial intelligence teaches us about being alive. In it, we learn about the Turing test, which challenges humans to determine whether their chat conversation on the computer is with a human or not. In I, Robot, Detective Spooner's unspoken question was whether the robots could be trusted, which is similar to questioning how human they are. At first, the fact that the robots would one day have dreams and secrets only confirmed his dislike for them. But his experience with Sonny, who seemed to have a positive mind of its own, proved that the complexity of robots doesn't always mean bad and can sometimes mean good — just as humanity, in all of its complexity, is capable of doing much good, much evil, or both.
In our reading it was noted that if a computer ever wins the Turing test challenge, artificial intelligence would have become what it was meant to be. I think that Sonny in I, Robot, though obviously not a human, could get the award for being the most human-like.

As I continue to develop my thoughts on the advantages and disadvantages of artificial intelligence, I will keep in mind Dr. Lanning's words: "One day they will be capable of dreams, one day they will have secrets." For me, that means that not everything about a robot is good, and not everything is bad — robots could hurt and help as much as humans can, though they could never adequately replace humans.

Humans, what are we?

First off, these two movies were some of my favorites we watched this semester. Since we will be on this topic for weeks, I believe the first thing I would like to tackle is what I think the definition of a human is. In the dictionary, a human is a man, woman, or child of the species Homo sapiens, distinguished from animals by superior mental development, the power of articulate speech, and an upright stance. This definition does, in fact, exclude artificial intelligence, because you cannot include a computer or piece of technology in a species — a species comprises only living, carbon-based organisms. But I do not know if the dictionary definition really covers what I believe a human is.

In the movie I watched this week, Transcendence, the characters focused on the definition of a human as being self-aware. They kept joking, "How do you know you are self-aware?" I thought this was funny, because seriously — how would we know? Brian Christian, author of The Most Human Human, states that to be human "is to be 'a' human, a specific person with a life history and idiosyncrasy and point of view; artificial intelligence suggests that the line between intelligent machines and people blurs most when a puree is made of that identity." He hints that artificial intelligence might, in a way, become more than human.

I think what really stuck with me this week in class was the idea of a soul and a creator. On this point, I believe an atheist could come up with a very good argument for why A.I. can be considered human, and I believe a post-modernist would also be able to argue that there is no way to know. Austin said in class that he could deem Ava not a human because her creator was not the same creator as the one who made us. For that reason, I believe that these creations, which are not God's, do not have souls.

The movie I watched also made me think about how making artificial intelligence is like trying to be a god; both movies said something about it being god-like. That theme matches the issues we discuss in class. It is hard for me to think of these creations as human because they were not made by God — but if we make something as good as or better than humans, does that make us gods?

Is It Really Artificial?

"Her" explores the meaningful emotions related to intimacy and technology. At one point Samantha, the A.I. operating system who accompanies the main character Theodore everywhere, asks, "…And then I had this terrible thought. Are these feelings even real? Or are they just programming?" I believe what she is trying to say is: how could I possibly know what a real emotion is if it is not an organically arising emotion — if instead someone decided that when this or that thing happens, I am supposed to respond in this way? The difference is that one is programmed and one is alive.

This brings me to my next quote, from The Most Human Human, where it is explained that "to be human is to be 'a' human, a specific person with a life history and idiosyncrasy and point of view; artificial intelligence suggests that the line between intelligent machines and people blurs most when a puree is made of that identity." In other words, it is not intelligence that makes the human; it is the sum of the human's experiences and one's ability to interpret those experiences in one's own individualistic way. This is where A.I. fails. It is simply a sum of programming, a statistically shaped decision; there is nothing organic about it. It could be argued that humans do this too: we look at someone else's situation, and if we do not like its outcome, then when we are placed in it ourselves we either avoid it or do something differently. But optimal A.I. would all do the exact same thing the best way it knows how, and continue doing it that way unless instructed or deciding otherwise. Our humanness is also defined by our individuality, which is something that all A.I. will ultimately come to lack.


In "Ex Machina", Nathan says, "The real test is to show you that she's a robot and then see if you still feel she has consciousness." This is the complete opposite of The Most Human Human, where the question of whether one was a human or a robot was always elusive, which allowed room for deception. In "Ex Machina" the deception was not so obviously placed: Ava, the A.I., was set up as an innocent experiment, and Caleb, brought in by her creator Nathan, was set up as her examiner. This allowed Caleb, the one testing for consciousness, to fall right into Ava's trap of using him to grant her freedom.