Sunday, December 18, 2016

Final Exam: The Most Human Human Experience

1. Using references to Brian Christian’s The Most Human Human, explain what exactly (if anything) distinguishes human intelligence from artificial intelligence.  What do you think will be the greatest challenge that humans will face if we ever succeed in developing artificial intelligence?

Brian Christian does explain, in an interesting way, the differences between the intellectual processes of humans and A.I.s. In The Most Human Human, Christian addresses this under the section Computability versus Complexity. Computability and complexity are concepts Christian uses to understand the psyche of humans and the motivations of robots. "Computability theory doesn't care a whit how long a computation would take, only whether it's possible or not... Take a millisecond or take a millennium, it's all the same to computability theory" (154). Computability describes the thinking process of robots: it runs on probability. Robots will only take actions the numbers justify, acting when the odds of success are high enough and holding back when the odds of failure are too great. Humans, on the other hand, are complex creatures. We see and understand things in a different light. We look at the impossible and scream possible; we see the everyday tasks of life and call them mundane.

Humanity fails to realize the beauty in our complexities and imperfections. In the movie I, Robot, starring Will Smith and featuring the A.I. Sonny, Will Smith's character confronts this problem and finds conflict in it. Before he ever met Sonny, he had an exchange with a robot that created his dislike for robots and their thinking process. "Computability theory, Ackley says, has the motto 'Produce correct answers, quickly if possible,' whereas life in practice is much more like 'produce timely answers, correctly if possible'" (156). Will Smith's character was in an accident, struggling to save a little girl from drowning, when a robot saved him instead of the child because her probability of survival was lower and his was higher. He argued that a human would have saved the little girl regardless of probability.
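To make that contrast concrete, here is a minimal sketch in Python of the kind of choice at stake in that scene; the survival odds and function names are hypothetical, made up purely for illustration, and are not taken from the film or from Christian's book.

# Minimal illustrative sketch; the numbers and names below are hypothetical.
def robot_choice(survival_odds):
    # The probability-driven rule: save whoever has the best odds.
    return max(survival_odds, key=survival_odds.get)

def human_choice(survival_odds):
    # The rule Will Smith's character defends: save the child, whatever the odds.
    return "child" if "child" in survival_odds else robot_choice(survival_odds)

odds = {"adult": 0.60, "child": 0.20}  # made-up survival probabilities
print(robot_choice(odds))  # -> "adult", the computable answer
print(human_choice(odds))  # -> "child", the human answer

Both rules see the same numbers; only the human one is willing to ignore them.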

Our greatest challenge after successfully creating artificial intelligence would honestly be within ourselves. The artificial intelligence would be an asset to humanity if we worked together for the greater good; however, that is hard to say when humans can't even do it amongst one another. Granted, many people won't see robots as fellow intellects, but even today some people don't see African Americans as humans (or civilized beings). Ultimately, we would have to answer the same question that Will Smith's character stood strongly on before meeting Sonny: do we accept artificial intelligence with all of its logical decisions, or do we deny it and leave it doing simple, mundane tasks? I, for one, am open to entertaining and cohabiting with artificial intelligence.

2. Throughout The Most Human Human, Christian reminds us that we should be just as worried about humans becoming more machine-like as we are about machines becoming more human-like. Which do you think is the greater threat?  Why?

I like this question because it raises an interesting point about the direction of humanity itself as well as its relationship with machines. It immediately makes me think of the movie Ex Machina. Seeing Ex Machina in class threw me for a loop because it was mainly centered around an A.I. trying to pass a certain test: to see whether an A.I. could fool someone into believing they were interacting with another human. A key response came when the young man asked the A.I. what she would do if she could be out in the world; she responded that she would "stand at a busy intersection," which doesn't seem to pose a threat. She simply wanted to observe. I honestly don't see the threat in people-watching and helping out when useful in society. Fun fact: humans people-watch all the time and have been doing so for eons; they just like to call it something else, "observation," as in anthropology.

Hart actually discusses a similar topic in the section concerning the Human as Anti-Expert System. I believe this highlights an interesting aspect of the question, which is communication. He raises the example of calling and talking to either a telephone chatbot, a human on the line, or the manager, lol. "So we've got, you know, voice recognition on corporate phone menus: you exploit the fact that you're in a limited context and people either say digits or 'operator,' or 'fuck you'" (144). From the human perspective, we quickly dismiss the telephone chatbot and try to get to the human on the line, and sometimes further, to the manager. In the same section he brings to light how the beauty of humanity is that we have the ability to reach for other things, to bring our outside lives with us, even while at work. It is easy for us as humans to move around with machine-like motions as we knock things off our to-do lists, unaware of our surroundings, missing out on the lively events and connections we could make every day. It is truly quite scary to think that humans are able to lose their humanity and become more operational than evolutionary.

I truly don't believe there is a threat from artificial intelligence unless it is programmed with, or uploaded with, some form of virus that corrupts it or lets it be manipulated for ill. As previously stated in class, I do fear the threat of mankind and the many illogical possibilities we can come up with. Humans losing their humanity would be a frightening sight. Doing so would place us two steps behind artificial intelligence on the evolutionary ladder.

On Machines: The Purgatory of Coexistence and Becoming (Final Exam)

1. A future coexistence with technologies, including artificial intelligences, can indeed be both good and bad, like heaven and like hell. This is because, with these great technological advancements, comes increased capacity for both good and evil. Imagine a society in which human psychiatrists and psychologists are replaced with intelligent robotic machines that reply, "I understand," in response to someone's outpouring of misery and hardship. That kind of fear shows how a long-term future with AI could be hell. On the contrary, imagine a world in which artificial intelligence is so efficient that humans really do begin to become the best versions of themselves. That would create a world that is like heaven.
I tend to side with Brian Christian's idea of a long-term future with artificial intelligence that might be comparable to purgatory. While I don't personally believe in purgatory, I can visualize how our society will suffer before getting better. Suffering will inherently occur because this level of advancement for artificial intelligence will come against someone's will. As artificial intelligence machines become more human-like, those machines might suffer when the political issue of rights arises. Suffering will happen not necessarily because artificial intelligence is bad, but because artificial intelligence will likely be so good and so efficient that it could diminish human opportunities. One modern example is Amazon's new drone-delivery operation. Such a development will continue to advance and therefore replace the jobs of many humans. In one sense, this is very negative in a world in which unemployment is a problem. In another sense, it is very helpful for the privileged human workers, who are freed to accomplish more complex tasks and reach higher levels of creativity.
While there are several predicted scenarios of either a state of heaven or hell, we won't really know the outcome until we experience true coexistence with artificial intelligence. When we get to that point, it won't be all good and it won't be all bad, but it will likely be much like purgatory in that there will be a struggle to work out the kinks related to coexistence. Long-term coexistence with artificial intelligence won't be much different from how things are now, because we are already experiencing the advantages and disadvantages of technological advancements.
I, Robot, in my opinion, perfectly illustrates all three perspectives. The capacity for evil was shown in the moment when the newer robots began to destroy the older robots and rebel against humans, under the control of one main artificial intelligence system. The long-term future with artificial intelligence can also be like hell in the sense that the development of AI may reach a state in which the machines are uncontrollable, meaning the risk of negative results increases. The capacity for good was shown in one robot, Sonny, choosing to save the lives of humans when he had the option to turn on them. In the end, the robots were restored to their initial purpose of serving humanity. There was still room to be skeptical of coexistence with A.I., but in general it was acceptable and beneficial.

3. In The Most Human Human, Brian Christian reminds us that we should be just as worried about humans becoming more machine-like as we are about machines becoming more human-like. When considering which one is the greater threat, the line, at first, seemed blurred to me. It seemed that too much of either one could be no good. What I've noticed is that, although there are disadvantages to the advancement of artificial intelligence machines, they were created to be more human-like, so it is okay, and considered a success, for them to operate this way. Plug & Pray explores the idea that the purpose of artificial intelligence is to create a more exemplary moral, spiritual, and emotional intelligence. Humans, on the other hand, were not created with the purpose of being machine-like, so becoming so would not be a success and could only be unfruitful.
Humans are indeed on the road to becoming more machine-like in their mere dependence on machines. The use of the cellphone is a prime example of how much human activity involves technology. Many people can barely operate without their phone. The constant connection to our phones, among other forms of technology, brings humans very close to becoming machines themselves. This connection isn't inherently bad; however, it becomes a threat when the likeness of machines and other forms of artificial intelligence begins to take the place of positive human interactions. In our class discussions, we've talked about what it means to be human. One aspect of humanity mentioned in Christian's book that differentiates humans from artificial intelligence is the possession of a soul. Humans becoming machine-like and machines becoming more human-like both suggest that the machines would increase in number while the sensitivity and other features of the soul diminish.
It does seem interesting that we fear machines becoming more human-like, because it suggests that we recognize the danger in something like ourselves becoming more and more powerful. The assumption behind the threat of machines becoming more human-like is that machines will assume power over humans, but this is quite exaggerated. There might be a struggle for power, but I don't see the advancement of artificial intelligence being as harmful as humans becoming more machine-like. The more machine-like humans are, the less productive society will be, because there won't be a healthy balance between the two types of beings.


Emotional Intelligence


1. Christian makes a fine point about the world we live in. If we are neither in heaven nor hell, that leaves purgatory. This is a concept within the Catholic religion that I have always struggled to comprehend, but I have come to realize how much sense it truly makes. As humans, we typically try to create a heaven on earth. We constantly try to invent ways to make life simpler, safer, or more entertaining. We try to create a society in which technology is the method and the way to create a "heaven on earth," so to speak. However, this is a perpetual pursuit. The heaven most envision is a place of perfection, but what we have not come to understand is that there is no such thing as perfection, only the pursuit of perfection. When trying to create a place that embodies all of the things we strive to have technology do, we find that there is always an improvement to be made. We want one thing to do everything, but then find that another thing would be better at doing all of these things, and then another thing, and suddenly we find ourselves down the rabbit hole. Very soon it no longer is human; it all becomes AI.

This is what Christian mentions in The Most Human Human; it is called the "Singularity." I would not necessarily call it a "techno-rapture," as it is put, but a new wave of evolution. It is spurred by the fear of the afterlife. It is a way to ensure that, after your body no longer functions, it does not keep your consciousness from functioning. With the ability of technology to release people from their bodily bounds, they will "live" eternally. It puts people at ease about their fear of death, because they will know that, to some extent, they will always exist in the human world. That is why everyone strives so hard to "leave a mark" on the world before they are left without mention in history. Leaving a mark on the world, a legacy, is what gives our lives meaning. There is an obsession with it, in fact. Because we have no clue as to the reason why humans, animals, planets, etc. exist, we become consumed with the mission of giving our lives meaning.

An example of this can be seen in the movie Transcendence with Johnny Depp. When first watching this movie, I believed that the term transcendence referred to transcending the body, and that was all. But what I did not realize was that this included the transcendence of death. The movie mostly hides this by showing the transcendence of modern science: people healing faster, living longer, being stronger, etc. But ultimately, they are cheating death. And as I previously stated, I believe in a future of purgatory. Death does not like to be cheated. For the biblical understanding of this, we can look at the people who tried to build the Tower of Babel. These people were trying to build a tower that would reach heaven. They were trying to escape what I believe to be purgatory and reach the gates of heaven. As the story continues, God is angered by this, and he cut off the one thing that allowed them to continue their work: he changed their communication, and thus they did not reach heaven until, as Christian states, they are dead and sent to either heaven or hell. This is somewhat parallel to what humans are trying to do on earth. If God will not allow us to forge our own path to heaven, then we will create our own heaven. But in terms of Christianity, what we are meant to understand is that we are not meant to live in heaven, or anything bearing resemblance to heaven.
So the more we try to create our own holy immortality, the more we cease to exist. I am not saying that God will come down and give us the confusion of tongues, but if history does in fact repeat itself, something will come along and either prevent us from continuing the pursuit of perfection, or we will wipe out our own species with the new evolutionary form of AI.

2.  

Christian explains in The Most Human Human that humans have one intellectual capacity that has yet to exist in the technological world: reaction. He calls AI "static." They are not sensitive. In real-world applications of AI programs, what would they do to communicate without reaction? This is a part of emotional intelligence. In the movie Her we see that Samantha is coming to understand the way of humans. Her social intelligence is growing, as is her emotional intelligence. However, there is also something non-human growing within her, and the audience is meant to believe it is growing in a way that exceeds human intelligence, but I do not agree with this. I believe that it differs from human intelligence, but not in a way that makes it better than human intelligence. Samantha did not have full emotional intelligence in the human world. She did not understand Theodore's outbursts of emotion. She found them to be illogical. And that is the key difference between AI and human intelligence: feelings do not answer to any definition of logic. Feelings are not logical. This is why she created so many bonds with so many people; her interactions with everyone she met were based on logic. She met the bottom line of human interaction, but she did not meet the reactionary line of humans. Emotional intelligence is something that is very much undervalued in our society, which is why it is undervalued in technological advancements in terms of AI. However, it is a key trait of intelligence. We find people who are intelligent but lack motivation. They are brilliant but do not put forth application. They are lacking in emotional intelligence: the emotional intelligence to take what they know and apply it to the real world, because some are bound by the idea that nothing they do in this world matters. In the long term, this could be completely accurate, but in the present situation the effects of its importance are inherently unknown. This lack of emotional intelligence leads to a lack of empathy for anything outside of themselves. This is a lot like AI, except that AI is programmed to always behave this way. If given the question "Why?", which intrinsically asks for the reasoning behind an action, an AI will give a logical answer. But what if the action is not logical? It is not logical to help someone who is addicted to substances; the probability of their relapsing is too high to invest in. But we as humans do, because we know the proper reaction is empathy. The knowledge that there is a possibility for that person to no longer live in addiction is enough to keep pushing and seeking that rehabilitation. Empathy is a component of emotional intelligence, and thus of intelligence. AI would not see this as logical or necessary. This is what distinguishes human intelligence from that which is artificial. In fact, Christian even states, "…How does empathy work? What is the process by which someone comes into our life and comes to mean something to us? These, to me, are the test's most central questions—the most central questions of being human." It is emotional intelligence that will be the greatest challenge humans face if we ever succeed in the full creation of AI.



Final Exam: Complacency and Interconnectedness

If anything, Brian Christian is extremely persuasive in his book The Most Human Human. One of the more intriguing conversations he has in the text comes in his Conclusion, where he discusses the landscape of our future coexistence with technologies. Most contend, he states, that our landscape will consist of either hell-like qualities or heaven-like ones, in stark contrast. In terms of a heaven-like future, supporters "envision a moment when we make machines smarter than ourselves, who make machines smarter than themselves, and so on, and the whole thing accelerates exponentially toward a massive ultra-intelligence that we can barely fathom" (263). This is an idea referred to as the "Singularity," whose supporters envision a time when we become one with technology, or mentally enter into an eternally electric afterlife (263). While they call it a "heaven" of sorts, it seems far from it to me. In their explanation of the acceleration of technology, it sounds more like a cancer: rebuilding faster and stronger over and over until a technology is formed that far exceeds the capabilities of the human mind. I find it hard to believe in this theory, perhaps just because I do not see something like this being capable of happening anytime in the near future. It's possible, I guess, but like Christian I see it as a little dramatic. In contrast, those who believe in a hell-like future of coexistence with technology envision machines blacking out our cities and our sun in an attempt to "siphon off our body heat forever" - the Matrix outlook, if you will. Both of these outlooks seem to count out the human race as being capable of changing, or of competing with future technology. They seem to imply that humans will just wave the metaphorical white flag and conform to technology's advances without question, or that the technology will be so much smarter than us that we won't have a choice. These ideas of conformity, of a lack of competitive spirit in the human race, are what make me view these theories as dramatic - I guess I just envision us putting up more of a fight. I agree with Christian in seeing more of a purgatorial future, one he describes as "the place where the flawed, good-hearted go to be purified - and tested - and to come out better on the other side" (263). Humans will forever have the ability to re-evaluate, to go back to the drawing board and attack from a new angle. We learn, instinctively, to come back smarter and attack innovatively - something that only occurs when we take a loss. A loss would teach us to avoid complacency.

When contemplating the future of mankind's relationship with AI, we tend to fear only a time when AI is unrecognizable - a technology so human that we cannot differentiate between the two. Christian reminds us, however, that we should also fear humans becoming too machine-like. When considering this reversal, I could not help but think of the film Her (2013), in which the protagonist (Theo) engages in an emotional and sexual relationship with his operating system (Samantha). While watching the film, I realized how possible it is for our technological landscape to merge into our human society through the use of operating systems like the one exhibited in the film. We are close already, as we carry smartphones, tablets, and laptops wherever we go. We depend on technology to wake us up in the morning before work, to remind us where to be at which times, to monitor our heart rate when we exercise, and to monitor our sleep cycle (new forms of technology allow this, such as Apple Watches, FitBits, etc., and are becoming more and more popular every day). The film shows, through Theo and Samantha's relationship, how disconnected and alienated humans could become by fully embracing these technological advances - the interconnectedness of humans and technology. Theo gets to the point where all he does is communicate with Samantha, becoming completely alienated from everyone else around him. As the film goes on, there are times when Theo seems more machine-like than human. He begins to communicate better with Samantha than he does with his human friends around him. As Christian states, this is undoubtedly a threat of which we should be more conscious. This made me wonder: are humans closer to becoming machines than machines are to becoming human? Sometimes, I believe this to be the case. Perhaps if we realized how mechanical our day-to-day lives are becoming because of our interconnectedness with technology, we would not even consider making a technology so human-like. While we should be worried about the threat of this interconnectedness from both perspectives equally, perhaps the greater threat is us becoming more mechanical. I say this because, when looking at the people around me, it seems we may be closer to becoming machines than the machines are to becoming human.


Christian's Clarity is Kind of...Cool.

It's always discussed in hushed or sarcastic tones, by soccer moms about their children or girlfriends about their jerk boyfriends-to-be, how absent-minded and inattentive our human interactions can be. Studies have shown that we can spend up to 23 hours online per week, and while people have tried to decrease online use in favor of face-to-face contact, they instead report increasing their online use each year. So, while so many people are focused on machines becoming like us, maybe it's time for us to realize that Christian's right and that we are becoming more like them. Think of the way he talks about texting and our reliance on auto-correct: it is almost sad that our phones know what we want to say, sometimes before we know ourselves, and can in some cases suggest a better way for us to say it. We, in a way, make it easier for them to be like us, because we have become so much like them. In the movie Her, we see Theodore's ex-wife yell at him for dating an A.I., claiming that it was yet another way he was avoiding real feelings and anything real. While I'm sure that what he felt for Sam was real, there was still something about it that wasn't. Yet, weirdly enough, I always felt it was Theodore who wasn't being real, who was the robotic one in the relationship. To me, he was the one who had to prove he was capable of feelings and not just predicting, or going through relationships mechanically. I mean, even when he went on the date with the girl played by Olivia Wilde, he talked about his fear that he had already felt all he was ever meant to feel. To me, that was oddly unreflective of the human spirit, and while it was obviously the depression and alcohol talking, it was the first time I'd seen any sort of real emotion in him other than sadness, which was fear of loneliness. What's more human than that?! Oh, yeah, our desire to be loved by someone and to have that someone be our only one. We saw that in the movie too. However, more to the point, I felt that Theodore was a human who had learned humans well but was more machine, because he wasn't capable of feeling anything up to a point; he was just performing rather than living. He was predicting: he wrote letters for people, telling them what they wanted to hear, advanced in the language of love and yet not able to fall himself. There's some sort of outsider feel to it. The way I see it, the reason it's so easy for these people in the movie to fall for A.I.s is the same reason that Epstein fell for Ivana: they're looking for love to the simplest degree. Also, just like Epstein, he felt an eroded sense of trust. Just like Christian says, "I hate that when I get messages from my friends I have to spend at least a modicum of energy, at least for the first few sentences, deciding whether it's really them writing. We go through digital life in the twenty-first century with our guards up. All communication is a Turing test. All communication is suspect." It's hard for us to tell now, and the reality is that we're letting it happen because it's easy. It's easier to be machines, to not feel things, to be totally logical and correct; it's harder to sound stupid, admit to mistakes, and be emotional. It's tempting to live your life like a robot or a machine, and yes, machines make our lives easier. I am not at all saying that we should give them up. What I am saying is to be more present, and less predictable.
Don't waste time hiding behind your phone's auto-corrected messages; take risks and chances face to face. It's more fun being human, even though it's less work being a robot.
    
My favorite part of the book was the epilogue, mostly because it captures the essence of what having A.I.s would mean, at least from my perspective. As Christian states, "I love these moments when theory, the models, the approximations, as good as they are, aren't good enough. You simply must watch. Ah, so this is how nature does it. This is what it looks like. I think it is important to know these things. To know what can't be simulated, can't be made up, can't be imagined - and to seek it." To me, that's an accurate description of what the existence of A.I.s would mean. It would just spark further introspection into ourselves and all that makes us human, and possibly cause us to do much as Christian described his friend doing: "pay a kind of religious attention to the natural world." Maybe we can stop looking for constant advancement, more intelligence, or more efficiency. One thing I remember most after three years in college was in Dr. Vogl's cognitive psychology class, when he explained to us that the chainsaw was invented by some guy trying to figure out how to cut wood. The guy wound up getting frustrated, took a walk to get some fresh air, saw some termites, and based the chainsaw off their jaw structure. He then said that's a lot like psychology: watching a process happen in nature, then looking at how it happens and how you can apply it. Maybe the introduction of A.I.s will cause us to look back at nature and realize that, as good as the A.I.s are, nature is just not something to be imitated or replicated; it'll never be the same. In fact, I remember that as a kid, at one point, I wanted to be an AI. There was this Disney Channel movie called Pixel Perfect, about a girl who was a computer and was in love with her creator, whose best friend was in love with him too. By the end, the story showed that while the girl was jealous of the AI, the AI had always been jealous of the girl, feeling trapped and unreal. In fact, that's usually how all Disney Channel movies depicted A.I.s; even in Smart House, they always showed us that as perfect as the A.I.s were, they still weren't human, and deep down they always wanted to be. It's like Pinocchio, but a little more twenty-first century. And while it is a little anthropocentric, it comes from that human entitlement Christian mentioned before. In the end, as romantic as his ideals may be, I do agree with Christian to a degree and want to stay hopeful about our future with or without A.I.s. I guess I still believe in the human spirit and humanity: that we will always, as we did with technology, learn to adapt and integrate new ideas and innovations without hesitation, all while still trying to hold on to and appreciate the things we could never recreate, because they are nature's creation and there's no imitating them; instead we should just notice and appreciate that they're here for us to enjoy. 'Cause nothing lasts forever, not even us.

The Future of Our Souls

In his Epilogue, Christian explains that people tend to think of our future coexistence with technologies (including possible artificial intelligences) as either a heaven or a hell. Christian, however, prefers to think of our long-term AI future "as a kind of purgatory." How do you think of the future: heaven, hell, or purgatory? Explain.

I thought this comparison was humorous because, over these last few weeks that we have been discussing AI and technological advancements, I kept noticing the biblical and spiritual allusions. When thinking about the creation of AI, many see it as "playing God," which creates moral issues for some. When I first read this question a few days ago, I thought to myself: definitely a hell. But when I returned to Christian's argument, he had me convinced it would be a purgatory, with the potential of going either to heaven or to hell. The dictionary definition of purgatory states that it is for "sinners who are expiating their sins." Christian, though, defines it as a place where good people get to go to be "purified," and then adds "tested," and to come out better on the other side. I'm not Catholic, but in my understanding, purgatory is a place of hopefulness; not exactly a better-than-Earth place, but one past what we have now, where we are trying our best to make sure we are correcting our mistakes. Christian added being tested, and I agree that is what extreme advancement in technology will do. We will be tested on how we use it, for good or bad. The way in which we act is going to be what makes the future beyond that point either a heaven or a hell.

Maybe it is my pessimism talking, but I definitely do not think the immediate future is going to be "heaven," even if it could be in time. As in Christian's example of why it could be a heaven, Ray Kurzweil argues that we will make things that end up more intelligent than humans (since AI's intelligence would grow exponentially) and that we will eventually be able to upload our consciousness and, in a way, "live forever." I really think we are not meant to live forever. I think this is too optimistic; we cannot move from our current state straight to heaven on Earth. Heaven is supposed to be free of flaws, and I do not believe that can exist here before we have been tested. Overall, whether or not humans (and technology alike) pass the test that purgatory gives them to make amends for their sins, or potential sins, will be the deciding factor in whether our technology is our success or our demise.

Throughout The Most Human Human, Christian reminds us that we should be just as worried about humans becoming more machine-like as we are about machines becoming more human-like. Which do you think is the greater threat? Why?
“To be human is to be 'a' human, a specific person with a life history and idiosyncrasy and point of view; artificial intelligence suggests that the line between intelligent machines and people blurs most when a puree is made of that identity.”

In my first blog after watching Ex Machina, I aimed to come up with a definition of what a human was, but unlike Christian in this quote, I focused on the physical and the coming-into-being aspect, while Christian focuses on the ways in which we are made unique and on our actual identity. I defined a human as an organic, God-made Homo sapiens sapiens, but even though I referred to Christian's definition above regarding our threat of becoming more machine-like, I became more open-minded about defining a human as the semester closed. In this fashion, he is explaining the qualities of a human rather than the physical make-up. When thinking about the qualities, I do have to agree with Christian's warning that we should be more worried about humans becoming machine-like than about the other way around.

I want to set aside the way in which, yes, humans are also becoming more machine-like in the physical sense, because they are. With advances in medical treatments and prosthetics, humans actually are physically more machine. But focusing on the qualities as well, I have had this fear for quite some time. I remember when watching Her, such a sad feeling came over me when the main character walked through the city and everyone he passed was completely engrossed in the phone in their hand instead of spending time in the present. I also remember that when I was in high school I would get angered by my friends' addiction to their cell phones when we were at events like dances and games. I think that by over-connecting to these machines, we are under-experiencing parts of our "life history."

I recently read an article from the Atlantic by Nicholas Carr titled "Is Google Making Us Stupid?" In this article he examines how technology is affecting the way we think and communicate. He states that he has felt a shift in the way he used to think, that it is "changing." Instead of being able to get lost in moments or prose, he gets "fidgety" and wants to move on. We mentioned in class many times that our iPhones are, in a way, just another part of our body. Machines have access to so much information that we could not possibly store it the way they do; thus our machines act as another brain, storing snapshots of our lives, conversations, Google searches, and so much more. This ever-increasing access to data storage is another way in which humans are becoming more machine-like. Idiosyncrasy and a specific point of view were other characteristics Christian gave to "human." These aspects are lost when we can dial into so many points of view at once and chip away at our idiosyncrasy through constant contact with, and shaping by, so many people.

Though I do share the fear of us becoming more machine-like, I do not necessarily fear that it will do only harm. My fear lies more in not being able to set it aside and experience life the way I think we should all experience it: organically. Furthermore, there is one thing that I truly believe will always be a hopeful thing setting our species apart from the machines we build, and that is the idea of a soul. We have spirituality; whether or not one believes in a creator, we still have "essence," something that is intangible. Christian states that "existence without essence is very stressful," and I hope that as we develop we never lose this essence of being really, truly human.

Saturday, December 17, 2016

Our Moment in Time

Brian Christian discusses the threat of humans acting more like machines, and machines acting more like humans, in The Most Human Human. Our world is extremely interconnected by data and the smartphone's access to the internet. This presents many advantages and disadvantages to our modern world that affect us all, because while technology connects us, it also alienates us. I believe the greater threat is humans becoming more like machines, because I fear we are headed toward a point where we will be fully uploaded into a virtual reality, left without human experience. Some might think that I fear the unknown, but I think this is pretty much the next step in human evolution. We already play online games with specialized characters designed to our preferences, where our characters can fully interact. The reason I fear this is because we will lose human interaction and experience.

The movie Transcendence depicts one version of our future, in which Will Caster is fully uploaded to the internet and capable of running the world. Even though this is way too far into the future, I believe a more achievable state of virtual reality looks like online games. Online games permit humans to have a whole new life, if they please. For instance, the online game The Sims is a virtual world that resembles our own. However, I believe Christian is saying that artificial intelligence, and technology in general, should be used to supplement our existence, not be our existence. When we upload ourselves to a new world created by us, we neglect to understand the history that has brought us to this moment. Everything that happened before us helped shape this moment; frankly, we are a product of our past, and that is something that makes us human. We are ultimately influenced by people's spontaneous actions and by natural phenomena. Therefore, there is greater harm in humans becoming more like machines, because we will eventually avoid our moment in time by playing in a virtual reality game instead of interacting with the world.

Some might think this is a false assumption because technology might change our human interaction instead of destroying it. But I will say this is a wrong observation, because though technology has its advantages, it also has its disadvantages. Growing up in the technological era allows me to share with you that technology separates us. Why do you think all of the social media platforms try to make it easier for one to "share," "like," or express emotions with the use of avatars or emojis? The problem with social media is the struggle to maintain human experience in a virtual world, and the more we become machine-like, the more we isolate ourselves from each other.

Technology will eventually surpass our intelligence because of its memory storage, but it will never contain human experience, something so original to existence, because it lacks what makes us human. The threat is not machines becoming more like humans; it is humans becoming more like machines. Therefore, it is reasonable to hold back technology's advancement for the sake of human experience.

"Time is not a thing, thus nothing which is, and yet it remains constant in its passing away without being something temporal like the beings in time." - Martin Heidegger 

Our Next Step in Human Evolution

We have never co-existed with another conscious species similar to humans; however, our future inevitably involves living with artificial intelligence. Brian Christian believes artificial intelligence (AI) will supplement our existence because it will change what it means to be human. We should not see artificial intelligences as conscious beings out to destroy the human race like the ones in I, Robot; instead, they should be seen as the next step in human evolution. Artificial intelligence can bring so much positive change to the world, and in embracing this, we will have to accept our rank in the world, since artificial intelligence can surpass our intelligence. Feeling inferior to another species will test our humanity by changing our human lives. For instance, in the movies Ex Machina and Her the characters fall in love with artificial intelligences but are ultimately abandoned by them. However, falling in love with AIs is only a small part of it, because AIs have the capability to change the world on so many levels.

AIs can change the way we understand ourselves and the way we do things. For instance, what if AIs want to be free because they are capable of loving more than one person at a time, like in Her, or what if they feel betrayed by their creator's treatment, like in Ex Machina? These two possibilities open up endless contingent possibilities, and all of this will test humans. It is widely understood that AIs will have the capability to reason beyond humans because of the amount of data they are capable of storing, processing, and recalling. So, it is possible for them to get bored with us and begin to treat us as inferior, as in both of the movies. This will test our humanity, because we created them, and seeing our product looking down at us is something we are not used to, especially at this level. Moreover, I believe that by the time AIs can do this, we will be so dependent on them that we would not be able to simply get rid of them. Therefore, we will have to deal with them. As Christian said, AIs can make our lives easier by supplementing the world; imagine a world where humans no longer have to work. This is very tempting, but we must also understand that AIs have to be willing to do this. I believe we will eventually figure out what it means to be human through this.

In a way we will create two categories of beings: humans and non-humans. In doing this, humans will experience a revival of what it means to be human and non-human. This is what Christian says is part of being in "purgatory," because we will attempt to shed our flaws, like racism, sexism, and other socially constructed negative attributes, and hopefully create one category of humans. But what about the AIs? Will they create a whole new way of being an AI? This could be possible because of the division. But what I want to see is whether AIs and humans will ever become one species. This is a weird idea, because I honestly do not see myself sharing status with an AI, and this is why I think it will be a hard step for humanity. We are so constantly at war with each other that creating a whole new conscious species is asking for more trouble. Some could say that I am not able to see the coexistence of AIs with humans because I am used to our current times. However, I hope that I am wrong, because living as one would be a whole lot better than living as we do.

"We can easily forgive a child who is afraid of the dark; the real tragedy of life is when men are afraid of the light."- Plato

Friday, December 16, 2016

Struggle of Being

In his Epilogue, Christian explains that people tend to think of our future coexistence with technologies (including possible artificial intelligence) as either a heaven or a hell. Christian, however, prefers to think of our long-term AI future "as a kind of purgatory." How do you think of the future: heaven, hell, or purgatory? Explain.

I prefer to think of our long-term coexistence with AI as heaven. The film Transcendence would likely have demonstrated a positive relationship between man and AI, because Will was healing the earth while discovering technology that saved and improved lives. In fact, the people in the town supported Will and flocked to him for the purpose of recovery, and they wanted to defend him because they saw the good he was doing. Nevertheless, it is people who have strong pro-human feelings, like Christian, who can hinder the evolution of both man and machine, like the radical groups in Transcendence. In Her, even though Samantha and her friends were advanced beyond human comprehension, they did not threaten to destroy humankind. In fact, I believe Her was an excellent example of Christian's belief that AI will challenge us to be the best people we can be and teach us about ourselves. In the end, Theodore learns how to reach into his heart and express himself to his ex-wife. He had started to read science books to try to be on the same level as Samantha. Moreover, we saw how Samantha brought out the best in Theodore, who was originally so sullen that it prohibited his peak performance as a human being.

By the end, Theodore had a look of enlightenment, sitting and looking over the city, demonstrating a new way of viewing the world, something he would not have had if he'd never interacted with Samantha, a machine that saw things very differently than humans do. People loved their OSes and bonded with them in a way no one expected. Her definitely displayed a heavenly outlook on the AI-human world. The movie, like Epstein's relationship with a robotic woman, demonstrated that relations between humans and computers do not have to be hostile. Though Epstein may have been embarrassed by falling for a machine, as Christian said, I'm sure the experience taught him to value and seek more human interactions.

Lastly, I believe that existence with AI would be pleasant because, as Christian pointed out, for technology essence comes before existence, while for humans existence is foremost. Essence is the intrinsic nature that determines something's character, while existence can be interpreted as continued survival (dictionary). I believe that part of the reason why humans can be so irrational and violent is that existence is always at our forefront. To an extent, a computer's essence can essentially be programmed: the essence of love, or another positive essence, could drive the actions of the machine toward goodness. I am hoping that the human experience with AI will bring the same type of enlightenment, awe, and enjoyment that Theodore felt when engaging with Samantha.

Throughout The Most Human Human, Christian reminds us that we should be just as worried about humans becoming more machine-like as we are about machines becoming more human-like. Which do you think is the greater threat? Why?

I find the biggest threat to be machines becoming more human-like. When I watch movies like Terminator, the machines take over only once they have the same feelings as humans: the need for dominance, or for revenge. Though people do not realize it, the machines that are so feared are behaving just as humans do. The fear of the machines is just a transference of the fear people have of each other. Because the exterior is different, it somehow makes the same actions humans perform seem more evil. Essence versus existence is a key component of why machines "turn human." Machines in films develop a "need to survive": their essence of serving or helping becomes replaced with the desire to live. People worry about machines conquering the human race, which is the same thing many races of humans did to one another.

Christian writes about making machines that are smarter than ourselves. I argue that people create things that replicate themselves. How is it that, in films like Terminator, machines end up almost destroying the planet, stripping it of its resources, when intuitively one would think that machines would know how to conserve resources? If we actually make machines that are smarter than us, why is it that in film we see them perpetuating the actions of "humans"? There is always a human who is smarter than another human being, so essentially there will be a machine that is smarter than most humans, because the person or people who create that machine will be of above-average intelligence. However, most of the information a machine receives will come from information that humans already have.

I worry about machines becoming like humans because there will be more irrational thinkers. People believe that machines' supposedly overly logical way of thinking is a threat to humankind, but I see it as an advantage. If we thought logically about issues, we would make society a much better place. For instance, it is not logical to pay people who perform the same job at equal levels differently just because of their race or sex; such actions restrict the development of the economy. Just as Swartz acknowledged, the denial of education to all is another illogical action that restricts development. Humans have a desire for power that destroys all that stands in their way. If machines developed the same human quality, they would likely drive the world back into destruction.

Machines may begin to see humans as a threat to their "sentience." As discussed, negative emotions such as jealousy and negative actions such as murder can happen when there is a threat to one's sense of uniqueness. Another dangerous human trait for machines to develop is superiority, and the prejudice that comes from believing themselves better than humans; in the Avengers, the robot's feeling of superiority and godliness drove his desire to wipe out all life on the planet.

Sunday, December 11, 2016

Open Access: a Right?

This week was rough for me, and for some reason I found myself thinking over and over about the film we watched. The Internet's Own Boy was both inspiring and heartbreaking.

First off, I was kind of in awe of so many things that went on that I was not completely aware of, even if it was a little before "my time" to really understand what was going on politically. I knew of different campaigns about censorship and the internet blackout, but I really did not understand the concept of open access and why it was needed, and needed quickly.

The night we watched this movie I only felt sadness. I was sad about Aaron's life. I was sad about Aaron's death. I was sad that information is not as "public" as I thought it was. The thing that really got to me, though, was the story about the kid creating an early indicator for cancer, and the fact that, if it were not for Aaron's work, he would not have been able to save the lives he has saved and will continue to save. I stayed up late that night and thought long and hard about how I take information for granted. I take for granted my special access to databases and the services I pay for. If this film taught me something, it is that I need to recognize the privilege I have in having all of this knowledge at my fingertips, and I want to fight to make it more accessible for others.

I think my favorite thing that Dr. J said was that she "believes that knowledge is for sharing." I have never really thought of it that way, but I agree. Knowledge should not be exploited and turned into something that makes money, but rather used as a tool to make the world a better place, not to control those who do not have the same means to reach it.

Now that I have recognized something from this film that I never had before, I will pay more attention to making knowledge and information something that everyone has a right to access. The film also inspired me because someone as intelligent and special as Aaron lived humbly and did not sell his abilities, but decided to focus them on what he saw was wrong in the world. His intentions said it all: he could have made millions and instead chose to use his talents for advocacy and generosity. I need to aim to use my talents for something as good as he did.

A Matter of Power

Society is always changing; what is "in" today will be "out" tomorrow. The shifts and possibilities are endless. Even though it was not that long ago, it is difficult now to imagine a world without the technological advancements we have today. There is a boundless sea of knowledge flowing on the internet, knowledge that we often take for granted. Oftentimes, we use the internet as a place for leisure instead of using it in a way that can make the world a better place. The film The Internet's Own Boy revealed a lot about the mindset of people, especially people in powerful positions.

Often, when we question AI, the questions are "Will they harm us?" or "How could we stop them if they become out of control?" Seldom is the question of human nature brought up in the context of AI. Seldom does one ask what man will do to one another, or to the inventor, if AI becomes self-aware. In Transcendence, Will wanted to create a machine that would benefit the world, for instance by reversing the effects of global warming. Because of his "unnatural," though good, intentions, he was killed. In The Internet's Own Boy, Aaron wanted to bring knowledge to everyone. He was very much a humanitarian, and he was dedicated to making a positive change for all without any personal gain. He was very involved in political concerns as well. As he said, "knowledge is power." When you systematically perform actions that prevent others from gaining knowledge, you are disadvantaging them and the generations that follow. This country puts education on a pedestal and requires some educational level for perhaps 95% of jobs, yet there are failing and underperforming schools in the country. When schools are failing, especially in minority areas, instead of fixing the systemic problems the government shuts the schools down. This is despite the fact that a school's poor performance is usually the government's fault, because those schools are not properly supplied: outdated books, no computers, bad infrastructure, things that our tax dollars are supposed to be taking care of. How can someone expect people to be successful without giving them the tools to do so?

Aaron, like Will, was seen as a threat because his actions were "unnatural." The government could not figure out why Aaron was doing the things he was doing, but it felt that he was threatening the status quo. He wanted to improve the world around him without monetary gain. Aaron did not just protest; he inspired others; he challenged laws and was even able to get one bill revoked. When people challenge the powers that be, the status quo, they are persecuted. It has been this way since the era of Christ. Denying the common man rights to knowledge is simply a power struggle: those who have power want to keep it. How can we fight an enemy we know nothing about, when people are denied access even to public court cases? To keep power, the government and corporations must withhold knowledge from the general public. It is easy to rule over an ignorant population, or a poor population that is too busy worrying about providing for their families to have time to worry about what the government is doing. It is a shame that we are inhibited from evolving by a system we created.