One of the things that makes the 1982 film Blade Runner so strange is that its A.I. replicants are
basically indistinguishable from humans. They look the same, talk the same, and
act the same. In his book The Most Human Human, Brian Christian explains that a key part of the Turing Test,
which tries to determine whether a computer can truly "think" like a human, is
speech and the phrasing of words. Christian cites Roger Levy, who says, "I
think that in my experience with statistical models of language, it's the
unboundedness of human language that's really distinctive." Humans can make up
words, use secret codes, type abbreviations in text messages, use nicknames,
and pull countless other tricks when communicating. Computers, by contrast, are programmed
with a dictionary. They cannot understand the invented languages and codes that
humans can, and this confuses them.
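To make the dictionary point concrete, here is a toy Python sketch of my own (an illustration only, not anything from Christian's book or from any real system): a program whose entire "understanding" of language is a fixed word list will flag any invented word or texting abbreviation as unknown, while a human reader infers the meaning from context.

    # Toy illustration: a "computer" whose language knowledge is just a fixed dictionary.
    # Any made-up word, nickname, or texting abbreviation falls outside of it.
    KNOWN_WORDS = {"the", "tortoise", "lies", "on", "its", "back"}

    def flag_unknown(sentence: str) -> list[str]:
        """Return the words a dictionary-only program cannot recognize."""
        return [w for w in sentence.lower().split() if w not in KNOWN_WORDS]

    # A human reads "sunbaked" and "brb" effortlessly; the program just shrugs.
    print(flag_unknown("the tortoise lies sunbaked on its back brb"))
    # -> ['sunbaked', 'brb']

Real statistical models handle language far better than this, of course, but the inventive, unbounded side of language that Levy describes is still exactly where machines struggle.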
In Blade Runner, the "Voight-Kampff" test badly confuses the replicants
for two reasons. The first is that the test is meant to measure how
the subjects handle the questions emotionally. Since the replicants are
machines and do not actually have emotions, this does not work very
well, as we see when Holden tests Leon. Leon cannot produce the emotional response
Holden wants to see, and he eventually reacts by shooting Holden. But, as
Brian Christian writes in his book, the language of the questions also confuses
the replicants. For example, Holden asks Leon a hypothetical question about why
he does not help a tortoise flipped on its back in a desert. Leon at first
asks what a tortoise is. He does not recognize the
word, just as many of the programs entered in the Turing Test would not. He
also cannot answer the question for the simple reason that it is hypothetical:
there are no facts involved, and the situation is not real.
Again, just as in the Turing Test, the machine cannot understand.
I think both Blade Runner and The Most Human Human remind us that, while computers
and A.I. systems hold vast stores of knowledge that at times exceed
our own, they are still, simply, programs. They are not
humans, and they lack certain human abilities: the inventive, unbounded use of language, genuine
emotion, and the capacity to reason about hypothetical situations. As to whether this will always be the
case, I cannot say. For now, though, computers are
only as smart as what they are programmed to do. They take code words and
hidden phrases literally, something humans can easily see past.
Even if computers match humans in sheer knowledge, these
small but key differences still place humans well ahead, because
they let us outsmart the technology.
But what if they can learn to make up words as we do? There is, for the most part, a pattern to how we coin new words. If an A.I. can learn that pattern and invent words of its own, then what?
You're correct with the "for right now" part of your blog! As of now, robots are definitely not close to becoming anything we would call "human." Take the robot Sophia, for example. She is the most advanced humanoid robot we have right now: she can learn and grow from us, and she is very aware of the fact that she is a robot. Her creators haven't gotten to hypothetical situations yet, but I think that's the important part. As she learns and grows, her "humanity" will become more solidified. Eventually robots might grow beyond their programs and come to understand hypothetical situations and the quirks of language. One of these days humans won't be able to outsmart them anymore; we'll be on an equal playing field, just as the chess program Deep Blue eventually beat the human champion it played. That day may not be far off!