AI has been seeing crazy amounts of news hype recently. Robots are learning how to feel, how to do backflips, and how to fool lonely men into believing that their companion dolls have personality. Meanwhile, we're learning that our smartest algorithms make mistakes with frightening regularity and are vulnerable to the kinds of manipulation that make a stop sign look like a 45-mile-per-hour speed-limit sign. AI seems both to be accelerating at an uncontrollable pace and to be inching toward promises that turned out to be more complicated than initially anticipated. We can talk to our smartphones now, and they actually understand us some of the time. Compared to the primitive early text-to-speech applications, Siri and Alexa are freaking magical, but they're still a far cry from the science fiction likes of HAL or C-3PO.
While most of the musings around artificial intelligence center on either its utopian or its apocalyptic potential, there's a lot of nuance missing from how "smart" algorithms are already redefining the landscape. The classic treatment of malevolent machines shown in The Matrix asks the usual big question of AI debates: what if it gets really smart and turns out to not be so friendly? Like much of science fiction, this question about future technology is actually an opportunity to reflect on human nature and the current state of society. What we really fear is that we'll build something that amplifies our worst impulses for domination and control, like a giant chimpanzee mind in the cloud. While no one really knows how a hyper-intelligent computer would act, the evidence suggests that our efforts so far have been problematic, at best.
The current major applications for AI are things like image recognition, spotting fake news, and recommending media. For these algorithms to be any good, they need to be trained on human input, like when a captcha requires you to identify images of overpriced hipster nightclubs out of a lineup in order to prove that you are human. Maybe it's just me, but this already eerily echoes a world where we work for the machines (or at least for the masters of the machines). At the same time, the results generated by AI are generally seen as neutral or impartial, while being anything but. Researchers have found that computers can better identify the objects in an upper-class household than those in more modest dwellings. A YouTube insider reveals that the AI making recommendations is actually toxic, working to keep you glued to the screen at all costs. As algorithms are quietly slipped into the decision-making processes of social services and governmental agencies, questions of bias and underlying human rights are too easily overlooked.
Of course, there's plenty of bright side to the conversation too. Smarter robots could free us from tedious labor and be potent allies in overcoming the crises currently threatening human life on Earth. Ultimately, it's not about designing the best technology, but rather about facing the deep-rooted human problems that get unconsciously carried into our quest to build intelligent machines. As we are increasingly presented with algorithmic options for everything from planning a marketing campaign to keeping our homes at a comfortable temperature, it's worth asking: is this actually intelligent? Is it making us more, or less, human?