The Other Kinds of Artificial Intelligence

Machine learning and other approaches can feed into AI, but they’re not the same.

Artificial intelligence is seemingly everywhere these days, from self-driving cars and virtual assistants to medical innovations. References to AI are so pervasive that it seems to go by many different names. Depending on the project at hand, you might hear talk of machine learning, deep learning or cognitive computing, all of which produce a kind of “thinking machine.” But while those terms sometimes get used interchangeably, they’re not exactly the same thing.

AI, of course, is the most commonly used term, because it is the umbrella that sits over the others and also because it has long been alive in the cultural imagination, for both good and ill, from the beneficent Data in “Star Trek: The Next Generation,” to the malevolent Agent Smith in “The Matrix,” and many other manifestations in between.

In the real world, AI has a similar reputation. While its benefits are widely touted, it also has been painted as a potential threat to humanity by Elon Musk and others (including the likes of Bill Gates and Stephen Hawking), with Musk comparing AI to “summoning the demon.”

Headlines naturally have played up the potential end of the human race, but Musk’s more immediate point was the need to gain a better understanding of AI and how it works before it grows too powerful and, possibly, gets the better of us. To that end, here’s a brief look at AI, machine learning, deep learning and cognitive computing, and how they complement and differ from each other.

Artificial Intelligence

AI is the overarching term, and the ultimate goal, for machines that can think for themselves. The term itself was coined in 1955 by John McCarthy, then a professor at Dartmouth, who three years later developed the LISP programming language, long a mainstay of AI research.

AI is used in a growing number of everyday applications, from Netflix recommendations based on a user’s history to voice and image recognition, and some of its biggest advances have come in the last few years. Error rates in voice and image recognition systems, for instance, have fallen considerably since 2010. In late 2016, Google Translate switched to a neural network-based system, marking, from a user’s point of view, an overnight leap in quality, almost like going from someone trying to communicate with a phrase book to someone who was fairly fluent. In July 2017, Facebook shut down one of its AI experiments after developers discovered its chatbots had drifted into a shorthand language of their own.

But as fast and efficient as these systems are, they still can’t think like a human.

Current AI systems exist in the realm of what’s been called “vertical” artificial intelligence, using algorithms to sort through troves of data, weigh various factors and reach conclusions. But they’re generally tailored to specific tasks, such as diagnosing disease, managing a business process, driving a car within the rules of the road, or using a person’s history to predict their personal preferences.
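To make the idea concrete, here’s a minimal, hypothetical sketch of a “vertical” system: a routine narrowly built for a single job, guessing whether a viewer will enjoy a new title based on their watch history. The titles, genres and scoring rule are invented purely for illustration.

```python
# Hypothetical "vertical" AI sketch: one narrow task, one hand-rolled scoring rule.
# The data and threshold below are made up for illustration only.
history = {
    "Star Trek": {"sci-fi", "space"},
    "The Matrix": {"sci-fi", "action"},
}

def likely_to_enjoy(title_genres):
    # Score a candidate title by how much its genres overlap the viewing history.
    overlap = sum(len(title_genres & genres) for genres in history.values())
    return overlap >= 2  # arbitrary cutoff for this toy example

print(likely_to_enjoy({"sci-fi", "thriller"}))  # True: it resembles what was watched before
```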

The Holy Grail is General AI, which essentially would allow a system to think like a human in any situation — and which is harder than it might seem. Among the barriers are the subtleties in processing language and handling a full spectrum of ideas at once. AI systems can beat humans at chess and "Jeopardy," for instance, but can’t really handle a wide-ranging conversation or score high on the SATs. They are excellent at identifying images in a database, but less impressive in real-world conditions where inconsistent lighting, shadows or other interference can create confusion. While they can perform specific, complex tasks with greater speed and efficiency than humans can, a full General AI system appears to be pretty far away.   

Nevertheless, the approaches that make up facets of AI have been making great strides.

Machine Learning

At its root, machine learning, or ML, represents a major new direction in computing. Rather than following explicit programming to execute a specific task, an ML algorithm “learns” from examples, allowing it to reach conclusions that were never directly programmed into its software. And although ML is a subset of AI and is used in AI programs, not all AI uses ML.
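As a small illustration of learning from examples rather than from hand-written rules, the sketch below fits a decision tree on a handful of labeled data points and then predicts a case it has never seen. It assumes the scikit-learn library is installed, and the features (hours of sci-fi versus documentaries watched) are invented for illustration.

```python
# A minimal sketch of "learning from examples" (assumes scikit-learn is installed).
from sklearn.tree import DecisionTreeClassifier

# Toy training examples: [hours of sci-fi watched, hours of documentaries watched]
X_train = [[10, 1], [8, 0], [1, 9], [0, 7]]
# Labels to learn: 1 = "recommend the new sci-fi show", 0 = "don't"
y_train = [1, 1, 0, 0]

model = DecisionTreeClassifier()
model.fit(X_train, y_train)      # the decision rule is inferred from data, not hand-coded

print(model.predict([[9, 2]]))   # -> [1]: this viewer looks like the sci-fi watchers
```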

The concept of machine learning dates to 1950, when Alan Turing proposed what would become known as the “Turing Test,” in which a human and a computer engage in a conversation to see whether the computer can convince its interlocutor that it, too, is human. Computers weren’t up to the task until 2014, when a chatbot posing as a 13-year-old boy from Ukraine convinced 33 percent of a panel of judges it was human, surpassing the 30 percent threshold long established as a passing grade for the test.

Bots now seem to be passing the test at an alarming rate, which points to how quickly ML systems have advanced in recent years. Back in 1957, the economist and AI pioneer Herbert Simon predicted a computer would beat humans at chess within a decade. It took 40 years before IBM’s Deep Blue took down Garry Kasparov, but reality is now overtaking predictions.

Google’s AlphaGo program, for instance, defeated a grand master at the Asian strategy game Go in 2016, 10 years earlier than experts had predicted. In another example, ML is being used to cool data centers and other buildings, finding greater efficiency in systems already optimized by human controllers. Machine learning algorithms are widespread in big data analytics and data mining.

Deep Learning

Deep learning is a type of machine learning that sifts data through successive layers of artificial neural networks, in which nodes act as neurons and the output from one layer becomes the input for the next. Tech companies such as Google and Amazon are developing deep learning for a variety of uses, as are government research agencies: the Defense Advanced Research Projects Agency with its Deep Purposeful Learning program, nicknamed Deep Purple, and the National Institute of Mental Health with its research into explainable AI.
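The layered idea can be shown in a few lines. Below is a toy forward pass in which the output of one layer of simulated “neurons” becomes the input to the next. The weights here are random, and the training step (backpropagation), which real frameworks handle, is omitted; it assumes NumPy is installed.

```python
# Toy forward pass through two stacked layers: each layer's output feeds the next.
# Weights are random placeholders; training (backpropagation) is omitted.
import numpy as np

def relu(x):
    return np.maximum(0, x)   # simple activation applied at each layer

rng = np.random.default_rng(0)
x = rng.random(4)                               # input vector with 4 features

W1, b1 = rng.random((5, 4)), np.zeros(5)        # layer 1: 4 inputs -> 5 "neurons"
W2, b2 = rng.random((3, 5)), np.zeros(3)        # layer 2: 5 inputs -> 3 outputs

h = relu(W1 @ x + b1)                           # first layer's output...
y = relu(W2 @ h + b2)                           # ...becomes the second layer's input
print(y)
```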

Deep learning is being applied to the vision systems in self-driving cars, natural language processing and fraud detection, among other uses.

Cognitive Computing

Cognitive computing is perhaps the closest of these approaches to AI itself, since its goal is to create a computer system that simulates human thought. It’s used in a number of AI applications, including robotics, virtual reality and natural language processing. And it uses some of the same tools as machine and deep learning systems, such as neural networks and machine learning algorithms, combining them with cognitive science’s understanding of how the brain works.

IBM’s Watson is the best-known example of cognitive computing, and while its successes in a variety of fields point to the potential of General AI, it also shows some of the limitations that remain. Watson, after all, isn’t a single entity: different versions of it have been applied to cancer research, financial investment, travel planning and security analytics.

The Next Steps

The tools of artificial intelligence have made great strides in recent years, and their development is likely to accelerate. But most researchers say a machine that can think and act like a human is still pretty far down the road. Understanding how these systems work is one of the keys to ensuring they serve humans, not the other way around.
