By Xieyang Jessica Qiao
Medill Reports
Artificial intelligence (AI), machine learning (ML) and deep learning (DL): these buzzwords are used so interchangeably that their meanings blur together. But while the technologies are intertwined, they operate at different levels of application.
DL is a subset of ML, and ML is a subset of AI, the umbrella term common to all three. In a diagram, AI would be the largest circle, encapsulating ML and DL. The progression toward the smaller circles leads to more sophisticated, brain-like systems that analyze data and learn from it for new applications.
“Human intelligence exhibited by machines, that’s the formal definition of AI,” said Jason Mayes, senior creative engineer at Google. “Now, there are two types of AI: artificial general intelligence (AGI) and narrow AI.”
Hollywood movies such as “The Terminator” revel in the idea of AGI, where machines can successfully perform any intellectual task a human being can. While human beings might automate products and services in the future with AGI, we are now still in a phase called narrow AI.
Mayes said narrow AI concentrates on, and excels at, one or two things. In those particular tasks, the system can replicate and even outperform human intelligence. In medicine, for example, doctors examining brain scans have missed tumors because the imagery generated by MRI machines was grainy. Narrow AI can mitigate this risk and improve the quality of treatment.
“The narrow AI system can highlight where the tumor might be and doctors can inspect that area in greater detail,” Mayes said. “Now, we get this combination of human and AI working together to spot more of these cases rather than miss them.”
The spectrum of AI applications is wide-ranging, including speech recognition, natural language processing, regression to predict numeric values, and so forth.
“If you have a house in San Francisco that is 1,000 square feet, then we can estimate its price if we have enough data,” Mayes said.
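The estimate Mayes describes is a regression problem: fit a function from past sales, then read off a prediction. A minimal sketch follows; the figures and the square-footage-to-price relationship are invented for illustration, not drawn from real San Francisco data.

```python
# Illustrative sketch of regression predicting a numeric value:
# fit a line y = a*x + b to (square feet, price) pairs by
# ordinary least squares, then estimate a new house's price.

def fit_line(xs, ys):
    """Fit y = a*x + b by ordinary least squares."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

# Hypothetical training data: (square feet, price in $1,000s)
sizes = [800, 1200, 1500, 2000, 2500]
prices = [900, 1250, 1500, 1900, 2400]

a, b = fit_line(sizes, prices)
estimate = a * 1000 + b  # estimated price for a 1,000 sq ft house
print(f"Estimated price: ${estimate:.0f}k")
```

With more data, and more features than square footage alone, the same idea scales up to the kind of estimate Mayes describes.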
If AI is the end result, ML operates at the implementation level: it involves creating algorithms that learn complex functions or patterns from data sets and make predictions based on them. Rather than being programmed with explicit rules to follow, an ML system learns from examples.
“The key part of machine learning is that, when you write a program to look for cats and it learns to do that, you can use the same sets of code to look for dogs,” Mayes said. “You just change the training data you give. Instead of using cat photos, you use dog photos. But the actual programming code doesn’t change.”
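Mayes’ point, that the code stays fixed while only the training data changes, can be sketched with a toy “classifier.” Everything here is hypothetical: the feature numbers are invented stand-ins for image features, and the nearest-centroid approach is far simpler than anything used in practice.

```python
# Sketch: the same training code, reused with different data.
# A toy nearest-centroid model over made-up 2-D "image features".

def train(examples):
    """Return the centroid of the examples' feature vectors."""
    xs = [f[0] for f in examples]
    ys = [f[1] for f in examples]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def looks_like(features, centroid, threshold=1.0):
    """True if the features fall close enough to the learned centroid."""
    dist = ((features[0] - centroid[0]) ** 2 +
            (features[1] - centroid[1]) ** 2) ** 0.5
    return dist < threshold

cat_photos = [(1.0, 1.1), (0.9, 1.0), (1.1, 0.9)]  # invented features
dog_photos = [(3.0, 3.1), (2.9, 3.0), (3.1, 2.9)]

cat_model = train(cat_photos)  # same code...
dog_model = train(dog_photos)  # ...different training data

print(looks_like((1.0, 1.0), cat_model))  # True
print(looks_like((1.0, 1.0), dog_model))  # False
```

Swapping `cat_photos` for `dog_photos` changes what the model recognizes; `train` and `looks_like` never change, which is exactly the property Mayes highlights.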
DL, in turn, focuses on a narrower subset of ML techniques and promises to take the advances of AI to another level. It can be broadly defined as “a technique for implementing ML,” Mayes said.
DL interprets data features using what is known as a deep neural network (DNN), a type of artificial neural network (ANN) in which algorithms are arranged in multiple interconnected layers that feed into one another. The depth of these stacked layers gives the technique its name; the network mimics the human brain while learning “patterns of patterns,” Mayes said.
The biological brain has about 86 billion interconnected neurons, some of which are good at detecting certain things when activated. For instance, there may be a bunch of them dedicated to recognizing an eye, others for an ear, a nose or a mouth.
“If these activations happen simultaneously, your brain tells you that you saw a face,” Mayes said. “An artificial neural network is essentially trying to replicate that using mathematics and statistics. Each neuron in the next layer is connected to every neuron in the previous and data flows forward only.”
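The structure Mayes describes, where every neuron in a layer connects to every neuron in the previous layer and data flows forward only, can be sketched in a few lines. The weights and inputs below are arbitrary numbers chosen for illustration; a real network would learn them from training data.

```python
import math

# Minimal sketch of a fully connected, feed-forward network:
# each neuron computes a weighted sum of ALL outputs from the
# previous layer, then applies an activation function.

def layer(inputs, weights, biases):
    """One dense layer: per neuron, a weighted sum of every input
    plus a bias, passed through a sigmoid activation."""
    outputs = []
    for w_row, b in zip(weights, biases):
        z = sum(w * x for w, x in zip(w_row, inputs)) + b
        outputs.append(1 / (1 + math.exp(-z)))  # sigmoid activation
    return outputs

# Two stacked layers: 3 inputs -> 2 hidden neurons -> 1 output neuron.
hidden = layer([0.5, 0.2, 0.9],
               weights=[[0.4, -0.6, 0.2], [0.1, 0.8, -0.3]],
               biases=[0.0, 0.1])
output = layer(hidden,
               weights=[[1.2, -0.7]],
               biases=[0.05])
print(output)
```

Stacking many such layers, with early ones detecting simple features and later ones combining them, is what makes the network “deep” and lets it learn the “patterns of patterns” Mayes describes.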
Through several stages of data processing, each building on the previous one, an ANN automatically generates identifying characteristics that let it recognize high-level concepts. Yisong Yue, a machine learning professor at the California Institute of Technology, said an ANN can take an image as input and predict an output.
“Does the image contain a cat? Yes or no? That’s at a high level what an artificial neural network tries to do,” Yue said. “It takes a look at all the pixels in the image. It looks at different combinations of these pixels. It sees that a different combination of pixels looks like the picture of a cat.”
Image recognition and voice search are just two services sparked by progress in DL. In the healthcare industry, startups have begun using DL to train algorithms for tasks such as diagnosing skin cancer. But while researchers have made tremendous headway in ML, they still encounter problems embedded in these systems, one of which is susceptibility to unintended bias.
“There are risks of bias if you don’t train your systems properly,” Mayes said. “You need variety in your training data to reduce bias in as many ways as possible for a meaningful system to work for most people.”
For these systems to become widely distributed to the general public, things have to work in a way that makes sense from a market perspective, Yue said.
“The difference between something that works in a research lab and something that works as a product that millions of people can use is the effort that a company spends to develop that technology,” Yue said. “Is there a way to monetize these technologies and is the effort worth the payoff?”