by Nada R. Sanders, PhD, and John D. Wood, Esq., co-authors of “The Humachine: Humankind, Machines, and the Future of Enterprise”
Artificial intelligence is top of mind for every corporate executive. It dominates shareholder calls and fans the flames of financial expectations. Its powers and potential give stock prices a bump and bolster investor confidence. But too many companies are reluctant to address AI’s very real limits.
It’s become taboo to discuss AI’s shortcomings and the limitations of machine learning, neural nets, and deep learning. However, if we want to strategically deploy these technologies in enterprises, we need to understand AI’s six distinct weaknesses.
AI lacks common sense.
AI may be able to recognize that within a photo, there’s a man on a horse. But it probably won’t appreciate that the figures are actually a bronze sculpture of a man on a horse, not an actual man on an actual horse.
Consider the lesson offered by Margaret Mitchell, a research scientist at Google. Mitchell helps develop computers that communicate about what they see and understand. As she feeds images and data to AIs, she asks them questions about what they “see.” In one case, Mitchell fed an AI lots of input about fun things and activities. When Mitchell showed the AI an image of a koala bear, it said, “Cute creature!” But when she showed the AI a picture of a house violently burning down, the AI exclaimed, “That’s awesome!”
The AI selected this response due to the orange and red colors it scanned in the photo; these fiery tones were frequently associated with positive responses in the AI’s input data set. It’s stories like these that demonstrate AI’s inevitable gaps, blind spots, and complete lack of common sense.
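The failure mode in this anecdote, a model latching onto surface features that merely correlate with its labels, can be sketched with a toy classifier. The data and the nearest-centroid approach below are hypothetical, chosen only to illustrate the idea; real image models are far more complex, but the underlying risk of spurious correlation is the same.

```python
# A toy "sentiment" model that sees only an image's average color (R, G, B).
# All training examples are invented for illustration.
training = [
    ((230, 120, 40), "positive"),   # sunset photo
    ((250, 140, 30), "positive"),   # autumn leaves
    ((220, 100, 50), "positive"),   # campfire with marshmallows
    ((80, 90, 110),  "negative"),   # gray storm clouds
    ((60, 70, 90),   "negative"),   # muddy flood water
]

def centroid(points):
    """Mean color of a list of (R, G, B) tuples."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

# One centroid per label: the average color of its training examples.
centroids = {
    label: centroid([color for color, lab in training if lab == label])
    for label in {"positive", "negative"}
}

def predict(color):
    """Assign the label whose centroid is nearest in color space."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(color, centroids[label]))

burning_house = (240, 110, 45)   # dominated by flame oranges and reds
print(predict(burning_house))    # "positive" — it learned color, not meaning
```

Because every warm-toned training image happened to be labeled positive, the model confidently calls a house fire positive. Nothing in its inputs ever encoded what fire is.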
AI bakes in bias.
There’s an increasing awareness that machine-learning algorithms encode biases and discrimination into outcomes. After all, algorithms simply look for patterns in the data. Whatever is embedded in the data is what the algorithms will repeat.
A well-known example is when Google Flu Trends overestimated the incidence of flu. The theory went that if people get the flu, they’ll turn to Google to search for “flu” and related terms. But this turned out to be a misleading method of gathering data. Searches for “flu” actually reflected how often the flu made it into the news, rather than how many people were in bed, sick and miserable. The lesson? What happens in the digital world does not always reflect reality. Without human interpretation and context, these types of outcomes can completely mislead an organization.
AI is data-hungry and brittle.
Neural nets require far too much data to match human intellects. In most cases, they require thousands or millions of examples to learn from. Worse still, each time the system needs to recognize a new type of item, it typically has to be retrained from scratch.
Algorithmic problem-solving is also severely hampered by the quality of data it’s fed. If an AI hasn’t been explicitly told how to answer a question, it can’t reason it out. It cannot respond to an unexpected change if it hasn’t been programmed to anticipate it.
Today’s business world is filled with disruptions and events — from physical to economic to political — and these disruptions require interpretation and flexibility. Algorithms can’t do that.
AI lacks intuition.
Humans use intuition to navigate the physical world. When you pivot and swing to hit a tennis ball or step off a sidewalk to cross the street, you do so without a thought. Engineering a robot to do the same would demand so much processing power that it’s almost inconceivable we could build it.
Algorithms get trapped in local optima.
When assigned a task, a computer program may settle on a solution that is nearby in its search space, known as a local optimum, and fail to find the best of all possible solutions, the global optimum. Finding the best global solution requires understanding context, including changing context, or thinking creatively about the problem and its potential solutions. Humans can do that. They can connect seemingly disparate concepts and think outside the box to solve problems in novel ways. AI cannot.
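The trap is easy to see in miniature. The sketch below, a hypothetical one-dimensional "fitness landscape," shows greedy hill-climbing: the algorithm only ever moves to a better neighbor, so where it ends up depends entirely on where it starts.

```python
# A toy fitness landscape with two peaks: a local one (value 4 at index 3)
# and the global one (value 8 at index 10). Values are invented for illustration.
landscape = [1, 2, 3, 4, 3, 2, 1, 2, 4, 6, 8, 6, 4]

def hill_climb(start):
    """Greedily move to the better neighbor until no neighbor improves."""
    i = start
    while True:
        neighbors = [j for j in (i - 1, i + 1) if 0 <= j < len(landscape)]
        best = max(neighbors, key=lambda j: landscape[j])
        if landscape[best] <= landscape[i]:
            return i          # stuck: no neighbor is better
        i = best

print(hill_climb(1))   # 3  — climbs the nearer hill and stops at the local peak
print(hill_climb(8))   # 10 — only this start reaches the global peak
```

Started at index 1, the climber reaches value 4 and stops, even though a far better solution exists a few steps away across a valley it refuses to descend into. Techniques like random restarts or simulated annealing exist precisely to mitigate this, but they search more broadly rather than "understand" the problem.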
AI can’t explain itself.
AI may come up with the right answers, but even researchers who train AI systems often do not understand how an algorithm reached a specific conclusion. This is deeply problematic when AI is used for medical diagnoses, for example, or in any environment where decisions have non-trivial consequences. What the algorithm has “learned” remains a mystery to everyone. Even when the AI is right, people are reluctant to trust analytical output they cannot interrogate.
AI offers tremendous opportunities and capabilities. But it can’t see the world as humans do. Instead, it provides the potential for humans to focus on more meaningful aspects of work that involve creativity and innovation. As automation replaces more routine or repetitive tasks, it will allow workers to focus on inventions and breakthroughs, which ultimately fuels an enterprise’s success.
Nada R. Sanders, PhD is an internationally recognized thought leader and expert in forecasting, analytics, global supply chain intelligence, and sustainability. She is Distinguished Professor at D’Amore-McKim School of Business at Northeastern University and a lifelong fellow of the Decision Sciences Institute.
John D. Wood, Esq. is an attorney, author, and advocate for sustainable business practices; a member of the New York and Texas State Bar Associations; and founder of The Law Firm of John D. Wood, PLLC. He delivers continuing legal education on topics including open-source software and copyleft, artificial intelligence in the law, and deepfakes.
Their new book is “The Humachine: Humankind, Machines, and the Future of Enterprise”. Learn more at TheHumachineBook.com.