Getting on the AI Learning Curve: A Pragmatic, Incremental Approach
After decades of promise and hype, AI is now seemingly everywhere. Over the past several years, the necessary ingredients have come together to propel AI beyond the research labs into the marketplace: powerful, inexpensive computer technologies; huge amounts of data; and advanced algorithms including machine learning.
While AI is likely to become one of the most important technologies of our era, we’re still in the early stages of deployment, especially outside leading-edge technology companies. But, as AI continues its rapid progress, it’s not too early to ask a few important questions: What is AI’s overall value to the economy? What are the biggest application opportunities? And what are AI’s most serious challenges and limitations?
To address these questions, McKinsey recently published a discussion paper on the marketplace potential of AI. The paper is particularly focused on machine learning and related technologies, and is based on a detailed analysis of more than 400 use cases across 19 industries and 9 business functions.
The paper’s overriding finding is that “two-thirds of the opportunities to use AI are in improving the performance of existing analytical use cases.” This is a very interesting insight. AI is now being successfully applied to tasks that not long ago were viewed as the exclusive domain of humans, such as machine translation, natural language processing and defeating the world’s top Go players. Yet only 16% of the use cases studied by McKinsey are greenfield cases, where only machine learning techniques can be used. For the vast majority of the use cases, 69%, the key value of machine learning is to improve performance beyond that provided by traditional analytic techniques. And in the remaining 15% of cases, machine learning provides limited additional performance over existing analytical methods.
The data requirements for machine learning are substantially different from those of other analytic methods in a number of dimensions. The performance of traditional analytics tends to plateau as the data set size increases. However, the performance of properly trained machine learning techniques will significantly improve as the data sets get larger. Machine learning methods are particularly valuable in extracting patterns from complex, unstructured data, including audio, speech, images and video.
“Deep learning methods require thousands of data records for models to become relatively good at classification tasks and, in some cases, millions for them to perform at the level of humans,” notes the paper. Such advanced AI methods are thus particularly valuable in cases where analytics techniques are already in use, but much more data, unstructured as well as structured, is now available. “However, if a threshold of data volume is not reached, AI may not add value to traditional analytics techniques.”
McKinsey estimated the range of annual value to the global economy for both AI and for all analytics techniques, including AI. It did so by analyzing the value currently being created in each of the 19 industries and 9 business functions in its research study, and projecting their future potential value. AI has the potential to create between $3.5 trillion and $5.8 trillion in value annually, which is about 40% of the $9.5 trillion to $15.4 trillion potential annual impact of all analytics. AI’s potential value amounts to between 1% and 9% of 2016 revenue of the various industries analyzed, with the biggest potential in retail, transport and logistics, and travel.
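The “about 40%” figure follows directly from the ranges quoted above. A minimal back-of-the-envelope check (in Python, purely illustrative):

```python
# Sanity check of the quoted McKinsey estimates (trillions of US
# dollars per year; each pair is the low/high end of the range).
ai_value = (3.5, 5.8)          # potential annual value from AI
all_analytics = (9.5, 15.4)    # potential annual value from all analytics

# AI's share of the total analytics opportunity, at each end of the range
shares = [round(ai / total, 2) for ai, total in zip(ai_value, all_analytics)]
print(shares)  # [0.37, 0.38] -> consistent with "about 40%"
```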
The use case analysis also showed that the biggest AI opportunities are in top-line-oriented business functions like marketing and sales, and in bottom-line-oriented operational functions like supply-chain management and manufacturing. The potential annual value of AI in marketing and sales is between $1.4 and $2.6 trillion, out of a total analytics annual potential of $3.3 to $6.0 trillion. For supply-chain management and manufacturing, the AI annual potential is $1.2 to $2.0 trillion out of a total analytics annual potential of $3.6 to $5.6 trillion.
These findings suggest that companies seeking to adopt AI in their operations should leverage and ramp up their existing analytics capabilities. The use cases show that AI creates value precisely in those areas where other analytics methods are already doing so. Companies should therefore strengthen their analytics capabilities, add machine learning skills, and make sure that they have access to the necessary data. Such a pragmatic, incremental approach to getting on the AI learning curve makes much more sense than attempting to tackle advanced, greenfield AI problems, which require the kinds of skills and data that are generally available only to the tech giants.
Similar pragmatic, incremental advice for building AI capabilities was offered in a recent Harvard Business Review article by Tom Davenport and Rajeev Ronanki. Based on a study of over 150 AI-based projects, the authors found that AI can play a major role in three important business functions: advanced process automation, cognitive insight through advanced analytics, and cognitive engagement with customers and employees.
Not surprisingly, the majority of the projects studied, 71%, were using AI for advanced process automation. It’s the least expensive and easiest AI capability for companies to implement, since they’ve long been engaged in the automation of business processes. A new era of smart, connected processes is now emerging, driven by datafication: the ability to capture as data many aspects of business and society that have never been quantified before, enabling just about every product, service and process to become smart.
Cognitive insight projects were the second most common type in the study, at 38%. These projects were starting to use machine learning algorithms to detect patterns in vast volumes of data and interpret their meaning, a kind of analytics on steroids. This is similar to the McKinsey finding that enhanced analytics is a pragmatic way to get on the AI learning curve. Only 16% of the projects were experimenting with advanced AI capabilities, some still in the research stage, to engage with employees and customers, e.g., intelligent agents and natural language chatbots.
The McKinsey paper notes that to fulfill its promise, AI must overcome a number of serious challenges beyond the onerous data requirements discussed earlier.
- First is the labeling of the massive amounts of training data required for supervised machine learning, which often must be done manually. Promising new techniques are emerging to address this challenge.
- For many applications, e.g., healthcare, it’s often difficult to obtain sufficiently large and comprehensive data sets for training machine learning algorithms.
- Why was a certain decision reached? It’s still quite difficult to explain in human terms the results from large, complex AI applications.
- Another big challenge is the difficulty of transferring training among related machine learning applications. Each application must be separately trained, even if it’s similar to a previously trained application.
- The final challenge is the risk of bias in data and algorithms. When applied incorrectly, AI applications risk perpetuating existing social and cultural biases.
“Above all, our analysis of use cases suggests that successful AI adoption will require focus and setting of priorities,” notes the McKinsey paper in conclusion. “Its value is tremendous - and looks set to become even more so as the technologies themselves advance. Identifying where and how that value can be captured looks likely to become one of the key business challenges of our era.”