Automation, leveraging artificial intelligence (AI) and other technologies, has opened up new possibilities, and the pace of adoption has been rapid. Institutions of all sizes globally are using automation to drive value. According to the 2018 McKinsey Automation Survey, 57 percent of the 1,300 institutions surveyed had already started on this journey, with another 18 percent planning to begin within the next year.
When done right, automation has proven to deliver real benefits, including the following:
- Distinctive insights: Hundreds of new factors to predict and improve drivers of performance
- Faster service: Processing time reduced from days to minutes
- Increased flexibility and scalability: Ability to operate 24/7 and scale up or down with demand
- Improved quality: From spot-checking to 100 percent quality control through greater traceability
- Increased savings and productivity: Labor savings of 20 percent or more
Four practices are strongly correlated with success in automation:
- Understand the opportunity and move early: Start taking advantage of automation and AI by assessing the opportunity, identifying the high-impact use cases, and laying out the capability and governance groundwork.
- Balance quick tactical wins with long-term vision: Identify quick wins to automate activities with the highest automation potential and radiate out, freeing up capital; in parallel, have a long-term vision for comprehensive transformation, with automation at the core.
- Redefine processes and manage organizational change: Since 60 percent of all jobs have at least 30 percent technically automatable activities, redefining jobs and taking an end-to-end process view are necessary to capture the value.
- Integrate technology into core business functions: Build AI and other advanced technologies into the operating model to create transformative impact and lasting value; support a culture of collecting and analyzing data to inform decisions; and build the muscle for continuous improvement.
We hope this curated collection will be helpful to you in realizing the full value potential of your own automation transformation.
Limitations in AI Technology
AI technologies still have significant limitations that will need to be overcome. They include the onerous data requirements listed previously, as well as five others:
- First is the challenge of labeling training data, which often must be done manually and is necessary for supervised learning. Promising new techniques are emerging to address this challenge, such as reinforcement learning and in-stream supervision, in which data can be labeled in the course of natural usage.
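The in-stream supervision idea above can be made concrete with a small sketch: labels are harvested from natural usage rather than from manual annotation. The event fields and the click-as-positive convention below are illustrative assumptions, not drawn from any particular system.

```python
# Hypothetical illustration of in-stream supervision: instead of paying
# annotators, labels are derived from natural user behavior. Here each
# recommendation shown to a user becomes a (features, label) training
# pair, with the user's click serving as an implicit positive label.

def label_from_interaction(event):
    """Turn one raw interaction event into a labeled training example."""
    features = {
        "item_category": event["item_category"],
        "hour_of_day": event["hour_of_day"],
    }
    label = 1 if event["clicked"] else 0  # implicit label, no manual work
    return features, label

# A stream of interaction events as they might arrive in production.
events = [
    {"item_category": "loans", "hour_of_day": 9, "clicked": True},
    {"item_category": "cards", "hour_of_day": 22, "clicked": False},
]

training_set = [label_from_interaction(e) for e in events]
```

The labeled pairs accumulate as a by-product of normal operation, which is why this approach sidesteps the manual-labeling bottleneck.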
- Second is the difficulty of obtaining data sets that are sufficiently large and comprehensive to be used for training; for many business use cases, creating or obtaining such massive data sets can be difficult—for example, limited clinical-trial data to predict healthcare treatment outcomes more accurately.
- Third is the difficulty of explaining in human terms results from large and complex models: why was a certain decision reached? Product certifications in healthcare and in the automotive and aerospace industries, for example, can be an obstacle; among other constraints, regulators often want rules and choice criteria to be clearly explainable.
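To see why explainability is tractable for simple models but not for large complex ones, consider a linear scoring model: its decision decomposes exactly into per-feature contributions (weight times value). The feature names and weights below are made up for illustration.

```python
# Minimal sketch of one way a decision can be explained in human terms:
# for a linear scoring model, each feature's contribution is simply its
# weight times its value, so the final score decomposes into readable
# parts. The weights here are illustrative, not from any real model.

weights = {"income": 0.4, "debt_ratio": -0.9, "years_employed": 0.2}

def explain(applicant):
    contributions = {f: weights[f] * applicant[f] for f in weights}
    score = sum(contributions.values())
    return score, contributions

score, contributions = explain(
    {"income": 5.0, "debt_ratio": 3.0, "years_employed": 10.0}
)
# Each entry in `contributions` says how much a feature pushed the
# decision up or down -- the kind of rationale regulators ask for.
```

A deep model with millions of parameters offers no such clean decomposition, which is exactly the certification obstacle described above.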
- Fourth is the generalizability of learning: AI models continue to have difficulties in carrying their experiences from one set of circumstances to another. That means that companies must commit resources to train new models even for use cases that are similar to previous ones. Transfer learning—in which an AI model is trained to accomplish a certain task and then quickly applies that learning to a similar but distinct activity—is one promising response to this challenge.
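The transfer-learning response can be sketched in miniature: a feature extractor "pretrained" on one task is kept frozen, and only a small new head is fit on the related task's limited data. Everything below (the extractor, the threshold head, the data) is a toy assumption to show the structure of the idea.

```python
# Toy sketch of transfer learning: reuse a frozen, "pretrained" feature
# extractor on a related task, and fit only a small new head on the
# new task's handful of labeled examples.

def pretrained_features(x):
    # Stands in for representations learned on the original task;
    # kept frozen when moving to the new task.
    return (x, x * x)

def fit_head(examples):
    # Fit a trivial threshold "head" on the new task's tiny data set,
    # leaving the frozen extractor untouched.
    positives = [pretrained_features(x)[1] for x, y in examples if y == 1]
    negatives = [pretrained_features(x)[1] for x, y in examples if y == 0]
    return (min(positives) + max(negatives)) / 2  # decision threshold

# New, related task with only a handful of labeled examples.
new_task = [(1, 0), (2, 0), (5, 1), (6, 1)]
threshold = fit_head(new_task)

def predict(x):
    return 1 if pretrained_features(x)[1] >= threshold else 0
```

Because only the head is trained, far less data and compute are needed than retraining a model from scratch, which is the economic appeal for similar use cases.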
- The fifth limitation concerns the risk of bias in data and algorithms. This issue touches on concerns that are more social in nature and which could require broader steps to resolve, such as understanding how the processes used to collect training data can influence the behavior of the models they are used to train. For example, unintended biases can be introduced when training data is not representative of the larger population to which an AI model is applied. Thus, facial-recognition models trained on a population of faces corresponding to the demographics of AI developers could struggle when applied to populations with more diverse characteristics. A recent report on the malicious use of AI highlights a range of security threats, from sophisticated automation of hacking to hyperpersonalized political disinformation campaigns.
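One practical step mentioned above, understanding whether training data represents the larger population, can be approximated with a simple representativeness check. The group names, population shares, and tolerance below are hypothetical, chosen only to illustrate the mechanics.

```python
# Sketch of a simple representativeness check: compare how often each
# demographic group appears in the training data against its share of
# the target population, and flag groups that are badly under-sampled.
# The groups and shares below are made up for illustration.
from collections import Counter

def underrepresented(train_groups, population_share, tolerance=0.5):
    counts = Counter(train_groups)
    total = len(train_groups)
    flagged = []
    for group, expected in population_share.items():
        observed = counts.get(group, 0) / total
        if observed < expected * tolerance:  # below half its population share
            flagged.append(group)
    return flagged

population_share = {"group_a": 0.5, "group_b": 0.3, "group_c": 0.2}
train_groups = ["group_a"] * 80 + ["group_b"] * 18 + ["group_c"] * 2

flags = underrepresented(train_groups, population_share)
# group_c appears in 2 percent of training rows against a 20 percent
# population share, so a model trained on this data may perform poorly
# for that group.
```

Checks like this do not remove bias on their own, but they make the mismatch between training data and deployment population visible before a model ships.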