The DNA of AI Success: What It Looks Like and How to Get It
The AI journey starts with a single step, but too many companies take the wrong first step. The natural tendency is to begin with proof-of-concept (PoC) projects at the departmental level. Start small and see what happens, right?
Actually, no. With AI, starting small usually means staying small, with the small, neutral or negative returns on investment that too many companies have experienced. Half measures run contrary to what we’re learning about the inherently high-powered nature of AI in the enterprise. AI isn’t a tactical tool in search of point solutions; it’s a strategic technology that requires silo-free, organization-wide commitment, from the C-suite through line-of-business to IT. Anything less results in failed projects, poor ROI and futility.
That was the overriding message from Michael Gale, a noted AI thinker and bestselling author of The Digital Helix: Transforming Your Organization's DNA to Thrive in the Digital Age, at Nvidia’s recent virtual GTC Conference. The immediate source of his insights was an IBM study that included interviews with 550 business leaders, and Gale observed that although AI is becoming “table stakes” when it comes to IT transformation, the IBM study shows a correlation between the 55 percent of companies merely experimenting with AI and the 50 percent of AI programs that have no real measurable ROI.
But what about the less than 20 percent of companies classified as "AI thrivers," which capture 60 percent of the growth in AI’s collective ROI? Who are they, and what do they have in common? Gale said they share seven traits, which can be grouped into two broad characteristics.
Organize for Scale, Not Experimentation
The first characteristic is that AI thrivers don’t count on AI to proliferate spontaneously and organically. Instead, from the outset they instill a vision of weaving AI throughout their businesses by forming a cadre charged with “proactive planning for scale,” even while AI adoption is still in its formative stages.
“For what we would call core principles, for what we call measured success, if you build that core team, it's got to be cross-functional even if the (initial) AI investments may focus on one small area,” Gale said. “If you don't have a lot of cross-functional involvement, it's really difficult to even hypothesize scale, let alone deliver it. You've got to design scale from the very beginning. The proof-of-concept idea is not at all bad, but you've got to move off that idea very fast. And you have to have a team involved that has multi-functional, cross-functional capability within it.”
A key task of this core team: “handling the inevitable ambiguity.” By this Gale refers to the confusion and disruption that result when AI is integrated into business processes, issues that can be handled only if senior execs, departmental managers and data scientists collaboratively resolve uncertainties and dislocations brought on by AI.
Gale cited GM Financial's efforts to instill that ethos, quoting Lynn Calvo, assistant VP of Emerging Data Technology: “Our goal is to leverage machine learning across our entire organization through a center of excellence model. One of the biggest things that keeps me up at night is moving from experimentation to production.”
He also cited a large company in the sports industry that, to instill a sense of mission and vision within its AI cadre, told it to produce 100 interesting findings during the first 90 days of its work, findings not just about data and ways to use it but also about collaboration and communication.
“The team placed on a wall a set of post-it notes, 100 of them,” said Gale, “that people looked at, and they brought people in to start to ‘osmosify’ this level of learning experience through this process. So you should think about unique ways of showing progress and success collaboratively as you start to go through this process.”
It’s all about building “pathways to scale,” Gale said.
“We've learned from thousands of hours with clients going through this process … if you don't build those pathways to scale really early on, you'll be stuck in a set of islands of experiments,” he said. “They may be very successful experiments, but they're not going to make enough radical difference to the organization.”
AI at the Core
Alongside nurturing a “share and go big” culture, the other key trait AI thrivers hold in common is a technology strategy that Gale called “keeping AI close to the core”: a combination of servers, software, frameworks and networks located in hybrid, on-premises environments that incorporate private clouds. Whereas AI experimenters tend to rely on public cloud platforms to augment their data center capabilities, AI thrivers build on-prem infrastructures that reflect a vision of AI integrated throughout their companies’ business processes, support company-wide collaboration, and let them organize for scale, Gale said.
“So just think about this, the 15 percent of organizations that do this and do it well were sucking up nearly 60 percent of all the (AI) revenue growth…," he said. "Organizations that … disperse that AI infrastructure and ideas outside the organization were actually 250 percent less likely to be in this high growth revenue group. What is clear is if you keep AI at the core, close to you, you can see the high end of positive changes in OPEX, CAPEX, SGA and a whole bunch of other metrics.”
One reason on-prem AI infrastructures deliver better ROI, Gale said, is that they are built by companies less concerned with cost containment than those that turn to pay-as-you-go, cloud-based alternatives.
“They have a very low anxiety about costs,” Gale said of AI thrivers. “Only 9 percent of them see cost as the biggest issue. It really is about design as a strategy. And actually the biggest challenge for them – and I think it's an important indicator of how you build success – is really in this idea of collaboration. The biggest challenge for these leaders is not technology, but how do they manage the day-to-day collaboration and the process at all levels of an organization.”
In addition, on-prem AI infrastructures tend to be optimized with accelerated processing (GPUs, FPGAs, etc.) and high-performance networks – combinations of technologies selected and tuned for the requirements of the organization’s AI objectives.
Gale cited the case of Washington University in St. Louis and Vanderbilt University, which accelerated the creation and deployment of deep learning models designed to “fill the gaps” in incomplete MRI brain scans. A customized combination of on-prem hardware and software yielded 20X faster training of the deep learning models, increasing the speed and accuracy of diagnoses.
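To make the “fill the gaps” idea concrete, here is a purely illustrative sketch – not the universities’ actual method or data – of how gap-filling can be framed as masked reconstruction: a model learns to recover a complete image from a version with regions zeroed out. Everything here (the synthetic toy data, the tiny linear autoencoder, the hand-coded gradient descent) is an assumption chosen only to keep the example self-contained.

```python
import numpy as np

# Hypothetical sketch: frame gap-filling as masked reconstruction.
# Train a model to recover complete images from inputs with missing pixels.

rng = np.random.default_rng(0)

# Toy "scans": 64 flattened 8x8 images with low-rank (smooth) structure.
n, d, h = 64, 64, 16
complete = rng.normal(size=(n, 4)) @ rng.normal(size=(4, d))
mask = rng.random((n, d)) > 0.25          # True = pixel observed
corrupted = complete * mask               # zero out the "gaps"

# Tiny linear encoder/decoder, trained with hand-coded gradient descent.
W1 = rng.normal(scale=0.1, size=(d, h))
W2 = rng.normal(scale=0.1, size=(h, d))
lr = 1e-3

losses = []
for _ in range(300):
    z = corrupted @ W1                    # encode the gappy input
    recon = z @ W2                        # decode a full image
    err = recon - complete                # supervise against the complete image
    losses.append(float((err ** 2).mean()))
    # Gradients of the squared error (up to a constant factor).
    gW2 = z.T @ err / n
    gW1 = corrupted.T @ (err @ W2.T) / n
    W1 -= lr * gW1
    W2 -= lr * gW2

print(f"reconstruction loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

A production system would replace the linear autoencoder with a deep convolutional or transformer model and train on real scan data, which is exactly the kind of GPU-bound workload where the tuned on-prem stacks Gale describes pay off.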
Central to AI success is increased developer productivity and resource utilization. “It's tough to do that if you don't keep AI close to your core,” Gale said. “Performance is defined in a number of ways, from simplified programming, to easily deployed private hybrid cloud environments, to reduced data movement, to the right tool sets to get going and implementation and management. This ability to get increased developer productivity and resources utilized better is part of making this a very robust part of your business, not just a peripheral experimentation.”