CURT NICKISCH: Welcome to the HBR IdeaCast from Harvard Business Review. I’m Curt Nickisch.
A shiny new piece of technology is not good enough on its own. It needs to be implemented at the right time, used in the right context, accepted in the right culture, and applied in the right way. In short, it needs to be part of the right system. And that’s true for artificial intelligence too. AI can help individuals and teams make better predictions; combine that with judgment and you get better decisions. But those decisions have ripple effects on other parts of the system, ripple effects that can undermine the very prediction that was made.
Our guest today says, “If organizations want to take artificial intelligence to the next level, they need to get better at coordinating optimal decisions over a wider network.” Joshua Gans is a strategy professor at the University of Toronto’s Rotman School of Management. He co-wrote the HBR article, From Prediction to Transformation, as well as the new book, Power and Prediction: The Disruptive Economics of Artificial Intelligence.
JOSHUA GANS: Hi.
CURT NICKISCH: One big argument is that artificial intelligence has to do more for companies than just give data and insights. Is that like a big misconception that people have about AI? They’re going to get insights out of their data and that’s just not enough?
JOSHUA GANS: Yeah, so I think what happened some years ago, relatively recently, I guess, is that, of course, the hype about artificial intelligence started. Businesses that are attuned to technological developments started asking whether this was something that should concern them or that they could take advantage of. And the one thing that artificial intelligence, in its recent incarnation, required was data. Artificial intelligence, meaning machine learning and deep learning, the more recent stuff, not the stuff that you might see in movies, is really an advance in the statistics of prediction.
It allows you to get much more accurate predictions for dramatically lower cost. But in order to generate those predictions, you do need to have data of differing kinds. And I think one of the things that the businesses asked themselves, “Well, we have a lot of data. We’ve been collecting data on so and so for years, maybe actually we’re well positioned to have a critical input into this new technology.” That led to more investment to clean up that data and make use of it. But I think there were some challenges.
CURT NICKISCH: An example of this is, I don’t know, a retailer who is trying to manage their inventory better so that they don’t have as much in stock, but have just enough that when somebody orders it, they have it close by or in the right location.
JOSHUA GANS: Yes. So prediction of demand is a common one, and it was one that we would’ve thought would be very, very ubiquitous. What we found is that even for things like inventory, when you try to predict demand better, you have to ask, “Well, what am I going to do with that prediction?” If I anticipate there is going to be a surge in demand for one of my products, what I need to do is make sure I’ve got that product on hand. Well, that’s easier said than done. In this world of supply constraints, there may not be simple ways of doing that. We have very tight supply chains.
And so what might happen is, you might want to adopt AI for this thing, prediction, where there’s some clear uncertainty, but instead you realize that you can’t fully take advantage of it, because that requires coordinating everything all the way down the line. Something that we sometimes refer to as the AI bullwhip effect – basically employing AI somewhere, has reverberations down the line. And if you can’t actually get the rest of the system to come along with you, you might not be getting much value out of AI in the first place.
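The AI bullwhip idea can be sketched in a toy simulation (all numbers here are hypothetical, not from the episode): even a perfect demand prediction at the retailer still produces stockouts when an upstream supplier’s shipping capacity can’t flex to match it.

```python
# Toy sketch of the "AI bullwhip": a perfect demand forecast at the retailer
# doesn't help if the upstream supplier can't keep pace. Numbers are invented.

def simulate(demand, forecast, supplier_capacity):
    """Each period the retailer orders what it forecasts, but the supplier
    can ship at most supplier_capacity units; unmet demand is a stockout."""
    stock, stockouts = 0, 0
    for d, f in zip(demand, forecast):
        shipped = min(f, supplier_capacity)  # upstream constraint binds
        stock += shipped
        sold = min(d, stock)
        stockouts += d - sold
        stock -= sold
    return stockouts

demand = [10, 10, 30, 10, 30, 30]   # demand surges in some periods
perfect_forecast = demand            # the AI predicts demand exactly
print(simulate(demand, perfect_forecast, supplier_capacity=20))  # -> 30
```

With a supplier that can ship 30 units per period, the same perfect forecast yields zero stockouts, which is the point: the value of the prediction depends on the rest of the system coming along.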
CURT NICKISCH: Have you seen a lot of places disappointed in what they’ve implemented?
JOSHUA GANS: I think, relative to our expectations in 2018, the adoption of artificial intelligence beyond the biggest tech firms has been pretty slow. There was a lot of optimism that it could be used to make certain tasks more efficient, and I think it has to some degree. But in terms of its true transformational impact, well, it turns out that just adding a dollop of AI isn’t going to do it for you. AI is something that gets leveraged within the context of a system. Yes, in some systems you can just improve your prediction and that system operates better.
In other systems, things are somewhat divided, a bit modular. So you can put some AI into one part of the organization, and that part does better while the rest of it goes merrily along. But we suspect that, in fact, the biggest transformations from AI are going to require a system-wide adjustment to exploit it. And you’re not going to make that adjustment just for a trial. You’re going to make it when you really think AI is going to give you that leverage, because there’s a lot of work involved, obviously.
CURT NICKISCH: In your article you wrote about sailing teams in the America’s Cup, what they typically need to do to win and you tell a story of how one team used AI to really excel.
JOSHUA GANS: So the America’s Cup is deep in my bones because I’m an Australian and I grew up in the 1980s, and it was a significant affair. What I always reflect on is when Australia II managed to win the America’s Cup, the first non-American team to do so in over 100 years. It was because of a more radical boat design, the so-called winged keel. And there was a lot of discussion of that, which was a very interesting way for Australians to win a sporting event: technological innovation as opposed to the better training and other things that we were used to.
So that heralded an era where the America’s Cup, and ocean racing like it, started to have greater technological inputs. So when it came to the application of artificial intelligence: artificial intelligence has the ability to look at conditions, look at behaviors, and predict performance, and then to say, “Well, if we change the design this way,” or that way, or something completely different, “what might be the likely change in performance?” Because it can basically handle all those weird edge cases that people don’t normally think of.
And moreover, it can do it in the context of providing simulations. Now initially, the production process for coming up with new designs was to put a new design into a simulation where you had people operate the sailing boat as they would. One problem with that, of course, is that every iteration takes time. Every iteration takes somebody going out and running several hours, or maybe even more, of simulation.
CURT NICKISCH: Can’t do that at night, yeah.
JOSHUA GANS: Yeah, you can’t do it. So what was interesting there, one of the things that had happened at about the same time, was that we’d had these advances in the playing of games like Go. Artificial intelligence initially became the world champion at Go by looking at all the games that everybody had ever played, and using that to predict moves and predict winning strategies, and come up with better winning strategies. Soon after that, they wondered, “Well, what if we forget the people altogether and just have these AIs play even more games against each other,” not limited to the total corpus of recorded games.
So Team New Zealand saw that and said, “Well, maybe we can just program in the responses of people, automate them, not try to get too fancy about it, not think about what they’re seeing, et cetera, and run a heap more simulations.” As a result, they could iterate even faster. That may lead to a system where you think, “Oh, well, then you’re not going to need a person to run that sailing boat at all.” But actually, this is typical. The one place where systems seem to be starting to really work with AI is in innovation itself.
CURT NICKISCH: Well, sorting through complicated problems, thinking about how one decision will affect another, that sounds a lot like what people are supposed to do at work, but a lot of organizations have this capability for AI in one place or with one data science team. Where does this need to change?
JOSHUA GANS: Think about what happens when you are dealing with a lot of uncertainty and you can’t predict it. One of our favorite examples is just the decision of whether to carry an umbrella or not. If you have no forecast of the weather, well, it depends on your own preferences. How much do you mind getting wet versus how much you’d mind carrying an umbrella? That’s essentially the choice. And so you’ll have a rule for it. Even if I give you a prediction of the rain, that might only slightly modify your rule. Let’s say if there’s zero chance of rain, sure, I wouldn’t take an umbrella, but I personally think even a 20% chance would be worrisome, so I would. So people have some specific rules. Now think about that in the context of business, when we’re not dealing just with whether it rains or not, but with a whole heap of uncertainty.
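The umbrella example can be written as a simple expected-cost threshold rule; the costs below are made-up placeholders, the point being that a prediction only changes your action when it crosses a threshold set by your own preferences.

```python
# Hedged sketch of the umbrella decision rule. cost_wet and cost_carry are
# hypothetical preference weights, not values from the episode.

def take_umbrella(p_rain, cost_wet=10.0, cost_carry=1.0):
    """Carry the umbrella when the expected cost of getting wet
    exceeds the certain cost of lugging it around."""
    return p_rain * cost_wet > cost_carry

print(take_umbrella(0.05))  # low chance of rain: leave it home -> False
print(take_umbrella(0.20))  # a 20% chance is already worrisome -> True
```

With no forecast at all, you would plug in a fixed base rate and get a fixed rule; a forecast simply lets the same rule react day by day.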
Well, if you’ve been unable to predict that uncertainty, you’ve been unable to adjust your organization to it, at least before the fact. And so a good way to deal with that is to do what we do: develop rules. So we take what might have been a decision, something where we say, “Oh, if we think this is going to happen, do X. If we think something else is going to happen, do Y.” And we say, “Well, we don’t know what’s going to happen, so we just choose X or Y.” And we put that into our organization. Sometimes we come up with whole standard operating procedures where people have thought very deeply about what the best guidance is for people when you just can’t react to everything that’s going on.
And so those are the successful organizations. Now, I am an AI salesman coming in and saying, “Oh, you need to adopt AI.” “Well, what does that mean?” What that means is that we’re going to give you predictions of some of these things you’re missing, and now you can make a decision, you can react. That’s got to be a better thing to do. And you’re saying to yourself, “I don’t have any decisions. We’ve got all these rules.” And after a few years pass and some employee turnover, you might not even know. You might know the rules work, but you might not realize that the rules were a reaction to uncertainty, made so you didn’t have to think about the uncertainty.
So it’s really hard to sell something to an organization when the organization has constructed itself not to realize that it needs it or could use it, or even worse, to realize that and realize it’s going to have to change everything, which is a little scary. So that’s where we see disruption coming in: artificial intelligence requiring organizations to break out of things that they were doing. Now, some large organizations can realize that, or maybe they’ve got more flexibility and they can integrate it, but typically, that is a recipe for new entrants who are not starting off with those rules, who are starting off on some other basis and saying, “We’re going to use AI to come in and do things better.”
CURT NICKISCH: When you say new entrants, are you talking about competitors?
JOSHUA GANS: Yeah, competitors, startups, things like that. Whereas, if you get a startup firm, well, they don’t have any legacy. They don’t have to change how they’re doing things. They’re not doing anything. So they’re building right from the start, from a green field essentially. And for these sorts of innovations that require a brand new system, it’s easier to start from scratch in some regards. So that’s where that competitive tension comes in.
CURT NICKISCH: Yeah. So you’re saying that it needs to be done differently going forward, but a lot of organizations just aren’t prepared to do that?
JOSHUA GANS: Yeah, I mean, I think there is an appetite. I mean, there’s enough business school and HBR reading to know that these things are an issue. So CEOs will look at it and say, “This has the potential to disrupt. Maybe I should do something.”
CURT NICKISCH: And I’m just curious, what should that CEO do? If they’re at an organization and you’ve got some AI technology, but it’s in a team, in one place, or in a silo, and they’re improving decisions iteratively, but not really realizing the full system power of implementing AI, what should that CEO do?
JOSHUA GANS: This is where they earn their money. This is a nasty, nasty problem. All the forces are pushing towards doing nothing, or waiting and seeing, but changing your system is going to take time. Really, what you want in an organization is to make it easier on yourself by having organizational memory of why you are where you are. So remember I said before, you have rules and then you forget why you had the rule.
That’s going to be a problem. So what you want to do is design an organization such that that memory is being refreshed quite regularly. And moreover, people have some flexibility, so that when you come to do these organizational redesigns, it’s not as painful for everybody. But that’s a tension, because you are betting on the long term, on something potential, and you’re going to sacrifice something now by preparing for it. That’s essentially the real dilemma of disruption.
CURT NICKISCH: You’ve given executives and leaders an out here, to know that this is hard, but what are some of the systems that companies should think about changing, to prepare for this?
JOSHUA GANS: So one of the things we try to encourage organizations to do when we sit down to talk about this is to go through an exercise: “Can you identify the big uncertainties you are facing in the organization? What are they?” So we might sit down with a hospital, not even a hospital system, just a hospital, and ask, “What are the big uncertainties?” And there are all sorts of things. “What are the new techniques we’re going to put in? What are the costs of getting doctors, nurses, and so on?” “Oh, yeah, how are we going to manage capacity? How are we going to make sure we’ve got enough beds for the demands of the local community?” I’m like, “Well, that’s an interesting one. Okay, why are you having trouble with that?”
“Well, one reason is things like COVID.” Okay, of course. “But the other is that the population changes. We build a hospital, but we can’t change it very often, and the population changes.” And I’m saying, “Well, that’s interesting. You’re talking about it in terms of people, and you’re uncertain about the number of people. What about the length of their stay?” And they come back and say, “Oh, no, no, that’s standard. If you have this procedure, you’ll be there for that long,” et cetera, et cetera. I’m like, “Ah, there’s a word I’m going to key in on: standard. Why is that standard?” “Well, if somebody gets appendicitis, you need to keep them there a few days to make sure that they don’t get an infection and secondary complications, and other things like that. And so it’s a standard thing.”
Or some other procedures, where it might be five days, or a week, or what have you. “Because we’ve got to keep them under observation. Complications occur.” And I’m like, “Oh, so they’re sitting in the bed, waiting for that information, and you are waiting for that information. What if, at the time of the operation, you actually had enough information to make a really great prediction about how out of the woods somebody is or not?” “Ah, that changes everything.” And now we go through the full thought experiment: “Well, let’s just go to the extreme and imagine you could perfectly predict that.” And all of a sudden it’d be, “Wow, we’d have a lot more hospital space all of a sudden. In fact, most of our people are sitting in bed waiting for information.”
And I said, “Well, what if you had some of this knowledge before they came into the hospital? What if you were collecting data on patients in the population beforehand, so that when they get to the hospital, you’re not reacting and trying to work out what they have, but you have a good idea about what’s going on?” And that gets you down to this one variable, which is major, which is capacity in the hospital. And it’s telling you that all of a sudden your issue is not that you’re going to be running up against capacity constraints; it’s that if you got this AI magically tomorrow, you’d have a lot of spare capacity.
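The capacity thought experiment can be put in rough numbers (all of them invented for illustration): if predicting complications lets low-risk patients go home earlier, average length of stay falls, and the same bed count serves far more admissions.

```python
# Illustrative arithmetic only; bed counts, stay lengths, and occupancy
# are hypothetical, not figures from the episode.

def annual_admissions(beds, avg_stay_days, occupancy=0.85):
    """Rough capacity: usable bed-days per year divided by days per patient."""
    return beds * 365 * occupancy / avg_stay_days

standard = annual_admissions(beds=300, avg_stay_days=5.0)   # fixed "standard" stays
predicted = annual_admissions(beds=300, avg_stay_days=3.0)  # AI clears low-risk patients early
print(round(standard), round(predicted))  # -> 18615 31025
```

Trimming two observation days from the average stay, in this toy example, is equivalent to adding roughly two-thirds more beds, which is why the prediction reframes capacity from a constraint into a surplus.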
CURT NICKISCH: Right. But it sounds complicated.
JOSHUA GANS: It is complicated, and I’ve glossed over a lot to get there, but that’s the sort of thing a CEO is going to have to go through. Thinking through those sorts of scenarios and really trying to understand the one or two things that, if they could develop AI for them, would change everything. Because that’s the real worry. Developing AI for some of the other things on the fringes is not going to be an existential threat or even a major opportunity, but developing AI that’s going to turn a business around, change how you think about major decisions like capital, or expansions, and stuff like that, that’s a whole other matter.
CURT NICKISCH: System change takes time. Is it a danger if the ability to change the system, takes longer than it does for the technology to improve?
JOSHUA GANS: Is it a danger? I don’t know. I think the technology will improve at a much faster rate than the systems will change for it. This is not unprecedented. With electricity, Thomas Edison lit up the streets of a suburb of New York, and it was 40 years before more than half the country had electricity going to their factories and their houses. This stuff takes time. And even with electricity, it did eventually lead to a transformation of manufacturing and other businesses, but that didn’t happen for decades.
CURT NICKISCH: What’s your recommendation for a manager or somebody in a company, who feels like they need to be doing more but the organization isn’t, and they want to spur some change?
JOSHUA GANS: One of the things that we’ve found in thinking about changes in systems is that they very rarely occur without changes in power. There are winners and there are losers. We saw this in the taxi cab industry when ride-sharing came into play, and ride-sharing came into play because people had mobile devices, so any driver would have the locational and navigational ability of the most experienced taxi driver. And the power change that occurred there was power to individual drivers and power away from taxi drivers, who previously had something that was more unique. This sort of thing is likely to occur within organizations as well.
Now, sometimes we talk a lot about automation just replacing jobs and things like that. All these changes tend to be a bit more subtle, but one of the challenges of managing that change is understanding where power is shifting and where you’re going to get resistance from. Broadly speaking, that just means being a good manager. That means understanding people’s perspectives and points of view; with transformational things, that is just as important as with day-to-day things, and you have to have a plan for it. Some of that plan may be that you decide to ignore or cast off some of the more resistant parts, but obviously the potential opportunity is to see if you can co-opt them.
CURT NICKISCH: Joshua, thanks so much for coming on the show to talk about this.
JOSHUA GANS: Thank you.
CURT NICKISCH: That’s Joshua Gans. He is a professor at the University of Toronto’s Rotman School of Management, the chief economist at the Creative Destruction Lab, and a co-author of the HBR article, From Prediction to Transformation. He also co-wrote the new book, Power and Prediction.
If you got something from today’s episode, we have more podcasts to help you manage your team, manage organizations, and manage your career. Find them at hbr.org/podcast or search HBR in Apple Podcasts, Spotify, or wherever you listen.
This episode was produced by Mary Dooe. We get technical help from Rob Eckhardt. Our audio product manager is Ian Fox and Hannah Bates is our audio production assistant. Thanks for listening to the HBR IdeaCast. I’m Curt Nickisch.