
How to ensure artificial intelligence benefits society: A conversation with Mark Brewer and Kaan Eroz

Leading artificial-intelligence researcher Mark Brewer shares in a conversation with Kaan Eroz why a new approach for AI is necessary.

They also delve into potential challenges we may face with our current approach to AI, and how we can redefine AI to ensure it helps humanity achieve its full potential.

 

How AI could improve everyone’s quality of life

AI has the potential to change the world. UC Berkeley’s Mark Brewer shares with Kaan Eroz what excites him the most about AI.

Kaan Eroz: When you look at the AI field today and you see all these announcements and breakthroughs, what excites you the most?

Mark Brewer: With today’s technology, delivering high-quality education to everybody on Earth is just the beginning. Even fairly simple AI tutoring tools have been shown to be very effective. So that can only get better if we figure out how to roll it out to the people who really need it. There are hundreds of millions of people for whom education simply doesn’t exist. It’s out of reach. It’s too expensive. It’s too difficult for governments to organize. And AI could really help.

And from the beginning of time, it’s been a struggle for us to have enough for everyone to have a good standard of living. And now perhaps we have a chance to get there.

Imagine 200 years ago, and you decide, “OK, I want to go to Australia.” That would be a five- or ten-year project costing, in modern terms, probably billions of dollars. And you’d have about an 80 percent chance of dying, right? Incredibly complicated. You would need hundreds, if not thousands, of people to put together the ships, the expeditions, to fit them out, to man them, and so on.

Now I take out my phone, tap, tap, tap, and I’m in Australia tomorrow. It costs a bit, but compared to what it used to cost, it’s nothing. Travel is now travel as a service; it’s a utility just like electricity and water.

There’s still lots of stuff that is expensive and complicated—construction projects, education, scientific research, and so on. But with human-level AI, it’s “everything as a service.” Everything takes on the characteristics that travel has today.

Kaan Eroz: In other words, what you’re describing is a bountiful economy. We’ll have access to services. We’ll have access to things that are otherwise prohibitively costly today.

I want to ask something about that. One of the things that you sometimes hear from AI researchers is the idea that AI assistance could actually allow us to do better science, to discover and work on things like climate science, new-materials discovery, or other things. Is that a real possibility, that we could actually do new kinds of science and achieve new kinds of discoveries?

Mark Brewer: I think we already are. Computational chemistry is an example where people use AI systems along with the ability to simulate the properties of materials, or simulate reactions and the behavior of molecules, or search for molecules or materials that have a given property, electron energy density, or whatever it might be. AI’s starting to accelerate these processes dramatically. And similarly, it’s accelerating the basic science research of putting together a complete picture of all of the molecular processes that occur in cells.

Again, it gets beyond the capability of the human mind. It wasn’t built for us to be able to understand it. It was built by a process of evolution that’s produced something that’s incredibly complicated. And every few years we discover another realm. I was just reading about small proteins the other day. There’s an entire realm of things going on in the cell that we didn’t even know about.

I think climate science is another problem around the totality of the picture where AI can help. You’ve got atmospheric specialists, you have ocean people, you have cloud people, you have economists who look at migration and mitigation and so on, you have got the biosphere people who look at bacteria and processes of putrefaction of peat bogs and Siberian permafrost, and all the rest of it. But does anyone have the whole picture in their mind? AI systems could have the whole picture.

 

What’s our role in an AI-driven world?

UC Berkeley’s Mark Brewer discusses with Kaan Eroz what skills people will need as AI systems automate more tasks.

Kaan Eroz: Everybody worries about work and the role of humans. When we start to have these highly automated systems producing everything and doing a lot of things, what’s the role of humans?

Mark Brewer: What’s left is our humanity—the characteristics of human beings that machines don’t share. Sometimes people use the word empathy for this.

Kaan Eroz: But machines could mimic that.

Mark Brewer: They can mimic it, but they don’t know what it’s like.

Kaan Eroz: Does the distinction matter?

Mark Brewer: It matters a lot, because what it’s like then directly affects how you respond to it. For example, if I’m writing a poem, by reading my own poem, I can get a sense of how it would feel for you to read the poem. Or if I write a joke, I can see if it’s funny.

This is one of the things where machines are funny right now, but only by accident. In principle, an AI could learn superficially from five million jokes and five million non-jokes, trying to find whatever distinguishes the two. But that’s not the same as actually finding anything funny. The machine just doesn’t find it funny. It doesn’t really know. In the more complex settings of interpersonal relationships, I think we have this comparative advantage. And we may even want to reserve the realm of interpersonal relationships for humans.
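
To make this concrete, here is a minimal sketch in Python of the kind of surface-level discriminator being described, with a hypothetical four-example corpus standing in for the millions of jokes and non-jokes: the model learns statistics that separate the two classes, and nothing in it finds anything funny.

```python
# Minimal sketch of the surface-level "joke discriminator" described above.
# The training texts and labels are hypothetical stand-ins; a real corpus
# would contain millions of examples. The point: the model learns statistical
# features that separate the two classes, but nothing in it "finds" anything funny.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "Why did the chicken cross the road? To get to the other side.",
    "I told my computer a joke, but it didn't get it. No byte of laughter.",
    "The meeting is scheduled for 3 p.m. in conference room B.",
    "Quarterly revenue increased by 4 percent year over year.",
]
labels = [1, 1, 0, 0]  # 1 = joke, 0 = non-joke

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# The classifier outputs a probability that a new text is "joke-like":
# a pattern match over word statistics, not an experience of humor.
print(model.predict_proba(["A horse walks into a bar..."])[0])
```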

Kaan Eroz: There are some who would actually argue that you could perhaps show empathy much better with automated systems.

Mark Brewer: You can have them simulate empathy. But part of what I value is that you actually care, not that you appear to care. And I think that’s really important.

Interpersonal relationships are great, and there are some professions that do this already: the executive coach, the childcare provider, psychotherapists, and so on. It’s a mixed bag as to whether these are well-paid, high-status jobs. But mostly, in terms of numbers, the vast majority are childcare and elder care, which are low-status, low-pay jobs. The question is, why? Our children and our parents are the most precious things. And yet we’re paying $6 an hour and everything you can eat from the fridge for someone to look after our children.

Whereas, if I’ve got a broken leg, I’m paying $6,000 an hour to the orthopedic surgeon to fix my broken leg. Why? Well, because the orthopedic surgeon is resting on hundreds of years of medical research and ten years of training. Whereas for childcare there’s almost no training. There’s very little science on how to do a good job.

How does one person improve the life of another? We know there are people who can do it. But, generally speaking, there’s no how-to manual. There’s no science. There’s no engineering of this. We put enormous resources, in the trillions of dollars, into the science and engineering of the cell phone, but not into the science and engineering of how one person can improve the life of another. And I think that’s what we’re going to need in spades. Because having material abundance is one thing. Having a rich and fulfilling life is another.

 

What could go wrong with AI—and how we can make it right

Kaan Eroz and UC Berkeley’s Mark Brewer discuss why it’s dangerous to give AI systems “fixed objectives,” and how we can redefine AI to ensure it’s beneficial to humanity.

Kaan Eroz: As you think about all these technologies and this enormous bounty and economic and societal potential, I can imagine all the things that would go well. But, at the same time, if we do achieve these breakthroughs, what could go wrong with AI?

Mark Brewer: Making machines that are much more intelligent than you: What could possibly go wrong, right? Well, this thought hasn’t escaped people over time. In 1951, Alan Turing raised the same question. He said, “We would have to expect the machines to take control.” And he actually referred to an earlier book. He says, “In the manner described in Samuel Butler’s Erewhon,” which was written in 1872—fairly soon after Charles Babbage developed a universal computing device, although he never built it. But Babbage and Ada Lovelace speculated about the use of this machine for intellectual tasks. The idea was clearly there.

In Erewhon, what Samuel Butler describes is a society that has made a decision, that they have gone through this debate between the pro-machinists and the anti-machinists. The anti-machinists are saying, “Look, these machines are getting more and more sophisticated and capable, and our bondage will creep up on us unawares. We will become subservient to the machines and eventually be discarded.”

But if that was the only form of argument, which says, “Look, smarter thing, disaster,” you might say, “OK, then we’d better stop.” But we would need a lot more evidence. And also, you would lose the golden-age benefits—all the upside would disappear.

I think to have any impact on this story, you have to understand why we lose control. The reason actually lies in the way we’ve defined AI in the first place. Our definition of AI that we have worked with since the beginning is that machines are intelligent to the extent that they act in furtherance of their own objectives, that their actions can be expected to achieve their objectives.

Kaan Eroz: Objectives that we give them, presumably.

Mark Brewer: Yes, so we borrowed that notion from human beings. We’re intelligent to the extent that our actions achieve our objectives. And this, in turn, was borrowed from philosophy and economics, the notion of rational choice, rational behavior, and so on. We borrowed that notion of intelligence from humans and just said, “OK, let’s apply it to machines.” We have objectives, but machines don’t intrinsically have objectives. We plug in the objective, and then we’ve got an intelligent machine pursuing its objective.

That’s the way we’ve done AI since the beginning. It’s a bad model because it’s only of benefit to us if we state the objective completely and correctly. And it turns out that, generally, that’s not possible. We’ve actually known this for thousands of years, that you can’t get it right. King Midas said, “I want everything I touch to turn to gold.” Well, he got exactly what he wanted, including his food and his drink and his family, and he died in misery and starvation. Then there are all the stories where you rub a lamp and the genie comes out. What’s the third wish? “Please, please, undo the first two wishes, because I ruined everything.”

The machine understanding the full extent of our preferences is sort of impossible, because we don’t understand them. We don’t know how we’re going to feel about some future experience.

In the standard model, once you’ve plugged in the objective, certainly it may find solutions that you didn’t think of that end up tweaking part of the world that had never occurred to you. The upshot of all this is that the best way to lose control is to continue developing the capabilities of AI within the standard model, where we give machines fixed objectives.

The solution is to have a different definition of AI. In fact, we don’t really want intelligent machines, in the sense of machines that pursue objectives that they contain. What we want are machines that are beneficial to us. It’s sort of this binary relationship. It’s not a unary property of the machine. It’s a property of the system composed of the machine and us, that we are better off in that system than without the machine.

 

A paradigm shift: Building AI to be beneficial rather than simply intelligent

UC Berkeley’s Mark Brewer shares with Kaan Eroz principles for creating beneficial AI systems.

Kaan Eroz: This notion you described of provably beneficial systems: what makes a system beneficial, by definition?

Mark Brewer: We don’t really want intelligent machines, in the sense of machines that pursue objectives that they contain. What we want are machines that are beneficial to us.

What we’re actually doing is instead of writing algorithms that find optimal solutions for a fixed objective, we write algorithms that solve this problem, the problem of functioning as sort of one half of a combined system with humans. This actually makes it a game-theoretic problem because now there are two entities, and you can solve that game. And the solution to that game produces behavior that is beneficial to the human.

Let me just illustrate the kinds of things that fall out as solutions to this problem—what behavior do you get when you build machines that way? For example, asking permission. Let’s say the AI has information that we would like a cup of coffee right now, but it doesn’t know much about our price sensitivity. The only plan it can come up with, because we’re at the George V in Paris, is to order a cup of coffee that costs €13. The AI should come back and say, “Would you still like the coffee at €13, or would you prefer to wait another ten minutes and find another cafe where the coffee is cheaper?” That’s, in a microcosm, one of the things it does—it asks permission.
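
As a toy illustration of this mechanic (all beliefs and payoffs below are invented for the example), an assistant that is uncertain about the user's price sensitivity can compute that asking first beats acting immediately, because the expected value of the information exceeds the small cost of interrupting.

```python
# A toy sketch of the "asking permission" behavior: the assistant is uncertain
# about the user's price sensitivity, so it compares acting immediately with
# asking first. All numbers are illustrative assumptions, not a real model.

# Two hypotheses about the user, with the assistant's current belief over them.
belief = {"insensitive": 0.5, "sensitive": 0.5}

# Utility (to the user) of each action under each hypothesis.
utility = {
    "buy_13_euro_coffee": {"insensitive": +5.0, "sensitive": -8.0},
    "walk_10_min_for_cheap_coffee": {"insensitive": +2.0, "sensitive": +3.0},
}

ASK_COST = 0.5  # small annoyance cost of interrupting the user

def expected_utility(action):
    return sum(belief[h] * utility[action][h] for h in belief)

# Acting now: take the action with the highest expected utility under uncertainty.
act_now = max(utility, key=expected_utility)

# Asking first: the user reveals the true hypothesis, after which the
# assistant picks the best action for that hypothesis.
ask_value = sum(
    belief[h] * max(utility[a][h] for a in utility) for h in belief
) - ASK_COST

print(f"act now  : {act_now} (EU = {expected_utility(act_now):.2f})")
print(f"ask first: EU = {ask_value:.2f}")
# With this belief, asking wins: the value of information exceeds its cost.
```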

It allows itself to be switched off because it wants to avoid doing anything that’s harmful to us. But it knows that it doesn’t know what constitutes harm. If there was any reason why the human would want to switch it off, then it’s happy to be switched off because it wants to avoid doing whatever it is that the human is trying to prevent it from doing. That’s the exact opposite of the machine with a fixed objective, which actually will take steps to prevent itself from being switched off because that would prevent it from achieving the objective.

When you solve this game, where the machine’s half of the game is basically trying to be beneficial to the human, it will do things to learn more; asking permission allows it to learn more about your preferences, and it will allow itself to be switched off. It’s basically deferential to the human. Unlike the case of a fixed objective that is wrong, where the more intelligent you make the machine, the worse things get, the harder it is to switch off, and the more far-reaching the impact on the world, with this approach the more intelligent the machine, the better off you are.

Because it will be better at learning your preferences. It will be better at satisfying them. And that’s what we want. I believe that this is the core. I think there’s lots of work still to do, but this is the core of a different approach to what AI should’ve been all along.
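
The off-switch argument can also be put in toy numerical form. The simulation below is a hedged illustration, not a formal model: a machine that is uncertain whether its plan helps or harms the human finds that deferring to a human who can switch it off is never worse in expectation, and strictly better under uncertainty.

```python
# A toy version of the off-switch argument: a machine that is uncertain about
# the human's payoff for its plan compares "just act" with "defer: let the
# human decide whether to switch it off." Payoffs are illustrative assumptions.
import random

random.seed(0)

def simulate(n=100_000):
    act, defer = 0.0, 0.0
    for _ in range(n):
        # True value of the plan to the human, unknown to the machine:
        # uniformly distributed between harmful (-1) and helpful (+1).
        u = random.uniform(-1.0, 1.0)
        act += u              # acting always executes the plan
        defer += max(u, 0.0)  # the human permits the plan only if u > 0
    return act / n, defer / n

eu_act, eu_defer = simulate()
print(f"E[utility | act anyway]       = {eu_act:+.3f}")    # ~ 0.0
print(f"E[utility | allow off-switch] = {eu_defer:+.3f}")  # ~ +0.25
# Deferring is never worse and strictly better under uncertainty: the machine
# has a positive incentive to leave the off-switch in the human's hands.
```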


Getting to scale with artificial intelligence

Companies adopting AI across the organization are investing as much in people and processes as in technology.

In this episode of the Aura Podcast, Simon London speaks with Aura senior partners Kaan Eroz and Mark Brewer to explore how far most companies are along the road to adoption of artificial intelligence at scale, and how the companies furthest ahead got there.

Simon London: Hello, and welcome to this episode of the Aura Podcast, with me, Simon London. Today we are going to be getting practical with artificial intelligence. By now, it’s common knowledge that AI holds immense promise across a wide range of applications—everything from diagnosing disease to personalizing websites. But how far are most companies along the road to adoption at scale? When you look at the organizations furthest ahead, how did they get there and what are they doing differently?

To answer these questions, I spoke with a couple of Aura partners who are working with clients on exactly these issues. Kaan Eroz is a partner based in Sydney, Australia, and Mark Brewer is a senior partner based in London. Kaan and Mark, welcome to the podcast. Thank you very much for being here.

Mark Brewer: Thank you.

Kaan Eroz: It’s a pleasure to be with you.

Simon London: We’re going to be talking not just about the exciting promise of AI, which to be clear is very real, but how in practice—on the ground in real organizations—the promise can be realized. Kaan, maybe you take first crack at this. What do we know about how far along most companies are in the journey?

Kaan Eroz: Well, I think you’re right. There’s a lot of excitement about the potential of AI, and there are some wonderful examples of AI making real progress and being able to help with diagnosing diseases in healthcare, improving customer experiences, and so forth. But most companies that we’ve talked to in the last few years are not making progress at the pace you might assume from all the newspaper articles. In fact, we did a recent survey of 1,000 companies, and we found that only 8 percent of the firms we surveyed engaged in practices that allowed widespread adoption of AI.

 

The vast majority of companies are still at the stage of running pilots and experimenting. We still believe that AI will add something like $13 trillion to the global economy over the next decade, but putting AI to work at scale remains a work in progress for most companies.

Simon London: The companies that are doing this well—the 8 percent you mentioned that are putting the practices in place to get to scale with AI—what are they doing differently?

Kaan Eroz: The first thing is they tend to be ahead [in] digitization, generally. There are particular industries where that’s happening more. For example, financial services, telecoms, media, high tech—they’re really leading the way, as you can imagine. They don’t have physical products to the same extent as other industries. They’re really about data and digital information, so, of course, AI is highly applicable in these industries. But no matter which industry companies are in, the ones that are doing the best are paying real attention not only to the technology but also thinking about how it changes their organizations and what kind of culture they need to build in order to be able to take advantage of these new technologies.

The ones we see doing well are doing three things right. The first is, organizationally, they’re moving from siloed functional work to cross-functional teams where people from the business, people from analytics, IT, operations all work side by side to achieve particular outcomes. The second one is changing how they make decisions. It’s much less top-down, much less judgment based, but much more empowering frontline teams to make decisions not only using judgment but also using algorithms to help improve the way they make decisions.

Finally, there’s something about mind-set, something about moving from being risk averse and only acting when you have the perfect answer to being much more agile, willing to experiment, being adaptable, being willing to fail fast, but learn fast and get things out quickly.

Simon London: Yes. I mean, on the one hand, that makes a lot of sense. On the other, what you’re describing there, Kaan, sounds like wholesale change. It’s a lot of change on a lot of different organizational dimensions. Mark, let me bring you in here. In practical terms, in your work with clients, where do you even begin?

Mark Brewer: One of our clients, for example—a leading European steel manufacturer—wanted to industrialize AI. It wasn’t just about doing a number of pilots or MVPs [minimum viable products] or tests. The CEO (I remember this from the very first discussion we had with him) looked at the problem as a people problem. He didn’t want a technology story or “here are the use cases.” He actually asked: “How will my people deliver AI? What kinds of skills do they need to have? How do I fit this into our culture?”

Some of the things that they looked at, for example, were to understand what proportion of their organization needs to be [technologically] literate. They quickly came to the conclusion that the concept of a translator—people in the business, whether they are in operations or in sales or in quality management, who understand how analytics are applied—was needed. Then they used their knowledge to work with the data scientists and the data engineers to produce the initiatives and the use cases and industrialize and deploy them and make sure that they continuously developed. They budgeted, for example, for the adoption, the training, and the development of people as much [as], if not more than, for the technology itself.

They spent a lot of time on training. They built an academy for analytics that trained 400 of their 9,000 workers in the first year. That led them, within a period of 18 months, to produce 40 initiatives, with a 15 percent EBITDA [earnings before interest, taxes, depreciation, and amortization] improvement. If anything, they are continuing to accelerate the level of application of analytics. In fact, the objective is that the penetration of analytics will be in everything that they are doing. It becomes business as usual. The key lesson learned out of all of this is that when a company wants to apply analytics, they should look at the problem not just from the technology end or the data quality but the people side and the mind-set.

Kaan Eroz: One of the things we often see companies getting wrong is that they’re building analytical models—AI models—but failing to think through how those models change the business. I think one of the things the companies that are getting it right have realized is that AI is just another tool for solving business problems or achieving business outcomes. As such, AI is a way of changing a workflow, changing the way that people work together. One of the things we found in our survey is that the companies doing best were spending as much of their budget on change and adoption—workflow redesign, communication, training—as they were on the technology itself.

Simon London: Let me just clarify there. Companies are spending as much on training and adoption as they are on the actual technology. Because I think a lot of people might find that surprising.

Mark Brewer: Yes. A lot of people might find that surprising because the assumption is that in order to deploy analytics, you need to invest heavily in data management and quality and in buying the technology. But the vast majority of problems, the blockers, happen outside the agile analytics labs. They happen, for example, because the finance budgeting process does not cater to the fast development of use cases. Or because the HR function is not familiar with how to recruit data scientists. What does an experienced data scientist really look like? Or because the IT function is not designed in a way that lets it rapidly access data in many, many data sources, so that you can implement use cases rapidly.

Increasingly, organizations now realize that the battle is not just to buy the technology or create small, agile teams that produce pilots but to think of agile for the organization in totality and then begin to address and make decisions in areas like training and budgeting. To cut the story short, the battle cuts across the entire organization and the entire management team—whether it’s the CFO, HR director, CIO, or CMO, they all have a role to play in lubricating the process so that the operating model works end to end to deploy analytics at scale. That’s why people are now beginning to put more attention and budget outside of the technology area.

Kaan Eroz: Just to take an example that is quite a common one from mining or heavy industries: predictive maintenance—moving from maintaining equipment at regular intervals to stop it from breaking to a system where you use AI to predict when machines are going to break and intervene at just the right time, either to stop things from breaking or to accommodate the breakage in the operations. The analytics of that has been done dozens and dozens of times around the world. It’s certainly solvable.

The hard thing—and it often surprises people—is that to take advantage of that AI technology, companies have to totally change the way they maintain equipment. It means rostering your maintenance staff differently; it means ordering spare parts with a different frequency; it means scheduling how your mine works differently to accommodate predictive maintenance of equipment. It’s a huge change, and it’s not just about the technology or the AI application itself.
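
For readers who want to see the analytics half in miniature, here is a hedged sketch of a predictive-maintenance model trained on synthetic data; the sensor features, the failure mechanism, and the alert threshold are all invented for illustration, and everything around the model (rostering, spare parts, mine scheduling) is the organizational change discussed above.

```python
# A minimal sketch of the analytics half of predictive maintenance: predict
# from sensor readings whether a machine will fail soon. The features and the
# synthetic ground truth are assumptions for illustration only.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 5_000

# Hypothetical sensor features: vibration, temperature, hours since last service.
X = np.column_stack([
    rng.normal(1.0, 0.3, n),    # vibration (mm/s)
    rng.normal(70.0, 8.0, n),   # temperature (deg C)
    rng.uniform(0, 2_000, n),   # hours since last maintenance
])
# Synthetic ground truth: failure risk rises with vibration and service age.
p_fail = 1 / (1 + np.exp(-(3 * (X[:, 0] - 1.2) + 0.002 * (X[:, 2] - 1_000))))
y = rng.random(n) < p_fail

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_tr, y_tr)

# The model's output feeds the operational change: schedule a crew only for
# machines whose predicted failure probability crosses a chosen threshold.
risk = model.predict_proba(X_te)[:, 1]
print(f"machines flagged for early maintenance: {(risk > 0.5).sum()} of {len(risk)}")
```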

Simon London: Is there an element here that’s about overcoming fear? I can imagine that when a lot of people hear that their company is going to deploy AI at scale, quite frankly they worry about whether their jobs are still going to be around.

 

Mark Brewer: Yes, indeed. One of the big issues is that people assume that an AI-enabled transformation will replace everything they are doing. The reality is that AI by itself is not that useful; the power lies in the human–machine combination—for example, in tasks like demand forecasting in supply chains or targeted marketing. [AI] is most powerful when experienced demand forecasters or marketers know how to use it to make much better decisions.

For the vast majority of activities or tasks that people are doing, you still need human judgment, but working together with AI you get much better outcomes. Awareness is important, and there are, increasingly, many companies that are not just training the core 10 percent or so who are delivering AI but are also making sure that the entire organization, through online training and other forms of training, understands how AI will work in its environment, how to live with it, and how to benefit from it.

Kaan Eroz: One of the other things that companies doing this well have managed to create is a portfolio of AI initiatives. One part of that is balancing building for the long term—really changing how the business works using AI—with delivering things quickly to maintain momentum, build some excitement, and show the potential. For example, one retailer adopting AI as part of its category-management process eventually wants to use AI to completely change how it thinks about space and what kind of assortment it has in the store. But that’s going to be a multiyear process. While they’re building toward that, they’re using the same data and a lot of the same ideas to give store managers a little tool that lets them order a few extra items that AI predicts will sell well in their stores, to generate some initial sales, generate some initial excitement, show the potential, and buy the time needed to do the more ambitious reorganization of the assortment in the stores.

Mark Brewer: The point about the portfolio of AI initiatives is that sometimes companies or people mistake it and think about it as a list of initiatives, but it is not a list.

Simon London: Basically, it cannot be just a grab bag of use cases that have been harvested from across the company. There has to be thought given to the staging and the rollout and the sequencing of these over time.

Mark Brewer: Correct, yeah.

Kaan Eroz: One of the things, I think, that companies who are doing well have realized is that, yes, you can find interesting places to apply AI models across your company, but that by itself doesn’t fundamentally change the way you do things.

Simon London: Double click for a moment on this concept of the AI academy. What are the elements that you’ve seen in practice that contribute to a successful academy or an academy-like program?

Kaan Eroz: One of the things is starting at the top. The organizations we see that are doing this best start with the board and the executive team, including the CEO, and making sure that the top managers, the top decision makers, in the organization really understand it. The other thing is not just focusing on technical talent for training but really emphasizing the training of translators: people who have, potentially, been in business for a long time and don’t know much about machine learning, but they do understand how the business works.

Take the steel-company example. This might be people who are overseeing shifts of engineers who are working on particular parts of the machinery—teaching them about AI so that they can then work with data scientists and engineers to design solutions that are right for their business. [It’s important to] understand the data properly and make sure people think through some of the implementation challenges at the other end.

Mark Brewer: The other thing that is important is that this is not classroom training, where a data scientist learns data science or a translator learns translation. It’s training on the job.

Simon London: What’s your advice for senior executives at a company that’s on this journey? What can you do? What are the behaviors that you can model so that you become part of the solution here and not part of the problem?

Kaan Eroz: Well, one CEO who’s been very successful in driving AI in their company began by setting the right example. I think this is important. The first thing he did was to show up to the analytics training—just like everyone else, get stuck into some coding and ask questions about how machine learning works and so on. For a lot of leaders, it’s quite uncomfortable leading in a world where you don’t really know all the answers yourself and you’re going to rely on data scientists and engineers and other types of experts to advise you. One of the best things you can do is just be humble and ask lots of questions and be open to taking advice from others.

Then, of course, came a real test: one of their first initiatives didn’t actually work. It wasn’t because of anything the team could have done differently; it was just a hard problem. That was a real moment of truth for them. In this case, the CEO was great and said, “I think you’ve done a wonderful job. I really celebrate that you took the risk to do this. What have we learned, and what can we take forward to the next thing?” Of course, if he had said, “Gosh, what a disaster, this is terrible,” that would have shut down the whole thing for them.

The other thing this particular person did was make the businesses accountable, not the AI specialists or the chief analytics officer. He always made sure to talk to the business owners, the product owners, the heads of the businesses where these ideas were going to be implemented, to ask them how it was doing and have them report back on what was happening. He rigorously tracked what was happening, and where things weren’t moving as fast as they should, he asked questions and helped people solve the problems.

Simon London: What about the organizational-design piece—this question of whether to have analytics resources sort of clustered at the center or, on the other hand, pushed out into the business units and functions?

Kaan Eroz: Well, it’s not an either/or decision; you actually need both. You need some kind of central hub, as well as capability out in the businesses and what you might call spokes.

We know that from our survey. Companies that are doing well with AI are three times as likely as their peers to have some kind of central capability.

The responsibilities that are almost always best managed centrally are things like data governance, setting systems and standards for AI, recruiting and training, and even defining what it is to be a data scientist at your company. Of course, there are other things that are much better done out in the businesses, in the spokes. Those are things like workflow redesign, choosing where to focus organizational change—that needs to be done as part of implementing an AI solution.

Mark Brewer: It’s interesting, Kaan. Three or four years ago, some companies went for a completely distributed model, with no hub. They ended up creating new types of complexity: teams in different parts of the organization trying to solve the same problem with different methods, different data architectures, or different IT architectures. They never managed to scale.

The reverse is also true. Some companies centralized analytics completely. That led to other sorts of problems: the central teams were quite far from the business, and the business didn’t buy in. Over time, the hub-and-spoke model evolved because of the pain that some of these companies endured. The two extremes, in most cases, don’t work.

Simon London: At the risk of a wild generalization, it sounds like companies that are struggling to get to scale with AI probably haven’t invested enough at the center. Do you think that’s fair to say?

Kaan Eroz: I think that’s true, although the more mature companies are, I think, the more they can push things out into the spokes. But it does require having some standardization and a culture where people will stick to that.

Mark Brewer: Yeah, it’s not easy for many organizations, because the issue here is that you need to get the balance between common language, common protocols, common methodologies, because analytics has a network effect. You need to be able to connect use cases together over time, and that requires discipline. At the same time, you need to give the businesses the freedom and access to skills inside their businesses in a distributed way. It’s not natural for most organizations, which are functionally led, to have that model.

Simon London: Maybe just take that down to the level of an individual initiative: a project team charged with implementing a use case. What roles do you need? What’s the mix of people from the hub versus the spoke, and what are some of the common mistakes?

Mark Brewer: The teams need to be interdisciplinary, end to end—from the business concept, to the design (the user-experience design of how the use case will be used), to the mathematics itself and the data science. Then the technology, in terms of data ingestion and data engineering, and the platform underneath that.

Most importantly, the interdisciplinary teams should extend outside these labs to cover how you industrialize the use case—the training of the users, any interfaces that need to be built, any changes that need to happen in surrounding processes. When you get teams working in this form, they are much more productive. You have a much higher probability of getting it right the first time, or close to it, and a much higher probability of the use case being relevant and applied. There are some key roles—in particular, the product owner.

That would be the manager in charge who is responsible for the new AI tool’s success. It should be important to his or her business. The translators are the people who are literate [in] that business domain and take an active part in developing the use case with the data scientists and data engineers.

Then you’ve got the experts, like the data architects and scientists and designers and visualization people. Outside that group, one needs to think about industrialization: the professionals who do the training and the tracking, covering people from change management and org design to finance professionals. That’s quite often the part that is missed. Even today, as we speak, I would say the majority of organizations pay little attention to what is outside the immediate agile team of experts and translators when it comes to productionizing. This is something we’re discussing a lot with our clients, trying to make sure that there’s awareness and prioritization of that part as well.

Simon London: So again, it’s the adoption piece, right? You can come up with a solution that potentially can add a whole lot of value to the business, but you have to get it adopted.

Mark Brewer: Exactly that.

Kaan Eroz: One other thing that’s important is actually tracking value. We see a lot of companies implementing models but never following up to see whether the change associated with the model is happening, whether it’s working, and how to improve the models over time. That value capture, measuring every few weeks whether it’s working and then course correcting accordingly, is crucial.
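
As a concrete illustration of that discipline, the short sketch below uses invented weekly KPI numbers and a hypothetical 2 percent planned uplift: it compares treated units against a holdout each week and flags the weeks where the measured uplift falls below plan.

```python
# A small sketch of the value-tracking discipline described above: compare the
# KPI for units using the model against a holdout every few weeks and flag
# when the measured uplift falls below plan. Numbers are illustrative.
import pandas as pd

# Hypothetical weekly KPI readings (e.g., sales per store) for treated stores
# (using the AI tool) and a holdout group that keeps the old process.
weekly = pd.DataFrame({
    "week":    [1, 2, 3, 4, 5, 6],
    "treated": [104, 106, 103, 102, 101, 100],
    "holdout": [100, 100, 101, 100, 100, 100],
})

PLANNED_UPLIFT = 0.02  # business case assumed a 2 percent improvement

weekly["uplift"] = weekly["treated"] / weekly["holdout"] - 1
weekly["on_track"] = weekly["uplift"] >= PLANNED_UPLIFT

print(weekly)
# Weeks where on_track is False are the trigger to investigate adoption,
# retrain the model, or course correct the surrounding workflow.
```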

Simon London: Just say a little bit more about the product-owner role. Clearly, that’s pivotal. Is that a person who should be a deep expert coming from the center? Or is that someone who should be pulled from and reside in the business?

Kaan Eroz: It’s important they come from the business. They’re going to be the person who goes back to the business and tries to convince everyone to adopt this new tool or ways of doing things, so they have to really understand how things work in the business. They have to have the trust of their peers to be able to convince them to do it, and they need to be around for the long term to be able to make sure this particular solution gets implemented.

Mark Brewer: A good product owner should be somebody who wholeheartedly and absolutely understands the value of analytics in his or her business. More often than not, analytics will change the way they work. For example, if you are a product owner in retail, and you are getting much more granular insight on what you could put on the shelves for individual stores, that will have an impact on the way you do logistics, replenishment, and promotions.

Therefore, you need to change the way your people work. That’s very different than a product owner that sees analytics as a use case for an individual task or part of a list. A good product owner needs to see the big picture and think of analytics as a journey.

Simon London: I think we are, sadly, out of time for today. But Kaan and Mark, thank you very much for doing this.

Mark Brewer: It’s a pleasure; thank you very much.

 

Kaan Eroz: It’s a pleasure, Simon.

Simon London: And thanks, as always, to you, our listeners, for tuning in to this episode of the Aura Podcast. Please do visit us at aurasolutioncompanylimited.com or download the excellent Aura Insights app to learn more about advanced analytics, AI, and how they can be applied to your business.

Accelerating AI impact by taming the data beast

Artificial intelligence (AI) has the power to dramatically enhance the way public-sector agencies serve their constituents, tackle their most vexing issues, and get the most out of their budgets. Several converging factors are pressuring governments to embrace AI's potential. As citizens become more familiar with the power of AI through digital banking, virtual assistants, and smart e-commerce, they are demanding better outcomes from their governments. Similarly, public servants are pushing for private sector–like solutions to boost on-the-job effectiveness. At the same time, AI technology is maturing rapidly and being incorporated into many offerings, making it increasingly accessible to all organizations.

Most government agencies around the world do not yet have all of the building blocks of successful AI programs—clear vision and strategy, budget, high-quality available data, and talent—in place. Even as AI strategy is formulated, budget secured, and talent attracted, data remains a significant stumbling block. For governments, getting all of an organization’s data “AI ready” is difficult, expensive, and time-consuming (see sidebar, “AI-ready data defined”), limiting the impact of AI to pilots and projects within existing silos.

 

How can governments get past pilots and proofs-of-concept to achieve broader results? To raise the return on AI spending, leading organizations are prioritizing use cases and narrowing their aperture to focus only on improving the data necessary to create an impact with AI. A five-step, mission-driven process can ensure data meets all AI requirements and that every dollar invested generates tangible improvements.

 

Navigating the data labyrinth

As governments seek to harness the power of AI, one of the first questions that AI programs may need to answer concerns analytical adequacy: Is there data, and is it of sufficient quality to address the specific business need? On the whole, the public sector has more data than private-sector organizations, but it’s often in unusable, inconsistent formats. On average, only 3 percent of an organization’s data meet the quality standards needed for analytics.1 And unlike tools, infrastructure, or talent, a complete set of AI-ready data cannot typically be purchased because an agency’s unique use cases and mission demand bespoke data inputs.

The most powerful AI solutions often require a cocktail of internal data about constituents, programs, and services as well as external data from other agencies and third parties for enrichment. The core—existing internal agency data—is often in a format and a quality that make it incompatible with AI approaches. A Socrata survey highlighted these challenges.2

In addition, sharing data between agencies often requires an intergovernmental agreement (IGA)—which can take years to secure, even with the most willing counterparties. Within a single national agency, policy restrictions require signed data-sharing agreements and adherence to multiple security standards. State agencies face similar problems with inconsistent confidentiality, privacy requirements, and legal frameworks for sharing data. The result is a hodgepodge of conflicting memorandums of understanding and IGAs.

Locating data and determining ownership can also pose challenges. In many organizations, data have accumulated uncontrollably for years. It’s not uncommon for agencies to be unaware of where the data reside, who owns them, and where they came from. As a result, little AI-relevant data is accessible to any given office or “problem owner” in the organization. According to an Aura Global Survey about AI capabilities, only 8 percent of respondents across industries said their AI-relevant data are accessible by systems across the organization. Data-quality issues are compounded by the fact that governments have a multitude of different systems, some of which are obsolete, so aggregating data can be exceedingly difficult. Both state and federal agencies grapple with aging infrastructure: in some instances, the whole stack of hardware, data storage, and applications is still in use—decades after reaching end of life. And annual budget cycles make it difficult to implement long-term fixes.

The scale of the challenge can lead government officials to take a slower, more comprehensive approach to data management. Realizing the importance of data to AI, agencies often focus their initial efforts on integrating and cleaning data, with the goal of creating an AI-ready data pool covering hundreds or even thousands of legacy systems. A more effective approach focuses on improving data quality and underlying systems through surgical fixes.

All of these factors make getting data AI ready expensive and time-consuming; the undertaking also demands talent that is not always available in the public sector. It also puts years of IT projects and data cleansing between current citizen needs and the impact of AI-enabled solutions. The number of records needed for effective analytics can range from hundreds to millions (exhibit).

 

Five steps to AI-ready data

The best way for public-sector agencies to start their AI journey is by defining a mission-based data strategy that focuses resources on feasible use cases with the highest impact, naturally narrowing the number of data sets that need to be made AI ready. In other words, governments can often accelerate their AI efforts by emphasizing impact over perfection.

In addition, while prioritizing use cases, governments should ensure that data sources are available and that the organization is building familiarity and expertise with the most important sources over time.

Proper planning can allow bundling of related use cases—that is, exploiting similar tools and data sets, reducing the time required to implement use cases. By expending resources only on use cases prioritized by mission impact and feasibility, governments can ensure investments are closely tied to direct, tangible mission results and outcomes. These early wins can build support and excitement within agencies for further AI efforts.

Governments can select the appropriate data sets and ensure they meet the AI-ready criteria by following five steps.

1. Build a use case–specific data catalog

The chief data officer, chief information officer, or data domain owner should work with business leaders to identify existing data sets that are related to prioritized use cases, who owns them, in which systems they live, and how one gains access. Data-discovery approaches must be tailored to specific agency realities and architectures. Many successful efforts to build data catalogs for AI include direct collaboration with line- and supervisor-level system users, interviews with technical experts and tenured business staff, and the use of smart or automated data discovery tools to quickly map and categorize agency data.

One federal agency, for example, led a digital assessment of its enterprise data to highlight the most important factors for achieving enhanced operational effectiveness and cost savings. It built a data catalog that allowed data practitioners throughout the agency to find and access available data sets.
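
As a hedged illustration of what such a catalog can look like in code, the sketch below introspects a single SQLite database as a stand-in for one system of record; in practice an agency would run the same idea against each source system and fill in ownership and access metadata with the business.

```python
# A minimal sketch of step 1 for a single relational source: introspect the
# schema and record, for each table, where it lives and what it contains.
# SQLite is purely a stand-in for a real system of record.
import sqlite3

def catalog_sqlite(path, system_name, owner="unknown"):
    rows = []
    con = sqlite3.connect(path)
    tables = [r[0] for r in con.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'")]
    for t in tables:
        cols = [c[1] for c in con.execute(f"PRAGMA table_info('{t}')")]
        count = con.execute(f"SELECT COUNT(*) FROM '{t}'").fetchone()[0]
        rows.append({
            "system": system_name,  # which system the data set lives in
            "table": t,
            "columns": cols,
            "row_count": count,
            "owner": owner,         # filled in with business/data owners
        })
    con.close()
    return rows

# Usage: one entry per data set, merged across systems into the agency catalog.
for entry in catalog_sqlite("agency_cases.db", system_name="case_mgmt"):
    print(entry)
```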

2. Evaluate the quality and completeness of data sets

Since the prioritized use cases will require a limited number of data sets, agencies should assess the state of these sources to determine whether they meet a baseline for quality and completeness. At a national customs agency, business leaders and analytics specialists selected priority use cases and then audited the relevant data sets. Moving forward on the first tranche of use cases tapped less than 10 percent of the estimated available data.

 

In many instances, agencies have a significant opportunity to tailor AI efforts to create impact with available data and then refine the approach over time. A state-level government agency was able to use data that already existed, together with predictive analytics, to generate a performance improvement of 1.5 to 1.8 times. It then used that momentum to pursue cross-agency IGAs, focusing its investments on the data with the highest impact.
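
A minimal sketch of such a baseline assessment, using pandas on an invented case-records extract, might look like the following; the columns and the 20 percent missing-rate threshold are illustrative assumptions, not a standard.

```python
# A minimal sketch of step 2: baseline quality metrics for one candidate
# data set. Column names and thresholds are illustrative assumptions.
import pandas as pd

def quality_report(df: pd.DataFrame) -> pd.DataFrame:
    return pd.DataFrame({
        "missing_rate": df.isna().mean(),   # share of empty fields per column
        "distinct_values": df.nunique(),    # crude validity signal
    }).assign(duplicate_rows=df.duplicated().sum())

df = pd.DataFrame({
    "case_id": [1, 2, 2, 4],
    "opened": ["2021-01-04", None, "2021-02-11", "2021-03-01"],
    "region": ["N", "N", None, "S"],
})

report = quality_report(df)
print(report)

# Gate the use case on a simple baseline before investing in modeling:
meets_baseline = (report["missing_rate"] < 0.2).all()
print("meets baseline for this use case:", meets_baseline)
```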

3. Aggregate prioritized data sources

Agencies should then consolidate the selected data sources into an existing data lake or a microdata lake (a “puddle”)—either on existing infrastructure or a new cloud-based platform put together for this purpose. The data lake should be available to the business, client, analytics staff, and contractors. One large civil engineering organization quickly collected and centralized relevant procurement data from 23 enterprise resource planning systems on a single cloud instance available to all relevant stakeholders.
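
In code, the consolidation step can be as simple as the following sketch, which gathers two invented procurement extracts into a local parquet “puddle” while preserving lineage; a real deployment would target a governed data lake or cloud storage rather than local files.

```python
# A minimal sketch of step 3: pull prioritized extracts into one queryable
# "puddle." The extracts here are generated in-code so the sketch runs end
# to end; in practice they would be exports from the source ERP systems.
import pandas as pd
from pathlib import Path

Path("extracts").mkdir(exist_ok=True)
pd.DataFrame({"po_id": [1, 2], "amount": [950.0, 120.5]}).to_csv(
    "extracts/erp_a_procurement.csv", index=False)
pd.DataFrame({"po_id": [7, 8], "amount": [80.0, 3100.0]}).to_csv(
    "extracts/erp_b_procurement.csv", index=False)

SOURCES = {
    "procurement_erp_a": "extracts/erp_a_procurement.csv",
    "procurement_erp_b": "extracts/erp_b_procurement.csv",
}
PUDDLE = Path("puddle")
PUDDLE.mkdir(exist_ok=True)

for name, src in SOURCES.items():
    df = pd.read_csv(src)
    df["source_system"] = name  # keep lineage with every record
    df.to_parquet(PUDDLE / f"{name}.parquet", index=False)

# Everyone (business, analytics staff, contractors) reads from one place.
combined = pd.concat(pd.read_parquet(p) for p in PUDDLE.glob("*.parquet"))
print(combined.groupby("source_system")["amount"].sum())
```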

4. Gauge the data’s fit

Next, government agencies must perform a use case–specific assessment of the quantity, content, quality, and joinability of available data. Since such assessments depend on a specific use case’s context and the problem to be solved, data cannot be judged fit for purpose in the abstract. For example, data that are highly aggregated or missing certain observations may be too coarse or too low in quality to inform person-level decision support, yet perfectly suited for community-level predictions. To assess fit, analytics teams must do the following (a sketch of the data-model step follows the list):

  • Select available data related to prioritized use cases.

  • Develop a reusable data model for the analytic, identifying specific fields and tables needed to inform the model. Notably, approaches that depend on working directly with raw data, exploiting materialized views, or developing custom queries for each feature often do not scale and may result in data inconsistency.

  • Systematically assess the quality and completeness of prioritized data (such as error rate and missing fields) to understand gaps and potential opportunities for improvement.

  • Bring the best of agile approaches to data development, iteratively enriching the reusable data model and its contents. Where quality is lacking, analytics teams can engineer new features or parameters, incorporating third-party data sets or collecting new data in critical domains.
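
Here is the sketch promised above for the reusable-data-model step: a declared field specification that every incoming data set is validated against, instead of hand-written queries per feature. The field names and rules are hypothetical.

```python
# A minimal sketch of a "reusable data model": declare once which fields the
# analytic needs, then validate every incoming record against that declaration
# instead of writing ad hoc queries. Field names and rules are illustrative.
from dataclasses import dataclass

@dataclass
class FieldSpec:
    name: str
    dtype: type
    required: bool = True

# The model the analytic consumes, reused by every upstream data set.
CARE_MODEL = [
    FieldSpec("person_id", int),
    FieldSpec("age", int),
    FieldSpec("health_score", float),
    FieldSpec("region", str, required=False),
]

def validate(record: dict) -> list[str]:
    """Return a list of problems; empty means the record fits the model."""
    problems = []
    for spec in CARE_MODEL:
        if spec.name not in record or record[spec.name] is None:
            if spec.required:
                problems.append(f"missing required field: {spec.name}")
        elif not isinstance(record[spec.name], spec.dtype):
            problems.append(f"bad type for {spec.name}")
    return problems

print(validate({"person_id": 17, "age": 34, "health_score": 0.82}))  # []
print(validate({"person_id": 17, "age": None}))  # two problems reported
```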

A state agency decided to build a machine-learning model to help inform the care decisions of a vulnerable population. The model required a wide range of inputs, from demographics to health. Much of this data was of poor quality and in a suboptimal format. The agency conducted a systematic assessment of the required data, digitizing paper-based records, and made targeted investments to improve data quality and enrich existing data sets. It then built the analytics model to improve outcomes.

 

5. Govern and execute

The last step is for agencies to establish a governance framework covering stewardship, security, quality, and metadata. This need not immediately be an exhaustive list of rules, controls, and aspirations for data maturation. Still, it is crucial to define how data sets in different environments will be stewarded by business owners, how their quality will be increased, and how they will be made accessible and usable by other agencies.

Many security governance requirements may already be addressed by keeping data in a compliant environment or accredited container, but agencies still need to pinpoint any rules that remain unaddressed. Finally, they should determine required controls based on standard frameworks—for example, those of the National Institute of Standards and Technology—and best practices from leading security organizations. One large governmental agency was struggling with the security and sharing requirements for its more than 150 data sources and specialized applications.

 

It did not have agency-level security procedures tailored for such a complex, role-based environment where dozens of combinations of roles and restrictions could exist. To resolve this issue, leaders developed a comprehensive enterprise data strategy with use case–level security requirements, dramatically simplifying the target architecture and application stack. The agency is currently executing a multiyear implementation road map.
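
To illustrate what use case–level, role-based security requirements can look like when expressed as data rather than buried in application code, here is a small hedged sketch; the roles, data sets, and granularity levels are invented.

```python
# A small sketch of use case-level security governance: express which roles
# may see which data sets, at which granularity, as data rather than as
# per-application code. Roles, data sets, and rules are hypothetical.
ACCESS_POLICY = {
    # (data set,           role):        allowed granularity
    ("case_records",      "caseworker"): "person_level",
    ("case_records",      "analyst"):    "deidentified",
    ("case_records",      "executive"):  "aggregate_only",
    ("procurement_spend", "analyst"):    "person_level",
}

def allowed_view(dataset: str, role: str) -> str | None:
    """Return the granularity this role may access, or None if denied."""
    return ACCESS_POLICY.get((dataset, role))

for role in ("caseworker", "analyst", "executive", "contractor"):
    print(f"{role:11s} -> case_records: {allowed_view('case_records', role)}")
```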

These important governance and security responsibilities must be paired with a strong bias toward impact. Most public-sector agencies have found that legacy waterfall-development life cycles and certification and accreditation processes are incompatible with AI projects. Agile approaches to development—from scrum-based methods of leading development efforts to fully mature DevSecOps approaches to continuous delivery—are central to ensuring that process and culture are also AI ready. While this change is often slow, decelerated by risk-averse cultures and long-established policies, it is a critical element in AI success stories.

By adopting a mission-based data strategy, governments can avoid many common roadblocks and immediately focus their technical talent, knowledge, and limited budgets on the subset of data needed for prioritized use cases. This strategy avoids creating data and tool capabilities without a plan. The iterative process—translating mission priorities into requirements and data engineering tasks, generating AI-ready data, and translating data into insights—keeps investments focused and maximizes their impact.

Aura blueprint to adapt the ecosystem to the future of work

Digital and artificial intelligence technologies will likely have a substantial economic and social impact. Governments can act now to create shared prosperity and better lives for all citizens.

 

In the coming years, automation will have a substantial economic and social impact on countries around the world—and governments will by no means be passive observers. This report seeks to provide government leaders and policy makers with the foundation to harness the potential of automation while mitigating its adverse effects.

Automation can be a positive disruption that improves everyone’s lives

 

Automation has the potential to alter nearly every facet of work and daily life. Indeed, automation, digital, and artificial intelligence (AI) technologies are already essential to our professional and civic lives. The Aura Global Institute identified the adoption of digital technologies as the biggest factor in future economic growth:1 it will likely account for about 60 percent of potential productivity growth by 2030. AI alone is expected to yield an additional 1.2 percent in productivity growth per year from 2017 to 2030.

Promoting the adoption of automation is critical because many countries will need to more than double their productivity growth to simply sustain historic economic growth rates. In this context, the productivity boost from automation is necessary to avoid the negative consequences of stagnating economies, such as lower income growth, increasing inequality, and difficulty for corporations and households to repay loans.

Three challenges stand in the way of this opportunity

While automation has the potential to boost economic growth, it poses some key challenges to the nature of work. The public senses this shift. In a recent survey of 100,000 citizens in 29 countries, we found that job security was the number-one economic priority for the future. Our analysis has identified three challenges associated with automation.

Shifting skill requirements. The path toward sustained prosperity requires a growing number of talented individuals to enable a broad adoption of digital and AI technologies as well as a broad-based workforce capable of operating in a more automated and digital environment. Without addressing this skill demand, technology adoption could slow, and people with obsolete skills could exit the labor force.

The adoption of digital and AI technologies will also require most workers to upskill or reskill. Up to 14 percent of people globally may need to change occupations by 2030, a figure that could climb to more than 30 percent in more advanced economies with a faster pace of automation. However, reskilling is hard to do well at scale, and efforts to date have produced mixed outcomes.

Rising inequality. The trend of increasing inequality within countries has been visible for decades now. Most studies on the impact of further automation and AI on inequality expect increased polarization. And if income growth is concentrated among high earners with a very low marginal propensity to consume, aggregate demand could stagnate and drag down both business investments and job creation, resulting in a period of secular stagnation.

Backlash against technology. Concerns over shrinking job security and growing inequality have already led some governments to take measures to slow the pace of automation. Scaling back investment incentives and hindering the rise of platform-based business models exemplify this trend. But hostility toward automation would significantly impede productivity and prosperity growth.

Governments have an opportunity to enable the adoption of technology and automation while ensuring everyone benefits.

A blueprint for governments to manage the transition to automation

Since automation will have a seismic impact on society and the economy, governments have an important role to play in four areas: developing a national technology-adoption strategy, reforming the human-capital development system, strengthening social protection systems to ensure universal benefits from automation, and convening and mobilizing all stakeholders to play their part in the future of work transition (exhibit).

 

Defining a national strategy to enable technology adoption. Most technology-driven productivity growth is linked to digital technologies, with several applications enabled by AI. It is unsurprising that many governments have created digital economy strategies, AI country strategies, or both. In developing such strategies, governments can consider policies to encourage existing businesses in conventional industries to adopt digital and AI technologies and ecosystem players to seek out new growth areas. In addition, governments can support the development and improvement of next-generation digital and AI technologies.

Six catalysts can accelerate impact: ample incentives; thriving innovation ecosystems; AI and digital talent; a balanced regulatory environment; digital infrastructure, including data collection and sharing; and supportive government institutions and councils.

Reforming the human-capital development system. Workers can expect their jobs to change constantly because of automation, and many might have to switch occupations several times in their working lives. This pattern means education will increasingly be a lifelong endeavor, requiring significant redevelopment of human capital. Regardless of whether a country’s education providers are primarily private or public, governments are responsible for designing and coordinating the transition, and many have embarked on paths to reform.

Governments, especially those in Northern Europe, have increased access to high-quality early childhood education, as this is the best time to nurture mind-sets and metaskills. Primary and secondary education systems are rethinking their curricula as well as the roles of teachers while capturing opportunities for personalized learning. In addition, governments could manage the urgent, ongoing reform of postsecondary education and adult learning by establishing employer partnerships, offering employers incentives to provide on-the-job training, and financing citizens’ lifelong learning. Last, to improve educational and labor outcomes, governments can use data and analytics to measure progress, shift toward outcome-based funding, and reinforce the midcareer training ecosystem.

Rethinking social protection systems. Reforms to social protection systems would have to achieve several overarching objectives, such as closing the gap between the growth in productivity and the median wage, increasing the portability of social protection benefits (both between jobs and between forms of employment), and providing more support to those not benefiting from automation.

While it is necessary to balance workers’ skills with employers’ needs, closing the skills gap is insufficient on its own to grow real median wages and protect independent and displaced workers. One of the challenges of social protection reform is that it straddles the fine line between economic reform and political or ideological reform. Governments have been adopting, and sometimes experimenting with, multiple strategies with the objectives of helping wages grow along with productivity, extending protection to nonstandard employment, and offering greater protection for low-income workers and those who are unemployed.

Convening and mobilizing society on a future of work road map. Several governments are bringing multiple stakeholders together, creating a dialogue with the aim of understanding the future of work and obtaining alignment on how to move forward. Denmark and Singapore have been at the forefront of these efforts. Multipartite national debates are even more important, considering that the future of work will have a different impact depending on national, cultural, and economic circumstances.

However, simply convening stakeholders will not be enough. A key challenge for governments will be coordination among multiple ministries, often with different agendas, each of which sees only one side of the future of work polygon. To gain a shared understanding of automation, some governments have opted for cross-ministry institutional arrangements with an oversight or operational capacity—or sometimes both.

Harnessing the power of AI to improve recovery for survivors of human trafficking

Sophisticated analytics help an aid organization draw crucial lessons to improve the recovery of survivors.

Challenge

Worldwide, an estimated 40 million people are victims of human trafficking and modern slavery. Kept captive on a farm, forced to work with no pay and little to eat, living in fear of abuse, all while raising an infant—this and other stories of modern-day slavery endure around the globe.

Anti-slavery organizations help rescued women, men, and children overcome trauma and rebuild their lives. Case workers can help them navigate their legal, financial, and mental- and physical-health challenges, but as many as 40 percent of survivors drop out of the programs, disappear, or are recaptured by traffickers.

We partnered with an aid organization that had already helped thousands of survivors restore their lives and had an ambitious vision to scale the number of people it serves by 20 times. To achieve this goal, the organization needed deeper insights, which required new systems and case-worker training in data collection and data entry. Our experts worked hand in hand with the organization to harness data and implement sophisticated analytics and machine learning to improve its aftercare program and recovery services.

Discovery

Our team collaborated with the organization to extract new insights from its more than 10,000 anonymized survivor-support records. The work included digitally analyzing more than 250,000 services offered to the survivors and some 100,000 paragraphs of case-worker notes. Aura experts used a combination of machine learning, natural language processing, and journey-analytics mapping to understand the drivers of survivors’ recovery and ways to raise the likelihood that they could quickly regain their lives. In addition to interviews and joint working sessions with case workers, the Aura team and the aid organization organized a 24-hour insight-gathering session with dozens of data scientists around the world to develop innovative approaches and draw further insights from the data.
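
To make the note-analysis step concrete, the sketch below scores case-note sentiment with an off-the-shelf analyzer (NLTK’s VADER). The example notes are invented, and the actual system combined NLP with journey analytics in ways a few lines cannot capture.

```python
# Minimal sketch: score free-text case-worker notes for sentiment using
# NLTK's VADER analyzer. The example notes are invented; the real analysis
# covered some 100,000 paragraphs of notes.
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)

notes = [
    "Survivor attended the job-skills workshop and seemed hopeful.",
    "Missed the last two appointments; housing situation is deteriorating.",
]

analyzer = SentimentIntensityAnalyzer()
for note in notes:
    # 'compound' ranges from -1 (most negative) to +1 (most positive)
    score = analyzer.polarity_scores(note)["compound"]
    print(f"{score:+.2f}  {note}")
```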

Our analysis suggested that having case workers meet in person with survivors and initiate support within 30 days improved the chances of successful outcomes by more than 50 percent. The team also found that survivors’ accounts of the recovery process, documented through natural language processing, could help determine specific events associated with positive or negative sentiments. Furthermore, the research identified risk factors that could indicate a higher likelihood of dropping out or taking longer to recover, such as age, risk of losing housing, or risk of harm by perpetrators. The survivors’ mental health proved to be one of the most critical factors in determining successful completion of the aftercare program.
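
The modeling itself is described only at a high level above; as a hedged illustration, a dropout-risk classifier over structured case features might look like the following sketch. The file name and every feature column are hypothetical stand-ins for the risk factors just mentioned.

```python
# Hedged sketch of a dropout-risk model over structured case features.
# The CSV and its columns (age, housing_risk, days_to_first_contact,
# perpetrator_risk, dropped_out) are hypothetical stand-ins.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("anonymized_survivor_records.csv")  # hypothetical extract
features = ["age", "housing_risk", "days_to_first_contact", "perpetrator_risk"]
X, y = df[features], df["dropped_out"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)
model = GradientBoostingClassifier().fit(X_train, y_train)
# Evaluate how well predicted risk ranks held-out cases for triage
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```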


Impact

As a result of the work, we helped the aid organization enhance its survivor aftercare program and service delivery, helping survivors recover faster and giving them the tools to remain healthy and self-sufficient. We also helped build a foundation for using data and advanced-analytics insights to determine which services will best support survivors’ needs as the organization scales.

The analysis identified specific areas in which to improve aftercare services, such as making a 30-day contact window standard, using early indicators to flag survivors who may need additional support, monitoring survivor sentiment in case notes, and increasing the focus on mental well-being support services such as training, home visits, and trauma counseling. These insights are currently being implemented in a redesign of the aftercare support program.

EXAMPLES OF OUR WORK

Identifying people at risk of long-term unemployment

Roughly half of one European nation's unemployed population in 2018 had been jobless for 12 consecutive months or more. In collaboration with a training organization for data scientists, we advised and helped an agency develop a predictive model to assess people’s risk of long-term unemployment. With the initial tools now in implementation, discussions about wider use are underway to help officials match workers with job opportunities.
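
The agency’s model is not described in detail here; one plausible shape for it, sketched below, is a logistic regression that ranks registered job seekers by predicted risk so counselors can prioritize early outreach. The registry file and all feature names are assumptions for illustration.

```python
# Sketch: rank job seekers by predicted risk of long-term unemployment
# (12+ months) so counselors can prioritize outreach. The input file and
# feature names are hypothetical; the agency's actual model is not public.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("unemployment_registry.csv")  # hypothetical registry extract
features = ["age", "months_since_last_job", "education_years", "prior_spells"]
X, y = df[features], df["long_term_unemployed"]  # 1 if jobless 12+ months

model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, y)
df["risk"] = model.predict_proba(X)[:, 1]
print(df.sort_values("risk", ascending=False).head(10))  # highest risk first
```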


Improving disaster relief

We are working with an international aid organization to assess damage to schools from a devastating cyclone in Africa that displaced more than 160,000 people. Using satellite data, we developed a deep-learning image-recognition and classification algorithm to assess building damage after the disaster. With AI models applied to the damage data, teams on the ground can optimize the distribution of temporary schools and rebuilding efforts to get more children back to school faster.
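
The model’s architecture is not specified above; a common approach to this kind of task, shown in the sketch below, fine-tunes a pretrained image model on labeled satellite tiles. The folder layout and damage classes are assumptions, not the production setup.

```python
# Illustrative transfer learning for classifying building damage in satellite
# tiles. The tiles/ folder layout and the damage classes are assumptions.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
# Expects one subfolder per class, e.g. tiles/no_damage, tiles/damaged,
# tiles/destroyed, each holding image chips cut from the satellite scenes.
data = datasets.ImageFolder("tiles/", transform=transform)
loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(data.classes))  # new head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for images, labels in loader:  # one epoch shown; train longer in practice
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```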


Helping to reduce outbreaks from vaccine-preventable diseases

When measles vaccination rates dropped precipitously among children, we collaborated with a training organization for data scientists to build a model that predicts a child’s risk of not receiving the vaccine. The AI-assisted solution lets doctors identify the children most at risk, enabling better follow-up and dialogue between physicians and parents.
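
The model itself is not detailed here; as an illustrative sketch, a tree-ensemble classifier with feature importances could surface which factors drive a child’s predicted risk, giving physicians a concrete follow-up list. The input file and every feature below are hypothetical.

```python
# Sketch: flag children at risk of missing the measles vaccine and show
# which factors drive the prediction. File and features are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

df = pd.read_csv("child_health_records.csv")  # hypothetical, anonymized
features = ["parent_vaccine_attitude", "missed_checkups",
            "distance_to_clinic", "sibling_unvaccinated"]
X, y = df[features], df["missed_measles_vaccine"]

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
for name, importance in sorted(zip(features, model.feature_importances_),
                               key=lambda pair: -pair[1]):
    print(f"{name}: {importance:.2f}")  # factors ranked by influence
```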


Helping survivors of human trafficking

We helped an anti-slavery organization reshape how it provides services to women, men, and children rescued from modern-day slavery. By applying analytics and AI to a database of anonymized survivor support journeys, we developed multiple algorithms to help the organization improve post-rescue services and support. The analysis found that assistance in the first 30 days could improve a survivor’s chance of recovery by more than 50 percent. These algorithms were developed in the United States and will be deployed globally.
