
The Evolving Leader
The Evolving Leader Podcast is a show set in the context of the world’s ‘great transition’ – technological, environmental and societal upheaval – that requires deeper, more committed leadership to confront the world’s biggest challenges. Hosts Jean Gomes (a New York Times best-selling author) and Scott Allender (an award-winning leadership development specialist working in the creative industries) approach complex topics with an urgency that matches the speed of change. This show will give insights into how today’s leaders can grow their capacity for leading tomorrow’s rapidly evolving world. With accomplished guests from business, neuroscience, psychology and more, the Evolving Leader Podcast is a call to action for deep personal reflection and conscious evolution. The world is evolving, are you?
A little more about the hosts:
New York Times best-selling author Jean Gomes has more than 30 years’ experience working with leaders and their teams to help them face their organisation’s most challenging issues. His clients span industries and include Google, BMW, Toyota, eBay, Coca-Cola, Microsoft, Warner Music, Sony Electronics, Alexander McQueen, Stella McCartney, the UK Olympic system and many others.
Award-winning leadership development specialist Scott Allender has over 20 years’ experience working with leaders across various businesses, including his current role heading up global leadership development at Warner Music. An expert practitioner in emotional intelligence and psychometric tools, Scott has worked to help teams around the world develop radical self-awareness and build high-performing cultures.
The Evolving Leader podcast is produced by Phil Kerby at Outside © 2024
The Evolving Leader music is a Ron Robinson composition, © 2022
The Atomic Human with Neil Lawrence
During this episode of The Evolving Leader podcast, co-hosts Jean Gomes and Scott Allender talk to Professor Neil Lawrence. Neil is one of the world’s most influential thinkers on the future of machine intelligence and the implications of what it means to be human. He’s the DeepMind Professor of Machine Learning at the University of Cambridge and a Visiting Professor at the University of Sheffield. In his book ‘The Atomic Human’, he explores the differences between AI and human intelligence. For three years (2016 – 2019) Neil was also Director of Machine Learning at Amazon.
Other reading from Jean Gomes and Scott Allender:
Leading In A Non-Linear World (J Gomes, 2023)
The Enneagram of Emotional Intelligence (S Allender, 2023)
Social:
Instagram @evolvingleader
LinkedIn The Evolving Leader Podcast
Bluesky @evolvingleader.bsky.social
YouTube @evolvingleader
The Evolving Leader is researched, written and presented by Jean Gomes and Scott Allender with production by Phil Kerby. It is an Outside production.
What are humans for in an automating world? This is a question of huge significance today. It can be approached at many levels: societal, economic and philosophical. What constitutes a good life when robots permeate every aspect of your work, your home, even nature? We've had some lively debates on this topic, and our guest on the show will help you to form your thinking on this question. Further still, he's been a shaping force in the realities that we now face. Neil Lawrence has been both an academic AI pioneer at Cambridge and a commercial leader heading up Amazon's AI efforts, and he's just written The Atomic Human. Its big idea is that when you strip away everything that machine intelligence can automate, you are left with the core of humanity that we need to better understand and amplify so that our world gets better, not worse. So tune in to a fascinating and important conversation on The Evolving Leader.
Scott Allender:Hey folks, welcome to The Evolving Leader, the show born from the belief that we need deeper, more accountable and more human leadership to confront the world's biggest challenges. I'm Scott Allender
Jean Gomes:and I'm Jean Gomes.
Scott Allender:How are you feeling today, Mr. Gomes?
Jean Gomes:I am feeling the anticipation of getting on a plane to New York to come and see you after the weekend. I am feeling exhausted after this week, I've been traveling non-stop. I got locked in a train for several hours, they wouldn't let anybody out, and I had to make an escape through the driver's cabin, which was exciting. Apart from that, I'm feeling great.
Scott Allender:Well, I need to know more about the train.
Jean Gomes:I'll tell you when I see you.
Scott Allender:Okay, leave our leaders in suspense. I'm feeling contented. We had a holiday weekend here in the States, so I've had family around, and I'm feeling really grateful to have spent time with them, and feeling really grateful and filled with anticipation as well to be with you and Emma next week in New York, which is going to be delightful. And I'm feeling particularly enthused about our conversation, because it's such an important one we're going to have today. We're joined by one of the world's most influential thinkers on the future of machine intelligence and the implications of what it means to be human. He is the DeepMind Professor of Machine Learning at the University of Cambridge and a Visiting Professor at the University of Sheffield. He's written a book called The Atomic Human, which explores the differences between AI and human intelligence. And Jean, I know from your summer reading recommendations, this was listed as one of your favorite reads of the year. And his experience isn't solely confined to academia; he also spent three years as Director of Machine Learning at Amazon. Neil, welcome to The Evolving Leader.
Neil Lawrence:Thanks very much for having me. Scott, Jean, it's great to be here.
Jean Gomes:Neil, welcome to the show. How are you feeling today?
Neil Lawrence:Good. It's been a tiring week, a busy week. I haven't been stuck on any trains, but there's been lots of work. These last three months have been very busy with teaching, doing workshops, policy work, and talking about the book. But it's nice that it's Friday. As Scott says, it's not a holiday weekend here, but I'm looking forward to hopefully watching Sheffield United beat Sunderland tonight on the TV, so that'll be nice.
Jean Gomes:Excellent. Well, in this conversation we're really interested in what leaders need to consider, in how they think about AI and how they create businesses and organizations where humans thrive. So could we start with why you wrote The Atomic Human? Because you've written it not just for a specialist audience, it's for a wide audience. What does it help the person in the street to understand?
Neil Lawrence:I think it's a great question, and part of me hopes that it's about the person in the street getting confidence in what they already know, because I feel the conversation has been such poor quality, but involving some very intelligent people, that the person in the street, whether that's a leader or a regular person, is having their instincts about what it means to be intelligent, what it means to be a human, undermined. And I think that's deeply problematic. Their instinct says, I mean, the sense is that it's probably a little bit more complicated than what we're hearing from, say, big tech or startup companies. I mean, you're both experts in this, probably to a greater extent than me. But what I felt I could bring to it is: look, even if you do this stuff in machines, it's much more complicated than what you're hearing. And the idea of the book was to use the machine and the way it does things as a place to stand and look back at who we are and sort of marvel at what we are, rather than, I think, the prevailing narrative, which is one about how we're all going to be made irrelevant and redundant by this technology, which I think is utterly wrong and very undermining of people.
Scott Allender:So let's unpack that a little bit, because first I want to understand: instead of AI, you use the term machine intelligence. Can we start with that? I'm curious to know more about why you prefer it that way.
Neil Lawrence:Yeah. And I think, you know, you do these post-book reflections, and you also wonder whether intelligence is the right term. But the reason for machine intelligence was because I think the artificial intelligence term has particular meanings in science fiction. It causes people to think of certain things that we expect, these sort of, I don't know, Robby the Robot type entities that can communicate with us and are never wrong, or Data from Star Trek. It's an approach that triggers something in people, because it's something that I think we've historically thought of as unique to us. So you're effectively being told this thing, that intelligence isn't unique to you. So one of the things I try to do in the book is say, well, okay, if you want to call, and I mean, it's not explicit, but as you know from the book, it sort of highlights: if you want to call machine intelligence intelligence, then there's a bunch of other things you have to think of as intelligence, whether that's immune systems or social insects or just the ecology around us. And then I'm getting more comfortable with maybe that's the way we go: that if you want to use that term for machines, let's use that for a bunch of other interesting decision-making systems that surround us, that are not the same as us, but work in different ways and do interesting things.
Jean Gomes:So, you know, I love the book. It's brilliant, and it's incredibly ambitious as well, but it's setting out to change the way that we think about AI. So let's dive into a little bit more detail about what we're getting wrong in the way we're thinking about its role in our futures, across all domains: business, society, the media. What does that look like for people?
Neil Lawrence:I think the primary thing is that we have a tendency to anthropomorphize. And when we see an entity that is making decisions of the form a human might make, we assume that those entities are making the decisions in the same way. So, you know, right at the beginning of the book, one of the messages is that that's not what's happening. And you know, I use a few different analogies in the book, but since talking about the book I've got further analogies: the rate at which machines are consuming information versus the way that humans exchange information is the difference between walking pace and light speed. Machines go 300 million times faster than us in their ability to consume information. And the absurd thing we're hearing from some very intelligent people, people I respect, colleagues and interesting people, is, oh, that we're entering an era of artificial intelligence where you might have machines that are orders of magnitude more intelligent than a human. But that's an undefined concept. Intelligence, you know, as I think you both know, is multifaceted. It has different aspects. And one of the beauties of being a human is the way we collaborate with others who have different strengths, building teams, and as a leader, that's key to how we build on those capabilities in the team. So the notion that you can talk about one intelligence being orders of magnitude better than another is nonsensical. Intelligence is always contextual. But in terms of information exchange, we can definitely talk about that, because it's a sort of very fundamental quantity, like energy. We can already say that machines are eight orders of magnitude faster than us, the difference between walking pace and light speed, in exchanging information, and that's true before AI. That's true before ChatGPT. That's the reality of our lives, and that's the big transformation. Well, this is a big transformation too, but we're already dealing with the consequences of that and what it has done to modern, digitized societies, where leaders have often been separated from their information ecosystem, because increasingly businesses have become digitized. And in the past, any leader would be able to go and open the books and see what's going on in the accounts if they really wanted to. And of course, with digital technology, in some ways that's become easier, but in some ways that's become much harder. The complexity, for example, of Amazon's information ecosystems, managing the supply chain, was well beyond any individual to understand, and it wasn't easy to answer a vice president's question about why something had gone wrong. Yeah, Amazon are more advanced than anyone else. I mean, that's why I went to work there. I wanted to see, well, how do the best do it? And the answer is: not very well, but better than everyone else. It's that old story of the guy who's putting on trainers after they've seen, I never remember if it's a lion or a cheetah, and the other guy says, you're not going to outrun the cheetah in those, and the first guy says, I don't have to outrun the cheetah, I just have to outrun you. You know, that's how the best digitized companies operate. They're just outrunning other companies. They haven't got this stuff sewn up. And, yeah, I think that's creating a really interesting world, where we're already seeing the damaging effects of that. And the question is, well, where are we going next with this technology?
Are we going to wield this technology in a way that ameliorates, that makes things better, or are we going to deploy it in such a way that these problems get worse? And my answer is, I don't know, but I believe that getting more people involved in the conversation and confident about their understanding is part of the way that we ensure that the best outcomes occur in as widely distributed a manner as possible.
Scott Allender:So you mentioned the anthropomorphizing quandary here, where we tend to ascribe to AI our human qualities, and I'm assuming that's because we don't yet know how to think about it, right? We don't really know what we want with it: what is the problem we're trying to solve with it? What are the opportunities? So in its mysteriousness we ascribe ourselves to it. What are some ways we can start to reframe how we think about it?
Neil Lawrence:Yeah, it's interesting. I have a colleague at Zurich who was talking about trying some interesting things, like getting people to do sort of mental exercises before they engage with chatbots to remind themselves of that... If you get in a position where you don't have to spend any money on the marketing, it's not that you can fire your marketing department. You just have to find a new way of differentiating that indicates we are serious about this product. So this form of disruption, that initially appears to, I think, naive business leaders as, oh, this is great, I can get rid of the marketing team, means now you're going to have to get really imaginative about how you express to people that the thing you're providing is a cut above what everyone else is providing, because those standards we worked to in the past, getting human creative types to produce this form of material, have been undermined by this technology.
Jean Gomes:I want to come back to the point you were making about a colleague describing a way of almost priming yourself to be able to interact with bots or ChatGPT or whatever, because there is a parallel, you know: that part of our brain switches off when we use sat nav. We've lost the capacity to actually find our way, our sense of direction. A lot of people just don't even look where they're going anymore, because they're relying on automation. So how do we avoid that? How do we avoid losing some of the qualities that elevate us above the machine?
Neil Lawrence:Yeah, so I should say that this is work that's coming from Mennatallah El-Assady, who's at ETH Zurich. But I think the parallel that you're talking about is bang on, because there's a lot of circumstantial evidence that suggests that when we're planning, I mean, and you both perhaps know more about this than I do, but when I'm thinking my way through a problem, I'm leveraging mechanisms that initially evolved for navigation. So the hippocampus fires in sympathy with the prefrontal cortex. And you know, it sort of feels like that, because often when we talk about a problem, we navigate our way through it. We use narratives to explain. Narratives feel like journeys. So this has a lot of intuitive sense. I don't know what the latest evidence for it is, but if it were true, then exactly as you said, Jean, we're headed for some trouble, aren't we? Because we already have all these stories about people who drove into rivers or harbors, or drove to a Paris in the wrong country or state because they didn't check their sat nav and drove for 18 hours without querying where they were going, which is driven by this tendency to accept instruction from a machine, which I think is also quite widely studied. And if we have that with this extraordinary technology, and let's be clear, it's a totally extraordinary technology, I mean, you can do some amazing things with it, but I think, you know, I feel very lucky. For example, I wrote the book without touching a large language model for creating text, and I was really explicit, I wanted to do that. I worked hard to find my voice. You know, do that thing where it's a written version of you that feels like a spoken version of you, or that's what I was going for. And it took a lot of rewrites. And imagine, if you have access to this technology in the future, are people going to put that effort in to find their voice? And if they don't, I find that incredibly sad, because, of course, it has extraordinary voices already. I mean, its voice is copied from humans from the past, but its voice is copied from the very best humans of the past. And then tying this back to, well, navigation of a problem: if we think about the professions and the time it takes for a lawyer or a nurse or a doctor or anyone to start gaining the intuition they need for their subject area, a lot of that, like navigation, requires having made the mistakes on the route, having gone down the wrong road under supervision, and understood not to do that again, and then going forward so you remember which the right turn is. And sat nav just gives you the sense of that experience. There's some really great work, and I think it was Matt Wilson at MIT who was talking about this at the NeurIPS conference 10 or 11 years ago, showing in rats that if you have one rat towing another rat in a cart behind, the rat in the cart behind doesn't learn the maze. It's the rat at the front trying to work through the maze that learns it, and we're becoming the rat in the cart. And what are the consequences of that going forward? I think these are major issues around how we educate, how we train, how we bring forward the next generation. Of course, there's immense possibility in terms of sharing knowledge if this technology is used well. But yeah, it's a really big issue, this sort of, I think some people call it mental atrophy.
Scott Allender:Do you have thoughts on how we can start to do that? Because I'm really interested in what you're saying, this idea that we're going to lose this capacity, and already have. I feel like my sense of direction is getting worse already. These things are so convenient, right? So it's really difficult to sort of say, hey, as the world gets more automated and you have more and more conveniences, you should watch out and not always engage with them. Like, how do we get people to wrap their heads around this as an issue, and then what would you suggest they do?
Neil Lawrence:It's a really key question, because, on the, you know, let's talk about the positive side of GPS. I don't so much anymore, but I used to run a lot, and every time I went to a new city, I would try and run a half marathon. Now, this turned out not to be always a great idea, because when I went to Baltimore I ran around the bay, and I showed it to my friend later, and they said I went through some dodgy places. Seemed fine to me, but they were pretty freaked out about where I ran. But actually, at the time, you couldn't do Strava following of other runners, right? So I was mapping out a route. I wanted to go around the bay, I didn't want to double back on myself, so I went inland. And so anyone who lives in the Baltimore area knows exactly where I went running, and it was fine. As a runner, no one bothered me. It was fine. I enjoyed it. And actually, I remember those places more than I remember the bay, because I was running through some extraordinary places with some extraordinary people, and it sticks in my mind. And of course, I was somewhat navigating because, you know, I'm not going to be getting my phone out every two minutes. I was making mistakes on my own, and I've done that in Kampala. I went running down and ended up running into the market area, and not many tall English folk would have run through that market area in Kampala. And that sticks in my mind as well. And I couldn't have done these things without GPS. I'm not saying that people should be doing all these things, but there are ways of giving yourself experiences that you just couldn't have imagined before, that still plug into your understanding, because it's taking you that much further than you could have gone before. So I think that's part of it. But I think another part of it is, one of the problems we have in this space, with the conversation at the moment, and this is one of the things the book's trying to address, is that all the voices we are hearing in the press, or, you know, celebrated as Silicon Valley founders, are people who are most confident about what the future is going to look like. But the only thing we can confidently say is that people who are confident about what the future is going to look like are going to be wrong, and what we need is an ecosystem where those who don't have confidence are coming forward, because we need that diversity of understandings. They may not know about AI, but they know about the thing that they do, and if they're taught a little bit about AI, they can explain how it's going to manifest in their area and what changes we might need. And if they feel confident and informed, they can be the early signals that something's going wrong. What really concerns me is that a lot of these signals are difficult to measure in the classic ways that we might measure things like economic growth or efficiency performance, because so much of this stuff is about the core of a human. And, to me, I don't talk about this so much in the book, but I've just written an op-ed that hopefully will be in the Financial Times next week, where I sort of try and highlight that as we cut away capabilities that we formerly thought of as human capabilities, but now can be done with the machine, we get close to the core of what I call the atomic human, what the book's about. The book doesn't really talk about this idea much, but it's something I feel quite keenly: there's almost like an uncertainty principle.
What do I mean by that? Any facet of human productivity or attention that is easy to measure is going to be a facet that is easier to replace by a machine. Any facet of human capability that is hard to measure is going to be something that's difficult to replace by the machine. So as we slice away at humans and take things and give them to machines, whether that's mental or manual labor, we're making the human contribution harder to measure, and this is a deep problem, because we aren't then seeing how things are failing, because you're really getting to the core. And you know, the example I give in the op-ed, that I don't talk about in the book: I'm doing a lot of work in AI and education, and people say things like, oh, well, you know, soon you won't need teachers, because these chatbots already know so much, you can just learn from that. And I say, go and watch the video of Ian Wright, the England national footballer, meeting his teacher who turned his life around when he was living in an abusive home and no one believed in him, and one teacher understood that despite the fact he was playing up at school, there was something in that little boy, and that teacher believing in him entirely turned his life around. And no one's been measuring that. That one act doesn't come out in the statistics, but it's absolutely vital to the life of someone who became a very productive and admired citizen. And these are the pieces, the atomic human. But how we preserve and sustain and build on these things, I think, requires more sophistication in how we approach businesses and the public sector in particular. It's what I think of as the core of human capital that is difficult to measure. You want to measure human capital in terms of various measures of productivity? Fine, but that will always be somehow doable by a machine. You want to measure human capital by what we might sometimes call hidden labor, which is a useful term to use in this sense: hidden labor is this sort of character, and it is rarely recognized properly in businesses. That is the thing that will remain, and we have to work out ways of rewarding, sustaining and celebrating that type of work in businesses today, because I think we've gone too far in the direction of monitoring everything. Of course we need to monitor, of course we want to make things efficient and make sure we're spending money wisely, et cetera, et cetera. But going too far in that direction is really making these weird predictions true, that humans are redundant because the machine can do everything better. Well, yes, it probably can do everything we can measure better. So that means the immeasurable becomes more important, but that leads to this uncertainty of, what is that immeasurable? And that, you know, sorry, coming all the way back, Scott, to your question, means we have to respond as we respond in the presence of uncertainty, which is: we gather a more diverse group of voices, we carefully monitor what we're doing, and we carefully listen to how that's panning out. Which, you know, for many businesses, for example operationally in the supply chain in Amazon, is not what you're used to doing. We need to make a decision now, someone needs to make that decision, we don't have time to listen to what everyone thinks. And that's fine as well. That is how you make sure that you've got stock on your shelves. But like I tried to encourage us to do in Amazon: sure, do that
Monday, Tuesday, Wednesday. But I had this thing: Thursday is thoughts day. We're going to sit back on Thursday and we're going to listen to diverse voices and hear what went wrong, and shift our way of thinking to something that is picking up on these problems before they grow into things that are really challenging for us to deal with.
Scott Allender:Did that help? Did that work?
Neil Lawrence:Did that work? You know, I don't know, because I started introducing it, then I took my new job and left.
Scott Allender:Okay,
Neil Lawrence:So I suspect it didn't. I mean, it came about because I was so admiring of the leaders early on, because initially I would be looking at them thinking, these people are doing superhuman jobs. I can't understand how they're managing this large supply chain, with sort of hundreds of millions of dollars being spent every week. But then at some point you notice: yes, but they're running a playbook. The way they manage the superhuman nature of the job is they're running a playbook, which is a tried and tested playbook that they understand, but it's a playbook. And what would go wrong? I realized what I was doing, as the sort of, you know, scientist in the team, what I would get wrong, is I would enter with a move that's outside the playbook in a meeting when they're running the playbook, and then they would find that very difficult to see. It's this difference between, like, focused attention on the problem in front of you and, what I like to talk about in the book with the picture of Newton from Blake, the sort of scanning, the sort of marmot looking around the place. And I don't think you can expect these leaders, when they're operationally attentive, making sure that you've not just lost $50 million or whatever it is that week, to do the horizon scanning. So I started introducing this because I realized, well, look, Monday and Tuesday we were doing weekly business reviews, you can't expect them to do it then. Wednesday, you're sort of doing the work, you know, chasing down all the things you suggested. So Thursday felt like a good day. But like I say, to transform a company the size of Amazon at that time, I mean, I would have had to move to Seattle and dedicate a large amount of time to that. And we started working it up. I don't know, it was a big organization, 1,200 people. I think it would have taken a lot. I think we could have done it, but you see what I mean? You can't expect the leaders to be context switching in a given day. You want them to be going in in the morning: oh, today, Thursday, we're going to run a different playbook. We're going to run the playbook where we listen, rather than the Monday, Tuesday, Wednesday one. Even though it's not quite the Amazon culture, because they're operational, they tended to run a much more closed mindset, and it was just interesting to understand why they were doing that, and to see that they are doing the right thing operationally, but it affects the business's long-term ability to see problems before they arise.
Jean Gomes:The reason I was particularly excited to have this conversation with you is exactly the point you were making a moment ago about the atomic human: when you slice away all the things that can be automated, what you're left with is the hidden labor, the intangible assets, the things that are incredibly hard to measure. That is really where human beings are going to get elevated with AI, as opposed to commodified. So I'm interested to just get a sense, because you've illuminated some of the myths and misinformation that we're laboring under, particularly within leadership roles. Can we talk a little bit about trying to help leaders in this conversation understand how we're currently looking at AI in a way that might stop us from seeing how to move forward in its adoption?
Neil Lawrence:Yeah, so I do quite a lot of teaching on this with the Judge Business School, and these things are less in the book. I mean, I'm glad business leaders do seem to enjoy the book, but it doesn't do the thing that they always teach me to do at the Judge, which is, at the end, you have to say: here are the three things you need to do tomorrow. It's nice that you did all this philosophy. And the way that I tend to do that is, I think it's inattention bias, and for the moment I've forgotten the original authors of the work, but there's this famous video that lots of business schools, I'm sure, use, of the gorilla walking into the basketball game. So there are people passing the basketball, you have to count the passes, it's difficult, and because you're so focused on the counting of the basketball, you don't see this gorilla walking in. And one of the things I say in those sessions, because the idea is to support leaders in lifting their vision, is that the problem I think leaders have at the moment is that they've come through the business without this technology existing. So where their business instincts are operational is not in the area of what might happen if you... and let's not just talk about AI. You know, this is also true of digitization, and I used to think of the challenges of digitization as twofold. One is actually making sure that your systems are reflecting the outside world, which Amazon was amazing at. But then there's a second problem: once you've got that data inside the company, how are you tracking and making sure your secondary statistics actually are representing what you care about? That turns out to be a lot harder, and that was the sort of area where things might go wrong in Amazon. But like I say, I think they're ahead of anyone else in that regard, you know? And this is not a problem that comes up for Google or Facebook, right? Because their whole world is virtual. This is about the transition between the physical and digital world. Now, what I would say to people is that the reason you're a senior leader is not because you're an expert in machine learning or AI or digitization, it's because you have a sense of what does and doesn't work in the business. And what tends to happen to senior leaders is they feel the obligation. And I saw this with leaders in Amazon as we introduced machine learning technology. Even though they're pretty technically advanced, you'd see the same thing. The leader knows that this is important. I mean, we had a whole Bezos remit saying everyone has to explain how they're using machine learning in their systems. They're then told it's very technical, and there's this sort of technical presentation where it's like, well, actually, it's really hard how these things work, let me explain. And what do they do? They start counting basketball passes. They're distracted from what they're actually there for, which is gorilla spotting. And this is an enormous problem, and this is where so many businesses go wrong. And the effect you sort of see is the following: they've lost their ability to make a calibrated judgment about when and where to ship. And as a result, they tend to be all in or all out. And it tends to go this way: they're initially all in, that project utterly fails, because no one brought the sound business judgment of what would and wouldn't work, because the people pushing the project are often new to the business.
They're keen, they understand the technology, but they don't understand the business. And then after that initial failure, the leader never wants to touch that stuff again. And of course, the right reaction is the calibrated one, where this is an interesting idea, but it won't work on that product, it might work on this one, and what we're going to do is a pilot study, and we'll understand what the problems are in deployment before we go big. Because the problem that really occurs, and this is really high pressure on particularly large companies at the moment, is that there's pressure on the board to say what they're doing around AI. So there's pressure on the C-suite executives to sort of get the attention of the CEO around who's doing what. And then there's pressure on the orgs below. And that means that, instead of these ideas being allowed to incubate and integrate with the processes of the business, the person who's most confident and who sounds like they know what they're talking about, and this is a pattern here, is the one that gets picked up by the senior vice president, often to the exclusion of the people who actually do know how it goes wrong, because they will present a more nuanced point of view. And the projects that are celebrated at the top become what I think of as Potemkin projects, ones that basically can't be allowed to fail because they've been shown to the CEO, and anything which has the CEO's eyes on it as an ongoing project is actually doomed to fail if you're uncertain about how it's going to go. So what needs to be happening, and this has to be a bottom-up process, is that there's interest in deploying these projects, but there's also a culture of sharing how they're going wrong, and that has to percolate in some way such that middle managers know which ones to bring up and share. How to do that in individual businesses varies, but one of the first things people get wrong is that if you haven't got your data infrastructure right, you ain't going to get your AI project right. And the one area of a business that likely has its data infrastructure right... I say this to CTOs, CEOs, CIOs and CDOs a lot: who has the bigger department, you or the CFO? Always the CFO. What does the CFO do, other than deal with data with dollar signs in front of it? How big would your department need to be? Because very often your data is larger than the CFO's data, but the business doesn't know how to quantify the value of your data, so it's not willing to invest in the accounting and the provenance around that data. But when it comes to actual accounting, that's just data with dollar signs. Now, if you want to be at the standard of the CFO, how big is your department going to be? And that's not going to happen. But how do you get there? Well, you actually have to be careful about using data in projects in such a way that allows you to start quantifying the value and improving its quality. That is probably the biggest cultural change most businesses face. So how do you launch data science projects in the business? Where do you start? In the CFO's department. You start with projects where you have good data, because it's financial data, and you do that in such a way that the CFO is interested in the project. They're not determining it, but they're agreeing.
This is an interesting one, in an open way, so that you're hearing about what the failings of the project are, so that you've got the CEO integrated, you've got your senior tech team, and you've got a top data scientist who is given the cover to talk honestly about where and when this is working and not working, and you maybe have that as your thoughts day thing. And if you're not doing that, you know you are just constantly in this danger of having a bunch of people who are overstating their claims. My end thing on the slide is: see the gorilla, don't be the gorilla. Because what tends to happen, particularly with male leads, is they don't like the fact that they don't know, and they operate in this alpha-male way. They don't listen to the junior people. They don't see the problems before they happen. They operate like a male gorilla would. And therefore, you know, the project is successful, whether it's successful or not, until it totally blows a hole in the company. So the senior male leads, on that thoughts day, have to tone down their alpha-male nature, get into listening mode, build the confidence of the team at the coalface, start to learn where things work and where things don't work, and keep their awareness of, you know, the real insight.
Jean Gomes:That's incredibly helpful.
Scott Allender:What are some other ways that leaders can be more human and help manage the overwhelm that people are feeling at what seems to be a technology that's beyond most of our comprehension?
Neil Lawrence:Yeah, I think it's particularly hard, because there are interesting things about, you know, being leaders in organizations. When we're doing that, we're actually at the forefront of the removal of the atomic human from process. What do I mean by that? Well, you know, there was a time before we developed writing and got together in cities, whatever, 5,000 or 6,000 years ago, when people didn't need laws. They engaged in moral labor and listened to each other and made decisions based on history. And it's incredibly cognitively demanding. You know, my wife's from southern Italy, and I would say in southern Italian culture they engage in much more moral labor than we do in English culture. There's a lot more unspoken obligation to family that they understand intuitively, things like: everyone's extremely offended if I land in Naples and try and get a taxi or a train, you know, if I don't have a member of the family pick me up. I think for UK culture, people are like, oh, I didn't want to bother you. But in Italy, it's considered offensive that I didn't ask a member of the family, because, of course, they do that for me. So it's quite complex and it's quite hard work. And what sort of happens when we develop cities is we develop processes and administration. The Code of Hammurabi, 1700 BC, tries to codify some of our ideas about what moral labor might involve, in I think the second set of laws. Of course, when we're a leader, we're in the same position. So if I'm leading an organization which has 1,000 people, and I'm being asked to cut 300 of those people, I'm not engaging in the moral labor of interacting with whether someone's a single parent or whether they have a family member with health problems. I'm just following a process that the company says, and the country says, often in the law: this is how you have to deal with this. You might have to pay some compensation. You know, it varies country to country, there's no right answer on this. And we accept as a society that those processes are allowed to occur, that people do things that individually we might find morally reprehensible, but that allow for efficient process, allow us to create products that are better, allow us to collaborate in ways that we wouldn't be able to collaborate if we were relying on direct connection. So, first of all, you have to realize, oh, that's part of what we're being told to do. You know this expression, 'it's just business': it's basically, oh, suspend your normal human moral compass. And as a result, what you get is companies often behaving like children, because the obligations we put on companies are not the same as the obligations we put on adults or public institutions, such as universities; we expect other institutions to behave in a more morally upstanding way, and there are benefits from that. So I think this is where it sort of gets tricky, right? Because, given that we clearly accept that there are some aspects of the atomic human that we're prepared to sacrifice for efficiency and sort of improved productivity, these measurables, it's not clear where the dividing line will fall in the future. So when we're thinking about a leader's role, I mean, do we really believe a leader who has no human aspect is what we want in the future? I don't think so. But we're also, in some sense, asking leaders, any leader, you know, to make decisions which we kind of know are compromising them in some ways as a human being.
So I think, and I don't know the answer to this, and you probably both have more sophisticated thoughts on how we support leaders in dealing with this, but I think that the problem we get into is that people move into a 'switch it all off' mode. The 'switch it all off' mode being: well, if I'm being asked to make those morally difficult decisions, I'm just going to follow the process, and I'm not going to put a piece of myself into the equation, because that's too disturbing. And I'm going to tell everyone that's just business. But if that's the way we're going, then you are replaceable by a machine. And I don't think that's what we want out of this. I think actually, you know, a good leader... my boss, Andrew Hamel, said this to me once. I was saying, oh, I don't know, I've got some good people, I think I just got lucky. And whether he was right or not, the point he was making was great. He sort of said, look, you don't get lucky by attracting good people; you've inspired them in some way. And whether that's right or not, I mean, I think it's potentially circumstantial, I'm not saying I was somehow some mega-inspiring leader, I have all sorts of flaws as a leader, but I do believe what he said is correct, and part of that inspiration is bringing yourself as a human to the team and sharing a vision, which sometimes actually goes beyond anything the company is giving you. Because everyone knows, I mean, well, they don't know that. They tend to believe that the company is, you know, all with them and everything else, right up until that day when you get told, well, you're out on your ear, which companies will do. But a leader within that organization can inspire more from the people within that organization in the service of a wider organization that, at the end of the day, doesn't have that human component in it, apart from through those leaders. So it's an incredibly complex and somewhat self-deceiving system, but it feels to me incredibly important that that human piece is in there.
Sara Deschamps:Evolving Leader friends, if you're curious to get more insights directly from our hosts, consider ordering Jean's book, Leading In A Non-Linear World, which contains a wealth of research-backed insights on how to see and solve our greatest challenges, as well as Scott's book, The Enneagram of Emotional Intelligence, which can help you unlock the power of self-awareness sustainably in every dimension of your life.
Jean Gomes:As we come to the end of this hour, I'd like to finish with a bit of a thought experiment. In order to create this healthy symbiosis between the atomic human and all of the amazing things that machine intelligence might be able to do over the next decade or two, how do we do that? If you were going to start up a new business and you wanted to amplify human value creation and not have it undermined by automation, what are the kinds of things that you might start to do in designing and building it?
Neil Lawrence:It's a great question, and it's already something I'm thinking of actively working on with a few colleagues, because, I'm not saying I'm a great business lead, but I kind of think it's interesting to try and put your money where your mouth is, or your effort. And if I were to go for a single problematic point, I would say it's the notion of artificial general intelligence, which I think is nonsensical. I mean, the way I talk about this post the book, and I hint at it in the book, is: it's like talking about an artificial general vehicle. Which is it? Is it an aircraft, or is it a Brompton bicycle? Or is it that train you got stuck in, Jean? You know, it's totally contextually dependent. You might have wished you'd had a Brompton bicycle when you were on that train, indeed, but at the outset it probably seemed like a train was a better idea. And the interesting thing is, there are, of course, general principles to vehicles. I mean, the reason you go in a train is because it reduces friction, but it sort of undermines the ability of that vehicle to go wherever you want it to go. There's air resistance, there's wheels, there's wings, there's all sorts of things that could be applied in different ways to vehicles. So it's not that there aren't general principles to vehicles, but there is no such thing as an artificial general vehicle. It's an absurdity. We have to know the context before we talk about what a good vehicle might look like. And you know, if people are trying to persuade me that intelligence is less complex than a vehicle, well, I think they're wrong. And, in fact, I think there's a strong parallel, because of the navigation examples we were using before: how do you want to go about your problem? You know, what are you trying to do? Are you trying to scale? What's the uncertainty level? What's the time frame you're looking at? So stepping back from that, I think it's just an incredibly damaging and simplistic and incorrect concept that, when we trace it back, is eugenic in origin, because the term general intelligence comes from the eugenicists. Okay, so what's my fix? Specialized intelligence. We don't actually want general entities. We want to be able to communicate with these things, which is just the extraordinary thing about these chatbots, that they suddenly allow normal humans to communicate with the machine for the first time. That is transformational. Forget AGI, that's transformational. I mean, you don't want to indulge in hyperbole, but it seems at least as transformational as any other revolution, even on the back of this information revolution we're in. But what about artificial specialized intelligence? What about just building things that do the job that you want them to do, and being able to build them quickly? So you don't need a generalized thing, but you do rapidly want to compose something that, I don't know, supports me with my bird-watching hobby, or supports me in my ability to run a football team, or whatever it is I'm doing. And I think the possibilities for that are emerging.
And I think even with these very general tools, you know, they are certainly general tools and they have interesting general properties, approaching it from a business perspective of: actually, we see them as tools, things that are there, at the end of the day, to reflect and express the will of some human operator. That feels like the right way to be going. And I think if we don't go that way, then societally we end up in a lot of trouble.
Jean Gomes:Do you think we'll need to invest even more effort in education, and I don't mean just the traditional academic kind, but also craft-based and so on, to help human beings to do that, you know, to be capable of being the atomic human? And I'm also thinking about things like our self-awareness, our metacognition, so that we can actually understand how to interact with these things.
Neil Lawrence:I think absolutely, and probably utterly redefine what we think of as education. The other thing is that this technology is so transformational that the way we've decomposed society, how we've separated it into different concerns, a separation of concerns: you're a teacher, you know, I'm a student. That's tailored around a stable society where the assumption is that some group of people know what to do and other people don't. Let's be clear, no one knows what to do, and the only way you can start understanding what to do is by bidirectional communication across society. That means learning as much as possible from, say, school students about how they're using these technologies, and feeding that back to the teachers so they can better understand how to teach. Or learning as much as possible about what a nurse's job actually involves, and how we eliminate the fact that they're spending all their time doing data entry and no time with patients, and accepting that even though I, you know, might be seen as an AI expert, I am not an expert in how we should be applying this technology in the domain of a busy hospital. And the amazing thing is that this technology should be able to help us with that. So all the previous disasters, like the UK ones, the Horizon program: they were centrally deployed digital systems that didn't talk to the people who were affected. Someone in the center thought they knew, and then they deployed it on people without talking to them about the effects. We cannot afford to make those errors. And those errors are what's being communicated to us by big tech companies. That's what they want to do: we're going to run the AI, we'll deploy it, you know, don't worry. Like that worked in the past, like we all love Word, don't we? That's really helped. You know, they keep promising they're going to solve the problems in society, but they fail to, because they actually don't engage with those people who are most familiar with those problems. Now, these problems are never going to go away. Problems in health, social care, education, you know, security, these are so-called wicked problems. But what we can do is empower the people who are chipping away at them, trying to reduce their size. They're always going to be there, but for the people who are trying to chip away at them, ask what's the tool you need to do that chipping, and listen to them, rather than having some central person, some bureaucrat who, however well intentioned, actually doesn't understand the problem and doesn't understand the technology, imposing this technology on them. And that means, like, a revolution in the way we educate, and a revolution in the way we understand who knows what. But it's actually really exciting. And we're doing a lot of it in Cambridge at the moment, with academics. We're starting to work with local government. We're trying to understand how to do this with teachers. It's a lot of peer-to-peer work.
A lot of it is listening to what their problems are and what their solutions are, and supporting them and building confidence in which solutions are working, ensuring they're in an environment where they can deploy these things in a safe and ethical way by talking to, you know, critical friends. And then, when they've learned how that works, that hopefully frees up a bit of their time and enables them to go out and teach someone else how to do it. A sort of, you know, 'see one, teach one, organize one' approach, because that scales, and it's like a productivity flywheel that builds on the human capital in the system. It doesn't require that to be translated back into money and then reinvested to get the productivity flywheel going; it requires that we get a little bit of time back for those nurses, doctors, teachers, whoever, and then hopefully we can persuade those people that it's worthwhile spending a little bit of that time, it doesn't need to be too much of it, re-engaging and spreading the word. And, you know, you can see what I'm trying to do here. I'm trying to create something that scales exponentially but is entirely dependent on the human capital, and building on these notions of the atomic human. That should vary in each area of deployment, right? It shouldn't be that someone central decides how it looks. It should be that those who understand how they bring themselves to their job are the ones that are spreading that message.
Jean Gomes:I have one final question. I can't help it, sorry, I know we're running over, but it's the kind that leads up to families. So you've got kids, Neil, and we've both got kids as well. So what's the advice that we should be thinking about in terms of our children, to be an atomic human in this future?
Neil Lawrence:I mean, I'm very lucky with my kids in that they're both quite academic, and they're both passionate about what they do. And, you know, of course, my oldest one, who's starting to look at the job market, is like, this is annoying, I really want to be a chemist, and no one wants to give chemists a good job. And so what he's looking at doing now is, okay, can I build some more skills on top of my chemistry, you know, some data science skills, so I can do the two together? So I find it's been relatively easy for them, because I think the technical skill sets are still important. And I think understanding an area deeply, technically, is really useful, and you'll be able to combine it with the sort of more generic AI and data science skills and steer in whichever direction. I think the really interesting and potentially troubling thing is for people who are in the creative sector, where you look at the incredible skills. In fact, I haven't published this yet, but we often work with graphical scribes, and I just think what they do is amazing, summarizing a meeting in terms of drawings. And I asked one of my favorite scribes to draw the book, so he's done an image for every chapter, and we haven't sat down and talked about it yet. But, you know, I know that he and other scribes are concerned with, well, this skill I have, is someone just going to say, well, here's a computer doing that? Well, I feel not, because I think everyone really admires and enjoys the skill of a human in the room doing that, and they can go and talk to them. But there are a number of creative disciplines, whether that's in entertainment, film or whatever, even just doing traditional CGI, which are going to be displaced. And these are already quite fragile jobs in many respects; lots of people want to do creative stuff, and not all of them get selected. So I think it's very disruptive in that field. I think it's hard to know what to say, other than people should follow their passions, but try and be pragmatic. And, you know, I always say, if you've got the ability to make a decision that keeps your options open, do that. But of course, I get to say that because I feel like I've spent my whole career doing that. Look at me now, I still get to do all this diversity. I never had to choose a job.
Jean Gomes:Yeah, you've pivoted quite a lot within this.
Neil Lawrence:But I do think that we must be optimistic, and we must make them optimistic, because the thing I most strongly feel is that pessimism is self-fulfilling, right? If we all agree that this is all over, you know, then it is. It genuinely is, and the only way it isn't is by continuing to... and I'm not religious, right, but I almost get spiritual at the end, because you realize, oh, it's actually sort of a matter of faith. It's a matter of faith and belief in other human beings, and that that's important. And you know, if there's a loss of faith in other human beings, that is real, that's horrific. And so although I've gone through my life not thinking so much about matters of faith and how I feel about them, I feel very strongly that we need to bring up the next generation to be optimistic, to demand to be confident, and to demand the power to do these things themselves, so that they can steer many of the decisions that are coming up; so they have the ability to do that, the understanding to do that, the confidence to do that, and the optimism to do that together.
Scott Allender:Well, that's a lovely place to land the conversation. Neil Lawrence, thank you for your time. Thank you for the work you're doing. Thank you for writing an incredible, incredible book. I loved every minute of this conversation. We really appreciate it.
Neil Lawrence:Oh, thanks, Jean and Scott. I mean, I always feel like I could have learned so much from both of you. I always ramble too much, so apologies for doing that.
Jean Gomes:No, not at all, that's what you're here for.
Scott Allender:Absolutely. And folks, if you do not have your copy of The Atomic Human yet, stop what you're doing right now and place your order, because it is worth your attention. You're gonna love every minute. We've gotten into some good content here, but there's so much more to get from that book. So do pick it up, and until next time, remember: the world is evolving. Are you?