
The Evolving Leader
The Evolving Leader Podcast is a show set in the context of the world's 'great transition' – technological, environmental and societal upheaval – that requires deeper, more committed leadership to confront the world's biggest challenges. Hosts Jean Gomes (a New York Times bestselling author) and Scott Allender (an award-winning leadership development specialist working in the creative industries) approach complex topics with an urgency that matches the speed of change. This show offers insights into how today's leaders can grow their capacity for leading tomorrow's rapidly evolving world. With accomplished guests from business, neuroscience, psychology and more, the Evolving Leader Podcast is a call to action for deep personal reflection and conscious evolution. The world is evolving – are you?
A little more about the hosts:
New York Times bestselling author Jean Gomes has more than 30 years' experience working with leaders and their teams to help them face their organisation's most challenging issues. His clients span industries and include Google, BMW, Toyota, eBay, Coca-Cola, Microsoft, Warner Music, Sony Electronics, Alexander McQueen, Stella McCartney, the UK Olympic system and many others.
Award-winning leadership development specialist Scott Allender has over 20 years' experience working with leaders across various businesses, including his current role heading up global leadership development at Warner Music. An expert practitioner in emotional intelligence and psychometric tools, Scott has worked to help teams around the world develop radical self-awareness and build high-performing cultures.
The Evolving Leader podcast is produced by Phil Kerby at Outside © 2024
The Evolving Leader music is a Ron Robinson composition, © 2022
BONUS: Working in AI with Damian Lowe
In this bonus episode of The Evolving Leader, host Jean Gomes speaks with Damian Lowe, a senior software engineer at Symphony AI. They discuss how many of us have moved beyond the initial reactions to services like ChatGPT towards a current phase focused on experimenting, learning and building. Damian shares that for programmers, AI tools like large language models (LLMs) are already providing a tangible 10–20% productivity gain for tasks like code suggestions and reviews, noting that both the technology and human adaptation are still maturing. However, he adds that while AI excels at basic or medium tasks, its performance degrades quickly at higher complexity levels, often producing "nonsense", and he emphasises that effective adaptation requires not just better prompting but a crucial understanding of the difference between how machines "think" (based on probabilities) and how humans think.
Looking ahead, Damian is optimistic that within three to five years AI could make many jobs significantly easier, potentially reducing "grunt work" and fostering more creativity through effective human–machine collaboration. He underscores the importance of leaders helping teams understand AI's capabilities and limits, encourages experimentation, and stresses the need to maintain critical thinking and human skills alongside AI adoption.
Other reading from Jean Gomes and Scott Allender:
Leading In A Non-Linear World (J Gomes, 2023)
The Enneagram of Emotional Intelligence (S Allender, 2023)
Social:
Instagram @evolvingleader
LinkedIn The Evolving Leader Podcast
Twitter @Evolving_Leader
YouTube @evolvingleader
The Evolving Leader is researched, written and presented by Jean Gomes and Scott Allender with production by Phil Kerby. It is an Outside production.
Jean Gomes: Most of us, I'm sure, felt the compulsive reactions and feelings when ChatGPT and other seemingly magical AI services were launched a couple of years ago. Excitement, anxiety, fear, overwhelm, doubt, scepticism, even denial were dominating leadership conversations all around the world. Now, as we enter a new, hopefully calmer phase of experimenting, learning and building with these new capabilities, we're settling back into recognising that, as with every new disruptive technology, leaders' first priority is to ask the right questions of themselves, their teams and their organisations. In this show, we talk to a highly experienced software engineer, Damian Lowe, who works in one of the world's leading AI enterprise companies, about his thoughts on those questions. Tune in for an important conversation on The Evolving Leader. Welcome to The Evolving Leader, a show born from the belief that we need deeper, more accountable and more human leadership to confront the world's challenges. I'm Jean Gomes, and today I'm feeling really open and energised, because on today's show we are joined by Damian Lowe. We've had several guests talking to us about the impact AI is having and will have in our world, but today we're talking to someone who's right in the middle of the work. Damian is an experienced senior software engineer working at Symphony AI, a fast-growing company that provides enterprise-wide solutions to corporates using generative and predictive AI. I met Damian last year and we exchanged thoughts about what it means to be human in an automated world, and I thought he'd be a brilliant guest for The Evolving Leader. So Damian, welcome to the show. How are you feeling today?
Damian Lowe: Thank you very much, Jean. It's a pleasure and an honour to be here. Primarily, I'm just feeling excited. This is a fascinating area. It's new – although it's not new, its impacts are obviously getting bigger and bigger at the moment, in the 2020s. So yeah, just very excited to explore all these areas.
Jean Gomes: Wonderful. So we have talked at a fairly conceptual and strategic level about AI on the show before, and I'd love to get the programmer's perspective. And remember, I represent the part of the audience who are usefully ignorant on this topic, so break it down for me in simple terms. As a starting point, can you share how AI has shaped your career? From where you started at the beginning of your career to today, how has it changed how you think about things and the nature of your work?
Damian Lowe: Yeah, absolutely. I'm not sure how many people understand how long AI has actually been in use out there in the corporate world, and probably in governments as well – in the world in general. As you're probably aware, AI – machine intelligence, machine learning – has been an area of research for many decades, way back into the 20th century, and that's reflected in a lot of the science fiction books and films of the 20th century that were exploring those ideas. So we've been trying to come up with some sort of machine-level intelligence for a long time, but up until, I think, really the 2010s there were many, many blocks to actually making it work and making that much of a difference. A lot of those were compute-power hardware blocks. But – I'm being rough here – around about the beginning of the 2010s there were some crossover points in terms of affordable compute power for companies to harness, and much of that was also moving into the cloud, so companies and organisations didn't need to be doing things locally; they could offload the processing to companies that specialised in that. The result is that what is now called AI – it wasn't called that so much in the earlier days – has gradually infiltrated into society. A lot of things like recommendation engines: different television and streaming platforms – how do they know what they think you're going to like to watch next? That's a form of AI and machine learning. Same sort of thing for retail purchasing recommendations, and all sorts of analytics and business intelligence. So it's actually been going on for a long time.
And one could argue that AI goes right the way back to the start of programming, but I didn't see it much until roughly the end of the noughties and the beginning of the 2010s, when I started to be involved in various ways with what would be called predictive AI and analytics, as data volumes started to scale up – we're talking the early 2010s – looking at large statistical models run on large data sets, pulling out insight. That was particularly when I was working on trying to counter financial crime; that was one of the big areas I was involved with. And it's just scaled up and up and up. Nowadays I work for a company, as you know, that specialises in AI for the enterprise. We use AI to help ourselves write code, to write better AI. It's everywhere now, and we use it for research or just to have a chat.
Jean Gomes: How's it changing your job? I mean, how do you think about your work now compared to when you started out?
Damian Lowe: So again, if I look at the evolution: when people say "oh, I'm going to Google that", they've been using AI for years, just in the fact that you can search for things. If I think back to the dark ages when I started programming professionally, Google did exist, but it was quite primitive. Nowadays I can just go on to one of these LLMs, like ChatGPT or Microsoft Copilot – I certainly tend to use the GPT models in preference to search engines all the time now – because I want to have context-aware conversations about what I want to search for, and because I can refine my search in real time, in context, with the machine. I can say, "oh, that's not quite what I meant, I meant this." Just that is a huge step forward from typing a search prompt into Google, where if you didn't write the prompt very well, or if something relevant wasn't available, you might just get nonsense. So that is a time saver in and of itself, and that's just the beginning. As a professional software developer, I can actually get the machine to start suggesting code, reviewing code, suggesting tests – doing all sorts of things that are very useful and potentially time-saving. I use my words carefully.
Jean Gomes: Have you got any sense of, you know, if there was a Damian Lowe pre-ChatGPT versus what it can do now to help you with your productivity – is that 2x, 5x, 10x potential? What's your productivity like compared to without it?
Damian Lowe: I'd love to tell you that it was 2x or 5x. Realistically, I don't think it is. I think there has been an improvement. We are actually trying to measure that in our company, which is itself proving quite difficult, but we have gained some metrics, and we do reckon there's some improvement – for programmers, and not just programmers, but across the board. But it's probably more like 10 to 20% at the moment. I don't think that's because it's not very good; I think it's because it's not mature yet. The technology is not mature, and our human response to the technology is not fully adapted either.
Jean Gomes: That's really interesting, because my understanding of this is out of step with that experience. You read news about the amount of programming that's now done automatically versus programmer work, and you hear these huge numbers, so it sounds like that's not quite the reality. It's a couple of years since ChatGPT was launched and ignited public awareness, and all the executives we were working with got very fired up in lots of different ways. What's your sense of where we are in the hype cycle, then? What impact has it had – not just in terms of productivity, but the awareness part of it, the human and the machine side of it?
Damian Lowe: If one sets a basic or perhaps medium-complexity programming task – or indeed various other technical or research tasks – to these GPT models at the moment, they will often come back with a perfect or very good answer quite quickly, which can potentially save a lot of time. That's excellent. But a running theme I've seen amongst our software developers and other professionals involved in complex areas is that there's some sort of complexity threshold, and if you go beyond that, results can change from being perfect or good to being mediocre at best, and sometimes much worse – and the change can be very quick. I think there are reasons, which are increasingly becoming understood, as to why that's happening: in the way these models function, in the way they're trained and in what they're trained on. There are quite a lot of factors. But there are clearly lots of potential ways in which this technology could save a lot of money and time. I don't know if the C-suite always understands that the road to get there might actually make some things take longer before they take less time again, because we've got work to do on the, quote, machine side and on the human side.
Jean Gomes: Talk to me a bit about the human side. What do people – not just programmers, but people in general – need to do to start adapting, to be in some sort of partnership with AI?
Damian Lowe: To some extent, one could simply answer "become better prompt engineers", and that's sort of true. There is training available for doing that, it's worth doing, and for some organisations it's probably worth paying money to get training like that rolled out. But I think there's more than that. Again, I think understanding how the machines, quote, think versus how we think is very, very useful. Something the machines, and the way they interact with humans, have taught me is that a lot of humans don't understand the way they themselves think very well in the first place. So when they're presented with something that thinks in a way that's better in some ways and potentially worse in others, they don't know what to do with that: I've asked a simple question, why didn't I get a simple answer? Why didn't I get a factual answer, or a consistent answer? Asking better questions, working on one's own thinking and communication style, improving mental clarity, and getting a better sense for what the machines are doing and what they're not doing, as well as just experience using them – all of those things help. I certainly get a lot more out of using GPT models than I did two years ago, when I'd just come across them. I know much better what they can't do, so I know what not to waste my time on.
Jean Gomes: So give us a sense of a few of those things. What should we not be wasting our time on?
Damian Lowe: It depends on the specialist area, and this is one of the things I made sure to check before this conversation. As I suspected, it's definitely the case that the original training data set matters. So – GPT; I don't want to get hyper-technical, but this is very relevant – GPT stands for generative pre-trained transformer. Generative: yes, it can generate new text, new ideas potentially, or assimilate other ideas in a unique way. Transformer is the technical mathematical modelling that's going on, which we don't particularly need to worry about as end users. But pre-trained is very important. Although the models are adaptive to some extent, the bulk of their learning is done in the early stage of their creation. They can, so to speak, be re-pre-trained successively, but it takes a long time and a lot of money, because that's what takes the huge compute power – so it's not happening every day; it might happen every six months or a year. The rest of the changes happen in smaller chunks; it's more like tweaking than fundamental retraining. So understand: are you working with a generalised GPT model? Your Microsoft Copilot, your Google Gemini, your ChatGPT are trained, roughly speaking, on a cross-section of the entire internet at a period of time, which might be one or two years ago. So they don't necessarily have specialist knowledge, though you can enhance them: you can add extra knowledge sources at different stages in model development. If you're trying to ask about a very specialist area where there's little or almost no knowledge available on the internet at large, you may get anything up to nonsense out of the model – but it'll display the same level of confidence as with everything else. I think a lot of us have seen that. Be aware of it.
Now, if it's something where you could probably have got something useful from a general Google search, you'll probably be okay. But specialist knowledge is a different situation. I think one of the major issues with these models as they are at the moment – and I'm sure a lot of these things will continue to improve – is that they don't know what they don't know. They don't have a sense of that, because the entire way they work is simply to find the highest-probability answer based on all the information they have: their pre-training on a huge amount of data, plus whatever extra bits have been added. They just compute probabilities, find the highest-probability continuation, and start producing output based on that.
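[Editor's illustration] Damian's point about probability-driven output can be sketched with a toy model – my own example, not anything from the conversation. A tiny bigram table always picks the most frequent continuation; real LLMs model vastly longer contexts with transformers, but even here you can see the "highest probability, no sense of not knowing" behaviour he describes:

```python
from collections import Counter, defaultdict

# A toy "training corpus" standing in for a model's pre-training data.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which (a bigram table).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_likely_next(word):
    """Return the highest-probability continuation, or None for a word
    never seen in training: the model has no notion that it 'doesn't know'."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

print(most_likely_next("the"))    # "cat" follows "the" most often in the corpus
print(most_likely_next("xyzzy"))  # None: unseen input, no knowledge at all
```

A real model never returns the equivalent of `None` – it produces its best-ranked continuation with full fluency either way, which is exactly the confident-nonsense failure mode discussed above.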
Jean Gomes: Well, the human tendency to project human values and humanise the technology means that we start to think of it as a human being, and then expect things it can't possibly do. Do you notice that at all in your own dealings with it? Do you find yourself sometimes falling into that trap?
Damian Lowe: Actually, not that much – but that's just because I've got a very keen sense of what it is I'm dealing with. I've spent ages deliberately trying to trip it up and find its limits, and I keep a very close tab on how it evolves. I don't expect the general public to do that. But this business about making specialised models is really important, because within one's own company or organisation there are ways, not that difficult, to hugely enhance the specialist knowledge a model has. You can use the off-the-shelf, general internet-trained knowledge and add your own knowledge sources from your own organisation, and that could potentially unlock an awful lot more value.
Jean Gomes: What's the starting point for how you go about doing that?
Damian Lowe: Identify how much knowledge you have actually got inside computers somewhere – is it actually written down and saved? In some areas it might be hard-copy books or printouts of things; get them scanned in. OCR is very good nowadays, so one can scan large amounts of documentation. But a lot of stuff is in a computer somewhere: find it. It might be worth paying for some specialists to actually grab that information and get it into a form in which it can be added to GPT models. But basically, if the information is already in a database, it's not that difficult to add it to GPT models.
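[Editor's illustration] The "add your own knowledge sources" idea Damian describes is usually implemented as retrieval: find the most relevant internal passages and paste them into the model's prompt. The sketch below is a deliberately minimal stand-in – production systems use embedding models and vector databases rather than bag-of-words similarity, and the passages and function names here are invented for the example:

```python
import math
import re
from collections import Counter

# Stand-in internal knowledge base: in practice, your scanned documents,
# wiki pages and database records, chunked into passages.
passages = [
    "Refund requests over 500 GBP require sign-off from the finance team.",
    "The VPN must be used when accessing customer records remotely.",
    "Quarterly risk reviews are chaired by the compliance officer.",
]

def vectorise(text):
    # Crude word-count vector; real systems use learned embeddings instead.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(question, k=1):
    """Rank passages by similarity to the question; the top hits would be
    inserted into the model's prompt so it answers from company knowledge."""
    q = vectorise(question)
    ranked = sorted(passages, key=lambda p: cosine(q, vectorise(p)), reverse=True)
    return ranked[:k]

print(retrieve("Who signs off refund requests?"))
```

The model itself is unchanged; only the prompt is enriched, which is why this route is so much cheaper than the "re-pre-training" Damian mentions earlier.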
Jean Gomes: What's the consideration around the security of that? If you put your own intellectual property, as a company or as an individual, into one of these models, how confident are we that you retain control over it – that it doesn't just get pulled back into the mothership at some point?
Damian Lowe: Yeah, that's a great question – a very important question. And the biggest thing there is likely to be how much of the model you are going to pull in-house. It is now possible to take some of the open-source models that are similar to ChatGPT – they're not quite as good, but they're similar, and they're freely available. Those models have already had their large pre-training done, so you can pull them in-house and, entirely behind closed doors, add your own data sources to them, kept entirely outside the cloud and the public internet, and therefore secure. Or you could do a halfway house and still use cloud compute for a lot of this – which you might need, because you can't afford that much compute in-house – but pay for quite a high level of security in the cloud. We actually do that sort of thing at Symphony AI: we have our own in-house models and in-house model augmentation, but we're still hosting in the cloud – very secure cloud. That would then mean it's pretty much the same security as any other corporate or organisational IP that uses cloud hosting.
Jean Gomes: Okay, that's really interesting. Can we change the conversation a little and open up your thoughts on what AI's proliferation is going to lead to, perhaps starting in the workplace? What might work look like in three to five years' time, if companies like Symphony AI are successful in scaling their offerings and getting adopted as the kind of de facto standard for how work is done?
Damian Lowe: This is very hard to predict, but in terms of a positive best guess, I do think that within five years a number of jobs could get significantly easier, particularly where you've got high complexity. Even if the machines can't do all of it for us, their ability to augment our capabilities is, I think, going to increase more and more – again, as the models get better and as humans understand how to exploit them better and critically collaborate with them better. Best-case scenario, we get to do less grunt work, be more creative, and get more ideas from the machines.
Jean Gomes: We're starting to see some interesting research being done in academic circles around the impact copilots are having at different skill levels, and some of it is counterintuitive in terms of where productivity gains can be made – in lots of different parts of the economy, from low-skill workers right up to the highest forms of knowledge work. But at every level, you've got to have this self-awareness of what you're working with and what it can do. What are you seeing in terms of the challenges of helping leaders enable their workers and their teams, to ensure this becomes a productive relationship?
Damian Lowe: It's a great question, and it's something we've partially covered already: this business about "don't think that AI is human". It's not human. It kind of thinks, but not in the same way we do – better in some ways, worse in others – and it's changing. And also my point before about how, depending on complexity levels and the relevant knowledge in the model's pre-training, the AI's response can go from being very useful and time-saving to producing nonsense in an absolute flash. That can be a surprise for people who haven't come across it and don't know what to do about it, or when to just abandon it. So: getting more experience with the tools, very important. Asking better questions, very important. Finding the limits for your domain – you, or perhaps some leaders representing you, depending on who you are, what you're trying to do and the complexity of the task. I think it's worth investing some time in testing the limits of the models you're going to use in a domain. Perhaps a small group of people could do that up front when a particular model with particular pre-training is being brought into an area; that might save a lot of time for the general working population trying to use it, and then training could probably be rolled out to some extent. I have experimented with this myself: these GPT models can train us on what they can do, if one asks the right questions. That is possible to an extent. And I see it as back and forth: I ask the machines questions, and I ask them to ask me questions, so they learn from me and I learn from them. That's the way I see it, which is part of my point about collaboration. I know a lot of people don't do that, but I've found a lot of use in it, because they have a certain amount of self-knowledge.
And they have a lot of knowledge about a lot of things. If it's an LLM – a large language model, which is what all the big ones, your ChatGPT and so on, are – it has been trained on at least the whole of the internet at one point. In a lot of areas, they know far more than I will ever know in my entire life. But they are knowledge-strong and wisdom-light at the moment. So work out what they know and where they'll break; then you can avoid problems.
Jean Gomes: Yeah – so, blowing this question up even further, to a more philosophical and moral level: one of the things we're really interested in on The Evolving Leader is how we make work more, not less, human, particularly as automation and these technologies start to grip. If we think about what's happened with the proliferation of social media and other technologies, people tend to react to them and adapt with unusual, unforeseen consequences. People's spatial reasoning – their ability to read maps, know where they are, and pay attention to the journey they're on – has switched off because of the GPS systems in their cars. Similarly, we've become more prone to multitasking, and therefore feel worse, because we're constantly shifting from task to email and back to task. How do we avoid doing that – how do we make sure we don't lose our souls and our minds as we start to defer to AI? What are your big-picture thoughts on that?
Damian Lowe: Take time away from machines and tech. Take some time to walk out in the woods and, you know, look at the sky. If you're into meditation, mindfulness, things like that, I think that's an excellent foil in general – certainly something I do regularly and have done for a long time. Or something like walking, running, swimming – exercise, which is good for us anyway. Playing a musical instrument: something that is really engaging, that has got nothing to do with technology and reminds us of ourselves. I think being involved with things like that is good for our health.
Jean Gomes: So what else do you think about in terms of the future of AI? What else should we be talking about?
Damian Lowe: At the moment, I see what we're doing as still building better tools. Fundamentally, they're still knives and hammers. They can enable us to do more things in the world, to build things and solve certain problems, and potentially make things better for people in society in some ways. But like all tools, they have dangers and downsides as well. Knives are great for cutting up your food, but you can cut your own hand off with them – and there are equivalents to all of that here. Great power and great responsibility: all of that still applies. I don't think there has been an AI revolution; I think there is a continuing evolution of AI and machines in our world in general. It's a great headline to say machines are suddenly sentient and there's been a step change in everything. It looked like that might have happened for a while, but not really. That doesn't mean it might not happen – but it might happen so slowly we never quite notice. I think they can make our lives better in a lot of ways, but we need to remember our humanity, and remember that we're animals, like other animals.
Jean Gomes: If you were talking to a group of 12-year-olds at school about what you've learned and what you think they need to understand – they're just embarking on the later stage of their education, their first jobs – what would your advice be to them?
Damian Lowe: Stay open. Stay curious. Learn from machines, just as they learn from you. Keep focused on your heart as well as your head – it's very easy to lose that, and to lose our connection to nature. And something I think about from time to time, because I did this when I was a kid, before a lot of this took off: read Isaac Asimov's books about robots. They predicted an awful lot of this stuff – the challenges and the ethics and the moral dilemmas. I think they're very relevant today, more relevant in some ways than when they were created in the 20th century. I didn't realise that what I was reading as a kid would actually happen in my lifetime.
Jean Gomes: Yeah, it's interesting going back to the Foundation series and thinking about how that is actually playing out.
Damian Lowe: Yeah, absolutely – there were some deviations from that that are important. But I obviously work with different generations in my job, and the younger generations coming in have got all sorts of challenges, including economic challenges that I didn't have. One of the things I've said to new graduates coming in is: you have many more avenues to learn now. Going back to my point – not only can you research information faster using AI, but you can get it to teach you and test you. I don't think enough people really understand how far that can go, and I suspect that will probably change quite soon. We've already had a revolution, with far more training available online, sometimes of better quality than even at universities. I think that will continue – information democratisation, if that's the right word. I think it can become easier for everyone to learn.
Jean Gomes: I really like that idea – you planted this thought about being in a two-way relationship with the AI. A lot of the time we put the questions in and expect the answers to come out, but actually asking it to ask you questions – that's a new thought for me. Can you give us a sense of the kind of things we might start to ask it to ask of ourselves?
Damian Lowe: So, from my own investigations: with anything I want to learn about, I'll ask it some questions to get a sense of how much it knows and whether it can solve some small, useful problems. Then I'll say, right, test my knowledge – write me a test, write me an exam to test my knowledge – or we'll co-design something. That type of thing can work. Another thing one can do is ask it to present knowledge at different levels. You can say: give me a ten-year-old schoolchild's version of this; give me a 15-year-old's, a 20-year-old's, a university undergraduate's, a postgraduate's – whatever. You can scale up and down what it explains to you, and you can flip that switch and get it to test you in those ways too: you can get it to scale knowledge up, or scale its expectations up, and you can get it to design tests and examples for you. If you get more specific and more detailed in your questions, it will typically get better in its answers – so long as it's not hallucinating. I think if you've got a completely blank slate in an area, it's quite hard to get the best value out of the models, because – as with traditional search engines – you don't have any clue what questions to ask. It's useful to have some initial training: if you really don't know an area, try to get some initial training in a more old-fashioned way, and then start working with the machines once you've got enough base context to know what to ask. I think that's a useful point, because in areas I know quite a lot about, I can refine my knowledge and research very quickly – I can jump out along my knowledge graph and follow along with it and collaborate with it. But if I don't know where to start, then I just don't know where to start.
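[Editor's illustration] The "scale it up and down, then flip the switch" technique can be sketched as prompt templates. Everything below is a hypothetical sketch: `call_model` is a stub standing in for whatever chat API you actually use, and the wording of the prompts is the only real content:

```python
# Levels at which the same topic can be requested, per the idea above.
LEVELS = ["10-year-old schoolchild", "undergraduate", "postgraduate"]

def explain_prompt(topic, level):
    """Ask for an explanation pitched at a given level."""
    return f"Explain {topic} at the level of a {level}."

def quiz_prompt(topic, level, n=5):
    """Flip the switch: ask the model to test you instead of telling you."""
    return (f"Write me a {n}-question exam on {topic}, pitched at {level} level. "
            "Ask one question at a time and wait for my answer before continuing.")

def call_model(prompt):
    # Hypothetical stub; replace with a real chat API call in practice.
    return f"[model response to: {prompt}]"

for level in LEVELS:
    print(call_model(explain_prompt("transformer models", level)))

print(call_model(quiz_prompt("transformer models", "undergraduate")))
```

The point is the shape of the interaction, not the code: the same topic is requested at successive levels, and the final prompt reverses the direction of questioning, exactly as described in the conversation.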
Jean Gomes:Whereas with a human being, if you ask them to give you a quick-start guide, to tell you everything you need to know, the human being has got a context for that, and they've got an opinion, a judgement and a feeling about who you are. And so that information comes with empathy and consciousness and self-awareness and so on, whereas the machine doesn't have any of that at all. So it doesn't… well, that's not totally true, though, is it?
Damian Lowe:That's such an interesting point. I think it's accurate to say that the machine doesn't, technically, have its own view or consciousness or ethics or whatever. But it does have what we trained it on, and that's the point. So the company or organisation that did the pre-training will entirely bias the model, possibly for its whole lifetime. So it has bias. And bias is a huge topic we haven't really talked about, have we? Bias is a massive issue, and a lot has been researched and written about it. There's bias in the internet in general. If one just browses around Wikipedia, which is a great source of knowledge, and I'm a huge fan of Wikipedia, it's still quite biased. It has a very mainstream view; it will simply block out certain unfashionable ideas, whether or not that's really correct. So there's a mainstream bias. And I see the same mainstream bias in the large GPT models, because if you ask questions of the general internet, it will go to Wikipedia for a lot of stuff, and that's what they were trained on. So is it really that different? I'm not saying that's right or wrong. There's the so-called wisdom of the crowd, right? A mainstream bias is probably right most of the time, but there'll be outliers that people miss, and that's where the next theory of everything comes from. So it still comes back to one's own critical thinking, testing oneself, being creative, being open-minded; those are as important as ever. That hasn't really changed, because the machine is the tool. So I think of the models as a representation of human knowledge, because that is kind of what they are, right? It's nobody else's internet. We built the internet, and then we trained the machines on the internet, and then they started to train themselves a bit, but there isn't any other information coming in.
Jean Gomes:No. And to draw the thought process a bit further: if you were talking to somebody you admired from a knowledge point of view, but you knew that person had zero self-awareness, you would adapt your way of…
Damian Lowe:You would.
Jean Gomes:They would never be that extreme, but you would judge their answers differently than you would with somebody with a heightened sense of self-awareness, who is naturally processing their thoughts in the context of you: they're paying attention to how you might respond, and so on. It would be a different experience. So I'm just pushing at this: what's the skill set that we need to adopt in shifting our thinking around relating to something that has no self-awareness? That's quite an interesting challenge, when we start to project our human values onto something that we might become so dependent on in the future.
Damian Lowe:Absolutely. And my initial answer is, first of all, know ourselves better, because that's always important: self-knowledge, self-awareness, for sure. And then, as I previously mentioned, understand how these models work. You don't have to have hyper-technical knowledge; I think I've outlined some of the general principles and ideas. Think of them as knowledge amalgamations, as predictive creative engines. It's called generative AI, but whether 'generative' is actually an accurate term I'm not totally decided on, because they do generate things, but it's not the same as a human artist. I mean, there was a lot of fuss, wasn't there, in the early days when we had all this image generation coming out: look, they've produced incredible works of art. Until people started to realise that, well, actually, what they've done is just smushed together a lot of great human works of art in a way that is unique, but it's kind of cheating. They didn't actually create something new; they amalgamated a lot of things, which is nowhere near the same thing. They all…
Jean Gomes:have the same kind of eerie quality that makes you kind of feel slightly uneasy.
Damian Lowe:Yeah. And I've seen some amazing paintings that were generated, you know, following a prompt to generate images with DALL·E and that sort of technology. Draw me a picture of people in this room, and they're doing this, and they're holding these objects, in the style of such-and-such a human artist, and it comes up with something amazing. But one of the people in the picture has six fingers on their hand. It's almost beautiful what it's produced, but it has no comprehension of what a hand is; it has no reference to the material world in that sense. I don't actually know where things have got to, because there was a huge amount of fuss about that sort of thing two years ago, and it all seems to have disappeared from the mainstream media. I'm sure there's a lot of research still going on. I haven't looked so much into the image generation side of things, but a lot of the technology is quite similar to the language generation, not exactly, but quite similar. So I think the situation will be at least similar, from what I know.
Sara Deschamps:If the conversations we've been having on The Evolving Leader have helped you in any way, please head over to Apple Podcasts and leave us a rating and review. Thank you for listening. Now let's get back to the conversation.
Jean Gomes:What's your next challenge? What are you excited about? What are you learning and developing in your work?
Damian Lowe:With AI specifically, I'm interested in the improvements that are in the works with the models, particularly their ability to do better abstract reasoning, because that's still quite a deficit area in the models at the moment.
Jean Gomes:Tell us a little bit about that.
Damian Lowe:So one of the areas where I've seen useful outputs do that flip I described, from being really useful to being nonsense, is where they might have kept up with the general context of the question insofar as they understand it, but they simply fail to build the equivalent of a mental model, in the way humans do, for a complex problem. So it suddenly starts saying things that are literally illogical. It starts off being really clever and really insightful, and then you end up thinking, oh, you just repeated some book knowledge, and when I went past the end of the book and said, well, I'm trying to design something new, it just talks properly nonsense. I'm not talking about it being a bit wrong; it starts making statements that are patently illogical. And then you say to it, that doesn't make sense, that is illogical, can you see why? Oh, yes, good point. And then it just repeats the same wrong answer. I've had that problem multiple times with complex problems, less now, because I know not to even bother, but it's kind of a shame. I'm pretty sure from what I've seen of the research that that is likely to improve. Part of it will be brute force, just having models with more parameters and stuffing more data in, but it's not as simple as that; we need more nuance. That's something I'm very interested in: how can we make the models smarter, not just stronger? Because there's been a lot of 'just put more data in them, just put more compute power in them', and it's looking like there may be limits to how much they can improve that way. We might hit a plateau where they don't actually get that much better for years, or we might not. I'm very interested to see what will happen with that.
Jean Gomes:Yeah, there have been some predictions on that, haven't there, that we're fast approaching that point. But as we know with these kinds of things, you can never be sure.
Damian Lowe:No, you can never be sure. And it's so big; there are so many people involved with it all over the world, doing so many things, as well as so many machines now contributing in their own way, making things better and/or worse. So yeah, it's a fascinating area to watch.
Jean Gomes:Well, Damian, this has been a wonderful conversation. I think those listening to the show will have gained some new perspectives, and probably increased their awareness of the opportunity to improve their interactions with AI, and to start building this mindset and skill set of how to coexist successfully, or at least more productively and satisfyingly, with this new technology. So we're really grateful for the time you've spent talking to us, and we wish you well in your endeavours to make this technology even better for all of us. Thank you, and listeners, until the next time: remember, the world is evolving. Are you?