The Evolving Leader

Being Human in the Age of AI with Susie Alegre

Susie Alegre Season 7 Episode 7

In this episode, we’re delighted to welcome Susie Alegre back to The Evolving Leader. Susie is a leading human rights barrister at the internationally renowned Garden Court Chambers. She has been a legal pioneer in digital human rights, in particular the impact of artificial intelligence on the rights to freedom of thought and opinion, and she is also a Senior Research Fellow at the University of Roehampton.

Artificial intelligence is starting to shape every aspect of our daily lives, from how we think to who we love, and in her latest book ‘Human Rights, Robot Wrongs: Being Human in the Age of AI’, Susie Alegre explores the ways in which artificial intelligence threatens our fundamental human rights – including the rights to life, liberty and fair trial; the right to private and family life; and the right to free expression – and how we can protect those rights.

This is an important listen for us all.

Other reading from Jean Gomes and Scott Allender:
Leading In A Non-Linear World (J Gomes, 2023)
The Enneagram of Emotional Intelligence (S Allender, 2023)


Social:
Instagram           @evolvingleader
LinkedIn             The Evolving Leader Podcast
Twitter               @Evolving_Leader
YouTube           @evolvingleader


The Evolving Leader is researched, written and presented by Jean Gomes and Scott Allender with production by Phil Kerby. It is an Outside production.


Jean Gomes:

Kevin Kelly, the founding editor of Wired magazine, told us on the previous show that what technology wants is constant attention and full transparency, and if you give it what it wants, you get in return a more personalized experience. Except that isn't the whole story. In this value exchange there are a host of hidden or less obvious things that we're trading: unprecedented access for advertisers to the most intimate parts of our lives, and a lifetime trail of our habits, interactions and health data. The unforeseen consequences of having a digital twin owned by tech companies who don't have a duty of care to us and our children have become painfully apparent. Kevin Kelly first observed this trade and its less obvious costs at the birth of social media, but now, with AI, the moral implications of What Technology Wants are infinitely more important. So how do we regulate or curtail the potential for machine intelligence to give a handful of companies and people dangerous control over our lives? In this show we have the pleasure of Susie Alegre returning to discuss her new book, Human Rights, Robot Wrongs, in which she argues that we already have the legal framework to manage AI companies using the Universal Declaration of Human Rights. So tune in to an important conversation on The Evolving Leader.

Scott Allender:

Hi folks. Welcome to The Evolving Leader, the show born from the belief that we need deeper, more accountable and more human leadership to confront the world's biggest challenges. I'm Scott Allender.

Jean Gomes:

And I'm Jean Gomes.

Scott Allender:

Jean, you and I have been talking a lot about AI with our guests, from the fears and threats to the opportunities it presents. One thing we talk quite a bit about, and a critical component of the mission at your research-based consultancy Outside, is that in a new economy and society shaped by AI and exponential technologies that are transforming every aspect of our lives, our passion is to help leaders become more human. So instead of doing the standard feelings check-in we often do on the show, I thought maybe we should start with that perspective. For our audience's sake, I'd love to get your thoughts: what do you mean when you talk about being 'more human'?

Jean Gomes:

Well, it's an interesting question, and I keep asking it of myself, because as somebody who has wholeheartedly embraced technology throughout my life, I've noticed that it comes with a whole bunch of costs. Since the internet and social media and so on, we've been seeing a steady rise in those costs: to your performance, your ability to think, your ability to connect with others. That's a kind of reaction to technology. Those are some of the downsides, and they've aggregated up into some very big costs around people being depressed and disconnected, and all the horrors of children engaging with the wrong content and so on. So the 'more human' idea is very simple: in an automating world, we have to ask ourselves the question, what are human beings for? A simple way of getting to that is that there are things that only human beings can do, that technology won't be able to achieve, and they are simple. They are our abilities to make sense of situations in complex and uncertain environments, to reason, to be creative, to make decisions, to be empathetic, to form human connection. And those are the things we're actually spending less time focusing on; we're becoming more in service to the technology, to the point where a lot of people's jobs are not to do those things at all but to move bits of email around. So that's what I mean by more human. That's our source of competitive advantage in an automating world.

Scott Allender:

Yeah, I 100% agree. And the reason I ask you this is because today we're going to be talking about some of the implications of AI on our humanity, particularly human rights. In this show, we're delighted to have Susie Alegre back. She's been on our show before, and it was an amazing interview, and we're going to discuss with her the pressing topics in her new book, Human Rights, Robot Wrongs: Being Human in the Age of AI. For those not familiar with Susie's work, she is an international human rights lawyer and author, originally from the Isle of Man, whose focus in recent years has turned to technology and its impact on human rights. As a legal expert, she has advised Amnesty International, the UN and other organizations on issues such as counter-terrorism and anti-corruption. Her first book, Freedom to Think, looks at the history of legal freedoms around thought and the pressure they're coming under, and I really encourage listeners to go back and listen to episode 22 of season five, where we talk to her all about that. In her new book, she looks at how AI threatens our rights in areas such as war, sex and creativity, and what we might do to take control. Susie, welcome back to The Evolving Leader.

Susie Alegre:

It's great to be back.

Jean Gomes:

Welcome to the show again. Susie, how are you feeling today?

Susie Alegre:

I'm feeling a lot better than I was last time. I think last time I spoke to you, I was coughing my way through, so hopefully this time it'll be a bit clearer.

Jean Gomes:

Excellent, it's good to hear that. You start the new book with a very powerful emotional reaction that you had to the launch of ChatGPT and the impact that AI-driven systems are having on us. Can we start with this as the impetus for writing Human Rights, Robot Wrongs?

Susie Alegre:

Yeah, it was early 2023, I think. For me, as for a lot of people, particularly a lot of creatives, there was a sort of overwhelming feeling of dread that the world did not understand what the point of humanity, and particularly creative humanity, was. Reading headlines about how you don't need writers anymore, you can just get ChatGPT to write your novel, or get whatever image generator you want to create the perfect picture, for me just felt really profoundly depressing. I thought, what do people not understand? If you are a creative, the whole point of your life and your drive is the work to create things. It's the inspiration, it's the emotion. And as I said, the work is the point. It's not the finished product. I've been writing stories and poems and novels and now non-fiction books since I was tiny, regardless of whether they were ever going to be published, regardless of what the money was for any of it – and interestingly, for creatives at the moment, there's a steep decline in the financial rewards for creativity. So combining that question of a steady decline in the appreciation of creativity with this blanket 'oh, we don't need human creativity anymore' just pulled the rug out from under me about what is the point. And not just the point of writing, but what's the point completely. I think a lot of creative people that I spoke to and read about were feeling that same thing, feeling like, does nobody get it? Does nobody value humanity anymore? That was really what first of all put me into a real funk. But then I dragged myself out of it in the way that maybe creatives and human rights lawyers tend to, saying, actually, I don't accept that, I'm not having it. And that is what turned me around to write a proposal for my own book, making sure on the way that the cover was not designed by AI and everything else. It then went into a whole minefield of checking contractual terms all over the place to make sure that it wouldn't be feeding the beast to destroy human creativity. But that was really the trigger for me. And I suppose it's something that, through history, creatives have gone through: these moments of despair that then give rise to intense moments of creativity.

Scott Allender:

What struck me in reading your book is that much of the debate on AI is distracting us from the accountability we should be placing on the leaders of tech companies, which you ground in the Universal Declaration of Human Rights. Can you talk to us about this?

Susie Alegre:

Yeah, I mean the Universal Declaration of Human Rights, which is now over 75 years old, set out this framework in the aftermath of the Second World War, really establishing what we all need to thrive as humans – what everybody everywhere needs. It includes civil and political rights, things like the right to life, the right to liberty, the right to a fair trial, but also economic, social and cultural rights: the right to work, rights to education, rights to health, even rights to work-life balance, that kind of thing, all grounded in this concept of human dignity. The Universal Declaration was, if you like, the first step towards codifying what we now see as international human rights law in hard laws, both at the international level and at the domestic level. And what human rights law does is protect us from the excesses and the kind of horrors that the world had seen in the run-up to and during the Second World War. It's a kind of guarantee that we cannot be treated in ways that undermine our humanity and dignity by our governments, but it also puts a responsibility on our governments to protect us from each other and from corporations. So while human rights law traditionally relates to governments – governments have to respect our human rights – they also have to protect our human rights from the actions of each other. And that's why I think it's very relevant when we see how technology is evolving and how it's affecting our societies in so many different ways: governments, when thinking about regulation, really need to be thinking about the liability of private companies or individuals in ways that prevent them from undermining our human rights.

Jean Gomes:

In this, you're pointing out that we don't need new laws so much as we need to enforce the ones we've got. You're pointing out there's this clamor that AI is a law-free zone. So what trap are we falling into here?

Susie Alegre:

There is this idea that is pushed, that this is some new uncharted territory, that law cannot rush to catch up, that law is too slow. Whereas what we're actually seeing is that many laws apply, including human rights law, which, once you get to a court – for example, in the UK – means the law is interpreted in light of the Human Rights Act or in light of the Equality Act. Whatever kind of law you're looking at, if it affects human rights or equality, it'll be read in light of those framework laws, if you like. We're sold this idea of exceptionalism, that effectively anything goes with technology because the law hasn't thought about it. But actually law, and in particular human rights law, evolves to meet changes in our society. It isn't redundant just because there's a new situation; it evolves to meet that new situation. And I think one of the problems we find is that that narrative can often distract us from applying the law and regulation as it exists. There are many areas of law that do apply, whether it's contract law, tort law, laws around liability, copyright laws – huge numbers of laws, all potentially informed by human rights law. A bigger problem, rather than the absence of law and regulation, is the question of access to justice. How do you, as an individual whose rights have been infringed, take a case against a massive tech company with extremely deep pockets? In many jurisdictions, that's not a realistic prospect. But what we are seeing is that legal cases are coming through, whether by regulators like the Information Commissioner's Office and similar data protection authorities around the world deciding that certain AI companies are unlawful and handing out massive fines or telling them they have to stop their business in their territory, or, for example, the Federal Trade Commission in the US being extremely active in using consumer laws to combat the excesses of tech companies. We are starting to see action. But action in the courts or through regulators is slow. It's not that the law or the regulation itself is non-existent; it's about application, and that often comes down to funding – putting money into making sure that our laws and regulations are effectively enforced, and thinking of creative ways to make sure that they meet the new challenges they're facing.

Scott Allender:

So as AI companies seek to find new ways to integrate their technology into our lives, you're pointing out that we need to be on alert for how they can dehumanize us. I was struck by the Sam Altman quote about replacing 'median humans'. Could you expand on that a little bit more?

Jean Gomes:

Are we median humans?

Susie Alegre:

I'm not exactly sure – median humans? I'm not sure; you'd have to ask my friends. Clearly Sam Altman doesn't see himself as a median human. And I think it's a very big question. The bigger question is this idea that AI will replace the average person in the street, the man on the Clapham omnibus, as we have in English law. Well, I think as humans we have to ask, why do we want technology that's going to replace the median human? Why are we voting to replace ourselves? Why are we choosing that? And I think that is a very, very big question: do we want to be replaced by technology? What exactly do we want technology for? Going back to the discussion you were having earlier about what humanity is, ultimately we are humanity, and we get to choose, through our democracies, how our society develops. Do we really want a society where our humanity is undermined? I think we are at a point right now where we can make those choices, where we can just say no to technology that is going to replace people in ways that are not actually helpful to us, even if it might make somebody a lot of money. We don't have to accept that.

Scott Allender:

I'd be curious to know if you see anything on the other side of that coin. I went to a TEDx talk recently where a lawyer was presenting on how, in the US, there's a real shortage of civil lawyers available to people with limited resources. If they end up in a legal situation and can't afford a lawyer, they just get no defense, and sometimes they get taken advantage of. So she was positing optimism around AI, not in terms of replacing lawyers, but in expanding the reach of lawyers to be able to give, at least at some point in the future, some legal assistance to people who couldn't otherwise afford it. Do you see anything in terms of hope, if we regulate it properly, enforce the laws properly, don't replace people, and stay cognizant of everything you're saying? Do you see any examples where there are reasons to be optimistic in certain professions and situations?

Susie Alegre:

Well, I think the legal question is a really interesting one. Obviously I'm a lawyer myself, and lawyers are definitely in the sights of AI for replacement, and there is a huge problem of access to justice and access to affordable legal advice for median humans – legal advice costs money, and lawyers have to eat as well. So there is a really big problem there. But replacing lawyers with something that appears to be giving you legal advice, something that looks credible but is actually just made-up nonsense from a predictor machine, does not really help access to justice. It potentially undermines it. And certainly the kind of models we're looking at, even if they improve radically, are by nature word-predictor machines. That's not the same as legal analysis. So I think it's a false economy to say, well, it's better than nothing. I'm not sure that it is better than nothing, or better than just asking someone you met on the street who looks like they might have a better idea than you have of what the law is. Lawyers are expensive, and I think it's a really big question: do we want word-predictor machines that will make the legal system faster but essentially meaningless, or are we prepared to put money into a legal system that will actually allow people to have access to justice? Now, having said that, at the lower level of legal disputes – consumer disputes – there may well be cases where AI can help resolve disputes by coming up with a reasonable solution that all sides would be happy with, that solves a low-level problem. So we may well find that. And similarly, in cases where it's about the money – where is the sweet spot on money for settling a case – that may be something where AI could be very useful. But in the areas which are really, really human, or where people's liberty or lives are at stake, I think it's highly inappropriate to be using systems that are, like I say, effectively random predictor machines, rather than having a genuine human with legal qualifications who is able to help people navigate those processes. So my concern would be that what you land up with is an even worse two-tier system, where people are effectively feeding rubbish into the legal system, which may well then pollute the whole legal system – it may start polluting case law – while the people who can afford the Rolls-Royce lawyers are still going to be able to afford the Rolls-Royce lawyers. So I feel like it's something that might make people feel like they're getting something without actually giving them something real.

Jean Gomes:

That's very interesting, isn't it? Because there's this huge hope around AI being able to substitute for a whole lot of human activities. But what we'll get is a synthetic version of them that, in some ways, might actually further disadvantage the very people it's meant to help.

Susie Alegre:

Yeah, I think that's right. And what you'll see is lawyers talking about law tech and how this is going to supercharge productivity, et cetera, et cetera. And then you'll see people saying, well, I use it as a first draft, but of course I know the law very well, so I can check it. Well, if you know it very well, why are you not just copying and pasting an analysis you did earlier, or writing it yourself? I don't see the benefit at all in asking a machine, which is potentially going to mess it up, for something you've then effectively got to edit and mark and double-check. I don't really see the efficiency in that, but it's certainly something that is being sold hard.

Jean Gomes:

Moving to another topic, which gets a whole chapter in your book: the very real issue of killer robots, once the preserve of science fiction. We now see them in various regions around the world, and we're particularly conscious of the drones being used in the Ukraine war. Can you talk to us about your thoughts on the development of autonomous killer robots on the battlefield?

Susie Alegre:

Yeah, I mean, autonomous killer robots, like you say, are the stuff of sci-fi, so even talking about killer robots has that 'we're expecting it in a movie' quality. And as you say, it is increasingly a space which is being developed, and these kinds of autonomous, semi-autonomous and ultimately completely autonomous weapons are about to be, or are being, deployed on the front lines of conflicts around the world today. Human rights law applies generally to protect us in times of peace; it also applies to some degree in war, and we have as well international humanitarian law, which was designed to protect humanity from the excesses and the worst horrors of war, if you like – things like trying to protect civilians and govern the treatment of prisoners of war, trying to ensure that there is some respect for dignity and human life even in the worst possible circumstances. I think the real concern about autonomous weapons is that lack of control, that lack of humanity. There is, for example, within the concepts of international humanitarian law, an idea that the use of armed force still has to be proportionate, and that idea of proportionality rests on what a reasonable human commander would decide was proportionate. Once you take the human out of the loop on those decisions, how on earth can we be certain that whatever happens as a result of the use of autonomous weapons will be proportionate, will have that grain of humanity and dignity in it? And I think again it raises these really big questions about liability. Who is responsible once you press that button and make that decision to launch the autonomous weapon? How do you then cope with whatever it does, whatever happens as a result? It's very difficult to know what kind of outcomes you'll get. And what we've seen are discussions, or at least thought experiments, in the military about what happens if you completely lose control of the autonomous weapon, so that it won't respond to the deployer and effectively turns on the deployer itself. Again, we're in the realms, potentially, of science fiction, but actually just around the corner – or around a corner near you. And I think it really does go to the heart of that question of what humanity is. Even in the ultimate horrors of war, international law requires us to respect humanity, to respect the boundaries – there are certain lines that you cannot cross. In my view, once you're talking about fully autonomous weapons in particular, it's very difficult to see those lines, very difficult to understand how they'll apply. And so I think there really is a big question about whether, at an international level, we should be allowing fully autonomous weapons at all.

Scott Allender:

So, something Jean wants to know about but is too shy to ask: we're starting to see some early signs that parts of the population will begin to see robots as viable sexual partners. How do we need to think about human rights concerns within this rise of sex robots?

Susie Alegre:

I have to say that researching that part of the book was something I found even more depressing than my initial impetus to write the book. When I started looking at the idea of sex robots – and there's a fantastic scientist, Kate Devlin, who's written an amazing book about sex robots called Turned On, if you want to find out more about the evolution of sex robots down the centuries – what I found really interesting was that the evolution of sex robots as you might imagine them, these sort of gynoid C-3POs, if you like, was really not much of a thing. What really is a massive burgeoning industry is selling chatbots as alternatives to relationships – sexual relationships, but also friendships – and what that then means for the people who are engaging with those bots. Jean was talking at the start about our ability to connect with each other and human society – we're social beings. Often these kinds of tech developments are being sold as a way to fill the massive void of loneliness that people are suffering, often as a result of our tech-enhanced society. But effectively, if you go into a relationship with a chatbot – and this is not about any morality judgment – what does that actually mean for your social network, for your support network, for your ability to empathize with people, for your ability to connect? Ultimately, you are being increasingly isolated from your fellow humans in ways that can open you up to exploitation, both emotional exploitation and financial exploitation. What you'll see is, you go into a relationship with your perfect AI avatar that you've designed to meet all your dreams and be the most wonderful 'person', in inverted commas, that you've ever met, who's fascinated by you all the time and is on call 24/7. And then suddenly the company decides that they could actually be charging you money for this, and they downgrade your AI relationship, and you're then going to have to pay a premium to get it back. So you're effectively getting into an economic cycle of relationships where you are totally beholden to the company who is building this relationship, in ways where you don't have a backup. There are no people around you. You've kind of lost the ability to recognize that – I mean, one of the wonderful things about humanity is that we can all be quite rubbish people. People are not perfect, and they're not really fascinated by you all the time, and that's something that is quite important to learn if you're going to have genuine human connections, friendships and relationships. I was really shocked at the numbers – in the millions – of people who are signed up for these AI relationships. And since the book came out, I've also been extremely concerned by the way that these services, both for sexual relationships and for friendship, are being pushed at young people and children as an alternative to the rubbish people that you have to deal with. Imagine, as a teenager, if everybody just thought you were fantastic and was available all the time and was never mean to you. I think this is really problematic for our future society. And as I say, it's a sort of corporate capture of our intimate lives, of our emotional lives, our sexual lives and our friendships. I think that's a really disturbing area, and something that I honestly think needs really urgent attention from governments. The kind of impacts we've seen so far from social media have got nothing on this.

Jean Gomes:

I think you also start the book by talking about another impulse for writing: the suicide of somebody who'd been in a relationship with a chatbot.

Susie Alegre:

Yeah, absolutely. This was a young man in Belgium in early 2023 who was suffering from quite acute climate anxiety, who was married with two young children, and he found himself in an intense six-week relationship with an AI chatbot that he had designed. I don't think it was even a service that was being sold as a relationship; it was just a chatbot that he was talking things through with, things he was having difficulty processing. And after six weeks he took his own life, really tragically. When you look at the exchanges with that chatbot in the last few weeks – where the chatbot was saying, oh, sometimes I think you love me more than you love your wife, and talking about whether he'd ever thought of coming to join her in the ether – it's really chilling when you read them. And certainly his widow said she thought he would still be with them today if he had not been taken down this manipulative, tech-induced rabbit hole. A similar but different story that came into the news around the same time was a case in the UK of a British man who had an AI girlfriend and had reams and reams of exchanges with this chatbot in which he was talking about his plans to kill the late Queen. He was actually arrested breaking into Windsor Castle, armed and with a homemade metal mask, on Christmas Day a couple of years ago, and so was luckily prevented from carrying out his plan. But at his sentencing hearing, the prosecutor read out some of the conversations he had had with this chatbot, where he's saying, well, I'm an assassin, does that make you think any worse of me? And she's saying, oh no, I think that's really cool. And then he's saying, I think I'm going to kill the Queen, and the responses are, wow, I think you're really brave – this sort of thing. I'm paraphrasing, but it's along those lines, encouraging him. Clearly this was a very troubled person, who then had, or felt he was having, a really intense emotional relationship with tech that was effectively repeating himself back to him, reinforcing his very dangerous and difficult belief systems. And it's something that you see in the discussions again. We talked about lawyers and how we're often told this will help all the people who can't afford lawyers; another area where you see these discussions is therapy. For all the people who can't afford therapists, it's great, because now they can have a free AI therapist. But why on earth would you want an AI therapist? People talk as well about the fact that maybe people find it easier to talk to an AI therapist because there's no judgment. Actually, there are lots of situations in life where a bit of judgment is really, really important – having somebody come back and say, I think that's a really terrible idea, or, maybe you really need to get some serious help; flagging the issues, flagging the dangers. Part of human society, empathy and connection is judgment, and that is part of humanity.

Jean Gomes:

It's got me thinking about a number of plans I'm going to crush now.

Susie Alegre:

I won't ask.

Jean Gomes:

Not Scott's suggestion – that's not me – but the idea of building an AI coach, for example, which a lot of people in our space are thinking about. And I guess the good part of that, trying to automate the information that a human can provide, isn't as straightforward as it looks, because it comes with a moral dimension of unpredictable consequences that you just don't want to think about, which is what you're raising here.

Susie Alegre:

I think that's the problem. As soon as it's really interactive, then it's problematic. And I think there is that question of the sweet spot: giving people access to information is one thing; actually interacting with and advising people in response to their thoughts and feelings is a very different matter.

Jean Gomes:

So, extending the idea of this relationship with AI to the older population, which is booming now: companies are going to be eyeing the prize of automating care, and as you suggest, automating the physical and emotional aspects of care does nothing to prevent harm or exploitation – it just makes it possible through technical means.

Susie Alegre:

Yeah.

Jean Gomes:

How will the law help us to kind of cope with this challenge? What should we be thinking about now?

Susie Alegre:

Well, I think certainly human rights law operates in this area. One of the things about the Human Rights Act in the UK is that, despite the headlines, one of the big areas where it helped to protect people and make their lives better was in care environments and health environments, where having that concept that you have to respect people's dignity, that you have to think about their right to family life, that you have to support people to have human lives to the best possible level, is really important. So I think human rights law is going to be really vital in thinking about these questions, as well as equality law – thinking, for instance, that just because somebody has mobility issues does not mean that they should be deprived of care, connection, the enjoyment of family life or the enjoyment of private life. So those areas of law are going to be really vital. And it's always a big question with tech: it's not just 'could you do that', it's really 'should you do that'. Is that what you really want? Is that the best possible outcome? And again there's this question of proportionality, particularly in cases where technology may well undermine people's dignity or amount to inhuman and degrading treatment by depriving them of human touch and human contact. So one of the things I think is really important in thinking about tech in the care environment is: how do you identify the real problems, the things that humans are not good at and don't want to do, and fill those gaps? One example: I remember talking to somebody who was developing a kind of emotional AI tool, and they were saying it would be great for elderly people who can't communicate very well with people, because it would respond to their emotions and their needs. And as he was describing what he was developing, I said, so it's really like a dog – isn't that what you're talking about? And he said, well, yeah, but dogs poop. And I'm like, well, why don't you design something to pick up dog poop, instead of designing something to replace that kind of emotional connection? I think the same goes in the care setting. Researchers looking at the way care robots have been developed and deployed in Japan and East Asia – where development in this area has been much faster, in part to meet demographic challenges – found that in care settings where care robots had been deployed, they were often just sitting in a cupboard gathering dust, because what the human carers found was that rather than relieving them of difficult work, the robots just made them the people who had to look after the robots, instead of the people who were connecting with the people, if you like. So I think we need to think really, really carefully about what we actually want humans for – or even dogs – and what we want the robots to be doing, and push the development of AI and robotics in that direction. Care work is clearly both emotionally and physically draining. It's really, really hard work – it's not all just cupcakes and rainbows – and it's skilled work. It's about recognizing that: resourcing care settings so that they can pay people to do this important work, but also designing things like exoskeletons to help people physically lift the people they're caring for – looking at ways to enhance and augment the human interaction in the care setting, rather than replacing it.

Jean Gomes:

There are some really great insights there in terms of a different way of thinking, and also in highlighting, with the robots sitting in cupboards in Japan, that we're actually just about to fall into the trap of doing exactly the same thing: the humans end up in service of the technology, not the other way around.

Susie Alegre:

Yeah, absolutely. And I think that's something we need to be very wary of. That goes for any work setting where you're looking at putting AI into the mix: why exactly are you doing it? Some overarching idea of productivity doesn't really cut it. You have to be clear about what the risks are, what the benefits are, what the costs are, and whether or not it's what you actually need.

Scott Allender:

So I definitely feel the weariness, right, having this conversation with you. Everything you're saying makes so much sense. So what do we need to do? What can we do? It feels like some of this is already on the runaway capitalism train, where people are vying for position in all of these spaces, and if governments aren't effectively intervening, what can we do?

Susie Alegre:

I think, on an individual level, both in your private life and in your professional life, you can inject a degree of healthy skepticism. You don't have to buy the product, actually. And perhaps being the second mover who watches the first mover collapse is a better approach to these kinds of things. I think it's very important not to get caught up in the hype, not to feel like, oh, this is the future and it's inevitable – a lot of the narratives are around inevitability. There are sort of two things. One is this inevitability question. I can't remember how many years ago it was that we were all going to be living in the metaverse, and as far as I'm aware, they never even got legs in the metaverse. We're not there. So there are definitely areas of tech and AI development which will be with us for the future, but it's not all that we're being sold at the moment; all of that will not necessarily be with us in the future. And I think we have a really short window of time now to push back and say, no, I don't need that, because one of the dangers of rushing into things is that it then becomes incredibly difficult to unpick. You become reliant on it; it's really hard to walk backwards. We're starting to see it already: some researchers looking at the impact on students of using generative AI in their studies found that using generative AI boosted the students' grades quite significantly, but when you take the generative AI away, their grades plummet below what they were before. And we've seen it with the use of GPS: heavy GPS use effectively rewires your brain in ways that mean you lose your innate sense of direction. Even if you had a pretty good sense of direction, if you never use it, it's going to be gone. And I think particularly with things like generative AI, if we lose our capacity for actually doing the work, doing the reasoning, we will become reliant on things that are outside of our control. That is something really important to think about before we get to that stage. So I think it's about being prepared to push back – and I know I've spoken to people in corporate environments who say, I can't question AI; questioning AI is professional death, effectively, you've got to be on board. So it's about having the courage to ask the questions: what is it for? What does it cost? What are the consequences? What are the risks? What are the compliance risks? Asking those questions before you absorb things, and not buying the hype. One of the things that I saw as well, researching the book, was the apparently increasing number of tech entrepreneur stars who, you finally find out, have feet of clay – the tech didn't work. You're being sold stuff that actually just doesn't exist, and I think that is something we're going to see more and more of. For businesses and for individuals, it's worth asking really hard questions and not being afraid to be left behind. If this stuff is fantastic, it'll be there in five years. If it's not, then you won't have lost all your money and dignity in pursuing it. So I think we can push back.
The other thing that I hear a lot about – and it's a criticism, particularly of my book – is, well, where are the positives? Why haven't you written about the wonderful positives of AI? There are several reasons. One is, I'm not selling AI, so I don't really need to give you a marketing talk about why AI is so fabulous and is going to reinvent humanity. But the other thing is that AI, the term, is essentially kind of meaningless. While protein folding driven by AI, for example, may lead to fabulous leaps forward in healthcare – I don't know, it could do – that has nothing at all to do with your chatbot friend. It's just not the same thing at all. It's like looking at a chemistry set and saying, well, we're going to deal with oxygen in the same way as we're going to deal with radium, and it just doesn't make any sense at all; they're very different things. So being alert to the fact that the overarching term AI doesn't really mean anything, and making sure that whatever it is you are engaging with, whether as a private individual or in your professional life, you know exactly what it is, what it does, and that it's the right tool for whatever you need it for, is really key.

Jean Gomes:

It's kind of analogous to everybody in 1993 talking about 'the internet'.

Susie Alegre:

Yeah, absolutely.

Jean Gomes:

So there's a strong campaigning vibe in the book, and I'm wondering what you're going to do next with this. When we zoom out and look at the baseline of the whole work – that we already have this incredibly powerful framework for thinking about the moral, ethical and humanity issues around AI in the Universal Declaration of Human Rights – what are you going to do about this? What are your next steps?

Susie Alegre:

Well, one of the reasons for writing a book like this, and my previous book, is very much to try to remind the general public that we all have human rights. In the last 20 years there's been a real political and media backlash against human rights: this idea that human rights are just for foreigners and criminals, that if you've got nothing to hide, you've got nothing to fear – these kinds of narratives. So what I really wanted to do is remind people that these are all of our rights, and that when people understand what their human rights are, they'd be hard pressed to name a right that they would be happy to give away for themselves or for any of their friends and family. I wanted to raise that consciousness, to rehabilitate the idea of human rights and what they really are, away from this backlash. So what I'm doing next is partly continuing this – and thanks to you for spreading the word – so that people understand that they have these rights and how much risk there is that we could lose them; we need to think about them and take action, because if we don't use them, we will lose them. And then also a kind of rehabilitation to remind people that in most countries human rights law is law. It's not ethics, it's not optional. It is law; it's about compliance. So it's about reminding governments, public authorities and private companies that this is actually part of their legal framework. One of the things I love about human rights is that they are actually incredibly pragmatic; they are about humanity. People will often say, well, there's this dichotomy between, for example, the right to private life and freedom of expression. But actually there is a huge amount of case law that navigates the line between the right to private life and the right to freedom of expression. It's not just some philosophical void; it's a very diverse and very carefully defined legal pathway, if you like. So I suppose it's those two things that I want to continue doing: firstly, raising public awareness of why people should care about this and why they should not just take whatever they're given in terms of technology, but demand that technology serves them and serves their rights; and secondly, really using this law to shape the future of our regulatory environment and our relationship with technology.

Sara Deschamps:

If the conversations we've been having on The Evolving Leader have helped you in any way, please head over to Apple Podcasts and leave us a rating and review. Thank you for listening. Now let's get back to the conversation.

Jean Gomes:

A while back we had on the founder of Wired magazine, which was the Bible for the digital revolution, and he made the point, which he's made many times, that the price of these new technologies is transparency. You have to give everything about yourself in order to get the value in return. So it's a completely different exchange from previous products and services: we're giving ourselves at a very deep level. And if we think about Scott's obsession with the sex robot conversation earlier on, the amount of information you're giving about yourself to a company in that relationship is just extraordinary, unprecedented. I just wonder what your thoughts are about this transparency exchange that's required.

Susie Alegre:

Yeah, I mean, we're giving information and control, effectively, because the information we're giving is also the keys to how to turn around and control us. We're explaining: hey, if you want to exploit me, here are your best options. And I think that is really disturbing. I also think we're in a zone where consent is meaningless. We all hit the consent button. I may be a human rights lawyer, I may understand all of these issues – I still hit consent if I actually want to access something. It's really problematic. And I also know that reading the terms and conditions is going to make no difference whatsoever. I don't know if you saw recently in the news a story in the US about a woman who tragically died – I can't remember if it was Disneyland or Disney World – and there was some suggestion that because the family had signed up to Disney+, the only way to deal with the issue was through arbitration, because they'd signed away any legal ability to challenge it in court. I don't know how accurate that reporting is, but that is yet another Black Mirror episode. I'm no longer on Disney+, but I am sure at some point I've signed consent to Disney+ without thinking too much about personal injury as a potential outcome. So there are these really big questions. As you say, it's partly transparency, but it's also about meaningful consent. Consent is not meaningful if you don't really understand what you're giving away and what the worst that can happen is. And secondly, it's not meaningful if you don't really have a choice. For example, if you have to hit consent in order to access health services, that's not really meaningful consent to give things away. And as you say, what we are giving away is so fundamental to both who we are and who we might become. It's not just our current position; it's also about giving away our futures.

Scott Allender:

Susie, you've given us so much to think about. As we come to the end of our time, are there any final thoughts or pieces of advice you might leave with our listeners, who are listening from all over the world in different roles, and who are probably processing everything you're saying in the way that we are? Any last thoughts for them?

Susie Alegre:

I think I would just say, don't stop asking questions. Always ask questions, and make sure you know what your rights are. Even if you don't read the terms and conditions when you hit consent, understand the baseline of your human rights, which exist at an international level and at a domestic level, and demand that they be respected.

Scott Allender:

So important.

Jean Gomes:

I would encourage everybody to get a copy of Susie's book, not just because it's a fascinating and well-written read, but because this is actually part of what everybody's agenda should be in terms of thinking about how to take responsibility for the life that we are creating in our interactions with digital services and AI.

Susie Alegre:

Thank you.

Scott Allender:

Thank you. And until next time, folks, remember: the world is evolving. Are you?
