The Evolving Leader

Humans and AI with Anna Ivanova

November 22, 2023 Anna Ivanova Season 6 Episode 10

How neuroscience, machine learning and AI are shaping our future

In this episode of The Evolving Leader podcast, co-hosts Emma Sinclair and Arjun Sahdev talk to Anna Ivanova. Anna is a postdoctoral researcher at MIT, studying the relationship between language and other aspects of human cognition using tools from cognitive neuroscience and artificial intelligence (such as large language models). This is a fascinating conversation that ultimately addresses the big question: are today's leaders under threat from AI (now or in the near future)?

 Referenced during this episode:
TEDx – To build smarter chatbots, look to the brain | Anna Ivanova (May 2023)

 Other reading from Jean Gomes and Scott Allender:
Leading In A Non-Linear World (J Gomes, 2023)
The Enneagram of Emotional Intelligence (S Allender, 2023)

Social:

Instagram           @evolvingleader
LinkedIn             The Evolving Leader Podcast
Twitter               @Evolving_Leader
YouTube           @evolvingleader

 

The Evolving Leader is researched, written and presented by Jean Gomes and Scott Allender with production by Phil Kerby. It is an Outside production.

Scott Allender:

So Jean, while you and I were off gallivanting around the globe, we handed over the reins of The Evolving Leader to Emma and Arjun, and it sounded like they had quite a bit of fun.

Jean Gomes:

Yeah, I agree. This conversation with the neuroscientist Anna Ivanova about her work on the intersection of AI, machine learning and neuroscience was really terrific.

Scott Allender:

It really was. And one thing that stood out for me, I found it particularly interesting that we hold this intuitive belief that language and thought are, you know, if not the same thing, then highly integrated, particularly when you think about your inner voice. But it turns out that research is showing that thought and language are formed by quite different parts of the brain. In fact, as our listeners will hear Anna talk about, math is not even part of the brain's language network.

Jean Gomes:

Yeah. And I think that's really going to help drive this understanding of how we each uniquely experience the world. And the fact that it's moving out of the lab now into really practical applications will hopefully one day help everybody have more true empathy for each other and for how they experience the world.

Scott Allender:

Yeah, I was intrigued by the attempts to understand and measure how our inner voices differ between people. Do you have an inner voice, Jean?

Jean Gomes:

I have got several, but we'll keep that between ourselves for now. But I do agree. I think the conversation about how this work is going to inform AI is really interesting, and really important for our listeners as they consider what it means to prepare themselves for the coming transformation of work and life.

Scott Allender:

Yeah, completely. As I listened, I had some feelings of regret at missing out on the opportunity to speak firsthand to such an interesting guest, but quite a bit of pride in the incredible questions and conversation that Emma and Arjun led.

Jean Gomes:

Yeah, fully with you there, Scott. So let's tune in.

Scott Allender:

Let's do it.

Emma Sinclair:

Welcome to The Evolving Leader, the show born from the belief that we need deeper, more accountable and more human leadership to confront the world's biggest challenges. I'm Emma Sinclair, host of the show today while Jean and Scott are off travelling in different directions for work and pleasure. And joining me for the day, I am delighted to share the company of guest co-host, friend of mine and friend of The Evolving Leader, Arjun Sahdev. We always begin the show with an important question, so I'm going to ask it now. Arjun, how are you feeling today?

Arjun Sahdev:

Thank you, Emma. I am delighted to be here. This is such a treat, my first time on the show, so I'm very excited, and also feeling very cosy as I sit here watching the rain hit my window, because it's a very, very rainy day today. So I'm feeling cosy indoors, with a nice hot mug of tea next to me, ready for this conversation. How are you feeling, Emma?

Emma Sinclair:

I'm feeling similar. I'm feeling calm. I use that word a lot, but I am feeling calm this afternoon, and cosy too: the weather has completely changed here in the UK, we've gone from heat to rain, and that brings that feeling of being, not trapped, but quite cosy in my home today. I'm also very intrigued, open and delighted to be sharing the space with our guest today, and I'm looking forward to this conversation with you as well, Arjun. Because today I'm absolutely delighted to introduce our guest, Anna Ivanova. Anna is a neuroscientist and postdoctoral researcher at the MIT Quest for Intelligence, a community of scientists, engineers and researchers seeking to uncover the foundations of human and machine intelligence and deliver transformative new tech for humankind. Anna is deeply interested in studying the relationship between language and other aspects of human cognition, and she's now using her neuroscience background to expand her research beyond humans to better understand the capabilities and limitations of artificial intelligence, artificial neural networks and large language models like ChatGPT. So Anna, that sounds fascinating. Welcome to The Evolving Leader. How are you feeling today?

Anna Ivanova:

Thank you for inviting me. It's great to be here. I'm excited to join from Boston. And it's actually sunny here, so quite different from the cosy rainy atmosphere you have.

Arjun Sahdev:

Lovely, Anna. It's such a fascinating time to have you on the podcast, with so many things happening in the space of machine learning, AI, neural networks and so on. But before we get into those topics, before we get into your research, let's just imagine for a second that you've been invited back to your primary school to talk to the new generation, today's generation, that will grow up with AI. How would you describe the journey to where you are now, what you're focusing on and why?

Anna Ivanova:

I think, as for most people I'm guessing, the path that we take is nonlinear. I would never have imagined myself coming here talking about AI. As a kid, I was interested in biology, so I did know that I was interested in science and wanted to do research. I grew up in Moscow, Russia, and at the age of 17 I came to the United States, since it's one of the best places to do science; I wanted to be able to experience that and learn from the best. Once I was in the United States, I found out that there are many different opportunities in many different areas of research, and I realised that I could combine my interest in biology with my interest in language to study language in the human brain. And so this has been my interest; this is what my PhD is in. But then also in college, I took a computer science class, kind of on a hunch that computers seemed to be important in our world today, so I should probably know something about that. It turned out that the computer science class was actually a programming class, which I didn't know when I took it the first time. But I liked it. And so throughout the PhD, I've been using programming to help with data analysis and the like, but also to understand large language models, which are possibly some of the best artificial models of language that we have today.

Emma Sinclair:

I'm hearing a combination there of the human biology and also that keen interest in computer science and programming as well. Who knew? I wondered if we could start our conversation today by highlighting one of the differences that you draw out in your research, and that's the difference between language and thought in humans. How do they differ?

Anna Ivanova:

So many of us have this intuition that language and thought are very tightly linked. Many people have an internal monologue, or sometimes a dialogue, running in their head planning out their day. And so there is this intuition that language is really what has enabled humans to think. There are many versions of that view, of that hypothesis, but the most basic one is that we think in language: the same parts of our brain are active when we speak and listen and when we think and plan. My PhD advisor, Ev Fedorenko, has done a lot of work uncovering the brain network that is responsible for language, the language network. And it turns out that the language network responds to all kinds of language, so listening, writing, reading, speaking, in all the languages that you might know. And it's very selective: it doesn't do things that aren't language. Many people would think that it's involved in any function that requires manipulating symbols, like math. It turns out that language and math are different; they rely on different parts of the brain. Other people thought, well, maybe something where symbols combine into hierarchical structure to make something more and more complex, like music. Music does not rely on the language network either. And so we see this clear separation, which shows up even more strikingly in people who have brain damage. One of the common consequences of, say, a stroke is aphasia, which is a disorder of language, of language production or language comprehension. Some of you who have older relatives might have relatives who have aphasia. And it turns out that even though language might be affected pretty severely in people with aphasia, their ability to think often remains intact. So if the damage is restricted to the language network, people can still think, engage in social activities, do math, plan. Some people with really severe aphasia still enjoy playing chess on the weekends. There is a case of a Russian composer who reportedly wrote some of his best work after he developed aphasia. So lots of aspects of thought end up being preserved even when language is gone or mostly gone.

Arjun Sahdev:

I think that's really, really interesting, because we can often make this assumption that we all think in the same way. So I guess my thoughts turn to how the styles of thought differ between people, and how people perceive their own thinking, which is also unique to each person. Could you say a little about why it's so important for us to be aware of the way in which we think, the style in which we think, or even, if we're say multilingual like my wife is, the language in which we think? I find all of that really fascinating.

Anna Ivanova:

Yeah, it's fascinating. And it's fascinating how underexplored this area is. I guess we just have a really powerful inclination to assume that everybody thinks roughly like we do, not the content of the thought, but, as you said, the style. And then when those differences get uncovered, everybody's really surprised, and then people kind of forget about it and move on. It seems to have happened many times throughout the centuries: people talk to each other, realise they think differently, but then this information tends to go away somehow. So, a few years ago a lot of work emerged on the concept of aphantasia, the phenomenon where some people seem to be unable to form visual images. If I say, imagine the blue sky, many of you will have a picture of a blue sky in your mind, with different degrees of vividness. And some people just don't do that: they have a concept of a blue sky, but they don't have a visual image associated with it. And along those same lines, it seems that people have striking differences in inner voice: how much, how often, whether they talk to themselves in their head at all. Some people I've talked to can say exactly: oh, I know I speak to myself, in my own voice, and it's coming from over here, or from right inside my head; that varies too. And other people don't have the phenomenon of inner voice whatsoever. I have a friend who is an English-Polish bilingual, and I asked her back in the day what language she thinks in, and she was just puzzled. She's like, what do you mean, think in a language? To her, the phenomenon of thinking in words at all is completely foreign. And here at MIT we have language professor Ted Gibson, who had been studying language for decades before he realised that his experience of inner speech is different from other people's: he doesn't have an inner voice. So those kinds of differences often go unacknowledged and underappreciated. And somebody who thinks in words all the time might imagine that, well, of course language is essential for thought, I think in words all the time, how can it be otherwise? But those cases of differences between people can actually be really insightful, both for researchers trying to figure out how language and thought work together, and for people in everyday life, just realising that there are different styles of thinking, and that when you're trying to explain a concept or an idea to somebody else, you might need to use different strategies.

Emma Sinclair:

How do you study that, in terms of the actual design of experiments or work in the field, to understand and unearth that between people? Because I'm now sitting here thinking, do I have an inner voice? And is it my voice? And is it my language?

Anna Ivanova:

Yeah, it's a very active, open area of research. Many people have done work in this area, and there isn't a consensus on how to do it best. I actually am involved in a group project that tries to do just that; we are now very actively trying to figure out what the methodology, the best approach, is. So I can tell you what we are trying to do. We are first outlining the various conceptual distinctions that might be relevant. So, what kinds of imagery can there be? Whether you're imagining an object or a scene, how intense the colours are, how many details there are in the scene. With an inner voice: do you imagine it in a particular voice, your voice, somebody else's? Do you imagine it with a particular intonation, or just the words? In what language? All of those little details we first need to figure out ourselves before we can ask people about each of them. Then the next step, which to me seems kind of inevitable: we just need to ask people. Honestly, so many of those differences emerge as soon as you ask people what their experience is. And sometimes our experiences don't necessarily tell us how things are in reality; we might have a perception that we think in words when actually the underlying thinking is different. But I think it's still very helpful to just ask people what their experience is, what is most helpful to them when they think. Then we want to actually see whether those reported differences affect people's behaviour. Maybe some people have better memory because they can use imagery, or they have an inner voice and so it's easier for them to express their thoughts. And then finally, as a neuroscientist, I want to know how it affects the brain. For the people who report constant inner speech, is their language network always active, always firing? I'd love to find that out.

Arjun Sahdev:

Fascinating. And I'm wondering whether, in your research or in the research that you've studied, there's any kind of correlation between neurodiversity and trends in the way that people think. I'm thinking about members of my family who are dyslexic, for example. I'm always fascinated by whether there is something going on in their brain that allows them to think in perhaps shapes or colours, or with visual acuity, more so than in words and numbers, as an example. Is there anything there that has suggested such trends?

Anna Ivanova:

I think it's most likely true, but it is, I would say, a very underexplored area. There is lots of work, of course, on neurodivergent populations, and there is work now that no longer frames it just as a deficit, but as true neurodivergence: different styles of thinking and different ways of interacting with the world and with other people. So I think it's very promising. In terms of thinking styles, you've mentioned thinking in words versus thinking in pictures, in visual images. I will also say that some people report their thinking as being more abstract than either one or the other. So I think the space of possibilities there remains to be charted out. As for dyslexia specifically, of course it affects primarily the ability to read, and it depends on the ability to link the visual form of a word with the sounds of a word. But language itself is distinct from just reading, so I could imagine dyslexic individuals who would be very verbal in their thinking.

Arjun Sahdev:

So interesting. Okay, so let's move to AI, let's move from humans to AI. Before we get into things like ChatGPT and so on, these systems you mentioned earlier, large language models: could you just level-set for us, what is a large language model? And then we could potentially talk about how, if at all, they are capable of thinking.

Anna Ivanova:

Yeah, so let's start with what a large language model is. It is a model that can generate words, one word at a time. That's primarily what it does. It might see the beginning of a sentence, like "the sky is", and its goal is to guess which word is coming next. That's fundamentally its main task. It might start by guessing totally randomly, "the sky is key", and then it will get the error signal: it's going to get the feedback that the correct word there is actually "blue", the sky is blue. And so then it will adjust its predictions just a little bit to take that new information into account. So it's trial-and-error learning. And this procedure repeats many, many, many times over, on lots and lots of text from the entire Internet. By doing this prediction task many, many times, it learns the patterns that occur in natural language. And as a result, it turns out that through this learning procedure, models can now generate responses to questions, and texts, that sound coherent, plausible and grammatical, mainly through this objective of predicting the next word. Now, for the most recent wave of models like GPT, the researchers also do some additional training, essentially just guiding the model in terms of what kinds of answers it should and should not generate. If you've ever used ChatGPT, you might have noticed that there is a button there with thumbs up and thumbs down. This is the kind of feedback that ChatGPT's developers can then take and feed back to GPT, to say: generate more of this kind of answer and less of that kind of answer. Not the whole thing verbatim, but the kind of answer that has been thumbed up or down. So this is why we now have models that can actually answer questions and provide explanations and elaborations, like what we see today.
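To make the guess-and-adjust loop Anna describes concrete, here is a toy sketch in Python. It is only an illustration of the trial-and-error idea: real large language models are neural networks trained by gradient descent over enormous text corpora, not a lookup table over a few toy sentences.

```python
from collections import defaultdict
import random

random.seed(0)
corpus = "the sky is blue . the grass is green . the sky is blue .".split()
vocab = sorted(set(corpus))

# weights[context][word]: how strongly the model predicts `word` after `context`.
weights = defaultdict(lambda: defaultdict(float))

def predict(context):
    """Guess the next word given the previous word."""
    scores = weights[context]
    if not scores:
        return random.choice(vocab)  # untrained: guess at random
    return max(scores, key=scores.get)

# Trial-and-error loop: guess the next word, compare with the word that
# actually followed, and nudge the weights toward the correct answer.
for epoch in range(5):
    correct = 0
    for prev, true_next in zip(corpus, corpus[1:]):
        if predict(prev) == true_next:
            correct += 1
        weights[prev][true_next] += 0.1  # the "error signal" update
    print(f"epoch {epoch}: {correct}/{len(corpus) - 1} guesses correct")

print("the sky is ->", predict("is"))  # "blue", the most common continuation
```

Accuracy climbs across epochs as the weights absorb the corpus statistics, which is the same basic dynamic Anna describes, minus the neural network.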

Arjun Sahdev:

One thing I have noticed with using ChatGPT in particular is that there are some limitations, right? Some of these large language models, I've found, struggle with certain problems that are quite simple, even from a mental arithmetic perspective; there are certain mathematical reasoning problems that they struggle with. How can we use what we know about the different networks in the brain, and how they interact with each other, communicate with each other and actually use one another to solve multivariate problems, and then apply that to developing this next wave of AI, this next wave of whatever ChatGPT and the like are going to become?

Anna Ivanova:

Good question. People have a bias that is very well justified from an evolutionary perspective, but that is really tripping us up when it comes to large language models. The bias is: if you have someone or something that produces coherent, good-sounding language, well, it must be a thinking creature, most likely another human. And that's how it's been for all of human history: humans are the only ones who can generate language and whom we can communicate with in language. And of course, language is just a reflection of what another human is thinking or feeling or wants from us. So there is a strong tendency to interpret any kind of language as coming from an entity with thoughts, feelings and intentions, from somebody reasonably smart. And now, when we have large language models that generate coherent, good-sounding language, people assume that they're just going to think, and know what we want, and be able to do all the kinds of thinking things that humans do. But in fact, when you look inside the human brain, you see the dissociation between language and thought that I alluded to earlier. We have the language network that is responsible for processing language, all kinds of language. But then we also have other parts of the brain that do other things. There is a network that's responsible for solving problems: anything that makes us think hard, including logical puzzles, IQ problems and math, relies on a network that's completely separate from the language network. Another network is responsible for social reasoning: thinking about other humans, what they want, why they may be sad, why they said what they said, why they didn't say something. We use this very complicated social understanding machinery every day in our interactions, but it's actually separate from language. We can use language as a tool to get something from somebody, but our social understanding is different. And when we interact with language models, we kind of want everything from them. We're like, oh, well, you're good at language, now you must be good at math, and you must be good at logical reasoning, and you must be good at understanding exactly what I want, and at guessing my intentions and what I know and what I don't know. And it just doesn't work like that. Just because a model has acquired one ability, it doesn't mean that everything else is going to catch up right away. And so my colleagues and I have been trying to caution users of these language models not to assume that the models are going to be smart and human-like in their thinking. They might become smarter, but it's not going to come for free just because these models now sound coherent.

Arjun Sahdev:

That's a good call-out, because my next exploration of this is whether that can actually happen. Many people are talking now about this idea of AGI, right, or a superintelligence; if anyone's seen the Marvel movies, you know, you've got Jarvis, this genius that just floats in the ether. If experts can get closer to engineering, or understanding, the relationship between the brain networks that you've just spoken about, do you think that AI models could actually develop superintelligence? Could they replicate a human brain? Is that just bizarre at this stage, or do you think that the work behind the scenes is getting us closer there?

Anna Ivanova:

I think, to me, the question of artificial general intelligence, AGI, breaks down into: okay, what do we mean by intelligence? What would an AGI agent look like? And you see a lot of conversations and a lot of hype around this topic these days, like, oh, AGI is going to be here in two years. Actually, I saw a screenshot on Twitter, I think just this morning, of a tweet like that, saying, oh, AGI is coming soon. And then somebody asks, well, how do you define AGI? And the original author says, well, actually, it's a very hard question, and different people have different opinions. So people can make those very bold predictions about what's going to happen with these superintelligent systems without even having a clear sense of what it is that they're talking about, excited about or afraid of. And for me, I think we can just use the human mind to see what kinds of abilities are there that make humans intelligent. Language is certainly very helpful. So we now have systems that have learned the rules and patterns of language; that's a big, big step, and very, very impressive. But then we have abilities like the ones I've mentioned earlier, mathematical and logical abilities, some of which, you know, we can get through a calculator, so that's not particularly new. We have social intelligence, which seems much harder to model and understand. We have abilities to plan, to decide what actions to take to achieve a desired outcome. And then, of course, when we think about robots, there's also the question of, well, can we actually interact with the physical world? Can we have robots that move in the world? And it turns out that a lot of things that are very simple for humans, you know, picking up a cup, are very, very challenging for machines, even though adding or multiplying long numbers together is trivial. So there are lots of things that might go into the definition of intelligence, and for each and every one of them we can think: what is the possible trajectory? Where are machines on the path to achieving that? How will these abilities interact together? I think it's exciting; lots of things are happening, and people are working on developing those more sophisticated systems. I just think that whenever we do, we really need to be thinking about why we're doing it, rather than just trying to build an intelligent machine because we can. But I think there is lots of promise, and as long as there are safeguards and regulations in place, I think we'll be okay.

Emma Sinclair:

I think this is so interesting, depending on the perspective that you come at it from and also the knowledge that you hold, and I love the fact that you can say openly that it feels promising. We've had others on the show who've shared different perspectives as well, and I think that's amazing. For us, we're always looking at, well, what does this mean for the future of leadership? And there's a lot in this space and in this conversation at the moment in terms of how we perhaps coexist with artificial intelligence, how this works for our workforce, what it means to lead in an automating world. So, thinking about that potential promise and that growing sophistication, in your opinion, do you think it's feasible for organisations to believe in, or consider, a future where they run significant parts of their operations using deeply advanced AI? Is that a future that's likely to happen anytime soon, and if so, what implications could you see around that, from a human perspective?

Anna Ivanova:

It seems to me like it's already happening; this shift toward using AI tools is already underway. I am always fascinated to talk to people outside academic circles who had no idea what language models were until last December or January, and who now use ChatGPT in their day-to-day work, to help them structure their thoughts or write outlines for documents. Or, obviously, for kids, to write college essays. There are all kinds of uses that people have stumbled upon when using ChatGPT, and it seems to be increasing productivity in many ways that we probably couldn't have anticipated before. In many ways, it's not that drastic a change, because, well, we've had tools like that for a long time. We have the calculator. Even a few decades ago, we had to have people whose job it was to perform long series of calculations on paper to arrive at an answer, and now, of course, we have calculators. So I think large language models will probably end up being yet another tool that is not going to replace humans or become this general intelligence machine, but will be very useful for performing specific tasks and for saving people time.

Arjun Sahdev:

I want to pick up on that, Anna, because you mentioned something a little while ago, when you said language is a uniquely human construct. And I know just from various bits of reading that people like Steven Pinker talk about language being a window into the mind, as an example. So I just wonder what you would say to the sceptics, or even to those who are potentially worried that our unique human abilities are actually being disrupted faster than we could ever have thought, with the rise of AI, with the rise of large language models, but also creatively, right, with Midjourney, for instance, being able to produce staggering pieces of art. What's your perhaps more hopeful message, given all the research you're doing?

Anna Ivanova:

I guess there are two parts to your question: one is uniquely human abilities, and the other is creative professions more specifically. In terms of uniquely human abilities, to return to the calculator analogy again: presumably the ability to do long division is a uniquely human ability; we don't know any other animals who can do that. And, you know, we're very happy to offload this to a calculator, or to have the Apple Maps or Google Maps app on our phone to help direct us where we need to go. Sure, we could use a map; that's also a uniquely human ability. But if we can save time and energy with a new automatic tool, oftentimes people prefer that. When it comes to creative professions, that's an interesting new challenge indeed. My guess is that it's also not new. Of course, the role that visual artists, painters, play in society changed with, say, the rise of photography: once we no longer needed humans to make exact visual depictions of the world, we had to rethink what art is for. My guess is similar things are going to happen now with ChatGPT being able to generate texts or Midjourney being able to generate images. I've had to generate some images as illustrations for a talk using DALL-E, and I actually found it non-trivial. It's not easy to write a good description that will give you exactly what you want. First of all, you have to get from an abstract idea to a concrete image: if you want to illustrate an abstract concept, you need to know what visual depiction will convey it most effectively. And then, of course, you need to know in which style, and how exactly to phrase the prompt, to get the output you want. So to me, that also felt like an example where you still have to have a human in the loop: the artificial tool might save you time and energy, but it won't fully replace the creator, the designer. And of course, some people will not be happy with the tools that are available; for example, they might want to generate paintings that are completely beyond the scope of Midjourney. So for me, it seems like there will be a lot of space: even though a lot of current jobs might end up being automated, people will find other niches and other opportunities to be creative and inventive and to contribute.

Arjun Sahdev:

Well, one of the key things I'm taking from that answer is the importance of learning how to use the tools as an enabler of your work, rather than seeing them as something that could replace or disrupt me. It's kind of like, how can I use this to take me to the next level? Would you say that's accurate, that that's possible as well?

Anna Ivanova:

I think so. I think so, yeah. And as much as I caution people to watch out when using ChatGPT, it can generate false statements and inaccurate statements, it doesn't do math very well, it has all kinds of limitations, it will be very confident when it's wrong, and you will not be able to tell the truth from the lies just based on the outputs unless you already know what the answer should look like. Even despite all of those limitations, it seems like it's worthwhile to invest time and energy in looking into the capabilities of large language models. What can they do for you? What can they do for your business? What are the potential uses, even given the limitations? Really, despite my initial scepticism, I have been struck by how pervasive it is today, and it really seems like that is the future.

Arjun Sahdev:

So you just mentioned the "how", and we were just talking about how it can be used in a business. Going back to this idea of language being a uniquely human construct: one thing we know about languages is that there are so many of them, and a lot of the languages in the world are dependent on where you are geographically. So when we apply the lens of growth and innovation, do you think replicating a language network in an AI, having a sophisticated large language model, could speed an organization's ability to understand a new market in a foreign territory? For instance, could a uniquely British homegrown business use an LLM to enter Taiwan?

Anna Ivanova:

I think it might be challenging in some ways. One is that a large language model is trained on massive amounts of text from all over the internet, and the vast majority of that text is in English. So the model is best when it performs in English. It can do very well in other languages like Mandarin Chinese, because that's also very common. But if you're talking about interacting with users of a language that is less common, one that's only used by a relatively small group of people, that's where you might start seeing more limitations and models not performing as well. And there is honestly a lack of research on this issue, on what exactly the advantages and limitations of large language models are when it comes to under-resourced languages. Another issue is the one I've alluded to before: these models can be very useful, but they can also make a lot of mistakes, lots of silly mistakes. And so the use case in which these models work best is where you yourself know what the correct answer should look like. Let's say you need to write an ad for a product: you can give ChatGPT the specifications, and it will write a nice Instagram post for you. That's a great use case, because then you can read over that post, spot any inconsistencies or any facts it got wrong, and correct them, and it's still a much faster process than writing it from scratch yourself. But if, say, you want to translate your ad into a language that you don't know, you can't fact-check the output of the model, and so there might be issues and problems that you just won't be able to spot. So if you have an employee who speaks the language and you want to increase their productivity, maybe an LLM can help, but you always want to have a human check at the end before the final product is released.
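To illustrate that last point, here is a minimal sketch of the "human check at the end" pattern Anna describes. The function names (draft_with_llm, release_ad) and the product are invented for this example, and the stub stands in for whatever LLM API a business might call; the point is only that a draft nobody can verify never ships.

```python
from typing import Callable, Optional

def draft_with_llm(spec: str) -> str:
    """Hypothetical stand-in for a call to a large language model."""
    return f"Meet {spec}: the upgrade you have been waiting for!"

def release_ad(spec: str,
               human_review: Optional[Callable[[str], Optional[str]]]) -> Optional[str]:
    draft = draft_with_llm(spec)
    if human_review is None:
        # Nobody available can fact-check the output (for example, a
        # translation into a language no one on the team speaks): don't ship.
        return None
    # A human corrects or rejects the draft before anything is released.
    return human_review(draft)

# Example reviewer: approve only if the product name made it into the copy.
checked = release_ad("the AquaFlask 2",
                     human_review=lambda d: d if "AquaFlask" in d else None)
print(checked)
```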

Jean Gomes:

If you're enjoying the show, you might also appreciate Scott's new book, The Enneagram of Emotional Intelligence, which provides simple, powerful tools to help us better understand ourselves and others, available online at all major retailers.

Emma Sinclair:

Can we spend a little minute thinking about emotions and emotional awareness, maybe empathy? You mentioned social judgement, and that social element of language is something that still seems to me quite uniquely human. Now, you can tell me I'm wrong in that as well, because my question really is: is there ever going to be a point where we could train an AI model to have empathy, or to demonstrate those very essential leadership qualities that we're holding on to as uniquely human, at the moment?

Anna Ivanova:

I think when we talk about empathy, and chatbots and AI assistants that demonstrate empathy, we need to think about why we would want them, or whether we would want them at all. For example, there are companies that provide hotline services for various kinds of mental health conditions, and one of them actually ran an experiment where, instead of human volunteers, it used chatbots for certain portions of the users who would contact the hotline. What happened is that the users initially were very happy with the responses they were getting. You can easily train a large language model to sound more caring and empathetic; we have templates for what a good empathic response sounds like. So that worked really well. But as soon as the humans found out that they were talking to an AI, their level of satisfaction and trust dropped. When it's just words, when it's bogus, people don't find it useful anymore. And so when it comes to empathy, it really seems that the human component is essential. Empathy requires some kind of understanding of emotions and of what another human is going through, and so people are looking for some demonstration of humanity in the responder. It's fairly easy to train a system to sound caring and genuine and empathetic. It's not easy to train a system to exhibit actual empathy, empathy that people would recognise as such even when they know they're talking to a robot.

Emma Sinclair:

Fascinating. I find it interesting that they didn't know until the moment they were told it wasn't a human, and then suddenly the feeling about what they'd been interacting with is slightly different.

Anna Ivanova:

Yeah, and maybe at some point they would figure it out, you know, if you're getting the same response over and over again. At some point you might guess, and then you might feel deceived, of course. But yes, just going back to my original premise: you really want to think deeply about why you would want to train a system like that. Would it actually be helpful, or would it just seem helpful but not be?

Emma Sinclair:

Not actually serving the purpose of what it was intended for in the first place. Interesting.

Arjun Sahdev:

This is a really interesting part of the conversation, isn't it, because it's allowing us to think about how we can elevate what is uniquely human for the future. So it feels like there is a space for us to lean into the fact that people are not going to be, based on this example, satisfied with a robot or a chatbot trying to comfort them when, I don't know, their cat has died or something like that, or even something more serious. So there is definitely a space where we can think about jobs that the chatbots, the robots, the AIs will be able to do way better, but there are spaces where they can't compete with humans, and I find that quite interesting. That said, I know people have said that about the creative arts as well, and there's a kind of grey area and a really interesting conversation happening around that. But what I'm getting from this conversation is: there are going to be uniquely human things that we should lean into and try to seek power from. Have you found anything in the research that you've studied, or even in your own observations, that might refute that, that might say these things can actually be scarily human? In a sense, I just think about some of the things that Elon is doing with his, you know, Tesla robot and things like that. Is there anything that you're seeing that might say we should be worried?

Anna Ivanova:

There are two sides to it. One is systems actually being human-like, and the other, maybe more common one, is people perceiving the systems as human-like just because of appearances. I've mentioned one fallacy: mistaking systems that use language for systems that think. It's very common. Similarly, you can have a system that will tell you, "I am so sorry you are going through this, I feel really bad for you, I care deeply about you," and many people will believe that, and will infer that a system that has just been trained to generate these generic responses actually has emotions and feelings, and thus cares and has empathy. In fact, there was an early example of a system like that: a chatbot, ELIZA, which was a therapy bot from the 60s. So, very early in the day, it was able to generate responses like "I'm so sorry" and "Tell me more", and to repeat what you said back to you. And this was surprisingly convincing. So even though the algorithm by which it was generating responses was very clear and very, very simple, much simpler than what the systems do today, there was this illusion of empathy and caring understanding. And so that's another danger there: people might over-attribute these capabilities to artificial systems, making them seem caring, or aggressive, or smart or not smart, simply based on generic cues that work when we think about other humans but don't work when we think about machines.

Emma Sinclair:

It feels like we've covered a lot of ground, from human thought and human language through to AI and how that differs from what's possible with humans, and what we can perhaps use AI for. So what are you going after next? Where is your research taking you, and what's fascinating you, for 2024?

Anna Ivanova:

That is a great question. There are so many exciting, fascinating directions opening up for me and for people who care about language and thought. One line of work that I mentioned earlier is looking at humans specifically, at inner speech in humans, trying to figure out: to what extent does human thought depend on us converting ideas and thoughts into language? Do those individual differences hint at something more general? Is language something we need to organise our thoughts, remember things, plan things into the future? And if so, what would intelligent machines need? Can we have machines that think in language? Or would we want something more general, something that reflects how all humans think, not just some humans? So that's one line of work. And another line of work is to look more deeply at large language models. They are a cool tool for businesses, but for scientists they're an exciting tool for figuring out just how much you can get from learning from text, from learning to do this next-word prediction. It seems that they've learned a lot about how human language works. To what extent do they resemble the human mind? So using these artificial models as models of the human mind is another very promising direction.

Emma Sinclair:

But if there are people listening now and thinking, "I don't even know what my thinking style is", how could you help someone just consider how they think? It might not be a possible answer, but I thought I'd ask it.

Anna Ivanova:

There are a few questionnaires. There is one that's used very commonly to help people figure out whether they have aphantasia, so whether they cannot form visual images; there is a series of questions you can go through to find out. And there are certain questionnaires that can help diagnose how much you use an inner voice. My research team, working on this question, thinks that none of them was quite perfect, so we're in the process of developing our own. So stay tuned. Maybe in a few months we'll have something that you could do: you know, go online, play some games, figure out how you think.
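For listeners wondering what such a questionnaire might look like, here is a toy sketch of collecting and scoring self-report items. Everything in it is hypothetical: the items, the rating scale and the cutoff are made up for illustration and are not Anna's team's instrument or any validated questionnaire.

```python
# All item wording, the scale, and the cutoff below are invented for
# illustration; they are not taken from any published instrument.
ITEMS = [
    "Picture a sunrise. How vivid is the image?",
    "Picture a close friend's face. How vivid is the image?",
    "Picture your own front door. How vivid is the image?",
]
# Responses: 1 = no image at all ... 5 = perfectly vivid.

def score(responses):
    assert len(responses) == len(ITEMS)
    mean = sum(responses) / len(responses)
    if mean <= 1.5:
        return f"mean {mean:.1f}: little to no reported visual imagery"
    return f"mean {mean:.1f}: reports visual imagery"

print(score([1, 1, 2]))  # -> "mean 1.3: little to no reported visual imagery"
```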

Emma Sinclair:

Thank you ever so much, Anna, for a fascinating tour of your research and for opening our eyes to some of those bigger questions around how we coexist, if nothing else, alongside these growing large language models and what they can do for us, especially in leadership. So thanks very much for joining us; we have all we need, and we hope to hear more about your work soon. Thank you.

Arjun Sahdev:

It's been a real pleasure. Thank you so much.

Chapter Markers

Introduction
If you were invited to talk to pupils at a primary school, that is today’s generation that will grow up with AI, how would you describe your journey that has got you to where you are now, what you're focusing on and why?
Drawing on your research, can you highlight the differences between language and thought in humans?
Why is it so important for us to be aware of the way in which we think, the style in which we think or even the language in which we think?
You mention that only some of us have an inner voice. How do you study that?
Is there any correlation between neurodiversity and trends in the way (or styles) that people think?
Moving to AI, can you tell us what large language models are and how (if at all) are they capable of thought?
How can we use what we know about the different networks in the brain and how they interact with each other to solve multivariate problems, and apply that to developing this next wave of AI?
If experts can get closer to engineering or understanding the relationship between the brain networks, do you think that A.I. models could actually develop superintelligence? Could they replicate a human brain?
Do you think it's feasible for organizations to perhaps believe or consider a future where they may be running significant parts of their operation using perhaps deeply advanced AI? Is that a future that's likely to happen any time soon?
You’ve said that language is a uniquely human construct. What would you say to those who are potentially worried that our unique human abilities are actually being disrupted faster than we could ever have thought with the rise of AI and LLMs?
It sounds like you’re saying that it’s important that we learn to use these tools as an enabler of our work rather than a disrupter. Is that accurate?
Going back to the idea that language is a uniquely human construct. There are clearly so many languages used across different territories, could a sophisticated LLM speed an organization's ability to understand a new market in a foreign territory?
Can we think about emotional awareness and empathy? Is there ever going to be a point where we could train an AI model to have empathy or to demonstrate those essential leadership qualities that are uniquely human at the moment?
If there are uniquely human things that we should lean into and try to seek power from, and currently there are definitely spaces in which AIs can’t compete with humans, is there potential in the future for AIs to become more human?
Where is your research taking you next?
There may be people listening who are thinking, “I don’t even know what my thinking style is”. How could you help them address this?