EP 164: The Wild West of AI Futures - Richard Yonck

A return interview with Richard Yonck in which he surveys what he calls the Wild West of AI Futures, including digital assistants, job losses, human-machine hybrid work, ethics and social media.

Interviewed by: Peter Hayward

Richard’s other Futurepod interview

Links for this interview

Wired: “The Generative AI Race Has a Dirty Secret” (2/18/23)

The Guardian: “The toll of training of AI models” (8/2/23)

AI for Good – https://ai4good.org/

AI2 / Allen Institute for Artificial Intelligence (AI for the Common Good) – https://allenai.org/

Hugging Face – https://huggingface.co/

AI for Good Global Summit – https://aiforgood.itu.int/

 

Connect:

Website: richardyonck.com

Twitter/X: twitter.com/ryonck

LinkedIn: linkedin.com/in/ryonck

GeekWire articles: https://www.geekwire.com/author/richardyonck/

Keynotes and other talks – https://richardyonck.com/videos/

 

Books:

Yonck, Richard, Future Minds: The Rise of Intelligence, from the Big Bang to the End of the Universe. Arcade Publishing, 2020. https://www.amazon.com/Future-Minds-Rise-Intelligence-Universe/dp/1948924382

Yonck, Richard, Heart of the Machine: Our Future in a World of Artificial Emotional Intelligence. Arcade Publishing, 2017. (Second edition with new foreword by Rana el Kaliouby, 2020.) https://www.amazon.com/Heart-Machine-Artificial-Emotional-Intelligence/dp/195069111X

Transcript

Peter Hayward: I don't know how you are traveling with things, but I have just about reached the overwhelm stage of the AI disruption topic. I for one could do with a bit of help orientating myself around this domain.

Richard Yonck: We're dealing with a new set of technologies that has seemingly appeared out of nowhere. And that's not the case. AI has been around for 75 years, at least, depending on how you want to talk about its origins... We've had this continual evolution, this continual progression of different technologies and the neural networks. ... When we talk about GPT, when we talk about ChatGPT, the GPT stands for generative pre-trained transformer, and it's essentially a neural network. So, these are all statistical models that are converging on an answer in a way that's very different from how the human mind, the human brain does, and it does it very rapidly. It can do it with enormous amounts of data. And so, we're now entering a period where these things seem to almost be performing magic as far as some people are concerned... It does amazing things, but it also leads to all kinds of ethical questions, intellectual property issues, copyright issues. So, when I say it’s the Wild West, it's not an exaggeration. If anything, it's an understatement. We're really dealing with so many things that we've never had to contend with before. So, in the course of this, we're really having to think a lot about, what does this mean for the kind of world, the kind of future that we're building?

Peter Hayward: That is my guest on Futurepod today, Richard Yonck. Richard is a Seattle-based futurist and international keynote speaker who explores future trends and emerging technologies, identifying their potential impacts on business and society. And he is a returning Futurepod guest. 

 

Welcome back to FuturePod, Richard.

Richard Yonck: Thank you. It's great to be here, Peter. Thank you very much for inviting me back. I really enjoy our chats.

Peter Hayward: I think last time we spoke, I had to look, the last podcast we did went up in December 2020. So we're pushing on to almost three years since we spoke, and you had your book come out, Future Minds. How did the book go?

Richard Yonck: The book has been very well received. I've had a lot of great feedback and conversations around it. The fact that it literally launched as COVID took off in March of 2020 didn’t exactly help my plans for a book tour, but we adjusted and so, it's been a real good ride.

Peter Hayward: From my point of view, Richard, it's great to talk to you anyway, but particularly you are a person that I've wanted to have back on the podcast for a while. And I'll explain why. I regard myself as a reasonably well-read generalist. A generalist / futurist / prac-ademic, whatever you call it. A bit of a very modest polymath. Can I call myself that and stress the modest? And I have been trying to follow all the things, all the moving parts in this space that is where you are and where you write your books. And I've paid attention to the AI alignment question and all the fears and concerns and beliefs that we can either engineer our way, design our way out of a disaster or alternatively, destroy the world, create Skynet, everything else.

And so, I've been following alignment, trying to get a sense of where it's going, watching that. At the same time, I've seen the rapid emergence of the LLMs and the digital media and audio processing. I've seen the dog-piling of all the entrepreneurs, the charlatans, the serious money, everyone, the search for the unicorn. And then, in the other corner, we see AI and jobs, the fear that the jobs are going away again. We're back in this thing of technology is going to come in and take away all our jobs, and what are schools supposed to do. And now we've got the Pope buying in, saying this is actually something that the Catholic Church wants to get involved in.

I'm a well-read, very confused person in this space. I don't know really where I should be paying attention, what I should really be spending my time on. I don't know where the interesting stuff is or where the things to be concerned about are. So, the chance to talk to someone like you, to at least give me some guideposts, would be very much appreciated by me, and I hope by the people listening.

Richard Yonck: Thank you, Peter. That's a very big question. In trying to answer and address some of that, I'm going to recognize that not everyone has covered some of the reading you and I have done around this. I may explain terms here and there and expand some of the many initialisms and so forth, with little explanations along the way, just to keep things clear and add some clarity as we go.

I totally agree with you that we're in the midst of a Wild West right now. There is so much going on. There's so much public awareness, or at least attention, I should say, rather than awareness. And there is so much media hype that it really is challenging–even if you're familiar with the space–to really understand or think: what part of the noise am I not supposed to be listening to? What should I pay attention to? And so forth. So, you addressed quite a few different aspects and issues around AI. Please, anytime you want to break in in the midst of this, either for elaboration or for clarification or to change the topic, please do.

But let's just start with a few things here right now. We're dealing with a new set of technologies that has seemingly appeared out of nowhere. And that's not the case. AI has been around for 75 years, at least, depending on how you want to talk about its origins. And whenever I talk with people, or a lot of times when I talk with people, the general public, I'll make a statement like that. And they'll say, what are you talking about? AI? It's just started, I just heard about ChatGPT the other day, or whatever. They don't recognize that we've been going through all kinds of developments in terms of the Connectionist era, the Symbolic era, the AI Winters, different aspects of added computing power. The GPUs that started actually driving neural networks in the mid-2000s, when it had actually been something that had been around as a concept for 20, 25 years, but we just didn't have the computing power.

So, we've had this continual evolution, this continual progression of different technologies, and the neural networks really started to take off around 2010 or so. You can argue the exact date there, but the point is things really got going this past decade, and as they did, we started seeing some really impressive things being done, but they were still basically pattern recognition types of processes and so forth. And then around 2017, we got to a point where Google Brain published a paper around creating what are known as transformers, a form of machine learning that is at the heart of so many of these LLMs. When we talk about GPT, when we talk about ChatGPT, the GPT stands for generative pre-trained transformer, and it's essentially a neural network. These are all statistical models that are converging on an answer in a way that's very different from how the human mind, the human brain, does it, and it does it very rapidly.

It can do it with enormous amounts of data. And so, we're now entering a period where these things seem to almost be performing magic, as far as some people are concerned. Where they just look at it and it's “Wow, how did it do that?” But more importantly, or as importantly, to my mind, in terms of the public fascination with it, is the fact that all of a sudden, just like in the early days or the mid-days of the computer revolution–once you got to a certain level of interface, a certain level of capability–pretty much anyone, without being a data scientist, without being a computer scientist, can sit down and start typing in a few prompts and get a result. Then it’s, “Oh wow, I created that!” But no, you didn't. You gave it a few prompts and it converged on a mean based on an enormous aggregation of information that was brought together for this model. And it does amazing things, but it also leads to all kinds of ethical questions, intellectual property issues, copyright issues. So, when I say Wild West, it's not an exaggeration. If anything, it's an understatement. We're really dealing with so many things that we've never had to contend with before. So, in the course of this, we're really having to think a lot about what this means for the kind of world, the kind of future, that we're building.
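(A minimal sketch, for readers who want a concrete picture of what “a statistical model converging on an answer” means: a language model repeatedly predicts a probability distribution over the next token and samples from it. The vocabulary and probabilities below are invented purely for this illustration; real transformers learn them from enormous training corpora.)

```python
# Toy illustration of next-token prediction: repeatedly look up a probability
# distribution over the next word and sample from it. The table of
# probabilities is made up for this example; a real transformer computes it
# with billions of learned weights.
import random

def next_token_distribution(context):
    # Hard-coded, invented probabilities keyed by the tokens seen so far.
    table = {
        ("the",): {"future": 0.5, "machine": 0.3, "model": 0.2},
        ("the", "future"): {"of": 0.6, "is": 0.4},
        ("the", "future", "of"): {"intelligence": 0.7, "work": 0.3},
    }
    return table.get(tuple(context), {"<end>": 1.0})

def generate(prompt, max_tokens=5):
    tokens = prompt.split()
    for _ in range(max_tokens):
        dist = next_token_distribution(tokens)
        words, weights = zip(*dist.items())
        choice = random.choices(words, weights=weights)[0]
        if choice == "<end>":
            break
        tokens.append(choice)
    return " ".join(tokens)

print(generate("the"))  # e.g. "the future of intelligence"
```

Running it a few times gives slightly different outputs, which is the point: the system is sampling from learned statistics, not retrieving a stored answer or reasoning the way a person does.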

And when we talked about my coming on again to talk about this, one of the things that I was thinking was there's so much more to doing work around the future than just talking about AI, but I think it's become such a predominant driver at this stage, it's happening so rapidly, that it's really worth thinking hard about how it influences just about every aspect of our future, whether we're talking about social trends or economics or different aspects of workforce and jobs and so forth. It really is going to impact pretty much everything, so it's worth really exploring deeply. And I think this is a great way to spend our 45 minutes.

Peter Hayward: It just struck me too, Richard, just listening to you, that in some ways I'm drawn to link AI and climate change. Climate change is all pervasive. It doesn't matter what nationality you are or where you are on the planet, you are part of the problem, part of the effect and part of the solution. Some parts of it obviously produce much more of the problem, and some people are going to quite unfairly suffer most of the consequences. And in some ways I think of AI almost the same way. It's being originated in some places of the world, and the consequences of it are going to flow everywhere else from the people who own it. Obviously, the technology is transforming the politics around who owns this stuff. The legal implications of the consequences. These are almost new questions, and I think similar types of questions belong with the consequences of climate change to some extent.

Richard Yonck: Great points. Without getting deeply into it right now, it's safe to say that all of this work is itself adding to our carbon footprint. The training, the use of all of these is producing that much more heat, using that much more electricity. These are all realities we have to think about now. That doesn't mean that people aren't working on creating new ways of using smaller models, more compact models that have lower energy demands and so forth, but even something like that has a direct as well as a metaphorical link to climate change. So, in the course of the work I do, one of the things I get to do is talk with a lot of scientists. I go to, or interview people from, labs, institutes and so forth to understand what they're working on.

A couple of days ago, in fact, I published a new article around a licensing method that AI2 in Seattle has created. Now it's early days, it's very preliminary, but it's the idea that at least they're looking hard at how do we create the alignment that we need in terms of our social values, our community values. How do we ensure that these models are respecting the kinds of laws and the things we consider important around intellectual property, the things we consider important around copyright and so forth. Very preliminary, very early, but at least people are working on these types of things right now.

So as far as, is it impacting everyone, everywhere, all at once, to borrow from the movie? Yeah, it is. These tools, which is what we have to remember they are, they're tools. They're extensions of ourselves, they're things that we're creating so that we can do things better, more efficiently, faster, et cetera. They also have all of these other issues that arise around them in terms of what they can mean for privacy, what they can do in terms of security, in terms of personal surveillance, what they mean for interference in people's lives. This is all real. So, I take your point about the comparison to climate change in that it affects everyone everywhere. And a few people, a very small number of people, are going to get very rich in the course of this.

One of the stories that was in the Guardian this past week, and I think Wired did a piece about it a couple of months ago, is that these models, when they scrape data from the Web... first of all, it's a public web, they're drawing from an enormous corpus of what’s public data. But as they're finding out, they need to also clean it up. There's a lot of material out there that, if you're going to use this in business, you cannot allow to come through. But that is taking a horrific psychological toll on the people in various parts of the world who are working for a dollar or two a day to go through this material. They're wrecked by it. Seeing things, reading things that people should not have to deal with. And unfortunately, it's so that business can do certain things with it. And certain people can get rich off it.

Peter Hayward: So maybe to give this some structure, and so we can fit it within the 45 minutes, let's just drill into some of the things that keep popping up with this. And again, I'm not looking for a complete explanation, but this notion of landscaping it, signposting it, at least relating its parts. Let's start with the Digital Assistants, these things that are popping up, with the pitch that you can bring in these Digital Assistants and improve your life and improve your work and everything else. It sounds good, but as you say, the snake oil salesmen and the hype merchants are playing here. So okay, what's going on with the Digital Assistant ideas?

Richard Yonck: It's a kind of a dream that's been around for a very long time. I want to start by referencing Weizenbaum and ELIZA and the early chatbots from the mid-1960s and so forth. The things that they are going to be able to do over time are, ideally, very powerful.

This is essentially a version of interface development that I've written about and talked about for years. If you follow the progression, the evolution of interfaces over the decades, we keep using more and more computing power to make these ways of interacting with these technologies more and more friendly, more and more natural. And nothing could be much more natural than a personal assistant that can interact with you like another person. Like a human assistant, even one that theoretically can read and understand your emotions, which was the topic of my first book, “Heart of the Machine.” Now that idea is great, but we're still so early in terms of how reliable these assistants are going to be. They can do very rudimentary things pretty well at this point, but we have to remember that when we're talking about new technologies, what really has to happen is an alignment between how critical the task is that it's being used for and what its reliability level is. So, if we're not taking care of that part properly, then yeah, we're going to have companies that are being sued, or we're going to have people creating problems for themselves that they don't need.

The real issue here is to understand at any given time, how reliable is this particular technology? We have to be able to create some better metrics around this so that we can actually measure adequately and be able to assign things based on where they are in terms of their evolution. Sure, they'll be better ideally two, three years later, but for right now, how are we going to use them?

Peter Hayward: So really, if I listen to you, if I'm thinking of bringing these assistants either into my life or into my business, then they're a poorly trained worker. If I continue to work with and supervise them, they could become a very good part of the team. But if they're unsupervised, they're probably going to bankrupt the business or do something like that.

Richard Yonck: I think that's a fair assessment. And the ability of certain platforms, certain types of software, to actually learn and be corrected by human feedback and input varies. But in many respects, this is a move in the direction of the hybrid model, where humans and machines are working more and more closely together over time. They're able to essentially support each other, with each doing a particular task that the other doesn't do quite as well. And so as a result, the idea of self-correcting, self-training through human-in-the-loop concepts is pretty powerful at this stage in the game.

Peter Hayward: Is this a really important area to pay close attention to as a scanner, or is this something you can maybe just leave on a kind of watch?

Richard Yonck: I think that's really down to the person and a little bit of their philosophy. I think it's important that people understand that anytime we're dealing with new technologies that lots of people don't understand well–especially, perhaps, older people–there's an awful lot of potential during a time like that for people to set up scams, to con people and so forth around that technology. The snake oil salesman concept, as you mentioned. That's the time, before there's adequate regulation and recognition and understanding on the part of the public, when our radars need to be tuned a little sharper, when we need to be a little bit more aware and cautious. That email that's just come in telling me that I can make X amount of money this month by enlisting this new assistant? Just be very careful.

Peter Hayward: This sounds to me like a play space, not an investment space at the moment.

Richard Yonck: There's an awful lot of people chasing the technology with their money and this is, unfortunately, this is an important part of the innovation ecosystem. But at the same time, there's enough money out there chasing stuff right now that we get the overhype and the over expectation of the likes of “we're all going to live in the Metaverse.” “We're all going to get rich off of NFTs.” We hear this for a year or two until it crashes and burns and I don't really want to see that happen with this, but at the same time, yeah, there are people who are probably investing in the wrong places along the way.

Peter Hayward: Moving on to the notion of bias. The notion that this technology is biased, that the bias is built into this technology. Humans themselves are biased, but what do you say about how people should be aware of the ways biases play out in this?

Richard Yonck: Sure. Great question. And I think that it's multifaceted. It's a function of the system in part because you're drawing off of our own biases. Human beings have cognitive biases. We have social biases as well. And the social biases are inherent in everything from language, to writing, to laws, to geography, all of these. If we aren't being adequately representative in informing the creation of these models, if there isn't adequate auditing and checking to ensure that people, all aspects of our world, are well represented, or at least are able to have feedback in the process. To recognize that this is creating something toxic over here that affects somebody. That's essentially what we have to start doing, at a very methodical level in my mind, because bias exists everywhere. These machines do not think. They won't think like us. They start from very different beginnings. So, when we get to a point of really calling these thinking machines in a real sense, where they have true contextual understanding, true theory of mind and so forth, I suspect that they are going to have a very different set of biases than the ones that we have. And right now, what they have is a reflection of ourselves.

Peter Hayward: Yeah, if you're scraping the internet to find the wisdom of humanity, then you're getting as much pornography as you're getting the work of John Milton.

Richard Yonck: Yep, absolutely. And that's been a problem ever since they started doing this with Watson and other forms of AI. Let's not confuse Watson with an LLM or anything like that, but these tools, as soon as they start trying to incorporate big parts of the internet, generate all kinds of stuff that we don't want, especially in business.

Peter Hayward: Yeah, and as you said, if the business dilemma is how much money do you spend actually improving the quality, and if the business answer to that is we'll do it by paying people in Sri Lanka $1 an hour to actually look at this stuff, then in trying to prevent one offence, you're committing another.

Richard Yonck: Definitely. And I think you touched on one of the things that's really driving this. Right now, everyone in this space who's developing these models and who's not in that “AI for Good” realm is basically a commercial venture looking to monetize this as quickly as possible. What they're doing is they want to get out first. They're releasing them too early. The big companies might have other reasons to hold back a little bit more, but there are so many small ones out there throwing this stuff out long before it should be allowed. And of course, it's going to be a long time before, at a legislative level, we're ready to bring some controls in.

Peter Hayward: Yeah, we're certainly starting to see the European Union starting to pass legislation. It's obviously going to be a heck of a difficult place to try and write legislation to manage the issue of training these systems, and quality and representation, as you say. One topic that people are paying attention to is this notion of the way that the algorithms drive social media and all the stuff that hangs off the back of social media. The algorithm and social media and digital media and public consumption. How is that little dynamic playing out?

Richard Yonck: So, social media. Once we got the business models worked out, 10 years ago or so, to a certain level, it was realized that really what people were doing was harvesting attention. Getting the eyeballs on and keeping them was really what this was driving: moneymaking based on things like the Google search ranking and everything else. So, you started seeing all of these platforms using what I refer to as algorithmic influence to keep people locked in, keep them scrolling. The concept of the ‘endless scroll’, which was not part of the early web, part of early computing. When we got to the point where, with the mobile devices, you just keep scrolling. And then of course it becomes doomscrolling, because what keeps people locked in the most are the negative emotions. And we become what's known in computing as an optimization problem. A or B. They reacted more there, they stayed on B longer. Now, let's give them B or C, and so on.
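(A rough sketch of the “A or B” optimization loop described above, assuming a simple epsilon-greedy strategy: show variants, measure which one holds attention longer, and keep shifting traffic toward the winner. The variant names, watch times and all numbers are invented for illustration; this is not any platform's actual system.)

```python
# Minimal epsilon-greedy "engagement optimizer": serve whichever variant has
# held attention longest so far, with occasional random exploration.
import random

variants = ["A", "B", "C"]
shows = {v: 0 for v in variants}
total_watch_time = {v: 0.0 for v in variants}

def simulated_watch_time(variant):
    # Stand-in for real user behaviour; pretend "B" is the most engaging.
    base = {"A": 10.0, "B": 25.0, "C": 15.0}[variant]
    return random.gauss(base, 5.0)

def pick_variant(epsilon=0.1):
    # Mostly exploit the best-performing variant, occasionally explore.
    if random.random() < epsilon or not any(shows.values()):
        return random.choice(variants)
    return max(variants, key=lambda v: total_watch_time[v] / max(shows[v], 1))

for _ in range(1000):
    v = pick_variant()
    shows[v] += 1
    total_watch_time[v] += simulated_watch_time(v)

print(shows)  # traffic drifts toward "B", the variant that holds attention
```

The point of the sketch is the feedback loop itself: whatever holds attention gets shown more, which is how negative, emotionally charged content can come to dominate a feed.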

Essentially, they (the internet and social media giants) need to be responsible for what they're doing out there. If you have reports that are consistently showing that teen suicide or depression is shifting very negatively, and it aligns very well with the social media progression over a number of years, then we need to really be looking at this and holding people accountable at some level. But it's really hard, especially in a world that is not ruled by one set of laws. It's really challenging to get social media, which has become very powerful, to do the responsible thing.

Peter Hayward: A big one that pops up: people talk about AI and jobs. That this is going to be about job losses, that we're going to basically demolish another level of white-collar jobs, along with the blue-collar jobs that were offshored.

Richard Yonck: It's a serious topic and one that needs a lot more detailed thinking. There are going to be job losses, just like there have been with technologies for centuries. The thing that we've been hearing, at least from some factions, is: oh, new technologies always create new jobs, always create a net positive. The problem with that thinking, to my mind, is that as technological progress has continued to speed up (and we can have the argument or the discussion around exponential change and so forth), you get to a certain point where, well, human beings are not algorithms. They're not calculations. It takes us time to relearn or to learn something new or to upskill. It doesn't just happen instantaneously. I believe we get to a certain stage of technological advancement where, if change comes too quickly, people can't keep up in terms of moving into the new jobs, which is supposedly what happens as new technologies come along.

So, I believe that one of the things that has to happen in time, and this may be the time, is that there's a need for changes in education systems, in the economic system that is handling all this, and in where the onus for retraining falls. You cannot have, in our current economic system, people going back to school for another four-year degree, or even a two-year degree, every decade; it's just not sustainable and, honestly, it's not something everyone can do. If you move toward a whole system that supports a lifelong learning model, where business is also seriously responsible for the upskilling–and there are some efforts and some models along these lines–you can keep up better. But if you just go with the raw capitalist model and just fire people as soon as they're not cutting it, that's not sustainable on either front, because you're going to kill the economic engine eventually. I think that there will come a time, and this may be that time, when we have to start moving toward some changes in the economic model, the means by which we educate people throughout their careers.

Peter Hayward: This could be quite messy in the short term if we finish up replacing middle range knowledge jobs with algorithms. And we pay more people at lower rates to clean up the data. So, we finish up turning medium paying jobs into low paying jobs. And we'd say there's been no job loss. No, but there's been an actual economic loss and that would play out.

Richard Yonck: Totally agree. We have to be considering the quality of the jobs, the quality of the work. There probably needs to be some level of sharing. We've got all of these models being created from public data that is, in a lot of cases, data created over the past decade and a half by users on the platforms, essentially making a lot of people wealthy. And I just think that we really have to move toward some very different models around how our data is protected and accessed, where we have the control, we have the rights over who gets to use it and so forth. And that's going to take some big changes.

Peter Hayward: One I'm interested to hear about, and you have already touched on it lightly, Richard, is this notion of hybrid work. The person-machine partnership.

Richard Yonck: To my mind, where we are right now, in terms of the overall progression and evolution of AI and automation and so forth, is that we're reaching a stage where these are getting to be really useful and powerful tools, but they're also not able to do everything we do. So, therefore, we need to have good ways of working in partnership. Not ways that drive human beings at a level where they basically burn out very rapidly, which certainly machines and automation can do. But on the other hand, there are monotonous things that machines can do incredibly well, incredibly quickly, and if that leaves a human being able to do some of the things that a person does well, to utilize the more creative-thinking aspects of our minds that are perhaps less repetitive, that could actually be a beneficial partnership in my mind. But it's a matter of balance, most certainly.

Peter Hayward: Yeah. To get to balance, you generally have imbalance and falls. So, you'd expect that this is going to be a toing and froing process. I'm going to hypothesize, and you can push back, that we're probably more likely to see the excesses driven by economics, and then the balancing back when legislatures and so forth, and the public, demand there be a balancing process. So, it's likely to be a pendulum between those.

Richard Yonck: Great point. And I generally agree. I think that economics in general are going to continue to drive just about every aspect of our world. And that includes how we're going to ultimately clean up climate change. But getting there is hard and probably after religion, economics may just be one of the hardest things for us to agree to change. People get very locked in–for a range of reasons, philosophically and financially–into a particular model and good luck trying to change their minds around it.

Peter Hayward: Let's talk about the LLMs and the Generative AIs, where we're seeing the remarkable stuff. They can do scary stuff. They can take images of people and make them say things they didn't say, and at the same time produce images based off other images. Where do you sit with where that's going and what are we likely to be seeing?

Richard Yonck: Yeah, so many aspects of new technologies are things that people discover they can do that weren't the plan of the original inventors or original developers, and that's technology in a nutshell anyway. As far as deepfakes, as far as some of the awful things that can be and are being done with that, everything from various aspects of social revenge to political manipulation and trying to direct elections and so forth, it's a tough problem. I think that we're going to have technologies, AI, that can detect this better over time. But yeah, I think it's going to be a multi-pronged approach. I think we're going to need the technology. I think we're going to need the legislation. I think we're going to need better means of attributing who's causing what, which probably, in my mind, means some big changes in the internet to begin with. We're just not going to get there if we can't hold people accountable. If people can continue to do certain things, or if it remains very challenging to establish attribution, just like with other forms of cybercrime, cyber warfare and so forth, we really need to make headway in those directions to be able to protect ourselves.

Peter Hayward: I'm easing this toward a conclusion, because obviously we could go on for another hour, and maybe we will do another follow-on podcast if there's interest. I suppose, as a kind of overarching wrap-up: do you have hope this is going to work its way out? You immerse yourself in this space, and in the hope that humans can get to a better future. This seems to be a very far-reaching technology that is cutting into everything and undoing aspects of commerce, how we live, how people will parent. Have you got optimism? Have you got hope that we can eventually work our way through it?

Richard Yonck: We all approach this from different perspectives. Mine tends to run toward optimism. For me, this has been going on for millennia. We get a new technology and the old guard says it's the end of the world, then everybody absorbs it into society and it becomes normal, becomes part of our background. We don't think about it after a certain level or a certain stage. It's growing pains, and that's what we're doing as a society in general. In my mind, it's a progression of growing pains across these thousands of years, and it's accelerated. It has a range of challenges and problems, and certainly right now, yeah, it's taking a range of tolls on people and on society. But I also think that we live in a pretty good time overall as well. Some of that varies with perspective and where people are in the world and so forth, but I do think that overall we're moving in good directions, or better directions, more positive directions.

But there will always be something. Let's just say we eventually reach something like the technological singularity, which I didn't think we were going to get into today, but my belief is that we'll get there some day. It'll become the norm and there will be other things that we'll deal with after that. There's “Oh, the sky is falling again.” We're going to get past climate change. I really believe that, but it is going to take some real pain for a number of people, and I would like to see it be a pain for the people on top, very soon, so that they actually start doing more. There's no single answer, but I don't think that there's any single thing that's going to do us in either. I think we're going to get through.

Peter Hayward: My last question then I will let you go: You use the Wild West metaphor. So, I'm going to ask you: Who is wearing the white hats? Who are the people around us, the organizations around us that, okay, they've got a white hat. You don't have to agree with everything they do, but they're probably the good guys.

Richard Yonck: Yep, absolutely. Great question. And I think it's a great one for us to end on, so that listeners do actually think that I'm not just talking through my white hat. The first one that comes to mind as soon as you ask is a whole movement that started almost a decade ago–not quite, maybe eight years ago–called AI for Good. And the idea is that these tools can be, with effort, with guidelines, created ethically. And we're still going to make mistakes, don't get me wrong, but to build in a direction that is not just simply, “How fast can we make a buck?” You have people like AI2 (the Allen Institute for Artificial Intelligence) in Seattle. You have Hugging Face. You have a lot of different people out there who are building AI that is designed to try to adhere to making a better world, making it align with human values as opposed to just pure ‘who's the next unicorn?’ So, off the top of my head, I'd probably say that's a good white hat to be thinking about.

Peter Hayward: If you can think of any more white hat organizations, we'll make sure we put them in your show notes. Okay. Once again fantastic to catch up. Hopefully we can do it quicker than three years next time. This is going to keep moving and we'd like to get you back and see where things are. Maybe just do a bit of a retro as to how the last period's gone, but thank you very much for supporting Futurepod and being a great participant and supporter and all the best on your travels through South America and now to Asia I heard.

Richard Yonck: Indeed. Yes. Moving around quite a bit. Thank you very much, Peter, for another absolutely scintillating conversation. I really enjoyed it. A lot of great topics to cover, way more than we can do in this time. So glad to join you again when the timing is right. Thank you very much.

Peter Hayward: I hope that Richard's guidance through the Wild West of AI was helpful. If you want to hear more of Richard's thinking, then I recommend you read his books; information about them is in his show notes. Futurepod is a not-for-profit venture. We exist because of the generosity of our supporters. If you love listening to the Pod and would like to support us, then please check out the Patreon link on our website. I'm Peter Hayward saying goodbye for now.