EP 171: The Playbook of Careless Non-Legal Innovation - Richard Slaughter

Richard Slaughter is back for a chat to discuss the latest target of his forensic critical thinking skills: the careless innovation of Big IT.

Interviewed by: Peter Hayward

References

ABC TV (2023). AI vs Human Intelligence. Documentary. Sydney: ABC.

Cook, K. (2020). The Psychology of Silicon Valley. London: Palgrave Macmillan. https://link.springer.com/book/10.1007/978-3-030-27364-4

Glenny, M. (2009). McMafia. London: Vintage.

Hassan, R. (2022). Analog. Cambridge, MA: MIT Press.

King, R. (2023). Here Be Monsters. Melbourne: Monash University Publishing.

Slaughter, R. (2004). Futures Beyond Dystopia: Creating Social Foresight. London: Routledge.

Solnit, R. (2005). Hope in the Dark. London: Canongate.

Stonier, T. (1983). The Wealth of Information. London: Methuen.

Weizenbaum, J. (1976). Computer Power and Human Reason: From Judgment to Calculation. San Francisco: W.H. Freeman.

Zuboff, S. (2019). The Age of Surveillance Capitalism. London: Profile.

Transcript

Peter Hayward:

The future will look back at us and our society's response to Artificial Intelligence, and the spectrum of responses will run from, at one end, "Well, seriously, what were you worried about?" with the appropriate LOL graphics or whatever they have in the future, to, at the other end, if we're still here, if we haven't been turned into paper clips, "Seriously! What were you thinking?" But hindsight is not helpful. I didn't teach a Master's of Strategic Hindsight, because most people have pretty good hindsight. So in the middle of whatever is happening, all we can do is read, listen and talk to people who have an idea of where things might be going. And that's today's conversation.

Richard Slaughter: The starting point for this is really crucial. What I learned in those other articles is how did we get here? How did the IT revolution go from being such a boon to humanity to being a chronic danger to humanity? What on earth happened during that period to take that dream and turn it into something that all societies have to deal with? So one of the starting points is really to understand what happened back when Google and Facebook and the others got started. And that really highlights the theme of Neoliberal culture and what I call the Playbook of Careless Non-legal Innovation. It's Non-legal because you can't say it's illegal 'cause there weren't any rules at the time.

Peter Hayward: That's my guest today on FuturePod, Richard Slaughter. Richard was my Lecturer in Foresight when I was studying. He was my PhD supervisor and has been a regular guest on FuturePod as well as being one of the great thinkers and writers in our community.

Welcome back to FuturePod, Richard.

Richard Slaughter: Thanks, Peter. Good to be here.

Peter Hayward: It's been a couple of years since we've had a chat. How have you been, and what's going on in the world of Richard Slaughter?

Richard Slaughter: Well, pretty good actually. I guess the main news is that Luke van der Laan and I have been working together at the University of Southern Queensland, in particular getting a couple of new futures units up for first year undergraduates. People in all sorts of areas are beginning to think they ought to know a little bit about futures, given the media environment and everything that's going on these days. So we're starting small, but hopefully, if things go well, it'll become a more solid kind of center for futures inquiry, research and other units in due course.

Peter Hayward: When's that starting, Richard?

Richard Slaughter: Well, we've had the first run-through of the first unit, and I've just been marking the very first assignments. It's very different doing it online, though. My formative years and professional high points were all face-to-face. So online is very, very different, and I think we've yet to see it return the kind of value and quality, and just the sheer productiveness, of the face-to-face contact that we enjoyed before. But hey, we're just at the beginning here, so there's a long way to go.

Peter Hayward: What are the majors that people are taking futures as an elective in?

Richard Slaughter: Yeah, they're very interesting. They're mostly science and health related ones so far, and one environmental one. Quite a range of people, but on the whole from the environment and sciences areas.

Peter Hayward: Back to the old Master of Science, the first version of the Foresight Masters you started at Swinburne.

Richard Slaughter: Ah, that's taking us a fair way back there, Peter. Over, gosh, 20, 25 years nearly.

Peter Hayward: And I believe you might also find a home for your books and library?

Richard Slaughter: Yes. We're thinking of using that as the core of a futures library. And we've had some beginning conversations around that. So perhaps in January we'll get further down the track on that.

Peter Hayward: And in addition to that I know you've been writing and you're also planning, I think, another piece, which is what we're gonna talk about today?

Richard Slaughter: Yes, that's right. It's quite probably the last of these big pieces for the futures-type journals. I've found it's really taken a lot longer; there's a lot else going on now. Obviously I'm getting older too, perhaps slowing down a bit, but I also see this as the equivalent of staying fit physically. If you continue to tackle complex subjects, then at least you are using your brain to its fullest, even if its fullest is not quite what it used to be. And it's been a real pleasure. It's been a hassle at times, because of not having sustained time to get into it, but it's been a pleasure to find yourself looking up there in those highlands and feeling that there are things worth saying, and, over time, crafting a way to say them. I don't think writing is easy. It's one of the toughest professions there is. But I think it also returns tremendous satisfaction and value when things really start to hang together and connect. And that's what happened quite recently with this piece of work.

It follows on from the four articles I wrote previously on reassessing the IT revolution, except this time the themes have broadened into wider concerns, and the topic of this paper is Human Agency and what I'm calling the Technoscientific Fallacy. To my way of thinking it gets us right to the heart of all those questions around why we are continuing to see powerful new technologies, and suites of technology, released from labs and out-of-the-way places. Suddenly thousands and millions of people are using Large Language Models and talking about Generative AI, and I'm thinking, hey, what's happening here? How is it that these things just suddenly got tossed out into the wild? Was there no impediment? Was there no checking? Was there no regulation or concern about what might happen? Apparently not. It's something that we've seen before.

The starting point for this is really crucial. What I learned in those other articles is how did we get here? How did the IT revolution go from being such a boon to humanity to being a chronic danger to humanity? What on earth happened during that period to take that dream and turn it into something that all societies have to deal with? So one of the starting points is really to understand what happened back when Google and Facebook and the others got started. And that really highlights the theme of Neoliberal culture and what I call the Playbook of Careless Non-legal Innovation. It's Non-legal because you can't say it's illegal 'cause there weren't any rules at the time.

My guide to this, as I think you know Peter, is Shoshana Zuboff and her book, The Age of Surveillance Capitalism. She spent seven years working on that book, and it was just such a wonderful gift to humanity, because through her eyes you can see exactly what happened and why. What becomes clear from looking at the rise of Google in particular is that these developments that were going to change the world were developed in secrecy. Moreover, they were patented in secrecy. So nobody really had the slightest idea what was going on when these search tools began to appear, and people began to find them useful. That seemed like nothing to worry about. But what nobody realized at the time, and what's so abundantly clear since, is that in changing from simple search, which didn't produce enough money, enough profit, to a model where everything you do online is tracked and measured and quantified, there was a crucial shift from something useful to something that invaded human space before anyone realized what was happening. So it's Non-legal in the sense that the so-called free services were paid for by acts of continuous and uncontrolled theft of personal information. And that's really the little pile of poison at the bottom of the whole process. It meant that the IT revolution was kind of fated to turn sour over time, because the entities driving it were driven by fairly basic values: capital accumulation, profit, growth, wealth, and not any real concern about values to do with humankind and human wellbeing. So, it's a bit naughty to do this, but you can think of the web as developing in three or four stages.

Initially it was enabling. It made things easier, made new things possible, and everyone was really happy with that. There were lots of tech boosters. I remember one of the guys at my PhD exam was Professor Tom Stonier from the UK, who wrote a book called The Wealth of Information, and another, who wasn't there of course, was Nicholas Negroponte, the guy from the Media Lab. People like this looked with wide open eyes at what they saw as this terrific, whole range of opportunities that came with it. So it was first of all seen as positive and enabling. Then before too long, because people were using it and it was expanding so rapidly, people and systems became dependent on it. In a sense that is very similar to a Progress Trap, where you do something and you're not aware of the consequences. And this is, I think, what happened in the development of the Web. It became a Progress Trap at that point, because you couldn't easily go back once you were there, once you were using it. The systems seemed so much superior that everyone was taking them up, and the downside was very much downplayed.

Well, that led on to the situation from a few years ago when it began to be clear, and there's a book by Misha Glenny called McMafia which is a popular version of what was becoming obvious, that for a whole lot of bad actors, particularly international criminal networks, the internet was an absolute gift, because they could pursue what they did in secrecy, and they're very difficult to track down and very difficult to deal with. There was also, of course, the rise of hacking and scamming, and things began to turn at that point; people began to realize that they were actually vulnerable. So: enabling, to dependent, to vulnerable, to what we've got now, where at the national level here in Australia the government is having to set aside economic product, or wealth if you like, to make sure that the systems that run the state, the public services, the utilities, banks and the rest of it, can withstand this constant assault. We're now surrounded by layers of criminal aggression and crime, and cyber war itself, and cyber aspects of warfare, have come very much to the fore. So this whole story smacks of something like a missed opportunity. A historic missed opportunity. That's where I began. And these four papers were published, as you probably know, in an open access book from the University of Southern Queensland called Deleting Dystopia. So if anyone wants to jump quickly through that, it's a very quick, easy read, and it outlines that story.

Peter Hayward: The phrase you use, Careless Non-legal Innovation, struck a chord for me, remembering both the Donald Trump presidency in America and the Boris Johnson prime ministership in England: carelessness and non-seriousness become a kind of perverse badge of honor. Whether it's a way of disarming critics or simply says, I'm a buffoon, therefore you've got nothing to worry about. People in positions of power don't want to take things seriously, as a strategy.

Richard Slaughter: Well, that's true, but remember that one of the reasons we even got to that kind of space is that standards of public discourse nosedived as so-called social media got going. They were weaponized and used by all kinds of actors for all kinds of purposes. And I'm beginning to be pretty clear that Trump wouldn't even have gotten into office had it not been for the success of cyber ops by a certain country that wanted to influence the election. Recent material I've come across does rather suggest that the winner wasn't in the States at all, but elsewhere. So you get these weak people in positions of power because the standard of news reporting, dialogue, political discourse and so forth just drops. It's really vital for people in the media professions to call out things as they see them. Leigh Sales was making just that point the other night in her speech to an ABC function. She made it very strongly. But another factor is that one of the drivers of all this was Silicon Valley, seen as a tremendous asset in the US and a powerful economic sector.

But when you look at Silicon Valley from a sociological or psychological viewpoint, it looks completely different. One of the sources I found really useful was Katy Cook and her book, The Psychology of Silicon Valley. It really undercuts this idea of a shiny, successful entity doing great stuff, because it shows how superficial that image is and how the Valley is obsessed with selling things of questionable value. And that always reminds me of Donella Meadows' comment that you only have to spend millions of dollars advertising something if its worth is in doubt. That's a wonderful rejoinder, because what we have to remember, and this goes back to your recent point too, is that the advertising and merchandising industry has been in full swing for about a century. We're so used to it that we forget how thoroughly it's infiltrated our thinking, our values and the way that commerce works. Here in Australia we have several so-called commercial stations that are just manipulating their audience every single day. Take a look at any one of the news programs on any of the commercial networks and you'll see very clearly the promoting and the manipulation and the putting forward of product. It's a world based on lack of clarity and deceit on an absolutely mega-industrial scale. And of course that affects everything.

So coming at it from a psychological view, looking at Silicon Valley in that way, is really very helpful, which is why I've taken copious notes from that book and put them up on the Foresight International website under Archive and Research Notes. If you go to the site, go to Archive, scroll down to Research Notes, and there's a whole block of the fascinating work that she did. So when it comes to things like Large Language Models, like ChatGPT, and the idea of Artificial Intelligence coming into this very contested, very atomized, confused, up-for-grabs kind of context, that's a problem in its own right. By bringing these things out without real thought or care or forethought, we've had some extremely powerful tools tossed into the public sphere. And there are very good reasons to hold them back and examine them. In fact, there's a meeting going on in London pretty much as we speak, held by the UK Prime Minister, to look at ways of managing and regulating this.

But there's a fundamental issue here, which is to do with thinking that these things have got anything to do with intelligence. We have to work hard to strip away the marketing and all the assertions around high tech and really try to get to grips with what on earth is happening. Which is where the business about Human Agency and the Technoscientific Fallacy comes in. I'll just mention part of it and leave you to come in again, but part of it is to do with language and meaning. One of the books that has thrilled and refreshed and inspired me lately is by a lady called Irene Vallejo, and her book is simply called Papyrus. It takes the Great Library of Alexandria as a sort of base, but it tells the story of the development of language and writing, the rise of books and the valuing of books and knowledge, from the Assyrian period onwards. It was in the context of having read that, and having thought what a wonderful, rich evocation of human life, existence and culture it is, and how different that is from when someone comes along, looks at language as a kind of raw material, rips out the rules that seem to underlie it, and then pretends that they've got something that makes sense.

There is no sense in the products of these machine language operations; they're calculations, and they belong in one world. These devices belong in a world of calculation, whereas humans belong in worlds of meaning. It was Joseph Weizenbaum, decades ago, in a book called Computer Power and Human Reason, who discussed the implications of a program he developed called ELIZA, and how humans read onto, projected onto, machines whatever meaning seemed to be there. So we have a situation where there's already potential for massive confusion and diversion, and for the mistaking of calculated results for human meaning. And that leads to some very, very dicey stuff around Human Agency.

Peter Hayward: The thing that strikes me, Richard, is the connection between the development of human language and communication and how that fits into psychological development itself. In other words, if you wish to develop psychologically, then the ability to create language, both as your internal dialogue and then taken out of you so that someone else can work with it, is one of those aspects of Human Agency. Will we not bother to develop language because we have clever tools that seem to be able to talk for us?

Richard Slaughter: Well, we're already seeing examples of people not using their own brains when they've got those tools. There was a library, I won't say where it was, but it wasn't in Europe, that was told to get rid of books that could be deemed controversial, and the librarians found it really difficult to make up their minds. So they actually put it to Chat instead and got Chat to decide which books to take out. Now that's an indication of the kind of thing we can expect: if a machine can do it, people will choose it, because it's easier, whether or not it makes any sense.

Peter Hayward: The machine can do it; well, the machine can do anything, because it basically builds it as a logic-driven arithmetic calculation of relationships. So of course it will find answers, but those answers are not grounded in any moral, human basis.

Richard Slaughter: Exactly. So when it comes to playing with language at this level, the dangers and the risks just explode, particularly since no real concern was given to them at an earlier stage. It's obviously a complete negation of what we can do with high quality foresight, where we can go a long way down into these areas and get a lot of early indications. Well, obviously no one was interested in doing that. Some people, as you know, think the risks to humanity are existential, because control over so-called AI is problematic in some respects; maybe it just wants to spend its time gathering resources from all over the world to make paper clips, as one fellow put it. But underneath that there's what seems to me a really central issue, which is that these technologies are often turned out with the assumption that there might be a few problems, but basically they add something helpful to the human toolkit and can broadly be seen positively and usefully. I think that's an underlying assumption that's run through the whole IT revolution, but it actually isn't true. What I'm looking at in this paper is how all technologies have what I call an Essential Duality. It's not just a case of technologies being neutral, which is a widely held belief and one I think is completely wrong, because in the IT context, and with other powerful technologies, technologies come packaged in a particular way that grows out of their social and cultural context, with a worldview, with values, with commitments that are already programmed into them but rendered invisible, because no one is paying attention to those aspects. That means it's not just a case of putting them to good and not-so-good uses, not just a case of bad actors misusing them. It's a case of actually needing to understand what it is about particular technologies that causes them to start to work against human freedom and human needs, and to draw us into a kind of embrace which can actually be quite deadly.

The thing that really brought this home to me recently is that there's been a research team here in Australia working for some time on quantum computing. There was an excellent broadcast, in a TED Talk type format, with the Chief Scientist standing up and giving a terrific intro to what they were doing and how, they thought, they were actually ahead of the rest of the world. Viewers were treated to a very simple view of the beginnings of a whole new raft of tech. Now, everything I've seen about this subject only briefly mentions a few possible good things. I have not seen anything more, and I'm scanning every day, every week. It may be there, but if so, it's not easy to find. Nothing about: okay, so once this starts to come into functional reality, what then? Nothing. So a question occurred to me: what's really going on here? It's the notion that successive waves of high tech innovation will, on balance, serve to improve the human condition. That's the fallacy I'm zeroing in on, and it seems to me quite clearly a fallacy, because where we are, and the experience that we've had and are having, suggest otherwise.

And none of this ever rests on just one of us, does it? We always have to go out and find other people who've worked in parallel, draw on them, respect their work. That's what I'm doing with a couple of people: Richard King, whose book is called Here Be Monsters, and Robert Hassan, whose book is simply called Analog. And by the way, that book was published by MIT Press last year, which is really interesting given some of the early boosters came from MIT. What I have come to understand through these two writers is that we are still living through not a simple change of tech but an epochal shift of state, meaning the shift to digital is far more significant, and in some ways disabling, than we have understood or been told. Now, that sounds like a pretty odd thing to say. But when you think about it, the digital realm is not something that we, as embodied human beings, can have any direct experience of. It is a kind of no-place. We know it's there. We can't see it. We can't feel it. We can't touch it. It's only accessible via sophisticated tech. And the only entities that have that tech are powerful organizations.

So powerful organizations are the gatekeepers to the digital realm, and whatever suits them is the way things happen. Reading Richard King: by the way, I didn't like the title of the book. Here Be Monsters refers to medieval maps, where strange beings on the edges were meant to indicate areas of danger. He's taking that metaphor, and I think it's a poor one because it's so resonant of another age. But in that one book he actually takes apart the technoscientific way of looking at things. He shows very clearly how it overlooks or ignores the social, economic and political conditions under which technical innovation occurs, and does so in ways that affect human ecology and human life in ways that are not immediately obvious. It denies the extent to which technologies have constitutive impacts on human affairs. And it has a pervasive ethos of manipulation. In those kinds of circumstances we tend to see ourselves as intricate, largely autonomous systems, as if humans were merely complex machines.

So there's a whole series of features that go along with that, all of which are invisible if you just look at the product. Think about how useful it is to have an iPhone. Great. But what seems to be happening, and it's very clear in the IT system, is that it's not so much machines directly threatening the future, but that there's a strong, constant pressure for humanity to begin to view everything in machine-like terms.

Peter Hayward: So you haven't written the paper; it's obviously still sketched out. But I'm going to lead you towards, not conclusions, but this: if a person wishes to maintain or enhance their human agency, what are the kinds of early steps? Obviously getting informed, but beyond being informed, what are the useful actions or pathways that you think people, just individuals, could lean into?

Richard Slaughter: Well, the tricky thing about answering that question is that the answer is different for everyone. I like Rebecca Solnit on this particular kind of issue, because she's very broad church and she writes a lot; her book Hope in the Dark contains masses of that kind of material, where she shows how even in the most desperate and awkward circumstances there are chinks of light, ways of proceeding, strategies. That links with the whole empowerment thing. I saw the point of this when I was writing The Biggest Wake Up Call in History, because I used four people as exemplars, showing how in their own life and work they had followed these principles in their own ways, doing different things and finding agency: finding the power to act, finding energy and direction and capability. Perhaps one of the best of all is Joanna Macy. Her work on Despair and Personal Power in the Nuclear Age is absolutely rock solid in this respect. But then there are people like James Hansen, and Muhammad Yunus, who founded the Grameen Bank. Those are some of the people I quoted. It's very much a case of connecting up with sources that offer people a menu of intelligent, humanistic and inspiring options, and of individuals following through with that.

The problem, and it's very much related to the underlying problem we face, is that the tech is becoming more clever, in a sense. But that's not the real issue. It's more that people are becoming less intelligent, if you like. So one answer would be to make sure that somewhere in one's life one puts a lot of attention into the arts and humanities, whatever suits one, and really feast among the giants and the amazing people who inhabit that space. Remember that art and literature are a conversation that never dies. It's a passport that never runs out. And this great conversation has been going on for several thousand years. That's why I thought Vallejo's book was so wonderful: it put you briefly in touch with that wonderful tradition of meaning-making throughout human history.

That does feed directly into the conclusion here, though, which is that the rise of so-called intelligent machines does imply the decline of human agency, as I suggested, but only if we remain passive and do nothing about it. So we need to work at getting a more sophisticated and informed view of Technoscience and how it operates. A great example right on our doorstep, and actually already here for some people, is the cashless society. There are strong efforts to make that universal, and one or two places already have. But if you look at it a bit more carefully, what that does is to fold everything into corporate land, corporate tech, and to eliminate all the multiplicity of informal uses of cash in society: in charities, in out-of-the-way places, with people who don't have the internet, and so on. There has been some progress on regulation, as I said, with some being worked on in London imminently, but that's always after the fact.

But one really specific thing we can do is to put some limitations on what venture capitalism can do. And strangely enough, Biden mentioned this just the other day: firms actually taking this seriously and having to pass some tests about whether things were going to be helpful or not. I do have a summary of all this by an Australian critic called Christopher Allen, which I'd be happy to finish on.

Peter Hayward: It'd be good to hear Christopher Allen's summation of where we just might be.

Richard Slaughter: Christopher Allen is a critic, an extremely articulate critic, who writes in the Weekend Review magazine. Through long-term immersion he has developed that inner eye, that eye of discrimination. He sees things very clearly, and I very much enjoyed his recent piece on AI. This is what he said at the very end: "Perhaps the deepest conclusion is to remind us of the difference between AI as a machine for processing information and human consciousness as the seat of awareness, feeling and understanding. It is consciousness that impels us to create art, or to think, wonder and want to know. It is also consciousness that contemplates the meanings invoked by art, literature and music. While AI can emulate many of the operations of the mind, it does not replace consciousness, precisely because that is not a function of mind, reason or data processing."

I'll just add one final point to that. The ABC put a docco on quite recently about AI versus human intelligence, and in one part of it a presenter went to one of the Chat apps and used it to generate a script for a presentation at a comedy club, a brave thing to do. She went there, she gave the presentation, and she died on stage, as the saying goes. The audience just sat there. The reason is not hard to understand, is it? There was nothing funny at all about it.

Peter Hayward: A good friend of yours, and someone I admired, was Wendell Bell, and one of the superpowers he said humans had was their ability to develop healthy skepticism. Not cynicism, not descending into that, but being skeptical: skeptical of yourself, but also skeptical of the claims around you, putting them to proof. And I wonder whether skepticism is one of those human agency pathways.

Richard Slaughter: Well, it certainly helps, I'm sure.

Peter Hayward: also struck me too, Richard, that when I first met you. You had done the work to establish what you call critical futures. Obviously the ability to do critical thinking is still at the absolute core of what we need to manage disruptive times.

Richard Slaughter: The futures being delivered to us, frankly, in a word, don't work. So, that's exactly right.

Peter Hayward: Great to catch up, Richard. It's so nice to hear you still going and still punching, doing your best for all of us. Thanks very much for spending some time with the FuturePod community, and hopefully this latest writing venture will generate some interest for other members of the community to engage with and offer perspective and conversation on.

Richard Slaughter: Let's hope so. Thanks, Peter.

Peter Hayward: I hope Richard's conversation has got you thinking and helps you make sense of what is emerging around us. FuturePod is a not-for-profit venture. We exist through the generosity of our supporters. If you would like to support the Pod, please check out the Patreon link on the website. I'm Peter Hayward. Thanks for joining us today.