Siri, Meet Siri. – The Evolution of AI Systems | Distinguished Lectures on AI | J.P. Morgan
MICHAEL WOOLDRIDGE: So this is a talk about artificial intelligence, but I have to begin with a confession. There is no deep learning in this talk whatsoever. OK, so sorry if that comes as a shock to some of you. But actually, the truth is, AI is a very broad church. And deep learning is kind of the poster boy or poster girl for the successes of AI at the moment. But it is one thread in a very rich tapestry of AI research, which is now delivering really impressive results. It isn't all just about deep learning. And in fact, if you just remember one message from what I'm talking about today, just take that one away. It's not all about deep learning. There is an awful lot more going on than that. What I want to talk about particularly is an area of my own field, which Manuela has also worked in, a field called multi agent systems. I've never really done anything else. This is what I've been doing since around about 1989, when it was an idea that we thought would come to fruition. And it's an idea, I believe, which now has come to fruition. And I'm going to give you a little bit of a flavor of how that happened.

OK, so I'm going to start off by motivating the idea of multi agent systems– what are multi agent systems. It worked a minute ago.

AUDIENCE: [INAUDIBLE]

MICHAEL WOOLDRIDGE: OK, brilliant, thank you. So I'm going to start off by motivating the idea of multi agent systems and give you an idea of where this idea came from. And I'll show you a little video, which Manuela will have seen a million times before. Many of you will also have seen it, but many of you won't. And what's remarkable about this video, which was made in the 1980s, is how clearly we anticipated a bunch of things which we're now seeing happening. So I'll show you that video. And that will lead me in to discuss the idea of multi agent systems. So that's the kind of high level motivational stuff about why we're doing what we're doing, why this is a natural thing to be doing, and what we might want to be doing.

In the middle part of the talk, there's some more technical material. And the more technical material is really to do with an issue which arises in multi agent systems, which is that they are inherently unstable things. We build things which are inherently unstable. We need to find ways of understanding and managing their dynamics. And so the technical part of the talk is about understanding the equilibria of multi agent systems and the tools that we've developed that enable you to do that. So I'm going to talk about two different approaches there. The first is an approach based on ideas in game theory. And game theory is an area of economics which has to do with strategic interaction. And I'll show you how you can apply game theoretic ideas to understand the dynamics of multi agent systems, what behaviors those multi agent systems might exhibit. And that's one approach, which has some advantages and disadvantages. I'll show an alternative approach very, very briefly, an approach based on agent based simulation, where instead of trying to formally analyze the equilibria of the system at hand, what we try and do is just simulate the system in its entirety. And these are complementary ideas. They're not in competition, because they deal with quite different types of systems. But they are both ideas whose time has come. And then I'll wrap up with some conclusions.

So let's start by motivating multi agent systems. So I wrote a textbook on multi agent systems.
It's appeared in a couple of editions now. And the first edition, I wrote in around about 2001. And the very first page of that first edition essentially had this slide on it. It said, what is the future of computing? And the future of computing, I reckoned in the year 2000, was the following. The future of computing was to do with ubiquity, interconnection, delegation, human orientation, and intelligence. So what do I mean by those things? Sorry, I should have added, by the way: looking back, I think I was absolutely right. I don't know very much in life, but what I know is that, actually, this is how things turned out, and this is how things will turn out for the foreseeable future in computing. So let me explain what's going on.

Ubiquity is just to do with Moore's Law. Moore's Law– the number of transistors that you can fit on a chip, or whatever. Basically, what it means is computer processing power just gets exponentially cheaper year on year. The devices, the computer processors, that are used to drive computers get smaller, have lower power requirements, are more powerful year on year. And all that means is that you can put computers into places and into devices that you couldn't have imagined. So ubiquity just means putting computer processing power everywhere. And that's made possible by Moore's Law. So a neat example of this: a home computer magazine, which costs 5 pounds to buy in the UK, recently handed out a Raspberry Pi computer, free, on the front cover. You got it free with the magazine. A computer which, 10 years ago, would have had the power of a typical desktop computer of the day. So that's ubiquity. We can put computer processing power everywhere. Any device that we might care to build, we can augment it with computer processing power.

And the second aspect of that is interconnection. These devices are connected. When I was studying computer science as an undergraduate in the 1980s, I did a course on networks. And the guy that was teaching this course told us, look, these networked systems, these distributed systems, they're really hard to build, but don't worry, because you'll probably never meet one. I mean, no lecturer has ever been more wrong than that lecturer was on that day, right? Because we now realize– and this was a change in the way that people thought about computing– we now realize that actually networked systems, distributed systems, interconnected, communicating systems, these are the norm. These aren't the exception. And that's resulted in a fundamental shift in the way that people think about computing.

OK, then the third trend is a bit more subtle. It's towards delegation. So what do I mean by delegation? Well, some extreme examples of delegation are, again, if I go back to the mid 1980s, Airbus were talking about fly by wire aircraft, getting rid of the human in the loop. The aircraft's onboard computers would actually have control, and in some circumstances could overrule the instructions of human pilots. And a lot of people were outraged at this. They thought this was absolutely the end of civilization, that machines were taking over. Well, some things went wrong, but actually, the truth is, it ended up being a really good thing.
I mean, what we're now seeing is the shift towards driverless cars. Whether or not full level five autonomy– where you jump in the car and just state your destination– that's some time away, but nevertheless, there are all sorts of features, smart cruise control features on cars, which we are delegating the task of driving the car to. Those are extreme examples of a much wider phenomenon. We hand over control of ever more of our lives, and, very relevant for JPMorgan, ever more of our businesses, to computers. We are happy for them to make decisions on our behalf. And that's a trend towards delegation.

Human orientation, what do I mean by that? When Alan Turing arrived in Manchester at the end of the 1940s to program the first computer (the first stored-program computer, the Manchester Baby, was, I think, their chief claim to fame), it actually had a bunch of switches on the front of it. And you had to flip a switch this way to put a one in this particular memory location and then flip it the other way to put a zero there. To program that computer, you had to understand it right down at the level of– it wasn't even transistors then– at the level of the valves that were in this machine. Nothing was hidden from you. Well, this was an incredibly unproductive way of programming. People were not very good at programming computers in that way. And if you learned to program on Manchester's machine, you couldn't have gone to Oxford and programmed Oxford's machine, because they had a completely different architecture. The trend since then, firstly with the arrival of high level programming languages– FORTRAN and COBOL in the 1950s, then on towards languages like Algol, and Pascal, and C– meant that you could learn to program on one machine and transfer your skills to another. But the key point is, what those languages present you with is ever more human oriented ways of thinking about programming. Object oriented programming is the state of the art. I'm sure there are some programmers in the room, and you probably do object oriented programming. It takes its name from the idea that it was inspired by the way that we interact with objects, like this clicker, in the real world. And it was supposed to reflect that. It's a human oriented way of thinking about computing. And human computer interfaces will get ever more human oriented. And we'll see that in a moment.

And then the final one was intelligence. Now by intelligence, here, I mean two things. All I mean, really, is that the scope of the things that we get computers to do for us continually expands. Year on year, computers can do a wider range of tasks than they could do previously. So there's a very weak sense of intelligence, which is just decision making capability. But actually, what we've witnessed over the last decade is an explosion of intelligence in the AI sense. We're now seeing computers that are capable of a much richer range of tasks than we could have imagined when I wrote the first edition of this book.

So I say, the future of computing, with absolute certainty, I think, lies in that space. It is towards ever more ubiquity, ever more interconnection, ever more delegation, ever more human orientation, and ever more intelligence. Now that's still a wide space. And that gives us a huge range of possibilities for where we might end up. But there are a number of other trends in computing, each of which picks up on different aspects of these trends.
So if you look at the trend towards interconnection and intelligence, that takes you towards a thing called the semantic web. So after the worldwide web was developed, the idea of the semantic web was putting intelligence on the web– having smart web browsers. So that, for example, if you did a search for weather in Canary Wharf, your browser would be smart enough to realize that if it couldn't find a website which referred to weather in Canary Wharf, then weather in East London, or just weather in London, would be a reasonable proxy for that. That involves some reasoning– the kind of common sense reasoning that you all do, but your web browser doesn't. And that's the semantic web. And that's been, historically, over the last 20 years, a big tradition in AI, adding AI to the web. Peer to peer– you don't hear too much about it these days, but 15 years ago it was all the rage. Peer to peer is just one aspect of this ubiquity and interconnection. Similarly, cloud computing, and the Internet of Things. The Internet of Things is just the idea that all our devices– our toaster, our fridge, our television– are all connected to the web. It's all one big, interconnected mass. What you might want to do with that, I don't know. But the point is, it's a really exciting potential.

But where I want to go is to pick up on this last manifestation of these trends. And this is the trend towards what we'll call agents. And at this point, I'm going to show this video that I referred to. So this is an old video. It's a video that came from Apple. And you have to set the scene. This is the late 1980s. John Sculley was then CEO of Apple. He was the guy that evicted Steve Jobs– one of the most famous business decisions of all time. They had just released the Mac. The Mac was a smash hit, but John Sculley was already worrying about what would come after the Mac. And the innovation on the Mac was the user interface. It was suddenly a human oriented interface. It was an interface that people could use without specialist training about interfaces. And so to think about what would come next, they came up with this video, which is called Knowledge Navigator.

[AUDIO PLAYBACK]

[MUSIC PLAYING]

– You've got three messages– your graduate research team in Guatemala, just checking in, Robert Jordan, a second semester junior, requesting a second extension on his term paper, and your mother, reminding you about your father's surprise birthday party next Sunday. Today you have a faculty lunch at 12 o'clock. You need to take Cathy to the airport by 2. You have a lecture at 4:15 on deforestation in the Amazon rain forest.

– Right. Let me see the lecture notes from last semester. No, that's not enough. I need to review more recent literature. Pull up all the new articles I haven't read yet.

– Journal articles only?

– Mm-hmm, fine.

– Your friend, Jill Gilbert, has published an article about deforestation in the Amazon and its effects on rainfall in the sub Sahara. It also covers drought's effect on food production in Africa and increasing imports of food.

– Contact Jill.

– I'm sorry, she's not available right now. I left a message that you had called.

– OK, let's see. There's an article, about five years ago, Dr. Flemson or something. He really disagreed with the direction of Jill's research.

– John Fleming of Uppsala University– he published in the Journal of Earth Science on July 20 of 2006.

– Yes, that's it.
He was challenging Jill's projection on the amount of carbon dioxide being released to the atmosphere through deforestation. I'd like to recheck his figures.

– Here is the rate of deforestation he predicted.

– And what happened? He was really off. Give me the University Research Network. Show only universities with geography nodes. Show Brazil. Copy the last 30 years at this location at one month intervals.

– Excuse me, Jill Gilbert is calling back.

– Great, put her through.

– Hi, Mike, what's up?

– Jill, thanks for getting back to me. Well, I guess that new grant of yours hasn't dampened your literary abilities. Rumor has it that you've just put out the definitive article on deforestation.

– Aha– is this one of your typical last minute panics for lecture material?

– No, no, no, no, no, that's not until, um–

– 4:15.

– Well, it's about the effects that reducing the size of the Amazon rain forest can have outside of Brazil. I was wondering, it's not really necessary, but–

– Yes?

– It would be great if you were available to make a few comments– nothing formal. After my talk, you would come up on the big screen, discuss your article, and then answer some questions from the class.

– And bail you out again? Well, I think I could squeeze that in. You know, I have a simulation that shows the spread of the Sahara over the last 20 years. Here, let me show you.

– Nice, very nice. I've got some maps of the Amazon area during the same time. Let's put these together.

– Great. I'd like to have a copy of that for myself.

– What happens if we bring down the logging rate to 100,000 acres per year? Interesting, I can definitely use this. Thanks for your time, Jill. I really appreciate it.

– No problem. But next time I'm in Berkeley, you're buying the dinner.

– Dinner, right.

– See you, 4:15.

– Bye-bye.

[MUSIC PLAYING]

– While you were busy, your mother called again to remind you to pick up the birthday cake.

– Fine, fine, I know. Print this article before I go.

– Now printing.

– OK, I'm going to lunch now. If Cathy calls, tell her I'll be there at 2 o'clock. Also, find out if I can set up a meeting tomorrow morning with Tom Lee.

[END PLAYBACK]

MICHAEL WOOLDRIDGE: OK, so he's a professor at Berkeley, apparently. They have a somewhat more relaxed life than I would imagine.

[LAUGHTER]

So what's remarkable about this video is the number of things that it anticipated. So number one, you saw it– there was an iPad. I mean, a 1980s iPad, but clearly, it was a tablet computer. There were no tablet computers, and they were not on the horizon at the time. But that was clearly the way they thought it was going. It had a little selfie camera– well, actually, quite a big selfie camera. So they anticipated that. Other stuff that's interesting, and this is before the internet was a big thing: they anticipated web search, or something like it. They were already thinking about the devices that people would have in their homes being connected in that way. They picked up on the idea of visualization. Visualization is an area that's growing– the way that he's visualizing that data and putting together those different data sources, to be able to visualize it in neat ways, that's been a huge growth area over the last 20 years. But the thing that we picked up on, the thing that drove my community, is the idea of an agent. The thing that he was interacting with on the tablet screen was not a person.
It was an animated piece of software. And there's an interesting story there: clearly, the idea was that they wanted this to look as lifelike as possible. And actually, the received wisdom these days is that if you're doing something like that, you really don't want it to look as lifelike as possible, because you don't want to mislead people into thinking that they're talking to another human being. You've got to explicitly show them that they're talking to a piece of software.

So what my community picked up on is that notion of an agent. And what we saw is the idea that instead of interacting like the 1984 Mac screen– where you went to Microsoft Word, and you went on a menu, and you dragged down that menu to select some item, where everything happened because you made it happen, where the software was a passive servant, only doing the things that you explicitly told it to– there was a shift. Instead, the software would become a cooperative assistant: something that was actively working with you, taking the initiative to work with you on the problems that you were working on. OK, so it's a fundamentally different shift, away from this idea of software being something that you do stuff to, to something that works with you, in the same way that a good human personal assistant would work with you. And that's exactly the metaphor that they had there for their agent. So that was the vision that launched the agents community at the beginning of the 1990s. Just remember, that video was made at a time when Ronald Reagan was president in the United States, Margaret Thatcher was prime minister here, and Nigel Lawson was chancellor of the Exchequer in the UK. That's how old it is. But actually, a lot of what they predicted was pretty much bang on the nail. It's an impressive vision.

OK, so the first research on agents started in the late '80s and early '90s. And a lot of the thrust of the work at the time was about building specific applications– like software that would help you read your email, something which would help you prioritize your email, or software that would help you browse the web, anticipate which link, for example, you were going to follow next, and proactively help you with the tasks that you were working on. But it took 20 years from that video before we actually saw the first commercial agents really start to take off. And the one that grabbed my attention at the time, because I knew some of the people involved, and so did Manuela, I think, was Siri. The people involved in Siri were working at Stanford Research Institute in the US. And where they came from is exactly this work on agents, software agents, that we were doing in the 1990s. And then we've seen, of course, a flurry of others– Alexa from Amazon, Cortana from Microsoft, Bixby from Samsung. I've never actually seen that one. But there are a whole host of other software agents. And those software agents are embodying exactly those ideas– the idea of human orientation. Moving away from machine oriented views towards human oriented views, presenting you with an interface which you can understand and relate to through your experience of the everyday world. And the most important aspect of that in that video is communicating with just very natural language.
Communicating in English– which isn't in some kind of strict subset of English. It's not some special, artificial language. You're just talking as you would to a human assistant.

OK, so why did this idea of agents actually take off? Well, it's no accident that Siri was released when the iPhone got sufficiently powerful that it could actually cope with the software. There's an awful lot of very smart AI under the hood in Siri. And understanding spoken language requires a lot of processor time. So it could only happen when we had sufficiently powerful computers. In other words, it was the ubiquity. It was Moore's Law that took us there and made that feasible. Advances in AI also made competent voice and speech understanding possible. It couldn't have happened in the 1980s, because we just didn't have the compute resources available. We didn't have the data sets that we now have available to train speech understanding programs, and so on. But probably the most important thing is the supercomputer that you all carry around with you in your pocket– the smartphone. I mean, we call it a phone. That's the dumbest thing it does. That's the most trivial of the things that it actually does. It's a supercomputer. It's equipped with incredibly powerful processors, massive amounts of memory. It knows where you are. It can hear its environment. It can sense movements. And it's connected to the internet. And it's the fact that it has all that stuff which has made these agents, in the way that they envisaged back in the 1980s, possible.

So the agent based interface– whether or not it's realized in the way that Apple envisaged in that video– the idea that you interact with software through that kind of human oriented interaction, is just inevitable. It is the future of the computer, because there is no alternative. If you think about your smart home, and in the future all homes will be smart homes, there isn't really any other feasible way to manage it, other than through a kind of agent based interface. It's got to happen. If you think about a sector like banking: you download an app, which you interact with to manage your accounts. Where that's going, inevitably, is that the app is going to be more and more like an agent– somebody that's working with you to help you manage your accounts, to help you manage money. Not just something which is dumbly doing something when you tell it to, but something which is actively helping you to manage your finances. Now rich agent interfaces, really rich agent interfaces, are still some time away. By that I mean that agents are still very limited in the kinds of language that they can understand. You don't have to dig very deep to understand the limitations of Siri, in particular, actually. But even the better ones, you don't have to dig very deep to understand their limitations.

But what I want to now dig into is one aspect of this which has really been ignored. If I say to Siri, Siri, book an appointment to see Manuela next week, why would Siri phone up Manuela herself? Why wouldn't my Siri just talk to her Siri to arrange this? That's what a PA would do. They wouldn't go straight to the boss. They would go to the other assistant. In other words, my field is concerned with what happens when these agents can talk to each other directly.
If I want to book a restaurant, why would my Siri phone up the restaurant? There was this famous example– I forget whose software it was, it might well have been Apple– that did exactly this: phoning up a restaurant to book a table. You may have seen it in the news last October or so. But why would they do that? Why wouldn't they just go direct to the agent at the other side? It just makes perfect sense. We were discussing this over lunch: one of the frustrations in my life, and I'm sure many of yours, is diary management. I spend crazy amounts of time juggling meetings and trying to find suitable times. Why don't we have agents that can do that? This is not actually AI rocket science. It ought to be feasible to have such things now. And there, why wouldn't my Siri just talk to your Siri to arrange this? That is the vision of multi agent systems. Multi agent systems: just systems with more than one agent.

So if we're going to build multi agent systems, what do they need to do? Well, my agent needs to be able to talk to me. My Siri needs to be able to talk to me, but it also needs to be able to interact with other agents. And I don't just mean the plumbing, the pipes down which data is sent. I mean the richer social skills that we have. So, for example, my Siri and Manuela's Siri need to be able to share knowledge. My Siri and my wife's Siri need to share skills, and abilities, and expertise. If I've acquired some expertise in something, I want my Siri to be able to share it with my wife's. Actually, in neural nets, this is called transfer learning. It's very difficult to extract expertise out of one neural network and put it into another. It's a big research area at the moment. How can agents work together to solve problems, coordinate with other agents, or, really excitingly, negotiate? Just something as simple as booking a meeting is a process of negotiation. I have my preferences. I don't like meetings before 9:00 in the morning. I like to keep my lunchtimes free. But maybe you have different preferences. How are agents going to reach agreement? They need to be able to negotiate with each other. All of these things have been big research areas over the last 20 years in multi agent systems. And we're beginning to see the fruits of that research make it out into the real world.

So just one example. If you've booked an Uber recently– well, firstly, shame on you, because they're not nice people. But secondly, what happens when you book a ride, that process of allocating somebody to pick you up and do your transport– that basic protocol is a protocol called the contract net protocol. The process through which that happens is a protocol that was designed within the multi agent systems community (there's a minimal sketch of the idea below). And it has a ton of other applications out there right now as well. It's probably the most implemented cooperative problem solving process, allowing you to allocate tasks to people in a way that everybody is happy with. So all of these things are active areas of research. If I want my Siri to talk to your Siri, my Siri and your Siri need social skills– the same kind of social skills that we all have: cooperation, coordination, negotiation, the ability to solve problems cooperatively.
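To make the contract net idea concrete, here is a minimal Python sketch of the announce-bid-award cycle. Everything in it is invented for illustration– the Contractor class, the distance-based cost model, and the names– and it is certainly not Uber's actual dispatch logic; real implementations add announcement deadlines, refusals, and result reporting.

    from dataclasses import dataclass

    @dataclass
    class Contractor:
        """A would-be task performer, e.g. a driver (illustrative only)."""
        name: str
        position: float  # position along a road, a stand-in for distance to the rider

        def bid(self, task_position: float) -> float:
            # Each contractor answers the announcement with its private
            # cost estimate for the task; here, just the distance to travel.
            return abs(self.position - task_position)

    def contract_net(task_position: float, contractors: list) -> str:
        # 1. Announce: the manager broadcasts the task to all contractors.
        # 2. Bid: each contractor returns a cost estimate.
        bids = {c.name: c.bid(task_position) for c in contractors}
        # 3. Award: the task goes to the lowest (best) bidder.
        return min(bids, key=bids.get)

    drivers = [Contractor("alice", 2.0), Contractor("bob", 8.5), Contractor("carol", 3.1)]
    print(contract_net(5.0, drivers))  # carol, the nearest driver, wins the task

The appeal of the protocol is visible even in this toy: the manager never needs to know the contractors' private costs, only their bids, which is why the same shape recurs in so many task-allocation systems.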
The debate about general AI, the kind of grand dream of AI, has sort of kicked off again recently because of these advances. I'm not a big believer that that's going to happen. Well, I'm not a believer at all that it's going to happen anytime soon. And nor do I envisage that an agent will have these skills in a very, very general sense. But for specific applications, like meeting booking– negotiation skills for meeting booking, protocols that will allow agents to book meetings, taking into account everybody's preferences, so that, for example, I can prove in a mathematical sense that my agent is not going to get ripped off, is not going to end up with a bad deal, that we end up with an outcome which is fair to all of us– these are all big areas in the multi agent systems community, and areas where we've made a lot of progress.

So I want to emphasize: multi agent systems are used today. There was an article by a colleague of ours called Jim Hendler– I don't know if you remember it, Manuela– about 15 years ago. And his article was called "Where Are All the Agents?" And he said, well, we've been working on these agents for 10 years, but actually, I don't see them. Well, I'd love to have Jim here today, because, of course, firstly, we've all got an agent with us, right? You've all got a smartphone in your pocket. There are hundreds of millions of software agents out there in the real world. And not just agents that interact with people– there are multi agent systems. High frequency trading algorithms, in particular, are exactly that. These are algorithms to which we have delegated the task of doing trading. People are out of the loop, completely out of the loop. And they couldn't be in the loop, because the timescales on which high frequency trading algorithms operate are way, way, way too small for people to be able to deal with in any kind of sense at all.

But here's the thing– and this is going to introduce the next part of the talk– when you start to build systems like high frequency trading algorithms, they start to get unpredictable. They start to have unpredictable dynamics. So here are a couple of examples of this. Take the October 1987 market crash. The guys with ties on will remember that. Was it Black Monday or Wednesday? I forget which. Does anybody remember? It was black something. And what led us to this October 1987 market crash? As with all of these things, there was no one cause. But actually, one of the big contributing factors was that the London Stock Exchange, the International Stock Exchange, had computerized just a couple of years before. I think they called it the Big Bang in London. It was when all the stock markets went computerized, and you went from handing people pieces of paper to actually doing trades electronically. And people built agents to do automated trading. And they gave agents rules like: if the share price goes down, sell. And you don't have to be an AI genius to see that if every agent has that kind of behavior, then a sudden event, like a sharp stock price fall for some reason, creates a cascading feedback effect. And that's exactly what happened. It wasn't the only cause, but it's generally accepted that it was one of the key causes of the October '87 stock market crash.
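Here is a toy Python illustration of that cascading feedback. The rule is exactly the one just described– if the price falls, sell– but the price-impact model and every number in it are invented purely to show the cascade; real markets obviously have buyers, circuit breakers, and far richer dynamics.

    def simulate_cascade(n_agents=100, shock=-0.02, impact_per_sale=0.0005, steps=10):
        """One external shock, then every agent sells into any falling market."""
        price, change = 100.0, shock  # an initial, external price drop
        prices = []
        for _ in range(steps):
            price *= 1 + change
            prices.append(price)
            # Rule: if the price went down, sell. Every sale pushes the
            # price down further, which triggers more selling next round.
            sellers = n_agents if change < 0 else 0
            change = -impact_per_sale * sellers
        return prices

    for step, p in enumerate(simulate_cascade()):
        print(f"step {step}: price {p:.2f}")

A single 2% shock never recovers: because every agent reacts to the fall, the fall sustains itself, which is the feedback loop in miniature.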
More dramatically, and more recently: 6 May 2010. We were having a general election here, but over in the US, in the middle of the afternoon, over a 30 minute period, the markets collapsed. Briefly, more than a trillion dollars was wiped off the Dow Jones Industrial Average. It was the largest intraday drop in the Dow Jones's history. But it only lasted 30 minutes. The markets bounced back. They didn't quite regain, and it wobbled a bit, but actually, they regained their position. And the joke was, of course, that if you were having a cup of coffee at the time, you would have missed the whole thing. This was happening on timescales that people simply couldn't comprehend. By the time they understood that something weird was going on, it was already starting to bounce back. So some very strange things happened. Accenture were trading at a penny a share for a while– a very, very brief period of time. And Apple shares, bizarrely, were trading at 100,000 dollars each for a very brief period of time. The scary thing about this is that, of course, now all these markets are connected. We're not operating in isolation. We're operating in a global marketplace. And a nightmare scenario is that you would have a flash crash at the end of the trading day. If you hit the bottom of the trough at the end of the trading day, nobody knows whether or not this is a real collapse or just a freak phenomenon which is going to rebound. And then you've got contagion. It starts to spread over Asia and the rest of the world. And these are very real and very, very scary phenomena.

So the next point of my talk is this: if we're going to build multi agent systems– and the problem is that people are frantically running ahead to build high frequency trading algorithms, trying to build them faster, and using things like AI sentiment analysis on Twitter to drive the decisions that are being made– then they are going to be prone to these unpredictable dynamics. We need to be able to understand them. We need to be able to manage them. And at the moment, management is just hitting a kill switch. It's unplugging the computers. That's how these things are managed at the moment. I mean, that's not all there is to it.

So let me briefly give you a feel for the first of the two approaches that we look at. The first approach is what's called formal equilibrium analysis. This is relevant for systems where there are small numbers of agents. It doesn't work for big systems, for various technical reasons. The alternative technique, which I'll talk about in a moment, works for big systems. But for small systems, where there are just a handful of interacting agents, what we can do is view the system as an economic system, and start to understand what its equilibria are and what kinds of behaviors the system would show in equilibrium. To put it another way, we view a flash crash as a bug in the system. If the system we have is exhibiting a flash crash, or some other undesirable behavior, we treat it as a bug, and we ask how did this bug come about, and how can we fix it. And so the technology that we apply is exactly the technology that's been developed in computer science to deal with bugs. And the most important of these technologies is a technique called model checking.

So here, I've got a simple illustration of model checking. The idea is, what I have here is just a description of a system. I said it was going to get a bit more technical, but not too much. These are the states of the system. And these arrows correspond to the actions that could be performed by the agents in the system. So if the system is currently in this state, and some agent does this action, then the system transforms to this state.
And what that gives us is this structure here, which we just call a graph. And this is just a model of the possible behaviors of my system. So it could be, for example, that some state down at the bottom here is a bad state– a flash crash state. And what we want to understand is, how does that flash crash state arise? How can we get to it?

OK, so in model checking, what we do is use a special language, a language called temporal logic, to express properties of the systems that we want to investigate. So here is a property written in a standard temporal logic. It just says: if ever I receive a request, then eventually I send a response. That's what it says. You don't need to worry about the details. And what the model checker does is check whether or not that property holds on some or all of these possible trajectories. Each path that you can take through that graph corresponds to one possible trajectory of our system. And imagine that there's some flash crash trajectory where bad things are happening. So a classic example of a query would be something like: eventually I reach a flash crash. What we're asking is, is there some computation of my system that will lead to that flash crash?

This is, again, a very big body of work. Colleagues of Manuela's at CMU won the Turing Award for their work in developing model checking technology, because it's industrial strength. It really works, with all sorts of caveats. You can really use this to analyze systems. And many model checkers are now available. And really, the reason for that is that these model checkers are relatively easy to implement. The algorithmics of these things are really quite simple. And you can end up with tools that really, really work, if you want to do this. So the two basic model checking questions are: is there some computation of the system on which my property, like there is eventually a flash crash, holds? And does that property hold on all computations of the system– is it inevitable that I'm going to have a flash crash on all possible trajectories? Those are the two basic questions that are asked in model checking.
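For the record, in a standard temporal logic like LTL, that request/response property is written G(request -> F response): always, if a request is received, a response is eventually sent. And the two basic questions can be shown in miniature. The toy transition graph below is invented for illustration (it is not the system from the slides); the two functions answer "does SOME trajectory reach the bad state?" and "do ALL trajectories reach it?".

    GRAPH = {                       # state -> possible successor states
        "calm":        ["calm", "jittery"],
        "jittery":     ["calm", "selling"],
        "selling":     ["flash_crash", "calm"],
        "flash_crash": ["flash_crash"],   # absorbing bad state
    }

    def reachable(start: str, bad: str) -> bool:
        """EF bad: is there SOME trajectory from start that hits the bad state?"""
        stack, seen = [start], set()
        while stack:
            state = stack.pop()
            if state == bad:
                return True
            if state not in seen:
                seen.add(state)
                stack.extend(GRAPH[state])
        return False

    def inevitable(start: str, bad: str, visiting=frozenset()) -> bool:
        """AF bad: does EVERY trajectory from start eventually hit the bad state?"""
        if start == bad:
            return True
        if start in visiting:   # found a cycle that avoids bad: an escape route
            return False
        return all(inevitable(s, bad, visiting | {start}) for s in GRAPH[start])

    print(reachable("calm", "flash_crash"))   # True: a crash is possible
    print(inevitable("calm", "flash_crash"))  # False: looping on calm avoids it

Industrial model checkers are vastly more sophisticated than this, but the two queries they answer have exactly this shape.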
So now, if we turn instead to multi agent systems, the idea is that each of our agents is trying to do the best it can for itself. My Siri is acting for me. Your Siri is acting for you. And what we now want to ask is, OK, under the assumption that our agents are acting rationally, doing the best that they can for us, what properties will hold, what trajectories will our system take? Assuming that your agent is doing the smartest thing it can in pursuit of what you want to do, like meeting booking, and mine is doing the smartest thing it can for me, what will happen? To cut a long story short, that's the question that we ask in this work. And the approach is what we call equilibrium checking: understanding the equilibria that a system can have. And the basic analytical concept that we use for this– what is a rational choice– is an idea from game theory, called Nash equilibrium, named after John Forbes Nash, Jr, who died just a couple of years ago. They made a film about him, A Beautiful Mind. The film is terrible, but the book on which it's based is much better. He formulated this notion of rational choice in these strategic settings. And the notion of Nash equilibrium is extremely simple. It's the idea that we use in our work for analysis.

Suppose all of us have to make a choice. You have to make a choice, you have to make a choice, you have to make a choice– all of us in this room make a choice. It's a Nash equilibrium if, when we look around the room and see what everybody else has done, none of us regrets our choice. We don't wish we'd done something else. Given that you lot did all your bits, I'm OK with what I did. But similarly, given that we all did our bits, you're OK with what you did. That's a Nash equilibrium. And what we look at in our system is: suppose our agents make Nash equilibrium choices, then what trajectories will result? So the picture looks very similar to model checking. We've got a model of our system, and we've got our claim, like there is eventually a flash crash. But now we know what the preferences are of the agents in the system. We know what each of them is trying to accomplish. And the question is, can I get a flash crash under the assumption that we all make rational choices? Or is a flash crash inevitable under the assumption that we all make rational choices?

So that's the work that we do. And we have a tool that does this; it's available online. The tool is called EVE, for Equilibrium Verification Environment, and it's available at eve.cs.ox.ac.uk. What you can do is describe a system using a high level language called Reactive Modules. It's a programming language, so you should expect to see a programming language. And then you specify the goals of each of the players. Those goals are specified as temporal logic properties, like the example that I talked about earlier. And what EVE will do is tell you what properties hold of that system under the assumption that all the component agents make Nash equilibrium choices. So that's what we mean by formal verification. It's game theoretic verification, because it's looking at the system from a game theoretic point of view. It's saying: you're going to do the best for yourself, I'm going to do the best for myself, we're going to make Nash equilibrium choices– then what will happen in my system? OK.

So that's a formal approach to understanding equilibrium properties in a precise mathematical sense– the precise mathematical sense of game theory. And it will tell us what properties will hold inevitably under the assumption of rational choice, or could possibly happen. The idea is, in this setting, the fact that something is possible in principle might not be relevant if it doesn't correspond to rational choices. All you're concerned about is what would happen under the assumption that our agents chose rationally.
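That "no regrets" definition translates directly into code. Below is a minimal Python sketch for a two-player meeting-scheduling game. The game, its payoffs, and the brute-force check are all invented for illustration, and have nothing to do with EVE's actual input language or algorithms; EVE works over Reactive Modules descriptions and temporal logic goals, not payoff tables like this.

    # Payoffs: (row player, column player) for each pair of choices.
    PAYOFFS = {
        ("9am", "9am"):   (2, 1),   # meeting happens; row player prefers 9am
        ("9am", "noon"):  (0, 0),   # no agreement, no meeting
        ("noon", "9am"):  (0, 0),
        ("noon", "noon"): (1, 2),   # meeting happens; column player prefers noon
    }
    CHOICES = ["9am", "noon"]

    def is_nash(row_choice: str, col_choice: str) -> bool:
        u_row, u_col = PAYOFFS[(row_choice, col_choice)]
        # No regrets: neither player does better by changing only their own choice.
        row_ok = all(PAYOFFS[(alt, col_choice)][0] <= u_row for alt in CHOICES)
        col_ok = all(PAYOFFS[(row_choice, alt)][1] <= u_col for alt in CHOICES)
        return row_ok and col_ok

    for r in CHOICES:
        for c in CHOICES:
            print(r, c, "Nash" if is_nash(r, c) else "not Nash")

Running it finds two pure equilibria, (9am, 9am) and (noon, noon): once both agents have settled on the same slot, neither can do better by unilaterally switching, even though each would have preferred the other slot.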
So that's equilibrium analysis. And it really only works for small systems. If you've got a handful of agents, it works. For technical reasons, with game theory, if you've got large numbers of agents– which of course you do on the global markets– it doesn't really work. So what do we do instead with large systems? The alternative approach is called agent based modeling. And to cut a long story short, in agent based modeling you simulate the whole system. You literally build a model of the economy with all of the decision makers in that economy, and you model the interactions directly– the buying, selling, and lending behaviors, and all the other stuff that you might want to capture. And then you run a simulation.

This is an old idea, but it's possible now for familiar reasons. Why is it possible? Because we have large data sets that we can get. In the finance sector, for example, regulators require that banks and other groups make their data, or parts of their data, publicly available. We can scrape that and use it in our simulations. And that's what we do. And we've got the compute resources to be able to simulate this at scale. So the kind of simulations that we do involve 7 million decision makers. Those decision makers correspond to banks, and individuals, and so on, and we simulate that at scale.

There are some challenges with this. When you start doing agent based simulation, it looks like a beautiful thing to do. I mean, literally, each of the decision makers in the economy is an individual program that's making decisions about what to do. But actually, just getting to meaningful simulations, where the thing doesn't just wobble up and down crazily, never settling down, is a challenge in itself. Once you've got simulations, you then discover that what you've done is plug in what are called magic numbers. To get anything sensible, I had to set this parameter to 13.3. But why? 13.3 is a magic number in the simulation. And this is a real problem, because it just feels arbitrary. We don't want to have to do that. Calibration is a huge problem. Calibration means: if your model tells you this is going to happen, how do you know that $1 in the model actually corresponds to $1 in the real world? At the moment, that's the cutting edge of agent based modeling, doing meaningful calibration. And predictably, the way that you do calibration at the moment, the state of the art technique, is to do lots of heavyweight machine learning on your model to try to understand what it's actually doing.

And finally, there's the question of whether you interpret the data that the model provides as quantitative or qualitative. My colleague Doyne Farmer in Oxford uses the analogy of weather forecasting. If you go back to the 1940s, how did they do weather forecasting? They would look at the pressure over the United Kingdom, the weather patterns, and they would just go back through their records to find something similar and then look at what happened the next day. And simulation of weather systems was widely regarded as something which was impossible, for a long time. It eventually became possible, when you could get the grain size of what you're modeling down to a sufficiently small area and you had the compute power available to do these simulations at scale. And now it works. And I think the claim is that we will be able to do similar things with agent based modeling. But this is simulation– Monte Carlo simulation, which means it involves random numbers, basically. You have to do lots of simulations and see the results that you get. And whether you interpret the results literally, to give you quantitative data, or qualitatively, to say this trajectory could happen, you could get a flash crash under these circumstances– that's also at the cutting edge of the debate on agent based modeling.
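Here is a minimal Monte Carlo sketch in that spirit: run many randomized toy market simulations and count how often a crash-like trajectory appears. Every threshold and rate below is a "magic number" in exactly the sense just described, and whether the resulting frequency means anything in the real world is precisely the calibration problem.

    import random

    def run_market(seed: int, steps: int = 200) -> float:
        """One randomized run; returns the worst price level relative to the start."""
        rng = random.Random(seed)
        price, low, change, panic = 100.0, 100.0, 0.0, False
        for _ in range(steps):
            noise = rng.gauss(0, 0.005)          # exogenous news: a magic number
            if change < -0.015:                  # a sharp drop triggers trend-followers
                panic = True
            elif change > 0:                     # a rising price calms them down
                panic = False
            pressure = -0.02 if panic else 0.0   # selling pressure: another magic number
            change = noise + pressure
            price *= 1 + change
            low = min(low, price)
        return low / 100.0

    drawdowns = [run_market(seed) for seed in range(200)]
    crashes = sum(1 for d in drawdowns if d < 0.8)
    print(f"{crashes} of {len(drawdowns)} runs fell more than 20% at some point")

Read qualitatively, this says something like "under these assumptions, cascades of this kind are triggered in a minority of runs." Reading the frequency quantitatively would require exactly the calibration step described above.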
When I was originally planning this talk, I was going to make this the central point of it. But then I panicked, because I'm not a finance person at all. And this work is only possible because we have somebody who works in the finance industry doing this. But this is a quote from [INAUDIBLE]. So for example, he's looking at the conditions that can give rise to flash crashes. And one of the things that he looks at is crowding. This is where everybody is buying into a particular asset. And if that asset becomes distressed, then the concern is that that distress propagates. So the conventional wisdom is that if everybody is buying into the same asset, this can be a bad thing, because it leads to contagion and propagation of distress. But actually, he's discovered that, under some circumstances, this can be a good thing. So these are qualitative things that he's doing here. The next stage is to try to calibrate this.

OK, so to wrap up. The agent paradigm, it seems to me, was a 30 year dream for AI, but it's now a reality. We all have agents with us. It's not yet the case that your Siri is talking to somebody else's Siri, but I think that's an obvious thing to actually happen. So I genuinely believe that will happen. And it won't happen in the sense of your Siri being generally intelligent. It will be in niche applications. The next step for the agent paradigm is to put agents in touch with other agents– for Siri to talk to Siri. But multi agent systems have unpredictable dynamics, so we need to be careful about the systems that we build. And I've described, in a fairly high level way, two different approaches to doing that. One is, for small systems, you can view these things as an economic system, a game in the sense of game theory, model it as a game, and then understand what its game theoretic equilibrium, or Nash equilibrium, behaviors are. Do I get something bad happening in Nash equilibrium? The alternative is agent based simulation– to directly model the system. And we can do that because we have compute resources at scale and we have data at scale.

So I'm going to wrap up there. I've given you a tour of where agents came from, why it's a natural idea, how it took 20 years to become a reality– but it is a reality, and we've all got agents in our pockets now– and where it might go. That vision of the future of computing– ubiquity, interconnection, intelligence, delegation, human orientation– just seems to me to be inevitable. And the agents paradigm, it seems to me, is bang in the middle of that. OK, thank you for your attention.

[APPLAUSE]

MODERATOR: A couple of questions.

AUDIENCE: Test. Thank you for your time and thank you for your talk. I have two questions, but for the sake of time, I'm going to let you choose which one to answer.

MODERATOR: Well, for the sake of time, you choose one [INAUDIBLE]

AUDIENCE: OK.

MICHAEL WOOLDRIDGE: An easy one, please.

AUDIENCE: All right, cool. One of the hot topics in artificial intelligence is the impact of automation on society. And I was curious as to what impact you think the evolution of multi agent systems will have on automation, and subsequently, how we can think about educating our children, educating ourselves, and ultimately, educating society?

MICHAEL WOOLDRIDGE: So I think there are two slightly conflated questions there. The issue of automation, and how multi agent systems will impact that– let me just take that one first. A lot of my job, and I dare say your jobs, is full of fairly routine management of tasks, processing paperwork, passing it on. There will be any number of workflows within an organization like this, as there are at the University of Oxford, to deal with processes which involve multiple people.
And that's extremely tedious and time consuming. The first thing that I could really see agents doing is simply automating the management of an awful lot of that. In our case, for example, when a student applies to us, you could have an agent that manages that process: it can remind me when I need to do things, make sure that the paperwork gets to the next people, flag up to the right people when things aren't processed in time, and so on. And that seems to me to be crying out for agent based solutions. So I think there will be big applications in that kind of scenario. And I think that will be an area where multi agent systems have an awful lot to offer. The education– I mean, I think the second part was to do with education, is that right?

AUDIENCE: How [INAUDIBLE] education to overcome automation?

MICHAEL WOOLDRIDGE: Well, I think that's a bigger question about AI itself, rather than multi agent systems. And I think the answer to that one is that the skills that won't be easily automated are human skills. Doctors, for example, are not going to be automated by X-ray reading machines. We have software that can read X-rays, and diagnose heart disease, and so on, very, very effectively, but that's a tiny sliver of what a doctor does. An awful lot of what a doctor does– most of what a doctor does– is to do with human skills, which require huge amounts of training, and which are not going to go away anytime soon. Although, I'm put in mind of this news article you may have seen over the last 24 hours, of a patient that was told he was going to die by a telepresence robot. That wasn't AI, by the way. It was just a crass application of telepresence technology.

MODERATOR: OK, there is one more question.

AUDIENCE: [INAUDIBLE] just quick–

MODERATOR: Just a second, just a second.

AUDIENCE: How would you suggest we should deal with biases? We're finding databases full of bias and inefficiencies. So, by delegating through agents, how do you deal with biases in the data?

MICHAEL WOOLDRIDGE: OK, so, wow, that's a huge question. I think what's interesting about bias is that the algorithmic treatment of bias– and it's not just AI; anything to do with an algorithm which has to make decisions, it's the same thing– is something which we didn't really anticipate was going to be an issue. And it's just exploded over the last couple of years. So there, I think, we're just developing the science to understand what it means, for example, to be able to say in a precise sense: when is a data set biased? When is an algorithm biased or unbiased? We're just getting there. People are frantically running ahead to try to understand those issues. And I'm pretty confident that over the next 10 years we will have a much richer understanding of that. What will be interesting is to see that experience fed back into undergraduate programs, for example, so that when we teach people about programming, we don't just teach them about programming, we teach them about those issues of bias. At the moment, it's a huge, difficult area. And there are no magic fixes for it. We're just at the beginning of a process to understand what those issues really are.

MODERATOR: OK, anyone over here have questions? I see many hands up, so just a– OK, you have the mic. So let me [INAUDIBLE] from that side.

AUDIENCE: Thank you very much. Thanks for the talk. The question, very quickly, is about defining what's meant by each agent's interest.
Once we define what each agent is doing in its best interest, then you can define the Nash equilibrium. But in real world applications, agents may have different interests. So what would be the two or three topics, or methods, that you think are the state of the art for inferring what the reward function, or the actual interest, of each agent is, so we can [INAUDIBLE] each agent and then go to the multi agent level?

MICHAEL WOOLDRIDGE: OK, so it's a slightly technical answer, so I apologize for that. The problem of, I'm interacting with people and I don't quite know what their preferences are– they could be this sort of person, they could be this sort of person– is a standard one in game theory. And there are standard tools. There's a variation of Nash equilibrium called Bayes-Nash equilibrium, which deals with exactly that. We haven't done that in our work, because it's kind of an order of magnitude more complex. But nevertheless, it's a standard technique. And in principle, you can use that technique to understand this. I mean, you could then argue against the models that they use. What they do in Bayes-Nash equilibrium is, you've got a space of possible types. You could be this type, or this type, or this type of person– in other words, have these preferences, or these preferences, or these preferences. And what you know is the probability that they're of this type, or this type, or this type. You could immediately argue that even that is asking quite a lot. On large scale systems, that might not be an unreasonable thing to do. But we don't, for the reasons I've said, look at large scale systems using game theory.

MODERATOR: One final question [INAUDIBLE]

AUDIENCE: I had a question about agent based modeling. So, take the example of the flash crash: statistics say this is something that will happen once every billion years or something. Do you think that there are some issues with the way we do agent based modeling, or the reliance on simulations to model whether things are likely to happen or not?

MICHAEL WOOLDRIDGE: So is that an actual quote? The once every billion years? Because it seems a very silly quote, given that it's actually happened.

AUDIENCE: Yeah, something like that. I mean, these events are super rare. [INAUDIBLE] there are risks.

MICHAEL WOOLDRIDGE: Well, there have been smaller scale flash crashes since then. There've been a number of them. The scary thing about flash crashes is if they happen in the circumstances where, like I say, you hit the trough at the end of the trading day, when the markets close– that's what's potentially very scary. And there could be other circumstances. So in our simulations, what we aim to do at the moment is just to get qualitative indications: look, these are the kinds of conditions. Because these are hugely complicated events. It's not just one factor. Certainly, the flash crash couldn't have happened without high frequency trading. So that's certainly a contributing factor, but by no means is it the only one. I'm not sure whether they got to the bottom of whether or not somebody had actually done something fraudulent in the flash crash. I'm sure some of you know the answer to that. But there's a huge range of things. What we aim to do is be able to give you the kind of characteristics: look, if your system has these properties, this is the kind of trajectory that it might exhibit under these circumstances.
So, qualitative indicators– which are still useful for us. We're right on the frontier of going from that to being able to say: if your leverage is this much, and the crowding is this much, then the probability is this much. We're on the edge of being able to do that. We can't do it with confidence yet. That's a way off, I think.

MODERATOR: We have time for one final question. There was a girl in the back. [INAUDIBLE] for a while, sorry. OK, thank you.

AUDIENCE: Hi, professor. When I hear you talk about the agents, and I see you show the video from Apple, I'm curious whether you see agents as a solution to the I/O problem when dealing with computers. Do you see it as the next step, or the next trend, towards people interfacing with computers? Obviously, now we see companies in America, like CTRL-labs, who in their work are looking at gestures, reading the nerve signals to interact with computers. One of the big revolutions in the industry is–

MODERATOR: [INAUDIBLE] question.

AUDIENCE: So, the question is, do you see it as a solution to the I/O problem, or is there a bigger application than that?

MICHAEL WOOLDRIDGE: I think, yes– it's a solution to the human computer interface problem, which I think is what you're saying. You're talking about the input/output problem, is that right? So I think it's a solution to the human computer interface problem. The reason that so many people are working on it is because at the moment they don't see any alternative. Gesture based interfaces are certainly going to have a role to play, but I think they're not at the stage at the moment, even remotely, where they could be rolled out. And it's hard to see how a gesture based interface handles something like, book me a meeting with Manuela. It just seems easier to say that than to try and do something with gestures. Maybe somebody will come up with something innovative there, I don't know. But at the moment, I think gesture based interfaces are a very, very niche area. Brain reading– there, I think, again, we're nowhere near being able to do anything like, book me a meeting with Manuela, through brain reading. I think the state of the art there is a one or a zero. Possibly something a little bit more sophisticated, but not much. So at the moment, I'd say the reason that people are chasing this up is because they just don't see any alternative for an awful lot of these systems. If you're driving a car, how are you going to interface? You can't take your hands off the wheel and start typing. You certainly can't do gestures. So it's the only alternative that's there.

MODERATOR: So let's thank Professor Wooldridge.

[APPLAUSE]
