The writer and technology critic Evgeny Morozov appeared recently at FutureFest in London for a conversation on the Geopolitics of Artificial Intelligence with John Thornhill of the Financial Times. The Spectator presents edited highlights of this landmark interview as part of its continuing coverage of the fast-evolving AI terrain.
Thornhill: In September 2017, Vladimir Putin observed, “Artificial Intelligence is the future, not only for Russia but for all humankind. It comes with colossal opportunities but also with threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world.” Is he right?
Morozov: I think to a large extent, yes. We can discuss how much the debate around AI is hype and how much it is valid, but it seems clear that as AI becomes the infrastructure around which many other domains—industries and parts of society, whether education, transportation, or health—will be shaped, it matters a great deal who owns it. If it’s owned by states with certain geopolitical imperatives, they will be able to twist it any way they want.
There is a realization on the part of non-U.S. actors that the last two or three decades have been relatively good for extending U.S. power via this infrastructure. China certainly has been taking a lot of steps to catch up. In places like Russia, India, and Latin America, too, there is the understanding that if they do not have an alternative to American and Chinese infrastructure, they will be forced to play by rules they did not set. And especially now, with Trump, there is a fear of complete unpredictability as to which way the wind might blow in the United States.
Thornhill: It is interesting that Putin made this comment, because Russia is not evidently a superpower in AI, is it? Or do you think it is in certain areas?
Morozov: Russia clearly has a lot of good researchers, and its university culture is still one of the strongest when it comes to churning out graduates in engineering and AI. They also have a couple of assets in this domain that other countries don’t have. They managed to preserve Russian search engines and Russian social networks, and to some extent those are the ones generating the data, so while Russia is not a leader in AI, it definitely has some of the ingredients that other countries are missing. I wouldn’t necessarily bet against Russia in this field. They are not at the level of China, and certainly not at the level of the United States, but they have the human potential and some of the key levers through which they can catch up.
Thornhill: You’ve touched on this, but I’d really like to get a definition of AI that we can all agree on, because the term is bandied around, and people mean very different things by it. What is your understanding of it and can you just expand a little on why you think it will be truly significant?
Morozov: I think as far as states and governments are concerned, we are talking about a much weaker definition of artificial intelligence. We are talking essentially about the ability to predict, the ability to classify, the ability to recognize certain objects and to differentiate them from one another. Does it amount to a full-blown equivalent of the human mind? Of course not. Could it result in billions in savings by mechanizing and automating things that were previously done by humans? Yes. And this is where I think the geopolitical ramifications arise.
Thornhill: The debate about AI in some respects seems to resemble the early debate about the internet and the World Wide Web. We tended to think the internet was going to be a fantastic benefit to mankind, that it was going to liberate and democratize society. You were one of the first people to warn that this was not necessarily going to be the case. And it seems to me that we are having a bit of a second debate, about AI, at the moment. So how do you think this debate is going to work out? How are we going to get the best of AI and prevent the worst?
Morozov: In my own early work, almost a decade ago, there was the implicit assumption that every debate about the internet, its benefits and virtues, was also a debate about the virtues and benefits of capitalism, in particular this current stage of capitalism, with its heavy commitment to data with some kind of cognitive component to it. Essentially we are talking about the ability of a couple of firms to deliver certain benefits based on certain business models. To me this is what the debate about the internet has been, even though much of the media and the public sphere refuse to treat it that way—preferring instead to see the internet as a medium of some kind and not just as the byproduct of business models.
If you want to understand the impact of AI on the world, I think you would be far better off understanding the points at which there is no overlap between Alibaba, let’s say, and Amazon Web Services. Having this abstract debate about AI and its impact on humanity is not going to be very productive, because ultimately the activities of those firms are heavily shaped by the internal dynamics of competition in this industry.
Just to give you an example: a decade ago virtually nobody was offering web services or cloud services. Most firms in this field were selling advertising: Google was doing it, Facebook was doing it; Amazon was selling products. Once Amazon entered the services sphere and understood there was a lot of money to be made, suddenly, a decade later, everybody is doing services, and they realize that maybe that is the future, because the profit margins of selling web services are much higher than those of selling products.
Thornhill: This also touches on the nature of power in the modern world, doesn’t it, because seven of the 10 most valuable companies in the world are U.S. and Chinese tech companies, and they are really very much on the leading edge of AI research and deployment.
Morozov: I think there are several ways to interpret this, and they probably overlap. In one sense you can see the immense value these companies have accumulated in the stock markets, but you can interpret it as just a sign of stagnation in the other parts of capitalism. So this is one way to read it: it does not reflect internal power; it reflects the last hope of the investor class in the ability of capitalism to pull itself out of the crisis. And who can do it? Uber can do it, or Facebook can do it, or Amazon can do it, by introducing immense efficiency into how we organize economic activity.
There is another way to read it, which I think also has some truth to it: the expectation behind Amazon’s immense value has to do with the sense that they are going to enter almost every single industry where there is some data and some benefit to be had. So now they are entering the pharmaceutical business, they are entering the insurance business, and next they may be entering, who knows, the banking business.
Thornhill: I’d like to delve a bit deeper into the powers of these companies that you’ve been talking about. There was a debate, when Mark Zuckerberg was traveling around the 50 states of the U.S., about whether he was going to become president, and there was a counternarrative that said: why on earth would he want to give up power? Is it wrong to think about geopolitics now in terms of nations or blocs; should we be thinking about it more in terms of these corporations?
Morozov: I wouldn’t underestimate the power of the state, still, to intervene and shape this environment. China gives you the most visible and vivid example of how a state-driven strategy can actually produce an outcome. Clearly, we can be negative about the human rights implications of Chinese state policies. Still, in retrospect the project has been a great instrument of trade, essentially because they kept the foreign companies out of the market, allowing them to build their own domestic tech industry, which, granted, serves their political agenda. I think discounting the state is not the right way to think about this.
And even in the U.S., I think if Congress wanted to crack down on Facebook, it would, but it does not, partly for geopolitical reasons. If you saw the memo that Mark Zuckerberg was carrying during his testimony in Congress, there was a very interesting passage, caught by photographers, which basically stated that America should not crack down on Silicon Valley and Facebook because that would benefit China. I think that is one of the main reasons I would not expect any action from Washington: it’s not because they’re powerless, it’s because given the current conjuncture it pays for them not to do much.
I would also like to highlight the importance of thinking globally and transnationally. If you look at the main entity shaping the present and future of AI, it would be Softbank, a Japanese firm that has amassed almost $100 billion for its Vision Fund. Softbank goes and buys up the most important companies in robotics, AI, transportation and ride-sharing, and so forth. And if you look at Softbank, it’s not obvious which state it actually represents. In terms of state investment, most of its money actually comes from Saudi Arabia and the sovereign wealth fund of the Emirates. They also have money from European sources, like Daimler, a company that technically should be afraid of the Chinese but whose largest shareholder is now actually a Chinese company. Nonetheless this German company invested in Softbank together with Saudi Arabia, and Softbank then goes and buys Boston Dynamics, the big robotics firm that has been in the news. Those are the kinds of movements that are very easy to miss if we focus only on the companies and the nation-states and do not engage with this global dimension.
Thornhill: While we are on nation states, I’d like to contrast and compare what’s going on in the U.S. and China in terms of AI. These are two very different models of development. Which of these two models is more likely to prevail?
Morozov: I think there was an ideological error made in Washington. To me, it’s quite telling that before he wrote The End of History, Francis Fukuyama was actually a staffer in the State Department. The model that was honed and articulated during the Cold War advanced the idea that America does not really have to keep investing to maintain its power in the global sphere, that its dominance will simply continue. China has proved that that is not going to happen, and what we’re seeing now with Trump is essentially an effort to react to that.
To what extent does one model work better than the other? Well, without doubt the Chinese model works. It works in that they have more data than they know what to do with, and they do not have privacy constraints of any kind. We can talk about the democratic shortcomings of the Chinese model, but that is a very different discussion from one about the efficiency of their system. They do what I think is right: they provide a lot of monetary incentives. They have announced that they are going to put more than $135 billion into the development of AI by 2030, and that is only at the central level and does not count the regional and municipal levels. They have allocated to each big company the responsibility for breaking into a sector—Alibaba is supposed to take care, I think, of smart cities, and Baidu of autonomous cars—so they have assigned a lot of specific tasks, and they do a lot of industry-shaping, which of course America was doing very happily during the Cold War.
Thornhill: You mention the Cold War. Is it useful to think of this as an AI arms race? Is it a kind of Cold War scenario that we are facing now?
Morozov: Yes, but again, you have to see it, I think, historically. China, I would argue, has over the past two decades been trying to extricate itself from unnecessary dependencies. They have tried to extricate themselves from their enormous dependency on the dollar, though that did not really work out very well. They feel the need to identify those dependencies that are not healthy for the extension of Chinese power, and I think they have come to identify AI as one of those bottlenecks, and other countries have woken up to that as well. We do not yet see militaries occupying countries to defend the power that they have. But even with the available commercial mechanisms, they can do quite a lot.
Thornhill: The people who think about these questions in the European Commission would say: we understand we are lagging behind the U.S. and China in this area, but we think we are creating an alternative and ultimately more sustainable data infrastructure in Europe. GDPR (the General Data Protection Regulation implemented in May 2018 by the European Union) was just one example of that. We think that if people trust us with their data, it will be more valuable than untrusted data. We will be able to create a new ecosystem, a new and more viable data economy, and an AI industry on the back of that. Do you buy that, or do you think we are just going to get steamrolled by the U.S. and the Chinese?
Morozov: I don’t buy that. I appreciate Europe’s efforts to articulate a vision that tries to do something in artificial intelligence while maintaining some control over citizens’ data. But at a higher level, you still get private industry doing whatever it wants. Just this week, Bollore, a major French transportation and logistics group, struck a big deal with the Chinese company Alibaba Cloud. When you have individual deals within your core industries, when your key municipalities are being approached by Amazon, Microsoft, Google, and IBM on a daily basis, and when you have so many bilateral deals, you do not have any coordination.
Thornhill: What does a post-Brexit Britain look like in this world?
Morozov: Britain has done better in AI thus far than its standing suggests. But it has also sold off its crown jewels—DeepMind went to Google, ARM to Softbank—so essentially there has not been strategic coordination at the level of retaining some control over the key infrastructure. We have also seen the British government announce a major funding initiative for AI earlier this year. Still, I think at the scale at which these things now operate, either Britain sticks with Europe or it can forget about being anything but a customer.
Thornhill: How about the rest of the world? How does it cope in a world of two AI superpowers?
Morozov: It doesn’t. What you see are certain strategic state efforts, though they are not covered much in the international press. For example, China has many bilateral funds: at the state level, with countries like Russia or Belarus, they invest jointly in local companies and try to retain ownership of them before Google moves in. But ultimately you have to be realistic, given the capital these companies are pouring into developing and honing AI. Amazon spends, I think, $15 billion on R&D every year, Facebook maybe $13 billion, Alphabet is somewhere in that range, and Alibaba has announced $15 billion just for AI over the next three years. Those are not trivial sums; you cannot do it as a start-up in Brazil or Bangladesh—it is going to be very hard because the money going into this field is huge. We are looking at potential alliances: Russia with China, the U.S. possibly with the U.K., and then Europe going on its own, maybe with Norway and Switzerland.
Thornhill: Europe may fall behind in AI due to data-protection laws. How can we balance privacy with the need to keep up with the rest of the world in this area?
Morozov: What’s missing in Europe are experiments that articulate different visions of data ownership. What about social data? What about data that I produce in common with the people who live in my neighborhood? Who has ownership of that? Will it be considered private property, which means that any public-sector actor will have to pay to get that data? Or will we consider data produced in common as some kind of common infrastructure that should belong neither to Google nor to me? I think Europe has not succeeded in taking on board this vision of social data ownership. If you really want to preserve a social market economy that is conscious of its own foundations and its own conditions of possibility, you have to be conscious of the role that shared resources like data play in sustaining it. You cannot simply hand that data over to private ownership and carry on.
Evgeny Morozov is a writer and researcher who studies the political and social implications of technology. He is the author of two books, The Net Delusion (2011) and To Save Everything, Click Here (2013), both published by PublicAffairs.
John Thornhill is the innovation editor at the Financial Times.
FutureFest, a two-day festival of immersive experiences, compelling performances and radical speakers, all designed to challenge our perceptions of the future, was held on July 6–7, 2018. The event is the flagship festival of Nesta, a global innovation foundation. It is a nonprofit initiative aimed at bringing future thinking to the public realm so that everyone can benefit. To find out more, visit www.futurefest.org. This transcript has been edited for length and clarity.