We have arrived at a watershed moment: artificial intelligence is finally at the center of the mainstream conversation. This owes less to the almost obligatory speculation about robots in the workplace and more to AI’s relationship with data, the power of ubiquitous and increasingly sophisticated algorithms, and the nature of the world being shaped around them.
As the ethicist Luciano Floridi wrote in Megatech: Technology in 2050, our day-to-day life is becoming more digital: “playing, educating, entertaining, dating, meeting, fighting, caring, gossiping, advertising. We do all this and more in an enveloped infosphere [building the world around AI instead of incorporating it into our world], where we are more analogue guests than digital hosts.”
The loss of individual and collective agency is the hidden cost of building our lives around algorithms and around the new norm of platforms vying for our sustained attention. When left unchecked, the combination of rich datasets and fine-tuned (and often proprietary) algorithms can control our thoughts and erode our capacity to make choices. The preservation of free will is essential to democratic citizenship, and we must examine what is at stake in a world that delegates decisions to algorithms. As I have heard several times from machine-learning experts, “There is no putting the genie back in the bottle.”
Corporate self-interest poses another hurdle. Mark Zuckerberg’s mea culpa in response to the Cambridge Analytica scandal was misleading. Roughly 1.4 billion users log on to Facebook daily, and it was not enough for him to say that “we have a basic responsibility to protect people’s data, and if we can’t do that, then we don’t deserve to have the opportunity to serve people.” Zuckerberg knows that, on his platform, it is the “people” who serve the business model. As the internet sociologist Zeynep Tufekci has pointed out, Facebook makes its money “by profiling us and then selling our attention to advertisers, political actors, and others.”
A different and more disturbing framework was offered by Amin Toufani in his otherwise hopeful keynote on “Exonomics” (the economics of exponential technologies that are changing the way we live) at last year’s SingularityU conference in Toronto. Early in his talk, he observed that people now function as businesses once did (that is, people have become more entrepreneurial), while businesses have taken on responsibilities that were once the province of government. “What do governments do?” Toufani asked rhetorically. “Not much.” As attractive as that framing seemed to the mostly private-sector audience, it was chilling to those in attendance who question whether the private sector, tech or otherwise, should be left to regulate itself and to set policies for the use of ordinary citizens’ data.
Soon after the Cambridge Analytica story broke, Google AI researcher François Chollet took to Twitter with a bold thread criticizing Facebook’s use of AI. “The problem with Facebook is not just the loss of your privacy and the fact that it can be used as a totalitarian panopticon [a prison designed for control through surveillance]. The more worrying issue, in my opinion, is its use of digital information consumption as a psychological control vector.”
According to Chollet, Facebook began applying deep learning to its newsfeed and advertising networks in 2016: “Facebook has invested massively in it.” A deep-learning system is not programmed with explicit rules; it “learns” statistical patterns from large numbers of examples, and its predictions generally improve as it is fed more data. Recent advances in the field are a large part of why tech companies have been investing so heavily in AI research. Deep learning requires huge amounts of data, which can be costly to acquire, unless, of course, people are willing to give it up for free on, say, a social media platform.
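To make that mechanism concrete, here is a minimal sketch in Python (illustrative only, and in no way Facebook’s actual system): a single artificial “neuron” that infers a hidden rule purely from example data via gradient descent. Deep learning stacks millions of such parameters into layers, but the core loop is the same, which is why more data translates so directly into better predictions.

```python
# A toy illustration of "learning from data": a one-neuron logistic model
# trained by gradient descent. All names and numbers here are invented for
# the example.
import numpy as np

rng = np.random.default_rng(0)

# 200 examples with 3 features each; the labels follow a hidden rule that
# the model is never told and must infer from the data alone.
X = rng.normal(size=(200, 3))
y = (X @ np.array([1.5, -2.0, 0.5]) > 0).astype(float)

w = np.zeros(3)  # the model's parameters, adjusted as it "learns"
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w)))      # current probability estimates
    w -= 0.1 * X.T @ (p - y) / len(y)   # nudge weights to reduce error

pred = (1 / (1 + np.exp(-(X @ w)))) > 0.5
print(f"accuracy after training: {(pred == y).mean():.0%}")
```

On this toy problem, the model recovers the hidden rule almost perfectly. The unsettling point of Chollet’s thread is that the same mechanism works just as well when the examples are records of what billions of people read and click on.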
Following his Twitter tirade, Chollet published an important piece, “What Worries Me About AI,” on Medium, in which he drew a distinction between a manipulative AI tool and an empowering one: “One path leads to a place that really scares me. The other leads to a more humane future. There’s still time to take the better one.”
Not all governments have been content to let the infosphere regulate itself. French President Emmanuel Macron sees both the opportunity and the urgency to build a better future in France and across the EU through a national AI strategy. Discussing his plan with Nicholas Thompson, editor-in-chief of Wired, Macron asserted that AI could “totally jeopardize democracy.” He emphasized his desire to make France’s approach to AI interdisciplinary: “This means crossing maths, social sciences, technology, and philosophy. That’s absolutely critical. Because at one point in time, if you don’t frame these innovations from the start, a worst-case scenario will force you to deal with this debate down the line.” Most researchers working on AI and ethics endorse Macron’s approach: retrofitting regulations onto entrenched systems is a messy job (as we will see with Facebook).
Policymakers must now acknowledge that oversight of the relationship between data, privacy, and algorithms has become a matter of citizens’ rights. What is needed are robust privacy regulations, policies that let citizens share in the benefits of their own data, and rules that prevent the manipulative use of AI.
Facebook is the test case: will governments actually find ways of regulating the Leviathan to protect the public interest more effectively? Possibly not this round. But the fight for the right to individual autonomy in the new digital environment is a cause that must be embraced.
Katharine Dempsey is an editor and writer in Montreal, Québec, who explores the intersection of society, culture, and advancing technologies. She writes the newsletter Mai (www.maimedia.ca) and is the editor of the forthcoming publication AI& (an examination of AI in Canada).