We are at a moment of profound societal transition, not just with respect to science and technology, but in the way we work, live, and think. No sector will be spared. No one’s life will go unaltered, even if the impacts are uneven. With advances in computing power and storage and improvements in machine-learning algorithms, the field of artificial intelligence (AI) is developing at unprecedented speed. AI promises to transform medicine with strikingly accurate diagnostics. Machines are acquiring language and are even beginning to create art. AI will no doubt generate these and other positive outcomes—it already has—but there is now a compelling need to take up the legal, ethical, and social implications of the new world rapidly unfolding before us.
The goal is to create as diverse and inclusive a conversation around AI as possible, so that the average person is better positioned to demand a more equitable distribution of the benefits to come. The public, or their representatives, need to play a role in strengthening policies that promote equitable access and accountability. The hope is that they will also have a hand in establishing economic and social support programs to mitigate the impact on those who are displaced. To fail to understand the social implications of AI is to risk being left behind, with the already marginalized lagging even further.
I write from Montreal, Canada, which is being touted as “the Silicon Valley of artificial intelligence.” The ambition is backed by hundreds of millions of private and public sector dollars. Montreal is home to Yoshua Bengio, one of the three eminent deep-learning experts who helped shape AI into the force it is today, and to one of the largest concentrations of academic deep-learning researchers in the world. Yet despite their proximity to these advances in machine learning and neuroscience, many Montrealers struggle even to define AI—a knowledge deficit symbolic of a culture where AI boosterism and ambivalence coexist.
The Canadian Information and Communications Technology Council (ICTC) defines AI as an “interdisciplinary field of study including computer science, neuroscience, psychology, linguistics, and philosophy.” This broad, enlightened definition illuminates the myriad ways in which people can participate in the field, both professionally and through media and academic discourse. And the recent and encouraging movement toward Open Science—widely adopted within the academic machine-learning community—makes information accessible to anyone who wants it. Initiatives such as OpenAI publish material for the public about AI, but unless people have a personal interest, they remain generally unaware of the available resources.
Part of the challenge lies in the fact that this science, which will help employ future generations, is complicated. Not unlike consciousness itself, AI eludes precise scientific definition, and many experts agree that no clear one can be offered. This spring, MIT Technology Review published an article warning that even the people building deep-learning models cannot explain how, exactly, they work. Instead, AI is more commonly defined in terms of what it does. As Kevin Kelly, author and senior maverick at Wired, bluntly puts it: “There’s nothing as consequential as a dumb thing made smarter.”
The potential consequences of AI are grabbing headlines in the mainstream media—often with a lot of hype, and often in a sci-fi or dystopian framing. One of the most imposing hurdles to understanding this technology is the public debate among experts over the last two years. Although disagreement is endemic to any historical transition, it can overwhelm the individual who simply wants to know, “How worried should I be?” Ironically, one important answer is another question: “How comfortable are you with asking questions?”
The technology itself has made the act of asking questions problematic. The fine-tuned algorithms of social media have helped create a digital culture in which people seek out answers and affirmation rather than sitting with contradiction and further debate.
One analogy frequently invoked by machine-learning experts on panels traces back to the computer scientist Andrew Ng: “Worrying about evil AI killer robots today is a little bit like worrying about overpopulation on the planet Mars.” The quote is usually deployed to mean “there are much more pressing matters to attend to,” as Ng likely intended. But I have also seen it used to deflect questions, and this is hardly the time to suppress curiosity.
U.S. Treasury Secretary Steven Mnuchin recently made the shockingly misinformed statement that, when it comes to “artificial intelligence taking over American jobs, I think we’re so far away from that it’s not even on my radar screen.” This kind of misinformation threatens the very people to whom Trump promised to deliver employment during his campaign. Keeping them in the dark hampers real policy progress, because no public demand for it ever materializes.
The hard questions will not just be about whether and when robots are going to take our jobs.
As a recent article published in New America Weekly notes, “Digital literacy, emotional and social competency (EQ), and adaptability are essential to closing achievement gaps. Human nature is about adaptability; we need to identify new ways to improve these soft skills, prepare individuals for a shifting job market, and ensure inclusivity in workforce development.”
Best-selling author Sam Harris’s conversation with the computer scientist Stuart Russell (found on Harris’s podcast Waking Up) is a must-listen for anyone curious about how to start thinking rationally about AI and its potential impacts. Both men lean toward the cautionary (in Harris’s case, sometimes the dystopian). But the most compelling aspects of their discussion underscore something the mainstream media’s coverage of technological disruption rarely touches: What are our collective core values? And, beyond autonomy, does humanity even share common goals?
Many people are calling for a closer union of technology and ethics. The Future of Life Institute is a formidable group of thinkers producing valuable information on this and other topics. When it comes to the data sets that machine-learning algorithms are trained on, we must start defining what our collective morals are. Adding more people to the conversation, from a variety of demographic backgrounds and academic disciplines (yes, the humanities and social sciences are still valuable), will be crucial to getting the answers to these questions right.
Likewise, the AI Now Report, which examines the near-term social and economic implications of AI, is available online for free. (AI Now also hosts a public symposium that can be live-streamed.) The report covers much of the same ground as the mainstream media: job loss, uneven distribution of benefits, potentially biased data sets. But it also takes up related and equally pressing issues, such as who will define more abstract concepts like “care,” and how people will adjust to the new lifestyles that advances in AI bring.
Over the coming decade we will debate the best way to implement the necessary social reforms, and who will pay for them. In Canada, the current government is clearly interested in participating, but precisely how remains unclear. Canada already offers a comparatively broad safety net, including federal and provincial support for families, a financially accessible university education, and national health care. In May, Navdeep Bains, our minister of innovation, science, and economic development, announced a $950 million “Innovation Superclusters Initiative”; a further $125 million has been allocated to a pan-Canadian AI strategy.
Speaking on a recent panel on AI and ethics in Montreal, Yoshua Bengio suggested that one benefit of having innovation start-ups remain in Canada (he was speaking specifically about Montreal) is tax revenue. Our government must build a war chest for people who will need support—in particular, those who may soon lose their jobs to AI-driven automation.
Indifference toward the impending impacts of AI on society is an urgent concern. For those in the public sector, specialist private institutions, and the media, it is increasingly important to find ways of preparing society for the inevitable, and likely sudden, changes that AI will produce. (We should also explain the positive dimensions of this transformational period, to motivate the general public to participate.)
The questions are only going to get harder: A universal basic income? Free university for those who need new skill sets? Should tech giants be taxed to support unemployed workers directly? Should robots be taxed? Although the full ramifications of AI are not yet clear, more can and should be done to stimulate the public’s general interest and to encourage people to participate in the evolving discussion about AI. Then policymakers will have no choice but to listen.
Katharine Dempsey is an editor and writer in Montreal, Quebec, who explores the intersection of society and advances in technology.