For decades, the conventional wisdom about Silicon Valley was that it leaned progressive. And by many measures (like donations by Big Tech employees to political candidates), the industry has been aligned with the Democratic politics that dominate the San Francisco Bay Area. But contrarian worldviews championed by prominent voices like Elon Musk and Sam Bankman-Fried have emerged that not only counter the old narratives but are actively merging with right-leaning political movements. Combined with the anxiety and aspirations created by artificial intelligence, these new social currents are taking on a cultish zeal.
Dr. Timnit Gebru, a prominent AI researcher fired from Google in 2020 for speaking up against what she perceived as the company’s lack of proper ethical guardrails, has partnered with other researchers and philosophers to coin the (somewhat unwieldy) acronym “TESCREAL” to describe the overlapping emergent belief systems that characterize the contrarian, AI-centric worldviews challenging progressivism. It stands for: Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism.
It’s a mouthful. But the various “-isms” overlap in their history and ideology. Transhumanism proposes that humans should augment themselves by combining biological and synthetic technologies as a way of evolving our species. Extropianism posits that humans can counter entropy, thus ultimately extending the human lifespan — perhaps infinitely. Singularitarianism suggests that technology will advance to a point where it begins to design itself, thus accelerating exponentially and leading to the “singularity,” or an irreversible explosion of intelligence and technological advancement. These three ideas have been percolating for decades and were popularized by technology evangelists such as Ray Kurzweil, who currently heads AI research projects at Google.
Cosmism, the “C” in TESCREAL, is a set of ideologies advanced by Russian scientists and philosophers such as Nikolai Fyodorov, Konstantin Tsiolkovsky, and Vladimir Vernadsky. Prominent Russia scholar Marlène Laruelle found Cosmism to be so fundamental to Russian nationalism that she made it the subject of the first chapter of her book on the subject. (See: Russian Nationalism: Imaginaries, Doctrines, and Political Battlefields)
Foundational to Cosmism is the drive to maximize space exploration and colonization and, if possible, to resurrect the dead. As we have mentioned previously in The Wide Angle, Putin’s Chief of Staff, Anton Vaino, has been deeply influenced by Vernadsky’s idea of the “Noosphere” (the notion that Earth will develop a kind of “global brain”). Tsiolkovsky also derived the foundational equations of modern rocketry and deeply influenced Elon Musk.
Rationalism, the established philosophical idea that reason should be the source of and basis for knowledge, has spawned communities of practice. Most notably, the website LessWrong.com has been a hotbed of rationalist discourse online. Attracting mostly (but not exclusively) young men, the rationalist community has a tendency toward hierarchy and a desire to “perfect” one’s understanding and application of reason. And according to some former members, certain rationalist communities have exhibited signs of cultish behavior and mind control.
Effective Altruism aims to reframe philanthropy in terms of both efficiency and ultimate outcomes. Rather than, say, giving a blanket to the freezing person right in front of you, it might make more sense to devise systems that ensure specific people get different resources to maximize their long-term chance of impacting the world. There’s a lot of hand-waving and rationalization here that I won’t attempt to parse now, but it’s a bit like putting Ayn Rand in charge of a homeless services program.
Sam Bankman-Fried, who famously squandered billions of dollars in FTX, a cryptocurrency Ponzi scheme, was a notable member of the Effective Altruist community. Will MacAskill, an Oxford philosopher and author of What We Owe the Future, a book about E.A. and adjacent themes, was a frequent collaborator with Bankman-Fried; they directed philanthropic investments together. One of Bankman-Fried’s stated goals was to make massive amounts of money so he could fund E.A. causes.
Lastly, Longtermism is a philosophy championed by MacAskill and his fellow Oxford philosopher Nick Bostrom. Mixing ideas from Russian Cosmism and E.A., Longtermism concerns itself with maximizing future “intelligences” in the universe, and posits that anyone who interferes with that goal is harming countless future (potential) lives.
This leads to some strange priorities, particularly a strong pro-natalist stance (you may recall that Musk has said that low birth rates are one of the biggest risks to humanity’s survival), but also a belief that in addition to biological intelligences, we should be maximizing machine intelligence in the universe. That means not only promoting biological space exploration and colonization (as per Cosmism), but also harnessing far-away planetary surfaces inhospitable to biological life to build giant server “farms” from hypothetical materials like “computronium,” a kind of “programmable matter” that could host vast pools of mechanical Einsteins capable of the next big breakthroughs for intelligent life.
If all of that sounds outlandish and orthogonal to solving the debt ceiling crisis, dealing with Earth’s climate problems, or otherwise improving conditions here on this planet, that’s because it is.
TESCREAL proponents have an authoritarian “ends justify the means” mindset rooted in the idea that if we do not submit to their urgent demands, we will extinguish billions of potential future intelligent beings. Surely we must not allow that to happen!
Eliezer Yudkowsky, a self-described AI theorist, believes that AI is likely to wipe out humanity and has argued that we should be willing to bomb data centers to stop its advance. Max Tegmark, an AI researcher at MIT, has also called for halting AI development in order to seek “alignment,” the idea that machine intelligence should work with humanity rather than against it.
Such alarmist arguments, which originate in science fiction and are quite common in the TESCREAL world, are rooted in a hierarchical and zero-sum view of intelligence. The notion is that if we develop machine superintelligence, it may decide to wipe out less intelligent beings, like all of humanity. However, there is no empirical evidence to suggest these fears have any basis in reality. Some suggest that these arguments mirror ideas found in discredited movements like race science and eugenics, even as others reject such charges.
TESCREAL is a convergent Venn diagram of overlapping ideologies that, because they often attract contrarian young men, tend to co-occur with other male-dominated reactionary and misogynistic movements. The Men’s Rights movement (the Manosphere), the MGTOW movement (Men Going Their Own Way), and PUA (Pick Up Artist) communities are all adjacent to the TESCREAL milieu.
Combining complex ideologies into such a “bundle” might seem dangerously reductive. However, as information warfare increasingly seeks to bifurcate the world into Eurasian vs. Atlanticist spheres, traditionalist vs. “woke,” fiat vs. hard currency, it’s difficult not to see the TESCREAL ideologies as integral to the Eurasianist worldview. I independently identified these overlaps over the last few years, and thanks to philosopher Émile Torres and Dr. Gebru, who together coined the TESCREAL acronym, we now have a shorthand for describing the phenomenon.
As you encounter these ideologies in the wild, you might use the TESCREAL lens, along with its alignment with Eurasianism and Putin’s agenda, to evaluate them and to ask whether they tend to undermine or enhance the project of liberal democracy.
TESCREAL ideologies tend to advance an illiberal agenda and authoritarian tendencies, and it’s worth turning a very critical eye toward them, especially where those tendencies are demonstrably present. Clearly there are countless well-meaning people trying to use technology and reason to improve the world, but that effort should never come at the expense of democratic, inclusive, fair, patient, and just governance.
The biggest risk AI poses right now is that alarmists will use the fears surrounding it as a cudgel to enact sweeping policy reforms. We should resist those efforts. Now more than ever, we should be guided by expertise, facts, and evidence as we seek to use technology in ways that benefit everyone.