

The Wide Angle: Understanding TESCREAL — the Weird Ideologies Behind Silicon Valley’s Rightward Turn

by Dave Troy

May 1, 2023 | Politics, The Wide Angle

PHOTO CREDIT: Thomas Hawk, RIA Novosti, Cointelegraph

For decades, the conventional wisdom about Silicon Valley was that it leaned progressive. And by many measures (like donations by Big Tech employees to political candidates), the industry has been aligned with the Democratic politics that dominate the San Francisco Bay Area. But contrarian worldviews held by prominent voices like Elon Musk and Sam Bankman-Fried have emerged that not only counter the old narrative but are actively merging with right-leaning political movements. Combined with the anxiety and aspirations created by artificial intelligence, these new social currents are taking on a cultish zeal.

Dr. Timnit Gebru, a prominent AI researcher fired from Google in 2020 for speaking up against what she perceived as the company’s lack of proper ethical guardrails, has partnered with other researchers and philosophers to coin the (somewhat unwieldy) acronym “TESCREAL” to describe the overlapping emergent belief systems that characterize the contrarian, AI-centric worldviews challenging progressivism. It stands for: Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism.

It’s a mouthful. But the various “-isms” overlap in their history and ideology. Transhumanism proposes that humans should augment themselves by combining biological and synthetic technologies as a way of evolving our species. Extropianism posits that humans can counter entropy and thus ultimately extend the human lifespan, perhaps indefinitely. Singularitarianism suggests that technology will advance to a point where it begins to design itself, accelerating exponentially and leading to the “singularity,” an irreversible explosion of intelligence and technological advancement. These three ideas have been percolating for decades and have been popularized by technology evangelists such as Ray Kurzweil, who currently heads AI research projects at Google.

Cosmism, the “C” in TESCREAL, is a set of ideologies advanced by Russian scientists and philosophers such as Nikolai Fyodorov, Konstantin Tsiolkovsky, and Vladimir Vernadsky. Prominent Russia scholar Marlène Laruelle found Cosmism to be so fundamental to Russian nationalism that she made it the subject of the first chapter of her book Russian Nationalism: Imaginaries, Doctrines, and Political Battlefields.

Foundational to Cosmism are the ideas of maximizing space exploration and colonization and, if possible, resurrecting the dead. As we have mentioned previously in The Wide Angle, Putin’s Chief of Staff, Anton Vaino, has been deeply influenced by Vernadsky’s idea of the “Noosphere” (the notion that Earth will develop a kind of “global brain”). Tsiolkovsky also developed the formulae needed for rocketry and deeply influenced Elon Musk.
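(For readers curious about those formulae: the best known is what is now called the Tsiolkovsky rocket equation, which relates the change in velocity a rocket can achieve to its exhaust velocity and its mass ratio:

Δv = vₑ × ln(m₀ / m₁)

where m₀ is the vehicle’s initial mass, including propellant, and m₁ is its final mass once the propellant is spent.)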

Rationalism, the established philosophical idea that reason should be the source of and basis for knowledge, has spawned communities of practice. Most notably, the website Lesswrong.com has been a hotbed of rationalist discourse online. Attracting mostly (but not exclusively) young men, the rationalist community tends toward hierarchy and a desire to “perfect” one’s understanding and application of reason. And according to some former members, certain rationalist communities have exhibited signs of cultish behavior and mind control.

Effective Altruism aims to reframe philanthropy in terms of both efficiency and ultimate outcomes. Rather than, say, giving a blanket to the freezing person right in front of you, it might make more sense to devise systems to ensure that specific people get particular resources to maximize their long-term chance of impacting the world. There’s a lot of hand-waving and rationalization here that I won’t attempt to parse now, but it’s a bit like if Ayn Rand were put in charge of a homeless services program.

Sam Bankman-Fried, who famously squandered billions of dollars in FTX, a cryptocurrency Ponzi scheme, was a notable member of the Effective Altruist community. Will MacAskill, an Oxford philosopher and the author of What We Owe the Future, a book about E.A. and adjacent themes, was a frequent collaborator with Bankman-Fried; the two directed philanthropic investments together. One of Bankman-Fried’s stated goals was to make massive amounts of money so he could fund investments in E.A.

Lastly, Longtermism is a philosophy championed by MacAskill and his fellow Oxford philosopher Nick Bostrom. Mixing ideas from Russian Cosmism and E.A., Longtermism concerns itself with maximizing future “intelligences” in the universe and posits that anyone who interferes with that goal is harming countless potential future lives.

This leads to some strange priorities, particularly a strong pro-natalist stance (you may recall that Musk has said that a low birth rate is one of the biggest risks to humanity’s survival), but also a belief that, in addition to biological intelligences, we should be maximizing machine intelligence in the universe. That means not only promoting biological space exploration and colonization (as per Cosmism), but also harnessing far-away planetary surfaces inhospitable to biological life to build giant server “farms” from hypothetical materials like “computronium,” a kind of “programmable matter” that could host vast pools of mechanical Einsteins and lead to the next big breakthroughs for intelligent life.

If all of that sounds outlandish and orthogonal to solving the debt ceiling crisis, dealing with Earth’s climate problems, or otherwise improving conditions here on this planet, that’s because it is.

TESCREAL proponents have an authoritarian “ends justify the means” mindset rooted in the idea that if we do not submit to their urgent demands, we will extinguish billions of potential future intelligent beings. Surely we must not allow that to happen!

Eliezer Yudkowsky, a self-described AI theorist, believes that AI is likely to wipe out humanity and that we should bomb data centers to stop its advance. Max Tegmark, an AI researcher at MIT, has also called for pausing AI development in order to seek “alignment,” the idea that machine intelligence should work with humanity rather than against it.

Such alarmist arguments, which originate in science fiction and are quite common in the TESCREAL world, are rooted in a hierarchical and zero-sum view of intelligence. The notion is that if we develop machine superintelligence, it may decide to wipe out less intelligent beings, such as all of humanity. However, there is no empirical evidence to suggest these fears have any basis in reality. Some suggest that these arguments mirror ideas found in discredited movements like race science and eugenics, even as others reject such charges.

TESCREAL is a convergent Venn diagram of overlapping ideologies that, because they often attract contrarian young men, tend to co-occur with other male-dominated reactionary and misogynistic movements. The Men’s Rights movement (Manosphere), the MGTOW movement (Men Going Their Own Way), and PUA (Pick Up Artist) communities are near-adjacent to the TESCREAL milieu.

Combining complex ideologies into such a “bundle” might seem dangerously reductive. However, as information warfare increasingly seeks to bifurcate the world into Eurasian vs. Atlanticist spheres, traditionalist vs. “woke,” fiat vs. hard currency, it’s difficult not to see the TESCREAL ideologies as integral to the Eurasianist worldview. I had independently identified these overlaps over the last few years; thanks to philosopher Émile Torres and Dr. Gebru, who together coined the TESCREAL acronym, we now have a shorthand for describing the phenomenon.

As you encounter these ideologies in the wild, you might use the TESCREAL lens, and its alignment with Eurasianism and Putin’s agenda, to evaluate them, and ask whether they tend to undermine or enhance the project of liberal democracy.

TESCREAL ideologies tend to advance an illiberal agenda and foster authoritarian tendencies, and it’s worth turning a very critical eye towards them, especially in cases where that’s demonstrably true. Clearly there are countless well-meaning people trying to use technology and reason to improve the world, but that should never come at the expense of democratic, inclusive, fair, patient, and just governance.

The biggest risk AI poses right now is that alarmists will use the fears surrounding it as a cudgel to enact sweeping policy reforms. We should resist those efforts. Now more than ever, we should be guided by expertise, facts, and evidence as we seek to use technology in ways that benefit everyone.



2 Comments

  1. As someone who lived and worked in Silicon Valley for 2+ decades, I can fill in some of the details of the history behind this conglomeration of red-pilled, pro-fascist tech VC billionaires. It’s not all a recent “turn”. There’s been an influx of hyper-libertarian get-rich-quick scumbags into the Silicon Valley tech VC/founders community following the tech booms of the last 2 decades, along with the more recent Oxford EA/“Longtermist” influence. But there’s also a less known native-born underbelly of dark ideologies plaguing the valley going back to…well, the fact that the Silicon Valley tech industry itself was a creation of the US military/intelligence agencies in the 1950s.

    There’s been somewhat of a myth created in the media of a “liberal Silicon Valley” that is a bit inaccurate, or at least a lot more complicated. While the majority of working-class residents and tech professionals who live in the valley may be liberal, the corporate elite were not, regardless of which party (or parties) they tried to buy influence with through political donations. Actually, this is an important point that a lot of people don’t get. When a corporation donates lots of money to Democrats, that in no way should be taken as proof of a progressive/liberal bias or preference on the part of the corporate execs. Rather, it is for the purpose of buying influence in the Democratic Party.

    The libertarian narrative of tech hides a much darker reality. Pull back the shiny cover of Silicon Valley and you will soon discover an underbelly of dark ideologies long plaguing the valley including the long influence of eugenics at Stanford University.

    Don’t forget about the OG (original gangster) Silicon Valley VC billionaire and Trump-supporting fascist creep Peter Thiel, who predates the Oxford EA/“Longtermist” influence. Thiel – a Stanford University-molded corporate fascist – represents Silicon Valley’s other line of fascist eugenics assholes that goes back to the post-Gold Rush days of SF Bay Area hyper-capitalism. Stanford University was founded in 1885 by some rich eugenics elitists with some bigly entitled ’tude: “to promote the public welfare by exercising an influence in behalf of humanity and civilization.” Their godly mission, as they saw it, was to mold the minds of the next generation of business, scientific, and political leaders – the elite who would be running things.

    See the recently published book Palo Alto: A History of California, Capitalism, and the World by Malcolm Harris.

  2. Keenan, I concur with your assessment. As someone who has had a 30+ year long-distance relationship with the Valley and has spent a lot of time there with many friends in the industry (having been in it myself), I can say it’s a complex mix of libertarianism and idealism. Appreciate your expansion on the nuance that I couldn’t fit into our target word count myself. 🙂
