What is E/ACC (pronounced /ˈiːæk/)? At first it was a meme, spread across tweets, bios and Twitter Spaces. Patient zero: unknown. Newspapers started to talk about it, but only scratched the surface. This post is an attempt to reverse engineer E/ACC and get at its substance.


First, we need to define accelerationism: the term refers to the acceleration produced by the positive feedback loop between technology and capital.

“The story goes like this: Earth is captured by a technocapital singularity as renaissance rationalitization and oceanic navigation lock into commoditization takeoff. Logistically accelerating techno-economic interactivity crumbles social order in auto-sophisticating machine runaway. As markets learn to manufacture intelligence, politics modernizes, upgrades paranoia, and tries to get a grip.”

– Nick Land, Meltdown

Accelerationism has a large array of variants and subcultures: G/ACC (Gender Accelerationism), L/ACC (Left Accelerationism), R/ACC (Right Accelerationism, or NRx), etc.

Effective Accelerationism

The movement was founded in October 2022 by two pseudonymous Twitter users, Beff Jezos and Bayeslord (the date is approximate, since the term had emerged from some “random schizophrenic Twitter people” months earlier, to quote Martin Shkreli during a Twitter Space). The name is a clear parody of Effective Altruism, and this detail matters. Most people think of Effective Altruism as a venture for philanthropic activities, but the hottest parts of its forum deal with existential risk and AI safety. And of course it has Nick Bostrom and Eliezer Yudkowsky as contributors (or at least as part of the movement).

With the ChatGPT hype at the beginning of 2023, AI safety became a trending topic, and so did E/ACC. Pushed out of its Twitter cave, E/ACC produced hyperstition, being embraced by Silicon Valley big names: Marc Andreessen and George Hotz*, both readers of Curtis Yarvin. Over various podcasts, both extended the E/ACC specification. Let's try to put these new specifications on digital paper.

1. AI Safety claims are ridiculous

“Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war”

– Center for AI Safety (https://www.safe.ai/statement-on-ai-risk)

The most striking thing, once real conversations were started with people challenging AI safety (as George Hotz did on numerous podcasts), was how bananas the AI safety claims were. GPT-5 will not be AGI, because ChatGPT is just a text-completion system on steroids. AutoGPT won’t take over the world; it will just make your ChatGPT bill explode. No GPU cluster will magically achieve AGI overnight and start 3D-printing bioweapons. AI is not a nuclear bomb: the only thing a nuclear bomb does is explode, whereas AI can help solve climate change and colonize the galaxy. And pardon my speciesism, but a GPU cluster is not alive; there is no little being living inside the server rack**. Finally, regarding automated misinformation campaigns, or how AI would supposedly kill democracy, we can suspect that these systems are already massively used by every party, in Russia as in the USA. Automated misinformation campaigns and psyops, running as an L2 (a second layer) over information, are now the definition of democracy.
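“Text completion on steroids” can be made concrete. At its core, a language model only ever predicts the next token from the tokens before it; everything else is scale. Here is a deliberately tiny sketch, my own toy bigram model, orders of magnitude simpler than GPT but running the same autoregressive loop:

```python
# Toy autoregressive "text completion": predict the next word from the
# previous one, append it, repeat. GPT runs the same loop, with a neural
# network over long contexts instead of a bigram lookup table.
from collections import defaultdict

corpus = "the cat sat on the cat sat on the mat".split()

# Count which word follows which.
bigrams = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev].append(nxt)

def complete(prompt, n=3):
    out = prompt.split()
    for _ in range(n):
        candidates = bigrams.get(out[-1])
        if not candidates:
            break  # no known continuation: stop
        # Greedy decoding: pick the most frequent continuation.
        out.append(max(candidates, key=candidates.count))
    return " ".join(out)

print(complete("the"))  # -> "the cat sat on"
```

The mechanism is the point: there is no goal, no plan, no agent anywhere in the loop, only next-token prediction, which is the E/ACC rebuttal to the AGI-overnight scenario.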

“In Ukraine, Zhang “found inauthentic scripted activity” supporting both former prime minister Yulia Tymoshenko, a pro–European Union politician and former presidential candidate, as well as Volodymyr Groysman, a former prime minister and ally of former president Petro Poroshenko. “Volodymyr Zelensky and his faction was the only major group not affected,” Zhang said of the current Ukrainian president.”

– Buzzfeed about Facebook’s whistleblower, Sophie Zhang (https://www.buzzfeednews.com/article/craigsilverman/facebook-ignore-political-manipulation-whistleblower-memo)

2. Anarchist and open source

“[AI models could obstruct government surveillance rather than fuel invasive intelligence programs]. People are going to be raising the red flag of ‘software communism,’ where we need to declare the models must be open[.]”

– Edward Snowden (https://www.coindesk.com/consensus2023/2023/04/28/edward-snowden-researchers-should-train-ai-to-be-better-than-us/)

A turning point in Western politics of the post-WWII era is the rise of shadow governments and supranational entities: the CIA, the FBI, the United Nations, NGOs, or, at a less intelligent level, the European Commission. Contrary to old-school shadow governments like oligarchies, these new ones cannot be dismantled, because they own the power on paper. Above them lies no hierarchy; above them lies only the pure anarchy of international relations, which today means China versus the US.

AI safety naturally leads to more coercion against the individual, while assuming that the completely opaque shadow governance that rules us will respect the same rules. But what if these entities already had crazy AI tools to monitor us? There are deep-learning models that can regenerate an approximate image from its perceptual hash (the compact signature used for image matching; unlike a cryptographic hash, it is not designed to be irreversible). It’s hard for non–computer scientists to understand how severe this ability is; to put it simply, it’s close to black magic. What to do about it? The safetyist logic would be to just ban these models, which implies that criminals and the government would still be able to use them, while leaving everyone else at their mercy (this dynamic has a name: anarcho-tyranny).
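To make the hash point concrete: a cryptographic hash scrambles its input beyond recovery, but a perceptual hash keeps, by construction, a coarse map of the image, which is exactly what reconstruction models exploit. A minimal sketch of a toy “average hash” in pure Python (my own illustration, not any particular production system):

```python
# Toy "average hash" (aHash): each bit records whether an 8x8 cell is
# brighter than the image mean. Unlike a cryptographic hash, the bits
# themselves form a thumbnail of the image -- the hash leaks structure.

def average_hash(pixels):
    """pixels: 8x8 grid of grayscale values (0-255) -> 64 hash bits."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

# A crude 8x8 "image": a bright square on a dark background.
img = [[200 if 2 <= r <= 5 and 2 <= c <= 5 else 20 for c in range(8)]
       for r in range(8)]

bits = average_hash(img)

# Printing the hash bits directly already reveals the bright square:
# rows 2-5 come out as "..####..".
for r in range(8):
    print("".join("#" if bits[r * 8 + c] else "." for c in range(8)))
```

Real systems (pHash, NeuralHash and the like) hash downscaled frequency features rather than raw pixels, but the property is the same: similar images get similar hashes, so the hash necessarily carries recoverable information about the image.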

An example of abusive governmental interference in technology is the recurring attempts to weaken cryptographic standards, sometimes by deliberately promoting insecure algorithms (the Dual_EC_DRBG random-number generator being the canonical case). This behavior can have catastrophic consequences, leading to large-scale data breaches and hacking once the weakness is discovered by malicious entities.

To preserve liberties and protect the general public against abuses of AI, we need to keep AI open source, and let everyone and their computers compete with governments and criminals. And if, in the future, laws are made to prevent open-source software distribution, it is the duty of the People to use Intelligence to make these laws obsolete. Imagine a darknet for AI. That is the idea behind George Hotz’s tinygrad, a lightweight open-source neural-network framework, built to run on no more power than an electric car draws, and thus to be undetectable by heat or energy-consumption profiling. Tinygrad is by design anarchist software, built to protect us from a future dystopia.

Privacy and software protection are not the only use cases of open-source AI. The flood of information we experience every day is insane, incinerating in its flow every byte of Intelligence. Letting everyone run their own self-aligned AI to refine the information flow in real time would lead to a whole new political paradigm, one where the attention economy and psyops become inoperative on a large portion of the population. And to quote Hotz, “[w]hat is blocking if not open-source censorship”?

“Centralization and control are the definition of tyranny. I don’t like anarchy either, but I still prefer anarchy to tyranny because anarchy has a greater chance of success.”  

– George Hotz (https://youtu.be/dNrTrx42DGQ?t=5830)

3. Acceleration and growth are good, safety is Evil

“Society has hit pause on other technologies with potentially catastrophic effects on society.  We can do so here. Let’s enjoy a long AI summer, not rush unprepared into a fall.”

– Pause Giant AI Experiments: An Open Letter (https://futureoflife.org/open-letter/pause-giant-ai-experiments/)

What are the two major contemporary issues advertised by the media? The war in Ukraine and global warming. Imagine that these issues were the result not of too much technology but of too much safety. This is the interesting perspective of Marc Andreessen.

In 1973, President Nixon launched Project Independence, designed to end US energy imports and to build 1,000 nuclear plants by the year 2000. Instead of that timeline, we chose safety: fewer nuclear reactors, leading to the exponential release of CO2 and to Germany funding Russia’s war effort in Ukraine through gas imports. And why do we have swarms of killer drones on the eastern front? Because of the war in Ukraine. As for autonomous killer drones with facial recognition, it is important to emphasize that they represent zero technical challenge, to the point where you can ask whether these terrifying weapons are not already being field-tested in Ukraine.

Another example of peace over safety is the atomic bomb: it is one of the most unsafe things ever invented. But what would a timeline without the atomic bomb have looked like? The Soviet Union would not have stopped at Berlin in May 1945, and the end of WWII would have been the start of another deadly, hot world war. Arms races, game theory and deterrence ended up being the climax of peaceful diplomacy.***

Conspiracy against Decel

Legacy institutions are done. Thanks to the Internet, they have passed the event horizon where the energy required to save them from self-incineration exceeds the energy needed to bootstrap new ones. What is the Internet, if not an open-source repo of virtualized institutions? A haven for unsafe, anarchic, growth-oriented institutions. E/ACC is nothing more than the ultimate conspiracy to escape the decel and eventually colonize space.

“Mass fascism is not possible anymore, because the XXIst century has enacted the death of politics. Satellite Sisters is about an atomised mass versus a few singular individuals who decided to get together. The atomised mass is what I call the UN 2.0: a supra-national entity that annihilates every cultural and national specificity to transform the world into an array of territories, like Kosovo. That creates resistance from certain singular individuals who do not want to live in this world, who become this minority of free men, who […] look at the stars while thinking, “this is where mankind should go”. […] What I was writing about, the birth of a private space race, is under way right now. Sir Richard Branson and Virgin Galactic exist. His project of a space-hotel chain to bootstrap a private space program is real. Elon Musk, a doctor in physics who stopped working for institutions and whose goal is to go to Mars by his own means, exists; he is working on his own space engine and owns the biggest photovoltaic power plant in the US […] They are all here, already working on the future: the conquest of the Moon won’t be NASA’s, it will be private corporations’.”

– Maurice Dantec, in September 2012, talking about his novel Satellite Sisters, the sequel to Babylon Babies.

Notes and links

** This is one of the first claims Yudkowsky makes on the Lex Fridman podcast. I even wonder to what extent the other AI safety people take him seriously, most of his claims being nothing but pure science fiction.

*** This strategy is often referred to as the Offset strategy. https://en.wikipedia.org/wiki/Offset_strategy?wprov=sfla1
