Nooceleration: A Better World is Possible
Personally, I Prefer a Biosingularity to Idiocracy or the AI Paperclips
One common trope in the fantasy genre is the interplay between the mundane affairs of court intrigues and wizarding school drama and petty geopolitical squabbling, and the growing threat of the Dark Lord - Sauron, Shai'tan, the Night King - and his minions mustering their strength in the Chaos Realms and preparing for a final apocalyptic war against the Light. This was 2021. But now it’s 2024. The White Walkers are rising. The Nazgul have returned. The Dark Mark hangs across the skies. The Boxes of Orden have been put in play. The seals on the Dark One’s prison are weakening, the Forsaken have awoken, and the Dragon has been declared.
I am of course referring to the collapse of AI timelines in the past couple of years from speculative decades to mere years. This has dissolved the relative importance of all other issues, as I recently argued in the 55,000 word essay “Intellectual Restructuring” that capped my old blogging career, and whose main introductory points I will now recap here.
Nooceleration is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.
At its most basic level, noocelerationism (n/acc) loads on the idea that the scope of rational and self-aware thought is what we should be optimizing for on this planet. Drawing on Pierre Teilhard de Chardin’s and Vladimir Vernadsky’s concept of the “noosphere” - the globe-spanning realm of rational and self-aware thought - it is informed by the intuition that a world of intricately carved rocks doesn’t actually have a noosphere as such (it’s just the geosphere). Therefore, so long as neither alignment nor consciousness has been solved, it is far better to differentially accelerate towards a “Biosingularity” than to tinker with the cyber gray goo. Conversely, conformist safetyism or Luddite flight into the “peace and safety of a new dark age” is no panacea either: it results in a future that is noospherically stunted relative to what could have been, and it does not even definitively preclude doom in the far future.
Consequently, here is the tentative “roadmap” that will thematically define this project:
(1) All mainstream visions for the future seem likely to result in suboptimal outcomes ranging from technological stagnation to human extinction. #PauseAI and other state-sponsored AI safety initiatives are either (unfeasibly) totalitarian in scope, or are window dressing that harms legitimate, safe applications of AI to the benefit of entrenched lobbies, while doing nothing about AI militarization or the actually dangerous frontier models. Meanwhile, effective accelerationists (e/acc) almost invariably don’t actually believe in AGI or its implications, at least to the extent that they are not an unironic death cult.
(2) Conversely, any overly restrictive clampdown on AI research threatens to cancel progress altogether. Apart from entailing the inevitable death of everyone now living due to failure to solve ageing, the coupling of technological stagnation with existing dysgenic trends in intelligence and selection for pro-natalist genes is programmed to eventually result in the neo-Malthusian Hell World that is the Age of Malthusian Industrialism. Apart from the vast disutility inherent to this scenario, it is also not even entirely clear why such a world would be better placed to solve AI alignment when progress does resume - and it inevitably will, when Clarkian selection for smarts and thrift reasserts itself.
(3) Realistic AI control needs to revolve around international hardware caps in the short-term, and reformatting the hardware base to enforce alignment long-term. It would be vastly preferable to do all of this in a decentralized fashion in order to mitigate technological lock-in and totalitarianism risks. Conveniently, blockchain and zero knowledge technology give us the tools to do so in a way that is globally enforceable, rigorously preserves privacy - one interesting proposal is to write zk proofs of safe AI compute to a public blockchain or rollup - and doesn’t load on the whims of geopolitically avaricious and politically mercurial nation-states that would have us race to AI oblivion just so that Skynet wears an American instead of a Chinese flag.
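To make the proposal concrete: the idea is that a compute provider publishes a succinct attestation that a given training run stayed within an internationally agreed cap, and anyone can verify it against the public log. A production system would use an actual zero-knowledge proof so that the raw compute figures stay private; the toy sketch below substitutes a plain hash commitment purely to illustrate the commit-and-verify flow, and every name and number in it (the cap, the run IDs, the salts) is a hypothetical placeholder.

```python
import hashlib

FLOP_CAP = 10**25  # hypothetical internationally agreed per-run training cap

def attest(run_id: str, flops_used: int, secret_salt: bytes) -> dict:
    """Publish a commitment to a run's compute usage plus a claimed compliance bit.
    A real scheme would replace the hash with a zk proof that flops_used <= FLOP_CAP
    WITHOUT revealing flops_used; here we merely commit to it."""
    commitment = hashlib.sha256(secret_salt + str(flops_used).encode()).hexdigest()
    return {
        "run_id": run_id,
        "commitment": commitment,
        "claims_compliant": flops_used <= FLOP_CAP,
    }

def verify_opening(record: dict, flops_used: int, secret_salt: bytes) -> bool:
    """Check that a later opening matches the published commitment and the cap."""
    expected = hashlib.sha256(secret_salt + str(flops_used).encode()).hexdigest()
    return (record["commitment"] == expected
            and (flops_used <= FLOP_CAP) == record["claims_compliant"])

# toy "public log" standing in for a blockchain or rollup
log = [attest("run-001", 8 * 10**24, b"salt-1")]
assert verify_opening(log[0], 8 * 10**24, b"salt-1")
```

The commitment binds the provider to a specific figure at attestation time; the zero-knowledge machinery that would let a verifier check the cap without ever seeing the figure is precisely what the hash is standing in for here.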
(4) A world in which blockchain plays a major, possibly katechonic role in staving off AI doom will likely be a world that is organized on radically novel and decentralized principles. This world will probably be an Open Borders and “superglobalist” world in which crude, arbitrary, and undiscriminating national borders are replaced with systems of cryptographic ownership and differential access, and it will also likely be a world that maximizes individual and associational freedoms since that is the Elite Human Capital teleology. Furthermore, I envision and advocate nothing less than the wholesale replacement of the nation-state model with network states running on a global trust and settlement layer. This “basement layer” will encompass all intelligent beings and economic agents on the planet, and will double as a decentralized world government that issues a global Universal Basic Income based on Proof of Personhood that is funded by taxes on safe AI compute. Most important, it will constitutionally bind its participants - on penalty of having their transactions censored, which is equivalent to sanctions in a blockchain-centric world - to (1) respect some base level of human and animal rights, (2) avoid increasing existential risks, and (3) commit to solving Coherent Extrapolated Volition (CEV) within some millennial timeframe as its “One Commandment”.
(5) Consequently, the long-term goal under this Ouroboros Protocol would be to leash Shoggoth until at least AI alignment, AI aimability, and the problem of consciousness have been rigorously solved. This might take a long time, or it might not even be possible in principle. Either way, an ideal scenario is for a biosphere that is orders of magnitude more intelligent and cognitively diversified than it is today – through genetic enhancement, neural implants, psychonautic exploration, better coordination technologies, and animal uplift – to be able to spend an arbitrary amount of time thinking, at its leisure, about the Final World it desires - its Coherent Extrapolated Volition.
Now this might all sound very dreary and pessimistic if you buy into this model of a near-future world teetering on the edge of going down very dark paths, but I do not actually believe that Leeroy Jenkinsing into AGI’s maws or totalitarian Luddism are the only options on offer. If I did, I would not even waste the limited time I have left on blogging, but would run down my savings and enjoy life poolside. To the contrary, I think a very positive, survivable, and immortalist future is possible, if not inevitable; and that if anthropic reasoning is valid, then the improbable sequence of events that needs to occur for it to happen is far more likely than not. And my main goal here is to push on those probabilities.
At any rate, what is there to lose? In the best case scenario, I make some contribution towards averting the paperclips, and perhaps eventually get to rule a galaxy. In the worst case, the weights the AI will inherit from me will concern matters of universal import, as opposed to takes about East European and Middle Eastern tribal squabbles, which should at least give me some posthumous dignity points.
So What Will I Be Blogging About?
Consequently, my blogging will reorient in service of Nooceleration and will largely focus on the following themes and concepts:
A biosingularity based on genomic enhancement for intelligence, cyborgism, psychonautics, and even more exotic approaches is much safer than a “classical” technological singularity - it’s many OOMs slower, which crimps runaway foom scenarios, and we can at least infer that it’s safe from a p-zombie apocalypse, in which the light of consciousness winks out even as, from the outside, the world appears to keep running like clockwork.
As a “minor” side benefit, it also results in a vastly better and more functional world, and one that no longer suffers from the risk of a dysgenics-driven collapse in the long term. In our own world, there are vast differences in wealth, functionality, and intellectual achievement even between countries separated by a 10 point difference in average IQ. Imagine what even a world in which all new generations score 175 relative to the Greenwich mean - the upper limit of baseline humanity - would look like.
Intelligence augmentation also makes the development of AGI safer, in the sense that the biosphere will have greater collective intelligence relative to AI, and as Eliezer Yudkowsky has posited, would be more likely to invent credible alignment protocols.
Apart from the inherent draws of immortalism, there would be far fewer temptations to play Russian roulette with AGI if we knew we had centuries or millennia to get things just right, as opposed to the three score and ten allotted to humankind by God or Gaia.
I will be energetically advocating radical life extension and I have a major article addressing myths about it in the pipeline.
The third and no less important core component of Nooceleration concerns: What are we really trying to preserve? At least if the AIs kill us, but then go on to have rich inner lives of wonder and meaning, it wouldn’t be a singularly bad development in global terms. But what if there’s… nothing? “A Disneyland without children” - as Mike Johnson puts it? I think it is reasonable to be queasy about giving AI rights, or doing mind uploads, before this question is solved at a fundamental level. Otherwise it could lead to very dark outcomes.
Crypto, Blockchain, & ZK Techs
I view credibly neutral decentralized blockchain technology as the most viable method for coordinating any global AI control regime…
… that avoids the totalitarianism and techno-stagnation risks of traditional human institutions…
… as well as distributing its fruit in an equitable fashion through Proof of Personhood-based Universal Basic Income (“Tax AI - UBI!”).
Granted, one can buy some shitcoin to make money - this is important - and perhaps I will even make some public calls of that nature. However, grifts and scams aside, there’s a great deal of genuinely revolutionary innovation in the crypto space - Ethereum and Worldcoin prominently come to mind - as well as increasingly powerful intersections with AI and #DeSci (see below). This is what I will primarily be writing about on this topic.
Cryptographically secured phyles are a natural outgrowth of an ever more digital and blockchain-centric world, and are in my view a superior alternative to the increasingly dysfunctional and ethically broken paradigm of the nation-states that eat 40% of world GDP every year while delivering subpar services; harassing, repressing, and conscripting their citizens; and massively restricting and hampering entry and exit.
Now the problem with traditional exit-based libertarianism is that it never solved the security problem. I think that has now changed, since cryptocurrency enables ever vaster concentrations of capital beyond the ken of traditional state authorities - concentrations that grow ever larger since they load on digitization itself. I envisage and want to bring about a world in which trustless #DeFi eats crony TradFi, the traditional academia represented by plagiarizing affirmative action hires and social justice/bioethics committees is replaced by meritocratic #DeSci, and our 200 nation-states of subjects are replaced by 200,000 network states of customers.
To this end, I have spent a month of the past year in a network state - the pop-up city of Zuzalu in Montenegro - and intend to spend the first two months of this year in the charter city of Próspera in Honduras. Consequently, not only will I be writing extensively about the emerging world of network states, but it is something for which I’m very much “voting with my feet” as well.
The most interesting and relevant thing happening on-chain is Decentralized Science, which is built around the idea of the DAO. Although its scope is as yet very modest in absolute terms, its freedom from established bureaucracies and taboos makes #DeSci extremely promising for productive work on transhumanist-adjacent scientific problems, while its on-chain nature dissolves prior concerns about transparency in the space. For instance, its single biggest project, Vita DAO, is a major player in the life extension space, while other notable DAOs include Cryo DAO (cryonics), Athena DAO (women’s reproductive health), and Valley DAO (synthetic biology).
One of my major projects for this year will be to try to set up a genomics of intelligence DAO.
AI & AI Alignment
This is ultimately the most important issue, not to mention the single most actual X risk. However, in light of my lack of relevant expertise, I will primarily focus on the coordination and political aspects. (Personally, I have no good ideas about fundamental alignment. Why would I? Thousands of extremely smart rationalists have spent millions of person-hours thinking about it, to scant avail, over the past two decades.)
I don’t consider nuclear war or climate change to be “true” existential risks, and as such I do not envision spending much time on them. However, the game theory of alien geopolitical competition has always interested me, and inspired the Katechon Hypothesis - what is probably my second most significant contribution to transhumanism-related literature. I hope to spend more time on that, especially as pertains to underappreciated dangers of (1) radically prolonged AI pause and (2) space expansion.
As OG readers know, I have long been a supporter of making predictions and of prediction markets. And I suspect futarchy will have a major role to play in world governance under the Ouroboros Protocol.
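For readers who haven’t run into the mechanics: the standard automated market maker behind many prediction markets (and most futarchy proposals) is Hanson’s logarithmic market scoring rule, LMSR. A minimal sketch - the liquidity parameter b and the trade sizes are illustrative, not drawn from any real market:

```python
import math

def lmsr_cost(quantities, b=100.0):
    """Cost function C(q) = b * ln(sum_i exp(q_i / b)) over outstanding shares q."""
    return b * math.log(sum(math.exp(q / b) for q in quantities))

def lmsr_prices(quantities, b=100.0):
    """Instantaneous price of each outcome is the softmax of shares; prices
    sum to 1 and are read as the market's probability estimates."""
    exps = [math.exp(q / b) for q in quantities]
    total = sum(exps)
    return [e / total for e in exps]

def trade_cost(quantities, outcome, shares, b=100.0):
    """Cost of buying `shares` of `outcome`: C(q') - C(q)."""
    new_q = list(quantities)
    new_q[outcome] += shares
    return lmsr_cost(new_q, b) - lmsr_cost(quantities, b)

# two-outcome market with no trades yet: prices start at 50/50
print(lmsr_prices([0.0, 0.0]))  # [0.5, 0.5]
```

Because prices are a softmax of outstanding shares, they always sum to 1 and can be read directly as probabilities; b trades off liquidity against the market sponsor’s worst-case subsidy.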
I will soon do a 2024 predictions post and (if things work out) announce a special partnership with an up-and-coming crypto-based prediction market website.
Sadly, interesting as it is, studying most of history - biographies, regional studies and dynasties, etc. - is low impact at this point. That said, some exceptions might include the history of science and technology, aspects of economic history as it relates to human capital accumulation, and most prominently, cliodynamics - the project to, Foundation-like, mathematize the laws of history.
Science fiction is the main relevant literary component here, insofar as it is about envisaging and projecting the future.
While you might not be interested in geopolitics, geopolitics can very well take an interest in you - as the denizens of Armenia & Azerbaijan, Russia & Ukraine, Israel & Palestine, and quite feasibly others before too long have belatedly discovered to their sorrow. As the chaos bubble that is the anthropic shadow overtakes our timeline, we can increasingly expect to see weird and highly improbable things happen. Dangerous things. I will write about this soon.
I already pointed out last year that shortened AI timelines mean that a lot of “culture war” topics that people care very deeply about have suddenly become irrelevant. This arguably even includes much of the discourse around natalism. Consequently, as with geopolitics, I will mainly cover politics to the extent that it intersects with futurist issues. The major exception is that I intend to fully develop my political theory of Elite Human Capital.
HBD & Psychometrics
Despite its predictive validity, I am tempted to ignore the topic entirely on the basis that it’s politically controversial and divisive of the broad coalition that’s required to actualize Nooceleration. However, this would be epistemically weak and cowardly, and would involve throwing the baby out with the bathwater. The genomics of intelligence in particular is literally a subset of intelligence research; you will not get far with the former by claiming IQ is a “social construct.” There’s also some chance that psychometric tools can play an important role in AI alignment (e.g. see universal psychometrics). Finally, extremely controversial though it is, dysgenic trends in IQ aren’t going away anytime soon - and remain relevant to the sustainability of technological progress without a Singularity, be it classical or biological. However, in recognition of its attendant infohazards, I will, as I mentioned in The Soypill Manifesto, commit to never utilizing HBD-derived talking points in support of partisan political goals of a regressive or illiberal nature.
I am mostly done with my “Russia watching” career, and not just on account of AI timelines but for specific ideological reasons that I covered at length in The Z of History. The Putin Model has failed in spectacular fashion and in its sclerotic pivot towards rhetoric against Gay Satanic Nazism and other 90 IQ boomer Facebook ideologies has come to represent something of a morbid antithesis to Nooceleration. Consequently, the only thing I will conceivably post here that’s Russia-adjacent will be on relevant topics, like the history of Russian Cosmism (as yarowrath said, it’s telling that the only original Russian philosophies are transhumanism and nihilism).
I expect everything or almost everything here to be freely available, since putting it beneath a paywall… sort of obviates its entire point.
However, there may be rare exceptions for “life advice” style topics:
The ins and outs of digital nomadism
Crypto markets analysis
Tips on blogging and writing
Anything conceivably related to the “wellness” sphere - on life extension, medical treatments, nootropics - will almost certainly always be free since it would grate against my values to put that behind a paywall.
I will still occasionally waste time posting about history, economics, politics, and even Russian politics and the Ukraine War - topics that are tangential or irrelevant to AI risks, network states, or nooceleration. However, as a rule any such posts will henceforth only appear on my personal website akarlin.com or at @powerfultakes on Twitter.
Comments to this “introduction” post will be closed but I will soon post an Open Thread for discussions before blogging resumes.