Why do tech CEOs love this 1960s neuroscience theory?
The analogy of AI as a new layer of your brain
“Twenty years from now, we'll have nanobots, because another exponential trend is the shrinking of technology. They'll go into our brain through the capillaries and basically connect our neocortex to a synthetic neocortex in the cloud providing an extension of our neocortex.” - Ray Kurzweil, 2014
“We already have a situation in our brain where we've got the cortex and limbic system and the limbic system is kind of a mess, that's the primitive brain. (…) those two seem to work together quite well (…) I've not found someone who wishes to either get rid of the cortex or get rid of the limbic system. (…) So, I think if we can effectively merge with AI by improving that the neural link between your cortex and your digital extension yourself which already exists just has a bandwidth issue and then then effectively you become an AI human symbiote.” - Elon Musk, 2016
“We have a much more primitive old brain structure for which our neocortex (…) is basically just a kind of prediction and reasoning engine to help. (…) you can think about some of the development of intelligence along the same lines where just like our neocortex doesn't have free will or autonomy, we might develop these wildly intelligent systems that are much more intelligent than our neocortex, have much more capacity, but are the same way that our neocortex is sort of subservient and is used as a tool by our kind of simple impulse brain.” - Mark Zuckerberg, 2023
1. The AI-neocortex analogy explained
The triune brain is a theory of the evolutionary development of the human brain, proposed by the American physician and neuroscientist Paul MacLean in the 1960s.1 Somewhat surprisingly, this outdated theory has become a prominent analogy that helps tech CEOs, such as Elon Musk, Mark Zuckerberg, or Sam Altman, make sense of the future of AI. However, before we can look at the origin, implications, and accuracy of the analogy, we first need to understand the quotes above.
The neocortex is the part of the human brain that grew massively over the last 3 million years and now makes up about 80% of its volume. The neocortex-AI analogy uses the evolution and growth of the neocortex as the source domain to argue that AI will become a new layer of the human brain in the future. The core idea of AI as a new brain layer is that an implanted brain-computer interface will allow AI to directly read and stimulate the activity of biological neurons. This new brain layer has variously been called “exocortex”, “synthetic neocortex”, or “neo-neocortex” and would automatically include AI in all our thinking processes through a continuous, high-bandwidth interface.
The exocortex concept originally emerged within science fiction.2 One of the first explicit mentions of the term is from the 2005 Charles Stross novel Accelerando: “About ten billion humans are alive in the solar system, each mind surrounded by an exocortex of distributed agents, threads of personality spun right out of their heads to run on the clouds of utility fog – infinitely flexible computing resources as thin as aerogel – in which they live.”
There are two main subforms of the neocortex-AI analogy. The subform used by inventor and futurist Ray Kurzweil frames the exocortex in terms of the evolution of the neocortex. The more popular subform personified by Elon Musk is similar but, inspired by Paul MacLean’s triune brain theory, it specifically projects the relationship between the limbic system and the neocortex onto the future relationship of our current brain and the new AI brain layer.
Please note that this text focuses on direct brain-computer interfaces. There are broader ideas about integrating technical artifacts, such as pen and paper or a smartphone, into human thinking and decision-making processes in more indirect and intermittent ways. This strand of thinking has a rich intellectual history, but it will be discussed in a separate text.
1.1 The Kurzweilian analogy
Ray Kurzweil uses the analogy to the evolution of the neocortex (e.g., 2009, 2013, 2014, 2017, 2017, 2018, 2022) to say that nanobots will enable a high-bandwidth brain-computer interface in the 2030s and that, through this, our brains will be continuously connected to the cloud (evolution of neocortex = evolution of synthetic neocortex; smartphone:cloud = future human brain:cloud). Kurzweil uses the analogy in a deterministic, predictive way, although he is also personally in favor of such a future. According to Kurzweil, this synthetic neocortex will primarily connect to the highest layers of the neocortex and enable a qualitative leap in thinking, in the same sense that the flexibility of the neocortex with its many interneurons has enabled language, art, and science. In short:
“It'll be just like what happened two million years ago when we got these big foreheads, and we got this additional neocortex. We put it at the top of the hierarchy, that was the enabling factor for humor and language and music and so on. We’ll do it again.”
1.2 The Muskian analogy
Elon Musk has issued multiple prominent warnings about the risks of humanity losing control over AI and uses the analogy of the relationship of the limbic system to the neocortex as a blueprint for a “digital tertiary layer” connected to the neocortex (limbic system:neocortex = neocortex:AI). Elon Musk uses the analogy in a normative fashion as a desirable (rather than the most likely) future, in which humans live in symbiosis with AI. According to Musk, the output bandwidth of the human brain is the single most limiting factor of human versus artificial intelligence, and therefore a better brain-computer interface is required. Musk consistently returns to this analogy to explain the motivation behind founding Neuralink and its long-term goal (e.g., 2016, 2016, 2017, 2017, 2018, 2018, 2019, 2020, 2022, 2022, 2023, 2023).
In short, the idea as summarized (not endorsed) by Sam Harris is that we “will tether these super intelligent machines quite literally to our brains, we will essentially become the limbic system of these new machines and therefore by definition their goals both long term and instrumental will be anchored to our own value system”.
As science-fiction author Vernor Vinge remarked as early as 2008: “this [analogy] is actually especially attractive to people who are otherwise uneasy about the notion of super intelligence because the neo-neocortex provides the intellectual horsepower and we humans provide what we are best at (…) wanting, we humans are very good at wanting and so the team of the human and the neo-neocortex is superhuman but it's still the human in the saddle.”
Presumably through the popularity of Musk, this form of the analogy has occasionally been echoed by other Silicon Valley leaders. For example, in 2022, Sam Altman said:
“In any particular moment we are subjected to our animal instincts, and it is easy for the lower brain to take over. The AI will, I think, be an even higher brain and as we can teach it here is what we really do value here's what we really do want it will help us make better decisions than we are capable of even in our best moments. (…) I think it is technically possible for this to be sort of like a layer above the neocortex that makes even better decisions for us and our welfare and our long-term happiness and fulfillment than we could make on our own.”
In 2023, Mark Zuckerberg introduced his own twist on the analogy, emphasizing it less as a desirable path in a perilous AI future and more as evidence against concerns about humanity losing control over AI (simple impulse brain: “subservient” neocortex = humans: future AI). Specifically, Mark Zuckerberg uses the analogy as evidence and explanation that intelligence and autonomy operate on independent scales, which is why he thinks that it is possible to “scale intelligence quite far” without manifesting “safety concerns”. The same argument was also made by Meta’s AI chief Yann LeCun in 2023:
“We shouldn't feel threatened by machines that are smarter than us. We are in control of them. And we will still be in control of them. They won't escape our control any more than our neocortex has escaped the control of our basal ganglia, basically, in our brains.”
Meta has notably declared its aim to build superhuman AI systems and to distribute them open-source.
1.3 The Triune Brain Theory
According to the triune brain theory, the human brain can be divided into three parts corresponding to major evolutionary leaps. The basal ganglia denote the “reptilian” brain, the limbic system denotes the “paleomammalian” brain, and the neocortex is the “neomammalian” brain.
The theory made it into pop culture through a book by Carl Sagan,3 and MacLean remained an advocate of his theory for his whole life, publishing a book on it towards the end of his career in 1990.4
For context, it is worth pointing out two limitations to the scope of MacLean’s theory. First, it focuses on the evolution of the forebrain. So, there are parts of the human brain that MacLean did not assign to any of the three brains, such as the brain stem and the cerebellum.5 The latter alone accounts for about 10% of brain volume, 10% of brain weight, and, surprisingly, about 80% of its neurons. Second, MacLean is interested in “paleocerebral functions”, meaning he looks at inherited functions, not culturally acquired or honed functions. He also does not offer a very clear narrative of why the neocortex grew so much among hominids (e.g., the social brain hypothesis6).
Musk has not explicitly referred to the triune brain theory, but the Muskian strand of the analogy is clearly inspired by it. Musk uses terms that almost certainly originate from MacLean’s theory. For example, Musk has referred to the limbic system as the “reptile brain” (e.g., 1, 2), the “monkey brain” (e.g., 1, 2), or both. Another way to substantiate Musk’s exposure to triune brain theory is through Tim Urban. Urban explicitly used triune brain theory as the framework for his long 2017 article on “Neuralink and the Brain’s Magical Future”, and he also regularly refers to the limbic system as the “monkey brain”. The article was written at Musk’s invitation, announced and shared by Musk on Twitter, and referenced by him in an interview in which he talked about the exocortex analogy.
Here is an excerpt from the article:
“We discussed three layers of brain parts—the brain stem (run by the frog), the limbic system (run by the monkey), and the cortex (run by the rational thinker). We were being thorough, but for the rest of this post, we’re going to leave the frog out of the discussion, since he’s entirely functional and lives mostly behind the scenes. When Elon refers to a “digital tertiary layer,” he’s considering our existing brain having two layers—our animal limbic system (which could be called our primary layer) and our advanced cortex (which could be called our secondary layer). The wizard hat interface, then, would be our tertiary layer—a new physical brain part to complement the other two.”
2. Policy implications
What policy implications could be deduced if we accept the exocortex analogy as a heuristic for the future of AI?
Nature of the situation:
An evolutionary leap from humans towards transhumans with cyborg minds that are a hybrid of biological and artificial neural networks.
Stakes:
Societal level: An evolutionary leap would translate into a higher stage and quality of development. What this fully entails cannot be predicted, but presumably radically new beliefs, technological powers, communication forms, and social organizations. With that, the identity and potential unity of humanity as a species is at stake.
Personal level: The stakes range from intellectual growth, to dreams of achieving personal immortality, to tricky questions about personal identity.
Policy prescriptions:
No regulation - neocortex: The source domain of the analogy, the growth of the neocortex, is not the product of intentional human design but of glacial evolutionary pressures. As such, the analogy fits quite well with a deterministic Kurzweilian view, in which larger optimization processes are at play that do not depend much on the decision-making of human political organizations. There is also no law governing the neocortex specifically, but of course it is indirectly affected by all laws applying to humans.
Strict regulation - brain-computer interface: While the backward-looking analogy itself does not imply regulation, the framing of the target domain as a brain-computer interface does. Brain-computer interfaces are FDA Class III devices that require proof of safety and effectiveness for premarket approval. If we view AI as a modification of the human brain, it should be regulated stringently by the FDA (with more stringent premarket requirements than the EU AI Act).
Human ownership: Humans have lifelong ownership of their neocortex. They can indirectly sell its services, but they cannot transfer ownership, and neither companies nor governments can legally own living neocortices; they can only legally own organ donations from deceased individuals. If applied to AI, this would imply a very different sociotechnical regime in which AI, computing power, and infrastructure are legally owned by natural persons.
Equal distribution: Neocortex is distributed among natural persons in roughly equal parts. Accordingly, the neocortex analogy also goes well with policies that target technological power concentration.
Chances of success of policy options:
Control problem: The analogy implies that superintelligence can be controlled, or at least aligned with human interests, if there is enough research to tether AI to human intelligence through a brain-computer interface.
Moral rightness of policy options:
“People generally don’t wanna lose their cortex”: The evolution and growth of the neocortex is invariably judged as positive. As such, the source domain of the analogy implies that we should welcome brain-AI interfaces. This is complicated by the fact that the analogy also frames the target domain – the future of AI – within the domain of transhumanism, which brings its own set of moral connotations. While there is widespread support for regenerative neuroprosthetics, cognitive enhancement is often morally rejected as dangerous, unnatural, unequal, and hubristic. In some sense, the analogy is an example of the reversal test by Nick Bostrom and Toby Ord, which is meant to highlight that many people have a status quo bias in favor of current levels of cognitive capacity and morally reject cognitive enhancements as well as cognitive reductions. In short, any transhumanist framing of the future of AI will face some negative moral intuitions, but the backward-looking analogy makes the case that it is morally right.
Don’t worry about superintelligence: The fact that AI will significantly outpower biological neural networks is also framed as morally unproblematic or even positive from a human perspective, because in this analogy AI does not make any autonomous decisions, it does not self-replicate, there is no danger of loss of control, and it does not compete with humans for resources. Instead, AI increases human agency.
Dangers associated with a policy option:
Inequality: The framing of the target domain in transhumanist terms can feed popular fears that economic inequality among humans could turn into permanent biological inequality. Specifically, the concern is that only the rich would get access to a powerful exocortex that would make them permanently more powerful than the poor or even let them become immortal. Whether these fears are warranted is another question. At least so far, technology has had a strong history of diffusion.
Bifurcated species: Whether by inequality or by choice, the image of an evolutionary brain leap easily evokes fears that the human species could split into transhumans and traditional humans.
3. Commonalities and differences
Before going into a discussion of structural commonalities and differences between the source and the target domain, it needs to be highlighted that the Triune Brain Theory, as the model of the human brain in the source domain, does not correspond to a state-of-the-art understanding of neuroscience. It is not a problem per se to make analogies that communicate a set of relationships which do not correspond to observed relationships in any current or formerly existing domain (e.g., analogies to science fiction stories). However, it becomes problematic when fictional analogies are presented as evidence that a certain set of relationships is possible, let alone likely or inevitable.
3.1 Accuracy of the Triune Brain Theory
The triune brain model continues to have popular appeal as it offers an intuitive and entertaining narrative (e.g., TED, TEDx).7 Also, I’m not a neuroscientist. However, most neuroscientists seem to consider it a misleading oversimplification.8 In Google Scholar, it is not hard to find articles like “Your Brain Is Not an Onion With a Tiny Reptile Inside” or “The Brain Is Adaptive Not Triune: How the Brain Responds to Threat, Challenge, and Change”. Or, if you prefer a YouTube video, you can find it under titles like “No, You Don’t Have a ‘Reptilian Brain’” or “The brain myth that won’t die”. As an obituary for Paul MacLean in the Yale School of Medicine Magazine summarized it in 2008: “a theory abandoned but still compelling”.
Specifically, neuroscientists will point out that the triune brain is misleading9 from both an evolutionary and a functional perspective.
The Triune Brain is popularly (mis-)understood to have evolved as “hats on top of hats” from reptiles to mammals to “higher mammals”, so that, for example, reptiles would in fact only have MacLean’s “reptilian brain”. A better description of reality is that most animals have similar brain parts, but these have been reorganized and have grown to different extents, so that the “reptilian brain” is relatively bigger in reptiles. All vertebrates undergo a similar division of the nervous system early in embryonic development. So, for example, while the six-layered cortex is unique to mammals, there are cortex-like structures in both reptiles and birds, just smaller and less complex.
MacLean developed his theory by methodically destroying different brain parts of lizards and squirrel monkeys and assessing the behavioral impact. Modern neuroscience, supported by advanced neuroimaging techniques, reveals that almost all high-level brain functions are not confined to a single brain region but result from the dynamic interaction of multiple, integrated networks performing subfunctions. This contradicts the Triune Brain’s notion of quasi-autonomous brain parts governing specific functions.
Still, the theory is “directionally right”. For example, mammals have indeed evolved a distinctive six-layered neocortex, and primates have developed a massively enlarged pre-frontal cortex within that neocortex. This has likely played a key role in enabling complex language. At the same time, it is good to recall that not everything in the neocortex is a unique function of “higher mammals”. For example, humans have their primary visual cortex and their primary motor cortex in the neocortex, but frogs, crocodiles, and lizards do not just have eyes and legs as decoration. It is also good to keep in mind that 100 million years or more would be a lot of time for natural selection to adapt or remove any ancestral brain functions, unless they remained useful within the changing environment. We certainly don’t look like therapsids or early mammals, and while the human body has some “evolutionary leftovers”, such as wisdom teeth, the plica semilunaris (remnant of the third eyelid of reptiles), or the coccyx (remnant of our lost monkey tail), these are minor phenomena.
Relationship between the limbic system and the neocortex
In the context of the exocortex analogy, Mark Zuckerberg and to a lesser degree Elon Musk have asserted a hierarchical relationship between the limbic system and the neocortex in which “the monkey brain” is “steering the cortex” and “calls the shots”, whereas the neocortex has “no autonomy”, is “subservient”, and a “tool” used by the limbic system.
MacLean was interested in the evolutionary origins of brain functions and exclusively focused on genetically inherited rather than culturally learned brain functions. This arguably creates a bias towards attributing more agency to older parts of the brain, as opposed to the more open-ended and flexible neocortex. However, MacLean still rejected an extended-mind-style analogy in which the neocortex has no agency of its own, pointing to evidence of inherited structures:
“In this age of computers it would be quite consistent with respect to clean-slate hypothesis to regard the neocortex as an expanded central processor especially adapted to serve the protoreptilian and paleomammalian formations in performing calculations, making discriminations, and solving problems beyond their capabilities. The situation would be analogous to our own use of supercomputers to perform numerical calculations that otherwise would be impossible. Nevertheless, there are accumulating bits of evidence that the neocortex has built-in mechanisms (…)”10
Overall, the popular idea that the limbic system is responsible for setting goals and motivations, whereas the neocortex just works to make the limbic system happy, is a simplification to the point of being clearly misleading. Consider the following examples:
The history of lobotomies: Lobotomy is a discredited neurosurgical treatment performed from the 1930s to the 1960s for psychiatric or neurological disorders. The surgery typically consisted of severing most connections to and from the pre-frontal cortex, the newest region of the neocortex, associated with the highest-order cognitive abilities. Somewhere around 100,000 patients worldwide received this treatment. The treatment was a “success” in that it made it easier for institutions to handle the patients. However, that was because lobotomies created many cognitive impairments, including apathy (lack of interest). How can we explain the prominence of damage to the pre-frontal cortex in the neural correlates of apathy if the neocortex just serves to help implement the desires of the limbic system? Clearly, the pre-frontal cortex and its connections to other brain regions must play an important role in motivation.
Examples of specific motivational subfunctions attributed to the neocortex:
Orbitofrontal cortex: This part of the pre-frontal cortex plays a role in reward processing and the evaluation of the emotional value of stimuli.
Ventrolateral prefrontal cortex: This region is particularly important in tasks that require the suppression of a response that might be instinctive or habitual but needs to be inhibited in a particular context.
Anterior cingulate cortex: An area outside the pre-frontal cortex but within the neocortex that also plays a key role in motivation. This area is thought to be involved in the conscious aspects of decision-making and has also been called “the center of free will”. It is heavily involved in assessing the costs and benefits of different actions and in the anticipation of reward, which can motivate certain behaviors and decisions.
In summary, the neocortex doesn’t merely “serve” the limbic system but interacts with it in complex ways. For example, sensory inputs might be screened for specific patterns in parts of the neocortex and forwarded to the limbic system. The limbic system might then indeed propose impulses or desires based on emotional responses or ingrained preferences. Triggered by the limbic system, the neocortex then processes these impulses in a broader context, considering a range of factors including past experiences, future consequences, moral values, and social norms. Lastly, consider that the neocortex also evaluates the emotional value of stimuli and thereby influences emotional memory and potentially modifies future emotional responses of the limbic system. So, just as it would be misleading to deny the limbic system any agency and call it a “megaphone” that specific parts of the neocortex can use to alert the full neocortex, attributing all agency to the limbic system doesn’t make a lot of sense. Overall, multiple brain regions contribute necessary subfunctions, but no single area is sufficient on its own to fully explain complex functions like the construction of human motivations.
3.2 Key commonalities
Alignment with and contribution to formulating personal goals: One key argument is that the neocortex is aligned with your personal goals and that there should be a personal AI aligned with your goals. Such an alignment should generally be possible. However, much as with the personal computer, integrating personal AI with your brain through a neural interface does not seem to be a necessary condition for this.
More planning and prediction: The neocortex plays a large role in planning and in determining the chances of success of different courses of action. In a similar vein, it does not require too much projection to imagine that personal AI, whether integrated through a brain-computer interface or not, could boost planning and prediction further by providing step-by-step plans on how to achieve certain goals and by assessing the chances of success of various strategies in business and personal life.
Difficulty of predicting emergent abilities enabled by higher scale: As Kurzweil argues, “neocortex is neocortex”; what sets humans apart from other mammals is the quantity of neocortex, which enabled qualitatively new capacities for language, art, science, and technology. Kurzweil argues that it would have been impossible to predict the capabilities enabled by the enlarged neocortex in advance (“try explaining music to a primate”), and “we're going to create things that we can't even envision now, the way we did the last time we got more neocortex”. A similar uncertainty about emergent abilities applies to the effects of scaling artificial neural networks to previously unseen sizes, whether integrated into our brain through a brain-computer interface or not. In short, we cannot deterministically assert or reject that cyborg brains with more computing power will enable qualitatively new cognitive capacities, but it is at least a plausible hypothesis.
3.3 Key differences
Substrate: When we compare the neurons in the basal ganglia, the limbic system, and the neocortex, they are organized into different structures. However, overall, neurons in the neocortex are not that fundamentally different from neurons in the limbic system. In contrast, the exocortex would be based on a completely new substrate with neurons that are activated differently and that have a different learning algorithm.
Signal speed: Biological neurons can fire about 200 times per second, independently of whether they are in the limbic system or the neocortex. In contrast, modern computer processors operate at speeds of several gigahertz (GHz), with one GHz equal to 1,000,000,000 cycles per second. So, there would be a massive speed gap between a potential exocortex and the rest of the brain, as the back-of-the-envelope calculation below illustrates.
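A minimal sketch of that gap, assuming a typical 3 GHz clock rate (my illustrative figure, not from any specific benchmark) and keeping in mind that raw clock cycles are not directly comparable to the massively parallel firing of billions of neurons:

```python
# Rough ratio between a modern processor's clock rate and the maximum
# firing rate of a biological neuron. The ~3 GHz figure is an assumed,
# typical value for a consumer CPU; neurons fire at most ~200 Hz.
neuron_rate_hz = 200
cpu_clock_hz = 3_000_000_000  # ~3 GHz

speed_gap = cpu_clock_hz / neuron_rate_hz
print(f"Clock-to-firing-rate gap: ~{speed_gap:,.0f}x")  # ~15,000,000x
```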
Cybersecurity: If part of your brain is digital technology, it will be exposed to the general risks associated with that substrate, including attacks on the confidentiality, integrity, and availability of your thoughts. For example, someone might encrypt your thoughts and memories and demand money for decryption, or someone might plant false memories as part of a grandparent scam, romance scam, or business fraud.
Interoperable interface: The connections between your limbic system and your neocortex lie deep within your brain, and there is no easy way to access them, let alone to switch to another neocortex on top. While you would certainly want exocortex portability between providers, a brain-computer interface also brings new coercive possibilities (e.g., police interrogations).
Lifespan: The hardware of the exocortex would in some sense be less durable. Brain-computer interfaces will be limited by battery life and would likely have to be replaced every few years. At the same time, artificial neural networks or other types of software running on the exocortex could exist indefinitely and be essentially immortal.
Consciousness: We do not yet understand how consciousness arises in human brains, nor to what degree the neocortex is involved in it. However, it is commonly assumed that current computers and AI are not conscious. Hence, with the exocortex, a larger and larger share of your brain would be unconscious.
Symbiosis vs. Interdependence vs. Integration: Musk has argued that “your cortex and your limbic system are in a symbiotic relationship” and that the same should hold for the human-AI relationship. While we can all understand the intended meaning, it is still worthwhile to make a more fine-grained distinction. Strictly speaking, your brain areas are not in a symbiotic relationship. Further, while a human-AI symbiosis is plausible, the integration of an exocortex through a brain-computer interface is on its own neither sufficient nor necessary for it.
Symbiosis: Symbiosis describes a close, long-term interaction between two biological organisms of different species, which is beneficial for at least one of the species. For example, a clownfish and a sea anemone, a hippo and a barbel fish, a flowering plant and a bee, or, arguably, a human and a pet, such as a cat or a dog. There are also examples of endosymbiosis, where one symbiont lives within the other, such as some gut bacteria in humans.
Interdependence: The limbic system and the neocortex have not only co-evolved, but they are physically connected, cannot exist on their own, and share the same DNA. They depend on each other and on the rest of the human body to exist. It is not a choice of the basal ganglia, the limbic system, and the neocortex to live together. If one of them is removed, you die, which is why I am not surprised that no one wants to get rid of either their limbic system or their neocortex.
Integration: In cyborgs, the biomechatronic parts are designed and engineered to be integrated into an existing biological structure. Hence, integration might be a good term for an exocortex enabled by a seamless interface where mechanical and biological components work together.
Speed of brain evolution: The fastest observed biological brain growth has been the drastic enlargement of the neocortex in hominids, which we can approximate via endocranial volume. Still, this was not a one-off event at a specific point in time; rather, it was a process of roughly three million years. The average doubling period for brain volume, from Australopithecus to early Homo sapiens, was approximately 1.8 million years.
Moore’s Law is the observation that transistor counts on microchips have doubled about every 2 years for the last 60 years. According to EpochAI, the training compute for artificial neural networks has doubled every 6 months for the last 14 years. In short, artificial neural networks grow more than a million times faster than our neocortex did, as the simple division below shows.
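A worked version of that comparison, using only the two doubling periods quoted above:

```python
# Compare doubling periods (both figures from the text above).
brain_doubling_years = 1_800_000  # neocortex volume, Australopithecus -> early Homo sapiens
compute_doubling_years = 0.5      # training compute, per EpochAI (6 months)

ratio = brain_doubling_years / compute_doubling_years
print(f"Training compute doubles ~{ratio:,.0f}x faster")  # ~3,600,000x
```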
Upper limit of exocortex size: As Ray Kurzweil explains: “this expansion wirelessly into the cloud will not be a one-shot deal because the cloud is not limited by a fixed enclosure it's growing exponentially as we speak so will become a hybrid of biological and non-biological intelligence the non-biological part will grow higher exponentially.” The neocortex makes up about 80% of the human brain volume, 65% of its weight, and 20% of its neurons. So, it is a big part of our brain, but it’s also not 99.9999%. In contrast, if digital computing power continues to expand at the speed of Moore’s Law or even Huang’s Law, the exocortex would completely dominate the human brain within a decade, as the toy calculation below illustrates.
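A toy calculation under loudly hypothetical assumptions of my own: the exocortex starts at parity with the biological brain’s capacity, the biological part stays constant, and the digital part doubles every 2 years (Moore’s Law pace):

```python
# Hypothetical sketch: digital share of a hybrid brain over one decade.
# Assumptions (not from the text): exocortex starts equal to the biological
# brain's capacity and doubles every 2 years; biology stays constant.
bio = 1.0  # biological capacity, held constant
exo = 1.0  # digital capacity at year 0 (assumed starting point)
for year in range(0, 12, 2):
    share = exo / (bio + exo)
    print(f"Year {year:2d}: digital share = {share:.1%}")
    exo *= 2  # one doubling per 2 years
# Ends at ~97% digital after 10 years; at Huang's Law pace (faster
# doublings), dominance would arrive even sooner.
```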
Elasticity of exocortex size: If your brain has a large “non-biological layer” that runs in the cloud, human brain power could adjust much more flexibly to short-term demand variation than if it were attached to a human body. For example, you would likely have a much larger exocortex during waking hours than during sleep. Similarly, you might want to use an extra-large brain for multi-tasking, job interviews, dates, or large-scale investments, whereas you might prefer a quieter brain for meditation, jogging, or watching a comedy.
Distribution and variety of exocortex: Neocortex is geographically distributed in accordance with global population distribution, and there are no dramatic differences in neocortex size or shape between humans. There are no “neocortex billionaires” who have more neocortex than entire countries. In contrast, artificial computing power and capital are distributed much more unequally, both within and between countries. A few US companies own about as much computing power as the rest of the world.
Ownership of exocortex: You are the owner of your current brain. If you rely on data storage and AI as-a-service, the infrastructure of an increasing share of your brain will be owned by hyperscalers, such as Amazon, Microsoft, and Google, and you will be at the mercy of their terms of service. Anything that applies to your extended mind today, namely that your cloud provider might use your data for training, screen it for illegal content, or share it with a government, would also apply to your thoughts that are automatically shared with your exocortex. While there is already a burgeoning movement for neurorights that aims to protect freedom of thought and privacy within your head, such questions would be even trickier for a hybrid brain.
Autonomous viability: The neocortex is part of your brain, which is part of your body. The neocortex is vitally dependent on the rest of your body; it cannot exist outside of it, it cannot replicate itself, and it does not make any decisions purely on its own. So, we may attribute some of your agency to it, but it is not an autonomous agent. In contrast, artificial neural networks come in all shapes and sizes. They are already deeply embedded in the human economy without any brain-computer interface, and we can only expect them to become more numerous and more autonomous. So, even if the exocortex materializes, it will only ever be one specific subform in which artificial neural networks exist, and most likely not a dominant one.
Paul MacLean. (1964). Man and his animal brains. Modern Medicine (Chicago), 32, 95-106.; Paul MacLean. (1973). A triune concept of the brain and behavior. The Hincks Memorial Lectures. University of Toronto Press.
For a more comprehensive history see Exocortex. (2023). transhumanism.fandom.com
Carl Sagan. (1977). The Dragons of Eden: Speculations on the Evolution of Human Intelligence. Random House.
Paul MacLean. (1990). The Triune Brain in Evolution: Role in Paleocerebral Functions. Plenum Press.
At the time of writing, this is described correctly on the Wikipedia page “Triune brain” but wrongly on the page “Limbic system”, WaitButWhy, and most colored brain illustrations depicting the triune brain theory.
Byrne, R. W., & Whiten, A. (Eds.). (1988). Machiavellian intelligence: Social expertise and the evolution of intellect in monkeys, apes, and humans. Clarendon Press/Oxford University Press.
Digression: I would be interested in a Jordan Peterson-esque reading of the Bible, in which the triune Christian Godhead is a metaphor for the triune brain: The Father provides the expected reward and expected punishment. Subsequently, we get to the Son, the product of motherly love, which leads to the limbic system, and then of course the newest addition, the neocortex, the Holy Spirit, which gives humans the fiery tongues to go and spread the word!
Georg Striedter. (2004). Principles of Brain Evolution. Sinauer Associates. pp. 31-37.
I prefer to use the term “misleading” here because a discussion of the accuracy of the Triune Brain theory is a multi-level challenge. The popular use of Triune Brain theory often does not correspond to MacLean’s writings. MacLean’s writings also do not always reflect the state of the art in neuroscience, but an assessment is complicated by the fact that MacLean does not always articulate his theory in a clear, well-structured manner. Hence, there is some ambiguity, or even a motte-and-bailey, about what claims Triune Brain theory makes. For example, MacLean’s main illustration, which he used consistently since the 1960s to promote the Triune Brain theory, only includes the human brain and communicates a layered evolution of three brains. However, in his 1990 book MacLean also states, in response to criticism, that the layered view of his theory is a misinterpretation. Paul MacLean. (1990). The Triune Brain in Evolution: Role in Paleocerebral Functions. Plenum Press. p. 9
Paul MacLean. (1990). The Triune Brain in Evolution: Role in Paleocerebral Functions. Plenum Press. p. 519