> The progress of knowledge—and the fact that we’re educated about it—lets us get to a certain level of abstraction. And, one suspects, the more capacity there is in a brain, the further it will be able to go.
This is the underlying assumption behind most of the article, which is that brains are computational, so more computation means more thinking (ish).
I think that's probably somewhat true, but it misses the crucial thing our minds do, which is that they conceptually represent and relate. The article talks about this, but glosses over that part a bit.
In my experience, the people who have the deepest intellectual insights aren't necessarily the ones with the most "processing power"; they often have good intellectual judgement about where their own ideas stand, and a strong understanding of the limits of their judgements.
I think we could all, at least hypothetically, go a lot further with the brain power we have, and similarly, fail just as much, even with more brain power.
I didn’t see any mention of the environment or embodied cognition, which seems like a limitation to me.
> Embodied cognition variously rejects or reformulates the computational commitments of cognitive science, emphasizing the significance of an agent’s physical body in cognitive abilities. Unifying investigators of embodied cognition is the idea that the body or the body’s interactions with the environment constitute or contribute to cognition in ways that require a new framework for its investigation. Mental processes are not, or not only, computational processes. The brain is not a computer, or not the seat of cognition.
https://plato.stanford.edu/entries/embodied-cognition/
I’m in no way an expert on this, but I feel that any approach which over-focuses on the brain, to the exclusion of the environment and the physical form it finds itself in, is missing half or more of the equation.
This is IMO a typical mistake that comes mostly from our Western metaphysical tendency to see the body as specialized pieces that make up a whole, rather than as a complete unit.
After reading this article, I couldn’t help but wonder how many of Stephen Wolfram’s neurons he uses to talk about Stephen Wolfram, and how much more he could talk about Stephen Wolfram with a few orders of magnitude more neurons.
I kind of skipped through this article, but one thing occurs to me about big brains: cooling. In Alastair Reynolds' Conjoiner novels, the Conjoiners have to have heat sinks built into their heads, and are on the verge of not really being human at all. Which I guess may be OK, if that's what you want.
The African elephant has about 3 times as many neurons (2.57×10^11) as the average human (8.6×10^10). The pilot whale has 1.28×10^11.
Perhaps they see the bigger picture, and realize that everything humans are doing is pretty meaningless.
Considering our intelligence stems from our ability to use Bayesian inference and generative probabilities to predict future states, are we even limited by brain size, and not by a lack of new experiences?
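To make that premise concrete (my gloss, not the commenter's): prediction-as-Bayesian-inference just means reweighting hypotheses h against each new experience d,

    P(h \mid d) = \frac{P(d \mid h)\, P(h)}{\sum_{h'} P(d \mid h')\, P(h')}

and on that view more neurons enlarge the hypothesis space, but without new experiences d there is nothing to update on.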
The majority of people spend their time working repetitive jobs during times when their cognitive capacity is most readily available. We're probably very very far from hitting limits with our current brain sizes in our lifetimes.
If anything, smaller brains may promote early generalization over memorization.
There seems to be an implicit assumption here that smarter == more gooder, but I don't know that that is necessarily always true. It's understandable to think that way, since we do have pretty impressive brains, but it might be a bit of a bias. I'm not saying that being dumber, as a species, is something to aim for, but this obsession with intelligence, artificial or otherwise, is maybe a bit misplaced wrt its potential for solving all of our problems. One could argue that, in fact, most of our problems are the direct result of that same intellect, and that maybe we would be better served by figuring out how to responsibly use the thinkwots we've already got before we go rushing off in search of the proverbial Big Brain Elixir.
A guy that drives a minivan like a lunatic shouldn't be trying to buy a monster truck, is my point
"Minds beyond ours", how about abstract life forms, like publicly traded corporations. We've had higher kinded "alien lifeforms" around us for centuries, but we have not noticed them and seem generally not to care about them, even when they have negative consequences for our survival as a species.
We are to these like ants are to us. Or maybe even more like mitochondria are to us. Were just the mitochondria of the corporations. And yes, psychopaths are the brains, usually. Natural selection I guess.
Our current way of thinking – what exactly *is* a 'mind' and what is this 'intelligence' – is just too damn narrow. There's tons of overlap of sciences from biology that apply to economics and companies as lifeforms, but for some reason I don't see that being researched in popular science.
You gotta admire the dedication to shoehorning cellular automata into every discipline he encounters.
Wolfram’s “bigger brains” piece raises the intriguing question of what kinds of thinking, communication, or even entirely new languages might emerge as we scale up intelligence, whether in biological brains or artificial ones.
It got me thinking that, over millions of years, human brain volume increased from about 400–500 cc in early hominins to around 1400 cc today. It’s not just about size: the brain’s wiring and complexity also evolved, which in turn drove advances in language, culture, and technology, all of which are deeply interconnected.
With AI, you could argue we’re witnessing a similar leap, but at an exponential rate. The speed at which neural networks are scaling and developing new capabilities far outpaces anything in human evolution.
It makes you wonder how much of the future will even be understandable to us, or if we’re only at the beginning of a much bigger story. Interesting times ahead.
We have massively increased our collective brainpower by scaling out, not up. Going from a population of 8M to 8Bn is a 1000× increase.
We often struggle to focus and think deeply, not because we are not trying hard enough, but because the limitations are built into our brains. Maybe the things we find difficult today are not really that complex; we are just not naturally wired for that kind of understanding.
If you make the brain larger, some things will get worse, rather than better. The cost of communication will be higher, it will get harder to dissipate heat, and so on.
It's quite possible evolution already pushed our brain size to the limit of what actually produces a benefit, at least with the current design of our brains.
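A back-of-the-envelope way to see the heat point (my arithmetic, not the parent's): for a roughly spherical brain of radius R, heat generated scales with volume while dissipation is limited by surface area,

    V \propto R^3, \quad S \propto R^2, \quad S/V \propto 1/R

so every doubling of radius halves the surface available per unit of heat produced, and also roughly doubles the average wire length between two neurons.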
The more obvious improvement is just to use our brains more. It costs energy to think, and for most of human existence food was limited, so evolution naturally created a brain that tries to limit energy use, rather than running at maximum as much as possible.
After reading this article, I couldn't help but wonder what would happen if our brains were bigger? Sure, it's tempting to imagine being able to process more information, or better understand the mysteries of the universe. But I also began to wonder, would we really be happier and more fulfilled?
Would a bigger brain make us better problem solvers, or would it just make us more lonely and less able to connect with others? Would allowing us to understand everything also make us less able to truly experience the world as we do now?
"...a single thread of experience through time."
Do human brains in general always work like this at the level of consciousness? Dream states of consciousness exist, but they also seem single-threaded, even if the state jumps around in ways more like context switching in an operating system than the steady awareness of the waking conscious mind. Then there are special cases - schizophrenia and dissociative identity disorders - in which multiple threads of experience apparently do exist in one physical brain, with all the problems this situation creates for the person in question.
Now, could one create a system of multiple independent single-threaded conscious AI minds, each trained in a specific scientific or mathematical discipline, but communicating constantly with each other and passing ideas back and forth, to mimic the kind of scientific discovery that interdisciplinary academic and research institutions are known for? Seems plausible, but possibly a bit frightening - who knows what they'd come up with? Singularity incoming?
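The plumbing, at least, is easy to sketch. Here is a toy version in Python, with trivial string transformers standing in for the AI minds (every name here is invented for illustration, and nothing in it is remotely conscious):

    # Toy sketch of the architecture above: independent single-threaded
    # "specialists" that pass ideas back and forth over queues.
    import queue
    import threading

    def specialist(name, inbox, outbox, steps=3):
        for _ in range(steps):
            idea = inbox.get()                  # block until a peer shares an idea
            refined = f"{idea} -> {name} take"  # stand-in for real reasoning
            print(refined)
            outbox.put(refined)                 # pass it onward

    a_in, b_in = queue.Queue(), queue.Queue()
    a_in.put("seed idea")  # kick off the exchange
    physics = threading.Thread(target=specialist, args=("physics", a_in, b_in))
    biology = threading.Thread(target=specialist, args=("biology", b_in, a_in))
    physics.start(); biology.start()
    physics.join(); biology.join()

Replace the string transformer with actual models and the queues with a network, and you have the interdisciplinary back-and-forth described above, minus the hard part.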
I feel it is not just about bigger, but also about how much of what the current brain supports is no longer that useful for intelligence as such. The evolutionary path of our brain has an absolutely major focus on keeping itself alive and, on that basis, keeping the organism alive. Humans will often make sub-optimal decisions because the optimal decision carries even a small probability of death for themselves or those genetically related to them. In some sense it is similar to a manned fighter jet vs a drone: in one case a large amount of effort and detail is expended on keeping the operating envelope consistent with keeping the human alive, whereas a drone can expand the envelope far more because the human is no longer a concern. If we could jettison some of the evolutionary baggage of the brain, it could potentially do so much more even within the same space.
I assume that we are neurons in a bigger brain that already exists!
I started down this belief system with https://en.wikipedia.org/wiki/G%C3%B6del,_Escher,_Bach
Nonsense.
Neurology has shown numerous times that it’s not about the size of the toolbox but the diversity of tools within. The article starts with the observation that cats can’t talk. Humans can talk because we have a unique brain component dedicated to auditory speech parsing. Cats do, however, appear to attend to the other aspects of human communication as precisely as, and sometimes much more precisely than, many humans.
The reason size alone doesn’t matter is that the cerebellum packs roughly 80% of the brain’s neurons into only about 10% of its volume, and it isn’t the academic or creative part of the brain. Instead it processes things like motor function, sensory processing (not vision), and more.
The second most intelligent class of animals are corvids, and their brains are super tiny. If you want to be smarter, increase your processing diversity, not your capacity.
Those of you who have used psychedelics might have personal experience with this question.
There can be moments of lucidity during a psychedelic session where it's easy to think of discrete collections as systems, and to imagine those systems behaving with specific coherent strategies. Unfortunately, an hour or two later, the feeling disappears. But it leaves a memory of briefly understanding something that can't be understood. It's frustrating, yet profound. I assume this is where feelings of oneness with the universe, etc., come from.
I imagine that even if we did, this article would still be way too long.
I found some of it interesting, but there are just too many words in there and not much structure or substance.
An interesting thought I had while reading the section on how larger brains allow more complicated language to represent context:
Why are we crushing the latent space of an LLM down to a text representation when doing LLM-to-LLM communication? What if you skipped decoding the vectors to text and just fed them directly into the next agent? They're so much richer in information.
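A minimal sketch of the idea, assuming Hugging Face-style models whose forward pass accepts inputs_embeds; the bridge layer is invented and untrained here, so this shows the plumbing only, not a working system:

    # Hand one model's hidden states to another model directly, skipping
    # the decode-to-text step. gpt2 is used only because it is small.
    import torch
    import torch.nn as nn
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    sender = AutoModelForCausalLM.from_pretrained("gpt2")
    receiver = AutoModelForCausalLM.from_pretrained("gpt2")

    # Untrained adapter mapping sender states into the receiver's
    # embedding space; in a real system this would have to be learned.
    bridge = nn.Linear(sender.config.hidden_size, receiver.config.hidden_size)

    ids = tok("an idea to pass along", return_tensors="pt").input_ids
    with torch.no_grad():
        # Final-layer hidden states: the "latent" message, one vector per token.
        msg = sender(ids, output_hidden_states=True).hidden_states[-1]
        # The receiver consumes the vectors in place of token embeddings.
        out = receiver(inputs_embeds=bridge(msg))
    print(out.logits.shape)

The catch is that last comment: without training the bridge (and probably both models), the receiver has no idea what the sender's vectors mean, which is presumably why everyone collapses to text today.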
If you want to go down the rabbit hole of higher-order intelligences, look up egregores. I know John Vervaeke and Jordan Hall have done some work trying to describe them, as have other people studying cognition. But when you get into that, you start finding religion discussing how to interact with them (after all, aren't such intelligences what people used to call gods?)
What if our brains lived in a higher dimensional space, and there was more room for neuron interconnectivity and heat dissipation?
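The geometry does point that way (my arithmetic, not the comment's): for a ball of radius R in d dimensions,

    V \propto R^d, \quad S \propto R^{d-1}, \quad S/V = d/R

so at a fixed radius, each extra dimension buys proportionally more surface for shedding heat, and the number of neurons reachable within a wire length r grows like r^d instead of r^3.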
Completely ignores any sort of scaling in emotional intelligence. Bro just wants a DX4.
Our brains aren't that impressive in the animal kingdom. What makes humans (dangerously) dominant, apart from brain size, is their hands. Once humans started building, their brain power adapted to new targets.
This is why, once a tool like Neuralink reaches a certain threshold of capability and enough people use it, you will be forced to chip yourself and your kids; otherwise they will be akin to the chimps in the zoo. Enhanced human minds will work at a level unreachable by natural minds, and those natural minds will be left behind. It's a terrifying view of where we are going and where most of humanity will likely be forced to go. On top of that, there will be an arms race to create and upgrade faster and more capable implants.
Makes me wonder if "bigger brains" would feel more like software-defined minds than just smarter versions of ourselves
I've often felt like this is one of the most serious issues faced by modern society: very small brains...
> What If We Had Bigger Brains?
Nothing. Elephants have bigger brains, but they didn't create civilization.
As brains get bigger, you get more compute, but you have to solve the "commute" problem. Messages have to be passed from one corner to the other, and fast. And there are so many input signals coming in (for us, likely from thirty trillion cells, or at least a significant fraction of those). Not all are worth transporting to other corners. Imagine a little tickle on your toe. Should that be passed on? Usually no, unless you are in an area with creepy crawlies, or some other such situation. So decisions have to be made. But who will make these decisions for us? (A fascinating, inevitably recursive question we'll come back to.)
This commute is pretty much ignored when making artificial brains, which can guzzle energy, but it matters critically for biological brains. It needs to be (metabolically) cheap, and fast. What we perceive as consciousness is very likely a consensus mechanism that helps 100 billion neurons collectively decide, at a very biologically cheap price, what data is worth transporting to all corners so that it can become meaningful information. And it has to be recursive, because these very same 100 billion neurons are collectively making up meaning along the way. This face matters to me, that one does not, and so on. Replace face with anything and everything we encounter. So to solve the commute problem resulting from a vast amount of compute, we have a consensus mechanism that gives rise to a collective. That collective is the I, and the consensus mechanism is consciousness.
We explore this (but not in these words) in our book Journey of the Mind.
You'll find that no other consciousness model talks about the "commute" problem, because these are simply not biologically constrained models. They just assume that the information processing and message passing will be done in some black box. Trying to get all this done with the same type of compute (cortical columns, for instance) is a devilishly hard challenge (please see the last link for more about this). Sweep that under the rug, and consciousness becomes this miraculous and seemingly unnecessary thing that somehow sits on top of information processing. So you then have theorists worrying about philosophical zombies and whatnot, because the hard engineering problem of commute was entirely ignored.
https://www.goodreads.com/en/book/show/60500189-journey-of-t...
https://saigaddam.medium.com/consciousness-is-a-consensus-me...
https://saigaddam.medium.com/conscious-is-simple-and-ai-can-...
https://saigaddam.medium.com/the-greatest-neuroscientist-you...
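A toy of my reading of that consensus mechanism (mine, not the book's actual model): regions vote on each incoming signal, and only signals with broad support pay the transport cost of going global, like the toe tickle above. The relevance numbers are invented for illustration:

    # Toy consensus broadcast: regions "vote" on signals; only widely
    # supported signals are shipped to every other corner of the brain.
    import random

    relevance = {"tickle on toe": 0.1, "face in crowd": 0.6, "loud bang": 0.9}
    N_REGIONS = 1000   # stand-ins for neuron populations
    THRESHOLD = 0.5    # fraction of votes needed for a global broadcast

    rng = random.Random(0)
    for signal, rel in relevance.items():
        votes = sum(rng.random() < rel for _ in range(N_REGIONS))
        if votes / N_REGIONS >= THRESHOLD:
            print(f"broadcast: {signal!r}")    # worth the commute
        else:
            print(f"stays local: {signal!r}")  # never reaches other corners

The tickle stays local while the bang goes global, which is the cheap filtering the comment describes; the recursive part (the voters also learning what is relevant) is exactly what this toy leaves out.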
we would be elephants or whales? (sorry couldn't resist)
Seems like the 'intellectual paradox', where someone who thinks hard about subjects concludes that all learning is done by thinking hard, i.e. attending to a subject with the conscious mind.
Clearly that's not always the case. So many examples: we make judgements about a person within seconds of meeting them, with no conscious thoughts at all. Likewise, we decide whether we like a food.
I read code to learn it, just page through it, observing it, not thinking in words at all. Then I can begin to manipulate it, debug it. Not with words, or a conscious stream. Just familiarity.
My son plays a piece from sheet music, slowly and deliberately, phrase by phrase, until it sounds right. Then he plays through more quickly. Then he has it. Not sure conscious thoughts were ever part of the process. Certainly not words or logic.
So many examples are possible.
"can't run code inside our brains" lad... speak for yourself.
Sperm whales have the largest brains on earth but they have not invented fire, the wheel or internal combustion engine or nuclear weapons... Oh wait. Hmmm.
Imagine how hungry we'd be.
3 other posters submitted the same link. Why did this one blow up?
> At 100 billion neurons, we know, for example, that compositional language of the kind we humans use is possible. At the 100 million or so neurons of a cat, it doesn’t seem to be.
The implicit presupposition here, that neuron count generally scales with ability, is the latest in a long line of extremely questionable lines of thought from Mr. Wolfram. I understand having a blog, but why not separate it from your work life with a pseudonym?
> In a rough first approximation, we can imagine that there’s a direct correspondence between concepts and words in our language.
How can anyone take someone who thinks this way seriously? Can any of us imagine a human brain that directly relates words to concepts, as if "run" had a single direct conceptual meaning? He clearly prefers the sound of his own voice to how it is received by others. That, or he only talks with people who never bothered to read the last 200 years of European philosophy. Which would make sense, given his seeming adoration of LLMs.
There's a very real chance that more neurons would hurt our health. Perhaps our brain is structured in a way to maximize their use and minimize their cost. It's certainly difficult to justify brain size as a super useful thing (outside of my big-brained human existence) looking at the evolutionary record.
I think Wolfram may be ignoring what we already know of brains with more neurons than most have. We call them "autistic" and "ADHD".
More is not always better, indeed it rarely is in my experience.
There are larger minds than ours, and they've been well-attested for millennia as celestial entities, i.e. spirits. Approaching it purely within the realm of the kinds of minds that we can observe empirically is self-limiting.
The one statement repeated throughout the article, if I interpreted it correctly, is that our brains pretty much process all the data in parallel, but produce a single set of actions to perform.
But don't we all know that not to be true? This is clearly evident when training for sports, learning to play an instrument, or even forcing yourself to start writing with your non-dominant hand — and really, with anything you are doing for the first time.
While we are adapting our brain to perform a certain set of new actions, we build our capability to do those in parallel: e.g. when you start playing tennis, you need to focus on your position, posture, grip, observing the ball, observing the opposing player, and looking at your surroundings, and then you make decisions on the spot about how hard to run, in what direction, how to turn the racquet head, how strong your grip should be, what follow-through to use, plus the conscious strategy that always lags a bit behind.
In a sense, we can't really describe our "stream of consciousness" well with language, but it's anything but single-threaded. I believe the problem comes from the same root cause as any concurrent programming challenge — these are simply hard problems, even if our brains are good at it and the principles are simple.
At the same time, I wouldn't even go so far as to say we are unable to think conscious thoughts in parallel; it's just that we are trained from an early age to sanitize our "output". Has anyone ever tried learning to verbalize one thought in sign language while vocalizing a different thought through speech? I am not convinced it's impossible; we might just not have figured out the training for it.