Inferring neural activity before plasticity for learning beyond backpropagation

by warkanlock on 11/27/2024, 8:00 PM with 48 comments

by lukeinator42 on 11/27/2024, 9:37 PM

It has been clear for a long time (e.g. Marvin Minsky's early research) that:

1. both ANNs and the brain need to solve the credit assignment problem
2. backprop works well for ANNs but probably isn't how the problem is solved in the brain

This paper is really interesting, but it is more a novel theory about how the brain solves the credit assignment problem. The HN title makes it sound like differences between the brain and ANNs were previously unknown, which is misleading IMO.

by yongjik on 11/27/2024, 9:31 PM

The title of the paper is: "Inferring neural activity before plasticity as a foundation for learning beyond backpropagation"

The current HN title ("Brain learning differs fundamentally from artificial intelligence systems") seems very heavily editorialized.

by robwwilliams on 11/28/2024, 12:51 AM

Not my area of expertise, but this paper may be important because it is more closely aligned with the “enactive” paradigm of understanding brain-body-behavior and learning than a backpropagation-only paradigm.

(I like enactive models of perception such as those advocated by Alva Noe, Humberto Maturana, Francisco Varela, and others. They get us well beyond the straitjacket of Cartesian dualism.)

Rather than have error signals tweak synaptic weights after a behavior, a cognitive system generates a set of actions it predicts will accommodate its needs. This can apparently be accomplished without requiring short-term synaptic plasticity. Then, if all is good, weights are modified in a secondary phase that is more about asserting the utility of the “test” response. More selection than descent. The emphasis is more on feedforward modulation and selection. Clearly there must be error-signal feedback, so some of you may argue that the distinction will be blurry at some levels. Agreed.

I look forward to reading it more carefully to see how far off base I am.

by pharrington on 11/27/2024, 10:18 PM

Theories that brains predict the pattern of expected neural activity aren't new (e.g. this paper cites work toward the Free Energy Principle, but not work on Embodied Predictive Interoception Coding). I have 0 neuroscience training, so I doubt I'd be able to reliably answer my question just by reading this paper, but does anyone know how specifically their Prospective Configuration model differs from, or expands upon, the previous work? Is it a better model of how brains actually handle credit assignment than the aforementioned models?

by oatmeal1 on 11/27/2024, 10:23 PM

> In prospective configuration, before synaptic weights are modified, neural activity changes across the network so that output neurons better predict the target output; only then are the synaptic weights (hereafter termed ‘weights’) modified to consolidate this change in neural activity. By contrast, in backpropagation, the order is reversed; weight modification takes the lead, and the change in neural activity is the result that follows.

What would neural activity changes look like in an ML model?
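Not the paper's own code (none is linked here), but a minimal NumPy sketch of the general idea in the predictive-coding family it builds on may help picture it: with the input and target clamped, the hidden activity is first relaxed toward values that better explain the target (the "neural activity change"), and only afterwards are the weights nudged toward that relaxed activity with local, Hebbian-like updates. The layer sizes, learning rates, and relaxation loop are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny input -> hidden -> output net; sizes are arbitrary for illustration.
n_in, n_hid, n_out = 4, 8, 2
W1 = rng.normal(scale=0.1, size=(n_hid, n_in))
W2 = rng.normal(scale=0.1, size=(n_out, n_hid))

f = np.tanh                                # activation
df = lambda v: 1.0 - np.tanh(v) ** 2       # its derivative

def train_step(W1, W2, x_in, target, n_relax=20, lr_x=0.1, lr_w=0.01):
    # Phase 1 (inference): clamp input and output, then relax the hidden
    # activity x1 by gradient descent on the total squared prediction error.
    x1 = W1 @ f(x_in)                      # start from the feedforward guess
    x2 = target                            # output clamped to the target
    for _ in range(n_relax):
        e1 = x1 - W1 @ f(x_in)             # prediction error at the hidden layer
        e2 = x2 - W2 @ f(x1)               # prediction error at the output layer
        x1 -= lr_x * (e1 - df(x1) * (W2.T @ e2))
    # Phase 2 (plasticity): only now are the weights modified, with local
    # updates that consolidate the relaxed activity pattern.
    e1 = x1 - W1 @ f(x_in)
    e2 = x2 - W2 @ f(x1)
    W1 = W1 + lr_w * np.outer(e1, f(x_in))
    W2 = W2 + lr_w * np.outer(e2, f(x1))
    return W1, W2

x, y = rng.normal(size=n_in), np.array([1.0, -1.0])
W1, W2 = train_step(W1, W2, x, y)
```

The contrast with backprop is the ordering: here the activities move first and the weight change merely ratifies them, whereas in backprop the gradient changes the weights and the activity change is whatever follows.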

by dboreham on 11/27/2024, 9:47 PM

The paper actually says that they fundamentally do learn the same way, but that the fine details differ. Not too surprising.

by robotresearcher on 11/27/2024, 9:32 PM

The post headline is distracting people and making for a poor discussion. The paper describes a learning mechanism that has advantages over backprop and may be closer to what we see in brains.

The contribution of the paper, and its actual title, is about the proposed mechanism.

All the comments amounting to ‘no shit, Sherlock’ are about the mangled headline, not the paper.

by eli_gottlieb on 11/27/2024, 10:01 PM

Oh hey, I know one of the authors on this paper. I've been meaning to ask him at NeurIPS how this prospective configuration algorithm works for latent variable models.

by yellowapple on 11/28/2024, 1:08 AM

The title of this post doesn't seem to have any connection to the title or content of the linked article.

by blackeyeblitzar on 11/27/2024, 9:38 PM

The comments here saying this was obvious, or something else more negative, are disappointing. Neural networks are named for neurons in biological brains. There is a lot of inspiration in deep learning that comes from biology. So the association is there. Pretending you’re superior for knowing the two are still different contributes nothing. Doing so in more specific ways, or attempting to further understand the differences between deep learning and biology through research, is useful.

by ilaksh on 11/28/2024, 2:09 AM

Looks amazing if it pans out at scale. Would be great if someone tried this with one of those simulated robotic training tasks that always have thousands or millions of trials rather than just CIFAR-10.

by nickpsecurity on 11/27/2024, 8:56 PM

Some are surprised that anyone would make this point, whether in the title or the research.

It might be a response to the many, many claims in articles that neural networks work like the brain, even using terms like neurons and synapses. As those claims spread, people also start building theories on top of them that make AIs seem more like humans. Then we won’t need humans, or they’ll go extinct, or something.

Many of us who are tired of that are both countering it and using different terms for each where possible. So I’m calling the AIs models, saying model training instead of learning, and describing them as finding and acting on patterns in data. Even laypeople seem to understand these terms with less confusion about them being just like brains.

by revskill on 11/28/2024, 1:36 AM

It is a good thing, as I do not admire the human brain much. You learn things slowly...

by CatWChainsaw on 11/28/2024, 4:09 PM

"AI and Human learn differently."

Obviously. So can the scraping grifters who claim that AI 'learns just like a human' please shut up and never inflict their odious presence on the rest of humanity again? And also pay 10X damages for ruining the Internet.

by nextworddev on 11/28/2024, 2:11 AM

Brain learns through pain. LLMs learn through expending energy.

by josefritzishere on 11/27/2024, 8:20 PM

Surprise factor zero.

by isaacimagine on 11/27/2024, 8:29 PM

Wait, my brain doesn't do backprop over a pile of linear algebra after having the internet rammed through it? No way that's crazy /s

tl;dr: the paper proposes a principle called 'prospective configuration' to explain how the brain does credit assignment and learns, as opposed to backprop. Backprop can lead to 'catastrophic interference', where learning new things ablates old associations, which doesn't match observed biological processes. From what I can tell, prospective configuration learns by solving for what the activations should have been to explain the error, and then updates the weights accordingly, which apparently somehow avoids ablating old associations. They then show how prospective configuration explains observed biological processes. Cool stuff, wish I could find the code (a toy sketch of the interference point follows below the link). There's some supplemental notes:

https://static-content.springer.com/esm/art%3A10.1038%2Fs415...
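A toy illustration of the catastrophic-interference point with plain backprop/SGD (not from the paper; the data, network sizes, and hyperparameters are made up): train a small network on one task, then on a second, and the loss on the first typically climbs back up because the same weights get overwritten.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two made-up regression "tasks" sharing one small network.
XA = rng.normal(size=(64, 4)); YA = np.sin(XA @ rng.normal(size=(4, 1)))
XB = rng.normal(size=(64, 4)); YB = np.cos(XB @ rng.normal(size=(4, 1)))

W1 = rng.normal(scale=0.5, size=(4, 16))
W2 = rng.normal(scale=0.5, size=(16, 1))

def mse(X, Y, W1, W2):
    return float(np.mean((np.tanh(X @ W1) @ W2 - Y) ** 2))

def sgd(X, Y, W1, W2, epochs=500, lr=0.05):
    # Ordinary backprop: the weights move first, activity just follows.
    for _ in range(epochs):
        H = np.tanh(X @ W1)
        err = (H @ W2 - Y) / len(X)                  # output error
        gW2 = H.T @ err                              # gradient w.r.t. W2
        gW1 = X.T @ ((err @ W2.T) * (1.0 - H ** 2))  # gradient w.r.t. W1
        W1, W2 = W1 - lr * gW1, W2 - lr * gW2
    return W1, W2

W1, W2 = sgd(XA, YA, W1, W2)
loss_A_before = mse(XA, YA, W1, W2)
W1, W2 = sgd(XB, YB, W1, W2)
loss_A_after = mse(XA, YA, W1, W2)   # typically much worse: old task ablated
print(loss_A_before, loss_A_after)
```

Prospective configuration is claimed to reduce exactly this kind of interference; this sketch only shows the backprop side of the comparison.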

by tantalor on 11/27/2024, 8:23 PM

No shit, really?

by johnea on 11/27/2024, 9:03 PM

Was a study really necessary for this?

Do "AI" fanbois really think LLMs work like a biological brain?

This only reinforces the old maxim: Artificial intelligence will never be a match for natural stupidity

by FrustratedMonky on 11/27/2024, 8:34 PM

"does not learn like human" does not mean "does not learn".

It is alien to us, that doesn't mean it is harmless.