Philosophical thought: if the aim of this field is to create an artificial human brain, then it would be fair to say that the more advanced the field becomes, the smaller the difference between the artificial brain and a real one. This raises two questions:
1) Is the ultimate form of this technology ethically distinguishable from a slave?
2) Is there an ethical difference between bioengineering an actual human brain for computing purposes, versus constructing a digital version that is functionally identical?
I've been building a 'neuromorphic' kernel/bare-metal OS that runs on Mac hardware using APL primitives as its core layer. Time is treated as another 'position': the kernel itself is vector-oriented and uses 4D addressing over a 32x32x32 'neural substrate'.
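Treating time as a fourth coordinate over a 32x32x32 substrate amounts to flattening (t, x, y, z) into a linear address. A minimal sketch of what that could look like, assuming row-major flattening (the actual kernel's layout is not described, so this is purely illustrative):

```python
# Hypothetical 4D addressing over a 32x32x32 substrate, with time as a
# fourth coordinate. Row-major flattening is an assumption, not the
# commenter's actual scheme.
N = 32  # substrate edge length

def addr(t, x, y, z, n=N):
    """Flatten a (t, x, y, z) coordinate into a linear cell address."""
    for c in (x, y, z):
        assert 0 <= c < n, "spatial coordinate out of range"
    return ((t * n + x) * n + y) * n + z

# Consecutive time steps are n**3 = 32768 cells apart:
print(addr(1, 0, 0, 0) - addr(0, 0, 0, 0))  # 32768
```

With this layout a whole time slice is one contiguous block, which is the kind of thing an array language like APL handles naturally with a single reshape.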
I am so ready and eager for a paradigm shift of hardware & software. I think in the future 'software' will disappear for most people, and they'll simply ask and receive.
Neuromorphic computation has been hyped for ~20 years now. So far it has dramatically underperformed, at least relative to that hype.
The article does not distinguish between training and inference. Google's Edge TPUs (https://coral.ai/products/) are each capable of performing 4 trillion operations per second (4 TOPS) using 2 watts of power, i.e. 2 TOPS per watt. So inference already comes in well under the 20 watts the paper attributes to the brain. To be sure, LLM training is expensive, but so is raising a child for 20 years. Unlike the child, LLMs can share weights and amortise the energy cost of training.
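The efficiency claim here is simple arithmetic; a quick sanity check, using only the figures quoted in the comment (not independently verified):

```python
# Back-of-the-envelope check of the Edge TPU numbers from the comment.
edge_tpu_tops = 4.0   # trillion ops/s per device (quoted coral.ai spec)
edge_tpu_watts = 2.0  # quoted power draw per device

tops_per_watt = edge_tpu_tops / edge_tpu_watts
print(tops_per_watt)  # 2.0

# At the brain's ~20 W budget, that efficiency would buy:
brain_watts = 20.0
print(tops_per_watt * brain_watts)  # 40.0 TOPS for the same power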
Another core problem with neuromorphic computation is that we currently have no meaningful idea how the brain produces intelligence, so it seems a bit premature to claim we can copy the mechanism. Here is what Nvidia Chief Scientist Bill Dally (one of the main developers of modern GPU architectures) says on the subject: "I keep getting those calls from those people who claim they are doing neuromorphic computing and they claim there is something magical about it because it's the way that the brain works ... but it's truly more like building an airplane by putting feathers on it and flapping with the wings!" From the "Hardware for Deep Learning" HotChips 2023 keynote, https://www.youtube.com/watch?v=rsxCZAE8QNA at 21:28. The whole talk is brilliant and worth watching.
And still no mention of Numenta… I’ve always felt it’s an underrated company, built on an even more underrated theory of intelligence
Just searched HN; it seems this term is at least 8 years old.
Once again, I am quite surprised by the sudden uptick of AI content on HN coming out of LANL. Does anyone know if it's just getting posted to HN and staying on the front page suddenly, or is this a change in strategy for the lab? Either way, I don't see the other national labs showing up like this.
memristors are back
I could be mistaken with this nitpick, but isn't there a unit mismatch in "...just 20 watts—the same amount of electricity that powers two LED lightbulbs for 24 hours..."? Watts measure power, not energy; two ~10 W LED bulbs draw 20 W whether they run for a second or a day, so the "for 24 hours" adds a spurious time dimension.
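The nitpick holds up under a quick dimensional check, assuming typical ~10 W LED bulbs (the article doesn't specify the wattage):

```python
# Power (watts) vs. energy (watt-hours): the quoted sentence mixes them.
brain_power_w = 20.0                    # power: a rate, no time attached
energy_over_day_wh = brain_power_w * 24  # energy: 480 Wh over 24 hours

# Two ~10 W LED bulbs (an assumed figure) draw 20 W continuously,
# so the duration is irrelevant to the comparison:
bulbs_power_w = 2 * 10.0
print(bulbs_power_w == brain_power_w)  # True
print(energy_over_day_wh)              # 480.0
```

So "powers two LED lightbulbs" would be correct on its own; tacking on "for 24 hours" silently converts a power figure into an energy figure.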