Unraveling The Increasing Importance Of Neuromorphic Hardware

A Brief Guide to What, Why, and How of Neuromorphic Computing

The concept of neuromorphic computing is not new. It was first proposed in the 1980s, when Carver Mead coined the term 'neuromorphic engineering'. Mead spent more than 40 years developing analog systems aimed at mimicking the human body's senses and processing mechanisms, such as touching, seeing, hearing, and thinking. Neuromorphic computing is a subset of neuromorphic engineering that focuses primarily on the 'thinking' and 'processing' side of these human-like systems. Today, neuromorphic computing is loosely described as the next phase of the Artificial Intelligence (AI) revolution. The technology has wide-ranging applications such as computer vision, machine learning, neural networks, deep learning, facial recognition, speech recognition, natural language processing, digital assistants, and more. But to run these applications successfully, one needs robust AI hardware, i.e., AI chips. Today's chip architectures comprise graphics processing units (GPUs), tensor processing units (TPUs), central processing units (CPUs), field-programmable gate arrays (FPGAs), and application-specific integrated circuits (ASICs). Because these chips are rarely suited to both research and practical deployment, researchers are now developing neuromorphic hardware chips that can bridge the gap.

The Hype

Tech experts say that neuromorphic computing could be fundamental in bringing about the fourth revolution of AI. As the hardware market grows, neuromorphic chips may come to dominate the scene. It is essential to note, however, that neuromorphic chips would not replace GPUs, CPUs, ASICs, and other AI-accelerator chip architectures. Instead, they would complement these platforms so that each can process the specialized AI workloads for which it was designed. For instance, an article in Wired notes that conventional CPUs process instructions based on "clocked time": information is transmitted at regular intervals, as if managed by a metronome. Neuromorphic chips, in contrast, can communicate in parallel (and without the rigidity of clocked time) using "spikes", sending bursts of electric current only when required.
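To make that contrast concrete, here is a minimal, illustrative sketch in Python (not code for any actual neuromorphic chip): a single leaky integrate-and-fire neuron that stays silent until its membrane potential crosses a threshold and only then emits a spike. The threshold, leak, and input values are arbitrary assumptions chosen purely for demonstration.

```python
# A toy leaky integrate-and-fire (LIF) neuron: it produces no output on most
# time steps and "spikes" only when its membrane potential crosses a threshold.
# All parameter values are arbitrary and chosen purely for demonstration.

def lif_neuron(input_current, threshold=1.0, leak=0.9):
    """Return the time steps at which the neuron spikes.

    Unlike a clocked pipeline that emits a result every tick, this neuron
    only communicates when an event (a threshold crossing) occurs.
    """
    potential = 0.0
    spike_times = []
    for t, current in enumerate(input_current):
        potential = leak * potential + current   # integrate input with leak
        if potential >= threshold:               # event: the neuron fires
            spike_times.append(t)
            potential = 0.0                      # reset after the spike
    return spike_times

# Sparse input: output is produced only around the bursts of stimulation,
# not at every tick of a global clock.
inputs = [0.0] * 50
inputs[10:13] = [0.6, 0.6, 0.6]   # a brief burst of weak input
inputs[40] = 1.2                  # a single strong input
print(lif_neuron(inputs))         # -> [11, 40]
```

In a large network of such neurons, communication happens only when spikes occur, which is one intuition behind the energy-efficiency claims made for neuromorphic hardware.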

The Reality

Another reason cited by experts is that Moore's law is dying. Formulated in 1965, the law states that the number of transistors on a chip doubles roughly every two years. However, we can no longer make chips that are cheaper while also meeting the performance requirements of today's systems. Neuromorphic chips not only promise capabilities beyond those of existing chips but also consume far less power. IBM's neuromorphic chip, TrueNorth, is capable of 46 billion synaptic operations per second per watt. The chip has one million individually programmable neurons and 256 million synapses, consumes 70 milliwatts, and fits in the palm of a hand. The Pohoiki computer, introduced last year, packs 8.3 million neurons and, according to Intel, delivers 1,000x better performance and is 10,000x more energy efficient than equivalent GPUs. Meanwhile, Intel's Loihi chip has 130,000 neurons and 139 million synapses. It was fabricated on Intel's 14nm process technology and implements spiking neural networks (SNNs): dense networks of neurons interconnected by synapses on a neuromorphic chip. Intel asserts that SNNs are a novel model for arranging those elements to emulate the natural neural networks that exist in biological brains.
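As a rough illustration of the SNN idea (and explicitly not Intel's Loihi programming interface, which is accessed through its own SDK), the toy Python/NumPy sketch below connects a small input layer to an output layer through a matrix of synaptic weights; neurons integrate incoming spikes, fire when a threshold is crossed, and then reset. All layer sizes, weights, and thresholds are arbitrary assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_out = 4, 3                             # tiny layer sizes, purely illustrative
weights = rng.normal(0.5, 0.2, (n_out, n_in))  # synaptic weights connecting the layers
threshold, leak = 1.0, 0.8                     # arbitrary neuron parameters

def step(potential, input_spikes):
    """One time step: integrate weighted input spikes, fire, and reset."""
    potential = leak * potential + weights @ input_spikes    # leaky integration
    out_spikes = (potential >= threshold).astype(float)      # neurons that fire this step
    potential = np.where(out_spikes == 1.0, 0.0, potential)  # reset fired neurons
    return potential, out_spikes

# Drive the output layer with sparse, random binary spike trains for a few steps.
potential = np.zeros(n_out)
for t in range(5):
    in_spikes = (rng.random(n_in) < 0.5).astype(float)  # binary spikes on input lines
    potential, out = step(potential, in_spikes)
    print(f"t={t} in={in_spikes} out={out}")
```

The point of the sketch is only the structure: information moves through the network as discrete spike events weighted by synapses, rather than as dense numeric activations computed on every clock cycle.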

Currently, beyond neuromorphic chips, scientists are also experimenting with quantum computing, carbon nanotubes, and parallel architectures as alternative ways to stave off the death of Moore's law.
