
Neuromorphic Computing

[Microelectronics: A simplified schematic of a crossbar circuit element designed for future low-power, non-volatile memory or neuromorphic computing applications - US Department of Energy]
  

New Approaches to Computing Modeled on the Human Brain

 

 

- Neuromorphic Engineering

Neuromorphic engineering, also known as neuromorphic computing, is a concept developed by Carver Mead in the late 1980s that describes the use of very-large-scale integration (VLSI) systems containing electronic analog circuits to mimic the neuro-biological architectures present in the nervous system.

In recent times, the term neuromorphic has been used to describe analog, digital, mixed-mode analog/digital VLSI, and software systems that implement models of neural systems (for perception, motor control, or multisensory integration). The implementation of neuromorphic computing on the hardware level can be realized by oxide-based memristors, spintronic memories, threshold switches, and transistors. 
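
To make the hardware side concrete, the following is a minimal numerical sketch (in Python, ignoring device physics and non-idealities) of how an idealized memristive crossbar can act as an array of artificial synapses: voltages applied to the rows are weighted by the cross-point conductances and collected as summed currents on the columns, so a vector-matrix product is computed directly where the weights are stored. All values here are illustrative.

  # Idealized memristive crossbar: each cross-point conductance G[i, j] acts as
  # a synaptic weight. By Ohm's law and Kirchhoff's current law, the current on
  # column j is I_j = sum_i V_i * G[i, j] -- an analog dot product computed
  # "in memory" rather than by shuttling data to a separate processor.
  import numpy as np

  rng = np.random.default_rng(0)

  n_inputs, n_outputs = 4, 3
  G = rng.uniform(0.1, 1.0, size=(n_inputs, n_outputs))  # conductances (illustrative)
  V = np.array([0.2, 0.0, 0.5, 0.1])                      # row input voltages (illustrative)

  I = V @ G  # column currents = weighted sums of the inputs
  print("Column currents:", I)

Real devices add noise, drift, and other non-idealities that this sketch omits; handling them is part of what current memristor research addresses.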

 

- Neuromorphic Research and Artificial General Intelligence (AGI)

A key aspect of neuromorphic engineering is understanding how the morphology of individual neurons, circuits, applications, and overall architectures creates desirable computations, affects how information is represented, influences robustness to damage, incorporates learning and development, adapts to local change (plasticity), and facilitates evolutionary change. 

Neuromorphic engineering is an interdisciplinary subject that takes inspiration from biology, physics, mathematics, computer science, and electronic engineering to design artificial neural systems, such as vision systems, head-eye systems, auditory processors, and autonomous robots, whose physical architecture and design principles are based on those of biological nervous systems.

Efforts to produce artificial general intelligence (AGI) are also advancing neuromorphic research. AGI refers to AI systems that understand and learn like humans. By replicating the structure of the human brain and nervous system, researchers hope to create artificial brains with cognitive capabilities comparable to those of biological brains. Such a brain could provide insights into cognition and help answer questions about consciousness.

 

- Neuromorphic Computing

Neuromorphic computing is an approach to computer engineering in which elements of a computer mimic systems in the human brain and nervous system. The term refers to the design of hardware and software computing elements. 

Neuromorphic engineers draw inspiration from multiple disciplines -- including computer science, biology, mathematics, electrical engineering and physics -- to create biologically inspired computer systems and hardware. Among the biological structures of the brain, neuromorphic structures most often mimic neurons and synapses. That's because neuroscientists consider neurons to be the basic unit of the brain.

Neuromorphic computing uses hardware based on the structure, processes, and capabilities of neurons and synapses in biological brains. The most common form of neuromorphic hardware implements a spiking neural network (SNN), in which nodes -- spiking neurons -- process and store data much as biological neurons do.

Artificial synaptic devices connect the spiking neurons, using analog circuits to transmit electrical signals that mimic brain signals. Unlike most standard computers, which encode data in binary, spiking neurons encode information directly in discrete changes of analog signals, such as the timing and rate of spikes.
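
As an illustration of the spiking idea, here is a minimal sketch of one common spiking-neuron model, the leaky integrate-and-fire (LIF) neuron. The time constants, threshold, and input current are illustrative only and are not taken from any particular neuromorphic chip; real hardware such as Loihi uses its own discrete-time neuron models.

  # Minimal leaky integrate-and-fire (LIF) neuron: the membrane potential leaks
  # toward rest, integrates the input current, and emits a spike (then resets)
  # whenever it crosses the threshold. Information is carried by spike timing
  # rather than by binary words.
  import numpy as np

  dt, tau = 1.0, 20.0            # time step and membrane time constant (ms)
  v_thresh, v_reset = 1.0, 0.0   # spike threshold and reset potential (arbitrary units)
  steps = 100
  input_current = np.full(steps, 0.06)  # constant drive (illustrative)

  v = 0.0
  spike_times = []
  for t in range(steps):
      v += (dt / tau) * (-v) + input_current[t]  # leaky integration
      if v >= v_thresh:                          # threshold crossing -> spike
          spike_times.append(t)
          v = v_reset                            # reset after the spike
  print("Spike times (ms):", spike_times)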

The high-performance computing architecture and capabilities used in neuromorphic computers differ from the standard computer hardware found in most modern computers (also known as von Neumann computers).

 

- New Approaches to Computing Modeled on the Human Brain

Neurons use chemical and electrical impulses to send messages between different areas of the brain and the rest of the nervous system. Neurons connect to each other using synapses. Neurons and synapses are more versatile, adaptable, and energy-efficient information processors than conventional computer systems.

Neuromorphic computing encompasses a range of different approaches to computing software and hardware that seek to mimic the neuron and synapse structure of the human brain. It holds out the promise of both potentially more powerful and sophisticated problem-solving as well as more energy-efficient computing.

As a computational system, the human brain is not only uniquely complex but also roughly a million times more energy-efficient than the most advanced supercomputers in existence today: the brain runs on roughly 20 watts, while today's leading supercomputers draw power on the order of tens of megawatts. As powerful as the coming generation of exascale computers may be, they will find it difficult to match the human brain in sophistication of information processing and energy efficiency. Neuromorphic computing offers an important pathway worth exploring as we move toward an era beyond exascale and beyond Moore’s Law.

 

- Characteristics of Neuromorphic Computers

Neuromorphic computers have the following characteristics:

  • Collocate processing and memory. Brain-inspired neuromorphic chips process and store data together in each individual neuron, rather than in separate processing and memory areas. By collocating processing and memory, neural network processors and other neuromorphic processors avoid the von Neumann bottleneck and can achieve both high performance and low power consumption.
  • Lots of parallelism. Neuromorphic chips, such as Intel Labs' Loihi 2, can have up to a million neurons, each of which can work on a different operation simultaneously. In theory, this lets a neuromorphic computer perform as many operations at once as it has neurons. This parallel, asynchronous operation resembles the seemingly random, noisy firing of neurons in the brain, and neuromorphic computers are designed to tolerate such noise better than conventional computers.
  • Inherently scalable. Neuromorphic computers have no traditional scalability barriers. To run larger networks, users add more neuromorphic chips, increasing the number of active neurons.
  • Event-driven computing. Individual neurons and synapses compute only in response to spikes from other neurons. This means that only the small fraction of neurons actually processing a spike draws power, while the rest of the computer stays idle, which makes energy use very efficient.
  • High adaptability and plasticity. Like the brain, neuromorphic computers are designed to respond flexibly to changing stimuli from the outside world. In a spiking neural network (SNN) architecture, each synapse carries an adjustable output that it tunes to its task, and the network evolves its connections based on synaptic delays and neuron voltage thresholds. With increased plasticity, researchers hope that neuromorphic computers will be able to learn, solve new problems, and quickly adapt to new environments (a minimal plasticity sketch follows this list).
  • Fault tolerance. Neuromorphic computers are highly fault-tolerant. Just like the human brain, information is kept in multiple places, which means that the failure of one component will not affect the operation of the computer.
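
As referenced above, here is a minimal sketch of one widely studied local learning rule, pair-based spike-timing-dependent plasticity (STDP), which is one way a spiking network can adjust its connections: a synapse is strengthened when the presynaptic spike shortly precedes the postsynaptic spike and weakened when the order is reversed. The amplitudes and time constants are illustrative and not tied to any specific neuromorphic chip.

  # Pair-based STDP: the weight change depends on the relative timing of the
  # pre- and postsynaptic spikes. "Pre before post" potentiates the synapse;
  # "post before pre" depresses it. Constants are illustrative.
  import math

  A_plus, A_minus = 0.05, 0.055   # potentiation / depression amplitudes
  tau_plus = tau_minus = 20.0     # time constants (ms)

  def stdp_delta_w(t_pre, t_post):
      """Weight change for a single pre/post spike pair (times in ms)."""
      dt = t_post - t_pre
      if dt > 0:   # pre fired before post -> strengthen
          return A_plus * math.exp(-dt / tau_plus)
      return -A_minus * math.exp(dt / tau_minus)  # otherwise weaken

  w = 0.5
  for t_pre, t_post in [(10.0, 15.0), (40.0, 38.0), (60.0, 61.0)]:
      w += stdp_delta_w(t_pre, t_post)
  print(f"Synaptic weight after three spike pairs: {w:.3f}")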

 

- Applications of Neuromorphic Computing

Neuromorphic computing is an emerging scientific field with no widespread practical applications yet. Various groups are conducting research, including universities, the U.S. military, and technology companies such as Intel Labs and IBM.

Neuromorphic technologies are expected to be used in the following areas:

  • Deep learning applications
  • Next-generation semiconductors
  • Transistors
  • Accelerators
  • Autonomous systems such as robots, drones, self-driving cars, and other artificial intelligence (AI) systems

Some experts predict that neuromorphic processors could provide a way around the limitations of Moore's Law.

Many experts believe that neuromorphic computing has the potential to revolutionize the algorithmic power, efficiency, and capabilities of artificial intelligence and reveal insights into cognition. However, neuromorphic computing is still in its early stages of development and faces many challenges.

 

- Neuromorphic Computing: The Next Generation of AI

The first generation of AI was rules-based and emulated classical logic to draw reasoned conclusions within a specific, narrowly defined problem domain. It was well suited to monitoring processes and improving efficiency, for example. The second, current generation is largely concerned with sensing and perception, such as using deep-learning networks to analyze the contents of a video frame. 

A coming next generation will extend AI into areas that correspond to human cognition, such as interpretation and autonomous adaptation. This is critical to overcoming the so-called “brittleness” of AI solutions based on neural network training and inference, which depend on literal, deterministic views of events that lack context and commonsense understanding. Next-generation AI must be able to address novel situations and abstraction to automate ordinary human activities. 

The global research community is driving computer-science research that contributes to this third generation of AI. Key focus areas include neuromorphic computing, which is concerned with emulating the neural structure and operation of the human brain, as well as probabilistic computing, which creates algorithmic approaches to dealing with the uncertainty, ambiguity, and contradiction of the natural world.

 


[More to come ...]


 
