Neuromorphic Computing

[Helsinki Cathedral, Helsinki, Finland - Tapio Haaja]


New Approaches to Computing Modeled on the Human Brain



- Neuromorphic Engineering

Neuromorphic engineering, also known as neuromorphic computing, is a concept developed by Carver Mead in the late 1980s that describes the use of very-large-scale integration (VLSI) systems containing electronic analog circuits to mimic neurobiological architectures present in the nervous system. More recently, the term neuromorphic has been used to describe analog, digital, mixed-mode analog/digital VLSI, and software systems that implement models of neural systems (for perception, motor control, or multisensory integration). At the hardware level, neuromorphic computing can be realized with oxide-based memristors, spintronic memories, threshold switches, and transistors.

A key aspect of neuromorphic engineering is understanding how the morphology of individual neurons, circuits, applications, and overall architectures creates desirable computations, affects how information is represented, influences robustness to damage, incorporates learning and development, adapts to local change (plasticity), and facilitates evolutionary change. 

Neuromorphic engineering is an interdisciplinary subject that takes inspiration from biology, physics, mathematics, computer science, and electronic engineering to design artificial neural systems, such as vision systems, head-eye systems, auditory processors, and autonomous robots, whose physical architecture and design principles are based on those of biological nervous systems.


- New Approaches to Computing Modeled on the Human Brain

Neuromorphic computing encompasses a range of approaches to computing software and hardware that seek to mimic the neuron and synapse structure of the human brain. It holds out the promise of both more powerful, sophisticated problem-solving and more energy-efficient computing.
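To make the neuron-and-synapse idea concrete, here is a minimal sketch of a leaky integrate-and-fire (LIF) neuron, one of the standard simplified models that spiking neuromorphic systems emulate. The parameter values are illustrative only and do not correspond to any particular chip or framework.

```python
# A minimal leaky integrate-and-fire (LIF) neuron sketch.
# Parameters (tau, threshold, reset) are illustrative placeholders.

def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                 v_threshold=1.0, v_reset=0.0):
    """Integrate an input-current trace; return voltages and spike times."""
    v = v_rest
    voltages, spikes = [], []
    for t, i_in in enumerate(input_current):
        # Leaky integration: voltage decays toward rest, driven by input.
        v += (dt / tau) * ((v_rest - v) + i_in)
        if v >= v_threshold:   # threshold crossed: emit a spike
            spikes.append(t)
            v = v_reset        # reset the membrane after spiking
        voltages.append(v)
    return voltages, spikes

# A constant drive above threshold yields a regular spike train.
volts, spike_times = simulate_lif([1.5] * 100)
print(f"{len(spike_times)} spikes, first at step {spike_times[0]}")
```

Unlike a conventional artificial neuron that outputs a continuous value every step, this model communicates only through discrete spike events, which is the property neuromorphic hardware exploits for energy efficiency.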

As a computational system, the human brain is not only uniquely complex but also roughly a million times more energy-efficient than today's most advanced supercomputers. As powerful as the coming generation of exascale computers may be, they will be hard-pressed to match the human brain's sophistication of information processing and energy efficiency. Neuromorphic computing offers an important pathway worth exploring as we move toward an era beyond exascale and beyond Moore's law.

- Neuromorphic Computing: The Next Generation of AI

The first generation of AI was rules-based and emulated classical logic to draw reasoned conclusions within a specific, narrowly defined problem domain. It was well suited to monitoring processes and improving efficiency, for example. The second, current generation is largely concerned with sensing and perception, such as using deep-learning networks to analyze the contents of a video frame. 

A coming next generation will extend AI into areas that correspond to human cognition, such as interpretation and autonomous adaptation. This is critical to overcoming the so-called "brittleness" of AI solutions based on neural network training and inference, which depend on literal, deterministic views of events that lack context and commonsense understanding. Next-generation AI must be able to handle novel situations and abstraction in order to automate ordinary human activities.

The global research community is driving computer-science research that contributes to this third generation of AI. Key focus areas include neuromorphic computing, which is concerned with emulating the neural structure and operation of the human brain, and probabilistic computing, which creates algorithmic approaches for dealing with the uncertainty, ambiguity, and contradiction of the natural world.
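The probabilistic-computing idea above can be sketched with the simplest possible example: representing belief as a probability and updating it with Bayes' rule as noisy evidence arrives. The sensor reliabilities below are made-up numbers chosen purely for illustration.

```python
# Toy sketch of probabilistic reasoning under uncertainty:
# a belief is a probability, updated by Bayes' rule per observation.
# Sensor reliabilities (0.8 / 0.2) are illustrative, not from any system.

def bayes_update(prior, p_obs_given_true, p_obs_given_false):
    """Posterior probability of a hypothesis after one positive observation."""
    evidence = prior * p_obs_given_true + (1 - prior) * p_obs_given_false
    return prior * p_obs_given_true / evidence

# Start maximally uncertain (50/50); each positive reading from an
# 80%-reliable sensor strengthens the belief without ever making it
# absolute, so the ambiguity in the evidence stays represented.
belief = 0.5
for _ in range(3):
    belief = bayes_update(belief, p_obs_given_true=0.8, p_obs_given_false=0.2)
print(f"belief after 3 readings: {belief:.3f}")  # -> 0.985
```

The point of the sketch is the contrast with a deterministic pipeline: instead of committing to a single hard label, the system carries a graded degree of confidence that later evidence can revise.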



[More to come ...]

