Sparse Adaptive Local Learning for Sensing and Analytics


New computer hardware inspired by the brain's approach to interpreting visual stimuli is being explored, with the goal of processing images and video roughly 1,000 times faster and with 10,000 times less power, without loss of accuracy. The computer chip uses networks of conventional transistors and memristors to perform both logic and memory functions. Memristors are resistive devices with memory: their resistance changes according to the current previously applied to them, and they consume no current when idle. This enables "big picture" image processing that identifies deep structures through inference, rather than the current pixel-by-pixel rendering. Extracting key features to reconstruct images removes noise from the images and video, resulting in less data transfer and faster rendering. Professor Wei Lu's research team was awarded an up-to-$5.7 million contract from DARPA to design and fabricate a self-organizing, adaptive neural-network-based computer chip.
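The memory behavior described above can be illustrated with a toy model. The sketch below is a deliberately simplified abstraction, not the actual tungsten-oxide device physics: conductance drifts with the charge that has flowed through the device and is retained when no voltage is applied. All names and parameter values here are illustrative assumptions.

```python
class Memristor:
    """Toy memristor: conductance bounded in [g_min, g_max] (siemens)."""

    def __init__(self, g_min=0.001, g_max=0.01, g0=0.005, rate=1e-4):
        self.g_min, self.g_max, self.rate = g_min, g_max, rate
        self.g = g0  # current conductance state

    def apply_voltage(self, v, dt=1.0):
        """Pass current; the state shifts with the signed charge flow."""
        i = self.g * v                  # Ohm's law: i = g * v
        self.g += self.rate * i * dt    # state change proportional to charge
        self.g = min(self.g_max, max(self.g_min, self.g))  # clamp to bounds
        return i

m = Memristor()
g_before = m.g
m.apply_voltage(1.0)   # a positive pulse strengthens the device
m.apply_voltage(0.0)   # idle: no voltage, the state is retained
```

The key property the article relies on is the last two lines: the device's state only moves while current flows, so an idle array holds its "memory" without drawing power.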

The layout of the memristor array (center), which acts as the memory synapses for the learning neurons, uses tungsten oxide with vacancies that migrate when current flows, changing a synapse's strength. (Source: University of Michigan)

The goal of the project is to build a memristor network that acts as artificial synapses between conventional circuits, which serve as neurons. The research team will test two designs. A simpler version of the network uses memristors as memory nodes that store the values of its synapses along traditional wired connections between layers. The more complex architecture mimics the brain more closely by using the memristors themselves to process the voltage spikes sent between layers. Professor Lu's neural network image processor will connect artificial neurons through a crossbar of memristors, in which migrating oxygen vacancies in tungsten oxide adaptively change the synaptic connection strengths, as shown in the figure to the right.

Once the systems are built, they will be loaded with thousands of images and trained to recognize common features. Connections from the input images to the artificial neurons will evolve spontaneously, and after training each neuron should identify one particular feature or shape. When shown a similar feature, only the neurons associated with that particular pattern or shape fire and transmit information, rather than the system attempting to recognize the entire image at once. The recognized patterns are then combined to reconstruct the image. Built on such a memristor network, new computing platforms could process vast numbers of signals in parallel, enabling advanced machine learning and allowing large data tasks to be handled much more quickly and efficiently.
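The crossbar idea above can be sketched in software. In the toy model below, synaptic weights are stored as a conductance matrix, reading out all neurons is a single vector-matrix multiply (currents summed down each column), and a simple winner-take-all Hebbian rule lets each neuron drift toward one input pattern. The function names and the learning rule are illustrative assumptions, not the team's actual design.

```python
import random

def crossbar_read(weights, x):
    """Output current at each column j: i_j = sum_k g[k][j] * v[k]."""
    n_out = len(weights[0])
    return [sum(weights[k][j] * x[k] for k in range(len(x)))
            for j in range(n_out)]

def train(patterns, n_neurons, epochs=50, lr=0.1, seed=0):
    """Winner-take-all Hebbian learning on a random crossbar."""
    rng = random.Random(seed)
    n_in = len(patterns[0])
    w = [[rng.random() for _ in range(n_neurons)] for _ in range(n_in)]
    for _ in range(epochs):
        for x in patterns:
            # The neuron responding most strongly "fires" ...
            responses = crossbar_read(w, x)
            winner = max(range(n_neurons), key=lambda j: responses[j])
            # ... and only its column of synapses moves toward the pattern
            for k in range(n_in):
                w[k][winner] += lr * (x[k] - w[k][winner])
    return w

# Two simple "features" in a flattened 2x2 image:
patterns = [[1, 1, 0, 0],   # top row lit
            [1, 0, 1, 0]]   # left column lit
w = train(patterns, n_neurons=2)
print("responses per pattern:", [crossbar_read(w, x) for x in patterns])
```

The appeal of the physical crossbar is that `crossbar_read`, a nested loop in software, happens in a single analog step in hardware: Kirchhoff's current law sums the per-device currents along each output wire for free.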

More information about this project can be found at the links below.

http://www.eecs.umich.edu/eecs/about/articles/2013/Lu-image-processing-1000-times-faster.html
http://www.eetimes.com/document.asp?doc_id=1319270


W. Lu, Z. Zhang, M. Flynn (University of Michigan)
G. Kenyon (Los Alamos National Lab)
C. Teuscher (Portland State University)

Work performed at the University of Michigan's Lurie Nanofabrication Facility.