
Optimized programming algorithms for multilevel RRAM in hardware neural networks

Bibliographic Details
Main Authors: Milo, Valerio, Anzalone, Francesco, Zambelli, Cristian, Perez, Eduardo, Mahadevaiah, Mamathamba K., Ossorio, Oscar G., Olivo, Piero, Wenger, Christian, Ielmini, Daniele
Format: Conference Proceeding
Language: English
Description
Summary: A key requirement for RRAM in neural network accelerators with a large number of synaptic parameters is multilevel programming. This is hindered by resistance imprecision arising from cycle-to-cycle and device-to-device variations. Here, we compare two multilevel programming algorithms for minimizing resistance variations in a 4-kbit array of HfO2 RRAM. We show that gate-based algorithms achieve the highest reliability. The optimized scheme is used to implement a neural network with 9-level weights, achieving 91.5% accuracy (vs. 93.27% in software) on MNIST recognition.
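The record does not include the details of the two programming algorithms. Purely as a rough, self-contained illustration of the general program-and-verify idea behind such multilevel schemes, here is a minimal sketch: the device response is a toy stochastic model, and all function and parameter names are hypothetical, not the gate-based algorithm evaluated in the paper.

```python
import random

def program_and_verify(target, tol=0.05, v_start=0.1, v_step=0.1, max_pulses=50):
    """Toy program-and-verify loop for a single simulated RRAM cell.

    Hypothetical sketch only: the conductance update below is a crude
    stochastic model, not the HfO2 array or the gate-based scheme the
    paper evaluates. `target` is a normalized conductance level.
    """
    g = 0.0        # simulated normalized conductance
    v = v_start    # programming voltage, incremented after each failed verify
    for pulse in range(1, max_pulses + 1):
        # Each pulse pulls the conductance toward the applied voltage,
        # with Gaussian noise standing in for cycle-to-cycle variation.
        g += 0.2 * (v - g) + random.gauss(0.0, 0.02)
        if abs(g - target) <= tol:   # verify: read back and check the window
            return g, pulse
        v += v_step                  # step the voltage and pulse again
    return g, max_pulses             # give up after max_pulses attempts

# Program nine evenly spaced conductance levels, as for 9-level weights
for target in [i / 10 for i in range(1, 10)]:
    g, n = program_and_verify(target)
    print(f"target={target:.1f}  programmed={g:.3f}  pulses={n}")
```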
ISSN:1938-1891
DOI:10.1109/IRPS46558.2021.9405119
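The summary also mentions a neural network with 9-level weights. As an illustrative sketch only (the paper's actual architecture and weight-to-conductance mapping are not given in this record), uniform quantization of a trained weight matrix to nine levels might look like this:

```python
import numpy as np

def quantize_weights(w, n_levels=9):
    """Snap each weight to the nearest of n_levels uniformly spaced values.

    Illustrative assumption: symmetric uniform levels over [-|w|_max, +|w|_max];
    the record does not specify the paper's actual quantization scheme.
    """
    w_max = np.max(np.abs(w))
    levels = np.linspace(-w_max, w_max, n_levels)
    idx = np.argmin(np.abs(w[..., None] - levels), axis=-1)
    return levels[idx]

# Example: a random 784x10 weight matrix (MNIST-sized input layer)
w = np.random.randn(784, 10) * 0.1
w_q = quantize_weights(w)
print(np.unique(w_q).size)  # at most nine distinct weight values
```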