IR-QNN Framework: An IR Drop-Aware Offline Training Of Quantized Crossbar Arrays

Mohammed E. Fouda, Sugil Lee, Jongeun Lee, Gun Hwan Kim, Fadi Kurdahi, Ahmed Eltawil

Research output: Contribution to journal › Article › peer-review

Abstract

Resistive Crossbar Arrays (RCAs) present an elegant implementation solution for Deep Neural Network (DNN) acceleration. Matrix-Vector Multiplication, the cornerstone of DNNs, is carried out in O(1) steps, compared to O(N^2) steps for digital realizations or O(log2(N)) steps for in-memory associative processors. However, the IR drop problem, caused by the inevitable interconnect wire resistance in RCAs, remains a daunting challenge. In this paper, we propose a fast and efficient training and validation framework that incorporates the wire resistance in Quantized DNNs without the need for computationally expensive SPICE simulations during the training process. A fabricated four-bit Au/Al2O3/HfO2/TiN device is modelled and used within the framework with two mapping schemes to realize the quantized weights. Efficient system-level IR-drop estimation methods are used to accelerate training. SPICE validation results show the effectiveness of the proposed method in capturing the IR drop problem, achieving the baseline accuracy with a worst-case drop of 2% on the MNIST dataset with a multilayer perceptron network and 4% on the CIFAR-10 dataset with modified VGG and AlexNet networks, respectively. Other nonidealities, such as stuck-at-fault defects, variability, and aging, are also studied. Finally, the design considerations of the neuronal and driver circuits are discussed.
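To illustrate the crossbar operation the abstract refers to: in an ideal RCA (zero wire resistance), each column current is the dot product of the input voltage vector with that column's conductances, so the whole Matrix-Vector Multiplication happens in a single read step. The sketch below, assuming a hypothetical conductance range and a simple uniform quantizer (the paper's actual device model and mapping schemes are not reproduced here), shows this ideal MVM with 4-bit quantized weights:

```python
import numpy as np

def ideal_crossbar_mvm(G, v_in):
    """Ideal (zero wire-resistance) crossbar MVM: by Ohm's law and
    Kirchhoff's current law, each column current is the dot product
    of the applied row voltages with that column's conductances."""
    return v_in @ G

def quantize_to_levels(w, n_bits=4, g_min=1e-6, g_max=1e-4):
    """Map weights in [0, 1] onto one of 2**n_bits discrete conductance
    levels; g_min/g_max are an assumed device range, not the paper's."""
    levels = 2 ** n_bits
    idx = np.clip(np.round(w * (levels - 1)), 0, levels - 1)
    return g_min + idx / (levels - 1) * (g_max - g_min)

# Toy 3x2 crossbar: 3 input rows, 2 output columns.
w = np.array([[0.0, 1.0],
              [0.5, 0.25],
              [1.0, 0.75]])
G = quantize_to_levels(w)            # conductance matrix (siemens)
v = np.array([0.2, 0.2, 0.2])        # read voltages on the rows
i_out = ideal_crossbar_mvm(G, v)     # column currents (amperes)
```

In a real array, the interconnect wire resistance makes the voltage seen by each cell depend on its position and on the other cells' currents, which is precisely the IR drop effect the framework models without full SPICE simulation.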
Original language: English (US)
Pages (from-to): 1-1
Number of pages: 1
Journal: IEEE Access
DOIs
State: Published - 2020
