My research interests center on the design and implementation of low-power architectures and circuits for the hardware acceleration of learning algorithms, with a particular focus on neuromorphic structures. I am especially interested in non-von Neumann architectures that leverage analog CMOS and alternative (beyond-CMOS) computing substrates to approach the limits of energy efficiency. I explore both hardware and software techniques to enable adaptive and learning algorithms and circuits in highly resource-constrained environments, such as the sensors and processors used in the Internet of Things (IoT).
I am looking for motivated graduate students and postdoctoral scholars interested in the general areas of computer architecture, circuit design, and machine learning. Please click here to learn more.
My group leverages neurally inspired principles of stochastic, distributed, parallel computing to develop prototype neuromorphic learning hardware targeting large-scale data analysis and sensory signal processing.