Notre Dame Intelligent Microsystems Lab

Lab Mission

At our core, we want to change the way we live by improving the efficiency of hardware and software.

The research interests of our lab are centered around the design and implementation of low-power architectures and circuits for the hardware acceleration of learning algorithms, with a particular focus on neuromorphic structures. We are particularly interested in non-von Neumann architectures leveraging analog-CMOS and alternative (Beyond CMOS) computing substrates to achieve the limits of energy efficiency. We explore both hardware and software techniques to enable adaptive and learning algorithms and circuits in highly resource-constrained environments, such as the sensors and processors used in the Internet of Things (IoT).


Lab Group

Siddharth Joshi has been an Assistant Professor in the Department of Computer Science and Engineering at the University of Notre Dame since 2018. Prior to that, he was a Postdoctoral Fellow in the Department of Bioengineering at the University of California San Diego (UCSD). He completed his Ph.D. in 2017 in the Electrical and Computer Engineering department at UCSD, where he also completed his M.S. in 2012. He holds a B.Tech. from the Dhirubhai Ambani Institute of Information and Communication Technology in India. His research focuses on the co‐design of custom, non‐Boolean and non‐von Neumann, hardware and algorithms to enable machine learning and adaptive signal processing in highly resource-constrained environments.

Clemens Schafer's research interests focus on brain-inspired computing. He especially enjoys applying findings from neuroscience to topics in machine learning, ranging from biologically plausible spiking dynamics at the single-neuron level, to stochasticity in synapses, to hyperdimensional computing. He is passionate about finding new ways for computers to learn more efficiently, and his goal is to take part in the development of cutting-edge brain-inspired computing hardware and algorithms. During his undergraduate studies at Catholic University Eichstätt-Ingolstadt and his master's studies at UCL, he held scholarships from the Friedrich Naumann Foundation for Freedom and the German Academic Exchange Service (DAAD).

Patrick Faley is a computer engineering undergraduate student at the University of Notre Dame. His research so far has revolved around alternate implementations of common neural network architectures, with the goal of creating models that can be implemented directly in hardware. He has also applied various statistical techniques to gain a deeper understanding of the inner workings of neural networks. In the future, Patrick hopes to implement his networks in hardware to create robust, low-power computer vision systems.


Md Shahrul Islam is a Computer Science and Engineering Ph.D. student at the University of Notre Dame. His research interests include the implementation of novel bio-inspired algorithms in resource-constrained environments. His work focuses on designing efficient, low-power analog and mixed-signal systems for machine learning and signal processing tasks. Before joining the University of Notre Dame, he completed his master's in Electrical and Computer Engineering at Southern Illinois University Carbondale.

The focal point of his research is machine learning accelerators. He is a Research Assistant at the University of Notre Dame with a history of working at the intersection of artificial intelligence and hardware for emerging technologies at the architecture level. Before joining the University of Notre Dame, he studied at Colorado State University and the National University of Iran.


Mark Horeni is a Ph.D. student in the Department of Computer Science and Engineering at the University of Notre Dame. His interests lie in solving neuromorphic engineering problems, from examining different representations of information to studying how the properties of emerging devices can be applied to those problems. As an undergraduate at Lewis University, he researched the connectome of the C. elegans roundworm to analyze the predictive power of the links between neurons.



We are studying techniques that improve machine learning algorithms without being energy intensive.

Our studies develop extremely power-efficient processors and circuits that can operate under severe resource constraints, such as the sensors and processors used in the Internet of Things (IoT). Our group explores hardware and software techniques that enable adaptive and learning algorithms to be implemented in low-power architectures and circuits. The use of non-von Neumann architectures, together with both analog-CMOS and Beyond-CMOS techniques, allows these designs to operate at the extremes of energy efficiency.



Patrick Faley created a tool based on this paper, which allows users to analyze neural networks written in PyTorch and measure how well a network separates different classes during training. With this information, users can verify that models are learning to distinguish between classes and determine which specific layers contribute the most to class separation, informing decisions about model utility and layer composition. The repository can be found here.
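The idea of scoring per-layer class separation can be sketched as below. This is a minimal illustration, not the tool itself: the function names, the use of forward hooks, and the Fisher-style score (between-class scatter over within-class scatter) are assumptions chosen to show the general approach.

```python
# Hypothetical sketch: scoring how well each layer's activations separate
# classes, using PyTorch forward hooks and a Fisher-style ratio.
# All names here are illustrative, not the actual tool's API.
import torch
import torch.nn as nn


def separation_score(feats: torch.Tensor, labels: torch.Tensor) -> float:
    """Ratio of between-class to within-class scatter (higher = more separable)."""
    feats = feats.flatten(1)  # (N, D)
    overall_mean = feats.mean(dim=0)
    between, within = 0.0, 0.0
    for c in labels.unique():
        cls = feats[labels == c]
        mean_c = cls.mean(dim=0)
        between = between + len(cls) * (mean_c - overall_mean).pow(2).sum()
        within = within + (cls - mean_c).pow(2).sum()
    return (between / (within + 1e-12)).item()


def layer_separation(model: nn.Module, x: torch.Tensor, y: torch.Tensor) -> dict:
    """Run one batch through `model`, scoring the output of each child module."""
    scores, handles = {}, []

    def make_hook(name):
        def hook(_module, _inputs, output):
            scores[name] = separation_score(output.detach(), y)
        return hook

    for name, module in model.named_children():
        handles.append(module.register_forward_hook(make_hook(name)))
    with torch.no_grad():
        model(x)
    for h in handles:
        h.remove()  # clean up hooks so the model is unchanged afterwards
    return scores


if __name__ == "__main__":
    torch.manual_seed(0)
    net = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))
    x = torch.randn(64, 8)
    y = torch.randint(0, 4, (64,))
    print(layer_separation(net, x, y))
```

Tracking these scores across training epochs would show whether, and at which depth, the network is learning to pull the classes apart.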


Join Us

We are looking for motivated graduate students and postdoctoral scholars who are interested in the general areas of Computer Architecture, Circuit Design, and Machine Learning.

We are actively looking for motivated students who are interested in pursuing Ph.D. degrees exploring algorithms, architectures, or circuit-level techniques to develop energy-efficient intelligent systems.

Helpful Experience

  • Hardware Design for Machine Learning and Deep Neural Networks
  • Reinforcement Learning
  • Generative Models
  • Strong coding abilities (C++ and Python)
  • Deep Learning platforms (PyTorch or Tensorflow)
  • VLSI Circuit Design and Computer Architecture
  • CMOS Chip Tape-Out and Testing

Open Positions

At the Intelligent Microsystems Lab, our research traverses various levels of abstraction including systems, circuits, and algorithm design residing at the interface of machine intelligence and cyber-physical systems. We study neuromorphic and other non-von Neumann architectures where we leverage energy efficiencies in analog-CMOS and alternative (Beyond CMOS) computing structures to deliver orders of magnitude improvement in the performance of adaptive and learning systems. We work closely with groups developing new devices and materials to help us develop new chips aimed at achieving the limits of energy efficiency.

How to apply

If you are a postdoc candidate, please send us an email containing your CV, research statement, and two names for requesting reference letters.




  1. Memory-Efficient Synaptic Connectivity for Spike-Timing-Dependent Plasticity


  1. Sub-μVrms-Noise Sub-μW/Channel ADC-Direct Neural Recording With 200-mV/ms Transient Recovery Through Predictive Digital Autoranging
  2. Unsupervised Synaptic Pruning Strategies for Restricted Boltzmann Machines
  3. A 92dB dynamic range sub-μVrms-noise 0.8μW/ch neural-recording ADC array with predictive digital autoranging
  4. Capacitive passive mixer baseband receiver with broadband harmonic rejection


  1. Neuromorphic event-driven multi-scale synaptic connectivity and plasticity
  2. Neuromorphic neural interfaces: from neurophysiological inspiration to biohybrid coupling with nervous systems
  3. From algorithms to devices: Enabling machine learning through ultra-low-power VLSI mixed-signal array processing
  4. Memristor for computing: Myth or reality?
  5. 21.7 2pJ/MAC 14b 8×8 linear transform mixed-signal spatial filter in 65nm CMOS with 84dB interference suppression
  6. High-Fidelity Spatial Signal Processing in Low-Power Mixed-Signal VLSI Arrays


  1. A 6.5-μW/MHz Charge Buffer With 7-fF Input Capacitance in 65-nm CMOS for Noncontact Electropotential Sensing
  2. High-Fidelity Spatial Signal Processing in Low-Power Mixed-Signal VLSI Arrays
  3. Energy Recycling Telemetry IC With Simultaneous 11.5 mW Power and 6.78 Mb/s Backward Data Delivery Over a Single 13.56 MHz Inductive Link
  4. Forward table-based presynaptic event-triggered spike-timing-dependent plasticity
  5. Hierarchical Address Event Routing for Reconfigurable Large-Scale Neuromorphic Systems
  6. Stochastic synapses enable efficient brain-inspired learning machines
  7. A 1.3 mW 48 MHz 4 channel MIMO baseband receiver with 65 dB harmonic rejection and 48.5 dB spatial signal separation
  8. Neuromorphic architectures with electronic synapses


  1. Unsupervised learning in synaptic sampling machines
  2. A CMOS 4-channel MIMO baseband receiver with 65dB harmonic rejection over 48MHz and 50dB spatial signal separation over 3MHz at 1.3 mW


  1. A 12.6 mW 8.3 Mevents/s contrast detection 128×128 imager with 75 dB intra-scene DR asynchronous random-access digital readout
  2. A 7.86 mW+ 12.5 dBm in-band IIP3 8-to-320 MHz capacitive harmonic rejection mixer in 65nm CMOS
  3. Energy-recycling integrated 6.78-Mbps data 6.3-mW power telemetry over a single 13.56-MHz inductive link



  1. 65k-neuron integrate-and-fire array transceiver with address-event reconfigurable synaptic routing
  2. Event-driven neural integration and synchronicity in analog VLSI
  3. Live demonstration: Hierarchical address-event routing architecture for reconfigurable large scale neuromorphic systems
  4. Hierarchical address-event routing architecture for reconfigurable large scale neuromorphic systems


  1. Head Harness & Wireless EEG Monitoring System
  2. Subthreshold MOS dynamic translinear neural and synaptic conductance
  3. Double Precision Sparse Matrix Vector Multiplication Accelerator on FPGA.


  1. FPGA based high performance double-precision matrix multiplication
  2. Scalable event routing in hierarchical neural array architecture with global synaptic connectivity


  1. Matrix Multiplication


  1. 64bit, floating point matrix matrix multiplier on FPGA

Publication Details

More information on our publications can be found here.