Notre Dame Intelligent Microsystems Lab

Lab Mission

Our lab builds real systems that operate at the frontiers of computation and physics. We first design biologically inspired, hardware-friendly algorithms, then design and architect hardware that runs these algorithms efficiently. Finally, we deploy these chips in domains such as robotics, biomedical implants, and anywhere else intelligence needs to be embedded.


Lab Group

Siddharth Joshi has been an Assistant Professor in the Department of Computer Science and Engineering at the University of Notre Dame since 2018. Prior to that, he was a Postdoctoral Fellow in the Department of Bioengineering at the University of California San Diego. He completed his PhD in 2017 in the Electrical and Computer Engineering department at UC San Diego, where he also completed his M.S. in 2012. He earned a B.Tech. from the Dhirubhai Ambani Institute of Information and Communication Technology in India. His research focuses on the co‐design of custom, non‐Boolean and non‐von Neumann, hardware and algorithms to enable machine learning and adaptive signal processing in highly resource-constrained environments.

Clemens Schaefer's research interests focus on brain-inspired computing. He especially enjoys applying findings from neuroscience to topics in machine learning, ranging from biologically plausible spiking dynamics at the single-neuron level, to stochasticity in synapses, to hyperdimensional computing. He is passionate about finding new ways for computers to learn more efficiently, and his goal is to take part in the development of cutting-edge brain-inspired computing hardware and algorithms. During his undergraduate studies at the Catholic University of Eichstätt-Ingolstadt and his master's studies at UCL, he held scholarships from the Friedrich Naumann Foundation for Freedom and the German Academic Exchange Service (DAAD).

Patrick Faley is a computer engineering undergraduate student at the University of Notre Dame. His research so far has revolved around alternative implementations of common neural network architectures, with the goal of creating models that can be implemented directly in hardware. He has also applied various statistical techniques to gain a deeper understanding of the inner workings of neural networks. In the future, Patrick hopes to implement his networks in hardware to create robust, low-power computer vision systems.


Md Shahrul Islam is a Computer Science and Engineering PhD student at the University of Notre Dame. His research interests include the implementation of novel bio-inspired algorithms in resource-constrained environments. His work focuses on designing efficient, low-power analog and mixed-signal systems for machine learning and signal processing tasks. Before joining the University of Notre Dame, he completed his master's in Electrical and Computer Engineering at Southern Illinois University Carbondale.

Pooria Taheri is a PhD student in the Department of Computer Science and Engineering at the University of Notre Dame. His research interests revolve around designing accelerators using emerging technologies for machine learning applications, more specifically spiking neural networks (SNNs). Before joining the University of Notre Dame, he graduated from the National University of Iran, where his research focused mostly on parallel computation for classification algorithms (e.g., k-means) and VLSI design. He also worked as a junior researcher at the Institute for Research in Fundamental Sciences (IPM).

Mark Horeni is a PhD student in the Department of Computer Science and Engineering at the University of Notre Dame. His interests lie in solving neuromorphic engineering problems, ranging from examining different representations of information to studying how the properties of emerging devices can be used to solve these problems. As an undergraduate at Lewis University, he researched the connectome of the C. elegans roundworm to analyze the predictive power of the links between neurons.

Kshama is a PhD student in the Computer Science and Engineering department at the University of Notre Dame. Her research interests include the design and implementation of low-power integrated circuits for biomedical and bio-inspired systems. Before joining the University of Notre Dame, she graduated from the University of Wyoming with a B.S. in Electrical Engineering and from Boise State University with an M.S. in Electrical Engineering.

Yasmein is a PhD student in the Computer Science and Engineering department at the University of Notre Dame. Her research interests include the design and implementation of digital neuromorphic integrated circuits. She graduated from the University of Science and Technology at Zewail City with a B.Sc. in Nanotechnology and Nanoelectronics Engineering.

Thomas is a Ph.D. student in the Department of Computer Science and Engineering at the University of Notre Dame. His research interests include biologically inspired machine learning and edge AI applications. Thomas graduated from Brown University, concentrating in Computational Cognitive Neuroscience.

Jake Leporte is a Master's student in the Department of Computer Science and Engineering at the University of Notre Dame. Jake also completed his undergraduate studies at the University of Notre Dame, and commissioned into the US Air Force through ND's AFROTC Detachment 225. His research focuses on digital accelerator design for neuromorphic systems.

Samir is a Computer Science and Engineering PhD student at the University of Notre Dame. His research interests include applying analog computation to radio-frequency applications. He is also interested in designing efficient, low-power analog and mixed-signal systems for machine learning and signal processing tasks. Before joining the University of Notre Dame, he completed his bachelor's in Computer Science and Engineering at BRAC University in Dhaka, Bangladesh. He then worked in industry, where he gained expertise in FinFET technology nodes such as 3 nm, 5 nm, and 16 nm.


Research

We study energy-efficient techniques for improving machine learning algorithms.

We develop extremely power-efficient processors and circuits that can operate under severe resource constraints, such as the sensors and processors used in the Internet of Things (IoT). Our group explores hardware and software techniques that enable adaptive and learning algorithms to be implemented in low-power architectures and circuits. The use of non-von Neumann architectures, together with analog-CMOS and beyond-CMOS techniques, enables these designs to operate at the extremes of energy efficiency.


Projects

Patrick Faley created a tool, based on this paper, which allows users to analyze neural networks written in PyTorch and measure how well a network separates different classes during training. With this information, users can verify that models are learning to distinguish between classes and determine which specific layers contribute the most to class separation. This can inform decisions about model utility and layer composition. The repository can be found here.


Join Us

We are looking for motivated graduate students and postdoctoral scholars who are interested in the general areas of computer architecture, circuit design, and machine learning.

We are actively looking for motivated students interested in pursuing Ph.D. degrees exploring algorithms, architectures, or circuit-level techniques for energy-efficient intelligent systems.

Helpful Experience

  • Hardware Design for Machine Learning and Deep Neural Networks
  • Reinforcement Learning
  • Generative Models
  • Strong coding abilities (C++ and Python)
  • Deep Learning platforms (PyTorch or Tensorflow)
  • VLSI Circuit Design and Computer Architecture
  • CMOS Chip Tape-Out and Testing

Open Positions

At the Intelligent Microsystems Lab, our research traverses multiple levels of abstraction, including systems, circuits, and algorithm design, residing at the interface of machine intelligence and cyber-physical systems. We study neuromorphic and other non-von Neumann architectures, leveraging the energy efficiency of analog-CMOS and alternative (beyond-CMOS) computing structures to deliver orders-of-magnitude improvements in the performance of adaptive and learning systems. We work closely with groups developing new devices and materials to build new chips aimed at achieving the limits of energy efficiency.

How to apply

If you are a postdoc candidate, please send us an email containing your CV, your research statement, and the names of two references from whom we can request letters.


Publications

2019

  1. Memory-Efficient Synaptic Connectivity for Spike-Timing-Dependent Plasticity

2018

  1. Sub-μVrms-Noise Sub-μW/Channel ADC-Direct Neural Recording With 200-mV/ms Transient Recovery Through Predictive Digital Autoranging
  2. Unsupervised Synaptic Pruning Strategies for Restricted Boltzmann Machines
  3. A 92dB dynamic range sub-μVrms-noise 0.8μW/ch neural-recording ADC array with predictive digital autoranging
  4. Capacitive passive mixer baseband receiver with broadband harmonic rejection

2017

  1. Neuromorphic event-driven multi-scale synaptic connectivity and plasticity
  2. Neuromorphic neural interfaces: from neurophysiological inspiration to biohybrid coupling with nervous systems
  3. From algorithms to devices: Enabling machine learning through ultra-low-power VLSI mixed-signal array processing
  4. Memristor for computing: Myth or reality?
  5. 21.7 2pJ/MAC 14b 8×8 linear transform mixed-signal spatial filter in 65nm CMOS with 84dB interference suppression
  6. High-Fidelity Spatial Signal Processing in Low-Power Mixed-Signal VLSI Arrays

2016

  1. A 6.5-μW/MHz Charge Buffer With 7-fF Input Capacitance in 65-nm CMOS for Noncontact Electropotential Sensing
  2. High-Fidelity Spatial Signal Processing in Low-Power Mixed-Signal VLSI Arrays
  3. Energy Recycling Telemetry IC With Simultaneous 11.5 mW Power and 6.78 Mb/s Backward Data Delivery Over a Single 13.56 MHz Inductive Link
  4. Forward table-based presynaptic event-triggered spike-timing-dependent plasticity
  5. Hierarchical Address Event Routing for Reconfigurable Large-Scale Neuromorphic Systems
  6. Stochastic synapses enable efficient brain-inspired learning machines
  7. A 1.3 mW 48 MHz 4 channel MIMO baseband receiver with 65 dB harmonic rejection and 48.5 dB spatial signal separation
  8. Neuromorphic architectures with electronic synapses

2015

  1. Unsupervised learning in synaptic sampling machines
  2. A CMOS 4-channel MIMO baseband receiver with 65dB harmonic rejection over 48MHz and 50dB spatial signal separation over 3MHz at 1.3 mW

2014

  1. A 12.6 mW 8.3 Mevents/s contrast detection 128×128 imager with 75 dB intra-scene DR asynchronous random-access digital readout
  2. A 7.86 mW +12.5 dBm in-band IIP3 8-to-320 MHz capacitive harmonic rejection mixer in 65nm CMOS
  3. Energy-recycling integrated 6.78-Mbps data 6.3-mW power telemetry over a single 13.56-MHz inductive link

2012

  1. 65k-neuron integrate-and-fire array transceiver with address-event reconfigurable synaptic routing
  2. Event-driven neural integration and synchronicity in analog VLSI
  3. Live demonstration: Hierarchical address-event routing architecture for reconfigurable large scale neuromorphic systems
  4. Hierarchical address-event routing architecture for reconfigurable large scale neuromorphic systems

2011

  1. Head Harness & Wireless EEG Monitoring System
  2. Subthreshold MOS dynamic translinear neural and synaptic conductance
  3. Double Precision Sparse Matrix Vector Multiplication Accelerator on FPGA.

2010

  1. FPGA based high performance double-precision matrix multiplication
  2. Scalable event routing in hierarchical neural array architecture with global synaptic connectivity

2009

  1. Matrix Multiplication

2008

  1. 64bit, floating point matrix matrix multiplier on FPGA

Publication Details

More information on our publications is available here.