INC 2020 / IRDS™ Fall 2020 IFT Readouts

2020 IEEE International Nanodevices and Computing (INC) Conference / International Roadmap for Devices and Systems (IRDS™)



Welcome to the 2020 IEEE International Nanodevices & Computing (INC) Conference.

The IRDS™/INC 2020 conference will be held 2-3 September as a virtual event. The 2020 IRDS™ roadmap will be presented, and new chapters planned for the 2021 IRDS™ on More than Moore and Packaging Integration will be introduced for the first time. Overall plans for the 2021 IRDS™ will be outlined, and highlights of industry responses to the 2020 IRDS™ and current technology trends will be presented.

In addition, state-of-the-art experimental results on these topics and more will be presented on the INC day by an international group of invited experts covering Computer Architecture & Communication Systems and Nanodevices & Materials.

The IEEE International Nanodevices and Computing (INC) Conference covers the continuously evolving technology ecosystem based on nanotechnology, nanodevices, and computing, supporting the global information technologies infrastructure. The INC conference includes predictions on devices for computing and communications, computer architecture, and applications. The latest results in the field of Neuromorphic Computing, Quantum Computing, and Machine Learning at the Edge will be highlighted. The advent of 5G has created excitement, and INC will present for the first time a mapping of what needs to be done to cover the full technology landscape.

Register Now - Sessions Available On-Demand



Day 1:

INC 2020 Day 1 Program


Day 2:

INC 2020 Day 2 Program



Praneet Adusumilli
Praneet Adusumilli, Ph.D. is a Senior Engineer at the IBM Thomas J. Watson Research Center in Yorktown Heights, NY. He received his doctoral degree in Materials Science and Engineering from Northwestern University and a bachelor’s degree in Metallurgical Engineering from the Indian Institute of Technology, Varanasi, India. After joining IBM in 2011, he worked on silicide contacts and local interconnects for multiple generations of advanced CMOS technology. More recently, his research has focused on non-volatile memory devices for analog in-memory compute to accelerate deep learning applications. He has co-authored more than 30 publications and is a co-inventor on more than 100 US patents.

Abstract: Advancing Broad AI with Algorithms, Architectures & Materials
The explosive growth in the size of Artificial Intelligence (AI) models and the concomitant increase in compute intensity over the past few years is unsustainable without significant innovation across the stack. A non-von Neumann computational approach using in-memory compute is a paradigm shift that can deliver significant improvements in performance and energy efficiency for data-centric AI applications. Among the many memory options being evaluated, phase change memory (PCM) devices are attractive given their non-volatility, analog tunability, and technology maturity. However, these devices also suffer from many non-idealities. This talk details innovations across materials, algorithms, and architecture to address challenges posed by PCM device non-idealities to accelerate AI inference and training jobs.


Francis Balestra
Director of Research CNRS

Vice President Grenoble INP
Director of the European Sinano Institute


Tom Conte
Professor of CS & ECE, and Co-Director, CRNCH Center,
Georgia Institute of Technology
IEEE Division V Director


Michael Frank
Dr. Michael P. Frank (Stanford ’91, MIT ’99) began his studies of nanoscale computing in 1994, at the start of his Ph.D. work, inventing one of the first schemes for universal computing with DNA. Discovering that the laws of thermochemistry required his computing model to be reversible, Mike then began investigating more practical approaches to the engineering of reversible machines. During his Ph.D. work, Mike proved rigorously that in physically realistic models of computation, reversible computing technologies can have better scaling properties than any possible non-reversible computing technology, and Mike and his fellow students built the first reversible CMOS chips. Since receiving his Ph.D. in 1999, Mike has continued to focus his research on reversible computing, first as a faculty member at the University of Florida and Florida State University, and since 2015 in the Center for Computing Research at Sandia National Laboratories.

Abstract: Novel Reversible Devices and Systems Implications
The principles of reversible computing offer a tantalizing prospect for an alternative path for continuing to improve the energy efficiency of general digital computation far beyond the physical limits that will soon cause the low-level energy efficiency of the conventional non-reversible computing paradigm to plateau. Microcircuit designs illustrating the physical and architectural principles of reversible computing already exist for both semiconducting and superconducting platforms. However, to maximize the practical applicability of the reversible approach, it is essential to explore ways to improve characteristics such as speed (serial performance) and, even more broadly, cost-performance of the technology, simultaneously with its energy efficiency. This will likely require novel circuit, device, and even materials innovations. In this talk, we survey a range of new ideas for leveraging exotic quantum phenomena to help reduce energy dissipation as a function of delay in new classes of devices based on fundamentally novel physical mechanisms of operation, and discuss what the architectural and systems-level impacts of such technologies could be, if such concepts are eventually developed to the product level.


Paolo Gargini
Chairman, IRDS™
IEEE Life Fellow
I-Fellow, JSAP


Yoshihiro Hayashi
Chairperson, SDRJ

Visiting Professor,
Faculty of Science and Technology,
Keio University

Invited Senior Researcher,
TIA Central Office,
National Institute of Advanced Industrial Science and Technology (AIST)


Vijay Janapa Reddi
Vijay Janapa Reddi is an Associate Professor at Harvard University, Inference Co-chair for MLPerf, and a founding member of MLCommons, a nonprofit ML organization aimed at accelerating ML innovation. He also serves on the MLCommons board of directors. Before joining Harvard, he was an Associate Professor in the Department of Electrical and Computer Engineering at The University of Texas at Austin. His research interests include computer architecture and runtime systems, specifically in the context of autonomous machines and mobile and edge computing systems. Dr. Janapa Reddi is a recipient of multiple honors and awards, including the National Academy of Engineering (NAE) Gilbreth Lecturer Honor (2016), IEEE TCCA Young Computer Architect Award (2016), Intel Early Career Award (2013), Google Faculty Research Awards (2012, 2013, 2015, 2017, 2020), Best Paper at the 2020 Design Automation Conference (DAC), Best Paper at the 2005 International Symposium on Microarchitecture (MICRO), Best Paper at the 2009 International Symposium on High Performance Computer Architecture (HPCA), and IEEE’s Top Picks in Computer Architecture awards (2006, 2010, 2011, 2016, 2017). He has been inducted into the MICRO and HPCA Halls of Fame (in 2018 and 2019, respectively). He received a Ph.D. in computer science from Harvard University, an M.S. from the University of Colorado at Boulder, and a B.S. from Santa Clara University.

Abstract: Benchmarking Machine Learning Systems: An MLPerf Perspective
Deep Learning is transforming the field of machine learning (ML) from theory to practice. It has also sparked a renaissance in computer system design. Both academia and industry are scrambling to integrate ML-centric solutions into their products. Despite the breakneck pace of innovation, there is a crucial issue affecting the research and industry communities at large: how to enable fair and useful benchmarking of ML software frameworks, ML hardware accelerators, and ML systems. The ML field stands in need of systematic benchmarking that is both representative of real-world use cases and useful for making neutral comparisons across different software and hardware platforms. MLPerf answers the call. MLPerf is a machine learning benchmark standard and suite driven by academia and industry (50+ companies). The talk describes the design principles behind MLPerf. It discusses the challenges and opportunities in developing a benchmark for the industry that tackles the complexity, heterogeneity, and scale of ML training and inference systems.


Shih-Chii Liu
Shih-Chii Liu is a professor in the Faculty of Science at the University of Zurich. She co-directs the Sensors group at the Institute of Neuroinformatics, University of Zurich and ETH Zurich. Her research focus is on the design of low-power neuromorphic asynchronous spiking auditory and vision sensors, bio-inspired computing circuits, and more recently on event-driven deep neural network processors and their use in neuromorphic artificial intelligent systems. Dr. Liu is past Chair of the IEEE CAS Sensory Systems and Neural Systems and Applications Technical Committees. She is the current Chair of the IEEE Swiss CAS/ED Society and general co-chair of the 2020 IEEE Artificial Intelligence for Circuits and Systems conference.

Abstract: Brain-inspired computation and event-driven technology
A fundamental organizing principle of brain computing enabling its amazing combination of intelligence, quick responsiveness, and low power consumption is its use of sparse spiking activity to drive computation. Recent progress in the development of higher-performance event-driven deep networks, neuromorphic spike-event-based visual (DVS/ATIS/DAVIS) and auditory (DAS) sensors along with versatile hardware such as FPGAs have stimulated exploration of real-time event-driven technology for wearable and IoT platforms. These systems enable "always-on" low-latency system-level response time at lower power than equivalent conventional solutions. We show recent work in constructing energy-efficient event-driven deep networks that exploit spatial and temporal sparsity and real-world examples of the use of these networks with these neuromorphic sensors.


Shashank Misra
Shashank Misra earned a doctorate in physics from the University of Illinois at Urbana-Champaign in 2005, and since 2013 has been a member of the research staff at Sandia National Laboratories. His research interests have revolved around developing instruments and techniques that provide new access to exotic phases in quantum materials, and quantum effects in semiconductors. More recently, his interests have turned to using STM-based lithography to fabricate atomically precise dopant devices in semiconductors. He leads the Far-Reaching Applications, Implications, and Realization of Digital Electronics at the Atomic Limit (FAIR DEAL) program at Sandia.

Abstract: From atoms to transistors: finding opportunities for more Moore in silicon
Increasing tooling and development costs are poised to disrupt the microelectronics ecosystem, as the number of companies choosing to pursue the latest manufacturing nodes continues to shrink. In this context, it makes sense to relax the requirement for achieving scalable manufacturing and to evaluate opportunities based on the physical limit of atoms, not just on the incremental gains achievable with volume fabrication one or two generations in the future. Here, we examine progress in creating digital microelectronics using atomic precision advanced manufacturing (APAM), which leverages surface chemistry to incorporate dopants into silicon with atomic precision. At first glance, this technique appears to be a poor candidate for application to digital microelectronics: it has mostly been used to fabricate simple “one-off” devices that function only at cryogenic temperatures. Our work focuses on making complex devices that work at room temperature, enabling control over transistor technologies that hold promise for energy-efficient computing. APAM also produces such a high density of dopants that it transforms the electronic structure of silicon, opening the door to nearer-term impact. To enable these benefits, we also detail our efforts to directly integrate APAM into a CMOS manufacturing workflow, and to extend APAM to volume wafer-scale fabrication at reduced resolution.

This work was supported by the Laboratory Directed Research and Development Program at Sandia National Laboratories and was performed, in part, at the Center for Integrated Nanotechnologies, a U.S. DOE, BES user facility. SNL is a multimission laboratory managed and operated by National Technology & Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International Inc., for the U.S. DOE’s NNSA under contract DE-NA0003525. The views expressed in the article do not necessarily represent the views of the DOE or the U.S. Government.



Registration for INC 2020 is now open. You may register for both days or a single day. Please use your IEEE Member account to complete registration. If you are not a member, please create a free IEEE Account. After you have registered, you will be able to access the recordings on-demand.

Registration Option | IEEE Member | IEEE Student Member | Non-IEEE Member | Registration Link
INC Days 1 & 2 (Full Conference) | $40 | $20 | $80 | Register (Best Value)
INC Day 1 - Wednesday, 2 September | $25 | $15 | $49 | Register
INC Day 2 - Thursday, 3 September | $25 | $15 | $49 | Register

Registration payments are accepted in local currency.

Sessions are available on-demand for a limited time. If you previously registered for this conference, sign-in with your IEEE account and access the recordings via the links below.

INC 2020 / IRDS™ - Day 1 On-Demand Recording

INC 2020 / IRDS™ - Day 2 On-Demand Recording




IEEE International Roadmap for Devices and Systems (IRDS™) | SiNANO Institute | The System Device Roadmap Committee of Japan (SDRJ)