AI for Fully-Automated Chip Design: Gimmick or Trend?

June 29, 2024, Buenos Aires, Argentina

Co-Located with ISCA 2024


About

Welcome to the workshop on AI for Fully-Automated Chip Design, co-located with ISCA 2024. Over the past decades, numerous researchers and engineers have made significant contributions to the design of high-performance computers, and the pursuit of automated processor design has been a long-standing goal. As artificial intelligence (AI) demonstrates its potential to outperform humans in specific areas (such as playing Go, optimizing code, and predicting protein structures), architecture researchers are increasingly interested in applying AI techniques to automated processor design. Recent work in this field has made progress in AI-based architecture optimization. However, these initiatives still lag significantly behind the goal of autonomous processor design and self-evolution, a concept envisioned by von Neumann in 1956 as "Self-Reproducing Automata". Clearly, processor design is fundamentally different from traditional AI tasks: the precision required for CPU design is extraordinarily high (>99.99999999999%), vastly surpassing the levels typically associated with prediction and generation tasks in AI. CPU design is the ultimate product of logical computation, an area in which current data-driven AI technologies fall short, presenting a formidable challenge to the proficiency of modern AI.

We therefore organize this workshop to bring together architecture researchers and designers to discuss how to apply AI techniques to boost automated architectural design, and where the boundaries of AI techniques in automated design lie.


Topics

This workshop will focus on, but is not limited to, the following topics:
  • High-performance automated design for domain-specific accelerators.
  • Development of Architecture Design Language (ADL) tailored for automated processor design.
  • Utilization of Large Language Models (LLMs) in automated circuit design.
  • Design Technology Co-Optimization (DTCO) methodologies.
  • Verification and testing strategies for automated processor design.
  • Design space definition and exploration to improve automated design quality.
  • Emerging methods for automated processor generation.

Program

Zoom

Meeting Room ID: 895 9953 2173 Password: 8NBcsA



Location

Buenos Aires, Argentina, at the Hilton Buenos Aires

Program Details

AI for Automated Chip Design: Everything, Everywhere, All at Once (?)

Abstract

AI for chip design and EDA has received tremendous interest from both academia and industry in recent years. It touches everything that chip designers care about, from power/performance/area (PPA) to cost/yield, turn-around time, security, and so on. It is everywhere, at all levels of design abstraction, in testing, verification, DTCO, mask synthesis/ILT, and some aspects of analog/mixed-signal and RF IC design as well. It has also been used to tune the overall design flow and its hyper-parameters, but not yet all at once, e.g., generative AI from design specification to layout in a correct-by-construction manner. In this talk, I will cover some recent advancements and breakthroughs in AI for automated chip design and share my perspectives.

Bio

Dr. David Pan (Fellow of ACM, IEEE, and SPIE) is the Silicon Laboratories Endowed Chair Professor in the ECE Department at UT Austin. His research interests include electronic design automation, synergistic AI and IC co-optimizations, design for manufacturing, and CAD for analog/mixed-signal/RF and emerging technologies. He has published over 480 refereed journal/conference papers and holds 9 US patents. He has served on many journal editorial boards and conference committees, e.g., as DAC 2024 TPC Chair and ICCAD 2019/2018 General/TPC Chair. He has received many awards, including 21 Best Paper Awards from premier venues, the SRC Technical Excellence Award, the DAC Top 10 Author Award in Fifth Decade, and the ASP-DAC Frequently Cited Author Award, among others. He has graduated over 50 PhD students and postdocs who now hold key academic and industry positions.

Automatically generating robotics accelerators

Abstract

In this talk, I will discuss the challenges and solutions involved in automatically generating accelerators for autonomous machines. I will describe a framework that adapts to environmental changes while remaining aware of hardware resources, and conclude with a discussion of future challenges.

Bio

Yuhao Zhu is an Associate Professor of Computer Science and of Brain and Cognitive Sciences at the University of Rochester. He works at the intersection of imaging, computer systems, and human perception & cognition. He has received the typical awards for someone at this stage in his career. More about his research can be found at: http://www.horizon-lab.org/

A Systematic and Rapid Approach to Design Space Exploration for Tensor Accelerators

Abstract

Architectural Design Space Exploration (DSE) is a critical yet challenging aspect of chip design. The process involves searching through a vast space of hardware design options and mapping configurations, along with performance evaluations to guide optimization. Although AI/ML is at an inflection point for wide application in data-rich problems, its direct adoption in DSE remains inefficient due to several challenges: the high-dimensional and discrete nature of the search space, lengthy mapping and evaluation cycles, and a lack of design data for generalization. In this talk, we will discuss the tools and techniques we applied in DSE to address these challenges. Specifically, we will cover 1) a variational autoencoder (VAE) approach to transform the design space into a continuous low-dimensional space, 2) an optimization-based mapper and an analytical performance model for rapid tensor accelerator evaluation, and 3) a differentiable formulation for DSE that leverages domain knowledge to further improve search efficiency and generalizability.
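
The first of these techniques is easy to illustrate in isolation. Below is a minimal sketch, in PyTorch, of using a variational autoencoder to embed one-hot-encoded discrete design parameters into a continuous latent space where standard continuous optimizers can search; the 12-dimensional encoding and network sizes are invented for illustration and this is not the speaker's actual tool.

```python
import torch
import torch.nn as nn

class DesignVAE(nn.Module):
    """Compress discrete design choices into a continuous latent space."""
    def __init__(self, input_dim, latent_dim=4):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, 32), nn.ReLU())
        self.mu = nn.Linear(32, latent_dim)
        self.logvar = nn.Linear(32, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, input_dim))

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.decoder(z), mu, logvar

# One hypothetical design point: one-hot choices for PE-array width, height,
# and buffer size, concatenated into a 12-dimensional vector.
vae = DesignVAE(input_dim=12)
x = torch.zeros(1, 12)
x[0, [2, 5, 9]] = 1.0
recon, mu, logvar = vae(x)
kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())  # KL term of the ELBO
loss = nn.functional.mse_loss(recon, x) + kl                  # reconstruction + regularization
```

After training on sampled configurations, search proceeds in the low-dimensional latent space, and candidate points are decoded back to discrete designs for evaluation.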

Bio

Qijing Jenny Huang is a research scientist at NVIDIA. Her research focuses on emerging GPU architecture and the co-optimization of algorithms, mappings, and hardware to enhance system performance. Before NVIDIA, she earned her PhD in Computer Science from UC Berkeley, where she worked on algorithm-hardware co-design for FPGA accelerators, HLS-based design methodology, and ML/ILP-assisted compiler optimization and hardware DSE techniques.

Empowering Physical Design of VLSI Circuits with Deep Learning: from Modeling to Optimization

Abstract

Physical design stands as a pivotal step in the intricate design flow of contemporary VLSI circuits. It maps a circuit design into a physical layout with manufacturable gates and wires. Modern physical design necessitates numerous iterations across multiple design stages to achieve convergence in performance, power, and area, and this iterative process is often exceedingly time-intensive. Meanwhile, amidst the current wave of artificial intelligence, deep learning has emerged as a transformative force, demonstrating remarkable potential across diverse domains such as computer vision, recommendation systems, and robotics. Integrating deep learning into the VLSI design workflow has thus gained significant attention as a promising avenue. In this presentation, we delve into our recent endeavors in developing efficient cross-stage models grounded in open-source datasets like CircuitNet, and examine how these models can facilitate effective optimization in physical design.
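
As a concrete picture of what such a cross-stage model can look like, here is a minimal sketch of a fully-convolutional network that maps early-stage placement feature maps to a predicted congestion map; the grid size, the choice of three input features, and the network shape are assumptions for illustration, not the models from the talk.

```python
import torch
import torch.nn as nn

# Inputs: per-tile feature maps rasterized from a placement (e.g., cell
# density, pin density, and a RUDY-style routing-demand estimate).
congestion_net = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, kernel_size=1), nn.Sigmoid())  # per-tile congestion in [0, 1]

features = torch.rand(1, 3, 256, 256)            # placeholder for real placement features
predicted_congestion = congestion_net(features)  # shape (1, 1, 256, 256)
```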

Bio

Yibo Lin is an assistant professor in the School of Integrated Circuits at Peking University. He received his B.S. degree in microelectronics from Shanghai Jiaotong University in 2013 and his Ph.D. degree from the Electrical and Computer Engineering Department of the University of Texas at Austin in 2018. His research interests include physical design, machine learning applications, and GPU/FPGA acceleration. He has received 7 Best Paper Awards at premier venues, including DATE 2023, DATE 2022, TCAD 2021, and DAC 2019. He has also served on the Technical Program Committees of many major conferences, including ICCAD, ICCD, ISPD, and DAC.

Video

Machine Learning for System-Level Design: Challenges and Opportunities

Abstract

The application of machine learning (ML) techniques to design problems has generated a lot of excitement and promise at lower levels of abstraction. By contrast, corresponding ML approaches for system modeling, design, and optimization at higher levels of abstraction have been relatively less explored. In this talk, we will discuss potential solutions, limitations, and pitfalls for ML-based system-level design. We will specifically discuss unique challenges and opportunities when applying ML to design problems at higher abstraction levels to close the system-level gap. This will include approaches for predictive modeling and learning-based co-design, both vertically across the compute stack and horizontally across heterogeneous target components. Time permitting, we will further discuss applications of such ML-based approaches to system architecture exploration, programming, and proactive runtime resource management.
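
The predictive-modeling idea can be made concrete with a small surrogate model: learn to predict a system-level metric from high-level features so that an exploration loop can avoid slow cycle-accurate simulation. The features and data below are synthetic stand-ins invented for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
# Hypothetical features: memory intensity, branch rate, core count, frequency.
X = rng.random((200, 4))
# Synthetic "latency": a compute-bound term plus a memory penalty, plus noise.
y = 1.0 / (X[:, 2] * X[:, 3] + 0.1) + 2.0 * X[:, 0] + rng.normal(0, 0.05, 200)

model = RandomForestRegressor(n_estimators=100).fit(X[:150], y[:150])
print("held-out R^2:", model.score(X[150:], y[150:]))  # surrogate quality check
```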

Bio

Andreas Gerstlauer is a Cullen Trust for Higher Education Endowed Professor and Associate Chair for Academic Affairs in the Electrical and Computer Engineering (ECE) Department at The University of Texas at Austin. He received his Ph.D. degree in Information and Computer Science (ICS) from the University of California, Irvine (UCI) in 2004. Prior to joining UT Austin in 2008, he was an Assistant Researcher in the Center for Embedded Computer Systems (CECS) at UC Irvine, leading a research group to develop electronic system-level design tools. Dr. Gerstlauer is co-author of 3 books and more than 150 conference and journal publications. His work was recognized with the 2021 MLCAD, 2016 DAC, and 2015 SAMOS Best Paper Awards, several best paper nominations (from, among others, DAC, DATE, and HOST), selection as a 2021 IEEE HSTTC Top Pick in Hardware and Embedded Security, and recognition as one of the most influential contributions in 10 years at DATE in 2008. He is the recipient of a 2016-2017 Humboldt Research Fellowship, and he serves or has served as an Associate and Special Issue Editor for the ACM TECS and TODAES journals as well as General or Program Chair for major international conferences such as ESWEEK, MEMOCODE, CODES+ISSS, and SAMOS. His research interests include system-level design automation, system modeling, design languages and methodologies, and embedded hardware and software synthesis.

Chip Learning for Processor Design

Abstract

Designing a central processing unit (CPU), the brain of a computer, requires intensive manual work by talented experts to implement the circuit logic from design specifications. To relieve human effort and thus speed up the design process, considerable progress has recently been made in electronic design automation and artificial intelligence (AI). However, existing approaches still require hand-crafted formal program code as the input to generate circuit logic. Here we report a RISC-V CPU automatically designed by a new AI approach without a formal blueprint. This approach generates the circuit logic of the CPU design, represented by a graph structure called a Binary Speculation Diagram (BSD), with almost 100% validation accuracy, only from input-output examples. By efficiently exploring a search space of unprecedented size 10^10^540, our approach generates an industrial-scale RISC-V CPU within only 5 hours. The taped-out CPU successfully runs the Linux operating system and performs comparably to the human-designed Intel 80486SX CPU.
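
The core move, inferring circuit logic purely from input-output examples, can be illustrated with a toy. The sketch below is plain recursive Shannon expansion, the co-factoring that underlies decision-diagram structures; it is not the BSD algorithm itself, which adds speculation and aggressive pruning to reach CPU scale.

```python
def learn(examples, var=0):
    """Rebuild a Boolean function from (input_bits, output_bit) examples."""
    outs = {o for _, o in examples}
    if len(outs) == 1:                # all remaining examples agree: leaf node
        return outs.pop()
    lo = [(i, o) for i, o in examples if i[var] == 0]  # cofactor x_var = 0
    hi = [(i, o) for i, o in examples if i[var] == 1]  # cofactor x_var = 1
    return (var, learn(lo, var + 1), learn(hi, var + 1))

def evaluate(node, inputs):
    while isinstance(node, tuple):    # walk internal nodes down to a 0/1 leaf
        var, lo, hi = node
        node = hi if inputs[var] else lo
    return node

# Target: 1-bit majority-of-three, specified only by its full truth table.
io = [((a, b, c), int(a + b + c >= 2))
      for a in (0, 1) for b in (0, 1) for c in (0, 1)]
tree = learn(io)
assert all(evaluate(tree, i) == o for i, o in io)  # reproduces the function
```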

Bio

Zidong Du is a full professor at the State Key Laboratory of Processors (SKLP), Institute of Computing Technology (ICT), Chinese Academy of Sciences (CAS). He obtained his Ph.D. degree in computer architecture from ICT, CAS in 2016 and his bachelor's degree in engineering from the Department of Electronic Engineering, Tsinghua University in 2011. His research interests mainly focus on artificial intelligence and computer architecture, including designing novel architectures for artificial intelligence (Arch4AI) and with artificial intelligence (AI4Arch). He has won the Best Paper Award at ASPLOS 2014, the Influential Paper Award at ASPLOS 2024, and the Best Paper Runner-up Award at MICRO 2022. His paper from ISCA 2015 was selected for the "ISCA@50 25-Year Retrospective: 1996-2020". He also received a second-class National Natural Science Award as the third awardee in 2021 for his exceptional research achievements.

Video

A High-Level Synthesis Based Framework for Design Space Exploration and Generation of Neural Network Accelerators

Abstract

Designing neural network accelerators requires significant engineering effort, and as the rapidly evolving field of machine learning develops new models, the current approach of designing ad hoc accelerators does not scale. In this talk, we will present our ongoing work on a high-level synthesis (HLS)-based framework for design space exploration and generation of neural network accelerators. Given architectural parameters such as datatype, scaling granularity, compute parallelization, and buffer sizes, our framework generates a performant, tape-out-ready RTL accelerator. Accelerators generated through this framework have been taped out in several chips, targeting various workloads including CNNs, Transformers, and extended reality applications.
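
The parameter-to-hardware step of a generator-style flow can be sketched as follows; the parameter names and the emitted Verilog stub are hypothetical, and the actual framework uses HLS to produce complete, tape-out-ready RTL rather than string templates.

```python
from dataclasses import dataclass

@dataclass
class AcceleratorConfig:
    data_width: int = 8   # datatype width in bits
    pe_rows: int = 4      # compute parallelization (PE-array shape)
    pe_cols: int = 4
    buffer_kb: int = 64   # on-chip buffer capacity

def emit_rtl(cfg: AcceleratorConfig) -> str:
    """Elaborate one design point into a (stub) Verilog module."""
    n = cfg.pe_rows * cfg.pe_cols
    return (f"module mac_array #(parameter W = {cfg.data_width}) (\n"
            f"  input clk,\n"
            f"  input [{n}*W-1:0] a, b,\n"
            f"  output [{n}*2*W-1:0] out\n"
            f");\n"
            f"  // {cfg.pe_rows}x{cfg.pe_cols} PE array, "
            f"{cfg.buffer_kb} KB buffering (body elided)\n"
            f"endmodule\n")

for rows in (2, 4, 8):  # a tiny design-space sweep over one parameter
    print(emit_rtl(AcceleratorConfig(pe_rows=rows)))
```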

Bio

Kartik Prabhu is a PhD student at Stanford University advised by Prof. Priyanka Raina. He received his BS in Computer Engineering from Georgia Institute of Technology in 2018 and his MS in Electrical Engineering from Stanford University in 2021. Kartik’s research focuses on efficient accelerators for machine learning using emerging technologies and chip design automation. His work won the Best Student Paper Award (Circuits) at the 2021 IEEE Symposia on VLSI Technology & Circuits.

Scaling Up the Hardware Design Capability of LLMs: Lessons from the 1st OpenDACs Contest of Processor Design

Abstract

Hardware description language (HDL) code design is a critical component of the chip design process, requiring substantial engineering and time resources. Recent advancements in large language models (LLMs), such as GPTs, have demonstrated their potential to automate this intricate task. Nevertheless, existing LLM-based approaches to HDL code generation do not yet meet the complex requirements of real-world hardware design. Although modern LLMs can process long sequences of context tokens, they often do not produce correspondingly long stretches of code; furthermore, their effectiveness in generating HDL code is still limited. We propose the AutoSilicon framework, which aims to scale up the hardware design capability of LLMs. AutoSilicon incorporates an agent system that 1) decomposes large-scale, complex code design tasks into smaller, simpler tasks; 2) provides a compilation and simulation environment that enables LLMs to compile and test each piece of code they generate; and 3) introduces a series of optimization strategies. Experimental results show that AutoSilicon scales the hardware design capability of LLMs well in terms of both design size and quality. To further promote LLM-based hardware design methodology, we organized the 1st OpenDACs LLM-based Processor Design Competition using the AutoSilicon framework. The outcomes provide a set of lessons learned about LLM-based hardware design methodologies.
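
The compile-and-test loop at the heart of such an agent system looks roughly like the sketch below; llm_generate is a hypothetical stand-in for a real LLM API call, iverilog is used only as an example open-source compiler, and simulation-based functional checks are elided.

```python
import os
import subprocess
import tempfile

def llm_generate(prompt: str) -> str:
    raise NotImplementedError("hypothetical stand-in for a real LLM API call")

def try_compile(verilog_src: str) -> tuple[bool, str]:
    """Compile a candidate module with iverilog; return (ok, error_log)."""
    with tempfile.TemporaryDirectory() as d:
        path = os.path.join(d, "dut.v")
        with open(path, "w") as f:
            f.write(verilog_src)
        r = subprocess.run(["iverilog", "-o", os.path.join(d, "dut.out"), path],
                           capture_output=True, text=True)
        return r.returncode == 0, r.stderr

def design_module(subtask_spec: str, max_attempts: int = 5) -> str:
    """One decomposed subtask: generate RTL, compile, feed errors back, repeat."""
    prompt = subtask_spec
    for _ in range(max_attempts):
        src = llm_generate(prompt)     # 1) generate code for this subtask
        ok, errors = try_compile(src)  # 2) compile (and, in practice, simulate)
        if ok:
            return src
        prompt = subtask_spec + "\nFix these compiler errors:\n" + errors  # 3) iterate
    raise RuntimeError("no compilable RTL within the attempt budget")
```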

Bio

Cangyuan Li is a PhD student at the Institute of Computing Technology, Chinese Academy of Sciences, where he is mentored by Professor Ying Wang. His research is centered on the design automation of application-specific accelerators, exploring both large language model-based methodologies and template-based approaches. His research has been published at conferences such as ATC and DAC.

Machine Learning Assisted Memory and Storage System Management

Bio

Onur Mutlu is a Professor of Computer Science at ETH Zurich. He is also a Visiting Professor at Stanford University and a faculty member at Carnegie Mellon University, where he previously held the Strecker Early Career Professorship. His current broader research interests are in computer architecture, systems, hardware security, and bioinformatics. A variety of techniques he, along with his group and collaborators, has invented over the years have influenced industry and have been employed in commercial microprocessors and memory/storage systems. He obtained his PhD and MS in ECE from the University of Texas at Austin and BS degrees in Computer Engineering and Psychology from the University of Michigan, Ann Arbor. He started the Computer Architecture Group at Microsoft Research (2006-2009), and held various product and research positions at Intel Corporation, Advanced Micro Devices, VMware, and Google. He received various honors for his research, including the IFIP Jean-Claude Laprie Award in Dependable Computing, the Persistent Impact Prize of the Non-Volatile Memory Systems Workshop, the Intel Outstanding Researcher Award, the IEEE High Performance Computer Architecture Test of Time Award, the IEEE Computer Society Edward J. McCluskey Technical Achievement Award, the ACM SIGARCH Maurice Wilkes Award, and a healthy number of best paper or "Top Pick" paper recognitions at various computer systems, architecture, and security venues. He is an ACM Fellow, an IEEE Fellow, and an elected member of the Academy of Europe. His computer architecture and digital logic design course lectures and materials are freely available on YouTube (https://www.youtube.com/OnurMutluLectures), and his research group makes a wide variety of software and hardware artifacts freely available online (https://safari.ethz.ch/). For more information, please see his webpage at https://people.inf.ethz.ch/omutlu/.


Organizing Committee