AI for Fully-Automated Chip Design

The Times They Are a-Changin'

June 21-25, 2025, Tokyo, Japan

Co-Located with ISCA 2025


About

Welcome to the workshop on AI for Fully-Automated Chip Design, co-located with ISCA 2025. The past year has marked a transformative leap in AI, particularly on reasoning tasks, where AI has surpassed average human performance in areas such as mathematical problem solving and competitive programming. Notably, AlphaProof achieved silver-medal-level performance at the IMO, OpenAI o3 has reached a programming level comparable to the top 200 competitors worldwide on Codeforces, and DeepSeek-R1 has demonstrated the emergence of reasoning capabilities in LLMs. Chip design, the apex of logical computation and a fundamental reasoning task, stands to benefit greatly from these rapid advances in AI-driven methodologies and computational intelligence. We are eager to explore whether these advances are truly sufficient to solve this Holy Grail problem of computing and usher in a new era.

This workshop therefore aims to bring together architecture designers to discuss how cutting-edge AI techniques can be applied to advance automated architectural design, and to foster collaboration among researchers with expertise in machine learning, programming languages, compilers, computer architecture, and electronic design automation (EDA).


Topics

This workshop will focus on, but is not limited to, the following topics:
  • Benchmarks, Datasets, and Tool Suites for AI-driven Automated Chip Design.
  • Foundation Models (LLMs and Multimodal Models), Agents, and Frameworks for Automated Circuit and Chip Design.
  • Architecture Design Language or Software Stack for AI-driven Automated Chip Design.
  • AI Techniques for Power/Performance/Area (PPA) Prediction and Optimization.
  • Verification and Testing Flows for AI-driven Automated Chip Design.
  • AI Techniques for Verification and Testing.
  • Automated Design Space Exploration.
  • Emerging Methods for Automated Chip Generation.

Program



Location

Tokyo, Japan, at Waseda University: Room 113, B1, Building 121.

Program Details

Hypothesizing (Fantasizing) Autonomous Hardware Design

Abstract

Inspired by recent successes of AI in coding and mathematical reasoning, it is natural to ask: can similar approaches learn to design hardware autonomously, guided by verifiable feedback like compilation success or performance metrics? A major bottleneck in this feedback loop is today's EDA tool stack, which is slow to run, hard to adapt to new architectures and technologies, and heavily reliant on brittle handcrafted heuristics. This talk outlines our ongoing exploration of combining LLMs, formal methods, and differentiable compiler optimizations, which could enable scalable, feedback-driven EDA tool construction and ultimately unlock the path toward generative hardware design.

Bio

Zhiru Zhang is a Professor in the School of ECE at Cornell University. His current research investigates new algorithms, design methodologies, and automation tools for heterogeneous computing. Dr. Zhang is an IEEE Fellow and has been honored with the Intel Outstanding Researcher Award, AWS AI Amazon Research Award, Facebook Research Award, Google Faculty Research Award, DAC Under-40 Innovators Award, DARPA Young Faculty Award, IEEE CEDA Ernest S. Kuh Early Career Award, and NSF CAREER Award. He has also received multiple best paper awards from premier conferences and journals in the fields of computer systems and machine learning. He was a co-founder of AutoESL, a high-level synthesis start-up later acquired by Xilinx (now part of AMD). AutoESL's HLS tool evolved into Vivado/Vitis HLS, which is widely used for designing FPGA-based hardware accelerators.

The Role of AI for Next Generation SW/HW Codesign

Abstract

SW/HW codesign offers the promise of higher performance and more efficient systems by jointly optimizing software (SW) and hardware (HW) design decisions in a closed feedback loop. Historically, closing this loop between SW algorithm designers and HW experts has been challenging due to various factors such as differences in execution speed, semantic gaps in design representations, and expertise disparities. This talk will provide a view on how AI for HW can help address these challenges and unlock unprecedented design flow efficiencies and close gaps in the SW/HW codesign loop. By fully automating the HW design processes with AI, we can eventually empower algorithm designers to make hardware-aware design decisions and unlock a powerful new class of SW/HW codesign optimizations.

Bio

Vincent T. Lee is a Research Scientist in Reality Labs Research at Meta working on next-generation XR systems and architectures. He received his PhD in Computer Science and Engineering from the University of Washington specializing in computer architecture. His research interests include full-system architecture modeling and optimization, SW/HW codesign, AI-assisted EDA/CAD, agile hardware design methodologies, and sustainability-aware system design for XR systems.

QiMeng: Automated Hardware and Software Design for Processor Chip

Abstract

With the rapid advancement of information technology, conventional design paradigms face three major challenges: the physical constraints of fabrication technologies, the escalating demands for design resources, and the increasing diversity of ecosystems. Automated processor chip design has emerged as a transformative solution to these challenges; however, substantial obstacles remain in establishing domain-specific LLMs for processor chip design. In this talk, I will introduce QiMeng, a novel system for fully automated hardware and software design of processor chips. QiMeng comprises three hierarchical layers. At the bottom layer, it constructs a domain-specific Large Processor Chip Model (LPCM) that introduces novel designs in architecture, training, and inference to address key challenges such as the knowledge representation gap, data scarcity, correctness assurance, and the enormous solution space. At the middle layer, leveraging the LPCM's knowledge representation and inference capabilities, it develops the Hardware Design Agent and the Software Design Agent to automate the design of hardware and software for processor chips. Several components of QiMeng have already been completed and successfully applied in various top-layer applications, demonstrating significant advantages and providing a feasible solution for efficient, fully automated hardware/software design of processor chips. Future research will focus on integrating all components and performing iterative top-down and bottom-up design processes to establish a comprehensive QiMeng system.

Bio

Di Huang is a postdoctoral researcher at the Institute of Computing Technology, Chinese Academy of Sciences (ICT, CAS). He received his Ph.D. in computer architecture from ICT, CAS in 2023 and his Bachelor of Science from the Department of Physics, Tsinghua University in 2018. His research interests focus on AI for computer architecture and code generation, especially large language model-based methodologies. His work has been published in venues such as MICRO, OSDI, NeurIPS, ICLR, and AAAI.

Hair of the Dog: How AI Can Help Formally Verify AI-Designed Chips

Abstract

The use of AI in chip design has grown by leaps and bounds, improving our ability to effectively carry out tasks like layout generation and clock tree synthesis. Ongoing work is also looking at incorporating AI into earlier parts of the architectural design process. However, the outputs of AI models are not guaranteed to be correct. To truly enable fully-automated chip design, we need to ensure that AI-designed chips satisfy functional requirements, security guarantees, and more.

Formal verification can enable the strong correctness guarantees that we need from our chips today. However, formal verification is difficult to scale to large designs like those of commercial processors. Can AI help bridge this gap? In this talk, I will describe two techniques from the formal methods community that use AI to help formal methods tackle larger problems. I will cover the use of AI in neurosymbolic synthesis approaches, which can be used to generate formal microarchitectural models to use for verification. I will also cover how AI can be used to improve verification efficiency by synthesising invariants necessary for verification. Finally, I will cover some of the open challenges in applying AI to formal hardware verification.

Bio

Yatin Manerkar is an Assistant Professor in the Computer Science and Engineering Division at the University of Michigan. He received his PhD from Princeton University in January 2021, and was advised by Prof. Margaret Martonosi. His research lies on the boundary between computer architecture and formal methods, and develops automated formal methodologies and tools for the design and verification of hardware and software systems. Yatin's work covers multiple areas, including concurrency, hardware security, and formal synthesis. His research has been recognised with three best paper nominations, and three of his papers have been recognised as IEEE Micro Top Picks or Honorable Mentions for their high potential impact. Yatin's PhD dissertation also received an Honorable Mention for the ACM SIGARCH/IEEE CS TCCA Outstanding Dissertation Award in 2021. He has received Distinguished Reviewer Awards from PLDI and ASPLOS.

Learning with Limited Resources: Optimizing Neural Networks for Extreme Efficiency

Abstract

In this talk, we discuss a few cutting-edge methodologies for optimizing neural networks in resource-constrained environments. To this end, we address key challenges in neural architecture search, dynamic adaptation, and on-device training, focusing on significantly reducing computational and memory requirements while maintaining energy efficiency for various IoT applications. By optimizing for efficiency, these methods pave the way toward more scalable, adaptable, and sustainable AI solutions, facilitating broader deployment across diverse, resource-limited settings.

Bio

Radu Marculescu is a Professor and the Laura Jennings Turner Chair in Engineering in the Department of Electrical and Computer Engineering at The University of Texas at Austin. From 2000 to 2019, he was a Professor in the Department of Electrical and Computer Engineering at Carnegie Mellon University. His current research focuses on developing AI/ML algorithms and tools for system design and optimization for computer vision, bioimaging, and IoT applications. He received the 2019 IEEE Computer Society Edward J. McCluskey Technical Achievement Award for seminal contributions to the science of network-on-chip design, analysis, and optimization. He also received the 2020 ESWEEK Test-of-Time Award from The International Conference on Hardware/Software Co-Design and System Synthesis (CODES). He is an IEEE Fellow and an ACM Fellow.


Organizing Committee

PC Chairs

Web Chairs