Machine Learning For Vlsi Computer Aided Design

Machine Learning in VLSI Computer-Aided Design

This book provides readers with an up-to-date account of the use of machine learning frameworks, methodologies, algorithms, and techniques in the context of computer-aided design (CAD) for very-large-scale integrated circuits (VLSI). Coverage includes the various machine learning methods used in lithography, physical design, yield prediction, post-silicon performance analysis, reliability and failure analysis, power and thermal analysis, analog design, logic synthesis, verification, and neuromorphic design.

Key features:
- Provides up-to-date information on machine learning in VLSI CAD for device modeling, layout verification, yield prediction, post-silicon validation, and reliability
- Discusses the use of machine learning techniques in the context of analog and digital synthesis
- Demonstrates how to formulate VLSI CAD objectives as machine learning problems and provides a comprehensive treatment of their efficient solutions
- Discusses the trade-off between the cost of collecting data and prediction accuracy, and provides a methodology for using prior data to reduce the cost of data collection in the design, testing, and validation of both analog and digital VLSI designs

From the Foreword: "As the semiconductor industry embraces the rising swell of cognitive systems and edge intelligence, this book could serve as a harbinger and example of the osmosis that will exist between our cognitive structures and methods, on the one hand, and the hardware architectures and technologies that will support them, on the other. ... As we transition from the computing era to the cognitive one, it behooves us to remember the success story of VLSI CAD and to earnestly seek the help of the invisible hand so that our future cognitive systems are used to design more powerful cognitive systems. This book is very much aligned with this ongoing transition from computing to cognition, and it is with deep pleasure that I recommend it to all those who are actively engaged in this exciting transformation."
Dr. Ruchir Puri, IBM Fellow, IBM Watson CTO & Chief Architect, IBM T. J. Watson Research Center
Machine Learning for VLSI Computer Aided Design

Consumer electronics have become an integral part of people's lives, putting at their disposal immense computational power that enables numerous applications. This has been driven by the ceaseless downscaling of integrated circuit (IC) technologies, which keeps pushing the performance boundary. As a byproduct, such scaling continues to escalate the challenges associated with circuit design and manufacturability. Among the major challenges facing modern IC computer-aided design (CAD) are those related to manufacturing and yield, which are manifested through: (1) expensive modeling and simulation (e.g., large and complex designs); (2) entangled design and manufacturability (e.g., yield sensitive to design patterns); and (3) strict design constraints (e.g., high yield targets). As the challenges of retaining the robustness of modern designs continue to grow, Very Large-Scale Integration (VLSI) CAD is becoming more critical, yet more challenging. In parallel, recent advances in machine learning (ML) have altered the perception of computing. This dissertation addresses the aforementioned challenges in VLSI CAD through machine learning techniques. Our research includes efficient analog modeling, learning-assisted physical design and yield analysis, and model adaptation schemes tailored to the ever-changing IC environment.

With aggressive scaling, process variation is among the most prominent factors limiting the yield of analog and mixed-signal (AMS) circuits. In modern ICs, high simulation cost is one of the obstacles to accurate modeling of this variation. Our study develops a novel semi-supervised learning framework for AMS design modeling that significantly reduces the modeling cost. In addition, a new perspective on incorporating sparsity into the modeling task is proposed.
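The abstract does not give implementation details, but the core semi-supervised idea it describes, using a few expensive simulations (labels) plus many cheap unsimulated design points, can be illustrated with a generic self-training loop. This is a minimal sketch, not the dissertation's actual framework: the nearest-centroid pass/fail model, the confidence heuristic, and all names here are hypothetical stand-ins.

```python
def centroid(points):
    # Component-wise mean of a list of equal-length tuples.
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def dist2(p, q):
    # Squared Euclidean distance.
    return sum((a - b) ** 2 for a, b in zip(p, q))

def self_train(labeled, unlabeled, threshold=0.7, rounds=5):
    """Generic self-training: fit a nearest-centroid pass/fail model on the few
    simulated (labeled) samples, pseudo-label unsimulated samples whose
    prediction is confident, and refit -- reducing how many costly simulations
    are needed. Returns the final class centroids."""
    labeled = list(labeled)
    pool = list(unlabeled)
    for _ in range(rounds):
        classes = {y for _, y in labeled}
        cents = {c: centroid([x for x, y in labeled if y == c]) for c in classes}
        newly = []
        for x in pool:
            d = {c: dist2(x, m) for c, m in cents.items()}
            best = min(d, key=d.get)
            worst = max(d, key=d.get)
            # Crude confidence: how much closer the best class is than the worst.
            conf = d[worst] / (d[best] + d[worst] + 1e-12)
            if conf > threshold:
                newly.append((x, best))
        if not newly:
            break  # nothing confident left to pseudo-label
        labeled += newly
        pool = [x for x in pool if all(x != p for p, _ in newly)]
    return cents
```

In a realistic setting the centroid model would be replaced by the regression or classification model of interest, and the confidence measure by a calibrated one; the loop structure is what carries the cost savings.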
At the lithography stage, resolution enhancement techniques in general, and Sub-Resolution Assist Feature (SRAF) insertion in particular, have become indispensable given ever-shrinking feature sizes. While different approaches have been proposed for SRAF insertion, the trade-off between efficiency and accuracy remains the governing principle. To address this, we recast SRAF insertion as an image translation task and propose a deep learning-based approach for efficient SRAF insertion.

Furthermore, with complex designs, challenges at the physical design stage have intensified, making cross-layer information sharing imperative for timely design closure. In particular, in modern Field Programmable Gate Array (FPGA) place-and-route flows, leveraging routing congestion information during placement has demonstrated substantial benefit. Our study develops a new congestion prediction approach for large-scale FPGA designs that achieves superior prediction accuracy.

Moreover, during fabrication, a critical first step toward improving production yield is identifying the underlying factors that contribute most to yield loss, and wafer map defect analysis is key to that end. We present a novel wafer map defect pattern classification framework using confidence-aware deep selective learning.

The use of ML for CAD tasks promises better performance and efficiency. However, the field evolves rapidly: by the time enough data is available to train accurate models under a given environment, changes begin to occur. In this sense, frequent restarts limit the returns on developing ML models. To address this, we develop a framework for the fast migration of classification models across different environments. Our approaches are validated with extensive experiments in which they proved capable of advancing the VLSI CAD flow.
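The "confidence-aware selective learning" mentioned for wafer map classification builds on the general idea of selective prediction: the model answers only when it is confident and abstains otherwise. The sketch below shows that general mechanism with a simple probability threshold; the class names, threshold value, and risk/coverage helper are illustrative assumptions, not the dissertation's method.

```python
def selective_predict(probs, threshold=0.8):
    """Selective prediction with a reject option: return a defect-pattern label
    only when the model's top confidence clears the threshold; otherwise return
    None to abstain (e.g., deferring the wafer map to a human analyst)."""
    best = max(probs, key=probs.get)
    return best if probs[best] >= threshold else None

def selective_risk(predictions, labels):
    """Risk and coverage on the accepted subset: the error rate among
    non-abstained predictions, and the fraction of samples answered."""
    answered = [(p, y) for p, y in zip(predictions, labels) if p is not None]
    if not answered:
        return 0.0, 0.0
    errors = sum(1 for p, y in answered if p != y)
    return errors / len(answered), len(answered) / len(predictions)
```

Raising the threshold trades coverage for lower selective risk, which is the central knob in any selective-classification scheme.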
Advancing VLSI through Machine Learning

Author: Abhishek Narayan Tripathi
Language: en
Publisher: CRC Press
Release Date: 2025-03-31
This book explores the synergy between very large-scale integration (VLSI) and machine learning (ML) and its applications across various domains. It investigates how ML techniques can enhance the design and testing of VLSI circuits, improve power efficiency, optimize layouts, and enable novel architectures. This book bridges the gap between VLSI and ML, showcasing the potential of this integration in creating innovative electronic systems, advancing computing capabilities, and paving the way for a new era of intelligent devices and technologies. Additionally, it covers how VLSI technologies can accelerate ML algorithms, enabling more efficient and powerful data processing and inference engines. It explores both hardware and software aspects, covering topics like hardware accelerators, custom hardware for specific ML tasks, and ML-driven optimization techniques for chip design and testing. This book will be helpful for academicians, researchers, postgraduate students, and those working in ML-driven VLSI.