Shanghai Innovation Center for Processor Technologies
Data is all you need: Finetuning LLMs for Chip Design via an Automated design-data augmentation framework

An automated design-data augmentation framework enables finetuning of open-source Large Language Models (LLMs) like Llama2 for chip design tasks, achieving state-of-the-art performance in Verilog generation, Verilog repair, and EDA script generation. The framework effectively addresses data scarcity by systematically creating high-quality, aligned datasets for these specialized tasks.
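Below is a minimal sketch of what such design-data augmentation can look like, assuming a pipeline that derives aligned (instruction, output) pairs from one verified Verilog module: a generation task from a short spec, and a repair task created by injecting a single known bug. The rules and field names are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch (not the paper's pipeline): derive aligned
# (instruction, output) SFT pairs from one verified Verilog module.
GOLD = """module and2(input a, input b, output y);
  assign y = a & b;
endmodule"""

def generation_sample(src: str, spec: str) -> dict:
    return {"instruction": f"Write a Verilog module that {spec}.",
            "output": src}

def repair_sample(src: str) -> dict:
    buggy = src.replace("a & b", "a | b", 1)   # inject one known bug
    return {"instruction": "Fix the bug in this Verilog module:\n" + buggy,
            "output": src}

dataset = [generation_sample(GOLD, "computes the AND of two inputs"),
           repair_sample(GOLD)]
print(len(dataset), dataset[1]["instruction"].splitlines()[0])
```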

View blog
Resources
RealBench: Benchmarking Verilog Generation Models with Real-World IP Designs

Researchers introduce RealBench, a benchmark for evaluating Verilog generation models that uses real-world IP designs, multi-modal specifications, and rigorous formal verification. Evaluations demonstrate that current state-of-the-art large language models exhibit low accuracy on complex hardware design tasks, particularly for hierarchical designs and submodule instantiations.

View blog
Resources
Cambricon-LLM: A Chiplet-Based Hybrid Architecture for On-Device Inference of 70B LLM

The Cambricon-LLM architecture enables efficient on-device inference of large language models up to 70 billion parameters by integrating a Neural Processing Unit with custom flash memory featuring on-die computation capabilities. This approach achieves an inference speed of 3.44 tokens/s for a 70B parameter model, demonstrating a 22x speedup over the best existing solutions.

View blog
Resources
InputSnatch: Stealing Input in LLM Services via Timing Side-Channel Attacks
Large language models (LLMs) possess extensive knowledge and question-answering capabilities and have been widely deployed in privacy-sensitive domains like finance and medical consultation. During LLM inference, cache-sharing methods are commonly employed to enhance efficiency by reusing cached states or responses for identical or similar requests. However, we identify that these cache mechanisms risk leaking private inputs: caching produces observable variations in response time, making it a strong hint for a timing-based attack. In this study, we propose a novel timing-based side-channel attack that steals user inputs during LLM inference. The cache-based attack faces the challenge of constructing candidate inputs in a large search space in order to hit and steal cached user queries. To address this challenge, we propose two primary components. The input constructor employs machine-learning and LLM-based approaches for vocabulary correlation learning, together with optimized search mechanisms for generalized input construction. The time analyzer applies statistical time fitting with outlier elimination to identify cache-hit patterns, continuously providing feedback to refine the constructor's search strategy. We conduct experiments across two cache mechanisms, and the results demonstrate that our approach consistently attains high attack success rates in various applications. Our work highlights the security vulnerabilities associated with performance optimizations, underscoring the necessity of prioritizing privacy and security alongside enhancements in LLM inference.
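A minimal sketch of the time-analyzer idea: gather response latencies for a candidate input, discard outliers, and compare the trimmed mean against a calibrated cache-hit threshold. The function names, the z-score rule, and the threshold are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of the "time analyzer" idea: trim latency outliers,
# then threshold the trimmed mean against a calibrated cache-hit cutoff.
import statistics

def trimmed_latencies(samples, z=1.5):
    """Drop samples more than z population-stdevs from the mean."""
    mu = statistics.mean(samples)
    sigma = statistics.pstdev(samples) or 1e-9
    return [s for s in samples if abs(s - mu) / sigma <= z]

def looks_like_cache_hit(samples, hit_threshold_ms):
    """Flag a candidate query as a cache hit when its trimmed mean latency
    falls below a threshold calibrated on known-hit / known-miss probes."""
    kept = trimmed_latencies(samples) or samples
    return statistics.mean(kept) < hit_threshold_ms

# Probe latencies (ms) for one candidate input; 120.3 is a network outlier.
probe = [41.2, 39.8, 40.5, 120.3, 38.9]
print(looks_like_cache_hit(probe, hit_threshold_ms=60.0))  # True
```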
View blog
Resources
MigGPT: Harnessing Large Language Models for Automated Migration of Out-of-Tree Linux Kernel Patches Across Versions
Out-of-tree kernel patches are essential for adapting the Linux kernel to new hardware or enabling specific functionalities. Maintaining and updating these patches across different kernel versions demands significant effort from experienced engineers. Large language models (LLMs) have shown remarkable progress across various domains, suggesting their potential for automating out-of-tree kernel patch migration. However, our findings reveal that LLMs, while promising, struggle with incomplete code-context understanding and inaccurate migration-point identification. In this work, we propose MigGPT, a framework that employs a novel code fingerprint structure to retain code snippet information and incorporates three meticulously designed modules to improve the migration accuracy and efficiency of out-of-tree kernel patches. Furthermore, we establish a robust benchmark using real-world out-of-tree kernel patch projects to evaluate LLM capabilities. Evaluations show that MigGPT significantly outperforms the direct application of vanilla LLMs, achieving an average completion rate of 74.07% for migration tasks.
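As a rough illustration of the code-fingerprint idea, the sketch below computes a whitespace-insensitive signature for a patched snippet and slides it over a newer source file to relocate the migration point. MigGPT's actual fingerprint structure is richer; all names, the hashing, and the matching rule here are assumptions.

```python
# Hypothetical sketch: a structural signature that survives whitespace
# drift between kernel versions, used to re-anchor a patch snippet.
import hashlib
import re

def normalize(line: str) -> str:
    """Collapse whitespace so cosmetic edits don't change the fingerprint."""
    return re.sub(r"\s+", " ", line.strip())

def fingerprint(snippet: list[str]) -> str:
    joined = "\n".join(normalize(l) for l in snippet)
    return hashlib.sha1(joined.encode()).hexdigest()

def locate(snippet: list[str], new_file: list[str]):
    """Slide the fingerprint over the new file to find the migration point."""
    target, k = fingerprint(snippet), len(snippet)
    for i in range(len(new_file) - k + 1):
        if fingerprint(new_file[i:i + k]) == target:
            return i        # anchor index in the newer kernel source
    return None             # fall back to fuzzy / LLM-assisted matching

old = ["if (ret)", "\tgoto err_free;"]
new = ["/* refactored */", "if  (ret)", "    goto err_free;", "return 0;"]
print(locate(old, new))     # 1 -- found despite whitespace drift
```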
View blog
Resources
Luban: Building Open-Ended Creative Agents via Autonomous Embodied Verification
Building open-ended agents has always been an ultimate goal in AI research, and creative agents are among the most enticing. Existing LLM agents excel at long-horizon tasks with well-defined goals (e.g., 'mine diamonds' in Minecraft). However, they encounter difficulties on creative tasks with open goals and abstract criteria, because they cannot bridge the gap between the two and thus lack feedback for self-improvement. In this work, we introduce autonomous embodied verification techniques for agents to fill this gap, laying the groundwork for creative tasks. Specifically, we propose Luban, an agent targeting creative building tasks in Minecraft, equipped with two-level autonomous embodied verification inspired by human design practices: (1) visual verification of 3D structural speculates, which are synthesized by the agent as CAD modeling programs; (2) pragmatic verification of the creation, by generating and verifying environment-relevant functionality programs based on the abstract criteria. Extensive multi-dimensional human studies and Elo ratings show that Luban completes diverse creative building tasks in our proposed benchmark and outperforms other baselines (33% to 100%) in both visualization and pragmatism. Additional demos on a real-world robotic arm show the creation potential of Luban in the physical world.
View blog
Resources
QiMeng-CPU-v2: Automated Superscalar Processor Design by Learning Data Dependencies

A machine learning-based approach enables automated design of superscalar RISC-V processors through a novel State-BSD model for learning data dependencies, resulting in QiMeng-CPU-v2 which achieves 380x performance improvement over previous automated designs and successfully runs Linux and SPEC benchmarks on FPGA implementation.

View blog
Resources
An Empirical Study of Data Ability Boundary in LLMs' Math Reasoning
Large language models (LLMs) are displaying emergent abilities on math reasoning tasks, and there is growing attention on enhancing the ability of open-source LLMs through supervised fine-tuning (SFT). In this paper, we aim to explore a general data strategy for supervised data to help optimize and expand math reasoning ability. First, we determine the ability boundary of reasoning-path augmentation by identifying the paths' minimal optimal set. Second, we validate that different abilities of the model can be cumulatively enhanced by a Mix of Minimal Optimal Sets of the corresponding types of data, and our resulting MMOS models achieve SOTA performance on a series of base models at much lower construction cost. Besides, we point out that GSM-HARD is not really hard and that today's LLMs no longer lack numerical robustness. We also provide an Auto Problem Generator for robustness testing and educational applications. Our code and data are publicly available at https://github.com/cyzhh/MMOS.
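A hedged sketch of constructing a minimal optimal set: per problem, keep only correct reasoning paths and deduplicate near-identical ones up to a small cap. The dedup key, the cap, and the `is_correct` hook are illustrative assumptions, not the paper's construction.

```python
# Illustrative only: keep a small set of distinct correct reasoning paths
# per problem, the intuition behind a "minimal optimal set" for SFT.
def minimal_optimal_set(paths, is_correct, keep=4):
    """paths: iterable of (problem_id, solution_text) augmented samples."""
    by_problem = {}
    for pid, sol in paths:
        if not is_correct(pid, sol):
            continue                      # only correct paths qualify
        kept = by_problem.setdefault(pid, [])
        canon = " ".join(sol.split())     # crude near-duplicate key
        if canon not in {" ".join(s.split()) for s in kept} and len(kept) < keep:
            kept.append(sol)
    return by_problem

paths = [("p1", "x = 2+3\nprint(x)"),
         ("p1", "x  =  2+3\nprint(x)"),   # whitespace variant: dropped
         ("p1", "print(2 + 3)")]
sets_ = minimal_optimal_set(paths, is_correct=lambda pid, s: True, keep=2)
print(len(sets_["p1"]))                   # 2 distinct paths kept
```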
View blog
Resources
GDR-HGNN: A Heterogeneous Graph Neural Networks Accelerator Frontend with Graph Decoupling and Recoupling
Heterogeneous Graph Neural Networks (HGNNs) have broadened the applicability of graph representation learning to heterogeneous graphs. However, the irregular memory access pattern of HGNNs leads to the buffer-thrashing issue in HGNN accelerators. In this work, we identify an opportunity to address buffer thrashing in HGNN acceleration through an analysis of the topology of heterogeneous graphs. To harvest this opportunity, we propose a graph restructuring method and map it into a hardware frontend named GDR-HGNN. GDR-HGNN dynamically restructures the graph on the fly to enhance data locality for HGNN accelerators. Experimental results demonstrate that, with the assistance of GDR-HGNN, a leading HGNN accelerator achieves average speedups of 14.6 times over the state-of-the-art software framework running on an A100 GPU and 1.78 times over the accelerator by itself.
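The sketch below conveys the general flavor of locality-oriented graph restructuring with a simple BFS reordering, so that vertices visited together are scheduled together and their features tend to stay buffer-resident. This is not GDR-HGNN's actual algorithm, only a toy stand-in for the idea.

```python
# Toy stand-in for locality-oriented restructuring: BFS ordering keeps
# connected vertices adjacent in the processing schedule.
from collections import deque

def locality_order(adj):
    """Return a vertex schedule where neighbors are processed close
    together, so their features are more likely still in the buffer."""
    order, visited = [], set()
    for start in adj:
        if start in visited:
            continue
        queue = deque([start])
        visited.add(start)
        while queue:
            v = queue.popleft()
            order.append(v)
            for u in sorted(adj[v]):
                if u not in visited:
                    visited.add(u)
                    queue.append(u)
    return order

adj = {0: [2, 3], 1: [3], 2: [0], 3: [0, 1]}
print(locality_order(adj))  # [0, 2, 3, 1]
```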
View blog
Resources
AGON: Automated Design Framework for Customizing Processors from ISA Documents
Customized processors are attractive solutions for vast domain-specific applications due to their high energy efficiency. However, designing a processor in traditional flows is time-consuming and expensive. To address this, researchers have explored methods including the use of agile development tools like Chisel or SpinalHDL, high-level synthesis (HLS) from programming languages like C or SystemC, and, more recently, leveraging large language models (LLMs) to generate hardware description language (HDL) code from natural language descriptions. However, each method has limitations in terms of expressiveness, correctness, and performance, leading to a persistent contradiction between the level of automation and the effectiveness of the design. Overall, how to automatically design highly efficient and practical processors with minimal human effort remains a challenge. In this paper, we propose AGON, a novel framework designed to leverage LLMs for the efficient design of out-of-order (OoO) customized processors with minimal human effort. Central to AGON is the nano-operator function (nOP function) based Intermediate Representation (IR), which bridges high-level descriptions and hardware implementations while decoupling functionality from performance optimization, thereby providing an automatic design framework that is expressive and efficient, has correctness guarantees, and enables PPA (Power, Performance, and Area) optimization. Experimental results show that, superior to previous LLM-assisted automatic design flows, AGON facilitates the design of a series of customized OoO processors that achieve on average a 2.35× speedup compared with BOOM, a general-purpose CPU designed by experts, with minimal design effort.
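As a loose illustration of decoupling functionality from performance optimization in an IR, the sketch below gives each node an executable semantic definition (checkable against golden cases) plus a separate implementation hint. AGON's nOP-function IR is far richer; every name here is hypothetical.

```python
# Hypothetical sketch: IR nodes record *what* is computed (an executable,
# checkable semantics) separately from *how* it might be implemented.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class NOpNode:
    name: str
    semantics: Callable[..., int]        # functional definition
    inputs: list[str] = field(default_factory=list)
    impl_hint: str = "default"           # performance knob, not semantics

def check(node: NOpNode, cases) -> bool:
    """Validate the node's semantics against golden input/output pairs."""
    return all(node.semantics(*ins) == out for ins, out in cases)

mac = NOpNode("mac", lambda a, b, c: a * b + c, ["a", "b", "c"], "dsp48")
print(check(mac, [((2, 3, 4), 10), ((0, 5, 7), 7)]))  # True
```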
View blog
Resources
FicGCN: Unveiling the Homomorphic Encryption Efficiency from Irregular Graph Convolutional Networks

Researchers from ICT, Chinese Academy of Sciences develop FicGCN, a homomorphic-encryption framework for privacy-preserving Graph Convolutional Network (GCN) inference. FicGCN resolves the fundamental conflict between GCNs' irregular sparsity and the SIMD computation model of CKKS homomorphic encryption through a novel Sparse Intra-Ciphertext Aggregation (SpIntra-CA) algorithm that reduces expensive rotation operations by 66.4% on the Cora dataset, combined with latency-aware packing schemes and a Node Order Optimization that leverages cyclic-shift properties to aggregate neighbors within single ciphertexts rather than generating redundant encrypted copies. As a result, FicGCN achieves a 4.10x speedup over CryptoGCN on large graphs (Corafull, with 19,793 nodes) and up to a 34.2x improvement over Gazelle, addressing the bottleneck that has made cloud-based GCN inference impractical for privacy-sensitive applications such as healthcare and finance.
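A plaintext toy model of the packing idea in the summary: features packed into one SIMD "ciphertext" vector can aggregate a neighbor at a fixed slot offset with a single cyclic rotation plus an addition, rather than encrypting redundant copies. Real CKKS operations are far more involved; this sketch only counts rotations, and the offset list is an assumption.

```python
# Plaintext stand-in for SIMD packing: one rotation + one add per slot
# offset aggregates neighbors inside a single packed vector.
def rotate(vec, k):
    """Cyclic left shift, standing in for a CKKS rotation."""
    k %= len(vec)
    return vec[k:] + vec[:k]

def aggregate_with_offsets(packed, offsets):
    """Sum each slot with the slots at the given cyclic offsets."""
    acc, rotations = packed[:], 0
    for off in offsets:
        shifted = rotate(packed, off)
        acc = [a + s for a, s in zip(acc, shifted)]
        rotations += 1
    return acc, rotations

packed = [1.0, 2.0, 3.0, 4.0]
out, nrot = aggregate_with_offsets(packed, offsets=[1, 2])
print(out, nrot)  # [6.0, 9.0, 8.0, 7.0] 2 -- two rotations for two offsets
```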

View blog
Resources
Brain-Inspired Two-Stage Approach: Enhancing Mathematical Reasoning by Imitating Human Thought Processes
Although large language models demonstrate emergent abilities in solving math word problems, they still struggle with complex multi-step mathematical reasoning. To improve model performance on mathematical reasoning tasks, previous work has conducted supervised fine-tuning on open-source models by improving the quality and quantity of data. In this paper, we propose a novel approach, named Brain, that imitates human thought processes to enhance mathematical reasoning abilities, using a Frontal Lobe Model to generate plans and a Parietal Lobe Model to generate and execute code to obtain answers. First, we achieve SOTA performance among Code LLaMA 7B based models with this method. Second, we find that plans can be explicitly extracted from natural language, code, or formal language. Our code and data are publicly available at this https URL.
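A minimal sketch of the two-stage pipeline, with both models stubbed: a "frontal lobe" call yields a natural-language plan, a "parietal lobe" call turns the plan into Python, and executing that code yields the answer. The model interfaces are hypothetical stand-ins, not the paper's implementation.

```python
# Two-stage plan-then-code pipeline with stubbed model calls.
def frontal_lobe_plan(problem: str) -> str:
    return "1) compute total apples  2) subtract eaten apples"  # stub

def parietal_lobe_code(problem: str, plan: str) -> str:
    return "answer = 5 + 3 - 2"                                 # stub

def solve(problem: str):
    plan = frontal_lobe_plan(problem)
    code = parietal_lobe_code(problem, plan)
    scope: dict = {}
    exec(code, scope)          # sandboxing omitted for brevity
    return scope["answer"]

print(solve("5 apples plus 3, then 2 are eaten."))  # 6
```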
View blog
Resources
HiHGNN: Accelerating HGNNs through Parallelism and Data Reusability Exploitation
Heterogeneous graph neural networks (HGNNs) have emerged as powerful algorithms for processing heterogeneous graphs (HetGs), widely used in many critical fields. To capture both structural and semantic information in HetGs, HGNNs first aggregate the neighboring feature vectors for each vertex in each semantic graph and then fuse the aggregated results across all semantic graphs for each vertex. Unfortunately, existing graph neural network accelerators are ill-suited to accelerating HGNNs, because they fail to efficiently tackle HGNNs' specific execution patterns and to exploit the high degree of parallelism as well as the data reusability inside and across the processing of semantic graphs. In this work, we first quantitatively characterize a set of representative HGNN models on GPU to disclose the execution bound of each stage, the inter-semantic-graph parallelism, and the inter-semantic-graph data reusability in HGNNs. Guided by our findings, we propose a high-performance HGNN accelerator, HiHGNN, to alleviate the execution bounds and exploit the newfound parallelism and data reusability in HGNNs. Specifically, we first propose a bound-aware stage-fusion methodology tailored to HGNN acceleration, which fuses and pipelines the execution stages in a manner aware of their execution bounds. Second, we design an independency-aware parallel execution scheme to exploit the inter-semantic-graph parallelism. Finally, we present a similarity-aware execution scheduling to exploit the inter-semantic-graph data reusability. Compared to the state-of-the-art software framework running on NVIDIA T4 and A100 GPUs, HiHGNN achieves average speedups of 41.5× and 8.6×, respectively, as well as 106× and 73× higher energy efficiency, with a quarter of the memory bandwidth of the A100 GPU.
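The toy model below names the two stages from the abstract: per-semantic-graph neighbor aggregation, then cross-semantic-graph fusion. The outer loop over semantic graphs is independent work, which is the inter-semantic-graph parallelism HiHGNN exploits in hardware; the code itself is purely illustrative.

```python
# Toy model of the two HGNN stages: aggregation per semantic graph, then
# fusion across semantic graphs. The per-graph loop is independent work.
def aggregate(features, adj):
    """Mean-aggregate neighbor features within one semantic graph."""
    out = {}
    for v, nbrs in adj.items():
        if nbrs:
            out[v] = [sum(features[u][i] for u in nbrs) / len(nbrs)
                      for i in range(len(features[v]))]
    return out

def fuse(results, v):
    """Element-wise mean across semantic graphs (simplified fusion stage)."""
    vecs = [r[v] for r in results if v in r]
    return [sum(col) / len(vecs) for col in zip(*vecs)]

feats = {0: [1.0, 0.0], 1: [0.0, 1.0], 2: [1.0, 1.0]}
semantic_graphs = [{0: [1], 1: [0], 2: []},
                   {0: [2], 1: [2], 2: [0, 1]}]
per_graph = [aggregate(feats, g) for g in semantic_graphs]  # parallelizable
print(fuse(per_graph, 0))  # fused embedding for vertex 0: [0.5, 1.0]
```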
View blog
Resources
Prompt-based Visual Alignment for Zero-shot Policy Transfer
Overfitting has become one of the main obstacles to applying reinforcement learning (RL). Existing methods do not provide an explicit semantic constraint for the feature extractor, hindering the agent from learning a unified cross-domain representation and degrading performance on unseen domains; they also require abundant data from multiple domains. To address these issues, in this work, we propose prompt-based visual alignment (PVA), a robust framework that mitigates the detrimental domain bias in images for zero-shot policy transfer. Inspired by the fact that a vision-language model (VLM) can serve as a bridge between text space and image space, we leverage the semantic information contained in a text sequence as an explicit constraint to train a visual aligner. The visual aligner can thus map images from multiple domains to a unified domain and achieve good generalization performance. To better capture semantic information, prompt tuning is applied to learn a sequence of learnable tokens. With explicit constraints from semantic information, PVA learns a unified cross-domain representation under limited access to cross-domain data and achieves strong zero-shot generalization on unseen domains. We verify PVA on a vision-based autonomous driving task with the CARLA simulator. Experiments show that the agent generalizes well to unseen domains under limited access to multi-domain data.
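A hedged sketch of the alignment objective: the visual aligner is trained to pull an image embedding toward the embedding of a shared (learned) prompt, so downstream policy features become domain-invariant. The vectors below are plain stand-ins for VLM outputs, and the cosine loss form is an assumption.

```python
# Stand-in for a prompt-alignment objective: minimize the cosine distance
# between an aligned image embedding and a shared prompt embedding.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb + 1e-9)

def alignment_loss(aligned_image_emb, prompt_emb):
    """Pull the aligned image embedding toward the domain-agnostic prompt."""
    return 1.0 - cosine(aligned_image_emb, prompt_emb)

prompt_emb = [1.0, 0.0]    # stand-in for a tuned-prompt embedding
well_aligned = [0.9, 0.1]  # aligner output for a source-domain image
print(round(alignment_loss(well_aligned, prompt_emb), 3))  # ~0.006
```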
View blog
Resources
Unlearnable Examples Give a False Sense of Data Privacy: Understanding and Relearning
Unlearnable examples have been proposed to prevent third parties from exploiting unauthorized data: imperceptible perturbations are added to publicly released data so that models trained on it learn the perturbation features while neglecting the semantic features of the images. In this paper, we conduct an in-depth analysis and observe that models can learn both image features and perturbation features of unlearnable examples at an early training stage, but are rapidly trapped in perturbation-feature learning, since the shallow layers tend to learn perturbation features and propagate harmful activations to deeper layers. Based on these observations, we propose Progressive Staged Training, a self-adaptive training framework specifically designed to break unlearnable examples by preventing models from becoming trapped in perturbation-feature learning. We evaluate our method on multiple model architectures over diverse datasets, e.g., CIFAR-10, CIFAR-100, and ImageNet-mini. Our method circumvents the unlearnability of all state-of-the-art methods in the literature, revealing that existing unlearnable examples give a false sense of privacy protection, and provides a reliable baseline for further evaluation of unlearnable techniques.
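A loose sketch of a self-adaptive staged schedule in the spirit described above: advance to the next training stage once a clean-probe accuracy stops improving, so training does not settle into perturbation features within a stage. Every specific here (stage count, patience, probe) is an invented assumption, not the paper's algorithm.

```python
# Generic staged-training skeleton: each stage runs until a held-out probe
# accuracy plateaus, then the schedule advances to the next stage.
def staged_train(train_one_epoch, probe_accuracy, n_stages=3, patience=2):
    history = []
    for stage in range(n_stages):
        stale = 0
        while stale < patience:
            train_one_epoch(stage)          # one epoch at this stage
            acc = probe_accuracy()
            stale = stale + 1 if history and acc <= max(history) else 0
            history.append(acc)
    return history

# Demo with stubs: accuracy plateaus, so each stage ends after `patience`
# non-improving epochs.
accs = iter([0.30, 0.42, 0.43, 0.43, 0.43, 0.50, 0.50, 0.50,
             0.55, 0.55, 0.55])
print(staged_train(lambda s: None, lambda: next(accs)))
```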
View blog
Resources