What is CIMFlow?
Understanding CIMFlow's architecture and design philosophy
Compute-in-Memory (CIM) is a computing paradigm where data processing occurs directly within memory arrays, eliminating the energy-intensive data movement between separate memory and processing units. SRAM-based CIM accelerators have emerged as promising solutions for neural network inference, offering significant improvements in energy efficiency and throughput compared to conventional architectures.
CIMFlow addresses the challenge of evaluating CIM accelerator designs before hardware implementation. The framework provides an integrated toolchain that compiles neural network models to CIM-specific instructions and simulates their execution with cycle-accurate timing.
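The end-to-end flow is easiest to picture as compile-then-simulate. The sketch below is a minimal, self-contained illustration of that flow in Python; it is not CIMFlow's actual API. The Instruction class, the latency formulas, and the layer shapes are all assumptions made for the example.

from dataclasses import dataclass

@dataclass
class Instruction:
    opcode: str   # e.g. "LOAD", "MVM", "STORE"
    cycles: int   # latency charged by the timing model

def compile_model(layer_shapes: list[tuple[int, int]]) -> list[Instruction]:
    """Lower each (in_features, out_features) layer to a toy instruction stream."""
    program: list[Instruction] = []
    for in_f, out_f in layer_shapes:
        program.append(Instruction("LOAD", cycles=in_f // 8 + 1))    # stage inputs
        program.append(Instruction("MVM", cycles=out_f // 4 + 1))    # in-array matrix-vector multiply
        program.append(Instruction("STORE", cycles=out_f // 8 + 1))  # write results back
    return program

def simulate(program: list[Instruction]) -> int:
    """Sum per-instruction latencies into a total cycle count."""
    return sum(inst.cycles for inst in program)

# A two-layer fully connected workload: 784 -> 128 -> 10.
program = compile_model([(784, 128), (128, 10)])
print(f"{len(program)} instructions, {simulate(program)} cycles")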
Framework Overview
Two-Level Compiler
The compiler separates workload partitioning from code generation, enabling optimization at both the graph and operator levels.
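As a rough illustration of the two levels, the sketch below first assigns layers of a small graph to cores (graph level) and then tiles each layer onto fixed-size CIM arrays, emitting pseudo-instructions (operator level). The round-robin placement, the 128x128 array size, and the instruction strings are assumptions for the example, not CIMFlow's actual passes.

from dataclasses import dataclass

@dataclass
class Layer:
    name: str
    rows: int   # weight matrix rows (input features)
    cols: int   # weight matrix columns (output features)

def partition_graph(layers: list[Layer], num_cores: int) -> dict[int, list[Layer]]:
    """Graph level: assign layers to cores round-robin (a stand-in for a real partitioner)."""
    placement: dict[int, list[Layer]] = {c: [] for c in range(num_cores)}
    for i, layer in enumerate(layers):
        placement[i % num_cores].append(layer)
    return placement

def lower_operator(layer: Layer, array_size: int = 128) -> list[str]:
    """Operator level: tile one layer onto fixed-size CIM arrays and emit pseudo-instructions."""
    row_tiles = -(-layer.rows // array_size)   # ceiling division
    col_tiles = -(-layer.cols // array_size)
    return [f"MVM {layer.name} tile({r},{c})"
            for r in range(row_tiles) for c in range(col_tiles)]

layers = [Layer("fc1", 784, 256), Layer("fc2", 256, 10)]
for core, assigned in partition_graph(layers, num_cores=2).items():
    for layer in assigned:
        print(f"core {core}: {lower_operator(layer)}")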
Instruction Set Architecture
CIMFlow defines a flexible 32-bit ISA with hierarchical hardware abstraction spanning chip, core, and unit levels.
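The exact field layout of the ISA is not reproduced here; the sketch below shows one hypothetical way a 32-bit word could encode an opcode together with chip, core, and unit identifiers. The bit widths and field names are assumptions chosen for illustration.

def encode(opcode: int, chip: int, core: int, unit: int, imm: int) -> int:
    """Pack a hypothetical 32-bit word: [31:26] opcode, [25:22] chip,
    [21:16] core, [15:12] unit, [11:0] immediate."""
    assert opcode < (1 << 6) and chip < (1 << 4) and core < (1 << 6)
    assert unit < (1 << 4) and imm < (1 << 12)
    return (opcode << 26) | (chip << 22) | (core << 16) | (unit << 12) | imm

def decode(word: int) -> dict[str, int]:
    """Unpack the same fields from a 32-bit word."""
    return {
        "opcode": (word >> 26) & 0x3F,
        "chip":   (word >> 22) & 0xF,
        "core":   (word >> 16) & 0x3F,
        "unit":   (word >> 12) & 0xF,
        "imm":    word & 0xFFF,
    }

word = encode(opcode=0x05, chip=0, core=3, unit=1, imm=0x040)
print(hex(word), decode(word))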
Cycle-Accurate Simulator
The simulator provides cycle-accurate performance analysis through detailed modeling of the digital CIM architecture.
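A cycle-accurate simulator advances a global clock one cycle at a time and retires an instruction only after its modeled latency has elapsed. The toy loop below illustrates that idea with per-unit instruction queues and fixed latencies; the queue structure and the 16-cycle MVM latency are assumptions, not CIMFlow's timing model.

from collections import deque

def simulate(queues: dict[str, deque[int]]) -> int:
    """Advance the clock one cycle at a time; each unit works on the head of its
    queue and retires it when its remaining latency reaches zero."""
    remaining = {u: 0 for u in queues}   # cycles left on each unit's in-flight instruction
    cycle = 0
    while any(queues.values()) or any(remaining.values()):
        for unit, queue in queues.items():
            if remaining[unit] == 0 and queue:
                remaining[unit] = queue.popleft()   # issue the next instruction
            if remaining[unit] > 0:
                remaining[unit] -= 1                # one cycle of execution
        cycle += 1
    return cycle

# Two CIM units, each with a stream of fixed-latency MVM instructions.
program = {"unit0": deque([16, 16, 16]), "unit1": deque([16, 16])}
print("total cycles:", simulate(program))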
Use Cases
Citation
CIMFlow was presented at the 62nd ACM/IEEE Design Automation Conference (DAC 2025). If you use CIMFlow in your research, please cite:
@inproceedings{qi2025cimflow,
  title     = {CIMFlow: An Integrated Framework for Systematic Design and Evaluation of Digital CIM Architectures},
  author    = {Qi, Yingjie and Yang, Jianlei and Wang, Yiou and Wang, Yikun and Wang, Dayu and Tang, Ling and Duan, Cenlin and He, Xiaolin and Zhao, Weisheng},
  booktitle = {2025 62nd ACM/IEEE Design Automation Conference (DAC)},
  pages     = {1--7},
  year      = {2025},
  doi       = {10.1109/DAC63849.2025.11133270}
}
The full paper is available on arXiv.