Chaojian Li
I am a final-year Ph.D. student at Georgia Tech, advised by Prof. Yingyan (Celine) Lin.
My research interests lie at the intersection of deep learning and computer architecture, with a focus on algorithm-hardware co-design for 3D reconstruction and rendering, and on efficient deep learning for edge devices.
I am currently on the job market for tenure-track faculty positions and would appreciate any information about potential opportunities!
Before coming to Georgia Tech, I received my B.Eng. from the Department of Precision Instrument at Tsinghua University.
I was a part-time Research Engineer Intern on the Mobile Vision team at Meta from 2021 to 2022, mentored by Dr. Bichen Wu and Dr. Peizhao Zhang.
Email / CV / Google Scholar / LinkedIn / GitHub
MixRT: Mixed Neural Representations For Real-Time NeRF Rendering
Chaojian Li,
Bichen Wu,
Peter Vajda,
Yingyan (Celine) Lin
3DV, 2024
Project Page
Mixes a low-quality mesh, a view-dependent displacement map, and a compressed NeRF model to achieve real-time rendering speeds on edge devices.
Instant-NeRF: Instant On-Device Neural Radiance Field Training via Algorithm-Accelerator Co-Designed Near-Memory Processing
Yang (Katie) Zhao,
Shang Wu,
Jingqun Zhang,
Sixu Li,
Chaojian Li,
Yingyan (Celine) Lin
DAC, 2023
Paper
Trains NeRFs on a near-memory-processing hardware architecture.
Instant-3D: Instant Neural Radiance Field Training Towards On-Device AR/VR 3D Reconstruction
Sixu Li,
Chaojian Li,
Wenbo Zhu,
Boyang (Tony) Yu,
Yang (Katie) Zhao,
Cheng Wan,
Haoran You,
Yingyan (Celine) Lin
ISCA, 2023
Paper
The first algorithm-hardware co-design acceleration framework that achieves instant on-device NeRF training.
An Investigation on Hardware-Aware Vision Transformer Scaling
Chaojian Li,
Kyungmin Kim,
Bichen Wu,
Peizhao Zhang,
Hang Zhang,
Xiaoliang Dai,
Peter Vajda,
Yingyan (Celine) Lin
ACM Transactions on Embedded Computing Systems, 2023
Paper
By simply scaling a ViT's depth, width, input size, and other basic configurations, we show that a scaled vanilla ViT without bells and whistles can achieve accuracy-efficiency trade-offs comparable or superior to most of the latest ViT variants.
RT-NeRF: Real-Time On-Device Neural Radiance Fields Towards Immersive AR/VR Rendering
Chaojian Li,
Sixu Li,
Yang (Katie) Zhao,
Wenbo Zhu,
Yingyan (Celine) Lin
ICCAD, 2022
Paper /
Project Page
The first algorithm-hardware co-design acceleration framework for NeRF rendering.
DANCE: DAta-Network Co-optimization for Efficient Segmentation Model Training and Inference
Chaojian Li,
Wuyang Chen,
Yuchen Gu,
Tianlong Chen,
Yonggan Fu,
Zhangyang (Atlas) Wang,
Yingyan (Celine) Lin
ACM Transactions on Design Automation of Electronic Systems, 2022
Paper
A framework for boosting semantic segmentation efficiency during both training and inference, leveraging the hypothesis that maximum model accuracy and efficiency should be achieved when the data and model are optimally matched.
HW-NAS-Bench: Hardware-Aware Neural Architecture Search Benchmark
Chaojian Li,
Zhongzhi Yu,
Yonggan Fu,
Yongan Zhang,
Yang (Katie) Zhao,
Haoran You,
Qixuan Yu,
Yue Wang,
Yingyan (Celine) Lin
ICLR, 2021
Paper /
Code
The first public dataset for hardware-aware NAS (HW-NAS) research, aiming to (1) democratize HW-NAS research for non-hardware experts and (2) provide a unified benchmark that makes HW-NAS research more reproducible and accessible.
HALO: Hardware-Aware Learning to Optimize
Chaojian Li,
Tianlong Chen,
Haoran You,
Zhangyang (Atlas) Wang,
Yingyan (Celine) Lin
ECCV, 2020
Paper /
Code
A practical meta optimizer dedicated to resource-efficient on-device adaptation.