Li Shang
Professor
Fudan University
Educational Background:
- 1999-2004: Ph.D. in Computer Engineering, Princeton University
- 1997-1999: M.S. in Microelectronics, Tsinghua University
- 1992-1997: B.S. in Electrical Engineering, Tsinghua University
Work Experience:
- 2020-Present: Fudan University, College of Computer Science and Artificial Intelligence, Professor
- 2010-2012: Intel Labs China, Deputy Director/Chief Architect
- 2008-2019: University of Colorado Boulder, Electrical, Computer and Energy Engineering, Assistant/Associate Professor
Li Shang, Ph.D., is a Professor and Doctoral Supervisor at the College of Computer Science and Artificial Intelligence, Fudan University. He received his Ph.D. in Computer Engineering from Princeton University. His research focuses on machine learning systems, embodied intelligence, VLSI, and EDA. He has led multiple key projects funded by the National Natural Science Foundation of China (NSFC) and the Ministry of Science and Technology (MOST), and has published over 200 papers in top journals and conferences, earning multiple best paper awards and nominations and over 10,000 citations. Collaboration in machine learning systems, embodied intelligence, VLSI, and EDA is welcome, as are applications from outstanding students.
Z. Xu, T. Lu, Y. Zhao, Y. Wang, M. Dong, Y. Chang, Q. Lv, R. P. Dick, F. Yang, T. Lu, N. Gu, L. Shang*, “ActiveEye: Enabling continuous and responsive video understanding for smart eyewear systems,” Proc. ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, Vol. 9, No. 4, Dec. 2025.
J. Zhou, F. Dong, R. Huang, H. Cao, M. Chen, Y. Yang, A. Chen, M. Dong, Y. Wang, D. Li, D. A. Clifton, Q. Lv, R. Zhu, C. Zhang, F. Yang, T. Lu, N. Gu, L. Shang*, “Oracle-MoE: Locality-preserving routing in the oracle space for memory-constrained large language model inference,” in Proc. the 42nd International Conference on Machine Learning, Jul. 2025.
Y. Liu, H. Zhou, J. Wang, F. Yang, X. Zeng, L. Shang*, “Graph Signal Processing-Based Initialization for Chip Placement Acceleration,” IEEE Trans. Computer-Aided Design of Integrated Circuits and Systems, Vol. 44, No. 10, pp. 3924-3937, Oct. 2025.
F. Dong, M. Chen, J. Zhou, Y. Shi, Y. Chen, M. Dong, Y. Wang, D. Li, X. Yang, R. Zhou, R. Dick, Q. Lv, F. Yang, T. Lu, N. Gu, L. Shang*, “Once Read is Enough: Domain-specific Pretraining-free Language Models with Cluster-guided Sparse Experts for Long-tail Domain Knowledge,” in Proc. the 38th Conference on Neural Information Processing Systems, Dec. 2024.
Z. Xu, H. Xu, Z. Lu, Y. Zhao, R. Zhu, Y. Wang, M. Dong, Y. Chang, Q. Lv, R. P. Dick, F. Yang, T. Lu, N. Gu, L. Shang*, “Can large language models be good companions?: An LLM-based eyewear system with conversational common ground,” Proc. ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, Vol. 8, No. 2, pp. 1-41, Aug. 2024.