Weiyang Liu
Assistant Professor
The Chinese University of Hong Kong
Educational Background (in reverse chronological order):
- 2020-2024: University of Cambridge, PhD in Machine Learning
- 2016-2020: Georgia Institute of Technology, PhD in Computer Science
Work Experience (in reverse chronological order):
- 2024-2025: Max Planck Institute for Intelligent Systems, Postdoctoral Researcher
Interdisciplinary Research Fields: AI for Science, AI for Creativity
Weiyang Liu is an Assistant Professor in the Department of Computer Science and Engineering at The Chinese University of Hong Kong. He received a Ph.D. in Machine Learning from the University of Cambridge (UK) and a Ph.D. in Computer Science from the Georgia Institute of Technology (USA), and previously worked as a postdoctoral researcher at the Max Planck Institute for Intelligent Systems (Germany). His research interests include large-scale machine learning, foundational algorithms for large models, and formal and symbolic reasoning. He has published more than 70 papers in leading journals and conferences, including Nature Machine Intelligence, PNAS, IEEE TPAMI, NeurIPS, ICML, ICLR, AISTATS, UAI, CVPR, ICCV, and ECCV. He welcomes research collaborations in machine learning and artificial intelligence and invites outstanding students to apply to his group.
Selected Publications (in reverse chronological order):
[1] Zeju Qiu, Simon Buchholz, Tim Z. Xiao, Maximilian Dax, Bernhard Schölkopf, Weiyang Liu*, Reparameterized LLM Training via Orthogonal Equivalence Transformation, NeurIPS, 2025
[2] Tim Z. Xiao, Robert Bamler, Bernhard Schölkopf, Weiyang Liu*, Verbalized Machine Learning: Revisiting Machine Learning with Language Models, TMLR, 2025
[3] Zeju Qiu*, Weiyang Liu*, Haiwen Feng*, Zhen Liu**, Tim Z. Xiao**, Katherine M. Collins**, Joshua B. Tenenbaum, Adrian Weller, Michael J. Black, Bernhard Schölkopf, Can Large Language Models Understand Symbolic Graphics Programs?, ICLR, 2025
[4] Weiyang Liu*, Zeju Qiu*, Yao Feng**, Yuliang Xiu**, Yuxuan Xue**, Longhui Yu**, Haiwen Feng, Zhen Liu, Juyeon Heo, Songyou Peng, Yandong Wen, Michael J. Black, Adrian Weller, Bernhard Schölkopf, Parameter-Efficient Orthogonal Finetuning via Butterfly Factorization, ICLR, 2024
[5] Longhui Yu, Weisen Jiang, Han Shi, Jincheng Yu, Zhengying Liu, Yu Zhang, James Kwok, Zhenguo Li, Adrian Weller, Weiyang Liu*, MetaMath: Bootstrap Your Own Mathematical Questions for Large Language Models, ICLR, 2024
[6] Zeju Qiu*, Weiyang Liu*, Haiwen Feng, Yuxuan Xue, Yao Feng, Zhen Liu, Dan Zhang, Adrian Weller, Bernhard Schölkopf, Controlling Text-to-Image Diffusion by Orthogonal Finetuning, NeurIPS, 2023
[7] Weiyang Liu*, Longhui Yu*, Adrian Weller, Bernhard Schölkopf, Generalizing and Decoupling Neural Collapse via Hyperspherical Uniformity Gap, ICLR, 2023
[8] Zhen Liu, Yao Feng, Michael J. Black, Derek Nowrouzezahrai, Liam Paull, Weiyang Liu*, MeshDiffusion: Score-based Generative 3D Mesh Modeling, ICLR, 2023
[9] Weiyang Liu*, Zhen Liu*, Liam Paull, Adrian Weller, Bernhard Schölkopf, Structural Causal 3D Reconstruction, ECCV, 2022
[10] Weiyang Liu*, Yandong Wen*, Bhiksha Raj, Rita Singh, Adrian Weller, SphereFace Revived: Unifying Hyperspherical Face Recognition, TPAMI, 2022
[11] Yandong Wen*, Weiyang Liu*, Adrian Weller, Bhiksha Raj, Rita Singh, SphereFace2: Binary Classification is All You Need for Deep Face Recognition, ICLR, 2022
[12] Weiyang Liu*, Zhen Liu*, Hanchen Wang*, Liam Paull, Bernhard Schölkopf, Adrian Weller, Iterative Teaching by Label Synthesis, NeurIPS, 2021
[13] Weiyang Liu, Rongmei Lin, Zhen Liu, Li Xiong, Bernhard Schölkopf, Adrian Weller, Learning with Hyperspherical Uniformity, AISTATS, 2021
[14] Weiyang Liu*, Rongmei Lin*, Zhen Liu, James Rehg, Liam Paull, Li Xiong, Le Song, Adrian Weller, Orthogonal Over-Parameterized Training, CVPR, 2021