Yue Zhengjun

Assistant Professor

Education Background

1. Educational Background (in reverse chronological order):

  • 2018-2022: University of Sheffield, Computer Science, Doctoral Degree (Full-time)
  • 2017-2018: University of Edinburgh, Artificial Intelligence, Master's Degree (Full-time)
  • 2013-2017: Shanghai University, Telecommunication Engineering, Bachelor's Degree (Full-time)

2. Work Experience (in reverse chronological order):

  • 2026-present: Shenzhen Loop Area Institute, Center for Language, Intelligence and Machines, Assistant Professor
  • 2022-present: King’s College London, Faculty of Engineering, Associate Researcher
  • 2022-2025: Department of Intelligent Systems, Delft University of Technology (TU Delft), Assistant Professor
  • 2021-2022: Faculty of Engineering, King’s College London, Postdoctoral Researcher
Research Field

Inclusive speech processing & ASR; speech generation & transformation; AI-enabled media dubbing; on-device speech–language models; clinical speech AI.

Core Research Interests:

1. Inclusive speech processing & ASR: low-resource, pathological, child, and older-adult speech recognition and robustness
2. Speech generation & transformation: speech synthesis (TTS), voice conversion, and controllable speech generation
3. AI-enabled media dubbing: movie/short-video dubbing, cross-lingual voice transfer, and lip-sync/timing alignment
4. On-device speech–language models: efficient deployment of large speech and language models on edge devices
5. Clinical speech AI: multimodal speech biomarkers, cognitive impairment detection, and clinically interpretable modeling (XAI) using speech foundation models and LLMs for medical dialogue and assistive interaction

Interdisciplinary Research Fields: AI + digital health/rehabilitation medicine/speech-language pathology (SLP); AI + modern intelligent cinema; accessibility and assistive technologies with human–computer interaction (HCI); and inclusive data collection and evaluation methodologies for low-resource languages and underserved populations.
Personal Website
https://zhengjunyue.github.io//
Email
zhengjunyue@slai.edu.cn
Biography

Zhengjun Yue, PhD, is an Assistant Professor at the LIMA Center, Shenzhen Loop Area Institute (SLAI). She received her PhD from the University of Sheffield (Marie Skłodowska-Curie fellowship) and subsequently held an EPSRC-funded postdoctoral position at King’s College London. Afterwards, she worked as a tenured Assistant Professor at Delft University of Technology (TU Delft), the Netherlands. Her research focuses on inclusive speech technology for healthcare and wellbeing, including pathological, child, and older-adult speech processing and recognition; cognitive impairment detection; speech modeling for low-resource languages and underserved populations; assistive and accessible interactive technologies; multimodal biomarkers; and interpretable as well as generative AI empowered by large language models and speech foundation models. She is also interested in AI + modern intelligent cinema. She has led and contributed to EU H2020 and NWO grant applications, and has published 20+ papers in top venues such as IEEE/ACM TASLP, Computer Speech & Language, ICASSP, and Interspeech. Collaboration and applications from outstanding students are welcome.

Academic Publications

Z. Yue*, E. Loweimi, Z. Cvetkovic, J. Barker and H. Christensen (2025). Raw Acoustic-articulatory Multimodal Dysarthric Speech Recognition. Computer Speech & Language, Vol. 95, pp. 101839. DOI:10.1016/j.csl.2025.101839.
E. Loweimi, Z. Yue*, P. Bell, S. Renals, and Z. Cvetkovic (2023). Multi-stream Acoustic Modelling using Raw Real and Imaginary Parts of the Fourier Transform. IEEE/ACM Transactions on Audio, Speech and Language Processing (TASLP), Vol. 31, pp. 876-890. DOI: 10.1109/TASLP.2023.3237167.

Z. Yue*, E. Loweimi, J. Barker, H. Christensen, and Z. Cvetkovic (2022). Modelling from Raw Source and Filter Components for Dysarthric Speech Recognition. IEEE/ACM Transactions on Audio, Speech and Language Processing (TASLP), Vol. 30, pp. 2968-2980. DOI: 10.1109/TASLP.2022.3205766.

Z. Yue*, M. Barberis, T. Patel, J. Dineley, W. Doedens, L. Stipdonk, E. Witte, E. Loweimi, H. Van hamme, D. Satoer, M. Ruiter, LM. Velazquez, N. Cummins, O. Scharenborg (2025). Challenges and practical guidelines for atypical speech data collection, annotation, usage and sharing: A multi-project perspective. In INTERSPEECH, pp. 3943-3947. ISCA. 17-21 August, Rotterdam, the Netherlands. DOI: 10.21437/Interspeech.2025-2774. (Shortlisted for Best Theme Paper Award.)

Z. Yue*, H. Christensen, and J. Barker (2020). Autoencoder bottleneck features with multi-task optimisation for improved continuous dysarthric speech recognition. In INTERSPEECH, pp. 4581-4585. ISCA. 25-29 October, Virtual. DOI: 10.21437/Interspeech.2020-2746.

Z. Yue*, F. Xiong, H. Christensen, and J. Barker (2020). Exploring appropriate acoustic and language model choices for continuous dysarthric speech recognition. In ICASSP, pp. 6094-6098. IEEE. 04-08 May, Barcelona, Spain. DOI: 10.1109/ICASSP40776.2020.9054343.

Z. Yue*, D. Kayande, Z. Cvetkovic, E. Loweimi. (2026). Probing Whisper for Dysarthric Speech in Detection and Assessment. In IEEE ICASSP 2026. DOI: 10.48550/arXiv.2510.04219.

D. Groot, T. Patel, D. Kayande, O. Scharenborg, Z. Yue* (2025). Objective and Subjective Evaluation of Diffusion-Based Speech Enhancement for Dysarthric Speech. In INTERSPEECH. ISCA. 17-21 August, Rotterdam, the Netherlands. DOI: 10.21437/Interspeech.2025-2768.

F. Xiong, J. Barker, Z. Yue*, and H. Christensen (2020). Source domain data selection for improved transfer learning targeting dysarthric speech recognition. In ICASSP, pp. 7424-7428. IEEE. 04-08 May, Barcelona, Spain. DOI: 10.1109/ICASSP40776.2020.9054694.

Z. Yue*, E. Loweimi, Z. Cvetkovic, H. Christensen, and J. Barker (2022). Multi-modal Acoustic-articulatory Feature Fusion for Dysarthric Speech Recognition. In ICASSP, pp. 7372-7376. IEEE. 23-27 May, Singapore. DOI: 10.1109/ICASSP43922.2022.9746855.

Z. Yue*, E. Loweimi, Z. Cvetkovic (2022). Raw Source and Filter modelling for dysarthric speech recognition. In ICASSP, IEEE. 23-27 May, Singapore. DOI: 10.1109/ICASSP43922.2022.9746553.

Z. Yue*, E. Loweimi, and Z. Cvetkovic (2023). Dysarthric speech recognition, detection and classification using raw phase and magnitude spectra. In INTERSPEECH. ISCA. 20-24 August, Dublin, Ireland. DOI: 10.21437/Interspeech.2023-222.

C. Li, E. Yeo, K. Choi, PA. Pérez-Toro, M. Someki, RK. Das, Z. Yue*, J. Rafael Orozco-Arroyave, E. Nöth, DR. Mortensen (2025). Towards Inclusive ASR: Investigating Voice Conversion for Dysarthric Speech Recognition in Low-Resource Languages. In INTERSPEECH. ISCA. 17-21 August, Rotterdam, the Netherlands. DOI: 10.48550/arXiv.2505.14874.

Z. Yue*, Y. Zhang (2025). End-to-end acoustic-articulatory dysarthric speech recognition leveraging large-scale pretrained acoustic features. In ICASSP, pp. 1-5. IEEE. 06-11 April, Hyderabad, India. DOI: 10.1109/ICASSP49660.2025.10888412.
