姬艳丽 (Yanli Ji)
Professor
Ph.D., Kyushu University, Japan; visiting researcher at the University of Tokyo. She was selected for Shenzhen's High-Level Talent program and the 2023 list of young Chinese women scholars in AI. She has published more than 60 papers in CCF Class-A journals and conferences and filed more than 30 patent applications. She received the Best Paper Award at the 28th Australasian Database Conference and a Best Paper Nomination at the 20th ChinaVR (China Virtual Reality) conference. She is a Senior Member of the China Computer Federation (CCF), an executive member and deputy secretary-general of the Youth Working Committee of the China Society of Image and Graphics (CSIG), and chair of the VALSE SAC committee. As a core organizer (program chair, workshop chair, etc.), she has helped organize more than 20 academic conferences in China and abroad, including VALSE and the ACM TURC SIGAI China conference, and has participated in organizing major international conferences such as ACM MM 2021, ACM MM Asia 2021, and ACCV 2022.
Representative publications (past 5-10 years, ordered by impact):
1.K Gedamu, Y JI*, Y Yang, J Shao, H T Shen. Self-supervised Sub-Action Parsing Network for Semi-supervised Action Quality Assessment, TIP, 2024. (IF = 10.8)
2.X Liang, Y JI*, W-S Zheng, W Zuo, X Zhu. SV-Learner: Support-Vector Contrastive Learning for Robust Learning with Noisy Labels, TKDE, 2024. (IF = 8.9)
3.Z Lin, Y JI*, Y Yang. Independence Adversarial Learning for Cross-modal Sound Separation, AAAI, 2024. (CCF A)
4.J Huang, Y JI*, Y Yang, H T Shen. Dominant Single-Modal Supplementary Fusion (SIMSUF) for Multimodal Sentiment Analysis, TMM, 2023. (IF = 8.4)
5.K Gedamu, Y JI*, Y Yang, J Shao, H T Shen. Fine-grained Spatio-temporal Parsing Network for Action Quality Assessment, TIP, Vol.32, pp. 6386-6400, 2023. (IF = 10.8)
6.Y JI, L Ye, H Huang, L Mao. Localization-assisted Uncertainty Score Disentanglement Network for Action Quality Assessment, ACM MM, pp. 8590-8597, 2023. (CCF A)
7.J Huang, Y JI*, Y Yang, H T Shen. Cross-modality Representation Interactive Learning for Multimodal Sentiment Analysis. ACM MM, 2023. (CCF A)
8.K Gedamu, Y JI*, L Gao, Y Yang, H T Shen. Relation-mining self-attention network for skeleton-based human action recognition, Pattern Recognition, Vol. 139, 109455, 2023. (IF = 8.6)
9.Y JI, S Ma, X Xu, X Li, HT Shen. Self-supervised Fine-grained Cycle-Separation Network (FSCN) for Visual-Audio Separation, TMM, Vol.25, pp. 5864-5876, 2022. (IF = 8.4)
10.L Gao, Y JI*, Y Yang, H T Shen, Global-local Cross-view Fisher Discrimination for View-Invariant Action Recognition, ACM MM, pp. 5255–5264, 2022. (CCF A)
11.G Wang, X Xu, F Shen, H Lu, Y JI, HT Shen. Cross-modal Dynamic Networks for Video Moment Retrieval with Text Query, TMM, 2022. (IF = 8.4)
12.Y JI, Y Hu, Y Yang, HT Shen. Region Attention Enhanced Unsupervised Cross-Domain Facial Emotion Recognition, TKDE, Vol. 35(4), pp. 4190-4201, 2023. (IF = 8.9)
13.X Zhu, H Li, HT Shen, Z Zhang, Y JI, Y Fan. Fusing functional connectivity with network nodal information for sparse network pattern learning of functional brain networks, Information Fusion, Vol. 75, 131-139, 2021.
14.S Ma, Y JI*, X Xu, X Zhu. Vision-guided Music Source Separation via a Fine-grained Cycle-Separation Network, ACM MM, pp. 4202-4210, 2021. (CCF A)
15.L Gao, Y JI*, X Xu, X Zhu, HT Shen. View-invariant Human Action Recognition via View Transformation Network, TMM, Vol. 24, pp. 4493-4503, 2022. (IF = 8.4)
16.K Gedamu, Y JI*, Y Yang, L Gao, HT Shen. Arbitrary-view human action recognition via novel-view action generation, Pattern Recognition, Vol. 118, 108043, 2021. (IF = 8.6)
17.Y Fu, M Zhang, X Xu, Z Cao, C Ma, Y JI, K Zuo, H Lu. Partial Feature Selection and Alignment for Multi-Source Domain Adaptation, CVPR, 2021. (CCF A)
18.M Zhang, Y Yang, X Chen, Y JI, X Xu, J Li, HT Shen. Multi-stage aggregated transformer network for temporal language localization in videos, CVPR, 2021. (CCF A)
19.L Peng, Y Yang, X Zhang, Y JI, H Lu, HT Shen. Answer Again: Improving VQA with Cascaded-Answering Model, TKDE, 2020. (IF = 8.9)
20.Y JI, Y Yang, F Shen, HT Shen, WS Zheng. Arbitrary-view Human Action Recognition: A Varying-view RGB-D Action Dataset, TCSVT, Vol. 31(1), pp. 289-300, 2020. (IF = 8.3)
21.J Wei, X Xu, Y Yang, Y JI, Z Wang, HT Shen. Universal Weighting Metric Learning for Cross-modal Matching, CVPR, 2020. (CCF A)
22.Y Li, A Bozic, T Zhang, Y JI, T Harada, M Nießner. Learning to Optimize Non-Rigid Tracking, CVPR, 2020. (CCF A)
23.Y JI, Y Zhan, Y Yang, X Xu, F Shen, HT Shen. A Context Knowledge Map Guided Coarse-to-Fine Action Recognition, TIP, Vol. 29, pp. 2742-2752, 2019. (IF = 10.8)
24.Y JI, F Xu, Y Yang, N Xie, HT Shen, T Harada. Attention Transfer (ANT) Network for View-invariant Action Recognition, ACM MM, 574-582, 2019. (CCF A)
25.Y JI, Y Yang, F Shen, HT Shen, X Li. A Survey of Human Action Analysis in HRI Applications, TCSVT, Vol. 30 (7), 2114-2128, 2020. (IF = 8.3)
Patents:
1. An automatic attendance method based on visual head detection, ZL201711161391.5, 2021-03.
2. An intelligent attention-monitoring method based on gaze estimation, ZL201710546644.4, 2020-04.
3. A hand pose estimation method based on depth information and correction, ZL201610321710.3, 2019-08.
4. A global hand pose detection method based on depth data, ZL201610093720.6, 2019-07.
5. A 3D hand gesture pose estimation method and system based on depth data, ZL201510670919.6, 2019-06.
6. A 3D gaze direction estimation method based on eye keypoint detection, ZL201611018884.9, 2019-03-15.
7. An STDW-based continuous character gesture trajectory recognition method, ZL201610688950.7, 2019-01.
8. A human-computer interaction method based on eye tracking, ZL201310684342.5, 2016-08.
9. An arbitrary-view action recognition method, ZL202011541269.2, 2022-03.
10. A controllable facial expression generation method incorporating style transfer, ZL202011618332.8, 2022-04.
11. A vision-assisted cross-modal audio signal separation method, ZL202011537001.1, 2022-07.
12. A method for converting text to speech in a specified style, ZL202010128298.X, 2023-04-18.
13. An action quality assessment method based on quality-score disentanglement, 202310465335X, 2023-04-26.