Representative Publications
1. Xixin WU, Yuewen CAO, Hui LU, Songxiang LIU, Disong WANG, Zhiyong WU, Xunying LIU, Helen MENG, Speech Emotion Recognition Using Sequential Capsule Networks, IEEE/ACM Transactions on Audio, Speech, and Language Processing (TASLP), vol. 29, pp. 3280-3291, 2021. (SCI, EI) (CCF A)
2. Xixin WU, Yuewen CAO, Hui LU, Songxiang LIU, Shiyin KANG, Zhiyong WU, Xunying LIU, Helen MENG, Exemplar-Based Emotive Speech Synthesis, IEEE/ACM Transactions on Audio, Speech, and Language Processing (TASLP), vol. 29, pp. 874-886, 2021. (SCI, EI) (CCF A)
3. Yingmei GUO, Linjun SHOU, Jian PEI, Ming GONG, Mingxing XU, Zhiyong WU and Daxin JIANG, Learning from Multiple Noisy Augmented Data Sets for Better Cross-Lingual Spoken Language Understanding, [in] Proc. EMNLP, pp. 1-12. Punta Cana, Dominican Republic, 7-11 November, 2021. (EI) (THU A)
4. Yaohua BU, Tianyi MA, Weijun LI, Hang ZHOU, Jia JIA, Shengqi CHEN, Kaiyuan XU, Dachuan SHI, Haozhe WU, Zhihan YANG, Kun LI, Zhiyong WU, Yuanchun SHI, Xiaobo LU, Ziwei LIU, PTeacher: a Computer-Aided Personalized Pronunciation Training System with Exaggerated Audio-Visual Corrective Feedback, [in] Proc. CHI, pp. 1-14. Yokohama, Japan, 8-13 May, 2021. (EI) (CCF A)
5. Suping ZHOU, Jia JIA, Zhiyong WU, Zhihan YANG, Yanfeng WANG, Wei CHEN, Fanbo MENG, Shuo HUANG, Jialie SHEN, Xiaochuan WANG, Inferring Emotion from Large-Scale Internet Voice Data: A Semi-supervised Curriculum Augmentation based Deep Learning Approach, [in] Proc. AAAI, pp. 6039-6047. 2-9 February, 2021. (EI) (CCF A)
6. Runnan LI, Zhiyong WU, Jia JIA, Yaohua BU, Sheng ZHAO, Helen MENG, Towards Discriminative Representation Learning for Speech Emotion Recognition, [in] Proc. IJCAI, pp. 5060-5066. Macao, China, 10-16 August, 2019. (EI) (CCF A)
7. Yishuang NING, Sheng HE, Zhiyong WU, Chunxiao XING, Liangjie ZHANG, A Review of Deep Learning Based Speech Synthesis, Applied Sciences-Basel, vol. 9, no. 19, pp. 4050, September 2019. (SCI, EI)
8. Runnan LI, Zhiyong WU, Jia JIA, Jingbei LI, Wei CHEN, Helen MENG, Inferring User Emotive State Changes in Realistic Human-Computer Conversational Dialogs, [in] Proc. ACM Multimedia, pp. 136-144. Seoul, Korea, 22-26 October, 2018. (EI) (CCF A)
9. Kun LI, Shaoguang MAO, Xu LI, Zhiyong WU, Helen MENG, Automatic Lexical Stress and Pitch Accent Detection for L2 English Speech using Multi-Distribution Deep Neural Networks, Speech Communication, vol. 96, pp. 28-36, Elsevier, February 2018. (SCI, EI) (CCF B)
10. Yishuang NING, Jia JIA, Zhiyong WU, Runnan LI, Yongsheng AN, Yanfeng WANG, Helen MENG, Multi-task Deep Learning for User Intention Understanding in Speech Interaction Systems, [in] Proc. AAAI, pp. 161-167. San Francisco, USA, 4-9 February, 2017. (EI) (CCF A)
11. Zhiyong WU, Yishuang NING, Xiao ZANG, Jia JIA, Fanbo MENG, Helen MENG, Lianhong CAI, Generating Emphatic Speech with Hidden Markov Model for Expressive Speech Synthesis, Multimedia Tools and Applications, vol. 74, pp. 9909-9925, Springer, 2015. (SCI, EI) (CCF C)
12. Zhiyong WU, Kai ZHAO, Xixin WU, Xinyu LAN, Helen MENG, Acoustic to Articulatory Mapping with Deep Neural Network, Multimedia Tools and Applications, vol. 74, pp. 9889-9907, Springer, 2015. (SCI, EI) (CCF C)
13. Qi LYU, Zhiyong WU, Jun ZHU, Polyphonic Music Modelling with LSTM-RTRBM, [in] Proc. ACM Multimedia, pp. 991-994. Brisbane, Australia, 26-30 October, 2015. (EI) (CCF A)
14. Qi LYU, Zhiyong WU, Jun ZHU, Helen MENG, Modelling High-dimensional Sequences with LSTM-RTRBM: Application to Polyphonic Music Generation, [in] Proc. IJCAI, pp. 4138-4139. Buenos Aires, Argentina, 25-31 July, 2015. (EI) (CCF A)
15. Jia JIA, Zhiyong WU, Shen ZHANG, Helen MENG, Lianhong CAI, Head and Facial Gestures Synthesis using PAD Model for an Expressive Talking Avatar, Multimedia Tools and Applications, vol. 73, no. 1, pp. 439-461, Springer, 2014. (SCI, EI) (CCF C)
16. Zhiyong WU, Helen M. MENG, Hongwu YANG, Lianhong CAI, Modeling the Expressivity of Input Text Semantics for Chinese Text-to-Speech Synthesis in a Spoken Dialog System, IEEE Transactions on Audio, Speech, and Language Processing (TASLP), vol. 17, no. 8, pp. 1567-1577, November 2009. (SCI, EI) (CCF A)
Selected Patents
1. Zhiyong WU, Liangqi LIU. A Method and System for Stress Detection Based on Multimodal Features, 2019-10-18, China, ZL201910995480.2
2. Zhiyong WU, Kun ZHANG. A Method and System for Spoken Keyword Detection, 2019-10-17, China, ZL201910990230.X
3. Zhiyong WU, Dongyang DAI. An End-to-End Cross-Lingual Speech Emotion Recognition Method Based on Adversarial Learning, 2019-08-08, China, ZL201910731716.1
4. Zhiyong WU, Yao DU, Shiyin KANG, Dan SU, Dong YU. Method for Prosodic Hierarchy Annotation, and Method and Apparatus for Model Training, 2019-01-22, China, ZL201910751371.6
5. Zhiyong WU, Dongyang DAI, Shiyin KANG, Dan SU, Dong YU. Method and Apparatus for Determining the Pronunciation of Polyphonic Characters, 2019-06-25, China, ZL201910555855.3
6. Xin JIN, Yiqi JIANG, Lei ZHANG, Xin ZHANG, Zhiyong WU. A Method and Apparatus for Video Shot Segmentation Boundary Detection, 2015-12-29, China, ZL201511020545.X
7. Xin JIN, Yiqi JIANG, Lei ZHANG, Zhiyong WU. A Content Inpainting Method for Contaminated Regions in Video, 2015-11-10, China, ZL201510760914.2