Publication List of 李宏毅 Hung-Yi Lee

Journal articles & book chapters:

  1. Lin-shan Lee, James Glass, Hung-yi Lee, Chun-an Chan, “Spoken Content Retrieval—Beyond Cascading Speech Recognition with Text Retrieval,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, Sept. 2015
  2. Hung-yi Lee, Po-wei Chou, Lin-shan Lee, “Improved Open-Vocabulary Spoken Content Retrieval with Word and Subword Lattices Using Acoustic Feature Similarity,” Computer Speech & Language, Sept. 2014
  3. Hung-yi Lee, Ching-feng Yeh, Yun-Nung Chen, Yu Huang, Sheng-Yi Kong and Lin-shan Lee, “Spoken Knowledge Organization by Semantic Structuring and a Prototype Course Lecture System for Personalized Learning,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, May 2014
  4. Hung-yi Lee, Lin-shan Lee, “Improved Semantic Retrieval of Spoken Content by Document/Query Expansion with Random Walk over Acoustic Similarity Graphs,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, Jan. 2014
  5. Hung-yi Lee, Lin-shan Lee, “Enhanced Spoken Term Detection Using Support Vector Machines and Weighted Pseudo Examples,” IEEE Transactions on Audio, Speech, and Language Processing, Jun. 2013
  6. Hung-yi Lee, Chia-ping Chen, Lin-shan Lee, “Integrating Recognition and Retrieval with Relevance Feedback for Spoken Term Detection,” IEEE Transactions on Audio, Speech, and Language Processing, Sept. 2012
  7. Yi-cheng Pan, Hung-yi Lee, Lin-shan Lee, “Interactive Spoken Document Retrieval With Suggested Key Terms Ranked by a Markov Decision Process,” IEEE Transactions on Audio, Speech, and Language Processing, Feb. 2012

Conference & proceedings papers:

  1. Chia-Hsuan Lee, Yun-Nung Chen, Hung-Yi Lee, “Mitigating the Impact of Speech Recognition Errors on Spoken Question Answering by Adversarial Domain Adaptation,” ICASSP, 2019
  2. Yi-Lin Tuan, Hung-Yi Lee, “Improving Conditional Sequence Generative Adversarial Networks by Stepwise Evaluation,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2019
  3. Pei-Hung Chung, Kuan Tung, Ching-Lun Tai, Hung-Yi Lee, “Joint Learning of Interactive Spoken Content Retrieval and Trainable User Simulator,” INTERSPEECH, 2018
  4. Chih-Wei Lee, Yau-Shian Wang, Tsung-Yuan Hsu, Kuan-Yu Chen, Hung-Yi Lee, Lin-Shan Lee, “Scalable Sentiment for Sequence-to-sequence Chatbot Response with Performance Analysis,” ICASSP, 2018
  5. Chia-Hsuan Li, Szu-Lin Wu, Chi-Liang Liu, Hung-yi Lee, “Spoken SQuAD: A Study of Mitigating the Impact of Speech Recognition Errors on Listening Comprehension,” INTERSPEECH, 2018
  6. Chia-Hsuan Lee, Shang-Ming Wang, Huan-Cheng Chang, Hung-Yi Lee, “ODSQA: Open-domain Spoken Question Answering Dataset,” SLT, 2018
  7. Yu-An Chung, Hung-Yi Lee, James Glass, “Supervised and Unsupervised Transfer Learning for Question Answering,” NAACL-HLT, 2018
  8. Yau-Shian Wang, Hung-Yi Lee, “Learning to Encode Text as Human-Readable Summaries using Generative Adversarial Networks,” EMNLP, 2018
  9. Da-Rong Liu, Kuan-Yu Chen, Hung-Yi Lee, Lin-shan Lee, “Completely Unsupervised Phoneme Recognition by Adversarially Learning Mapping Relationships from Audio Embeddings,” INTERSPEECH, 2018
  10. Da-Rong Liu, Chi-Yu Yang, Szu-Lin Wu, Hung-Yi Lee, “Improving Unsupervised Style Transfer in End-to-End Speech Synthesis with End-to-End Speech Recognition,” SLT, 2018
  11. Ju-chieh Chou, Cheng-chieh Yeh, Hung-yi Lee, Lin-shan Lee, “Multi-target Voice Conversion without Parallel Data by Adversarially Learning Disentangled Audio Representations,” INTERSPEECH, 2018
  12. Hung-Yi Lee, Pei-Hung Chung, Yen-Chen Wu, Tzu-Hsiang Lin, Tsung-Hsien Wen, “Interactive Spoken Content Retrieval by Deep Reinforcement Learning,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2018
  13. Tzu-Ray Su, Hung-Yi Lee, “Learning Chinese Word Representations From Glyphs Of Characters,” EMNLP, 2017
  14. Yu-Hsuan Wang, Cheng-Tao Chung, Hung-yi Lee, “Gate Activation Signal Analysis for Gated Recurrent Neural Networks and Its Correlation with Phoneme Boundaries,” INTERSPEECH, 2017
  15. Hung-yi Lee, Bo-Hsiang Tseng, Tsung-Hsien Wen, Yu Tsao, “Personalizing Recurrent Neural Network Based Language Model by Social Network,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2017
  16. Bo-Hsiang Tseng, Sheng-syun Shen, Hung-Yi Lee, Lin-Shan Lee, “Towards Machine Comprehension of Spoken Content: Initial TOEFL Listening Comprehension Test by Machine,” INTERSPEECH, 2016
  17. Yu-An Chung, Chao-Chung Wu, Chia-Hao Shen, Hung-Yi Lee, Lin-Shan Lee, “Audio Word2Vec: Unsupervised Learning of Audio Segment Representations Using Sequence-to-Sequence Autoencoder,” INTERSPEECH, 2016
  18. Sheng-syun Shen, Hung-yi Lee, Shang-wen Li, Victor Zue and Lin-shan Lee, “Structuring Lectures in Massive Open Online Courses (MOOCs) for Efficient Learning by Linking Similar Sections and Predicting Prerequisites,” INTERSPEECH, Sept. 2015
  19. Hung-tsung Lu, Yuan-ming Liou, Hung-yi Lee and Lin-shan Lee, “Semantic Retrieval of Personal Photos using a Deep Autoencoder Fusing Visual Features with Speech Annotations Represented as Word/Paragraph Vectors,” INTERSPEECH, Sept. 2015
  20. Ching-Feng Yeh, Yuan-ming Liou, Hung-yi Lee and Lin-shan Lee, “Personalized Speech Recognizer with Keyword-based Personalized Lexicon and Language Model using Word Vector Representations,” INTERSPEECH, Sept. 2015
  21. Hung-yi Lee, Yu Zhang, Ekapol Chuangsuwanich, James Glass, “Graph-based Re-ranking using Acoustic Feature Similarity between Search Results for Spoken Term Detection on Low-resource Languages,” INTERSPEECH, Sept. 2014
  22. Han Lu, Sheng-syun Shen, Sz-Rung Shiang, Hung-yi Lee and Lin-shan Lee, “Alignment of Spoken Utterances with Slide Content for Easier Learning with Recorded Lectures using Structured Support Vector Machine (SVM),” INTERSPEECH, Sept. 2014
  23. Sz-Rung Shiang, Hung-yi Lee and Lin-shan Lee, “Spoken Question Answering Using Tree-structured Conditional Random Fields and Two-layer Random Walk,” INTERSPEECH, Sept. 2014
  24. Yuan-ming Liou, Yi-sheng Fu, Hung-yi Lee and Lin-shan Lee, “Semantic Retrieval of Personal Photos using Matrix Factorization and Two-layer Random Walk Fusing Sparse Speech Annotations with Visual Features,” INTERSPEECH, Sept. 2014
  25. Yun-Chiao Li, Hung-yi Lee, Cheng-Tao Chung, Chun-an Chan, and Lin-shan Lee, “Towards Unsupervised Semantic Retrieval of Spoken Content with Query Expansion based on Automatically Discovered Acoustic Patterns,” ASRU, Dec. 2013
  26. Hung-yi Lee, Ting-yao Hu, How Jing, Yun-Fan Chang, Yu Tsao, Yu-Cheng Kao, Tsang-Long Pao, “Ensemble of Machine Learning and Acoustic Segment Model Techniques for Speech Emotion and Autism Spectrum Disorders Recognition,” INTERSPEECH, Aug. 2013
  27. Sz-Rung Shiang, Hung-yi Lee, Lin-shan Lee, “Supervised Spoken Document Summarization Based on Structured Support Vector Machine with Utterance Clusters as Hidden Variables,” INTERSPEECH, Aug. 2013
  28. Tsung-Hsien Wen, Aaron Heidel, Hung-yi Lee, Yu Tsao, Lin-shan Lee, “Recurrent Neural Network Based Language Model Personalization by Social Network Crowdsourcing,” INTERSPEECH, Aug. 2013
  29. Ching-Feng Yeh, Hung-yi Lee and Lin-shan Lee, “Speaking Rate Normalization with Lattice-based Context-dependent Phoneme Duration Modeling for Personalized Speech Recognizers on Mobile Devices,” INTERSPEECH, Aug. 2013
  30. Hung-yi Lee, Yu-yu Chou, Yow-Bang Wang, Lin-shan Lee, “Unsupervised Domain Adaptation for Spoken Document Summarization with Structured Support Vector Machine,” ICASSP, May 2013
  31. Hung-yi Lee, Yun-Chiao Li, Cheng-Tao Chung, Lin-shan Lee, “Enhancing Query Expansion for Semantic Retrieval of Spoken Content with Automatically Discovered Acoustic Patterns,” ICASSP, May 2013
  32. Tsung-Hsien Wen, Hung-yi Lee, Pei-Hao Su, Lin-shan Lee, “Interactive Spoken Content Retrieval by Extended Query Model and Continuous State Space Markov Decision Process,” ICASSP, May 2013
  33. Hung-yi Lee, Tsung-Hsien Wen, Lin-shan Lee, “Improved Semantic Retrieval of Spoken Content by Language Models Enhanced with Acoustic Similarity Graph,” SLT, Dec. 2012
  34. Tsung-Hsien Wen, Hung-yi Lee, Lin-shan Lee, “Personalized Language Modeling by Crowd Sourcing with Social Network Data for Voice Access of Cloud Applications,” SLT, Dec. 2012
  35. Hung-yi Lee, Yu-yu Chou, Yow-Bang Wang, Lin-shan Lee, “Supervised Spoken Document Summarization Jointly Considering Utterance Importance and Redundancy by Structured Support Vector Machine,” INTERSPEECH, Sept. 2012
  36. Hung-yi Lee, Po-wei Chou, Lin-shan Lee, “Open-Vocabulary Retrieval of Spoken Content with Shorter/Longer Queries Considering Word/Subword-based Acoustic Feature Similarity,” INTERSPEECH, Sept. 2012
  37. Tsung-Hsien Wen, Hung-yi Lee, Lin-shan Lee, “Interactive Spoken Content Retrieval with Different Types of Actions Optimized by a Markov Decision Process,” INTERSPEECH, Sept. 2012
  38. Hung-yi Lee, Yun-nung Chen, Lin-shan Lee, “Utterance-level Latent Topic Transition Modeling for Spoken Documents and its Application in Automatic Summarization,” ICASSP, Mar. 2012
  39. Tsung-wei Tu, Hung-yi Lee, Lin-shan Lee, “Semantic Query Expansion and Context-based Discriminative Term Modeling for Spoken Document Retrieval,” ICASSP, Mar. 2012
  40. Yun-Nung Chen, Yu Huang, Hung-yi Lee, Lin-shan Lee, “Unsupervised Two-Stage Keyword Extraction from Spoken Documents by Topic Coherence and Support Vector Machine,” ICASSP, Mar. 2012
  41. Ching-Feng Yeh, Aaron Heidel, Hung-yi Lee, Lin-shan Lee, “Recognition of Highly Imbalanced Code-mixed Bilingual Speech with Frame-level Language Detection based on Blurred Posteriorgram,” ICASSP, Mar. 2012
  42. Tsung-wei Tu, Hung-yi Lee, Lin-shan Lee, “Improved Spoken Term Detection using Support Vector Machines with Acoustic and Context Features from Pseudo-relevance Feedback,” ASRU, Dec. 2011
  43. Hung-yi Lee, Yun-nung Chen, Lin-shan Lee, “Improved Speech Summarization and Spoken Term Detection with Graphical Analysis of Utterance Similarities,” APSIPA, Oct. 2011
  44. Hung-yi Lee, Tsung-wei Tu, Chia-ping Chen, Chao-yu Huang, Lin-shan Lee, “Improved Spoken Term Detection Using Support Vector Machines based on Lattice Context Consistency,” ICASSP, May 2011
  45. Yun-nung Chen, Chia-ping Chen, Hung-yi Lee, Chun-an Chan, Lin-shan Lee, “Improved Spoken Term Detection with Graph-based Re-ranking in Feature Space,” ICASSP, May 2011
  46. Hung-yi Lee, Chia-ping Chen, Ching-feng Yeh, Lin-shan Lee, “A Framework Integrating Different Relevance Feedback Scenarios and Approaches for Spoken Term Detection,” SLT, Dec. 2010
  47. Hung-yi Lee, Chia-ping Chen, Ching-feng Yeh, Lin-shan Lee, “Improved Spoken Term Detection by Discriminative Training of Acoustic Models based on User Relevance Feedback,” INTERSPEECH, Sept. 2010
  48. Chia-ping Chen, Hung-yi Lee, Ching-feng Yeh, Lin-shan Lee, “Improved Spoken Term Detection by Feature Space Pseudo-Relevance Feedback,” INTERSPEECH, Sept. 2010
  49. Hung-yi Lee and Lin-shan Lee, “Integrating Recognition and Retrieval with User Feedback: A New Framework for Spoken Term Detection,” ICASSP, Mar. 2010
  50. Yu-Hui Chen, Chia-Chen Chou, Hung-yi Lee, Lin-shan Lee, “An Initial Attempt to Improve Spoken Term Detection by Learning Optimal Weights for Different Indexing Features,” ICASSP, Mar. 2010
  51. Hung-yi Lee, Yueh-Lien Tang, Hao Tang, Lin-shan Lee, “Spoken Term Detection from Bilingual Spontaneous Speech Using Code-switched Lattice-based Structures for Words and Subword Units,” ASRU, Dec. 2009
  52. Chao-hong Meng, Hung-yi Lee, Lin-shan Lee, “Improved Lattice-based Spoken Document Retrieval by Directly Learning from the Evaluation Measures,” ICASSP, Apr. 2009