Yi-Lin Tuan, Hung-Yi Lee, “Improving Conditional Sequence Generative Adversarial Networks by Stepwise Evaluation,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2019
Hung-Yi Lee, Pei-Hung Chung, Yen-Chen Wu, Tzu-Hsiang Lin, Tsung-Hsien Wen, “Interactive Spoken Content Retrieval by Deep Reinforcement Learning,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2018
Hung-yi Lee, Bo-Hsiang Tseng, Tsung-Hsien Wen, Yu Tsao, “Personalizing Recurrent Neural Network Based Language Model by Social Network,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2017
Lin-shan Lee, James Glass, Hung-yi Lee, Chun-an Chan, “Spoken Content Retrieval—Beyond Cascading Speech Recognition with Text Retrieval,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, Sept. 2015
Hung-yi Lee, Po-wei Chou, Lin-shan Lee, “Improved open-vocabulary spoken content retrieval with word and subword lattices using acoustic feature similarity,” Computer Speech & Language, Sept. 2014
Hung-yi Lee, Ching-feng Yeh, Yun-Nung Chen, Yu Huang, Sheng-Yi Kong and Lin-shan Lee, “Spoken Knowledge Organization by Semantic Structuring and a Prototype Course Lecture System for Personalized Learning,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, May 2014
Hung-yi Lee, Lin-shan Lee, “Improved Semantic Retrieval of Spoken Content by Document/Query Expansion with Random Walk over Acoustic Similarity Graphs,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, Jan. 2014
Hung-yi Lee, Lin-shan Lee, “Enhanced Spoken Term Detection Using Support Vector Machines and Weighted Pseudo Examples,” IEEE Transactions on Audio, Speech, and Language Processing, Jun. 2013
Hung-yi Lee, Chia-ping Chen, Lin-shan Lee, “Integrating Recognition and Retrieval with Relevance Feedback for Spoken Term Detection,” IEEE Transactions on Audio, Speech, and Language Processing, Sept. 2012
Yi-cheng Pan, Hung-yi Lee, Lin-shan Lee, “Interactive Spoken Document Retrieval With Suggested Key Terms Ranked by a Markov Decision Process,” IEEE Transactions on Audio, Speech, and Language Processing, Feb. 2012
Conference & proceedings papers:
Chia-Hsuan Lee, Yun-Nung Chen, Hung-Yi Lee, “Mitigating the Impact of Speech Recognition Errors on Spoken Question Answering by Adversarial Domain Adaptation,” ICASSP, 2019
Pei-Hung Chung, Kuan Tung, Ching-Lun Tai, Hung-Yi Lee, “Joint Learning of Interactive Spoken Content Retrieval and Trainable User Simulator,” INTERSPEECH, 2018
Chia-Hsuan Li, Szu-Lin Wu, Chi-Liang Liu, Hung-yi Lee, “Spoken SQuAD: A Study of Mitigating the Impact of Speech Recognition Errors on Listening Comprehension,” INTERSPEECH, 2018
Da-Rong Liu, Chi-Yu Yang, Szu-Lin Wu, Hung-Yi Lee, “Improving Unsupervised Style Transfer in End-to-End Speech Synthesis with End-to-End Speech Recognition,” SLT, 2018
Ju-chieh Chou, Cheng-chieh Yeh, Hung-yi Lee, Lin-shan Lee, “Multi-target Voice Conversion without Parallel Data by Adversarially Learning Disentangled Audio Representations,” INTERSPEECH, 2018
Tzu-Ray Su, Hung-Yi Lee, “Learning Chinese Word Representations From Glyphs Of Characters,” EMNLP, 2017
Yu-Hsuan Wang, Cheng-Tao Chung, Hung-yi Lee, “Gate Activation Signal Analysis for Gated Recurrent Neural Networks and Its Correlation with Phoneme Boundaries,” INTERSPEECH, 2017
Bo-Hsiang Tseng, Sheng-syun Shen, Hung-Yi Lee, Lin-Shan Lee, “Towards Machine Comprehension of Spoken Content: Initial TOEFL Listening Comprehension Test by Machine,” INTERSPEECH, 2016
Sheng-syun Shen, Hung-yi Lee, Shang-wen Li, Victor Zue and Lin-shan Lee, “Structuring Lectures in Massive Open Online Courses (MOOCs) for Efficient Learning by Linking Similar Sections and Predicting Prerequisites,” INTERSPEECH, Sept. 2015
Hung-tsung Lu, Yuan-ming Liou, Hung-yi Lee and Lin-shan Lee, “Semantic Retrieval of Personal Photos using a Deep Autoencoder Fusing Visual Features with Speech Annotations Represented as Word/Paragraph Vectors,” INTERSPEECH, Sept. 2015
Ching-Feng Yeh, Yuan-ming Liou, Hung-yi Lee and Lin-shan Lee, “Personalized Speech Recognizer with Keyword-based Personalized Lexicon and Language Model using Word Vector Representations,” INTERSPEECH, Sept. 2015
Hung-yi Lee, Yu Zhang, Ekapol Chuangsuwanich, James Glass, “Graph-based Re-ranking using Acoustic Feature Similarity between Search Results for Spoken Term Detection on Low-resource Languages,” INTERSPEECH, Sept. 2014
Han Lu, Sheng-syun Shen, Sz-Rung Shiang, Hung-yi Lee and Lin-shan Lee, “Alignment of Spoken Utterances with Slide Content for Easier Learning with Recorded Lectures using Structured Support Vector Machine (SVM),” INTERSPEECH, Sept. 2014
Sz-Rung Shiang, Hung-yi Lee and Lin-shan Lee, “Spoken Question Answering Using Tree-structured Conditional Random Fields and Two-layer Random Walk,” INTERSPEECH, Sept. 2014
Yuan-ming Liou, Yi-sheng Fu, Hung-yi Lee and Lin-shan Lee, “Semantic Retrieval of Personal Photos using Matrix Factorization and Two-layer Random Walk Fusing Sparse Speech Annotations with Visual Features,” INTERSPEECH, Sept. 2014
Yun-Chiao Li, Hung-yi Lee, Cheng-Tao Chung, Chun-an Chan, and Lin-shan Lee, “Towards Unsupervised Semantic Retrieval of Spoken Content with Query Expansion based on Automatically Discovered Acoustic Patterns,” ASRU, Dec. 2013
Hung-yi Lee, Ting-yao Hu, How Jing, Yun-Fan Chang, Yu Tsao, Yu-Cheng Kao, Tsang-Long Pao, “Ensemble of Machine Learning and Acoustic Segment Model Techniques for Speech Emotion and Autism Spectrum Disorders Recognition,” INTERSPEECH, Aug. 2013
Sz-Rung Shiang, Hung-yi Lee, Lin-shan Lee, “Supervised Spoken Document Summarization Based on Structured Support Vector Machine with Utterance Clusters as Hidden Variables,” INTERSPEECH, Aug. 2013
Tsung-Hsien Wen, Aaron Heidel, Hung-yi Lee, Yu Tsao, Lin-shan Lee, “Recurrent Neural Network Based Language Model Personalization by Social Network Crowdsourcing,” INTERSPEECH, Aug. 2013
Ching-Feng Yeh, Hung-yi Lee and Lin-shan Lee, “Speaking Rate Normalization with Lattice-based Context-dependent Phoneme Duration Modeling for Personalized Speech Recognizers on Mobile Devices,” INTERSPEECH, Aug. 2013
Hung-yi Lee, Yu-yu Chou, Yow-Bang Wang, Lin-shan Lee, “Unsupervised Domain Adaptation for Spoken Document Summarization with Structured Support Vector Machine,” ICASSP, May 2013
Hung-yi Lee, Yun-Chiao Li, Cheng-Tao Chung, Lin-shan Lee, “Enhancing Query Expansion for Semantic Retrieval of Spoken Content with Automatically Discovered Acoustic Patterns,” ICASSP, May 2013
Tsung-Hsien Wen, Hung-yi Lee, Pei-Hao Su, Lin-shan Lee, “Interactive Spoken Content Retrieval by Extended Query Model and Continuous State Space Markov Decision Process,” ICASSP, May 2013
Hung-yi Lee, Tsung-Hsien Wen, Lin-shan Lee, “Improved Semantic Retrieval of Spoken Content by Language Models Enhanced with Acoustic Similarity Graph,” SLT, Dec. 2012
Tsung-Hsien Wen, Hung-yi Lee, Lin-shan Lee, “Personalized Language Modeling by Crowd Sourcing with Social Network Data for Voice Access of Cloud Applications,” SLT, Dec. 2012
Hung-yi Lee, Yu-yu Chou, Yow-Bang Wang, Lin-shan Lee, “Supervised Spoken Document Summarization Jointly Considering Utterance Importance and Redundancy by Structured Support Vector Machine,” INTERSPEECH, Sept. 2012
Tsung-Hsien Wen, Hung-yi Lee, Lin-shan Lee, “Interactive Spoken Content Retrieval with Different Types of Actions Optimized by a Markov Decision Process,” INTERSPEECH, Sept. 2012
Hung-yi Lee, Yun-nung Chen, Lin-shan Lee, “Utterance-level Latent Topic Transition Modeling for Spoken Documents and its Application in Automatic Summarization,” ICASSP, Mar. 2012
Tsung-wei Tu, Hung-yi Lee, Lin-shan Lee, “Semantic Query Expansion and Context-based Discriminative Term Modeling for Spoken Document Retrieval,” ICASSP, Mar. 2012
Yun-Nung Chen, Yu Huang, Hung-yi Lee, Lin-shan Lee, “Unsupervised Two-Stage Keyword Extraction from Spoken Documents by Topic Coherence and Support Vector Machine,” ICASSP, Mar. 2012
Ching-Feng Yeh, Aaron Heidel, Hung-yi Lee, Lin-shan Lee, “Recognition of Highly Imbalanced Code-mixed Bilingual Speech with Frame-level Language Detection based on Blurred Posteriorgram,” ICASSP, Mar. 2012
Tsung-wei Tu, Hung-yi Lee, Lin-shan Lee, “Improved Spoken Term Detection using Support Vector Machines with Acoustic and Context Features from Pseudo-relevance Feedback,” ASRU, Dec. 2011
Hung-yi Lee, Yun-nung Chen, Lin-shan Lee, “Improved Speech Summarization and Spoken Term Detection with Graphical Analysis of Utterance Similarities,” APSIPA, Oct. 2011
Hung-yi Lee, Tsung-wei Tu, Chia-ping Chen, Chao-yu Huang, Lin-shan Lee, “Improved Spoken Term Detection Using Support Vector Machines based on Lattice Context Consistency,” ICASSP, May 2011
Yun-nung Chen, Chia-ping Chen, Hung-yi Lee, Chun-an Chan, Lin-shan Lee, “Improved Spoken Term Detection with Graph-based Re-ranking in Feature Space,” ICASSP, May 2011
Hung-yi Lee, Chia-ping Chen, Ching-feng Yeh, Lin-shan Lee, “A Framework Integrating Different Relevance Feedback Scenarios and Approaches for Spoken Term Detection,” SLT, Dec. 2010
Hung-yi Lee, Chia-ping Chen, Ching-feng Yeh, Lin-shan Lee, “Improved Spoken Term Detection by Discriminative Training of Acoustic Models based on User Relevance Feedback,” INTERSPEECH, Sept. 2010
Chia-ping Chen, Hung-yi Lee, Ching-feng Yeh, Lin-shan Lee, “Improved Spoken Term Detection by Feature Space Pseudo-Relevance Feedback,” INTERSPEECH, Sept. 2010
Hung-yi Lee and Lin-shan Lee, “Integrating Recognition and Retrieval with User Feedback: A New Framework for Spoken Term Detection,” ICASSP, Mar. 2010
Yu-Hui Chen, Chia-Chen Chou, Hung-yi Lee, Lin-shan Lee, “An Initial Attempt to Improve Spoken Term Detection by Learning Optimal Weights for Different Indexing Features,” ICASSP, Mar. 2010
Hung-yi Lee, Yueh-Lien Tang, Hao Tang, Lin-shan Lee, “Spoken Term Detection from Bilingual Spontaneous Speech Using Code-switched Lattice-based Structures for Words and Subword Units,” ASRU, Dec. 2009
Chao-hong Meng, Hung-yi Lee, Lin-shan Lee, “Improved Lattice-based Spoken Document Retrieval by Directly Learning from the Evaluation Measures,” ICASSP, Apr. 2009