Time Alignment using Lip Images for Frame-based Electrolaryngeal Voice Conversion

Paper: [arXiv]
Authors: Yi-Syuan Liou, Wen-Chin Huang, Ming-Chi Yen, Shu-Wei Tsai, Yu-Huai Peng, Tomoki Toda, Yu Tsao, Hsin-Min Wang
Comments: Accepted to APSIPA 2021.

Abstract: Voice conversion (VC) is an effective approach to electrolaryngeal (EL) speech enhancement, a task that aims to improve the quality of the artificial voice produced by an electrolarynx device. In frame-based VC methods, time alignment needs to be performed prior to model training, and the dynamic time warping (DTW) algorithm is widely adopted to compute the best time alignment between each utterance pair. Its validity rests on the assumption that the same phonemes of the two speakers have similar features and can be mapped by measuring a pre-defined distance between speech frames of the source and the target. However, the special characteristics of EL speech can break this assumption, resulting in a sub-optimal DTW alignment. In this work, we propose to use lip images for time alignment, based on the assumption that the lip movements of laryngectomees remain normal compared to those of healthy people. We investigate two naive lip representations and distance metrics, and experimental results demonstrate that the proposed method significantly outperforms audio-only alignment in both objective and subjective evaluations.
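As a rough illustration of the frame-based alignment described above, the sketch below implements the classic DTW recursion with a pluggable frame distance, so the same routine can align mel-cepstral (MCEP) features or lip features. This is a minimal sketch, not the authors' implementation; the function name and the default Euclidean distance are illustrative assumptions.

```python
# Minimal DTW sketch: aligns two feature sequences (numpy arrays of
# shape (T, D)) under a pluggable frame distance. Illustrative only;
# not the authors' implementation.
import numpy as np

def dtw_align(src, tgt, dist=lambda a, b: np.linalg.norm(a - b)):
    """Return the optimal warping path as a list of (src_idx, tgt_idx)."""
    T, U = len(src), len(tgt)
    cost = np.full((T + 1, U + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, T + 1):
        for j in range(1, U + 1):
            d = dist(src[i - 1], tgt[j - 1])
            cost[i, j] = d + min(cost[i - 1, j - 1],  # match
                                 cost[i - 1, j],      # source frame repeated
                                 cost[i, j - 1])      # target frame repeated
    # Backtrack from (T, U) to recover the optimal warping path.
    path, i, j = [], T, U
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = int(np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]]))
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]
```

With MCEP frames as input this corresponds to the conventional audio-only baseline (DTW-mcep in the samples below); substituting lip features changes only the inputs and the distance metric.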

Proposed method

Dataset

Audio and video signals of a doctor were recorded.

Sample EL video | Sample NL video

Alignment methods
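The samples below compare three alignment variants: DTW-mcep (audio features), DTW-lip-raw (lip-region pixels), and DTW-lip-landmark (mouth keypoints). As a hedged sketch of the two naive lip representations, the following builds on dlib's 68-point face landmark model (mouth points are indices 48 to 67); the crop size, normalization, and model path are assumptions, and the paper's handling of the video/audio frame-rate mismatch is not reproduced here.

```python
# Sketch of two naive lip representations: "raw" (downsampled, flattened
# lip-region pixels) and "landmark" (centered mouth keypoint coordinates).
# Assumes dlib's 68-point shape predictor; illustrative only.
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")  # assumed model path

def lip_features(frame_bgr, mode="landmark", crop_size=32):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    face = detector(gray)[0]  # assume exactly one visible face
    shape = predictor(gray, face)
    # In the 68-point model, indices 48..67 cover the mouth region.
    mouth = np.array([(shape.part(k).x, shape.part(k).y) for k in range(48, 68)])
    if mode == "landmark":
        # Center the keypoints so the feature is translation-invariant.
        return (mouth - mouth.mean(axis=0)).astype(np.float32).ravel()
    # mode == "raw": crop a box around the mouth, resize, and flatten.
    x0, y0 = mouth.min(axis=0)
    x1, y1 = mouth.max(axis=0)
    lip = cv2.resize(gray[y0:y1, x0:x1], (crop_size, crop_size))
    return lip.astype(np.float32).ravel() / 255.0
```

Per-frame feature sequences extracted this way from the source (EL) and target (NL) videos can then be passed to `dtw_align` above, with Euclidean distance as one plausible choice of metric.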

Subjective results

Speech Samples

Transcription: 醫院大門口有很多花店 (yi yuan da men kou you hen duo hua dian) — "There are many flower shops at the main entrance of the hospital."


EL speech | DTW-mcep | DTW-lip-raw | DTW-lip-landmark | NL speech

Transcription: 他的手錶慢了十五分鐘 (ta de shou biao man le shi wu fen zhong) — "His watch is fifteen minutes slow."


EL speech | DTW-mcep | DTW-lip-raw | DTW-lip-landmark | NL speech

Transcription: 周末我幫媽媽打掃房子 (zhou mo wo bang ma ma da sao fang zi) — "On the weekend I help my mom clean the house."


EL speech | DTW-mcep | DTW-lip-raw | DTW-lip-landmark | NL speech
