Any-to-One Sequence-to-Sequence Voice Conversion using Self-Supervised Discrete Speech Representations

Paper: [arXiv]
Authors: Wen-Chin Huang, Yi-Chiao Wu, Tomoki Hayashi, Tomoki Toda.
Comments: Accepted to ICASSP 2021.

Abstract: We present a novel approach to any-to-one (A2O) voice conversion (VC) in a sequence-to-sequence (seq2seq) framework. A2O VC aims to convert any speaker, including those unseen during training, to a fixed target speaker. We utilize vq-wav2vec (VQW2V), a discretized self-supervised speech representation learned from massive unlabeled data, which is assumed to be speaker-independent and to correspond well to the underlying linguistic content. Given a training dataset of the target speaker, we extract VQW2V and acoustic features and estimate a seq2seq mapping function from the former to the latter. With the help of a pretraining method and a newly designed postprocessing technique, our model generalizes with as little as 5 minutes of data, even outperforming the same model trained with parallel data.

Proposed Method
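The pipeline has two stages: a frozen VQW2V model discretizes the input speech into (assumed) speaker-independent units, and a seq2seq model trained on the target speaker maps those units to acoustic features. As a rough illustration of the first stage, the sketch below extracts discrete indices with fairseq's released vq-wav2vec checkpoint. This is a minimal sketch, not the paper's exact code: the checkpoint filename is a placeholder, and the calls follow fairseq's public wav2vec example usage.

```python
import torch
import fairseq  # pip install fairseq; follows the wav2vec example usage

# Released vq-wav2vec checkpoint (filename is a placeholder for this sketch).
cp = torch.load('vq-wav2vec_kmeans.pt', map_location='cpu')
model = fairseq.models.wav2vec.Wav2VecModel.build_model(cp['args'], task=None)
model.load_state_dict(cp['model'])
model.eval()

# One 16 kHz mono waveform; random noise stands in for real speech here.
wav = torch.randn(1, 16000)

with torch.no_grad():
    z = model.feature_extractor(wav)                 # continuous frame features
    _, idxs = model.vector_quantizer.forward_idx(z)  # discrete codebook indices

# idxs has shape (batch, frames, groups): the discretized, assumed
# speaker-independent units that the seq2seq model consumes.
print(idxs.shape)
```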

Dataset

We conducted all our experiments on the CMU Arctic database.
A male speaker (bdl) and a female speaker (clb) were chosen as source speakers, and a female speaker (slt) was chosen as the target speaker.

Models

We compare the following systems. VTN denotes the Voice Transformer Network, a seq2seq baseline trained on parallel data; Proposed denotes our A2O seq2seq model driven by VQW2V discrete units. The number in parentheses is the number of target-speaker training utterances (932 for the full training set, 80 for roughly 5 minutes of speech). Analysis-Synthesis is target speech reconstructed from its own acoustic features by the vocoder, serving as an upper bound. A toy sketch of the unit-to-acoustic-feature mapping follows.
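For intuition only, here is a hypothetical PyTorch sketch of the second stage: a small Transformer seq2seq model mapping discrete unit IDs to target-speaker mel-spectrogram frames with teacher forcing. It is not the paper's VTN architecture or training recipe (no TTS pretraining, stop-token prediction, or the proposed postprocessing); the vocabulary size, model dimensions, and L1 loss are illustrative assumptions.

```python
import torch
import torch.nn as nn

class UnitToMelSeq2Seq(nn.Module):
    """Toy Transformer seq2seq: discrete unit IDs -> mel-spectrogram frames.

    Illustrative sketch only; all hyperparameters here are arbitrary.
    """
    def __init__(self, n_units=320, n_mels=80, d_model=256, max_len=2048):
        super().__init__()
        self.unit_embed = nn.Embedding(n_units, d_model)
        self.mel_prenet = nn.Linear(n_mels, d_model)
        self.pos = nn.Embedding(max_len, d_model)  # learned positional embedding
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=4,
            num_encoder_layers=3, num_decoder_layers=3,
            dim_feedforward=1024, batch_first=True)
        self.mel_out = nn.Linear(d_model, n_mels)

    def forward(self, unit_ids, mel_prev):
        # unit_ids: (B, T_src) ints; mel_prev: (B, T_tgt, n_mels) shifted
        # target frames for teacher forcing during training.
        pos_s = self.pos(torch.arange(unit_ids.size(1), device=unit_ids.device))
        pos_t = self.pos(torch.arange(mel_prev.size(1), device=mel_prev.device))
        src = self.unit_embed(unit_ids) + pos_s
        tgt = self.mel_prenet(mel_prev) + pos_t
        causal = self.transformer.generate_square_subsequent_mask(tgt.size(1))
        out = self.transformer(src, tgt, tgt_mask=causal)
        return self.mel_out(out)

model = UnitToMelSeq2Seq()
ids = torch.randint(0, 320, (2, 100))      # discrete VQW2V indices
mels = torch.randn(2, 120, 80)             # stand-in target mel frames
pred = model(ids, mels)                    # -> (2, 120, 80) predicted mels
loss = nn.functional.l1_loss(pred, mels)   # typical L1 reconstruction loss
```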

Speech Samples

Transcription: There were stir and bustle, new faces, and fresh facts.


Model              | clb(F)-slt(F) | bdl(M)-slt(F)
Source             | [audio]       | [audio]
Target             | [audio]       | [audio]
Analysis-Synthesis | [audio]       | [audio]
VTN (932)          | [audio]       | [audio]
VTN (80)           | [audio]       | [audio]
Proposed (932)     | [audio]       | [audio]
Proposed (80)      | [audio]       | [audio]

Transcription: And there was Ethel Baird, whom also you must remember.


Model              | clb(F)-slt(F) | bdl(M)-slt(F)
Source             | [audio]       | [audio]
Target             | [audio]       | [audio]
Analysis-Synthesis | [audio]       | [audio]
VTN (932)          | [audio]       | [audio]
VTN (80)           | [audio]       | [audio]
Proposed (932)     | [audio]       | [audio]
Proposed (80)      | [audio]       | [audio]

Transcription: He had become a man very early in life.


Model              | clb(F)-slt(F) | bdl(M)-slt(F)
Source             | [audio]       | [audio]
Target             | [audio]       | [audio]
Analysis-Synthesis | [audio]       | [audio]
VTN (932)          | [audio]       | [audio]
VTN (80)           | [audio]       | [audio]
Proposed (932)     | [audio]       | [audio]
Proposed (80)      | [audio]       | [audio]
