Exploring VQ-VAE with Prosody Parameters for Speaker Anonymization
Conference paper, 2024


Abstract

Human speech conveys prosody, linguistic content, and speaker identity. This article investigates a novel speaker anonymization approach based on an end-to-end Vector-Quantized Variational Auto-Encoder (VQ-VAE) network that handles these speech components. The approach disentangles them so that speaker identity can be specifically targeted and modified while the linguistic and emotional content is preserved. To do so, three separate branches compute embeddings for content, prosody, and speaker identity, respectively. During synthesis, the decoder of the proposed architecture takes these embeddings and is conditioned on both the speaker and prosody information, allowing it to capture more nuanced emotional states and to make precise adjustments to speaker identity. Findings indicate that this method outperforms most baseline techniques at preserving emotional information; however, it shows more limited performance on the other voice privacy tasks, underlining the need for further improvements.
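The two mechanisms the abstract describes are (1) quantizing the content branch's output against a learned codebook and (2) conditioning the decoder on prosody and speaker embeddings, so that swapping the speaker embedding anonymizes the voice. The sketch below illustrates only these two steps in NumPy; every function name, dimension, and the concatenation-based conditioning are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def vector_quantize(z, codebook):
    """Map each content-branch frame to its nearest codebook entry.

    z: (T, D) per-frame content embeddings; codebook: (K, D) learned codes.
    Returns the quantized frames and the chosen code indices.
    """
    # Squared Euclidean distance from every frame to every code: (T, K).
    d = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    idx = d.argmin(axis=1)          # nearest code per frame, shape (T,)
    return codebook[idx], idx

def decode(content_q, prosody_emb, speaker_emb, W):
    """Toy decoder: condition quantized content on prosody and a
    (possibly swapped) speaker embedding by concatenation, then
    project linearly; a stand-in for the real synthesis decoder."""
    T = content_q.shape[0]
    cond = np.concatenate(
        [content_q,
         np.tile(prosody_emb, (T, 1)),   # broadcast utterance-level prosody
         np.tile(speaker_emb, (T, 1))],  # swap this vector to anonymize
        axis=1)                          # (T, 3*D)
    return cond @ W                      # (T, D) decoded features

rng = np.random.default_rng(0)
T, D, K = 5, 8, 16                       # frames, embedding dim, codebook size
content = rng.normal(size=(T, D))        # stand-in for the content branch
prosody = rng.normal(size=D)             # stand-in for the prosody branch
speaker = rng.normal(size=D)             # anonymization replaces this vector
codebook = rng.normal(size=(K, D))
W = rng.normal(size=(3 * D, D))

content_q, idx = vector_quantize(content, codebook)
out = decode(content_q, prosody, speaker, W)
print(out.shape)  # (5, 8)
```

Because the speaker embedding enters only at the decoder, anonymization amounts to replacing that single vector while the quantized content and the prosody embedding are left untouched.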

Dates and versions

hal-04706860 , version 1 (23-09-2024)

Identifiers

  • HAL Id: hal-04706860, version 1

Cite

Sotheara Leang, Anderson Augusma, Eric Castelli, Frédérique Letué, Sethserey Sam, et al. Exploring VQ-VAE with Prosody Parameters for Speaker Anonymization. Voice Privacy Challenge 2024 at INTERSPEECH 2024, Sep 2024, Kos Island, Greece. ⟨hal-04706860⟩