Diffusion-based Unsupervised Audio-visual Speech Enhancement
Preprint, 2024


Abstract

This paper proposes a new unsupervised audio-visual speech enhancement (AVSE) approach that combines a diffusion-based audio-visual speech generative model with a non-negative matrix factorization (NMF) noise model. First, the diffusion model is pre-trained on clean speech, conditioned on the corresponding video data, to model the speech generative distribution. This pre-trained model is then paired with the NMF-based noise model to iteratively estimate clean speech. Specifically, a diffusion-based posterior sampling approach is implemented within the reverse diffusion process, where after each iteration a speech estimate is obtained and used to update the noise parameters. Experimental results confirm that the proposed AVSE approach not only outperforms its audio-only counterpart but also generalizes better than a recent supervised generative AVSE method. Additionally, the new inference algorithm offers a better balance between inference speed and performance compared to the previous diffusion-based method.
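The alternating inference described in the abstract (reverse diffusion step for the speech estimate, then an NMF noise-parameter update on the residual) can be sketched as a toy loop. This is only an illustration under stated assumptions, not the paper's implementation: `score_fn` is a hypothetical placeholder for the pre-trained audio-visual diffusion score model, the data are random, and the NMF step uses standard multiplicative updates.

```python
import numpy as np

rng = np.random.default_rng(0)

def score_fn(x, t):
    # Hypothetical stand-in for the pre-trained audio-visual diffusion
    # score model (here it simply pulls the estimate toward zero).
    return -x

def nmf_update(W, H, V, eps=1e-8):
    # Standard multiplicative updates for the NMF noise model V ~= W @ H
    # (KL-divergence form); they preserve non-negativity of W and H.
    WH = W @ H + eps
    H *= (W.T @ (V / WH)) / (W.T @ np.ones_like(V) + eps)
    WH = W @ H + eps
    W *= ((V / WH) @ H.T) / (np.ones_like(V) @ H.T + eps)
    return W, H

F, T, K = 8, 10, 2           # frequency bins, frames, NMF rank (toy sizes)
noisy = rng.random((F, T))   # power spectrogram of the noisy mixture (toy data)
x = rng.standard_normal((F, T))       # diffusion state / current speech estimate
W, H = rng.random((F, K)), rng.random((K, T))  # NMF noise parameters

n_steps, step = 20, 0.05
for i in range(n_steps):
    t = 1.0 - i / n_steps
    # Reverse-diffusion step guided by the (placeholder) score model; a full
    # posterior sampler would also include a likelihood term toward the mixture.
    x = x + step * score_fn(x, t)
    # After each iteration, refit the noise parameters to the residual
    # power spectrogram (mixture minus current speech-power estimate).
    residual = np.maximum(noisy - x**2, 1e-8)
    W, H = nmf_update(W, H, residual)

print(x.shape, (W @ H).shape)
```

The key design point mirrored here is the alternation: the generative prior refines the speech estimate, while the lightweight NMF model tracks the noise, so no paired noisy/clean training data is required.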


Dates and versions

hal-04718254 , version 1 (03-10-2024)


Cite

Jean-Eudes Ayilo, Mostafa Sadeghi, Romain Serizel, Xavier Alameda-Pineda. Diffusion-based Unsupervised Audio-visual Speech Enhancement. 2024. ⟨hal-04718254⟩