PhD Defense: Can Cui (Multispeech)

1 October 2024 @ 15:00 - 17:00

Can Cui (Multispeech) will defend his thesis, entitled “Joint speech separation, diarization and recognition for automatic meeting transcription”, on Tuesday, October 1 at 3 p.m., in room A008.

Abstract:

Far-field microphone-array meeting transcription is particularly challenging due to overlapping speech, ambient noise, and reverberation. To address these issues, we explore three approaches. First, we employ a multichannel speaker separation model to isolate individual speakers, followed by a single-channel, single-speaker automatic speech recognition (ASR) model to transcribe the separated and enhanced audio. This method effectively enhances speech quality for ASR.
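As an illustration, this cascade can be sketched with placeholder components. The separator and ASR function below are hypothetical stand-ins (a channel average and a dummy transcriber), not the thesis models; only the two-stage structure is meant to match the text.

```python
import numpy as np

def separate_speakers(mixture, n_speakers):
    # Hypothetical stand-in for a multichannel neural separation model:
    # here we simply average the channels and duplicate the result,
    # where a real separator would output one waveform per speaker.
    mono = mixture.mean(axis=0)
    return [mono.copy() for _ in range(n_speakers)]

def transcribe(waveform):
    # Hypothetical stand-in for a single-channel, single-speaker ASR model.
    return "<transcript>"

def cascade_transcription(mixture, n_speakers):
    # Stage 1: separate the mixture; Stage 2: run single-speaker ASR
    # independently on each separated signal.
    sources = separate_speakers(mixture, n_speakers)
    return [transcribe(s) for s in sources]

mixture = np.random.default_rng(0).standard_normal((4, 16000))  # 4 mics, 1 s @ 16 kHz
hyps = cascade_transcription(mixture, n_speakers=2)
```

The key design point is that the ASR model never sees the mixture, so any separation error is passed on to it unchanged, which is exactly the error-propagation issue discussed in the results below.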
Second, we propose an end-to-end multichannel speaker-attributed ASR (MC-SA-ASR) model, which builds on an existing single-channel SA-ASR model and incorporates a multichannel Conformer-based encoder with multi-frame cross-channel attention (MFCCA). Unlike traditional approaches that require a multichannel front-end speech enhancement model, the MC-SA-ASR model handles far-field microphones in an end-to-end manner. We also experimented with different input features, including Mel filterbank and phase features, for that model.
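The cross-channel attention idea can be sketched as follows. This is a simplified, parameter-free, single-frame version for illustration: the actual MFCCA mechanism is learned and also attends over neighboring frames, which this sketch omits.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_channel_attention(feats):
    # feats: (channels, time, dim). For each time frame, each channel
    # attends over all channels, so its output feature is a weighted
    # sum of every channel's feature at that frame.
    C, T, D = feats.shape
    x = feats.transpose(1, 0, 2)                    # (time, channels, dim)
    scores = x @ x.transpose(0, 2, 1) / np.sqrt(D)  # (time, C, C) similarities
    out = softmax(scores, axis=-1) @ x              # (time, C, D) fused features
    return out.transpose(1, 0, 2)                   # back to (channels, time, dim)

feats = np.random.default_rng(1).standard_normal((4, 50, 8))
fused = cross_channel_attention(feats)
```

Because the fusion happens inside the encoder, no separate front-end enhancement stage is needed, which is what makes the model end-to-end with respect to the microphone array.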
Lastly, we incorporate a multichannel beamforming and enhancement model as a front-end processing step, followed by a single-channel SA-ASR model to process the enhanced multi-speaker speech signals. We tested different fixed, hybrid, and fully neural network-based beamformers and proposed to jointly optimize the neural beamformer and SA-ASR models using the training objective for the latter.
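The joint optimization can be sketched with a toy differentiable pipeline. Here the beamformer is a plain linear combination of channels and the downstream loss is an MSE against a reference signal; both are hypothetical stand-ins so the example stays self-contained, whereas in the thesis the gradient flows through the SA-ASR model and its training objective.

```python
import numpy as np

rng = np.random.default_rng(0)
n_ch, n_samp = 4, 256
X = rng.standard_normal((n_ch, n_samp))            # multichannel input frames
target = X[0] + 0.1 * rng.standard_normal(n_samp)  # stand-in "clean" reference

def downstream_loss(y):
    # Hypothetical stand-in for the SA-ASR training objective:
    # a simple MSE against the reference signal.
    return np.mean((y - target) ** 2)

w = np.full(n_ch, 1.0 / n_ch)   # linear beamformer weights (uniform init)
init_loss = downstream_loss(w @ X)

lr = 0.1
for _ in range(200):
    y = w @ X                                # beamformed signal
    grad = 2.0 * X @ (y - target) / n_samp   # dL/dw, through the beamformer
    w -= lr * grad                           # update front-end from back-end loss

final_loss = downstream_loss(w @ X)
```

The point of joint training is visible even in this toy: the front-end weights are driven by the loss of the downstream task rather than by a separate signal-level enhancement criterion.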
In addition to these methods, we developed a meeting transcription pipeline that integrates voice activity detection, speaker diarization, and SA-ASR to process real meeting recordings effectively.
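The pipeline structure can be sketched as below. The energy-based VAD, dummy diarization, and dummy SA-ASR functions are hypothetical placeholders for the neural components of the actual pipeline; only the chaining of the three stages follows the text.

```python
import numpy as np

def voice_activity_detection(audio, frame=400, threshold=0.1):
    # Toy energy-based VAD: a frame is active when its mean energy
    # exceeds the threshold; contiguous active frames are merged into
    # (start, end) sample segments.
    n_frames = len(audio) // frame
    active = [np.mean(audio[i * frame:(i + 1) * frame] ** 2) > threshold
              for i in range(n_frames)]
    segments, start = [], None
    for i, a in enumerate(active):
        if a and start is None:
            start = i * frame
        elif not a and start is not None:
            segments.append((start, i * frame))
            start = None
    if start is not None:
        segments.append((start, n_frames * frame))
    return segments

def diarize(segment_audio):
    # Placeholder diarization: a real system clusters speaker embeddings;
    # here every segment gets a dummy speaker label.
    return "spk0"

def sa_asr(segment_audio, speaker):
    # Placeholder speaker-attributed ASR output (hypothetical).
    return (speaker, "<transcript>")

def transcribe_meeting(audio):
    results = []
    for start, end in voice_activity_detection(audio):
        seg = audio[start:end]
        results.append((start, end) + sa_asr(seg, diarize(seg)))
    return results

# One second of silence with a burst of "speech" in the middle.
audio = np.zeros(16000)
audio[4000:8000] = np.random.default_rng(0).standard_normal(4000)
out = transcribe_meeting(audio)
```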
Experimental results indicate that, while using a speaker separation model can enhance speech quality, separation errors can propagate to ASR, resulting in suboptimal performance. A guided speaker separation approach proves to be more effective. Our proposed MC-SA-ASR model proves effective at integrating multichannel information and at sharing information between the ASR and speaker blocks.
Experiments with different input features reveal that models trained with Mel filterbank features perform better in terms of word error rate (WER) and speaker error rate (SER) when the number of channels and speakers is low (2 channels with 1 or 2 speakers). However, for settings with 3 or 4 channels and 3 speakers, models trained with additional phase information outperform those using only Mel filterbank features. This suggests that phase information can enhance ASR by leveraging localization information from multiple channels.
Although MFCCA-based MC-SA-ASR outperforms the single-channel SA-ASR and MC-ASR models without a speaker block, the joint beamforming and SA-ASR model further improves the performance. Specifically, joint training of the neural beamformer and SA-ASR yields the best performance, indicating that improving speech quality might be a more direct and efficient approach than using an end-to-end MC-SA-ASR model for multichannel meeting transcription.
Furthermore, the study of the real meeting transcription pipeline underscores the potential for better end-to-end models. In our investigation into improving speaker assignment in SA-ASR, we found that the speaker block does not effectively improve ASR performance. This highlights the need for improved architectures that more effectively integrate ASR and speaker information.

Key words: Multichannel separation, end-to-end speaker-attributed ASR, meeting transcription, speaker diarization

Thesis Committee:

Reviewers:

  • Reinhold Häb-Umbach, Professor, University of Paderborn, Germany
  • Yannick Estève, Professor, Avignon Université, France

Examiners:

  • Marie Tahon, Professor, Université du Mans, France

Supervisors:

  • Emmanuel Vincent, Senior Research Scientist, Centre Inria de l’U. de Lorraine, France
  • Mostafa Sadeghi, Inria Starting Faculty Position, Centre Inria de l’U. de Lorraine, France
  • Imran Sheikh, Research Engineer, Vivoka, France
