SPEECH/AUDIO

U-Convolution Based Residual Echo Suppression With Multiple Encoders

김의성 (Kakao Enterprise), 전재진 (Kakao Enterprise), 서혜지 (Kakao Enterprise)

International Conference on Acoustics, Speech, and Signal Processing (ICASSP)

2021-06-13

Abstract

In this paper, we propose an efficient end-to-end neural network that can estimate near-end speech using a U-convolution block by exploiting various signals to achieve residual echo suppression (RES). Specifically, the proposed model employs multiple encoders and an integration block to utilize complete signal information in an acoustic echo cancellation system, and also applies the U-convolution blocks to separate near-end speech efficiently. The proposed network affords an improvement in the perceptual evaluation of speech quality (PESQ) and the short-time objective intelligibility (STOI), as compared to baselines, in scenarios involving smart audio devices. The experimental results show that the proposed method outperforms the baselines for various types of mismatched background noise and environmental reverberation, while requiring low computational resources.
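To make the described architecture concrete, the following is a minimal PyTorch sketch of a multi-encoder RES network with a U-convolution (U-Net style) separator. It is an illustrative assumption, not the authors' exact design: the choice of three input signals (AEC output, far-end reference, estimated echo), the magnitude-spectrogram features, the concatenation-based integration block, the layer sizes, and the mask-based output are all hypothetical.

```python
# Hypothetical sketch of a multi-encoder RES network with a U-convolution
# separator. Signal choices, integration scheme, and layer sizes are
# illustrative assumptions, not the published model.
import torch
import torch.nn as nn


class ConvBlock(nn.Module):
    """1-D convolution + batch norm + ReLU over spectrogram frames."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_ch, out_ch, kernel_size=3, stride=stride, padding=1),
            nn.BatchNorm1d(out_ch),
            nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)


class MultiEncoderRES(nn.Module):
    """Estimates a near-end speech mask from several AEC-system signals."""
    def __init__(self, n_bins=257, hidden=128):
        super().__init__()
        # One encoder per input signal (e.g. AEC output, far-end reference,
        # estimated echo); which signals are used is an assumption here.
        self.encoders = nn.ModuleList(
            [ConvBlock(n_bins, hidden) for _ in range(3)]
        )
        # Integration block: fuse the encoded streams into one representation.
        self.integrate = ConvBlock(3 * hidden, hidden)
        # U-convolution separator: downsampling path, bottleneck features, and
        # upsampling path with skip connections.
        self.down1 = ConvBlock(hidden, hidden, stride=2)
        self.down2 = ConvBlock(hidden, hidden, stride=2)
        self.up2 = nn.ConvTranspose1d(hidden, hidden, 4, stride=2, padding=1)
        self.up1 = nn.ConvTranspose1d(2 * hidden, hidden, 4, stride=2, padding=1)
        self.mask = nn.Sequential(nn.Conv1d(2 * hidden, n_bins, 1), nn.Sigmoid())

    def forward(self, aec_out, far_end, echo_est):
        # Inputs: magnitude spectrograms shaped (batch, n_bins, frames).
        feats = [enc(x) for enc, x in zip(self.encoders,
                                          (aec_out, far_end, echo_est))]
        h0 = self.integrate(torch.cat(feats, dim=1))
        h1 = self.down1(h0)              # frames / 2
        h2 = self.down2(h1)              # frames / 4
        u2 = self.up2(h2)                # back to frames / 2
        u1 = self.up1(torch.cat([u2, h1], dim=1))   # back to full resolution
        m = self.mask(torch.cat([u1, h0], dim=1))
        # Apply the estimated mask to the AEC output to suppress residual echo.
        return m * aec_out


if __name__ == "__main__":
    net = MultiEncoderRES()
    x = torch.randn(2, 257, 100).abs()   # dummy magnitude spectrograms
    y = net(x, x, x)
    print(y.shape)                       # torch.Size([2, 257, 100])
```

The intent of the sketch is only to show how per-signal encoders, an integration block, and a skip-connected encoder-decoder can be composed into a single mask-estimation network; the actual model's block definitions and training objective should be taken from the paper.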