SpeechWave project to address accuracy in speech recognition
Researchers set to improve speech recognition by using deep learning.

Speech recognition has made major advances in the past few years, and speech-based applications and assistants (such as Apple's Siri, Amazon's Alexa, and Google voice search) have become part of daily life for many people. However, speech recognition is still brittle and degrades severely when the acoustic environment changes: conditions that have essentially no effect on the accuracy of human speech recognition can have a catastrophic impact on the accuracy of a state-of-the-art automatic system.
In SpeechWave, researchers are addressing this brittleness by developing speech recognition models designed from the outset for acoustic robustness. Rather than relying on hand-crafted acoustic features, the project will pursue an alternative approach in which deep learning is used to learn speech recognition directly from the speech waveform. To do this, the team is combining expertise in speech and signal processing with expertise in machine learning, and has partnered with a range of organisations including the BBC and two UK startup companies.
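To give a rough sense of the waveform-based idea (this is an illustrative sketch only, not SpeechWave's actual model): conventional systems first compute fixed spectral features such as MFCCs, whereas a waveform model applies a bank of learned filters directly to the raw audio samples, so the feature extractor itself is trained along with the recogniser. The filter values and sizes below are arbitrary placeholders.

```python
import numpy as np

def conv1d_features(waveform, filters, stride):
    """Slide each (learned) filter over the raw waveform and apply a ReLU.

    In a real system the filter weights would be trained end to end;
    here they are random, purely to show the shape of the computation.
    """
    n_filters, width = filters.shape
    n_frames = (len(waveform) - width) // stride + 1
    out = np.empty((n_filters, n_frames))
    for t in range(n_frames):
        frame = waveform[t * stride : t * stride + width]
        out[:, t] = filters @ frame       # one dot product per filter
    return np.maximum(out, 0.0)           # ReLU non-linearity

# Toy example: 1 second of 16 kHz audio, 40 filters of 25 ms, 10 ms hop.
rng = np.random.default_rng(0)
wave = rng.standard_normal(16000)
filters = rng.standard_normal((40, 400)) * 0.01
feats = conv1d_features(wave, filters, stride=160)
print(feats.shape)  # (40, 98)
```

The output plays the same role as a spectrogram-like feature matrix (filters x time frames), but nothing about the filters is fixed in advance; that is what lets a waveform model adapt its front end to noisy or reverberant conditions.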
We are very excited to be working on SpeechWave with the team from King's College and our project partners. SpeechWave is a highly ambitious project that aims to address the principal problem in speech recognition: how to prevent automatic speech recognition systems from failing at levels of noise and reverberation with which humans cope effortlessly.