Artificial intelligence has come a long way in recent years, but a few obstacles still stand in its way. One of them is generating speech that sounds human. To reach the level of AI seen in Hollywood films like Her and Ex Machina, a machine will need to replicate human speech so convincingly that a listener cannot tell a talking computer from a talking person. DeepMind, the London-based research group acquired by Google in 2014, says it is closer than ever: with a new speech-generating program called WaveNet, it has cut the quality gap between human and machine speech in half.
DeepMind says WaveNet is far superior to existing artificial voice generators (also called text-to-speech, or TTS, systems). The downside is that WaveNet demands too much computing power to be practical in any product released in the near future. To bring the technology to consumer electronics, DeepMind will need to match WaveNet’s accuracy and quality without such a heavy processing load.
Despite that limitation, WaveNet is still a major breakthrough for text-to-speech systems, with far-reaching implications for everything from smartphones to movies. The reason it outperforms existing artificial voice generators is that, unlike the rest, WaveNet uses a neural network (technology loosely modeled on the way the human brain works) to analyze raw audio waveforms and generate speech directly from them. Listen to samples of WaveNet “speaking” and it’s readily apparent that the output sounds more human than Apple’s Siri or Microsoft’s Cortana.
Artificial Intelligence News brought to you by artificialbrilliance.com