The Future of Voice-Activated AI
For decades, science fiction has imagined a future in which humans talk to machines as naturally as they speak to family and friends. In reality, however, using voice to interact with machines has been maddeningly frustrating, with Siri mistaking “open up my email” for “look up some kale,” for example. This is changing. Increasingly, speaking to your mobile device elicits genuine surprise when Siri or Google Now understands your request and seamlessly executes it. Put simply, voice recognition in machines is getting very good, and it is going to get so good that it will completely change the way humans interact with their computing devices. The next few years in voice and speech recognition are going to be exciting. Here are some things to look forward to.
Voice recognition gets freakishly good. Voice recognition used to fall consistently short of our expectations, but recent major technology breakthroughs have begun to crack the code on speech recognition. In the past 18 months, commercial speech recognition technologies have seen a dramatic 30 percent improvement. To put that into perspective, that’s a bigger gain in performance than we’ve seen in the previous 15 years combined. These improvements are driven in part by deep learning approaches combined with massive data sets.
Deep learning is a technique used to build systems that achieve very high accuracy at tasks such as image analysis, speech recognition and language analysis, among other things. Most of the companies viewed as leaders in this space, such as DeepMind and Vicarious, do not yet make their platforms available to customers. A few companies do offer APIs that rely on deep learning; AlchemyAPI, for example, uses deep learning for image and language analysis.
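To make the idea concrete, here is a minimal sketch of what calling a cloud language-analysis API of this kind typically looks like from a developer’s point of view. The endpoint URL, API key and response fields below are hypothetical placeholders, not the actual interface of AlchemyAPI or any other vendor named above.

// Minimal sketch of calling a hypothetical cloud language-analysis API.
// The endpoint, credential and response shape are illustrative
// placeholders, not the real interface of any vendor named above.

interface EntityResult {
  text: string;       // the entity mention found in the input
  type: string;       // e.g. "Person", "Company", "City"
  relevance: number;  // confidence score between 0 and 1
}

async function extractEntities(sentence: string): Promise<EntityResult[]> {
  const response = await fetch("https://api.example.com/v1/entities", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Authorization": "Bearer YOUR_API_KEY", // hypothetical credential
    },
    body: JSON.stringify({ text: sentence }),
  });
  if (!response.ok) {
    throw new Error(`Language API returned ${response.status}`);
  }
  const payload = await response.json();
  return payload.entities as EntityResult[];
}

// Usage: the deep learning models stay in the cloud; the application
// only sends text and receives structured results.
extractEntities("Apple and Google are investing heavily in voice search.")
  .then((entities) => console.log(entities));

The point of such services is that the heavy lifting of training and running deep learning models happens on the provider’s servers, while the application simply sends text and gets back structured results.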
As more voice usage data becomes available, speech recognition accuracy will keep improving. This is what is known as the “virtuous cycle of AI”: the more people use voice interfaces, the more data is gathered; and the more data is gathered, the better the algorithms work, delivering dramatic improvements in accuracy.
Siri, Cortana and Google Now won’t be the only intelligent voice assistants. As computing devices of all shapes and sizes increasingly surround us, we’ll come to rely more on natural interfaces such as voice, touch and gesture. In the past, developing an intelligent voice interface was a complex undertaking, feasible only if you had the development team of a major corporation like Apple, Google or Microsoft. Today, however, thanks to a small but growing number of cloud-based APIs like MindMeld, developers can build an intelligent voice interface for any app or website without an advanced degree in natural language processing, as the sketch below illustrates.
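MindMeld’s own API is not shown here, but as a rough illustration of how little code a basic voice interface now requires, below is a sketch in TypeScript using the Web Speech API available in some modern browsers. The final, commented-out call that hands the transcript to a cloud understanding service is a hypothetical placeholder.

// Rough sketch: capturing a spoken request in the browser with the
// Web Speech API (exposed in Chrome as webkitSpeechRecognition).
// The sendToUnderstandingService step is a hypothetical placeholder
// for a cloud API such as those discussed in this article.

const SpeechRecognitionImpl =
  (window as any).SpeechRecognition || (window as any).webkitSpeechRecognition;

const recognition = new SpeechRecognitionImpl();
recognition.lang = "en-US";          // recognize US English
recognition.interimResults = false;  // report only final transcripts
recognition.maxAlternatives = 1;     // one best guess per utterance

recognition.onresult = (event: any) => {
  const transcript: string = event.results[0][0].transcript;
  console.log("Heard:", transcript);
  // Hypothetical next step: send the transcript to a cloud
  // understanding service to work out what the user wants.
  // sendToUnderstandingService(transcript);
};

recognition.onerror = (event: any) => {
  console.error("Speech recognition error:", event.error);
};

recognition.start(); // begin listening through the device microphone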
There aren’t many companies doing this, since it is one of the most complex areas of artificial intelligence research. On the consumer side, Google, Apple, Microsoft, Baidu and Amazon are investing heavily to make web-wide voice search better. For companies that do not have millions to invest in voice search technology, it’s possible to leverage a cloud-based service to create intelligent voice functionality. Companies that offer a cloud-based API to voice-enable applications include my company, Expect Labs, as well as Wit.ai and api.ai. The Siri founders are also working on Viv, but they have not yet launched a product, so it is unclear whether it will be relevant to this emerging generation of voice applications.
Computers will start listening to us non-stop…like the Star Trek computer. Machines already see better than humans, recognize objects better and hear better. Eventually they will also understand meaning better. What does a world where computers listen constantly look like? It will certainly change the way we interact with our devices. A conference room, automobile or wearable device that can listen to our conversations and understand what we need will eventually become the norm. This new world will emerge because we will all expect to have information at our fingertips at any time, no matter where we are. It may seem odd now, but it won’t be long before intelligent voice interfaces are built into all kinds of apps. Right now, companies invested in the connected home (e.g. Samsung and Comcast) are leading the way, but other technology companies are also testing the waters with devices like Amazon’s Echo and the Jibo robot.
Researchers will get closer to developing generalized intelligence. As AI systems get closer to understanding the full breadth of human knowledge, they will become much better at answering all kinds of questions. Eventually, machine learning techniques will help computer scientists develop a universal intelligent assistant that understands a large fraction of all human knowledge. Human knowledge, while vast, is not infinite. In fact, researchers estimate that a corpus of 100 to 500 billion concepts, or “entities,” would likely begin to approach the full extent of all useful human knowledge. With deep learning techniques getting better and better at extracting patterns from massive, internet-scale data sets, many AI researchers see the steps toward a form of generalized intelligence coming into focus.
Beyond 2015? Artificial intelligence gets smarter, but it won’t destroy human civilization…yet. There’s been a lot of fear-mongering of late about artificial intelligence. While any sci-fi moviegoer can envision numerous dangerous AI outcomes (automatically setting off nuclear warheads, stopping to reboot while in auto-driving mode, or destroying us all based on an ill-fated conclusion that humanity is the root of all problems), we are far from this dystopian reality. Today’s AI systems are so far from becoming self-aware that it is not even a useful exercise to speculate about when we might have to pull the plug. We will likely benefit from decades of incremental AI advances before any of us need to seriously confront the existential threat foretold by Hollywood movies.
How can we prevent our domination by robot overlords? Assimilation is inevitable. Resistance is futile. Seriously, though, we are a long way from even being able to constructively speculate about this. Over the next 15 years, computing systems are going to get very good at many specific human-like tasks, such as understanding images, video and language, and answering questions. There is not yet any evidence that this will lead to a higher-level intelligence that could rival the human brain. Some theorists speculate it might be possible, but at this point it is merely speculation. If higher-level intelligence does emerge from machine systems over the coming decades, we will certainly need a serious debate over the best way to prevent any chance of a robot apocalypse.