
Five years ago, there were 400 new chatbot startups, each promising to revolutionize retail, customer support, and ordering pizza. Last year, almost all of these companies had failed or pivoted. In its report "Lessons From The Failed Chatbot Revolution," CB Insights notes that Facebook and Google both implemented chatbots for customer service but ultimately abandoned them in favor of live support staff.
I get it, because I have never had a useful chatbot experience myself. Here's why.
A.I. (artificial intelligence) turned 65 this year. Chatbots, a branch of A.I. research, are now 55 years old: Joseph Weizenbaum's Eliza, the first chatbot, was launched in 1966 from MIT's A.I. Lab. Eliza was a text-to-text, word-matching bot: a user typed a statement, and if Eliza could match the words in that statement to words in a script, it returned a pre-written response. In 2005, Norbert Landsteiner released a JavaScript version of Eliza, which can be found at http://www.masswerk.at/eliza/.
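To make that mechanism concrete, here is a minimal sketch of Eliza-style word matching in Python (Weizenbaum's original was not written in Python, and his script was far larger); the rules below are invented for illustration:

```python
import re

# A toy Eliza-style script: each rule pairs a keyword pattern with a
# canned response template. These entries are invented for illustration.
RULES = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "Tell me more about feeling {0}."),
    (re.compile(r"\bmother\b", re.IGNORECASE), "Tell me about your family."),
]
FALLBACK = "Please go on."  # used when no keyword matches

def eliza_reply(statement: str) -> str:
    """Return the first matching canned response, or a fallback."""
    for pattern, template in RULES:
        match = pattern.search(statement)
        if match:
            return template.format(*match.groups())
    return FALLBACK

print(eliza_reply("I am stuck on this project"))
# -> Why do you say you are stuck on this project?
```

Note that nothing here models what the user meant; the bot only shuffles the user's own words back into a template.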
In 1995, Richard Wallace launched Alice, written in XML-based AIML (Artificial Intelligence Markup Language). AIML was easy to use and developed a huge following, and this AIML-style conversation became the backbone of Interactive Voice Response (IVR) phone systems: say the keyword, get a response. Today's chatbots have evolved very little from Eliza 55 years ago, and knowing this is the key to understanding why they work so poorly. A chatbot will never have any understanding of what you say.
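For a sense of how simple this format is, here is what a hypothetical AIML category looks like (the patterns and templates below are invented examples, not from Alice's actual script):

```xml
<aiml version="1.0">
  <!-- One category = one keyword pattern plus one canned reply. -->
  <category>
    <pattern>I WANT A PIZZA</pattern>
    <template>What toppings would you like?</template>
  </category>
  <!-- The * wildcard matches any words; no meaning is extracted. -->
  <category>
    <pattern>HELLO *</pattern>
    <template>Hi there! How can I help you today?</template>
  </category>
</aiml>
```

If the user's words match the pattern, the template comes back verbatim. Like Eliza, there is no model of what the user actually meant.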
When my co-founder, Bruce Wilcox, began designing our present system, he knew that we would have to go beyond simple chatbots. He wanted a system that could analyze natural conversation to extract the true message that a user was trying to convey. He found the answer in the branch of A.I. science called symbolic reasoning. We later improved on this by adding machine learning components where it made sense.
When SapientX hears a user speak, it converts the audio to text with a speech recognition system. This text is then analyzed much the way your English teacher taught you to break sentences down into verbs, nouns, and so on. From there, we use a form of pattern matching to map the sentence onto common patterns. This is computationally very efficient because one line of code may handle 1,000 variations of how someone might ask for something. The same approach also allows us to handle complex statements, understand user sentiment, and customize an appropriate response. SapientX also learns about a user in order to serve them better. The resulting text response is then converted back to spoken audio. All of this happens in a few milliseconds.
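As a rough illustration of why one pattern can cover so many phrasings, here is a sketch in Python (this is not SapientX's actual code; the intent name, pattern, and slot names are hypothetical):

```python
import re

# A hypothetical intent pattern. A single rule is written to absorb many
# surface phrasings: optional politeness, verb synonyms, optional slots.
INTENT_PATTERNS = {
    "order_pizza": re.compile(
        r"^(?:please\s+)?(?:i(?:'d| would)? like|i want|can i (?:get|have)|"
        r"give me|order)\s+(?:a|an|one|two|\d+)?\s*(?P<size>small|medium|large)?"
        r"\s*(?P<topping>\w+)?\s*pizzas?\b",
        re.IGNORECASE,
    ),
}

def match_intent(utterance: str):
    """Return (intent, slots) for the first pattern that matches, else None."""
    for intent, pattern in INTENT_PATTERNS.items():
        m = pattern.search(utterance)
        if m:
            return intent, {k: v for k, v in m.groupdict().items() if v}
    return None

# All of these different phrasings hit the same single rule:
for text in ("I'd like a large pepperoni pizza",
             "can I get two pizzas",
             "please order a medium mushroom pizza"):
    print(text, "->", match_intent(text))
```

One rule like this handles hundreds of variations of the same request, which is why rule counts can stay small even as coverage grows.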
At this point, you may be asking how SapientX can be better than the much-touted machine learning systems that Google and Apple use. The first answer is that they were designed for different tasks, so a direct comparison is unfair. The second answer is that Google, on its best day, scores only 88% accuracy (Apple gets 75%) according to ZDNet, while when I speak to SapientX, I'm able to reach up to 99% accuracy in narrow-domain use. I'll explain why the tech giants' systems fail in future posts.