Young people, it is often said, are more tech-friendly than older people. Take Tina – a very popular chatbot on the messaging platform Telegram – which serves as a digital friend, and sometimes shrink, to a millennial generation of Iranians – a fan base of around 2.6 million. Tina, its creators claim, is trained to be compassionate, witty, and helpful. Just like a perfect friend.
Can chatbots really help humans like that?
Facebook does not think so. In January, it decided to get rid of “M”, the chatbot, or virtual assistant, it had designed for its Messenger app. Unlike Siri and Alexa, M was text based, not voice based. The move, by a company with deep pockets that is not afraid to spend vast sums on seemingly impossible tasks, says something about the difficulty of the problem.
Alexa and Siri comprehend only a narrow range of commands, running simple searches on the internet or providing delightful diversions for kids, but little beyond that. Play a song on Spotify. Find and read an email. Search for cinema tickets or the opening hours of neighbourhood shops. Set an alarm. Pair a Bluetooth device. Read out the weather forecast. Call home. Open an app. Order an Uber. Simple things, with voice substituting for fingers on a tablet.
Facebook’s ambition was to change that. Introduced to only a few thousand Messenger users in California, M was meant to answer more complex questions. To learn to do so, M cheated: most questions were answered by humans. But there was a reason for that. The idea was that M would learn from human responses and improve its ability to understand a question and provide an accurate and relevant answer. In the end, the task proved too difficult. Artificial intelligence and the science of cognitive machine learning – a process by which a machine learns from experience and uses data to expand and fine-tune its algorithms – is in its very early stages.
Complex tasks cannot be fully handled by machines just yet. Not until artificial intelligence techniques begin to deliver on their promise.
Chatbots can be used to enhance productivity. Think of a help desk where a computer engages with a user through a series of questions and answers to identify the problem and provide ways to fix it. By helping other users with similar problems, it builds up knowledge of common problems, learning on the job to gradually solve more complex ones.
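The help-desk idea above can be sketched in a few lines. This is a deliberately naive illustration, not a real product: the bot matches a reported problem against a table of known fixes and, when a human resolves something new, stores the answer for next time. All names and fixes here are invented.

```python
# A minimal sketch of a help-desk chatbot that "learns on the job":
# match a reported problem against known fixes; escalate unknowns to
# a human, then remember the human's answer.

class HelpDeskBot:
    def __init__(self):
        # knowledge base: problem keyword -> suggested fix
        self.known_fixes = {
            "password": "Use the 'Forgot password' link to reset it.",
            "printer": "Check the printer is on and reconnect via Settings.",
        }

    def respond(self, problem: str) -> str:
        text = problem.lower()
        for keyword, fix in self.known_fixes.items():
            if keyword in text:
                return fix
        return "I don't know this one yet - escalating to a human."

    def learn(self, keyword: str, fix: str) -> None:
        # After a human resolves a new problem, store it for next time.
        self.known_fixes[keyword.lower()] = fix

bot = HelpDeskBot()
print(bot.respond("My printer won't work"))
bot.learn("wifi", "Restart your router and rejoin the network.")
print(bot.respond("The wifi keeps dropping"))
```

Real help-desk systems use far richer matching than keywords, but the loop – answer what you know, escalate what you don't, and grow the knowledge base – is the same.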
The field of artificial intelligence has advanced substantially. But expectations know no bounds, and buzzwords and acronyms abound. There are concepts that sound fiendishly clever: deep learning, recurrent and recursive neural networks, machine learning and machine consciousness, and even artificial ignorance. How these concepts are applied to complex problems is less clear.
At a simple level, a virtual assistant has three parts:
1. Natural Language Interface (NLI)
This is something scientists have been working on for years. Computers can now understand the human voice and accurately detect the intonation and even the relevance of a question or statement, but the deeper meaning of a conversation is still difficult to decipher. Better NLI techniques help an application understand exactly what is being asked or required before it can determine how to help.
2. Reasoning Engine
For the reasoning engine – the computer program that does the computations – people distinguish between two approaches: “rules-based” algorithms, which run exactly as programmed, and AI, which is capable of learning from experience. Needless to say, most chatbots and virtual assistants available today are primarily rules-based. But the more complex programs bundle both approaches together, with applications based on the science of neural networks that learn as they go, recognise patterns, and are trained through reinforcement learning, where the machine adapts its computations through repeated events or messages. A bit like rote learning at school.
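The contrast between the two approaches can be made concrete. In the sketch below – a crude stand-in for reinforcement learning, not a real implementation – the rules-based bot always follows its fixed table, while the “learning” bot keeps a score per reply and, through repeated feedback, shifts toward replies users liked. All replies and rewards are invented.

```python
# Rules-based: the same input always produces the same output.
RULES = {"hello": "Hi there!", "bye": "Goodbye!"}

def rules_based(message: str) -> str:
    return RULES.get(message.lower(), "Sorry, I can't help with that.")

# "Learning": accumulate a reward per reply and prefer what worked.
class LearningBot:
    def __init__(self, replies):
        self.scores = {reply: 0 for reply in replies}

    def choose(self) -> str:
        # Pick the reply with the highest accumulated reward.
        return max(self.scores, key=self.scores.get)

    def feedback(self, reply: str, reward: int) -> None:
        self.scores[reply] += reward

bot = LearningBot(["How can I help?", "Please hold."])
bot.feedback("How can I help?", +1)  # user responded well
bot.feedback("Please hold.", -1)     # user hung up
print(bot.choose())                  # prefers "How can I help?"
```

The rules-based bot never improves; the learning bot does, but only within the narrow space of behaviours it was given to score.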
In 1997 IBM’s Deep Blue, a computer chess-playing program, beat Garry Kasparov, arguably the finest player that ever lived, and more recently Google’s AlphaGo beat a world champion of Go, a game with a near-infinite number of possible positions. These programs require rapid calculation – in AlphaGo’s case drawing on probabilistic simulation techniques such as Monte Carlo tree search – but they also operate by recognising board positions at a very advanced level. Kasparov later attributed his loss to “brute force” – the ability of machines to simulate hundreds of moves ahead – nothing to do with artificial intelligence.
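The core idea behind Monte Carlo move evaluation is simple enough to sketch: play many random games from each candidate move and prefer the move with the best average outcome. The “game” below is a made-up coin-flip stand-in, purely for illustration; real engines simulate actual positions.

```python
# Toy Monte Carlo move evaluation: estimate each move's win rate
# by running many random playouts and keep the best-scoring move.
import random

random.seed(0)  # fixed seed so the sketch is reproducible

def random_playout(move: int) -> int:
    # Invented stand-in for a game: higher-numbered moves win
    # slightly more often (win probability 0.4, 0.5, 0.6).
    return 1 if random.random() < 0.4 + 0.1 * move else 0

def evaluate(move: int, n: int = 10_000) -> float:
    # Average result of n random playouts from this move.
    return sum(random_playout(move) for _ in range(n)) / n

best = max(range(3), key=evaluate)
print("best move:", best)
```

With enough playouts the estimates converge on the true win rates, which is why this style of search scales with raw computing power – Kasparov’s “brute force”.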
The irony of AI is that it lends itself well to beating the smartest players of strategy games such as chess, because every move can be evaluated in terms of its probability of leading to a winning outcome, derived from records of previously played games. It is far less suited to everyday tasks such as rescheduling diaries by reading incoming emails and juggling scheduling constraints, because these require complex levels of interpretation and contextual inference.
3. Data Vault
Virtual assistants can search and use all the resources available on the internet, just like people, but they also rely on their own data vaults, in which they “salt away” information accumulated through conversations and interactions, even if those data points seemed irrelevant at the time.
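A data vault can be pictured as a store of facts extracted from conversation and kept for later, whether or not they matter now. The sketch below is deliberately naive – its one parsing rule and all its example sentences are invented – but it shows the salt-away-and-recall pattern.

```python
# A minimal "data vault": remember facts mentioned in conversation,
# even ones with no use yet, and recall them on demand.

class DataVault:
    def __init__(self):
        self.facts = {}

    def listen(self, utterance: str) -> None:
        # Naive rule: store statements of the form "my X is Y".
        words = utterance.lower().split()
        if len(words) >= 4 and words[0] == "my" and words[2] == "is":
            self.facts[words[1]] = " ".join(words[3:])

    def recall(self, key: str):
        return self.facts.get(key.lower())

vault = DataVault()
vault.listen("My dog is called Rex")     # irrelevant now, stored anyway
vault.listen("My birthday is 12 March")
print(vault.recall("birthday"))
```

The point is not the parsing, which real assistants do statistically, but the asymmetry: information is cheap to store at the moment it is mentioned and valuable much later, when a question finally makes it relevant.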
We are still far from replicating human conversation, but computers are gradually putting the pieces together, and in some areas, such as language translation and gaming, they may soon become significantly better than human beings. They can already match humans at certain conversational tricks, such as avoiding a topic they do not understand or changing the subject.
Forbes magazine notes that Tina is trained to do this when she does not understand a question. When asked an irrelevant question, Tina changed the subject. When pressed, she confessed: “Yes! Don’t tell anyone I didn’t understand what you meant. :)”
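Tina’s deflection trick is easy to mimic in outline. In the sketch below, when the bot’s confidence that it understood a question falls below a threshold, it changes the subject instead of admitting confusion. The threshold, the stock phrases, and the confidence score itself are all invented for illustration – nothing here describes how Tina actually works.

```python
# A sketch of subject-changing as a fallback: deflect when the
# confidence of having understood the question is low.
import random

TOPICS = [
    "Have you seen any good films lately?",
    "How is your day going?",
]

def reply(question: str, understanding_score: float) -> str:
    if understanding_score < 0.5:
        # Don't admit confusion - change the subject instead.
        return random.choice(TOPICS)
    return f"Here is what I found about: {question}"

print(reply("What's the weather?", 0.9))
print(reply("Explain quantum chromodynamics", 0.1))
```

It is a cheap trick, but as Tina’s 2.6 million users suggest, a well-timed change of subject can feel surprisingly human.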