Why Does AI ≠ ML? A Dialogue With a Machine in Natural Language

Author: Stanislav Ashmanov, CEO of Nanosemantics Lab, Ashmanov Neural Networks, SOVA.AI.

In recent years, conferences on artificial intelligence, machine learning, and chatbots have come to equate artificial intelligence with machine learning, and machine learning with neural networks.

The hype surrounding “Machine Learning” over the last few years has persuaded much of the IT industry that AI works like this: you gather giant Big Data, feed it to an algorithm, and it produces insights and finds perfect solutions to any problem on its own.

We would like to discuss why AI is not 100% machine learning in the context of what we do best. We are the SOVA Project, and we have been building virtual assistants and chatbots for more than 15 years. Below we explain exactly how machine learning benefits chatbots, and we examine some of the myths and inflated expectations surrounding it.

What chatbots are we talking about?

Our customers are large banks, telecom operators, and retail chains. The chatbots we make for them are virtual consultants for customer support. Their main job is to reduce the load on the contact center. They are placed on websites, in mobile apps, and in messengers, where they talk with customers who have questions about products and services.

There are other kinds of chatbots (“button” bots for quick product orders, bots for internal employee support, promo bots for marketing campaigns, etc.), each of which performs different tasks and has different requirements. Those are also interesting, but we’ll talk about them some other time.

For our largest clients, the cost of contact-center operations reaches millions of dollars a year. Contact centers handle a huge volume of requests: up to 100,000 a month. The number of distinct question types they handle can range from a few hundred to a few thousand.

A well-designed and well-trained chatbot can deliver significant savings on those round-the-clock operators. In several of our projects, a chatbot took over 50-80% of all incoming requests, significantly saving our clients money and reducing queue load.

The functions of a chatbot in a contact center

A contact-center chatbot should do a few things (a rough routing sketch follows this list):

  • Know the answers to as many informational requests as possible (“What are your rates?”, “Where is your closest office?”, “How can I change my phone company, but still keep the same number?”),

  • Be able to recognize and fulfill transactional requests (“I want to change my rate”, “What’s my balance?”), by completing a query in the CRM database and/or billing system,

  • Be able to understand when a question is too complicated and quickly and carefully transfer the client to a live employee from the support or sales department, while still maintaining the context of the conversation.
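
To make these three functions concrete, here is a rough Python sketch of the top-level routing. The keyword checks, handler names, and canned answers are hypothetical placeholders; a real bot replaces the keyword checks with proper intent recognition and the stubs with real integrations.

```python
# Rough routing sketch: intent detection is reduced to keyword checks,
# and the billing call and agent handoff are stubbed out for illustration.
FAQ = {
    "rates": "You can find our current rates on the website.",
    "office": "Our closest office is at 1 Main Street.",
}

def query_billing(customer_id: str) -> str:
    # Stub for a call to an external billing/CRM system.
    return f"Customer {customer_id}: your balance is available in the app."

def escalate_to_agent(context: dict) -> str:
    # Stub for a handoff to a live operator, preserving the dialog context.
    return "Transferring you to an operator, one moment please."

def handle(utterance: str, context: dict) -> str:
    text = utterance.lower()
    if "balance" in text:                      # transactional request
        return query_billing(context["customer_id"])
    for topic, answer in FAQ.items():          # informational request
        if topic in text:
            return answer
    return escalate_to_agent(context)          # too complicated: hand off

print(handle("What are your rates?", {"customer_id": "42"}))
```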

How exactly query types are recognized, and how the bot's knowledge base is filled and updated, depends on how it is implemented.

If the technology copes with cognitive tasks, for example automating the resolution of customer problems, then it can be called artificial intelligence, although not in the Hollywood sense of the term.

Terminology

In today's fashionable machine learning terminology for chatbots, recognizing the type of question is called intent recognition, conducting a dialog according to a script or dialog tree is called dialog management, and extracting parameters from user utterances (for example, names, tariff names, mobile numbers, etc.) is called slot filling.

We implement all of these stages in our projects using the rule-based approach, or the “correct” approach: bots are taught possible user questions via templates with quantifiers, written in a dialog description language similar in spirit to regular expressions.
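
As a rough analogy (not our Dialog Language, just plain Python regular expressions), a single pattern with synonym groups and optional parts can already cover many phrasings of the same question:

```python
import re

# A rough analogy to a rule-based template: synonym groups and optional
# parts let one pattern cover many phrasings of the same question.
CHANGE_TARIFF = re.compile(
    r"(?:i (?:want|would like|need) to|how (?:do|can) i|can i)\s+"
    r"(?:change|switch|upgrade)\s+(?:my\s+)?(?:rate|tariff|plan)",
    re.IGNORECASE,
)

for phrase in [
    "I want to change my rate",
    "How do I switch my plan?",
    "Can I upgrade my tariff",
]:
    print(phrase, "->", bool(CHANGE_TARIFF.search(phrase)))
```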

This is how Joseph Weizenbaum taught his ELIZA, the first virtual companion in history, and this is how the well-known AIML (Artificial Intelligence Markup Language) was originally conceived. All of this was invented in the last century, yet it remains the only approach to man-machine dialog in natural language that works well and predictably.

There are open-source tools for building rule-based chatbots – for example, the ChatScript project.

Our own language is called Dialog Language. It is more powerful, simpler, and more convenient than AIML, but it is not currently available to third-party developers.

The “correct” approach

Rule-based chatbots are made like this:

  1. Problem statement. Together with the customer, we define the bot's general functions, its positioning, and which types of questions it should answer (it helps if the customer already has an FAQ, a website, and an internal knowledge base). In addition, the chatbot's identity is designed (personality, style, area of knowledge, for example).

  2. Creating general templates and dialog trees. Data engineers (specialists in knowledge bases) write templates in a special language to identify possible questions. The templates can cover hundreds of thousands of correct language constructions that a native speaker could come up with. The templates are then combined into dialog trees, with context memory and extracted variables.

  3. Extracting query parameters. During the dialog, the need to extract data from a user request always arises (slot filling). This is handled by named entity recognition (NER), which can be implemented in different ways (see the sketch after this list).

  4. Access to external services. Both informational and transactional requests may require calls to external services (databases, CRM, etc.). Chatbots access external systems (billing, CRM, authorization databases, etc.), pass parameters, receive information, and then form a natural-sounding response for the user.
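
A minimal sketch of the slot-filling step mentioned in point 3, using plain regular expressions rather than any particular NER library; the patterns and slot names are illustrative only:

```python
import re

# Illustrative slot extractors; a production system would use a proper
# NER component, dictionaries, or morphology-aware rules instead.
SLOT_PATTERNS = {
    "phone_number": re.compile(r"\+?\d[\d\s\-()]{8,14}\d"),
    "tariff_name": re.compile(r"\b(unlimited|family|basic)\b", re.IGNORECASE),
}

def fill_slots(utterance: str) -> dict:
    """Return every slot whose pattern matches the user utterance."""
    slots = {}
    for name, pattern in SLOT_PATTERNS.items():
        match = pattern.search(utterance)
        if match:
            slots[name] = match.group(0)
    return slots

print(fill_slots("Switch me to the Unlimited plan, my number is +1 202 555 0173"))
# e.g. {'phone_number': '+1 202 555 0173', 'tariff_name': 'Unlimited'}
```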

Then come internal and external testing, presenting the chatbot to the customer, and deploying it in our cloud or on the customer's servers. In general, there doesn't seem to be anything tricky here. But the devil is in the details.

Criticism of the “correct” approach

The usual complaint about the rule-based approach is dissatisfaction with the need to write templates. Programmers, IT directors, and former programmers hate it, as a rule. Beyond the gut reaction of “this is nonsense – everything is written by hand!”, there is a common objection: “Are you serious? It's impossible to think of all the possible variations in advance – it's a huge job that yields very incomplete coverage.”

It would seem so, but no. The expressive power of the dialog description language lets one template cover tens or hundreds of thousands of question wordings. Strange as it may seem, writing a rich template turns out to be dramatically more efficient than machine learning.

Sample template in our Dialog Language (DL): https://nanosemantics.ai/wp-content/uploads/2018/09/%D0%9F%D1%80%D0%B8%D0%BC%D0%B5%D1%80-%D1%88%D0%B0%D0%B1%D0%BB%D0%BE%D0%BD%D0%B0-1024x246.png

As for us, over 15 years we have accumulated a massive database of templates and word lists for various industries and subject areas (banking, air travel, retail, insurance, etc.). It is our key intellectual property, more important than our chatbot engine and the Dialog Language themselves.

Because we have such a base, it is very simple for us to build a new chatbot (for example, for a bank) or master a new topic (like retail): in total, it takes only an extra 1-2 months of work for two data engineers.

Chatbots built on machine learning

In contrast to the rule-based approach described above, machine learning chatbots are built according to the following scheme:

  1. Data intake. From the client, we get a log of hundreds of thousands of dialogs from the contact center.

  2. Data cleanup. (What does cleanup mean? We'll explain below.)

  3. Markup. In the cleaned-up data, we label intents: every utterance is assigned to a specific type.

  4. Training. We feed the “utterance + intent” dataset to a machine learning algorithm. Which particular algorithm is not very important: usually it is an SVM (Support Vector Machine) or a DSSM (Deep Structured Semantic Model), with preliminary word vectorization.

The result is that we can classify incoming utterances by type. It is then assumed that, knowing the type of question, we can guide the person through the corresponding dialog tree. Enthusiasts of machine learning, of course, do not want to build these trees manually either. So the intent classifier is applied again, this time not to the initial utterances but to the follow-up ones. And so on.
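
As a minimal sketch of step 4, here is an intent classifier built with scikit-learn, using TF-IDF word vectorization and a linear SVM; the example intents and phrases are made up for illustration:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Tiny illustrative "utterance + intent" dataset; a real project would
# use thousands of labeled utterances from the contact-center log.
utterances = [
    "what are your rates",
    "how much does the basic plan cost",
    "i want to change my tariff",
    "switch me to another plan",
    "what is my current balance",
    "how much money is left on my account",
]
intents = [
    "rates_info", "rates_info",
    "change_tariff", "change_tariff",
    "check_balance", "check_balance",
]

# TF-IDF vectorization followed by a linear SVM classifier.
classifier = make_pipeline(TfidfVectorizer(), LinearSVC())
classifier.fit(utterances, intents)

print(classifier.predict(["how do i switch my plan"]))  # e.g. ['change_tariff']
```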

At the right moment, for transactional requests, the chatbot extracts the values of variable parameters using the named entity recognition described above, just as in the rule-based approach.

Criticism of the ML approach

Eight of our employees work on machine learning, and we are in constant contact with our clients about their conversation logs. For this reason, we are well aware of the problems with the machine learning chatbot implementations described above:

  1. Too little data. Clients often don't have a database of dialogs, or they do, but as audio recordings from the call center. Those can be transcribed with automatic speech recognition, but the quality is very low and requires a lot of subsequent cleanup.

  2. The data is sensitive. Even if there is a database, the client may not be able to hand it over without violating personal data laws. The client then has to edit the database on their side, deleting anything that reveals a customer's identity.

  3. The database contains junk. Even if the client has a database and cleans it up, there remains the problem of removing answers that apply to specific people. For example, the answer to “What's my telephone balance?” in the database is a specific sum for each specific client. The “25 dollars” from the training sample has to be thrown out so that when Peter asks for his September 2018 balance, he isn't given Kate's balance from December 2016 (see the sketch after this list).

  4. The database has no references to external services. To process transactional requests, there must be a transaction, that is, a call to an external service. The dialog log contains no such calls, only answers about what balance Kate had in 2016. This logic cannot be learned from data; the developer still has to analyze the conversation logs and embed the calls to billing, CRM, etc. into the chatbot's dialogs.

  5. The markup is still manual. Even with a cleaned-up database, the intent markup remains, and it is a big manual job. Yes, the people who do it can sometimes be paid 2-3 times less than data engineers, but markup is tedious work, and annotators are often not very careful about it. Moreover, data engineers actually do a smaller volume of work: they look at example dialogs, FAQs, etc., understand what clients may ask and how, and then write a good template. A typical enterprise project takes us 2-3 months with 2-3 data engineers. That is not such a giant amount of work.

  6. A fundamental incompleteness of the data. No matter how large the dialog logs are, they are fundamentally incomplete. A native speaker can ask a question, grammatically correct and natural, that has never been asked before. If a request contains 5 words and each word has a few synonyms, the number of possible variations is already in the tens of thousands for just one question (for example, 7 synonyms per word gives 7^5 ≈ 16,800 phrasings). Often the share of new questions reaches 10% per month. A bot trained on the log of past dialogs cannot cover these new questions and therefore makes mistakes. This is exactly where writing templates with good coverage and variability has a huge advantage over machine learning from examples.

  7. Poor dialog control. Recognizing “intents” at each step is not the same as a dialog tree with memory of answers and context. For high-responsibility applications, such “guessing” of intent is not good enough.
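
As a tiny illustration of the cleanup problem from points 2 and 3, here is a sketch that masks obvious personal and client-specific values before the data goes to markup. The patterns are simplistic and purely illustrative:

```python
import re

# Simplistic, purely illustrative scrubbers; real cleanup needs many more
# patterns plus manual review to satisfy personal-data requirements.
SCRUBBERS = [
    (re.compile(r"\+?\d[\d\s\-()]{8,14}\d"), "<PHONE>"),
    (re.compile(r"\b\d+(?:\.\d{1,2})?\s?(?:dollars|rubles|usd|\$)", re.IGNORECASE), "<AMOUNT>"),
    (re.compile(r"\b[A-Z][a-z]+ (?:19|20)\d{2}\b"), "<MONTH_YEAR>"),
]

def scrub(utterance: str) -> str:
    """Replace client-specific values so they never reach the training set."""
    for pattern, placeholder in SCRUBBERS:
        utterance = pattern.sub(placeholder, utterance)
    return utterance

print(scrub("Your balance in December 2016 was 25 dollars"))
# -> "Your balance in <MONTH_YEAR> was <AMOUNT>"
```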

We'd also like to note separately that machine learning systems are often unpredictable. Testing them, especially for completeness, is a separate, complicated job.

That is, we cannot guarantee our clients that the chatbot will behave predictably. And this is a firm requirement, often written by the clients directly into the contract's terms of reference.

Rule-based + ML = friendship

If somebody tells you that they built a well-functioning chatbot entirely on a neural network or on machine learning, know that they are misleading you. Either the chatbot does not actually work well, or machine learning is only 5% of the whole system.

But machine learning can benefit us in the following ways:

  • Typo correction in input phrases

  • Selection of synonyms for words to assist the data engineer in the development process

  • Clustering the contact-center logs to identify frequent requests and help the data engineer (see the sketch at the end of this section)

  • Automatic answers for FAQ-bots (interesting technology – we’ll talk about it in another article)

  • Fuzzy search over question-answer pairs for informational requests

These are all useful applications that save development time and improve the overall quality of chatbots.
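
For instance, a minimal sketch of the log-clustering idea mentioned above, using scikit-learn's TF-IDF and k-means; the log lines and the number of clusters are made up for illustration:

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

# Tiny made-up fragment of a contact-center log; real logs contain
# hundreds of thousands of lines.
log_lines = [
    "how do i change my tariff",
    "i want to switch my plan",
    "what is my balance",
    "how much money do i have left",
    "where is your nearest office",
    "closest branch to me please",
]

vectors = TfidfVectorizer().fit_transform(log_lines)
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(vectors)

# Group lines by cluster so a data engineer can spot frequent request types.
for cluster_id in range(3):
    members = [line for line, label in zip(log_lines, kmeans.labels_) if label == cluster_id]
    print(cluster_id, members)
```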

Conclusions

The goal of this article was to explain why you should not expect quick and easy success from machine learning in the field of chatbots:

  • Chatbots cannot be built entirely with machine learning: machine learning is a small addition to the system that increases development efficiency.

  • Chatbots can and should be built with the rule-based approach: it works, it is quite an uphill task, but it guarantees quality.

  • Artificial intelligence is not only machine learning: besides ML, there are also rule-based chatbots, expert systems, logic, and other areas.

The hype surrounding machine learning has made it to our industry, and maybe soon it will make it to yours.