The key to a bot understanding humans is its ability to recognize the user's intention, extract the relevant information from it, and of course take the relevant action on that information.
NLP (Natural Language Processing) is the science of extracting the intention of a text and the relevant information it contains. The reason you see so many bot platforms popping up like mushrooms is the advent of NLP-as-a-service platforms. Connecting to channels and developing bots was never the problem; the missing link was an NLP platform that could scale and was easy to work with, because you won't want to learn NLP just to make a silly bot!
Some popular NLP-as-a-service platforms are:
1. LUIS.ai — By Microsoft (BTW I work for MS)
2. Wit.ai — By Facebook
3. Api.ai — By Google
4. Watson — By IBM
An ideal Bot platform offers
1. An NLP service — that you can train yourself.
2. An SDK to support and handle conversations and their metadata.
3. A Platform to host the bot code
4. A Platform to connect the Bot logic with multiple channels
While NLP-as-a-service platforms help developers build NLP capabilities in as little time as possible, at times developers find themselves at their wits' end trying to understand the basic jargon of NLP and to train their NLP service to the best of its ability.
Each NLP service bootstraps with its own corpora of language and domain. These corpora give the models the ability to understand the language, grammar, and terminology of a certain domain, so you must choose the most suitable domain when deploying the NLP service.
In this article, I will point out some best practices for training your NLP-as-a-service models.
Intent — Simply put, intents are the intentions of the end user; these intentions, or intents, are conveyed by the user to your bot. You can put your intents into two main categories:
1. Casual Intents
2. Business Intents
1. Casual Intents — I also call them 'Small talk' intents. These intents are the openers or closers of a conversation. Greetings like "hi", "hello", "Hola", "Ciao" or "bye" are the opening or closing statements in a conversation. These intents should direct your bot to respond with a small-talk reply like "Hello, what can I do for you today?" or "Bye, thanks for talking to me".
The casual intents also comprise Affirmative and Negative intents, for utterances like "Ok", "yes please", "No not this one but the first one", "Nope".
Having general affirmative and negative intents helps you handle all such utterances and interpret them in the context of the conversation the bot has just had with the client.
For example — if the bot has just asked the end user a question, you should expect either an affirmative or a negative intent; if it's anything else, the bot can ask the same question again. Your affirmative and negative intents should be able to handle most such utterances.
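This fall-back logic can be sketched in a few lines of Python. The intent names ("Affirmative", "Negative") and the dictionary session store are illustrative assumptions, not any particular bot SDK's API:

```python
# Hypothetical sketch: handling a reply after the bot asked a yes/no question.
# The session dict stands in for whatever conversation state your SDK keeps.

def handle_reply(intent: str, session: dict) -> str:
    """Route an affirmative/negative intent against the pending question."""
    pending = session.get("pending_question")
    if pending is None:
        return "What can I do for you today?"
    if intent == "Affirmative":
        session.pop("pending_question")
        return f"Great, proceeding with: {pending}"
    if intent == "Negative":
        session.pop("pending_question")
        return "Okay, let's try something else."
    # Anything else: repeat the question, as suggested above.
    return f"Sorry, I didn't catch that. {pending}"

session = {"pending_question": "Shall I book the 7 PM show?"}
print(handle_reply("Affirmative", session))  # Great, proceeding with: Shall I book the 7 PM show?
```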
2. Business Intents — These are the intents that map directly to the business of the bot. For example — if it's a movie-information bot, then an utterance from the client like "When was Schindler's list released?" is a business intent that intends to find out the release year of Schindler's List, and you should label it accordingly with an understandable name like "GetReleaseYearByTitle".
Ideally you should spend most of your thinking on business intents, because the rest of the small talk, like saying hello or affirming choices, is taken care of by the general casual intents.
Business intents have metadata about the intent called "Entities". Let's take an example for the intent "GetReleaseYearByTitle" — sample utterance: "When was Schindler's list released?"
Here "Schindler's list" is the title of the movie for which the user "intends" to find out the release year. The process of finding the entities can be understood as part-of-speech (POS) tagging. However, as a user of NLP as a service you don't need to get into the technicalities of how POS tagging works, but if you do want to, here is a nice page on it: http://nlp.stanford.edu/software/tagger.shtml
Whenever you design your intents, the entities must also be identified and labelled accordingly. Again, you can have general entities labelled for use across all intents, such as metrics (including quantity, count, volume) and dates, and most NLP-as-a-service platforms let you tag entities of such general types without any big hassle.
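To make the idea of "general" entity types concrete, here is a toy, regex-based illustration of tagging quantities and dates. Real services use trained recognizers and richer formats; the patterns and labels here are purely illustrative:

```python
import re

# Toy patterns for two "general" entity types. NLP services ship trained
# recognizers for these; regexes here are only for illustration.
GENERAL_PATTERNS = {
    "date": re.compile(r"\b\d{4}-\d{2}-\d{2}\b"),
    "number": re.compile(r"\b\d+(?:\.\d+)?\b"),
}

def tag_general_entities(text: str) -> list:
    """Tag dates first, then numbers, so a date's digits aren't re-tagged."""
    found = []
    masked = text
    for etype in ("date", "number"):
        pattern = GENERAL_PATTERNS[etype]
        for m in pattern.finditer(masked):
            found.append({"type": etype, "value": m.group(), "start": m.start()})
        # Blank out matched spans so later patterns skip them.
        masked = pattern.sub(lambda m: " " * len(m.group()), masked)
    return sorted(found, key=lambda e: e["start"])

print(tag_general_entities("Order 2 tickets for 2017-03-14"))
# [{'type': 'number', 'value': '2', 'start': 6}, {'type': 'date', 'value': '2017-03-14', 'start': 20}]
```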
Some entities may be labelled as composite entities, that is, entities having more than one entity (component entities) inside them. It doesn't matter if your NLP service lacks this feature, as long as you have simple entity labelling. You must define the component entities before labelling composite entities.
For example: "Find me a pair of Size 8 Red Adidas Sport shoes."
Intent — SearchProduct
Composite Entity — ProductDetail
Component Entities —
Size — 8
Brand — Adidas
Color — Red
Category — Sport Shoes
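A parse of that utterance might come back shaped like the following. The field names are an assumption for illustration; each service has its own response schema:

```python
# Illustrative sketch of how an NLP service might return the intent and
# composite entity for the utterance above.

utterance = "Find me a pair of Size 8 Red Adidas Sport shoes."

parsed = {
    "query": utterance,
    "intent": "SearchProduct",
    "entities": [
        {
            "type": "ProductDetail",  # composite entity
            "children": [             # component entities
                {"type": "Size", "value": "8"},
                {"type": "Brand", "value": "Adidas"},
                {"type": "Color", "value": "Red"},
                {"type": "Category", "value": "Sport Shoes"},
            ],
        }
    ],
}

# Flatten the component entities into a simple filter dict for a product search.
filters = {c["type"]: c["value"] for c in parsed["entities"][0]["children"]}
print(filters)  # {'Size': '8', 'Brand': 'Adidas', 'Color': 'Red', 'Category': 'Sport Shoes'}
```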
Training for Intents and Entities
Ideally one should train the NLP service with a real corpus: if you have chat messages with your clients over Facebook, Skype, or whatever channel you work with, those messages/utterances can help in training for intents. Otherwise, you can train for an intent with your own "manufactured" utterances. For example — training for the intent "GetReleaseYearByTitle" can have utterances like
“what was the release year of movie Pulp fiction”
“in which year Pulp fiction was released”
“when did pulp fiction came” — Bad English I know 🙂
“When was Pulp fiction released”
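These manufactured utterances can be turned into labelled training examples by marking the entity span in each one. The schema below (text/intent/entities with character positions) is illustrative, not any specific service's API:

```python
# Hypothetical training payload: each example labels the movie title
# entity by its character span inside the utterance.

def label_utterance(text: str, intent: str, entity_type: str, value: str) -> dict:
    """Build one labelled training example, marking the entity span."""
    start = text.lower().index(value.lower())
    return {
        "text": text,
        "intent": intent,
        "entities": [{
            "type": entity_type,
            "startPos": start,
            "endPos": start + len(value) - 1,
        }],
    }

examples = [
    label_utterance(u, "GetReleaseYearByTitle", "MovieTitle", "Pulp fiction")
    for u in [
        "what was the release year of movie Pulp fiction",
        "in which year Pulp fiction was released",
        "when did pulp fiction came",
        "When was Pulp fiction released",
    ]
]
print(examples[0]["entities"][0])
# {'type': 'MovieTitle', 'startPos': 35, 'endPos': 46}
```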
Training with manufactured utterances helps bootstrap the system, but you must re-train the NLP service once it starts receiving real utterances. This re-training process should continue until the error rate drops. The more varied the utterances you receive from real conversations, the better you can train your NLP service for intents. A minimum of 5, or optimally 10, utterances per intent is good enough to bootstrap the system.
NLP services go through a routine of supervised — unsupervised — supervised learning phases. Each supervised learning phase serves as a feedback loop through which course correction is done for the NLP models.
The user trains the system with utterances — this is supervised learning. The NLP service then learns on its own on the basis of that training, and for about 10% of what it has learnt during unsupervised learning, it asks the user to confirm whether what it learnt is correct. The user affirms or negates these results and re-trains the model. As this process goes on, the user will find the service asking fewer and fewer questions, with more and more confidence in the questions it does ask.
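The review step of that loop — surfacing the service's least-confident unsupervised guesses for the user to confirm — can be sketched as follows; the scores and the 10% sample size are illustrative assumptions:

```python
def review_low_confidence(predictions: list, sample_frac: float = 0.10) -> list:
    """Pick roughly sample_frac of predictions, least confident first,
    for the user to affirm or negate before re-training."""
    k = max(1, int(len(predictions) * sample_frac))
    return sorted(predictions, key=lambda p: p["score"])[:k]

# Illustrative predictions from the service's unsupervised phase.
predictions = [
    {"text": "when did pulp fiction came", "intent": "GetReleaseYearByTitle", "score": 0.42},
    {"text": "hello there", "intent": "Greeting", "score": 0.97},
    {"text": "nope", "intent": "Negative", "score": 0.88},
    {"text": "yes please", "intent": "Affirmative", "score": 0.93},
    {"text": "bye now", "intent": "Goodbye", "score": 0.91},
]

for p in review_low_confidence(predictions):
    # The user's answer feeds back into the next supervised re-training run.
    print(f"Did you mean '{p['intent']}' for: \"{p['text']}\"?")
```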
Key Takeaways -
- Identify intents in advance — differentiate between general/casual and business intents.
- Identify entities — differentiate between metric-related and noun-related entities.
- If possible, train intents with an original corpus of conversations; otherwise, train with manufactured utterances. Minimum 5 utterances, optimum 10 utterances.
- Train, converse, re-train — the feedback loop must continue in order to train your NLP models.
Interesting reads -
Similar article at Medium - https://medium.com/@brijrajsingh/chat-bots-designing-intents-and-entities-for-your-nlp-models-35c385b7730d#.dj39gyhtl