Tutorial: building a restaurant search bot

Note: see Migrating an existing app for how to clone your existing wit/LUIS/api.ai app.

As an example we’ll use the domain of searching for restaurants. We’ll start with an extremely simple model of those conversations. You can build up from there.

Let’s assume that anything our bot’s users say can be categorized into one of the following intents:

  • greet
  • restaurant_search
  • thankyou

Of course there are many ways our users might greet our bot:

  • Hi!
  • Hey there!
  • Hello again :)

And even more ways to say that you want to look for restaurants:

  • Do you know any good pizza places?
  • I’m in the North of town and I want chinese food
  • I’m hungry

The first job of rasa NLU is to assign any given sentence to one of the categories: greet, restaurant_search, or thankyou.

The second job is to label words like “Mexican” and “center” as cuisine and location entities, respectively. In this tutorial we’ll build a model which does exactly that.

Preparing the Training Data

The best way to get training data is from real users, and the best way to do that is to pretend to be the bot yourself. But to help get you started, we have some data saved here.

Download the file and open it, and you’ll see a list of training examples like these:

{
  "text": "hey",
  "intent": "greet",
  "entities": []
}
{
  "text": "show me chinese restaurants",
  "intent": "restaurant_search",
  "entities": [
    {
      "start": 8,
      "end": 15,
      "value": "chinese",
      "entity": "cuisine"
    }
  ]
}
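The start and end values are character offsets into the text, so you can compute them programmatically when assembling your own examples rather than counting by hand. A minimal sketch (the make_example helper is ours for illustration, not part of rasa NLU):

```python
def make_example(text, intent, entity_values):
    """Build a training example dict; entity_values maps value -> entity type."""
    entities = []
    for value, entity in entity_values.items():
        start = text.find(value)  # character offset where the entity value begins
        if start == -1:
            raise ValueError("%r not found in %r" % (value, text))
        entities.append({
            "start": start,
            "end": start + len(value),  # offsets are end-exclusive
            "value": value,
            "entity": entity,
        })
    return {"text": text, "intent": intent, "entities": entities}

example = make_example("show me chinese restaurants",
                       "restaurant_search",
                       {"chinese": "cuisine"})
# "chinese" spans characters 8 to 15, matching the example above
```

Computing offsets this way avoids the most common training-data bug: spans that don't line up with the text.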

Hopefully the format is intuitive if you’ve read this far into the tutorial. For details, see the rasa NLU data format.

In your working directory, create a data folder, and copy the demo-rasa.json file there.

It’s always a good idea to look at your data before, during, and after training a model. To make this a bit simpler, rasa NLU has a visualization tool; see Visualization. For the demo data the output should look like this:

https://cloud.githubusercontent.com/assets/5114084/20884979/452df93c-bae6-11e6-8a2b-a6ad52306ae0.png

It is strongly recommended that you use the visualizer to do a sanity check before training.

Training Your Model

Now we’re going to create a configuration file. First, make sure that you’ve set up a backend; see Setting up a backend. Create a file called config.json in your working directory which looks like this:

{
  "backend": "spacy_sklearn",
  "path" : "./",
  "data" : "./data/demo-rasa.json"
}

or if you’ve installed the MITIE backend instead:

{
  "backend": "mitie",
  "path" : "./",
  "mitie_file" : "path/to/total_word_feature_extractor.dat",
  "data" : "./data/demo-rasa.json"
}

Now we can train the model by running:

$ python -m rasa_nlu.train -c config.json

After a few minutes, rasa NLU will finish training, and you’ll see a new directory called something like model_YYYYMMDD-HHMMSS, where the timestamp marks when training finished.
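Because the directories are named model_YYYYMMDD-HHMMSS, sorting the names lexicographically also sorts them chronologically, so the most recent model is easy to find programmatically. A small sketch (this helper is ours, not part of rasa NLU):

```python
import glob
import os

def latest_model_dir(path="./"):
    """Return the most recently trained model_* directory under `path`.

    The timestamped naming scheme means the lexicographically greatest
    name is also the newest model.
    """
    candidates = glob.glob(os.path.join(path, "model_*"))
    if not candidates:
        raise RuntimeError("no trained models found in %s" % path)
    return max(candidates)
```

You could use this to fill in server_model_dir automatically after each training run.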

To run your trained model, add a server_model_dir to your config.json:

{
  "backend": "spacy_sklearn",
  "path" : "./",
  "data" : "./data/demo-rasa.json",
  "server_model_dir" : "./model_YYYYMMDD-HHMMSS"
}

and run the server with

$ python -m rasa_nlu.server -c config.json

You can then test out your new model by sending a request. Open a new tab/window in your terminal and run

$ curl -XPOST localhost:5000/parse -d '{"q":"I am looking for Chinese food"}' | python -mjson.tool
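If you’d rather call the server from Python than curl, the same request can be sent with the standard library. A sketch, assuming the server is running on localhost:5000 as above (the function names are ours):

```python
import json
from urllib.request import Request, urlopen

def parse_payload(text):
    """Encode the query body exactly as the curl command does."""
    return json.dumps({"q": text}).encode("utf-8")

def parse(text, url="http://localhost:5000/parse"):
    """POST the query to the rasa NLU server and decode the JSON reply."""
    request = Request(url, data=parse_payload(text))
    with urlopen(request) as response:
        return json.loads(response.read().decode("utf-8"))

# parse("I am looking for Chinese food")  # requires a running server
```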

which should return

{
  "intent" : "restaurant_search",
  "confidence": 0.6127775465094253,
  "entities" : [
    {
      "start": 17,
      "end": 24,
      "value": "chinese",
      "entity": "cuisine"
    }
  ]
}

If you are using the spacy_sklearn backend and the entities aren’t found, don’t panic! This tutorial is just a toy example, with far too little training data to expect good performance. rasa NLU will also print a confidence value. You can use this to do some error handling in your bot (maybe asking the user again if the confidence is low) and it’s also helpful for prioritising which intents need more training data.
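The confidence-based error handling mentioned above can be as simple as a threshold check on the parsed response. A sketch (the threshold value and reply strings are purely illustrative):

```python
def handle_parse(result, threshold=0.5):
    """Decide how the bot should react to a rasa NLU parse result.

    Below the threshold we ask the user to rephrase rather than
    acting on a likely-wrong intent guess.
    """
    if result["confidence"] < threshold:
        return "Sorry, I didn't quite get that. Could you rephrase?"
    if result["intent"] == "restaurant_search":
        cuisines = [e["value"] for e in result["entities"]
                    if e["entity"] == "cuisine"]
        cuisine = cuisines[0] if cuisines else "any"
        return "Searching for %s restaurants..." % cuisine
    return "OK!"

low = {"intent": "restaurant_search", "confidence": 0.2, "entities": []}
high = {"intent": "restaurant_search", "confidence": 0.61,
        "entities": [{"entity": "cuisine", "value": "chinese"}]}
```

Logging the low-confidence queries also tells you exactly which intents need more training data.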

With very little data, rasa NLU can in certain cases already generalise concepts, for example:

$ curl -XPOST localhost:5000/parse -d '{"q":"I want some italian"}' | python -mjson.tool
{
  "entities": [
    {
      "end": 19,
      "entity": "cuisine",
      "start": 12,
      "value": "italian"
    }
  ],
  "intent": "restaurant_search",
  "text": "I want some italian",
  "confidence": 0.4794813722432127
}

even though there’s nothing quite like this sentence in the examples used to train the model. To build a more robust app you will obviously want to use a lot more data, so go and collect it!