Testing bots

Why should you test your bots

When you start your first bot, with maybe one or two files containing dialogs, it may not seem necessary to write tests for it. But as your bot grows, so does its complexity, and with that complexity comes a higher chance of something unexpected happening when a small change is made. One way to manage this growing complexity is by writing unit tests. Each unit test describes a part of your bot's behaviour and ensures that when you change the code internally, nothing changes for the person talking to the bot.

Test-driven bot development: an example

Test-driven development (TDD) is a development style where the code is driven by (you guessed it): tests. This means that we usually start writing tests before we write the functionality in code. We will create a bot that can perform addition and subtraction based on instructions written in words.

Instructions are given to the bot in the following manner: "add 10 and 20", "subtract 5 from 10". With this, we have enough to start creating tests for our bot, and from these tests, we will create the actual bot. Tests do not have to be written in a single file; they can be separated into multiple files.

test "the bot greets us with its name" do
  expect "Hello I am CompuTron"
  expect "What do you want to compute"
end

test "adding two numbers" do
  expect "What do you want to compute?"
  say "add 5 and 5"
  expect "10"

  # addition of negative numbers
  say "add -5 and -10"
  expect "-15"

  say "add 5.5 and 5"
  expect "10.5"
end

test "subtracting two numbers" do
  expect "What do you want to compute?"
  say "subtract 5 from 50"
  expect "45"

  say "subtract 100 from 50"
  expect "-50"

  # test if we can subtract negative numbers
  say "subtract -50 from 50"
  expect "100"
end

The code for the test cases is also written in Bubblescript. Instead of writing a dialog do .. end block, we use the test statement to define a test case. The string that follows the test statement describes what we are testing, giving the test more context.

Inside the test block we can use the expect statement to define what we expect the bot to say at that point. The expect statement is exhaustive: it waits for the bot to stop saying things and then checks whether the expected message was received. At the moment only say and ask can be tested; statements like show and events will be supported in the future.
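
As an illustration, here is a sketch of how say and expect interleave within a single test, combining the greeting and an addition. It uses only the constructs shown above; the exact matching behaviour of expect may differ.

test "a full round trip through the bot" do
  # the greeting consists of two consecutive bot messages
  expect "Hello I am CompuTron"
  expect "What do you want to compute?"

  # say simulates the user's message; the next expect checks the bot's reply
  say "add 1 and 2"
  expect "3"
end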

Running the tests

We can run all the tests with Ctrl+Alt+T or by clicking the "Run All" button in the "Tests" panel in the editor. To run only the tests in a single file, click the "play" button to the right of the test file in the tree navigation. When a single test is double-clicked, only that test is run.

Below is the implementation of the calculation bot, according to the tests that were written for it. The regex in the @number constant matches negative numbers as well as floating-point numbers. When we run all the tests (Ctrl+Alt+T) in the studio, they should all have a green checkmark in front of them.

@number "-*\\d+\.*\\d*"

dialog main do
  say "Hello I am CompuTron"
  prompt "What do you want to compute?"
end

dialog add, trigger: "add #<number:#{@number}> and #<other_number:#{@number}>" do
  say number(entity.number) + number(entity.other_number)
end

dialog subtract, trigger: "subtract #<other_number:#{@number}> from #<number:#{@number}>" do
  say number(entity.number) - number(entity.other_number)
end
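
As a worked example of how the trigger captures its entities (an illustration of the dialogs above, not additional bot code), take the last case from the subtraction test:

# For the input "subtract -50 from 50" the trigger
#   "subtract #<other_number:#{@number}> from #<number:#{@number}>"
# captures:
#   entity.other_number  => "-50"
#   entity.number        => "50"
# so the dialog says number("50") - number("-50"), which is 100,
# exactly the value the test expects.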

Creating tests based on conversations

We created the tests above by hand, which is what you would do if your bot has not interacted with people yet. But if your bot has had interactions with people, you can navigate to a conversation in the studio and click the "Create test" button in the right panel. This takes the conversation and converts it into a test. This way, you can refactor parts of your bot while making sure that this conversation remains a valid one.