An In-Depth Guide to Testing Ethereum Smart Contracts

Part Four: Running Your Tests

Ben Hauser
Jul 23, 2020 · 6 min read

This article is part of a series. If you haven’t yet, check out the previous articles:

Part One: Why we Test
Part Two: Core Concepts of Testing
Part Three: Writing Basic Tests
Part Four: Running Your Tests
Part Five: Tools and Techniques for Effective Testing
Part Six: Parametrization and Property-Based Testing
Part Seven: Stateful Testing

Now that we’ve covered the basics of writing tests, let’s look at how to run them and how to understand the output.

Running Tests

To execute the entire test suite, run the following command in the root folder of your project:

brownie test

You will receive output showing a list of filenames, each followed by a series of dots or F’s indicating which tests passed or failed. Everything is color coded to make life easier — as long as you’re seeing green, things are good!
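
Since Brownie passes arguments through to pytest, you can also limit a run to a single module by giving it a path (the filename here is just an example, use whichever module you want to run):

brownie test tests/test_token.py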

Only Running Updated Tests

As your test suite grows you will find your tests taking longer and longer to run. Waiting on feedback from your tests can eat into your development time, so it’s sometimes useful to only run tests where the outcome might have changed. For this, we have the --update flag (or just -U):

brownie test --update

In this mode, Brownie only runs tests where either the test itself or the contracts it touches have been modified. Tests that don’t need to run again are shown as an s instead of a . or F, still color coded to show the outcome from when they last ran.

--update works on a per-file basis, so if you modify a single test you will have to re-run the entire test module. For this reason, it’s usually preferable to have many short test modules as opposed to a few long ones.

Running Tests in Parallel

Another way to speed up execution is by running your tests in parallel on multiple processors. This is possible thanks to a wonderful plugin known as pytest-xdist, which is included when installing Brownie.

To run tests in parallel we use the -n flag:

brownie test -n auto

In the above command, the value after -n sets how many worker processes to use. You can give a specific number, or pass auto to let xdist choose based on the number of CPUs your computer has.
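
For example, to pin the run to four workers (any positive integer works here):

brownie test -n 4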

You’ll notice that there’s a longer delay in the setup phase. This is because Brownie must launch a separate instance of Ganache for each worker. For this reason, the benefit of this mode is much more apparent when running large test suites.

A couple of important things to know about this mode:

  • It only works if all of your tests are properly isolated with an isolation fixture (see the conftest.py sketch after this list). If even a single test isn’t isolated, Brownie will raise an exception.
  • Tests are distributed to workers on a per-file basis. For the best performance with this mode, organize your test suite into many small modules rather than just a few large ones.
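
A common way to satisfy the isolation requirement is an autouse fixture in your conftest.py that wraps Brownie’s fn_isolation fixture, along these lines:

# conftest.py
import pytest

@pytest.fixture(autouse=True)
def isolation(fn_isolation):
    # fn_isolation snapshots the chain before each test and reverts it afterwards,
    # so every test starts from a clean state
    pass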

When Tests Fail

Next, let’s take a look at some failing tests. If you’re following along, copy the following code and save it under tests/test_failure.py:
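
Something along these lines will do; the exact numbers don’t matter, as long as the first test trips an assertion and the second sends a transaction that reverts (the token fixture is assumed to be defined in your conftest.py — adjust it to whatever contract fixture your project uses):

# tests/test_failure.py

def test_failed_assertion(accounts):
    # fails with an AssertionError: the account's starting ether balance is not 31337
    assert accounts[0].balance() == 31337

def test_failed_transaction(accounts, token):
    # fails with a reverted transaction: accounts[1] holds no tokens to send
    token.transfer(accounts[2], 10**18, {"from": accounts[1]})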

Let’s run it and see what happens. As expected, both tests fail. Below the test results, pytest provides the following information to help determine what happened:

  • a list of fixtures used in this test, and their values
  • the test source code, with the failing line highlighted
  • a translated version of the source code where variable names are shown as actual values
  • the filename and line number where the test failed

And so we can see that the test has failed because the balance of accounts[0] was 10²¹, not 31337.

Whereas our first example failed from an assertion, the second failed because of a reverting transaction. In this case we see:

  • the error string of the revert
  • the contract name and line number where the revert occurred
  • a source highlight showing the failing contract code

The test has failed because accounts[1] had an insufficient balance to complete the transfer.

Interactive Debugging

Sometimes the cause of a failing test is not immediately obvious, and you might want to have a look around. To do so, you can use the --interactive flag (or just -I):

brownie test --interactive

In this mode, when a test fails you are immediately dropped into Brownie’s console. The console opens at the exact moment that the test fails. All local and global variables within the test are available, as well as the usual console objects.

When you are finished, type quit() and Brownie will continue with the next test.
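
As a rough illustration, a debugging session for the failures above might look something like this (commands only; exactly which objects you inspect will depend on your test):

>>> accounts[0].balance()   # inspect the value that broke the assertion
>>> history[-1]             # the most recent transaction receipt
>>> history[-1].error()     # show the contract source where the revert occurred
>>> quit()                  # resume the remaining tests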

Evaluating Test Coverage

To explore coverage evaluation in greater detail, check out this article.

Brownie uses traces to evaluate statement and branch coverage of your tests. To generate a coverage report, use the --coverage flag (or -C):

brownie test -C

When the tests finish running, a percentage value is shown in the console and a detailed report is saved at reports/coverage.json. To view this report we use the Brownie GUI:

brownie gui

Within the GUI:

  • First, select a contract name from the drop-down menu in the upper right hand corner
  • Immediately left of the contract name, choose “coverage” from the reports drop-down
  • A third drop-down menu will appear. Select “branch” or “statement” to view a coverage report

The report highlights sections of code that Brownie has identified as statements or branches. For statement coverage, everything is highlighted in green or red to show whether or not the statement executed. For branches, some are additionally highlighted in yellow or orange; these colors indicate that a branch was reached, but only ever evaluated as true or only ever as false. A branch report can therefore show up to four colors: green, red, yellow, and orange.

What’s Next

In “Part Five: Tools and Techniques for Effective Testing”, we explore some useful functionality within Brownie and pytest to help take our testing skills to the next level.

You can also follow the Brownie Twitter account, read my other Medium articles, and join us on Gitter.
