An In-Depth Guide to Testing Ethereum Smart Contracts

Part Five: Tools for Effective Testing

iamdefinitelyahuman
Jul 23, 2020

This article is part of a series. If you haven’t already, check out the other articles in the series:

Part One: Why we Test
Part Two: Core Concepts of Testing
Part Three: Writing Basic Tests
Part Four: Running Your Tests
Part Five: Tools and Techniques for Effective Testing
Part Six: Parametrization and Property-Based Testing
Part Seven: Stateful Testing

Dealing with Tests That Cannot Pass

At some point you may find yourself dealing with a test that cannot pass. Maybe you are unsure of the reason and need to focus on something else for now, or maybe you are in the middle of a large-scale refactor and the failure is expected. Either way, it can be annoying to see the test fail repeatedly.

In these cases, pytest provides markers to help you deal with the situation:

pytest.mark.skip

@pytest.mark.skip(reason="no way of currently testing this")
def test_the_unknown():
    ...

The skip marker tells pytest not to run the test. You can optionally provide a reason for skipping it, to help you remember why when you come back to it later.

pytest.mark.xfail

@pytest.mark.xfail
def test_function():
    ...

The xfail marker indicates that the test is expected to fail. The test still runs, but no traceback is given when it fails, and the console output is x for “expected failure”. If the test unexpectedly passes, it is reported as an XPASS (“unexpected pass”); adding strict=True to the marker turns an unexpected pass into a failure.

Time Travel

The chain object provides methods for fast-forwarding the clock and mining new blocks. These are invaluable when testing time-dependent behaviors.

chain.sleep

chain.sleep fast-forwards the clock in the local environment. It accepts a single argument: the number of seconds to advance.
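
A minimal sketch, assuming the chain fixture exposed by Brownie’s pytest plugin and that chain.time reports the adjusted clock:

def test_sleep_fast_forwards_the_clock(chain):
    now = chain.time()
    chain.sleep(86400)  # advance the local clock by one day
    assert chain.time() >= now + 86400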

chain.mine

chain.mine mines one or more new, empty blocks. It accepts an optional argument for the number of blocks to mine, defaulting to one.
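
Again as a sketch, assuming the same chain fixture and using chain.height to read the current block number:

def test_mine_advances_the_block_height(chain):
    height = chain.height
    chain.mine(5)  # mine five new, empty blocks
    assert chain.height == height + 5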

Revert Comments

To explore developer revert comments in greater detail, check out this article.

When testing, we want to ensure that every branch is correctly hit, and with reverting transactions the best way to do this is to expect a specific error string.

Unfortunately, each revert string adds a minimum of 20,000 gas to your contract deployment cost and increases the cost of executing the function. Including an error message for every require and revert statement is often impractical, and sometimes simply not possible due to the block gas limit.

Brownie solves this issue by letting you include revert strings as source code comments. The comments are not included in the bytecode, but are still accessible via TransactionReceipt.revert_msg or the brownie.reverts context manager.

Using developer revert comments is simple. At the end of a line of code that includes a potential revert, add a comment starting with // dev: in Solidity or # dev: in Vyper. Brownie detects these comments and, during testing, uses them as the revert message when a transaction fails at that line.

Here is an example admin function where revert comments are used in place of actual error strings:
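
A minimal sketch in Solidity, with purely illustrative contract and function names:

pragma solidity ^0.6.0;

contract Owned {
    address public owner;

    constructor() public {
        owner = msg.sender;
    }

    function setOwner(address newOwner) external {
        // the dev comments below stand in for explicit require error strings
        require(msg.sender == owner); // dev: only owner
        require(newOwner != address(0)); // dev: zero address
        owner = newOwner;
    }
}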

And here is a test that targets a specific revert comment:
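
Again a sketch, assuming a hypothetical owned fixture that deploys the contract above, plus the accounts fixture from Brownie’s pytest plugin:

import brownie

def test_set_owner_only_owner(owned, accounts):
    # accounts[1] is not the owner, so the first require should revert
    with brownie.reverts("dev: only owner"):
        owned.setOwner(accounts[1], {"from": accounts[1]})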

What’s Next

In “Part Six: Parametrization and Property-Based Testing”, we explore how parametrization can strengthen our test cases and help us find unexpected edge cases.

You can also follow the Brownie Twitter account, read my other Medium articles, and join us on Gitter.
