Keeping Tests DRY with Class Based Tests In Python

A class based approach to writing tests in Python

Tests can be a bummer to write, but an even bigger nightmare to maintain. When we noticed we were putting off simple tasks just because we were afraid to update some monster test case, we started looking for more creative ways to simplify the process of writing and maintaining tests.

In this article I will describe a class based approach to writing tests.

Before we start writing code let's set some goals:

  • Extensive - We want our tests to cover as many scenarios as possible. We hope a solid platform for writing tests will make it easier for us to adapt to changes and cover more ground.
  • Expressive - Good tests tell a story. Issues become irrelevant and documents get lost, but tests must always pass - this is why we treat our tests as specs. Writing good tests can help newcomers (and our future selves) understand all the edge cases and micro-decisions made during development.
  • Maintainable - As requirements and implementations change we want to adapt quickly with as little effort as possible.

Enter Class Based Tests

Articles and tutorials about testing always give simple examples such as add and sub. I rarely have the pleasure of testing such simple functions. I'll take a more realistic example and test an API endpoint that does login:

POST /api/account/login
    username: <str>,
    password: <str>

The scenarios we want to test are:

  • User logs in successfully.
  • User does not exist.
  • Incorrect password.
  • Missing or malformed data.
  • User already authenticated.

The input to our test is:

  • A payload, username and password.
  • The client performing the action, anonymous or authenticated.

The output we want to test is:

  • The return value, error or payload.
  • The response status code.
  • Side effects. For example, last login date after successful login.

After properly defining the input and output, we can write a base test class:

from unittest import TestCase
import requests

class TestLogin:
    """Base class for testing the login endpoint."""

    @property
    def client(self):
        return requests.Session()

    @property
    def username(self):
        raise NotImplementedError()

    @property
    def password(self):
        raise NotImplementedError()

    @property
    def payload(self):
        return {
            'username': self.username,
            'password': self.password,
        }

    expected_status_code = 200
    expected_return_payload = {}

    def setUp(self):
        self.response = self.client.post('/api/account/login', json=self.payload)

    def test_should_return_expected_status_code(self):
        self.assertEqual(self.response.status_code, self.expected_status_code)

    def test_should_return_expected_payload(self):
        self.assertEqual(self.response.json(), self.expected_return_payload)

  • We defined the input, client and payload, and the expected output expected_*.
  • We performed the login action during test setUp. To let specific test cases access the result, we kept the response on the class instance.
  • We implemented two common test cases:
    • Test the expected status code.
    • Test the expected return value.

The observant reader might notice we raise a NotImplementedError exception from the properties. This way, if the test author forgets to set one of the required values for the test, they get a useful exception.
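The effect is easy to see in isolation. A minimal, self-contained sketch (class and attribute names here are illustrative, not part of the test suite above):

```python
class TestLoginBase:
    """Toy base class: required values raise until a subclass provides them."""

    @property
    def username(self):
        raise NotImplementedError('set `username` on the test case')


class TestForgotUsername(TestLoginBase):
    pass  # the author forgot to set `username`


class TestWithUsername(TestLoginBase):
    username = 'Haki'  # a plain class attribute shadows the inherited property


try:
    TestForgotUsername().username
except NotImplementedError as e:
    print(e)  # set `username` on the test case

print(TestWithUsername().username)  # Haki
```

The error message points the test author directly at the missing value, instead of failing with a confusing AttributeError somewhere deep inside setUp.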

Let's use our TestLogin class to write a test for a successful login:

class TestSuccessfulLogin(TestLogin, TestCase):
    username = 'Haki'
    password = 'correct-password'
    expected_status_code = 200
    expected_return_payload = {
        'id': 1,
        'username': 'Haki',
        'full_name': 'Haki Benita',
    }

    def test_should_update_last_login_date_in_user_model(self):
        user = User.objects.get(id=self.response.json()['id'])
        self.assertIsNotNone(user.last_login_date)

By just reading the code we can tell that a username and password are sent. We expect a response with a 200 status code, and additional data about the user. We extended the test to also check the last_login_date in our user model. This specific test might not be relevant to all test cases, so we add it only to the successful test case.

Let's test some failed login scenarios:

class TestInvalidPassword(TestLogin, TestCase):
    username = 'Haki'
    password = 'wrong-password'
    expected_status_code = 401

class TestMissingPassword(TestLogin, TestCase):
    payload = {'username': 'Haki'}
    expected_status_code = 400

class TestMalformedData(TestLogin, TestCase):
    payload = {'username': [1, 2, 3]}
    expected_status_code = 400

A developer who stumbles upon this piece of code will be able to tell exactly what should happen for any type of input. The name of the class describes the scenario, and the names of the attributes describe the input. Together, they tell a story that is easy to read and understand.

The last two tests set the payload directly (without setting username and password). This won't raise a NotImplementedError, because overriding payload shadows the property that reads username and password, so those properties are never evaluated.
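A stripped-down sketch of the attribute lookup shows why (names here are illustrative):

```python
class TestLoginBase:
    """Toy sketch: payload is built from username/password unless overridden."""

    @property
    def username(self):
        raise NotImplementedError()

    @property
    def password(self):
        raise NotImplementedError()

    @property
    def payload(self):
        return {'username': self.username, 'password': self.password}


class TestMissingPassword(TestLoginBase):
    # Setting `payload` directly shadows the inherited property, so the
    # failing `username`/`password` properties are never accessed.
    payload = {'username': 'Haki'}


print(TestMissingPassword().payload)  # {'username': 'Haki'}
```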

A good test should help you find where the problem is.

Let's see the output of a failed test case:

FAIL: test_should_return_expected_status_code (tests.test_login.TestInvalidPassword)
Traceback (most recent call last):
  File "../tests/", line 28, in test_should_return_expected_status_code
    self.assertEqual(self.response.status_code, self.expected_status_code)
AssertionError: 400 != 401

Looking at the failed test report, it is clear what went wrong. When the password is invalid we expect status code 401, but we received 400.

Let's make things a bit harder, and test an authenticated user attempting to login:

class TestAuthenticatedUserLogin(TestLogin, TestCase):
    username = 'Haki'
    password = 'correct-password'
    expected_status_code = 400

    @property
    def client(self):
        session = requests.Session()
        session.auth = ('Haki', 'correct-password')
        return session

This time we had to override the client property to authenticate the session.

Putting Our Test To The Test

To illustrate how resilient our new test cases are, let's see how we can modify the base class as we introduce new requirements and changes:

  • We have made some refactoring and the endpoint changed to /api/user/login:

class TestLogin:
    # ...
    def setUp(self):
        self.response = self.client.post('/api/user/login', json=self.payload)

  • Someone decided it can speed things up if we use a different serialization format (msgpack, xml, yaml):

class TestLogin:
    # ...
    def setUp(self):
        self.response = self.client.post(
            '/api/account/login',
            data=serialize(self.payload),  # hypothetical serializer for the new format
        )

  • The product guys want to go global, and now we need to test different languages:

class TestLogin:
    language = 'en'

    # ...

    def setUp(self):
        self.response = self.client.post(
            '/api/account/login',
            json=self.payload,
            headers={'Accept-Language': self.language},
        )

None of the changes above managed to break our existing tests.

Taking it a Step Further

A few things to consider when employing this technique.

Speed Things Up

setUp is executed for each test case in the class (test cases are the functions beginning with test_*). To speed things up, it is better to perform the action in setUpClass. This changes a few things. For example, the properties we used should be set as attributes on the class or as @classmethods.

Using Fixtures

When using Django with fixtures, the action should go in setUpTestData:

class TestLogin:
    fixtures = (
        # fixture names go here
    )

    @classmethod
    def setUpTestData(cls):
        super().setUpTestData()
        cls.response = cls.get_client().post('/api/account/login', json=cls.get_payload())

Django loads the fixtures before setUpTestData is called, so the action is executed after the fixtures were loaded; calling super() keeps the setup chain intact for mixins.

Another quick note about Django and requests. I've used the requests package, but Django and the popular Django REST framework provide their own test clients: django.test.Client is Django's client, and rest_framework.test.APIClient is DRF's.

Testing Exceptions

When a function raises an exception, we can extend the base class and wrap the action with try ... except:

class TestLoginFailure(TestLogin):

    @property
    def expected_exception(self):
        raise NotImplementedError()

    def setUp(self):
        self.exception = None
        try:
            super().setUp()
        except Exception as e:
            self.exception = e

    def test_should_raise_expected_exception(self):
        self.assertIsInstance(self.exception, self.expected_exception)
If you are familiar with the assertRaises context manager, you may wonder why I haven't used it here: the action happens during setUp, and the test should not fail during setUp - so we capture the exception there and assert on it in the test itself.
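The pattern generalizes beyond HTTP. A self-contained sketch, with an action hook standing in for the login request (all names here are illustrative):

```python
import unittest


class FailureBase:
    """Sketch: perform the action in setUp and capture any exception."""

    expected_exception = Exception  # subclasses narrow this down

    def action(self):
        # Stands in for the login request performed in TestLogin.setUp.
        raise NotImplementedError()

    def setUp(self):
        self.exception = None
        try:
            self.action()
        except Exception as e:
            self.exception = e

    def test_should_raise_expected_exception(self):
        self.assertIsInstance(self.exception, self.expected_exception)


class TestDivisionFailure(FailureBase, unittest.TestCase):
    expected_exception = ZeroDivisionError

    def action(self):
        1 / 0


suite = unittest.TestLoader().loadTestsFromTestCase(TestDivisionFailure)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```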

Create Mixins

Test cases are repetitive by nature. With mixins, we can abstract common parts of test cases and compose new ones. For example:

  • TestAnonymousUserMixin - populates the test with an anonymous API client.
  • TestRemoteResponseMixin - mocks the response from a remote service.

The latter might look something like this:

from unittest import mock

class TestRemoteServiceXResponseMixin:
    mock_response_data = None

    @classmethod
    @mock.patch('app.services.call_remote_service_x')  # hypothetical patch target
    def setUpTestData(cls, mock_remote):
        mock_remote.return_value = cls.mock_response_data
        # The mock is active while super() performs the action.
        super().setUpTestData()


Someone once said that duplication is cheaper than the wrong abstraction. I couldn't agree more. If your tests do not fit easily into a pattern, then this solution is probably not the right one. It's important to carefully decide what to abstract. The more you abstract, the more flexible your tests are. But as parameters pile up in base classes, tests become harder to write and maintain, and we are back to square one.

Having said that, we found this technique to be useful in various situations and with different frameworks (such as Tornado and Django). Over time it has proven itself as being resilient to changes and easy to maintain. This is what we set out to achieve and we consider it a success!
