A Practical Guide to Testing React Apps

You’ve decided to start writing tests for your React apps.

Maybe you’re new to React and want to start out this new chapter in your life with good habits.

Or maybe you just had “one of those days” with a gigantic React app you’re managing. And you’ve decided that having npm test echo Error: no test specified is just no way to live your life anymore.

In either case, welcome. Good for you. As we’ll see, testing React components is surprisingly fun.

But starting out can be daunting.

There are seemingly dozens of test harnesses out there, most of them inspired by hot café drinks.

This leads to a whole host of questions: What are they each used for? Which one is best?

And then after you tepidly make that decision: how do you test a component? This is a thing that usually lives in the DOM in a browser.

And if you figure that out, you’re still faced with the hardest testing question in any language or framework: what do you test?

And what does a guy gotta do to get a tall, double-caff, almond milk latte at 120 degrees around here??

This article will help you overcome these hurdles [1] and get comfortable with testing React components, fast.

[1] Except that last hurdle. If your coffee order is that complex, you’re on your own, kid. Though I think github.com/tall-double-caff-almond-milk-latte-at-120-degrees-js is still available if you have an idea for a hot/new/up-and-coming JavaScript assertion library.

We’re going to cover a lot of ground in little time. [2] This article doesn’t just talk about the practice but also the theory.

[2] Note that “little” is relative. And the word I started out with when I wrote the intro. And then I wrote the article series.

The topics we’ll cover:

  • The popular tools (Jest and Enzyme) that we use to test React apps
  • “Shallow rendering” a component
  • What good React specs look like
  • What tests not to write
  • Jest snapshots
  • Jest serializers
  • …and more!

Further, because we’re covering so much ground we might not have the chance to go as deep in certain areas as I would like. Help me decide where to expand. If you want to see more (or less 😅) discussion in certain areas, you can try:

  1. Messaging me and letting me know
  2. Messaging me with specific questions! Questions are as informative as requests. Sometimes more informative.

First, a little terminology…

There are multiple ways to break down your tests, but two popular paradigms are unit tests and integration tests.

For unit tests, how one defines a “unit” depends on the language or framework. However, the idea is that you want to test that unit in isolation. So, for example, for an object-oriented language a unit test might revolve around a class. Ideally, those unit tests test only the behavior of that class and nothing else in the system.

Integration tests, on the other hand, will test two or more parts of a system together.

Guess how we break down “units” in React apps? Yep – components. So each set of unit tests will be centered around a component in isolation.

Integration tests, on the other hand, might test multiple components together. Or, in the “most integrated” form, we might write full “end-to-end” tests. These are tests where we boot up our app in a real browser (driven by a tool like Selenium) and have automated tests click around and interact with our full-blown app, much like a user would.

The bulk of tests for a React app will usually be unit tests. You can build an effective suite of component tests with low maintenance costs.

The React testing ecosystem

As with all things React these days, the ecosystem is vast. There are a number of different libraries and tools at your disposal. Knowing which tool to use – and where it fits in with everything else – is a bit tricky at first.

There is no single “right” combination of tools to test React apps. But there is a particular toolchain that is increasingly dominant in the React community.

Let’s talk about what we need and then what libraries will fill that need.

A test runner

At the end of the day, spec files are just JavaScript. We need a tool that we can easily execute from the command line. Ideally this tool:

  • Determines where to find our spec files
  • Runs the tests in those files
  • Reports back to us, via the command line, on how things went

The React community has largely embraced Jest as its preferred testing framework. This is in no small part because Facebook created and maintains Jest.

Again, there are many popular test runners in the JavaScript testing community. If you’re already comfortable with Mocha, for example, don’t be afraid to stick with it. All JavaScript test runners have a lot in common.

If you’re new here, just use Jest.

Jest includes a test runner. We execute it from the command line with npm test. Jest finds and runs our spec files (more on this later), then reports back to us on how things went.

An assertion library

A spec centers around assertions. We state what we expect our code to do. And then we execute our code and compare our expectation to what actually happened.

There are many ways we can achieve this. For instance, our test files could simply raise an error if our expectation does not match reality:


// test that `sum()` ... sums
import { sum } from "./MathHelpers.js";

const [a, b] = [1, 4];
const expected = 5;
const actual = sum(a, b);

if (actual !== expected) {
  throw new Error(`sum() returned ${actual}, expected ${expected}`);
}

But we can use an assertion library to help organize our specs. As we’ll see, these frameworks make it easy to write expressive code that describes the desired behavior of your app. Using an assertion library, we can write the code block above like this:


import { sum } from "./MathHelpers.js";

describe("the `sum()` function", () => {
  it("sums!", () => {
    const expected = 5;
    const actual = sum(1, 4);
    expect(actual).toEqual(expected);
  });
});

Jest also ships with an assertion library. The functions describe, it, expect, and toEqual are all provided by Jest. Among other benefits that we’ll see, note how expressive and well-organized this spec is.

Jest’s assertion library is based on Jasmine’s. If you’ve used Jasmine, you’ll be right at home.

The Jest assertion library and test runner work hand-in-hand. As demonstrated above, we can include detailed descriptions using describe and it. When running the tests from the command line, these descriptions will help us quickly diagnose the health of our code base.
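In fact, describe, it, and expect are ordinary JavaScript functions. To demystify them, here’s a toy sketch of how such an API could be implemented. This is illustrative only; it is not Jest’s actual implementation:

```javascript
// Toy sketch of a describe/it/expect API. Illustrative only, not Jest's code.
const results = [];

function describe(name, fn) {
  results.push(`# ${name}`);
  fn(); // run the block, which registers and executes its specs
}

function it(name, fn) {
  try {
    fn();
    results.push(`  ok - ${name}`);
  } catch (err) {
    results.push(`  not ok - ${name}: ${err.message}`);
  }
}

function expect(actual) {
  return {
    toEqual(expected) {
      if (actual !== expected) {
        throw new Error(`expected ${expected}, got ${actual}`);
      }
    }
  };
}

// Usage, mirroring the spec above:
const sum = (a, b) => a + b;

describe("the `sum()` function", () => {
  it("sums!", () => {
    expect(sum(1, 4)).toEqual(5);
  });
});

console.log(results.join("\n"));
```

Real Jest adds far more (rich matchers, async support, reporting), but the core mental model is this simple.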

Testing utilities

While tightly associated with React, Jest is a general-purpose JavaScript testing framework. You can use Jest as a test harness for Angular or Ember apps, or to test your back-end Node.js web server.

When testing React components specifically, we’ll want to use additional tooling.

As I mentioned, the bulk of tests for a React app will be unit tests. That’s what we’ll be writing here.

Unit tests center around individual components. So for some component LoginModal.js, we’ll have a set of unit tests inside a file called LoginModal.test.js.

The right testing utility or helper library will help us write tests that express what a React component outputs and how it responds to a given event (like user interaction). We’ll see what this looks like in practice soon.

React comes with a testing utility called ReactTestUtils. It ships inside the react-dom package and is imported from react-dom/test-utils.

Airbnb maintains a test utility called Enzyme that leverages React’s core testing utilities. While you can use ReactTestUtils directly, Enzyme provides a lot of neat benefits. Enzyme is popular, popular enough that the React core team even recommends it on their website.

We’ll use Enzyme in this guide.

Setup

To get started with testing, we need to:

  • Install the Jest test framework and Enzyme testing utility
  • Establish a directory that will hold our spec files
  • Have a React component to test

This order doesn’t apply if you’re following TDD. I talk about TDD a bit later.

We can simplify setup by using Facebook’s create-react-app. Create React App generates the scaffold of a React app for you with sane defaults.

Create React App includes both Jest and some configuration for it. We’ll just need to install Enzyme.

In a future article, I’ll talk about how the testing setup of Create React App works. If you’re setting up a test framework inside of an existing React project that does not use Create React App, this will be helpful instruction. Subscribe so you know when it’s live.

Have you used an earlier version of Jest (<15.0)? If you have, a few things here might surprise you.

Jest 15 shipped new defaults for Jest. These changes were motivated by a desire to make Jest easier for new developers to begin using while maintaining Jest’s philosophy to require as little configuration as necessary.

You can read about all the changes in this blog post. Relevant to this article:

  • In addition to looking under __tests__/ for test files, Jest also looks for files matching *.test.js or *.spec.js
  • Auto-mocking is disabled by default

Creating our app

I’ve created a simple cryptocurrency search app, just for us. You can view it here. To clone it and follow along:


$ git clone git@github.com:acco/crypto-search

Alternatively, you can start your own project with Create React App and follow along from scratch. You’ll find a command to download the relevant files below.

If working from scratch, you just need to ensure Create React App is installed:


$ npm i -g create-react-app

Then we’ll initialize our app:


$ create-react-app crypto-search

After it’s finished, change into that directory:


$ cd crypto-search

react-scripts already includes the Jest library. To see for yourself, you can check out its package.json:


# Command in bash:
$ cat node_modules/react-scripts/package.json | grep jest
"babel-jest": "18.0.0",
"jest": "18.1.0",

babel-jest is the plug-in that lets Jest transform your code with Babel.

At the time of writing, Jest does not include Enzyme. Enzyme depends on another React package, react-test-renderer. Let’s install both:


$ npm install --save-dev enzyme react-test-renderer

We have all the packages we need installed.

Create React App has provided a sample app along with a sample test for that app:


$ ls src/ | grep test
App.test.js

To get Jest to “see” our tests, we just need to:

  • Put our spec files somewhere under the src/ folder
  • Ensure they have the extension .test.js(x) or .spec.js(x) (or live inside a __tests__/ folder)

So, if you were to run npm test right now, the test in App.test.js would run. Don’t worry, we’ll do this together in a bit.

At the time of writing, Create React App configures Jest to use the following two directory patterns to identify specs.

The first:


/src/**/__tests__/**/*.js?(x)

Or, in English: Any .js or .jsx file inside of a folder called __tests__. That __tests__ folder must be nested somewhere under src/.

The second:


/src/**/?(*.)(spec|test).js?(x)

Or, in English: Any file under src/ with the extension .spec.js(x) or .test.js(x).

If you’re unable to get Jest to “pick up” on a test file that you wrote in your Create React App app, make sure it conforms to one of those two matchers.
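To get a feel for the second matcher, here’s an illustrative regex approximation of that glob, along with some paths that do and don’t match. (Jest uses real glob matching internally; this regex is hand-rolled just for demonstration.)

```javascript
// Rough regex equivalent of /src/**/?(*.)(spec|test).js?(x), for illustration only.
const specFile = /^src\/(.*\/)?(.*\.)?(spec|test)\.jsx?$/;

console.log(specFile.test("src/App.test.js")); // true
console.log(specFile.test("src/components/LFT.spec.jsx")); // true
console.log(specFile.test("src/App.js")); // false: not a spec file
console.log(specFile.test("lib/App.test.js")); // false: must live under src/
```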

I know you’re as eager as I am to jump in. We just need one last thing before getting started: Something to test.

From inside the top-level directory of your Create React App app, download the special React components I wrote just for us:

⚠️ The following command will overwrite files inside your project! Don’t do this unless working inside a fresh Create React App app.

# Bash
$ curl -L https://github.com/acco/crypto-search/raw/master/components.zip > src/components.zip && unzip -o src/components.zip

Using Windows/PowerShell? You may have to download and extract manually.

Don’t have curl? Assuming no one is looking, you can just visit that URL and save it manually. But for future style points, you might want to install curl.

The components

This simple search tool lets you browse cryptocurrencies. It’s very useful for retirement planning.

In addition, each row is “favoritable” or, more endearingly, “lovable.”

I’m going to breeze through the components here. Understanding every detail isn’t important to continue along with me. We’ll be touching on each piece of functionality as we test it.

We have two components, arranged in this hierarchy:

  • App
    • LovableFilterableTable

Because of its charmingly long name, we’ll often refer to LovableFilterableTable as LFT.

We’re using the API provided by coinmarketcap.com. Thanks guys!

Inside App.js, we fetch the coins when we mount and set the state to the results:

src/App.js

  componentDidMount() {
    this.fetchCoins();
    setInterval(this.fetchCoins, 10000);
  }

App renders LovableFilterableTable with the results.

LFT provides a search bar. As the user populates the search bar, it filters out the rows supplied by App:

src/LovableFilterableTable.js

  updateFilter = (filter, items) => {
    this.setState(() => ({
      filter: filter,
      matches: filterMatches(filter, items)
    }));
  };

  onFilterChange = e => {
    this.updateFilter(e.target.value, this.props.items);
  };
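
The filterMatches() helper itself isn’t shown above (the real one lives in the repo). A plausible sketch, assuming a case-insensitive substring match on each item’s name, might look like this:

```javascript
// Hypothetical sketch of filterMatches; the real implementation is in the repo.
// Keeps items whose `name` contains the filter text, ignoring case.
function filterMatches(filter, items) {
  const needle = filter.toLowerCase();
  return items.filter(item => item.name.toLowerCase().includes(needle));
}

const coins = [{ name: "Bitcoin" }, { name: "Ethereum" }, { name: "Litecoin" }];
console.log(filterMatches("coin", coins).map(c => c.name).join(", ")); // Bitcoin, Litecoin
```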

The state tracking which cryptocurrencies have been loved is kept in App. That component maintains a loves array in state.

LFT renders the heart icon for each cryptocurrency. When you click on a heart, that currency’s id is added to loves up in App. Clicking the heart again removes it from the loves array.
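That toggle behavior can be captured as a small pure helper. The name toggleLove and this implementation are my own sketch, not code from the repo:

```javascript
// Hypothetical helper: add the id if it's absent, remove it if it's present.
function toggleLove(loves, id) {
  return loves.includes(id)
    ? loves.filter(lovedId => lovedId !== id)
    : [...loves, id];
}

console.log(toggleLove([1, 3], 2).join(",")); // 1,3,2 (first click adds)
console.log(toggleLove([1, 3, 2], 2).join(",")); // 1,3 (second click removes)
```

Pulling logic like this out of a component also makes it trivially unit-testable on its own.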

We’ll be working with LFT in this article. It’s not super complex. Still, it’s a perfect candidate for study. Thanks to React’s component paradigm, this component could easily live inside a more complicated app.

Let’s get to what you presumably came here for.

Send me link to an audio recording if you can successfully say LovableFilterableTable ten times fast. If you can, I’ll post the recording and award kudos here.

Writing tests

Smoke tests

The component that ships with Create React App, App.js, is a simple example component. Let’s take a look at the current test for it, App.test.js:

src/App.test.js

import * as React from "react";
import * as ReactDOM from "react-dom";
import App from "./App";

it("renders without crashing", () => {
  const div = document.createElement("div");
  ReactDOM.render(<App />, div);
});

We discussed two types of tests earlier, integration tests and unit tests. This test is a good example of a smoke test. It doesn’t test much. But it can provide a low-res “canary in the coal mine.”

If you’re rolling out tests for an existing code base that’s starving for specs, don’t overlook the value of some good smoke tests. They take just a few minutes to write. You don’t have to convince your whole team to roll out unit tests for every component in the code base. And a smoke test only has to save you from taking production down once to have returned on its investment.

Note that this smoke test doesn’t use Enzyme. It doesn’t even use any assertions. It’s just ensuring things don’t blow up (catch fire or “smoke”) when the component is rendered.

The term “smoke test” comes from working with hardware. Literally: If we flip this thing on, will it catch fire and start producing smoke? That’s basically what we’re doing here, except with zero risk for injury #safetyfirstsoftware.

Let’s write a quick smoke test for the component we downloaded, LovableFilterableTable.js. Create a new file in src/, LovableFilterableTable.test.js. Then write the following:

src/tests/1/LovableFilterableTable.test.js

import React from "react";
import ReactDOM from "react-dom";
import LovableFilterableTable from "../../LovableFilterableTable";

describe("LovableFilterableTable", () => {
  it("renders without crashing", () => {
    const div = document.createElement("div");
    ReactDOM.render(<LovableFilterableTable />, div);
  });
});

This test is a mirror image of the smoke test for App.js.

Note that all the file paths above the code blocks refer to the file paths in the repo: https://github.com/acco/crypto-search. You can hop over to the files in that repo at any time to see the full file.

Let’s fire up Jest so it can execute this test. I’m a professional, so this is really just ceremonial because obviously —

And we're off to the races

Oh goodness.

The error came from this line of code:

src/LovableFilterableTable.js

    const { schema, onHeartClick } = this.props;
    const keys = Object.keys(schema);

It would appear Object.keys() is complaining about the argument we gave it. Indeed, schema here would be undefined.

Our component is expecting three props:

  • items
  • schema
  • onItemLoved

items is the list of cryptocurrencies. schema allows us to define the columns in the table, e.g. the header.
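We haven’t looked at tableSchema itself, but from the Object.keys(schema) call we saw earlier we know it’s an object whose keys drive the table’s columns. Something in this spirit (the field names and labels here are hypothetical):

```javascript
// Hypothetical schema: keys name the item fields, values supply header labels.
// The real tableSchema lives in src/App.js in the repo.
const tableSchema = {
  name: "Name",
  price_usd: "Price (USD)"
};

// The component derives its columns from the keys:
console.log(Object.keys(tableSchema).join(",")); // name,price_usd
```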

onItemLoved is a prop-function defined in App. We don’t need to set this prop right now in our test as LovableFilterableTable won’t explode if this prop is undefined.

With testing – as with all things in life – we fake it until we make it. For items, we can just supply an empty array. For schema, let’s cheat and import the schema from App:

src/tests/2/LovableFilterableTable.test.js

import { tableSchema } from "../App";

describe("LovableFilterableTable", () => {
  it("renders without crashing", () => {
    const items = [];

    const div = document.createElement("div");
    ReactDOM.render(
      <LovableFilterableTable items={items} schema={tableSchema} />,
      div
    );
  });
});

With these uninspiring but sufficient props, LovableFilterableTable is happy:

First test passes. We're done now right?

Our smoke test provides some value, but we can do better. Smoke tests only tell us things didn’t blow up. But they don’t tell us that things are working as expected.

Remember I promised we’d get into some theory? Here we go.

Recall from your early React days that a React component’s render() is deterministic. Given a set of props and state, we expect a component to always have the same output. Written mathematically:


render(props, state) = output
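
To make that determinism concrete, you can model render as a plain function. This toy version returns a string instead of JSX, but the principle is the same: identical inputs, identical output.

```javascript
// Toy model of render as a pure function of props and state (no React involved).
const render = (props, state) => `<span>${props.label}: ${state.count}</span>`;

console.log(render({ label: "Clicks" }, { count: 2 })); // <span>Clicks: 2</span>

// Same inputs, same output, every time:
console.log(
  render({ label: "Clicks" }, { count: 2 }) ===
    render({ label: "Clicks" }, { count: 2 })
); // true
```

This property is what makes the “given props/state, assert on output” style of test reliable.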

We can think of React unit tests as falling underneath two broad categories:

1. Given a set of props and state, assert on the output of the component. These are generally easier, so we’ll write these first.

2. Given an event (like a user interaction), assert on the behavior of the component. The behavior might be a state change or calling a prop-function supplied by a parent (propagating up an event). We can slice up assertions a few different ways here.

For #2, let’s say, for instance, the component should have made a state transition after some user behavior. We could either write assertions on the current state of the component (this.state) or on its current output.

Out of these options, there’s one I tend to prefer. We’ll discuss possible approaches when we get there.

For #1, as the ancient Chinese proverb goes, “talk does not cook rice.” Let’s get to work.

Given props, assert on output

So, the TL;DR of that last transition is: After a smoke test, the next easiest type of React unit tests to write is a set of tests in this format:

Given X props/state, the component should have Y output

We’ll focus on props first. As we saw, besides schema, our component expects two more props:

  • items – the list of items to render to the table
  • onItemLoved – the prop-function to call when the user clicks on an item

We’ll deal with onItemLoved later. For now, let’s write some assertions like:

Given an items prop that looks like some array X, the component should have Y output

In these situations, it’s usually good to start with the notorious empty case. The empty case won’t tell us much about our component’s output. But like the smoke test, it will sound the alarm if a programmer forgot to account for the scenario where our component is supplied an empty array of items.

We’re going to use the shallow() function from Enzyme. We’ll see what it does in a moment. For now, import it at the top of LovableFilterableTable.test.js:


import { shallow } from 'enzyme';

Here’s what our empty case spec looks like in full. Insert this spec below the last it block in the file. Then we’ll break it down:

src/tests/3/LovableFilterableTable.test.js

  it("should still render search box", () => {
    const items = [];

    const wrapper = shallow(
      <LovableFilterableTable items={items} schema={tableSchema} />
    );
    expect(wrapper.find("input").exists()).toBe(true);
  });

shallow() shallow renders the component. Our component is not written to a DOM. Instead, it’s essentially kept in its virtual DOM representation.

For our purposes, it’s important to know that shallow():

  • returns a JavaScript object called ShallowWrapper. This object has a special API.
  • renders one level deep. So any child components inside of render() will not actually be rendered. As we’ll see when we talk about isolation later, this is a good thing.

We’re using the Enzyme Wrapper method find(). find() expects an EnzymeSelector as an argument. For our present purposes, it’s enough to know that we can pass find() a string that corresponds to a CSS selector.

If you aren’t familiar with CSS selectors, here’s a quick jumping-off point. In brief, a CSS selector is a string that enables us to specify an element or group of elements on a page. CSS selectors are a prime candidate for learn-as-you-go, so don’t read up too much on them. They’re used in many places across the web, so you’re not learning anything Enzyme or React specific.

If you are familiar with CSS selectors, know that find() supports a subset of CSS selectors. You can’t get too fancy. If you do, find() will usually throw a helpful error.

Given Enzyme’s rich API, there are many ways to write the test above. We’ll become familiar with alternatives soon. Patience!

Our first find() spec is really testing two things:

  • That LovableFilterableTable does not blow up when supplied with empty items
  • That LovableFilterableTable still outputs the search box even when items is empty

Broadly speaking, we’re using Enzyme to assert that the output of our component matches expectations.

We should test one more thing: That LovableFilterableTable does not render any rows when supplied with empty items.

Again, there are a few ways to write this spec. We’ll use find() with a CSS selector again. We want to ensure there are no tr elements inside the table body:

src/tests/3/LovableFilterableTable.test.js

  it("should have no table rows", () => {
    const items = [];

    const wrapper = shallow(
      <LovableFilterableTable items={items} schema={tableSchema} />
    );
    expect(wrapper.find("tbody > tr").exists()).toBe(false);
  });

We’re using find() with exists() again, but this time asserting that the output does not contain something.

This looks good on the surface. But here’s the thing:

Writing assertions on the absence of elements is always tricky.

Why? Because how do you know that you didn’t just make a mistake in selecting the element in the spec? Imagine you made a typo in the selector:


expect(
  wrapper.find('tbody > rt').exists() // easy spelling mistake
).toBe(false);

Later on, your test suite might falsely report that things are working a-ok:

Dog in fiery room says: Everything is fine!

The false-positive kind of spec is the worst kind; it’s worse than not having a spec at all.

To make these types of specs more robust, it’s always best to write the mirror spec for the presence of elements. We’ll see what those look like in the next section.

But, before we move on, we gotta handle something.

You Do-Not-Repeat-Yourself purists have been sweating bullets since seeing that last code block. In fact, you haven’t read anything I’ve written since then, because a refactor has been screaming at you like a sriracha blemish on your coworker’s pressed white Oxford.

I’ve got you. Things will be OK.

I repeat: Things will be OK.

Setting up context

4-star specs consist of two parts: the setup and the assertion. The setup is where we establish the context. Things like: “When the user is logged in” or “When the server returns an error.” For React: “When the component is supplied X props” or “When the user clicks Y button.” We then write assertions inside one or more specs given the context.

5-star specs use the testing framework’s API to achieve this.

Let’s see a 5-star solution.

Testing frameworks have functions that help establish context for your specs. You’ll see functions like these across languages: before(), beforeEach(), beforeAll(), afterEach(), etc.

Let’s refactor our empty case specs above using beforeEach(). We’ll use the beforeEach() function to establish what code we should run before each spec. We’ll discuss it below:

src/tests/4/LovableFilterableTable.test.js

describe("LovableFilterableTable", () => {
  let wrapper;
  // ...
  describe("when given empty `items`", () => {
    const items = [];

    beforeEach(() => {
      wrapper = shallow(
        <LovableFilterableTable items={items} schema={tableSchema} />
      );
    });

    it("should still render search box", () => {
      expect(wrapper.find("input").exists()).toBe(true);
    });

    it("should have no table rows", () => {
      expect(wrapper.find("tbody > tr").exists()).toBe(false);
    });
  });

So clean and so DRY. Here’s what’s up:

  1. We declare let wrapper; at the top of our outer-most describe block. We do this because we’ll be using the wrapper variable again and again in our test suite. Declaring variables “outside” of it blocks is a common JavaScript testing pattern.
  2. We added another describe block. This describe block demarcates a new context for us. Here, the context is “when given empty items.”
  3. We declare the items variable, an empty array.
  4. We shallow render our component inside the beforeEach() block. This beforeEach() applies to all the specs inside the “when given empty items” describe block. That means the function will be invoked before each spec; we re-render our component between each spec. While not necessary for these read-only specs, this practice becomes important whenever we manipulate the component. We’ll see this later.
  5. We were able to remove the shallow render from each individual spec.

Not only is this spec file now DRY, but it’s also super readable. Given a little instruction on how Jest specs are organized, even non-coders on the team could get a general idea of what’s going on here. What’s more, this structure is easy to extend. Want to add another spec in the future for this empty case? Easy. Just throw the spec into that describe block.

5-star Amazon review

Into Behavior-Driven Development? Me too. We won’t explicitly talk about the paradigm here, but for the unacquainted: the idea is that we use user behavior to both organize and define our test suite.

As great as these tests are, later in this series we’ll pick up some tricks that will make them even better.

But we’re not ready for these tricks just yet. Baby steps. For now, let’s write the specs for the scenario where items is non-empty.

With populated items

Minin’ my own business since ’09
– Grandpa, OG Bitcoin Miner

Here’s our strategy for this next set of specs:

  • Write a new describe block that sets up the context where LovableFilterableTable is rendered with an array of items
  • Write a couple of specs inside this context
    • One of those will be a “mirror” spec to the “should have no table rows” spec we just wrote

We need items to be an array of cryptocurrency objects. But we don’t need it to be super detailed. Again, fake it / make it.

We can declare partial objects like this for now:

src/tests/5/LovableFilterableTable.test.js

  describe("when given some `items`", () => {
    // Presence in this array does not indicate endorsement
    const items = [
      { id: 1, name: "Bitcoin" },
      { id: 2, name: "Ethereum" },
      { id: 3, name: "Litecoin" }
    ];

Let’s write two tests:

  • should render corresponding number of tr elements
  • should include the title of each item

Let’s start with the first one. This is our “mirror spec” for the same spec in the empty case that asserts that tr elements are not present:

src/tests/5/LovableFilterableTable.test.js

    it("should render corresponding number of table rows", () => {
      expect(wrapper.find("tbody > tr").length).toEqual(3);
    });

Not bad, right?

For the next spec — “should include the title of each item” — things get a little trickier.

There are a few ways we could approach this. For instance, we could just see if these titles appear anywhere in the output of LovableFilterableTable.

A slightly better approach would be to try to find matching HTML “snippets.” We’ll talk about why this is preferable later. Let’s see what this looks like:

src/tests/5/LovableFilterableTable.test.js

    it("should include the title of each item", () => {
      items.forEach(item => {
        expect(
          wrapper.containsMatchingElement(
            <td>
              {item.name}
            </td>
          )
        ).toBe(true);
      });
    });

Here, we’re using Enzyme’s containsMatchingElement. We’re using JSX in our spec! What’s interesting is that we’re basically hunting for a snippet of HTML that should be appearing in our component’s output.

This pattern of writing expectations for “bits of HTML” that should show up in our component’s output is quite common. So common, in fact, that soon we’ll see an even easier way to write specs just like this.

Again with the tease. Listen – I gotta hold your attention somehow!

Enzyme’s contains() matches all properties of elements. This is usually more pedantic than we’d like.

For instance, if we were writing a test to assert that LFT was rendering our input element, a spec using contains() would look like this:


// Yes, but no.
expect(
  wrapper.contains(
    <input
      type="text"
      placeholder="Filter..."
      id="filterField"
      style={{ minWidth: "300px" }}
      onChange={() => {}}
      value={"does not matter"}
    />
  )
)

So we just wrote a few tests and we’re feeling good. This is so easy! Let’s write like three more! Or six more!

No – Let’s write a hundred tests! Because when it comes to testing, the more the merrier. Right?!

“The Art of Specs” with Sun True

Hold your horses, Eager Eddie.

Let your specs run free!!!

Tests are not one-and-done. They carry a maintenance burden. That burden can rear its head a month or a decade down the road. There are diminishing returns. Write too many redundant tests and you saddle any future refactors with huge test suite refactors as well. If your tests are brittle, the smallest changes will cause you all sorts of trouble.

It’s ultimately up to your team and your team’s testing philosophy. But, it’s worth noting that unless your team is religious about full test coverage, the most sustainable solution lies well below 100% coverage.

We won’t delve too deep into these black arts in this series. But a good way to begin exploring redundancy in your own tests is by asking yourself some questions:

  • What is tested exclusively between this set of specs? That is, in what situations would one spec fail and not the other(s)?
  • What are the chances that these situations will occur?
  • Would having separate specs – and therefore, separate failures – help a future developer to identify what is wrong with significantly more speed?

Notice that a good spec does double-duty.

First, it makes an assertion on behavior, protecting expected functionality should a code change accidentally break something.

Second, in doing so, it should also be a helpful diagnostic tool. A 5-star test suite won’t just tell you something is broken; it will point fingers right at the source of the error.

Again, I want to steer our ship away from the whirlpool of test suite arts and towards the calmer waters of, say, the Enzyme API. The fact I still have your attention is semi-miraculous and I don’t want to lose it.

But let’s quickly apply these questions to the two specs we just wrote. Hopefully this will tune up your spec skepticism (specticism? work with me).

We’ll call the first spec (testing the number of table rows) spec A. The other spec B.

What is tested exclusively between the set of specs? That is, in what situations would one spec fail and not the other(s)?

The situation where A passes and B doesn’t: We have a bug where we’re rendering table rows but not rendering data in each row.

The situation where B passes and A doesn’t: We’re doing something weird like rendering each item twice.

Which leads us to …

What are the chances that these situations will occur?

We are using – and probably always will use – map() over the array prop items. Making an error where we render each item twice seems highly unlikely.

If we ever encounter the situation where our table is, say, doubling items, it is much more likely this issue is stemming from somewhere else in the code base. Not our map() call.

So spec A is already looking a little excessive …

Would having separate specs – and therefore, separate failures – help a future developer to identify what is wrong with significantly more speed?

This is when it’s helpful to consider the complexity of the code under test. These specs are testing a single map() call. This function is relatively simple.

Remember, specs perform double-duty: they both help assert behavior and in turn diagnose the code base when things go awry.

As long as we feel confident that we have strong assertions on behavior, we should try to keep the specs around this simple function as trim as possible. Given the complexity of our component, it is unlikely that more specs will help with significantly faster diagnostics.

So, this was a long-winded way to conclude: We can add a bunch more specs, but it is most likely that whatever specs we add will just succeed and fail together. And not help with diagnostics.

We’ll keep the first spec (the “length” spec). But here’s why I love the containsMatchingElement() spec: We are asserting on what the user sees.

Let me tease this again: We’re going to explore a solution soon that will make writing tests like this even easier. It will make it so painless that you’ll probably find in most situations that you just skip my laborious (overwrought?) questions and opt instead to effortlessly pluck this low-hanging (and super juicy) fruit.

Speaking of juicy fruit:

A quick note on TDD

It’s worth noting that Test-driven development (TDD) enthusiasts enjoy the paradigm because often it helps them determine what tests to write. If you follow the methodology, you follow a “red-green” development loop. The not-doing-it-justice quick summary:

  1. Write the spec for a desired feature.
  2. Run your test suite. The spec will fail as the feature has not been implemented yet.
  3. Write the bare minimum feature you need for the spec to pass.
  4. Spec greens.

You repeat this cycle until your test suite fully expresses the feature you desire.
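The loop above can be sketched in miniature. This `formatPrice` helper is hypothetical (it’s not part of our app) — it just shows the red-green rhythm:

```javascript
// A minimal red-green sketch (hypothetical formatPrice helper, not from our app).
//
// Step 1: write the spec first.
//   it("formats a USD price", () => {
//     expect(formatPrice("2717.71")).toEqual("$2,717.71");
//   });
//
// Step 2: run the suite — it fails, because formatPrice doesn't exist yet.
//
// Step 3: write the bare minimum implementation:
const formatPrice = usd => {
  const [dollars, cents] = Number(usd).toFixed(2).split(".");
  // Insert a comma before every group of three trailing digits.
  const withCommas = dollars.replace(/\B(?=(\d{3})+(?!\d))/g, ",");
  return `$${withCommas}.${cents}`;
};
//
// Step 4: the spec greens. Repeat the loop for the next behavior
// (negative prices, rounding, etc.), one failing spec at a time.
```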

I’ve TDD’d in the past and loved it. I’ve not TDD’d in the past and loved it. Even TDD’s fiercest advocates admit there is a place and a time. 3Did Quora make you sign in? I am so sorry.

I’m still researching TDD practices in the React community. I’ll be writing an article soon where I TDD something and comment liberally on how it feels. Subscribe to hear about it. Or, if you TDD your React components, I’d love to hear from you!

However, all that said, chances are if you’re reading this post your team is still in the “test curious” phase. Think about the kind of ask you’re making for your team. If your team doesn’t already have a strong testing culture, asking for TDD, unit tests, integration tests, 100% coverage – that’s a huge ask. TDD is a whole different way to write code.

As Sun True once said: “O influential coder, pick your battles.” Get a Trojan horse through the door – “Oh just some smoke tests and important unit tests” – and then expand your team’s testing culture from there.

Related: I love stories about how individual contributors or team leads got their team to adopt more of a testing culture. Do you have a story? Please please tell me about it.

But what about interactivity?

We’ve just written a nice little battery of specs, had some nice conversation about them, and suffered through some high school history allusions.

But we don’t use React to just organize our HTML. We use React because it enables us to build rich user interfaces. React components change form and shape based on the behavior of our users.

We know how to test the output of a component given props. But how do we write tests for the real juice: responding to user interactions?

Simulating user interactions

In many ways, so far we’ve tested our component at rest. Now we want to stimulate our component in some way and see how it responds.

Our app as a whole has two pieces of user interactivity:

  • A user can filter the list of cryptocurrencies using the search box
  • A user can favorite/love a cryptocurrency

When filtering cryptocurrencies, the user is directly impacting the state of the component.

When loving a cryptocurrency, LovableFilterableTable calls a prop-function passed down by its parent. State is modified up there, in the parent component.

One interaction modifies the state local to LFT. The other modifies the state somewhere else.

These are two separate classes of interactions. We will deal with them separately.

Let’s start with the local state modification.

Filtering cryptocurrencies

As before, we’ll be writing our specs by first establishing context and then writing assertions given that context.

Now’s a good time to list out all the real-world user behaviors. We can then select a scenario to write tests for:

  1. With no items in the list (empty case)…
    a. Populating the search box … doesn’t blow things up
  2. With a list of items…
    a. The search box is empty. The user types some stuff in. Their query matches some cryptocurrencies. We expect only those matching cryptocurrencies to be displayed.
    b. The search box is populated. The user backspaces, removing a few letters. We expect more matching cryptocurrencies to be displayed as they match against the shorter search string.
    c. The search box is populated. The user clears the search box. We expect all the cryptocurrencies to come back.

Per our discussion about exhaustive testing earlier, let’s be picky about what we write tests for.

For instance, 1a is valid. A robust app would not blow up here. But if the user is in a situation where there are no items in the list and they’re searching against that empty list – our app has bigger problems.

2b and 2c are basically derivatives of 2a. Let’s start with 2a. See how we feel. Go from there.

Our first “user interaction”

Because we’re 5-star test writers, we’re always looking for the context we should set up before writing our specs. The describe/beforeEach combo that comes before our it blocks.

Our context will be set up using this BDD-style lingo:

Given a list of items and an empty search box, the user types stuff in …

Before, we could get away with just having three items for our test data. But now we’re doing some filtering on search strings. It would be nice to test our search against a larger example set of cryptocurrencies.

We could get inventive and spend the next hour creating a bunch of fake cryptocurrency data. That might even be fun. 42Coinz, backed by a gold standard? The von Coinmann with a self-replicating mining algorithm? I’ll stop.

But, we’re both busy, right? So let’s just use the data returned by the coinmarketcap.com API.

It would be nice if our test didn’t rely on the coinmarketcap.com API to work. It’s generally a good rule that tests should minimize external dependencies. We can instead download the response and load it into our test suite.

Download the response. We’ll put it in src/:


curl "https://api.coinmarketcap.com/v1/ticker/?limit=100" > sample-data.json

If we were writing for #scale, it would make sense to have a separate test helper function/file that loaded this response and generated items for LovableFilterableTable. Because we’re writing for #speed, we’ll dump this logic into LovableFilterableTable.test.js.

We’ll use the fs and path libraries to help us load the sample file:

src/tests/6/LovableFilterableTable.test.js

import fs from "fs";
import path from "path";

We’ll write a helper function that will generate a list of items based on this response. We don’t have to massage the data returned by the API at all. We’ll just add an isLoved property to each item:

src/tests/6/LovableFilterableTable.test.js

const SAMPLE_RESPONSE_FILE = path.join(__dirname, "../../sample-data.json");

const generateItems = () => {
  const response = fs.readFileSync(SAMPLE_RESPONSE_FILE);
  const json = JSON.parse(response);

  return json.slice(0, 30).map(item => ({
    ...item,
    isLoved: false
  }));
};

Now we can use this function to generate items for our next set of specs:

src/tests/6/LovableFilterableTable.test.js

  describe("user enters search query", () => {
    let items;

    beforeEach(() => {
      items = generateItems();
      wrapper = shallow(
        <LovableFilterableTable items={items} schema={tableSchema} />
      );
  // ...

After shallow rendering the component, we need to:

  1. Use Enzyme’s API to find the search box
  2. Simulate the user typing into the search box

You might be feeling pretty #confident right now – in that case, tackle step 1 without peeking! simulate() is something new for us. You can either practice your documentation-parsing skills or just scroll down:

src/tests/6/LovableFilterableTable.test.js

  describe("user enters search query", () => {
    let items;

    beforeEach(() => {
      items = generateItems();
      wrapper = shallow(
        <LovableFilterableTable items={items} schema={tableSchema} />
      );

      const searchBox = wrapper.find("input");
      searchBox.simulate("change", { target: { value: "coin" } });
    });

🤗 We just simulated our first event! If you’ve tested JavaScript apps before, you may be feeling a little giddiness inside. That’s completely natural. Let it happen.

The arguments passed to simulate() might strike you as a little funny. Here’s how it works:

  • The first argument is the name of the event type. All this does is map one-to-one with the event prop on the element itself. So, for buttons, the event prop is onClick. We pass simulate() the string click. For text fields, it’s onChange, hence the event name here.
  • The second argument is the event object that will be passed to our event handler. Because React events take this form ({ target: { value: 'coin' }}) we just mimic that shape here.

So, simulate() is actually a little “dumber” than you might think. There is little wizardry. We’re just telling our component to call the function specified by the given event handler.
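Here’s a rough plain-JavaScript sketch of that idea (not Enzyme’s real internals — the element and handler here are hypothetical stand-ins):

```javascript
// What shallow simulate("change", event) roughly boils down to: look up the
// element's corresponding event prop and call it with the event object you
// supply. No synthetic event system, no DOM.
const element = {
  props: {
    onChange: e => `you typed: ${e.target.value}`
  }
};

const simulate = (el, eventName, eventObj) => {
  // "change" maps to onChange, "click" to onClick, and so on.
  const propName = "on" + eventName[0].toUpperCase() + eventName.slice(1);
  return el.props[propName](eventObj);
};

const result = simulate(element, "change", { target: { value: "coin" } });
// result is "you typed: coin"
```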

We’re simulating the typing of the string “coin.” Remember, the search box matches sub-strings anywhere in the string. Therefore, it’s a pretty good guess that we’ll have a fair number of results for “coin.”
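The matching rule our upcoming specs lean on looks something like this (a sketch — the component’s actual implementation may differ): a case-insensitive substring match.

```javascript
// Case-insensitive substring matching, the behavior our specs assume.
const matchesQuery = (name, query) =>
  name.toLowerCase().includes(query.toLowerCase());

const names = ["Bitcoin", "Ethereum", "Litecoin", "Ripple", "Dogecoin"];
const matching = names.filter(n => matchesQuery(n, "coin"));
// matching is ["Bitcoin", "Litecoin", "Dogecoin"]
```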

Our context is established. We’re rendering our component. We’re simulating an event. Do you feel a 5-star spec on the horizon? Because I do.

5 stars on the horizon

Continuing with what we have so far, we might write a spec like this:

src/tests/6/LovableFilterableTable.test.js

    it("should render a subset of matching `items`", () => {
      const matching = items.filter(i => i.name.match(/coin/i));

      matching.forEach(match => {
        expect(
          wrapper.containsMatchingElement(
            <td>
              {match.name}
            </td>
          )
        ).toBe(true);
      });
    });

This would work. But, can you spot the flaw?

What if our filter didn’t work at all? This would still pass!

So we’d have to pair it with a test like this:

src/tests/6/LovableFilterableTable.test.js

    it("should not render the `items` that don't match", () => {
      const notMatching = items.filter(i => !i.name.match(/coin/i));

      notMatching.forEach(item => {
        expect(
          wrapper.containsMatchingElement(
            <td>
              {item.name}
            </td>
          )
        ).toBe(false);
      });
    });

This is better, because our specs when combined are more meaningful.

But this exercise leaves me with the same feeling I get when I finish a cup of decaf coffee. A little weird and a little unfulfilled.

Let’s zoom out a bit. There’s a pattern emerging here.

What we want to do is assert that our component has an output that looks a certain way. Note that above we’re taking little HTML snippets – td elements – and matching them against the component’s output.

We’ve seen this movie before. This is basically what component testing is all about. So there has to be an easier way … right?

No, I’m not about to tease you again. I’m about to give it to you.

At last, I think you’re ready for it.

The easier way: Jest Snapshots

Jest snapshots enable us to save a “snapshot” of the output of a component. We can inspect the snapshot and ensure it is sound. Jest will then assert from then on that the output of the component matches that snapshot.

Lots of words. Let’s see what this looks like in practice.

Using Jest snapshots requires a slightly different test-writing flow.

Snapshots are not just for components. They basically give us an alternative to toEqual():


expect(getBicycleInfo("cervelo r3 ultegra")).toEqual({
  crankset: "Rotor 3D30",
  saddle: "Fizik Antares",
  frontDerailleur: "Shimano Ultegra",
  // ...
});

To write as a snapshot test, we replace toEqual() with the special toMatchSnapshot():


expect(getBicycleInfo("cervelo r3 ultegra")).toMatchSnapshot();

Let’s do one simple dummy snapshot. Then we’ll get #serious.

src/tests/7/LovableFilterableTable.test.js

    it("testing out snapshots", () => {
      const item = items[0];
      expect(item).toMatchSnapshot();
    });

The first time we run this test, Jest reports that it created a new snapshot:

We can check it out ourselves. Jest puts snapshots inside a sibling folder to our spec called __snapshots__:

src/tests/7/__snapshots__/LovableFilterableTable.test.js.snap

exports[`LovableFilterableTable user enters search query testing out snapshots 1`] = `
Object {
  "24h_volume_usd": "1364150000.0",
  "available_supply": "16405075.0",
  "id": "bitcoin",
  "isLoved": false,
  "last_updated": "1498150152",
  "market_cap_usd": "44584236378.0",
  "name": "Bitcoin",
  "percent_change_1h": "0.13",
  "percent_change_24h": "0.17",
  "percent_change_7d": "17.38",
  "price_btc": "1.0",
  "price_usd": "2717.71",
  "rank": "1",
  "symbol": "BTC",
  "total_supply": "16405075.0",
}
`;

There’s the serialized version of our object!

If we run Jest again (by hitting “Enter” while in watch mode) we’ll see that our spec passes. Our object looks exactly the same. It matches the snapshot. So that spec passes.

Let’s do something nefarious. Let’s modify the snapshot directly, corrupting it 5Think and grow rich?:


"price_usd": "100000000",

Now if we run our Jest spec again:

Jest kindly informs us that things are not as they should be.

When a snapshot test fails, we have two options:

  1. Fix whatever is broken.
  2. Tell Jest that the received value is, indeed, what we want. That is, we want to update the snapshot.

While Jest is in watch mode, we can press the “u” key to update our snapshots. Let’s punch “u” now:

Cool. If we open up our snapshot file again, we will see order is restored. price_usd is back to normal and our fantasy world has collapsed.

With our newfound knowledge of snapshots, let’s turn our attention back to our lovable and filterable table.

There are many ways to write our snapshot test. Here’s one:

src/tests/7/LovableFilterableTable.test.js

    it("should render a subset of matching `items`", () => {
      expect(wrapper.find("tbody").first().html()).toMatchSnapshot();
    });

The .first().html() is really important. .html() converts the Enzyme object into an HTML string. This HTML string works way better with the snapshotter.

Because the Enzyme ShallowWrapper object contains a lot of extra properties, if we were to serialize that into a file that file would be needlessly huge (like over 15,000 lines huge).

In fact, as we’ll see in a moment, even this snapshot isn’t terse enough. Below the last snapshot spec, write another one for comparison:


it("should filter items", () => {
  expect(
    wrapper.find("tbody > tr > .item-name").map(i => i.html())
  ).toMatchSnapshot();
});

Here, we’re getting very precise. We’re using a precise selector in find to grab all the titles rendered in the table. We’re then converting those elements to their HTML representation.

Save the file and let Jest take its snapshots.

Peering into our snapshots file, we see snapshots for our two additional specs.

The snapshot for the first spec is still big. Too big to reasonably show here. That’s because it contains all the HTML around the entire table of cryptocurrencies.

The snapshot for the second spec, however:

src/tests/7/__snapshots__/LovableFilterableTable.test.js.snap

exports[`LovableFilterableTable user enters search query should filter items 1`] = `
Array [
  "<td class=\\"item-name\\" style=\\"padding:5px;\\">Bitcoin</td>",
  "<td class=\\"item-name\\" style=\\"padding:5px;\\">Litecoin</td>",
  "<td class=\\"item-name\\" style=\\"padding:5px;\\">Siacoin</td>",
  "<td class=\\"item-name\\" style=\\"padding:5px;\\">Bytecoin</td>",
  "<td class=\\"item-name\\" style=\\"padding:5px;\\">Dogecoin</td>",
  "<td class=\\"item-name\\" style=\\"padding:5px;\\">MaidSafeCoin</td>",
]
`;

It’s still not quite ideal. It’s not super human-friendly. The escaped strings for class and style are particularly cute.

But it’s much terser. And more readable. This is desirable. We want to make efficient snapshots. Ones that cover the bare minimum details that they need to in order to do their job.

Why does efficiency or “readability” of the snapshot matter, you might ask?

And even if you wouldn’t ask, I’ll ask for you: Why does readability of the snapshot matter?

Snapshot strategy

Jest snapshots alter the way we make assertions. Instead of you explicitly hard-coding into the spec what output is expected of a component, you become an arbiter of changes. Jest presents you with a diff and you give your “yay” or “nay.”

This has its benefits, which we’ve witnessed. Specs are faster to write, your specs are “cleaner,” and specs are much easier to update in the future should expected behavior change.

Another way to think about it: Jest snapshots reduce friction. Reducing friction is generally good!

But friction reduction always has its externalities. 6Remember when you first signed up for Amazon Prime? “This is great! 2-day free shipping! With one click!” …and then your credit card bill arrived.

With Jest Snapshots, the issue is if you – the grand arbiter – miss something.

That is, imagine that you make some change to a component. So you expect the Jest snapshots will need to be updated.

After making your change, Jest will present you with a giant diff. Let’s say it’s not only giant, but a little hard to read as well. You are super important and have other work to be doing, people to be seeing, places to be visiting. And you just got some Snapchats from Sheryl who is cave diving in Tulum. And so you give it a scan, give it a 👍, and move on with your life.

Easy to see how you might miss something, yeah? You can “overlook” an unexpected change. When you’re forced to hard-code expected values into a spec file, that increase in ceremony (friction) makes committing this mistake more difficult.

So, trade-offs. They are a law of the universe.

But, generally, so is mitigation.

To reduce the chances of your Snapchat attention sabotaging your snapshot arbitration, you should always strive to keep your snapshots either minimal or readable or both. That’s why we do the “data massaging” that we do above.

Massaging your data in this manner is totally fine. But you might find yourself repeating some data mutation patterns again and again.

For instance, the html() conversion we’re doing with our Enzyme-wrapped components – we will have to do that for every snapshot test.

If you want to turn #pro, you can use snapshot serializers. A serializer is a function that takes the input and converts it into the string representation used in the .snap file. When Jest’s built-in serializer isn’t doing enough for you, you can use your own.
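In the classic plugin shape, a serializer is an object with a test() predicate (should I handle this value?) and a print() formatter (how should it appear in the .snap file?). This hypothetical one renders our items as “Name (SYMBOL)” lines instead of a full object dump:

```javascript
// A hypothetical snapshot serializer for our item arrays (illustrative only).
const itemListSerializer = {
  // Should this serializer handle the value?
  test: val =>
    Array.isArray(val) && val.every(v => v && v.name && v.symbol),
  // How should the value appear in the .snap file?
  print: val => val.map(v => `${v.name} (${v.symbol})`).join("\n")
};

// In a test file you would register it with:
//   expect.addSnapshotSerializer(itemListSerializer);
```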

Snapshot serializers (when you turned #pro)

To see serializers in action, let’s use jest-serializer-enzyme which, you guessed it, is a serializer for Enzyme. (You are so sharp and beautiful)

First, install the serializer:


$ npm install --save-dev jest-serializer-enzyme

There are a few ways to use a serializer in a test suite. Here’s one. Near the top of our test file, above the global describe block:

src/tests/8/LovableFilterableTable.test.js

import serializer from "jest-serializer-enzyme";
expect.addSnapshotSerializer(serializer);

With this serializer in place, we can write a test like this:

src/tests/8/LovableFilterableTable.test.js

    it("should filter items (with serializer!)", () => {
      expect(wrapper.find("tbody")).toMatchSnapshot();
    });

And here’s a sample of the snapshot (which totals ~400 lines):

src/tests/8/__snapshots__/LovableFilterableTable.test.js.snap

// Jest Snapshot v1, https://goo.gl/fbAQLP

exports[`LovableFilterableTable user enters search query should filter items (with serializer!) 1`] = `
<tbody>
  <tr
    style={Object {}}
  >
    <td
      className="item-name"
      style={
        Object {
          "padding": "5px",
        }
      }
    >
      Bitcoin
    </td>
    <td
      className="item-symbol"
      style={
        Object {
          "padding": "5px",
        }
      }
    >
      BTC
    </td>

The snapshot is very readable. The serializer handled the Enzyme wrapper object and converted it into a minimal representation, formatting it with whitespace so that it is human-friendly. Now, the diffs will be readable as well. This is the key.

This snapshot may be good enough for your purposes. Or maybe you’d prefer to strip it down even more. Play with it, see what works for you. Just keep your Snapshots clean and readable.

If there’s demand for it, I can go deep into custom serializers in a future post.

Jest snapshot etiquette is akin to beach etiquette

Another mitigation piece: The code review!

Snapshots make viewing diffs in your version control system a cinch. So get another pair of eyes on that snapshot. At the least, you can try to pass on some of the blame in the future. 7I believe effective blame-passing was the sixth essential quality Dave Thomas cites in The Pragmatic Programmer. But my buddy Shiva told me that, so sorry on his behalf if I’m misquoting.

More interactivity: Clearing the search field

Let’s tackle one more bit of user interactivity: When the user clears the search box. We expect all the cryptocurrencies to come back.

I’m hoping to bait you down a certain path here so be careful.

We’re building up on context we’ve already established. We want to make sure that after we’ve performed some filtering (user types ‘coin’) when they clear this search all the cryptocurrencies come back.

This is a decent spec to write because (a) this is very likely behavior and (b) bugs emerging around clearing a filter are conceivable.

We can nest this describe block inside the one we just wrote. That is:


describe("LovableFilterableTable", () => {
  describe("user enters search query", () => {
    describe("user clears search query", () => { // our upcoming context

Can you sort out the setup yourself? It’s a little tricky, but you’ve made it this far in this article so I have faith in you:

src/tests/9/LovableFilterableTable.test.js

    describe("user clears search query", () => {
      beforeEach(() => {
        const searchBox = wrapper.find("input");
        searchBox.simulate("change", { target: { value: "" } });
      });

Remember: This beforeEach is called after the beforeEach where the user types in “coin.” So, the sequence:

First beforeEach block:

  • we generate the items
  • the component is shallow rendered
  • the user types in “coin”

Then in this beforeEach block:

  • the user clears the search field

All those steps will be executed before every spec that we write in this block. Neat, right?
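The stacking order can be sketched in plain JavaScript (mimicking Jest’s hook ordering, not its real implementation — the hook and spec names are stand-ins for ours):

```javascript
// Outer beforeEach hooks run before inner ones, and the full chain runs
// before every spec in the innermost describe block.
const log = [];

const outerHooks = [
  () => log.push("generate items"),
  () => log.push("shallow render"),
  () => log.push("type 'coin'")
];
const innerHooks = [() => log.push("clear search")];

const runSpec = specBody => {
  [...outerHooks, ...innerHooks].forEach(hook => hook());
  specBody();
};

runSpec(() => log.push("assert all items render"));
// log is ["generate items", "shallow render", "type 'coin'",
//         "clear search", "assert all items render"]
```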

Our assertion:

src/tests/9/LovableFilterableTable.test.js

      it("should render all the items again", () => {
        expect(wrapper.find("tbody > tr").length).toEqual(items.length);
      });

You could use a snapshot here. But think about what we care about in this specific spec. We just care that all of our items are rendered in the table again.

In the specs prior, we perform a snapshot and make sure the information in the table is sound. In this spec, we don’t have to do that again. We can be less detailed.

If there’s something wrong with the rendering of our table – we’re missing certain columns or our style unexpectedly changed – we’ll expect our prior specs to raise the alarm.

Narrowly speaking, I’m reminding you that you don’t need to snapshot everything.

More broadly, I’m pointing out that you should always keep in mind spec overlap. And the question: How can I test this specific piece of behavior? A more specific spec is more helpful in diagnostics. And if we make a change in columns or styling, it won’t break this spec. Keeping our specs focused makes things less brittle when we make updates in the future.

Programming is complex. We’re touching on the dark arts of testing again. There is no one “right” spec to write here. But the arguments I just raised are ones that I find to be compelling, not the least because I raised them. 8Choice-supportive bias? Confirmation bias? Not sure, never finished HPMOR.

So that was one piece of bait 🎣

But let me string another one in front of you.

Perhaps this business is still a little funky to you:

src/tests/9/LovableFilterableTable.test.js

    describe("user clears search query", () => {
      beforeEach(() => {
        const searchBox = wrapper.find("input");
        searchBox.simulate("change", { target: { value: "" } });
      });

You flip back through the notes you’ve been diligently taking. And you point out that at the beginning of this series, we discussed two categories that unit tests for React components generally fall into. The first:

  1. Given a set of props and state, assert on the output of the component.

And state. So we’re doing this simulate stuff. But, ultimately, we’re just modifying the state on the component.

So, what if we treated local state modification kind of like we treated props in earlier specs?

Simulating vs setState()

Ok, let’s entertain this. Instead of simulate, we could instead test the update function on LovableFilterableTable in isolation by calling it directly:


beforeEach(() => {
  items = generateItems();
  wrapper = shallow(<LovableFilterableTable items={items} schema={tableSchema} />);

  wrapper.instance().updateFilter("coin", items);
  wrapper.update();
});

instance() returns the JavaScript instance of the component. This enables us to directly access the instance’s updateFilter() function.

So, we’re unit testing the component’s function. We bypass interacting with the component altogether.

We could take the unit testing a step further by doing something like this:


it("should update the state", () => {
  expect(wrapper.state("matches").map(m => m.name)).toMatchSnapshot();
});

We use another Enzyme method, state(), which reads the state of the component. We pass state() an argument which returns that particular state property (matches). We can then either assert with toMatchSnapshot() like we do here or toEqual() and hard-code the expected value.

Getting the gist of this approach?

We’re breaking down our unit tests even further. We’re testing even smaller pieces of our component system.

If you can’t tell, I’m building a strawman so don’t get too comfortable with it.

Sometimes, folks are tempted to combine these kind of component “unit tests” with something like this:


beforeEach(() => {
  items = generateItems();
  wrapper = shallow(<LovableFilterableTable items={items} schema={tableSchema} />);

  const matchingItems = items.slice(0, 3);

  wrapper.setState({ matches: matchingItems });
});

it("should filter items", () => {
  expect(
    wrapper.find("tbody")
  ).toMatchSnapshot();
});

The thinking is:

  • We’re unit testing component methods in isolation
  • Then we’re unit testing state changes just like we do with props. Given X state, assert Y output – right?

Here are my gripes with this overall approach.

The first is that we end up with a lot more specs. Remember our discussion earlier around not letting your test suite become cumbersome. Sure, we are breaking down and testing each discrete part of the workflow for this interaction. But we end up with a lot of tests for a relatively simple piece of functionality. Our test suite complexity is getting ahead of our component complexity.

The second is that we are coupling our specs to the component’s internal representation of state.

In general, our original spec – where we simulate a user behavior and then assert on the output – is good enough. We test the full pipeline, from the input component through to the rendered table rows.

Here’s a helpful thing to keep in mind: The user doesn’t care what the shape of the state tree inside our component looks like. They don’t care which method updates the state. They care that when they insert some text into the search box, the cryptocurrency list is filtered instantly.

So because the user doesn’t care, we have to ask ourselves if we should care. If this component was wildly complex, we might need more specs to help with diagnostics. If we were trying to perform a big refactor of a “legacy” React codebase, it might behoove us to thoroughly spec the beast of a component.

In refactors, specs are helpful because sometimes you know the what but not necessarily the why. So you can pin down a component’s current behavior while perhaps missing the nuances being cryptically implemented by the “cruft.”

But because things are still simple and relatively greenfield, let’s not get ahead of ourselves.

Our first spec verifies the behavior the user ultimately cares about. Should a significant bug be introduced, it is likely to throw a red flag. And, given the relative complexity of our component, it’s good enough to get us started with diagnostics.

A spec where we simulate user behavior and then assert on what the user sees – that’s a 5-star, Michelin-candidate test.

Upcoming: A quick guide on what not to test. Often just as helpful as what to test. Subscribe to hear about it.

Further, for most teams, writing tests isn’t just about writing the best test suite. It’s about writing tests in the first place!

There might be some ideal the team should hit. But compliance is key.

So, yeah, eating nothing but kale and almond salads and running 10ks every morning might be “your best” gameplan for losing weight in the world where you’re an emotionless automaton. But you’re not. So choosing a sustainable gameplan, one you can comply with, is incredibly important.

Listen, you should be thanking me. I just told you you can keep your steak & eggs breakfast. You can get away with writing a lot fewer tests than you thought. And your team – and future self – will be happier for it too.

You can start with “good enough” tests. Let users and bugs drive your test suite’s maturation. Was a bug allowed to ship due to a hole in your test suite? Patch the bug then patch the test suite.

Given all this, we can look at those two categories a different way:

  1. Given a set of props and state, assert on the output of the component
  2. Given an event (like a user interaction), assert on the output of the component

Again: if you stick to testing a component’s output, each unit test will cover a bit more ground. You’ll have to write fewer of them. But most importantly: it is likely you will end up with happier users. 9To do at a later time: a post trying to derive the mathematical formula behind this metric. I imagine it’s some function of developer happiness, developer productivity, and code reliability.

Cool. Enough life lessons.

We have specs that cover the most important details of our component:

  • That it doesn’t blow up when we render it
  • That it displays the items passed in as props
  • That it filters those items based on the user typing stuff into the search field
  • That after clearing the filter, all the items return

The filtering demonstrated what tests look like for handling local state modifications.

But there’s one glaring spec we’re missing: loving a cryptocurrency. This causes a state modification up in the parent.

We’ll leave this to the next part of the series.

Putting it all together (so far)

I’ve taken one for the team. Using all the concepts we’ve learned so far, I went back and cleaned up our test suite.

I know, I’m a champ. 🤗

Let’s take a look at it in full and then discuss the pertinent updates:

src/tests/10/LovableFilterableTable.test.js

import React from "react";
import ReactDOM from "react-dom";
import { shallow } from "enzyme";
import serializer from "jest-serializer-enzyme";

import fs from "fs";
import path from "path";

import LovableFilterableTable from "../../LovableFilterableTable";
import { tableSchema } from "../App";

expect.addSnapshotSerializer(serializer);

const SAMPLE_RESPONSE_FILE = path.join(__dirname, "../../sample-data.json");

const generateItems = (n = 30) => {
  const response = fs.readFileSync(SAMPLE_RESPONSE_FILE);
  const json = JSON.parse(response);

  return json.slice(0, n).map(item => ({
    ...item,
    isLoved: false
  }));
};

describe("LovableFilterableTable", () => {
  let wrapper;

  it("renders without crashing", () => {
    const items = [];

    const div = document.createElement("div");
    ReactDOM.render(
      <LovableFilterableTable items={items} schema={tableSchema} />,
      div
    );
  });

  describe("when given empty `items`", () => {
    const items = [];

    beforeEach(() => {
      wrapper = shallow(
        <LovableFilterableTable items={items} schema={tableSchema} />
      );
    });

    it("should render an empty table", () => {
      expect(wrapper).toMatchSnapshot();
    });
  });

  describe("when given some `items`", () => {
    beforeEach(() => {
      const items = generateItems(3);
      wrapper = shallow(
        <LovableFilterableTable items={items} schema={tableSchema} />
      );
    });

    it("should render each item in the table", () => {
      expect(wrapper).toMatchSnapshot();
    });
  });

  describe("user enters search query", () => {
    let items;

    beforeEach(() => {
      items = generateItems();
      wrapper = shallow(
        <LovableFilterableTable items={items} schema={tableSchema} />
      );

      const searchBox = wrapper.find("input");
      searchBox.simulate("change", { target: { value: "coin" } });
    });

    it("should filter items", () => {
      expect(
        wrapper.find("tbody > tr > .item-name").map(n => n.text())
      ).toMatchSnapshot();
    });

    describe("user clears search query", () => {
      beforeEach(() => {
        const searchBox = wrapper.find("input");
        searchBox.simulate("change", { target: { value: "" } });
      });

      it("should render all the items again", () => {
        expect(wrapper.find("tbody > tr").length).toEqual(items.length);
      });
    });
  });
});

Here’s what I love about this test suite:

  1. I wrote it.

  2. Check out the two “baseline” specs:

src/tests/10/LovableFilterableTable.test.js

  describe("when given empty `items`", () => {
    const items = [];

    beforeEach(() => {
      wrapper = shallow(
        <LovableFilterableTable items={items} schema={tableSchema} />
      );
    });

    it("should render an empty table", () => {
      expect(wrapper).toMatchSnapshot();
    });
  });
src/tests/10/LovableFilterableTable.test.js

  describe("when given some `items`", () => {
    beforeEach(() => {
      const items = generateItems(3);
      wrapper = shallow(
        <LovableFilterableTable items={items} schema={tableSchema} />
      );
    });

    it("should render each item in the table", () => {
      expect(wrapper).toMatchSnapshot();
    });
  });

We don’t do any massaging. We let our snapshot serializer do the work of making the serialized version of our component’s output look readable. Ultimate laziness.

Now, for the “when given some items” spec, we generate only 3 items:

src/tests/10/LovableFilterableTable.test.js

      const items = generateItems(3);

This means we can (a) happily snapshot the full table while (b) keeping our snapshot relatively trim. Sure enough, our entire snapshot file is only about 350 lines.

For testing the filtering logic, we want to work with a lot of coins. But for testing how the table looks – that it has all the columns, that the columns are populated, etc. – we only need a few coins to get the idea.

  3. Check out the assertions in the interaction specs:
src/tests/10/LovableFilterableTable.test.js

    it("should filter items", () => {
      expect(
        wrapper.find("tbody > tr > .item-name").map(n => n.text())
      ).toMatchSnapshot();
    });
src/tests/10/LovableFilterableTable.test.js

      it("should render all the items again", () => {
        expect(wrapper.find("tbody > tr").length).toEqual(items.length);
      });

Here we’re getting pickier.

We already have a spec that snapshots the table as a whole. So for the “should filter items” spec, we get very specific – we just want to test that the table only contains the cryptocurrency names that match the search string.

As a result, this is what that corresponding snapshot looks like:

src/tests/10/__snapshots__/LovableFilterableTable.test.js.snap

exports[`LovableFilterableTable user enters search query should filter items 1`] = `
Array [
  "Bitcoin",
  "Litecoin",
  "Siacoin",
  "Bytecoin",
  "Dogecoin",
  "MaidSafeCoin",
]
`;

This spec – and this snapshot – are purposely limited in scope. Same goes for “should render all the items again.” We’re just testing the number of tr elements in the table.

Feeling good? I am.

That’s all for now. Want more? In my book we get into even more stuff like Jest mock functions and working with APIs.

See you next time.