
Stop mocking fetch


Why you shouldn't mock fetch or your API Client in your tests and what to do instead.

What's wrong with this test?

```javascript
// __tests__/checkout.js
import React from 'react'
import {render, screen} from '@testing-library/react'
import userEvent from '@testing-library/user-event'
import {client} from '../../utils/api-client'

jest.mock('../../utils/api-client')

test('clicking "confirm" submits payment', async () => {
  const shoppingCart = buildShoppingCart()
  render(<Checkout shoppingCart={shoppingCart} />)

  client.mockResolvedValueOnce({success: true})

  userEvent.click(screen.getByRole('button', {name: /confirm/i}))

  expect(client).toHaveBeenCalledWith('checkout', {data: shoppingCart})
  expect(client).toHaveBeenCalledTimes(1)
  expect(await screen.findByText(/success/i)).toBeInTheDocument()
})
```

This is a bit of a trick question. Without knowing the actual API and requirements of Checkout, as well as the /checkout endpoint, you can't really answer. So, sorry about that. But one issue with this is that because you're mocking out the client, you can't really know whether the client is being used correctly here. Sure, the client could be unit tested to make sure it's calling window.fetch properly, but how do you know that client didn't just recently change its API to accept a body instead of data? Oh, you're using TypeScript, so you've eliminated a category of bugs. Good! But there are definitely some business logic bugs that can slip in because we're mocking the client here. Sure, you could lean on your E2E tests to give you that confidence, but wouldn't it be better to just call into the client and get that confidence here at this lower level, where you have a tighter feedback loop? If it's not much more difficult, then sure!
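To make the "body instead of data" hazard concrete, here's a hypothetical sketch of the kind of api-client the test above mocks away (the names and defaults are my assumptions, not the article's actual client). Notice that if this function renamed its `data` option to `body`, the mocked test would keep passing even though every real call site would be broken:

```javascript
// hypothetical api-client sketch; names/defaults are assumptions for illustration
function client(endpoint, {data, headers: customHeaders, ...customConfig} = {}) {
  const config = {
    method: data ? 'POST' : 'GET',
    body: data ? JSON.stringify(data) : undefined,
    headers: {
      // only send a Content-Type when there's actually a body
      ...(data ? {'Content-Type': 'application/json'} : {}),
      ...customHeaders,
    },
    ...customConfig,
  }
  return window.fetch(`/${endpoint}`, config).then(async response => {
    const result = await response.json()
    if (response.ok) return result
    return Promise.reject(result)
  })
}
```

A mocked `client` never exercises any of this wiring, which is exactly the confidence gap the rest of this post is about.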

But we don't want to actually make fetch requests right? So let's mock out window.fetch:

```javascript
// __tests__/checkout.js
import React from 'react'
import {render, screen} from '@testing-library/react'
import userEvent from '@testing-library/user-event'

beforeAll(() => jest.spyOn(window, 'fetch'))
// assuming jest's resetMocks is configured to "true" so
// we don't need to worry about cleanup
// this also assumes that you've loaded a fetch polyfill like `whatwg-fetch`

test('clicking "confirm" submits payment', async () => {
  const shoppingCart = buildShoppingCart()
  render(<Checkout shoppingCart={shoppingCart} />)

  window.fetch.mockResolvedValueOnce({
    ok: true,
    json: async () => ({success: true}),
  })

  userEvent.click(screen.getByRole('button', {name: /confirm/i}))

  expect(window.fetch).toHaveBeenCalledWith(
    '/checkout',
    expect.objectContaining({
      method: 'POST',
      body: JSON.stringify(shoppingCart),
    }),
  )
  expect(window.fetch).toHaveBeenCalledTimes(1)
  expect(await screen.findByText(/success/i)).toBeInTheDocument()
})
```

This will give you a bit more confidence that a request is actually being issued, but another thing this test is lacking is an assertion that the headers include a Content-Type of application/json. Without that, how can you be certain that the server is going to recognize the request you're making? Oh, and how do you ensure that the correct authentication information is being sent along as well?
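Here's a plain-Node sketch (no jest) of the extra checks the test above is missing. The exact header names and values are assumptions about what this particular backend expects:

```javascript
// record fetch calls so we can assert on headers afterward
const recordedCalls = []
function recordingFetch(url, config) {
  recordedCalls.push({url, config})
  return Promise.resolve({ok: true, json: async () => ({success: true})})
}

// stand-in for what the Checkout component would send on "confirm"
// (header values are assumptions for illustration):
recordingFetch('/checkout', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: 'Bearer some-fake-token',
  },
  body: JSON.stringify({items: []}),
})

// the checks we'd want in addition to method/body:
const {config} = recordedCalls[0]
console.assert(config.headers['Content-Type'] === 'application/json')
console.assert(/^Bearer /.test(config.headers.Authorization))
```

In jest you'd express the same thing with a nested `expect.objectContaining` on the `headers` property.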

I hear you, "but we've verified that in our client unit tests, Kent. What more do you want from me!? I don't want to copy/paste assertions everywhere!" I definitely feel you there. But what if there were a way to avoid all the extra work on assertions everywhere, but also get that confidence in every test? Keep reading.

One thing that really bothers me about mocking things like fetch is that you end up re-implementing your entire backend... everywhere in your tests. Often in multiple tests. It's super annoying, especially when it's like: "in this test, we just assume the normal backend responses," but you have to mock those out all over the place. In those cases it's really just setup noise that gets between you and the thing you're actually trying to test.

What inevitably happens is one of these scenarios:

  1. We mock out the client (like in our first test) and rely on some E2E tests to give us a little confidence that at least the most important parts are using the client correctly. This results in reimplementing our backend anywhere we test things that touch the backend, often duplicating work.
  2. We mock out window.fetch (like in our second test). This is a little better, but it suffers from some of the same problems as #1.
  3. We put all of our stuff in small functions, unit test it all in isolation (not really a bad thing by itself), and don't bother testing them in integration (not a great thing).

Ultimately, we have less confidence, a slower feedback loop, lots of duplicate code, or any combination of those.

One thing that ended up working pretty well for me for a long time is to mock fetch in one function that is basically a re-implementation of the parts of my backend I have tests for. I did a form of this at PayPal and it worked really well. You can think of it like this:

```javascript
// add this to your setupFilesAfterEnv config in jest so it's imported for every test file
import * as users from './users'

async function mockFetch(url, config) {
  switch (url) {
    case '/login': {
      const user = await users.login(JSON.parse(config.body))
      return {
        ok: true,
        status: 200,
        json: async () => ({user}),
      }
    }
    case '/checkout': {
      const isAuthorized = users.authorize(config.headers.Authorization)
      if (!isAuthorized) {
        // note: fetch resolves (rather than rejects) for HTTP error statuses
        return {
          ok: false,
          status: 401,
          json: async () => ({message: 'Not authorized'}),
        }
      }
      const shoppingCart = JSON.parse(config.body)
      // do whatever other things you need to do with this shopping cart
      return {
        ok: true,
        status: 200,
        json: async () => ({success: true}),
      }
    }
    default: {
      throw new Error(`Unhandled request: ${url}`)
    }
  }
}

beforeAll(() => jest.spyOn(window, 'fetch'))
beforeEach(() => window.fetch.mockImplementation(mockFetch))
```

Now my test can look like this:

```javascript
// __tests__/checkout.js
import React from 'react'
import {render, screen} from '@testing-library/react'
import userEvent from '@testing-library/user-event'

test('clicking "confirm" submits payment', async () => {
  const shoppingCart = buildShoppingCart()
  render(<Checkout shoppingCart={shoppingCart} />)

  userEvent.click(screen.getByRole('button', {name: /confirm/i}))

  expect(await screen.findByText(/success/i)).toBeInTheDocument()
})
```

My happy-path test doesn't need to do anything special. Maybe I would add a fetch mock for a failure case, but I was pretty happy with this.

What's great about this is that my confidence only increases, and I have even less test code to write for the majority of cases.

Then I discovered msw

msw is short for "Mock Service Worker". Now, service workers don't work in Node, they're a browser feature. However, msw supports Node anyway for testing purposes.

The basic idea is this: create a mock server that intercepts all requests and handles them just like you would if it were a real server. In my own implementation, this means I make a "database", either out of JSON files to "seed" the database, or "builders" using something like faker or test-data-bot. Then I make server handlers (similar to the express API) and interact with that mock database. This makes my tests fast and easy to write (once you have things set up).
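A "builder" in this sense is just a function that produces a plausible test object with sensible defaults you can override per test. Here's a minimal hand-rolled sketch in the spirit of test-data-bot/faker (all the field names are assumptions for illustration):

```javascript
// minimal hand-rolled builders; field names are assumptions for illustration
let idCounter = 0

function buildUser(overrides = {}) {
  idCounter += 1
  return {
    id: `user-${idCounter}`,
    username: `user${idCounter}`,
    ...overrides,
  }
}

function buildShoppingCart(overrides = {}) {
  return {
    buyer: buildUser(),
    items: [{sku: 'book-1', quantity: 2, price: 1999}],
    ...overrides,
  }
}
```

In a real suite you'd typically reach for faker or test-data-bot for more realistic values, but the shape is the same: defaults plus overrides.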

You may have used something like nock to do this sort of thing before. But the cool thing about msw (and something I may write about later) is that you can also use all the exact same "server handlers" in the browser during development as well. This lets you keep working even:

  1. if the endpoint isn't ready yet
  2. if the endpoint is broken
  3. if your internet connection is slow or non-existent

You might have heard of Mirage, which does much of the same thing. However, Mirage does not (currently) use a service worker in the client, and I really like that the network tab works the same whether I have msw installed or not. Learn more about their differences.

Example

So with that intro, here's how we'd do our above example with msw backing our mock server:

```javascript
// server-handlers.js
// this is put into here so I can share these same handlers between my tests
// as well as my development in the browser. Pretty sweet!
import {rest} from 'msw' // msw supports graphql too!
import * as users from './users'

const handlers = [
  rest.post('/login', async (req, res, ctx) => {
    const user = await users.login(JSON.parse(req.body))
    return res(ctx.json({user}))
  }),
  rest.post('/checkout', async (req, res, ctx) => {
    const isAuthorized = users.authorize(req.headers.get('Authorization'))
    if (!isAuthorized) {
      return res(ctx.status(401), ctx.json({message: 'Not authorized'}))
    }
    const shoppingCart = JSON.parse(req.body)
    // do whatever other things you need to do with this shopping cart
    return res(ctx.json({success: true}))
  }),
]

export {handlers}
```
```javascript
// test/server.js
import {rest} from 'msw'
import {setupServer} from 'msw/node'
import {handlers} from './server-handlers'

const server = setupServer(...handlers)
export {server, rest}
```
```javascript
// test/setup-env.js
// add this to your setupFilesAfterEnv config in jest so it's imported for every test file
import {server} from './server.js'

beforeAll(() => server.listen())
// if you add a handler for some specific test, this will remove
// that handler for the rest of the tests
// (which is important for test isolation):
afterEach(() => server.resetHandlers())
afterAll(() => server.close())
```
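For reference, the jest wiring those setup comments mention might look something like this (the file paths are assumptions for this example):

```javascript
// jest.config.js (paths are assumptions for this example)
module.exports = {
  // jsdom gives tests a browser-like environment for rendering components
  testEnvironment: 'jsdom',
  // runs before each test file, starting/stopping the msw server
  setupFilesAfterEnv: ['./test/setup-env.js'],
}
```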

Now my test can look like this:

```javascript
// __tests__/checkout.js
import React from 'react'
import {render, screen} from '@testing-library/react'
import userEvent from '@testing-library/user-event'

test('clicking "confirm" submits payment', async () => {
  const shoppingCart = buildShoppingCart()
  render(<Checkout shoppingCart={shoppingCart} />)

  userEvent.click(screen.getByRole('button', {name: /confirm/i}))

  expect(await screen.findByText(/success/i)).toBeInTheDocument()
})
```

I'm happier with this solution than mocking fetch because:

  1. I don't have to worry about the implementation details of fetch response properties and headers.
  2. If I get something wrong with the way I call fetch, then my server handler won't be called and my test (correctly) fails, which would save me from shipping broken code.
  3. I can reuse these exact same server handlers in my development!
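On that last point, wiring the same handlers into the browser during development might look like this with msw's `setupWorker` (the file name and location are assumptions for illustration):

```javascript
// dev/start-mock-worker.js -- browser-only; import this from your app's
// entry point during development
import {setupWorker} from 'msw'
import {handlers} from '../test/server-handlers'

// registers a Service Worker that intercepts the app's requests
// and answers them with the very same handlers the tests use
const worker = setupWorker(...handlers)
worker.start()
```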

Colocation and error/edge case testing

One reasonable concern about this approach is that you end up putting all of your server handlers in one place and then the tests that rely on those server handlers end up in entirely different files, so you lose the benefits of colocation.

First off, I'd say that you want to only colocate the things that are important and unique to your test. You wouldn't want to have to duplicate all the setup in every test. Only the parts that are unique. So the "happy path" stuff is typically better to just include in your setup file, removed from the test itself. Otherwise you have too much noise and it's hard to isolate what's actually being tested.

But what about edge cases and errors? For those, MSW has the ability for you to add additional server handlers at runtime (within a test) and then reset the server to the original handlers (effectively removing the runtime handlers) to preserve test isolation. Here's an example:

```javascript
// __tests__/checkout.js
import React from 'react'
import {server, rest} from 'test/server'
import {render, screen} from '@testing-library/react'
import userEvent from '@testing-library/user-event'

// happy path test, no special server stuff
test('clicking "confirm" submits payment', async () => {
  const shoppingCart = buildShoppingCart()
  render(<Checkout shoppingCart={shoppingCart} />)

  userEvent.click(screen.getByRole('button', {name: /confirm/i}))

  expect(await screen.findByText(/success/i)).toBeInTheDocument()
})

// edge/error case, special server stuff
// note that the afterEach(() => server.resetHandlers()) we have in our
// setup file will ensure that the special handler is removed for other tests
test('shows server error if the request fails', async () => {
  const testErrorMessage = 'THIS IS A TEST FAILURE'
  server.use(
    rest.post('/checkout', async (req, res, ctx) => {
      return res(ctx.status(500), ctx.json({message: testErrorMessage}))
    }),
  )
  const shoppingCart = buildShoppingCart()
  render(<Checkout shoppingCart={shoppingCart} />)

  userEvent.click(screen.getByRole('button', {name: /confirm/i}))

  expect(await screen.findByRole('alert')).toHaveTextContent(testErrorMessage)
})
```

So you can have colocation where it's needed, and abstraction where abstraction is sensible.

Conclusion

There's definitely more to do with msw, but let's just wrap up for now. If you want to see msw in action, my 4 part workshop "Build React Apps" (included in EpicReact.Dev) uses it and you can find all the material on GitHub.

One really cool aspect of this method of testing is that because you're so far away from implementation details, you can make significant refactorings and your tests can give you confidence that you didn't break the user experience. That's what tests are for!

Good luck!


Kent C. Dodds

Kent C. Dodds is a JavaScript software engineer and teacher. He's taught hundreds of thousands of people how to make the world a better place with quality software development tools and practices. He lives with his wife and four kids in Utah.
