Integration tests should be honest. When a service talks to Postgres, the test should talk to Postgres too — not a mock that pretends to be Postgres but lacks constraints, triggers, and the dozen behaviors that actually matter in production.
The blocker has never been philosophy. It has been plumbing. Starting a container, waiting for it to be ready, passing the connection details to the test, stopping it after — and doing all of that reliably when a test throws — is tedious boilerplate that sits between you and the test you actually want to write.
We solved this with `@playwright-labs/fixture-testcontainers`.
## What was wrong with the existing approach
The Testcontainers Node.js library is excellent. The missing piece was ergonomic integration with Playwright's test lifecycle. The typical pattern before this package looked like:
```ts
import { GenericContainer, StartedTestContainer } from "testcontainers";
import { test } from "@playwright/test";

let pg: StartedTestContainer;

test.beforeAll(async () => {
  pg = await new GenericContainer("postgres:16")
    .withEnvironment({ POSTGRES_PASSWORD: "secret" })
    .withExposedPorts(5432)
    .start();
});

test.afterAll(async () => {
  await pg?.stop();
});

test("insert and query", async () => {
  // finally, the actual test
});
```
This pattern has several failure modes:
- **Shared state across tests.** Multiple tests modify the same database instance, producing order-dependent failures that are nearly impossible to reproduce in isolation.
- **Leaking containers.** If `beforeAll` throws after partially initializing, `pg` may still be `undefined`, so `afterAll` silently skips cleanup and the container leaks.
- **File-level coupling.** Adding a second container means another `let`, another `beforeAll` call, another `afterAll` call. The infrastructure setup grows in proportion to your test file, not to your test logic.
## The fixture model fixes all three
Playwright's fixture system already manages this lifecycle for browsers, pages, and any other shared state. Fixtures are set up on demand, scoped to the test (or file, or worker), and torn down after the test ends — even if the test throws.
`@playwright-labs/fixture-testcontainers` brings that same model to Docker containers:
```ts
import { test } from "@playwright-labs/fixture-testcontainers";
import { Wait } from "testcontainers";

test("insert and query", async ({ useContainer }) => {
  const pg = await useContainer("postgres:16", {
    ports: 5432,
    environment: { POSTGRES_PASSWORD: "secret" },
    waitStrategy: Wait.forLogMessage("ready to accept connections"),
  });

  // this container is scoped to this test
  // and stops automatically when the test ends
  const port = pg.getMappedPort(5432);
});
```
Each test that needs a database gets its own container. No shared state, no cleanup code, no coupling between tests.
## Realistic usage: building domain fixtures
The most powerful use of this package is not the raw `useContainer` fixture; it is using `useContainer` as a building block for your own fixtures.
```ts
// test-helpers/fixtures.ts
import { test as base } from "@playwright-labs/fixture-testcontainers";
import { Wait } from "testcontainers";
import { Pool } from "pg";

type Fixtures = {
  db: Pool;
  redisUrl: string;
};

export const test = base.extend<Fixtures>({
  db: async ({ useContainer }, use) => {
    const container = await useContainer("postgres:16", {
      ports: 5432,
      environment: { POSTGRES_PASSWORD: "secret" },
      waitStrategy: Wait.forLogMessage("ready to accept connections"),
    });
    const pool = new Pool({
      host: container.getHost(),
      port: container.getMappedPort(5432),
      password: "secret",
      database: "postgres",
      user: "postgres",
    });
    await use(pool);
    await pool.end();
  },

  redisUrl: async ({ useContainer }, use) => {
    const container = await useContainer("redis:8", { ports: 6379 });
    await use(`redis://${container.getHost()}:${container.getMappedPort(6379)}`);
  },
});
```
Tests import from `fixtures.ts` and receive ready-to-use clients:

```ts
import { test } from "./test-helpers/fixtures";
import { expect } from "@playwright-labs/fixture-testcontainers";

test("user record persists", async ({ db }) => {
  await db.query(`INSERT INTO users (name) VALUES ($1)`, ["Alice"]);
  const { rows } = await db.query(`SELECT name FROM users WHERE name = $1`, ["Alice"]);
  expect(rows[0].name).toBe("Alice");
});
```
The test has no idea containers are involved. It receives a `Pool`. The infrastructure is an implementation detail of the fixture.
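When several fixtures need to turn a started container into a connection string, that address math can live in one small helper. A minimal sketch — `connectionUriFor` is our own name, not a package export; it only assumes the standard `getHost`/`getMappedPort` methods of `StartedTestContainer`:

```typescript
// A started container, reduced to the two methods the helper needs.
// A real StartedTestContainer from "testcontainers" satisfies this.
interface Addressable {
  getHost(): string;
  getMappedPort(port: number): number;
}

// Build a connection URI for a service running inside a container.
// `scheme` is e.g. "postgresql" or "redis"; `auth` is an optional "user:pass".
function connectionUriFor(
  container: Addressable,
  scheme: string,
  containerPort: number,
  auth?: string,
): string {
  const host = container.getHost();
  const port = container.getMappedPort(containerPort);
  const credentials = auth ? `${auth}@` : "";
  return `${scheme}://${credentials}${host}:${port}`;
}

// Example with a stub standing in for a real container:
const stub: Addressable = {
  getHost: () => "localhost",
  getMappedPort: () => 54321,
};
console.log(connectionUriFor(stub, "postgresql", 5432, "postgres:secret"));
// → postgresql://postgres:secret@localhost:54321
```

With a helper like this, the `redisUrl` fixture body collapses to a single `use(connectionUriFor(container, "redis", 6379))` call.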
`ContainerOpts` maps one-to-one onto the `GenericContainer.with*` methods. There is nothing new to learn: if you know Testcontainers, you know the options.
| Option | Testcontainers method |
| --- | --- |
| `ports` | `withExposedPorts` |
| `environment` | `withEnvironment` |
| `waitStrategy` | `withWaitStrategy` |
| `healthCheck` | `withHealthCheck` |
| `network` | `withNetwork` |
| `bindMounts` | `withBindMounts` |
| `copyFiles` | `withCopyFilesToContainer` |
| `pullPolicy` | `withPullPolicy` |
| `resourcesQuota` | `withResourcesQuota` |
| ... and 20 more | |
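To make the one-to-one correspondence concrete, the translation from an options object to builder calls is mechanical. A rough sketch of how such a mapping could be written, covering two of the options above — this is our illustration, not the package's actual source:

```typescript
// The subset of the builder interface this sketch uses; a real
// GenericContainer from "testcontainers" provides these methods.
interface BuilderLike {
  withExposedPorts(...ports: number[]): BuilderLike;
  withEnvironment(env: Record<string, string>): BuilderLike;
}

// A slice of the options object: `ports` and `environment`.
interface Opts {
  ports?: number | number[];
  environment?: Record<string, string>;
}

// Apply each present option via the corresponding with* method.
function applyOpts(builder: BuilderLike, opts: Opts): BuilderLike {
  let b = builder;
  if (opts.ports !== undefined) {
    const ports = Array.isArray(opts.ports) ? opts.ports : [opts.ports];
    b = b.withExposedPorts(...ports);
  }
  if (opts.environment) {
    b = b.withEnvironment(opts.environment);
  }
  return b;
}
```

The real package presumably handles all thirty-odd options the same way; the point is only that nothing about the option names needs memorizing.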
For cases where you need the full builder, pass a pre-configured `GenericContainer` instead of an image name:

```ts
import { GenericContainer } from "testcontainers";

const container = await useContainer(
  new GenericContainer("postgres:16")
    .withEnvironment({ POSTGRES_PASSWORD: "secret" })
    .withExposedPorts(5432)
    .withReuse(),
);
```
## Container assertions
The package exports an extended `expect` with matchers specific to `StartedTestContainer`:

```ts
import { expect } from "@playwright-labs/fixture-testcontainers";

// state
await expect(container).toBeContainerRunning();
await expect(container).toBeContainerHealthy();
await expect(container).not.toBeContainerStopped();

// logs
await expect(container).toMatchContainerLogMessage("ready to accept connections");
await expect(container).not.toMatchContainerLogMessage("FATAL");

// ports
expect(container).toBeContainerPort(5432);
expect(container).toMatchContainerPortInRange(5432, { min: 1024, max: 65535 });

// metadata
expect(container).toHaveContainerLabel("env", "test");
expect(container).toHaveContainerNetwork("app-net");
await expect(container).toHaveContainerUser("postgres");
```
These matchers turn container state inspection from raw Docker API calls into readable assertions that belong in test output.
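Under the hood, a log matcher has to read the container's log stream; in the Testcontainers Node.js API, `StartedTestContainer.logs()` resolves to a Node `Readable`. A sketch of the kind of check involved — `logsContain` is our own helper for illustration, not how the package implements its matchers:

```typescript
import { Readable } from "node:stream";

// The slice of StartedTestContainer this helper relies on.
interface HasLogs {
  logs(): Promise<Readable>;
}

// Collect the log stream and check whether it contains `needle`.
// A real matcher would also handle live streams, timeouts, and retries.
async function logsContain(container: HasLogs, needle: string): Promise<boolean> {
  const stream = await container.logs();
  let buffered = "";
  for await (const chunk of stream) {
    buffered += chunk.toString();
    if (buffered.includes(needle)) return true;
  }
  return false;
}
```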
## Getting started
```sh
npm install -D @playwright-labs/fixture-testcontainers testcontainers
```
Requirements: `@playwright/test >= 1.57.0`, `testcontainers >= 10.0.0`, and Docker available in your environment.
The package is part of playwright-labs, an open-source collection of Playwright utilities.
If you maintain a Playwright test suite and have been using mocks for anything that should really be a database or a cache, this is a practical path to replacing them without rewriting your entire test setup.