Network Error Monitor
Built-in Sentry for Playwright Tests
Automatically detects and reports HTTP 4xx/5xx errors during test execution, ensuring no silent failures slip through your test suite.
Inspired by Checkly's network monitoring pattern.
Why Use This?
Traditional Playwright tests focus on UI assertions and user interactions. But what about API errors happening in the background? A test might pass visually while critical backend services are returning 500 errors.
Network Error Monitor acts like Sentry for your tests:
- Catches HTTP 4xx/5xx responses automatically
- Fails tests that pass UI checks but have backend errors
- Provides structured JSON artifacts for debugging
- Zero boilerplate - automatically enabled for all tests
- Smart opt-out for tests expecting errors (validation testing)
- Respects test status (won't suppress actual test failures)
Quick Start
```ts
import { test } from '@seontechnologies/playwright-utils/network-error-monitor/fixtures'

// That's it! Network monitoring is automatically enabled
test('my test', async ({ page }) => {
  await page.goto('/dashboard')
  // If any HTTP 4xx/5xx errors occur, the test will fail
})
```

Real-World Example
Before network error monitoring:
```ts
test('load dashboard', async ({ page }) => {
  await page.goto('/dashboard')
  await expect(page.locator('h1')).toContainText('Dashboard')
  // ✅ Test passes - but background API calls are returning 500 errors!
})
```

After network error monitoring:
```ts
import { test } from '@seontechnologies/playwright-utils/network-error-monitor/fixtures'

test('load dashboard', async ({ page }) => {
  await page.goto('/dashboard')
  await expect(page.locator('h1')).toContainText('Dashboard')
  // ❌ Test fails with clear error:
  // Network errors detected: 2 request(s) failed.
  // Failed requests:
  //   GET 500 https://api.example.com/users
  //   POST 503 https://api.example.com/metrics
})
```

Usage
As a Fixture
The simplest way to use network error monitoring is via the fixture:
```ts
import { test } from '@seontechnologies/playwright-utils/network-error-monitor/fixtures'

test('my test', async ({ page }) => {
  await page.goto('/dashboard')
  // Network monitoring is automatically enabled
})
```

Integration with Merged Fixtures (Recommended)
The recommended pattern is to merge the network error monitor into your project's main fixture:
```ts
// support/fixtures/merged-fixtures.ts
import { mergeTests } from '@playwright/test'
import { test as networkErrorMonitorFixture } from '@seontechnologies/playwright-utils/network-error-monitor/fixtures'
import { test as authFixture } from './auth-fixture'
import { test as apiRequestFixture } from './api-request-fixture'
// ... other fixtures

export const test = mergeTests(
  authFixture,
  apiRequestFixture,
  networkErrorMonitorFixture // Add here
  // ... other fixtures
)

export { expect } from '@playwright/test'
```

Now all tests automatically get network monitoring:
```ts
import { test } from 'support/fixtures/merged-fixtures'

test('my test', async ({ page, authUser, apiRequest }) => {
  // All fixtures work together
  // Network monitoring happens automatically
  await page.goto('/protected-page')
})
```

Opt-Out for Validation Tests
Some tests intentionally trigger 4xx/5xx errors (e.g., validation testing). Use the `skipNetworkMonitoring` annotation:
```ts
test(
  'validation returns 400',
  { annotation: [{ type: 'skipNetworkMonitoring' }] },
  async ({ page }) => {
    await page.goto('/api/invalid-request')
    // Test expects 400 errors - network monitor is disabled
    await expect(page.locator('.error')).toContainText('Bad Request')
  }
)
```

Features
1. Automatic Activation
No need to set up listeners or `afterEach` hooks. The fixture uses Playwright's `auto: true` pattern to run automatically for every test.
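For reference, this is roughly what an `auto: true` fixture looks like in Playwright (a minimal skeleton, not the library's actual source):

```ts
import { test as base } from '@playwright/test'

// Minimal skeleton of an automatic fixture: `auto: true` tells Playwright
// to set up this fixture for every test, even if no test references it.
export const test = base.extend<{ networkErrorMonitor: void }>({
  networkErrorMonitor: [
    async ({ page }, use) => {
      // Setup: attach response listeners here
      await use()
      // Teardown: evaluate collected errors here
    },
    { auto: true }
  ]
})
```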
2. Deduplication
Errors are deduplicated by status code + URL combination. If the same endpoint returns 404 multiple times, it's only reported once:
```ts
// Only 1 error reported, not 3
await page.goto('/dashboard') // triggers 3x GET 404 /api/missing
```

3. Structured Artifacts
When errors are detected, a `network-errors.json` artifact is attached to the test report:

```json
[
  {
    "url": "https://api.example.com/users",
    "status": 500,
    "method": "GET",
    "timestamp": "2025-11-10T12:34:56.789Z"
  },
  {
    "url": "https://api.example.com/metrics",
    "status": 503,
    "method": "POST",
    "timestamp": "2025-11-10T12:34:57.123Z"
  }
]
```

Output:
```
❌ Test failed: Error: Test failed for other reasons
   ⚠️ Network errors also detected (1 request(s)):
     GET 404 https://api.example.com/missing
```

4. Respects Test Status
The monitor respects final test statuses to avoid suppressing important test outcomes:
- `failed`: Network errors logged as additional context, not thrown
- `timedOut`: Network errors logged as additional context
- `skipped`: Network errors logged, skip status preserved
- `interrupted`: Network errors logged, interrupted status preserved
- `passed`: Network errors throw and fail the test
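As a sketch of how this decision could look in fixture teardown (hypothetical helper, assuming a collected `errors` array; the library's actual logic may differ):

```ts
import type { TestInfo } from '@playwright/test'

// Hypothetical teardown logic illustrating status-aware error handling.
function reportNetworkErrors(errors: string[], testInfo: TestInfo): void {
  if (errors.length === 0) return
  const summary = errors.join('\n  ')
  if (testInfo.status !== 'passed') {
    // Test already failed, timed out, was skipped, or was interrupted:
    // log for context, but preserve the existing outcome.
    console.warn(`⚠️ Network errors also detected (${errors.length} request(s)):\n  ${summary}`)
    return
  }
  // Test passed its own assertions: surface the hidden backend errors.
  throw new Error(`Network errors detected: ${errors.length} request(s) failed.\n  ${summary}`)
}
```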
This ensures tests that use `test.skip()` (e.g., feature flag checks) maintain their skip status:

```ts
test('feature gated test', async ({ page }) => {
  const featureEnabled = await checkFeatureFlag()
  test.skip(!featureEnabled, 'Feature not enabled')
  // If skipped, network errors won't turn this into a failure
  await page.goto('/new-feature')
})
```

Excluding Legitimate Errors
Some endpoints legitimately return 4xx/5xx responses. Configure exclusions using the factory function:
```ts
import { test as base } from '@playwright/test'
import { createNetworkErrorMonitorFixture } from '@seontechnologies/playwright-utils/network-error-monitor/fixtures'

export const test = base.extend(
  createNetworkErrorMonitorFixture({
    excludePatterns: [
      /email-cluster\/ml-app\/has-active-run/, // ML service returns 404 when no active run
      /idv\/session-templates\/list/, // IDV service returns 404 when not configured
      /sentry\.io\/api/ // External Sentry errors should not fail tests
    ]
  })
)
```

For merged fixtures:
```ts
import { mergeTests, test as base } from '@playwright/test'
import { createNetworkErrorMonitorFixture } from '@seontechnologies/playwright-utils/network-error-monitor/fixtures'
import { test as authFixture } from './auth-fixture'

const networkErrorMonitor = base.extend(
  createNetworkErrorMonitorFixture({
    excludePatterns: [/analytics\.google\.com/, /cdn\.example\.com/]
  })
)

export const test = mergeTests(authFixture, networkErrorMonitor)
```

Preventing Domino Effect
When a backend service fails, it can cause dozens of tests to fail with the same error. Use `maxTestsPerError` to prevent this:

Note: `maxTestsPerError` requires `excludePatterns` to be specified. This enforces a best practice where domino effect prevention is paired with known exclusion patterns.
```ts
import { test as base } from '@playwright/test'
import { createNetworkErrorMonitorFixture } from '@seontechnologies/playwright-utils/network-error-monitor/fixtures'

const networkErrorMonitor = base.extend(
  createNetworkErrorMonitorFixture({
    excludePatterns: [], // Required when using maxTestsPerError
    maxTestsPerError: 1 // Only first test fails per error pattern, rest just log
  })
)
```

How it works:
When `/api/v2/case-management/cases` returns 500:
- First test encountering this error: ❌ FAILS with clear error message
- Subsequent tests encountering same error: ✅ PASS but log warning

Error patterns are grouped by method + status + base path:
- `GET /api/v2/case-management/cases/123` → Pattern: `GET:500:/api/v2/case-management`
- `GET /api/v2/case-management/quota` → Pattern: `GET:500:/api/v2/case-management` (same group!)
- `POST /api/v2/case-management/cases` → Pattern: `POST:500:/api/v2/case-management` (different group!)
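One plausible way to derive such a pattern key (a hypothetical sketch; how the library actually picks the base path may differ):

```ts
// Hypothetical sketch of pattern-key derivation. Here the "base path" is the
// first three path segments, which maps both /api/v2/case-management/cases/123
// and /api/v2/case-management/quota to /api/v2/case-management.
function errorPatternKey(method: string, status: number, url: string): string {
  const { pathname } = new URL(url)
  const basePath = pathname.split('/').slice(0, 4).join('/')
  return `${method}:${status}:${basePath}`
}

// errorPatternKey('GET', 500, 'https://api.example.com/api/v2/case-management/cases/123')
// => 'GET:500:/api/v2/case-management'
```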
This prevents 17 tests from failing when the case management backend is down - only the first test fails.
Why include HTTP method? A GET 404 vs POST 404 on the same endpoint might represent different issues:
- `GET 404 /api/users/123` → User not found (expected in some tests)
- `POST 404 /api/users` → Endpoint doesn't exist (critical error)
Example output for subsequent tests:
```
⚠️ Network errors detected but not failing test (maxTestsPerError limit reached):
  GET 500 https://api.example.com/api/v2/case-management/cases
```

Recommended configuration:
```ts
createNetworkErrorMonitorFixture({
  excludePatterns: [...], // Required - known broken endpoints (can be empty [])
  maxTestsPerError: 1 // Stop domino effect (requires excludePatterns)
})
```

Understanding worker-level state:
Error pattern counts are stored in worker-level global state that persists for all test files run by that worker. This is intentional behavior for domino effect prevention:
```ts
// test-file-1.spec.ts (runs first in Worker 1)
test('test A', () => {
  /* triggers GET:500:/api/v2/cases */
}) // ❌ Fails

// test-file-2.spec.ts (runs later in Worker 1)
test('test B', () => {
  /* triggers GET:500:/api/v2/cases */
}) // ✅ Passes (limit reached)

// test-file-3.spec.ts (runs in Worker 2 - different worker)
test('test C', () => {
  /* triggers GET:500:/api/v2/cases */
}) // ❌ Fails (fresh worker)
```

Key points:
- Each Playwright worker has its own error pattern count state
- State persists across all test files in the worker's lifetime
- Parallel workers have independent state (no cross-contamination)
- This prevents 17 tests in the same worker from all failing when the backend is down
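A sketch of how this worker-scoped counting could work, under the assumption that the state is plain module-level data (Playwright workers are separate processes, so module state exists once per worker):

```ts
// Hypothetical worker-level state: instantiated once per worker process,
// shared across all test files that worker runs, independent between workers.
const failedTestsPerPattern = new Map<string, number>()

function shouldFailTest(patternKey: string, maxTestsPerError: number): boolean {
  const count = failedTestsPerPattern.get(patternKey) ?? 0
  if (count >= maxTestsPerError) {
    return false // limit reached: subsequent tests only log a warning
  }
  failedTestsPerPattern.set(patternKey, count + 1)
  return true // still under the limit: this test fails on the error
}
```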
Real-World Example: Different Error Patterns on Same Endpoint
This example demonstrates how different HTTP methods and status codes create distinct error patterns, even on the same endpoint:
```ts
import {
  test,
  expect,
  createNetworkErrorMonitorFixture
} from '@seontechnologies/playwright-utils/network-error-monitor/fixtures'

// Configure with maxTestsPerError to limit domino effect
const testWithLimit = test.extend(
  createNetworkErrorMonitorFixture({
    excludePatterns: [],
    maxTestsPerError: 1
  })
)

test.describe('Complex error patterns', () => {
  testWithLimit.fail('GET 404 on user endpoint', async ({ page }) => {
    // Pattern: GET:404:/api/users
    // First test with this pattern - will fail
    await page.goto('/users/search')
    await page.getByRole('button', { name: 'Search' }).click()
    // GET 404 /api/users/999 (user not found)
    expect(true).toBe(true)
  })

  testWithLimit.fail('POST 500 on user endpoint', async ({ page }) => {
    // Pattern: POST:500:/api/users (DIFFERENT from above)
    // Also first test with this pattern - will fail
    await page.goto('/users/create')
    await page.getByRole('button', { name: 'Submit' }).click()
    // POST 500 /api/users/create (server error)
    expect(true).toBe(true)
  })

  testWithLimit('GET 404 on same endpoint again', async ({ page }) => {
    // Pattern: GET:404:/api/users (same as first test)
    // Second test with this pattern - will PASS (limit reached)
    await page.goto('/users/search')
    await page.getByRole('button', { name: 'Search' }).click()
    // This test passes because GET:404:/api/users already failed once
    expect(true).toBe(true)
  })
})
```

Key Insights:
- `GET 404 /api/users/999` and `POST 500 /api/users/create` are different patterns
- Both tests fail because they're the first occurrence of their respective patterns
- The third test passes because the `GET:404:/api/users` pattern already hit the limit
- This fine-grained distinction helps identify whether the issue is a missing resource or a server error
Troubleshooting
Test fails with network errors but I don't see them in my app
The errors might be happening during page load or in background polling. Check the `network-errors.json` artifact in your test report for full details, including timestamps.
False positives from external services
Configure exclusion patterns as shown in the "Excluding Legitimate Errors" section above.
Network errors not being caught
Ensure you're importing the test from the correct fixture:
```ts
// ✅ Correct
import { test } from '@seontechnologies/playwright-utils/network-error-monitor/fixtures'

// ❌ Wrong - this won't have network monitoring
import { test } from '@playwright/test'
```

Implementation Details
How It Works
- Fixture Extension: Uses Playwright's `base.extend()` with `auto: true`
- Response Listener: Attaches a `page.on('response')` listener at test start
- Multi-Page Monitoring: Automatically monitors popups and new tabs via `context.on('page')`
- Error Collection: Captures 4xx/5xx responses, checking exclusion patterns
- Try/Finally: Ensures error processing runs even if the test fails early
- Status Check: Only throws errors if the test hasn't already reached a final status
- Artifact: Attaches a JSON file to the test report for debugging
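Putting those steps together, a condensed sketch of the flow (illustrative only; names are hypothetical and the real implementation handles more options, such as exclusion patterns and `maxTestsPerError`):

```ts
import { test as base, type Page } from '@playwright/test'

type NetworkError = { url: string; status: number; method: string; timestamp: string }

export const test = base.extend<{ networkErrorMonitor: void }>({
  networkErrorMonitor: [
    async ({ page, context }, use, testInfo) => {
      // Deduplicate by status + URL
      const errors = new Map<string, NetworkError>()
      const watch = (p: Page) =>
        p.on('response', (res) => {
          if (res.status() >= 400) {
            errors.set(`${res.status()}:${res.url()}`, {
              url: res.url(),
              status: res.status(),
              method: res.request().method(),
              timestamp: new Date().toISOString()
            })
          }
        })
      watch(page)
      context.on('page', watch) // popups and new tabs
      try {
        await use()
      } finally {
        // Runs even if the test failed early
        if (errors.size > 0) {
          await testInfo.attach('network-errors.json', {
            body: JSON.stringify([...errors.values()], null, 2),
            contentType: 'application/json'
          })
        }
      }
      // Only fail tests that would otherwise pass
      if (errors.size > 0 && testInfo.status === 'passed') {
        throw new Error(`Network errors detected: ${errors.size} request(s) failed.`)
      }
    },
    { auto: true }
  ]
})
```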
Performance
The monitor has minimal performance impact:
- Event listener overhead: ~0.1ms per response
- Memory: ~200 bytes per unique error
- No network delay (observes responses, doesn't intercept them)
Comparison to Alternatives
| Criterion | Network Error Monitor | Manual afterEach |
|---|---|---|
| Setup Required | Zero (auto-enabled) | Every test file |
| Catches Silent Failures | ✅ Yes | ✅ Yes (if configured) |
| Structured Artifacts | ✅ JSON attached | ⚠️ Custom impl |
| Test Failure Safety | ✅ Try/finally | ⚠️ afterEach may not run |
| Opt-Out Mechanism | ✅ Annotation | ⚠️ Custom logic |
| Status Aware | ✅ Respects skip/failed | ❌ No |
Credit
This implementation is inspired by Stefan Judis/Checkly's network monitoring example, with enhancements such as deduplication, structured JSON artifacts, multi-page monitoring, status-aware teardown, and domino effect prevention.
Potential Future Enhancements
- Custom error handlers (e.g., send to Sentry)
- Warn-only mode (log errors without failing tests)
- Pattern-based error expectations (expect 404 for specific URLs)
- Integration with test retries (track errors across attempts)