Playwright E2E Testing Documentation¶
Overview¶
This document explains the architecture and organization of the Playwright end-to-end testing infrastructure for the Delta platform. It covers how tests are structured, the purpose of each component, and the testing strategies employed.
Testing Philosophy¶
Our E2E testing approach focuses on:
- User Journey Testing: Simulating real user interactions through complete workflows
- Authentication State Management: Efficient handling of logged-in states across tests
- Parallel Execution: Running tests concurrently for faster feedback
- Comprehensive Reporting: Detailed test results with screenshots and traces on failures
- Mobile-First Testing: Ensuring functionality across desktop and mobile viewports
Test Organization¶
Directory Structure¶
The E2E testing infrastructure is organized into several key directories, each serving a specific purpose:
Tests Directory¶
The main test suite is organized by feature area. Each subdirectory contains tests related to a specific domain:
- Authentication tests: Cover login, logout, registration, and authentication-related navigation flows
- Project tests: Focus on project-related functionality and user workflows
- Smoke tests: Critical path tests that verify core functionality is working end-to-end
Fixtures Directory¶
Custom test fixtures provide different testing contexts:
- Authenticated fixtures: Pre-configured with logged-in state for tests requiring authentication
- Non-authenticated fixtures: Clean state for testing public pages and authentication flows
- These fixtures handle the complexity of state management, allowing tests to focus on functionality
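As a sketch, the authenticated/clean split can be expressed with Playwright's `test.extend` and the `storageState` option. This is illustrative only; the actual fixture files may be structured differently:

```ts
import { test as base } from '@playwright/test';

// Authenticated fixture: reuses the storage state saved by global setup.
export const authTest = base.extend({});
authTest.use({ storageState: 'e2e/.auth/user.json' });

// Clean fixture: every test starts with an empty storage state,
// suitable for login/registration flow tests.
export const cleanTest = base.extend({});
cleanTest.use({ storageState: { cookies: [], origins: [] } });
```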
Helpers Directory¶
Reusable utility functions that abstract common operations:
- Authentication helpers: Functions for logging in, logging out, and managing user sessions
- Database helpers: Utilities for test data setup and cleanup
- Mobile interaction helpers: Touch-specific interactions for mobile testing
- Phone input helpers: Specialized functions for handling international phone number inputs
- Test credentials: Centralized management of test user accounts and data
Global Setup¶
The global setup and teardown scripts handle:
- One-time authentication before all tests run
- Saving authentication state for reuse across tests
- Environment preparation and cleanup
- This approach significantly speeds up test execution by avoiding repeated logins
Configuration Strategy¶
Authentication State Management¶
The testing framework implements a sophisticated authentication state management system:
- Global Authentication: A single login is performed before all tests run, and the authentication state is saved
- State Reuse: Tests automatically inherit the saved authentication state, eliminating redundant login operations
- Selective Clean State: Specific tests can opt out of the shared state when testing login/registration flows
- Performance Benefits: This approach reduces test execution time by 70% compared to logging in for each test
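In `playwright.config.ts` terms, this pattern is typically wired together with the `globalSetup`/`globalTeardown` hooks and a shared `storageState`. The fragment below is a minimal sketch, not the project's full configuration:

```ts
import { defineConfig } from '@playwright/test';

export default defineConfig({
  // Runs once before/after the whole suite.
  globalSetup: './e2e/global-setup.ts',
  globalTeardown: './e2e/global-teardown.ts',
  use: {
    // Every test starts with the authentication state saved by global setup.
    storageState: 'e2e/.auth/user.json',
  },
});
```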
Environment Configuration¶
The test environment is configured through dedicated environment files that:
- Define the base URL for the application under test
- Set up test user credentials and phone numbers
- Configure timeouts and execution parameters
- Manage feature flags for test mode (like OTP bypass)
Test Data Management¶
Test data is managed through a centralized system that:
- Provides consistent test user profiles across all tests
- Manages phone numbers that work in test mode
- Ensures data isolation between test runs
- Maintains predictable test data for reliable assertions
Test Coverage Areas¶
Authentication Testing¶
The authentication test suite comprehensively covers:
- Login Flow: Tests for valid and invalid phone numbers, form validation, and successful authentication
- Registration Flow: New user signup, duplicate prevention, and data validation
- Logout Functionality: Proper session termination and redirect behavior
- Navigation Guards: Protected route access and authentication redirects
- Session Persistence: Verification that authentication state persists across page refreshes
- Rate Limiting: Security measures to prevent brute force attacks
Transaction Testing¶
The transaction test suite uses a hybrid approach with both mocked and real AI processing:
Mocked Tests (Staging/PR Validation)¶
- Test location: `e2e/tests/transactions/transaction-creation.spec.ts`
- Mock infrastructure: MSW (Mock Service Worker) in `e2e/mocks/handlers.ts`
- Coverage: 3 test scenarios (upload without warning, with warning + continue, with warning + cancel)
- Execution time: ~30s per test locally, ~60s per test in staging CI
- Mocked endpoints:
  - `*/core/chat` - AI document classification
  - `*/v2/publish/*` - Queue job submission
Mock Configuration:
- Activation: HTTP header `x-mock-enabled: true` (set in `playwright.config.ts`)
- Transaction type: HTTP header `x-mock-transaction-type` (`TRANSACTION` or `OTHER`)
- Mock data: JSON files in `e2e/mocks/data/`
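The header-driven activation above can be sketched as plain logic, independent of MSW itself. The function below is illustrative only (it is not the actual contents of `e2e/mocks/handlers.ts`, and the fallback to `TRANSACTION` when the type header is absent is an assumption):

```typescript
// Illustrative sketch - the real logic lives in e2e/mocks/handlers.ts.
// Decides whether a request should be answered with mock data, and which
// mock classification to return, based on the two documented headers.

type MockDecision =
  | { mock: false }
  | { mock: true; transactionType: 'TRANSACTION' | 'OTHER' };

function resolveMockDecision(headers: Record<string, string>): MockDecision {
  // Mocking is only active when the test explicitly opts in via header.
  if (headers['x-mock-enabled'] !== 'true') {
    return { mock: false };
  }
  // Assumption: default to TRANSACTION when the type header is absent.
  const type =
    headers['x-mock-transaction-type'] === 'OTHER' ? 'OTHER' : 'TRANSACTION';
  return { mock: true, transactionType: type };
}
```

An MSW handler would then return the matching JSON fixture from `e2e/mocks/data/` for mocked requests and pass through otherwise.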
Smoke Tests (Production Validation)¶
- Test location: `e2e/tests/smoke/transaction-creation.spec.ts`
- Real AI processing: Uses actual AI Hub for document classification
- Execution time: 60-180+ seconds (depends on actual AI processing)
- When run: Production deployments only (not on staging)
- Purpose: Validate real AI Hub integration in production environment
This hybrid approach ensures fast feedback during development (mocked tests) while maintaining confidence in production AI integration (smoke tests).
Project Navigation Testing¶
Project-related tests ensure:
- Users can navigate between different projects
- Project-specific data is correctly displayed
- Access controls are properly enforced
- Navigation state is maintained correctly
Critical User Paths¶
Smoke tests verify end-to-end functionality for:
- Complete user registration and first login
- Creating and accessing projects
- Key workflows that represent typical user journeys
- Both authenticated and non-authenticated user experiences
Mobile-Specific Testing¶
Dedicated mobile tests ensure:
- Touch interactions work correctly
- Responsive layouts function properly
- Mobile-specific UI elements are accessible
- Performance is acceptable on mobile viewports
Execution Strategy¶
Running Tests¶
The test suite provides multiple execution modes:
- Headless Mode: Default mode for CI/CD pipelines, runs tests without visible browser
- Headed Mode: Shows browser during test execution, useful for debugging
- UI Mode: Interactive mode with time-travel debugging and test exploration
- Debug Mode: Step-through debugging with breakpoints
- Parallel Execution: Tests run concurrently across multiple workers for speed
Reporting Options¶
Two reporting systems provide comprehensive test results:
- Playwright HTML Reporter: Built-in reporter with screenshots, videos, and traces
- Allure Reports: Advanced reporting with detailed test history, trends, and categorization
Test Selection¶
Tests can be executed selectively:
- Run all tests across all browsers
- Run specific test files or directories
- Filter tests by tags or grep patterns
- Run only failed tests from previous runs
Running Tests Locally¶
Prerequisites¶
Before running E2E tests locally, ensure you have:
- Development Environment Running: The application must be accessible at the configured base URL
- Test Database: Local Supabase instance with test data
- Environment Variables: A properly configured `.env.e2e` file in the e2e directory
- Playwright Browsers: Installed via the Playwright CLI
Initial Setup¶
- Install Playwright browsers (one-time setup):

  ```bash
  cd packages/app
  npx playwright install
  ```

- Start the development environment:

  ```bash
  # From the root directory
  yarn dev
  ```

- Verify environment variables: Ensure the `.env.e2e` file exists in `packages/app/e2e/` with proper test credentials
Running Tests¶
Navigate to the app package directory:
```bash
cd packages/app
```
Run All Tests¶
```bash
yarn e2e
```
Run Tests in UI Mode (Recommended for Development)¶
```bash
yarn e2e:ui
```
This opens an interactive UI where you can:
- Select specific tests to run
- See real-time test execution
- Debug failures with time-travel debugging
- Inspect DOM snapshots at each step
Run Tests with Visible Browser¶
```bash
yarn e2e:headed
```
Run Specific Test File¶
```bash
yarn e2e e2e/tests/auth/login.spec.ts
```
Run Tests Matching a Pattern¶
```bash
yarn e2e -g "login"
```
Debug a Specific Test¶
```bash
yarn e2e:debug e2e/tests/auth/login.spec.ts
```
Viewing Test Results¶
HTML Report¶
After tests complete, view the HTML report:
```bash
yarn e2e:report
```
Allure Report¶
For detailed test analytics:
```bash
yarn e2e:test-allure
```
This runs tests with Allure reporter and automatically opens the report.
Troubleshooting Common Issues¶
- Authentication Failures:
  - Clear the `.auth` directory and run tests again
  - Verify test user credentials in `.env.e2e`
- Timeout Errors:
  - Ensure the development server is running
  - Check if the base URL is accessible
  - Increase timeout in specific tests if needed
- Browser Not Installed:
  - Run `npx playwright install` to install required browsers
  - For a specific browser: `npx playwright install chromium`
- Port Conflicts:
  - Ensure the application is running on the expected port
  - Update `PLAYWRIGHT_BASE_URL` in `.env.e2e` if using a different port
- Test Data Issues:
  - Run global teardown manually if needed
  - Check for orphaned test data in the database
  - Verify no tests are using hardcoded IDs
  - Use the manual cleanup function if necessary
- Storage Issues:
  - Project cleanup handles storage automatically
  - Files are removed when projects are deleted
  - Check Supabase storage policies if issues persist
- Notification Test Failures:
  - Ensure notification templates are created during setup
  - Check that notification deliveries have correct status
  - Use the `resetNotificationStates()` helper for cross-browser consistency
Tips for Efficient Local Testing¶
- Use UI Mode: Best for debugging and understanding test failures
- Run Specific Tests: Focus on the area you're working on
- Watch Mode: Keep tests running while developing
- Check Traces: Playwright saves traces on failure for detailed debugging
- Parallel Execution: Utilize multiple workers for faster feedback
Test Environment Setup¶
Overview of setupE2ETestEnvironment¶
The `setupE2ETestEnvironment` function is a critical utility that prepares a clean, consistent test environment for E2E tests. Located in `/packages/app/e2e/helpers/database-helpers.ts`, it is called during the global setup phase of Playwright tests.
Test Environment Setup Process¶
1. Cleanup Phase¶
- Calls `cleanupAllE2EProjects()` to remove any existing E2E test projects
- Calls `teardownE2ETestEnvironment()` to ensure a completely clean slate
- Cleans up notification test data from previous runs
- Prevents test data accumulation and ensures test isolation
2. User Creation Phase¶
Creates two test users:
Main Test User:
- Phone: `+1 212 555 1234` (US number)
- Full Phone: `12125551234` (with country code)
- OTP: `333333`
- Name: E2E TestUser
- Country: United States
- Purpose: Primary user for authenticated test scenarios
Wrong OTP Test User:
- Phone: `+64 111 111 1114` (NZ number)
- Full Phone: `641111111114` (with country code)
- OTP: `333333`
- Name: WrongOTP TestUser
- Country: New Zealand
- Purpose: Used for testing authentication error scenarios
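The two users above could be centralized in `e2e/helpers/test-credentials.ts` roughly as follows. The object shape and property names here are a hypothetical illustration; only the values come from the documented setup:

```typescript
// Hypothetical shape of centralized test credentials.
// Values mirror the users created by setupE2ETestEnvironment.
export const TEST_USERS = {
  main: {
    phone: '+1 212 555 1234',   // US number
    fullPhone: '12125551234',   // with country code
    otp: '333333',
    firstName: 'E2E',
    lastName: 'TestUser',
    country: 'United States',
  },
  wrongOtp: {
    phone: '+64 111 111 1114',  // NZ number
    fullPhone: '641111111114',  // with country code
    otp: '333333',
    firstName: 'WrongOTP',
    lastName: 'TestUser',
    country: 'New Zealand',
  },
} as const;
```

Centralizing credentials like this keeps phone numbers and OTPs consistent between the database helpers and the specs that log in.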
3. Project Creation Phase¶
Creates a default test project:
- Title: "E2E Test Project"
- Owner: Main test user
- Status: PreProduction
- Currency: USD
- Team Setup: Automatically handled by database triggers
  - Creates `project_relationship` record
  - Creates `user_access` record
  - Assigns Project Owner role to the creator
4. Notification Test Data¶
Creates test-specific notification data:
- Custom notification category: `e2e_test`
- Custom notification sender: `e2e-test@farmcove.com`
- Custom notification template: `e2e-test-notification`
- Two unread notifications for testing notification features
Database Tables Affected¶
User-Related Tables:
- `auth.users` - Supabase authentication records
- `public.user` - Application user records
- `public.person` - Person records linked to users
- `public.user_setting` - User settings with default notification preferences
Project-Related Tables:
- `public.project` - Project records
- `public.project_relationship` - Links people to projects with roles
- `public.user_access` - User access grants to projects and organizations
- `public.role_definition` - Role definitions for permissions
- `public.attachment` - File attachments (for project posters)
Notification-Related Tables:
- `public.notification_template_category` - E2E test category
- `public.notification_sender` - E2E test sender
- `public.notification_template` - E2E test template
- `public.notification` - Test notifications
- `public.notification_delivery` - Delivery records
Storage:
- `files` bucket - For project files and posters
- Project folders: `projects/{project_code}/`
Test Data Characteristics¶
Predictable Data:
- Phone numbers are consistent across test runs
- OTP codes are hardcoded in Supabase config for test numbers
- User names follow a clear pattern
- Notification templates use consistent keys
Dynamic Data:
- User IDs (UUIDs) are generated fresh each run
- Project IDs are generated fresh each run
- Project codes are auto-generated by database triggers
- Timestamps reflect actual creation time
Notification System Integration¶
Test users are created with default notification settings:
```json
{
  "email_enabled": false,
  "whatsapp_enabled": true,
  "in_app_enabled": true,
  "digest_frequency": "immediate",
  "quiet_hours_start": null,
  "quiet_hours_end": null,
  "template_overrides": {}
}
```
This allows testing of:
- WhatsApp notifications (enabled by default)
- In-app notifications (enabled by default)
- Email notifications (disabled by default)
- Notification preference updates
- Template-specific overrides
- Cross-browser notification state management
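For tests that assert on these settings, a typed mirror of the JSON above can be useful. The interface below is a hypothetical sketch (in particular, the non-`immediate` digest frequencies and the shape of `template_overrides` are assumptions, not documented values):

```typescript
// Hypothetical typed mirror of the default notification settings.
export interface NotificationSettings {
  email_enabled: boolean;
  whatsapp_enabled: boolean;
  in_app_enabled: boolean;
  digest_frequency: 'immediate' | 'daily' | 'weekly'; // non-immediate values assumed
  quiet_hours_start: string | null;
  quiet_hours_end: string | null;
  // Assumed shape: per-template partial overrides keyed by template key.
  template_overrides: Record<string, Partial<NotificationSettings>>;
}

export const DEFAULT_NOTIFICATION_SETTINGS: NotificationSettings = {
  email_enabled: false,
  whatsapp_enabled: true,
  in_app_enabled: true,
  digest_frequency: 'immediate',
  quiet_hours_start: null,
  quiet_hours_end: null,
  template_overrides: {},
};
```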
Global Test Flow¶
1. Global Setup (/e2e/global-setup.ts)¶
- Calls `setupE2ETestEnvironment()` to create test data
- Stores test data IDs in environment variables: `E2E_TEST_USER_ID`, `E2E_TEST_USER_PHONE`, `E2E_TEST_PROJECT_ID`, `WRONG_OTP_TEST_USER_ID`, `WRONG_OTP_TEST_USER_PHONE`
- Logs in as the main test user
- Saves authentication state to `e2e/.auth/user.json`
2. Test Execution¶
- Tests run with pre-authenticated state
- Can access test data via environment variables
- Individual tests can create additional data as needed
- Tests can reset notification states using the `resetNotificationStates()` helper
3. Global Teardown (/e2e/global-teardown.ts)¶
- Retrieves test data IDs from environment variables
- Calls `teardownE2ETestEnvironment()` with all IDs
- Cleans up ALL E2E projects (including those created during tests)
- Removes test users and associated data
- Cleans up all notification test data
Manual Test Data Cleanup¶
In some cases, you may need to manually clean up E2E test data from your local Supabase instance. A database function is available for this purpose.
Steps to Clean Up Test Data¶
- Open Supabase Studio:
  - Navigate to http://localhost:54323 (default local Supabase Studio URL)
  - Go to the SQL Editor section
- Run the Cleanup Function:

  ```sql
  SELECT * FROM cleanup_e2e_test_data();
  ```
- Review the Results: The function returns a summary of what was deleted:
- Number of auth.users records deleted
- Number of project records deleted
- Number of user_access records deleted
- Number of project_relationship records deleted
- Number of person records deleted
What Gets Cleaned¶
The `cleanup_e2e_test_data()` function removes:
- Test users with phone numbers: '641111111113', '641111111114', '12125551234'
- The "E2E Test Project" and all its relationships
- Associated person records
- All related team member and role data
When to Use Manual Cleanup¶
Manual cleanup might be needed when:
- Automated cleanup fails during test runs
- You need to reset test data between debugging sessions
- Test data gets corrupted or partially created
- You want to ensure a completely clean state before running tests
Note: The E2E test setup automatically calls cleanup before creating new test data, so manual cleanup is typically only needed for troubleshooting purposes.
Technical Implementation¶
Authentication Architecture¶
The testing framework implements a sophisticated authentication state management:
- One-Time Setup: Authentication occurs once before all tests begin
- State Persistence: Authentication tokens and cookies are saved to disk
- Automatic Injection: Each test automatically receives the saved authentication state
- Selective Override: Specific tests can opt for clean state when needed
This architecture enables:
- Fast test execution by eliminating redundant logins
- Realistic testing with actual authentication tokens
- Flexibility for both authenticated and non-authenticated test scenarios
Test Fixture System¶
Custom fixtures extend Playwright's capabilities:
- Authenticated Context: Provides pre-authenticated browser state for protected route testing
- Clean Context: Offers pristine browser state for authentication flow testing
- Mobile Context: Configures touch-enabled viewports and interactions
- Custom Configurations: Allows test-specific browser settings
Phone Authentication Handling¶
The WhatsApp OTP challenge is solved through:
- Test Mode Recognition: Backend identifies test phone numbers
- OTP Bypass: Fixed verification codes work for test accounts
- No External Dependencies: Tests run without actual SMS/WhatsApp services
- Realistic Flow: UI flow remains identical to production
Test Dependency Management¶
Tests are organized with intelligent dependencies:
- Setup Phase: Establishes prerequisites like authentication state
- Parallel Groups: Independent tests run concurrently
- Serial Groups: Dependent tests run in sequence
- Teardown Phase: Cleanup operations run after all tests
Best Practices¶
Test Design Principles¶
- User-Centric: Write tests from the user's perspective, not implementation details
- Independent: Each test should run successfully in isolation
- Deterministic: Tests should produce consistent results
- Focused: One test should verify one specific behavior
- Fast: Optimize for quick feedback loops
Test Data Management Best Practices¶
- Use Existing Test Data: Leverage the pre-created users and project when possible
- Clean Up Custom Data: If tests create additional data, clean it up in `afterEach`
- Mark Test Data: Include "E2E" in titles/names for easy identification
- Avoid Phone Conflicts: Don't use the predefined test phone numbers for new users
- Storage Cleanup: The system automatically cleans up project files and folders
- Reset Notification States: Use `resetNotificationStates()` for consistent cross-browser testing
Element Selection Strategy¶
- Semantic Locators First: Prefer role-based and label-based selectors
- Data Test IDs: Use dedicated test attributes for complex elements
- Avoid Fragile Selectors: Don't rely on CSS classes or DOM structure
- Accessibility-Driven: Good accessibility makes for good testability
Test Organization¶
- Feature-Based Structure: Group related tests by user feature
- Shared Utilities: Extract common operations into helper functions
- Fixture Inheritance: Use custom fixtures for different test contexts
- Clear Naming: Test names should describe the scenario being tested
Debugging Strategies¶
- Progressive Debugging: Start with UI mode for visual debugging
- Trace Analysis: Use Playwright traces to understand test failures
- Screenshot Comparison: Compare expected vs actual screenshots
- Video Playback: Review test execution videos for timing issues
- Network Inspection: Analyze API calls and responses
Maintenance Guidelines¶
Keeping Tests Healthy¶
- Regular Execution: Run tests frequently to catch issues early
- Flaky Test Management: Investigate and fix intermittent failures immediately
- Refactor Proactively: Update tests when application changes
- Performance Monitoring: Track test execution times and optimize slow tests
Scaling Considerations¶
- Parallel Execution: Maximize concurrency for faster feedback
- Smart Test Selection: Run relevant tests based on code changes
- Resource Management: Monitor memory and CPU usage during test runs
- Data Management: Implement strategies for test data growth
Future Roadmap¶
Planned Enhancements¶
- Visual Testing: Implement screenshot comparison for UI consistency
- Accessibility Automation: Add WCAG compliance checks
- Performance Testing: Measure and track application performance metrics
- API Contract Testing: Validate API responses against schemas
- Cross-Browser Coverage: Extend testing to more browser/OS combinations
Continuous Improvement¶
The E2E testing infrastructure is designed to evolve with the application. Regular reviews and updates ensure the test suite remains effective and maintainable as the platform grows.
Timeout Strategy¶
Environment-Aware Timeouts¶
The E2E test suite uses conditional timeouts to balance speed and reliability:
- Local Development: Fast timeouts for quick feedback (3s for toasts, 5s for navigation)
- CI Environment: 2x multiplier for stability (6s for toasts, 10s for navigation)
This approach ensures tests run fast locally while providing a stability buffer in CI without excessive waits.
Timeout Helper Usage¶
All E2E tests should use the timeout helpers from `e2e/helpers/timeout-helpers.ts` (note that `getFixedDelay` must be imported alongside the other helpers):

```ts
import { getTimeout, getFixedDelay, TimeoutConstants } from '../helpers/timeout-helpers';

// Using predefined constants (recommended)
await toast.waitFor({ state: 'visible', timeout: TimeoutConstants.TOAST_WAIT });
await dialog.waitFor({
  state: 'visible',
  timeout: TimeoutConstants.DIALOG_WAIT,
});
await element.waitFor({
  state: 'visible',
  timeout: TimeoutConstants.ELEMENT_VISIBLE,
});

// Using a custom timeout with the multiplier
await page.waitForURL(pattern, { timeout: getTimeout(5000) }); // 5s local, 10s CI

// Fixed delays (use sparingly - prefer conditional waits)
await page.waitForTimeout(getFixedDelay(500)); // Same in all environments
```
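For reference, the helper module itself could be implemented roughly as below. This is a sketch consistent with the documented behavior (2x multiplier in CI, unscaled fixed delays), not necessarily the actual contents of `e2e/helpers/timeout-helpers.ts`:

```typescript
// Sketch of e2e/helpers/timeout-helpers.ts (illustrative implementation).
const CI_MULTIPLIER = process.env.CI ? 2 : 1;

// Scales a base timeout by the environment multiplier, or a custom one
// (e.g. getTimeout(5000, 3) forces 3x for a particularly slow operation).
export function getTimeout(baseMs: number, multiplier: number = CI_MULTIPLIER): number {
  return baseMs * multiplier;
}

// Fixed delays are intentionally NOT scaled: same in all environments.
export function getFixedDelay(ms: number): number {
  return ms;
}

// Local base values; each doubles in CI via the multiplier.
export const TimeoutConstants = {
  TOAST_WAIT: getTimeout(3000),
  DIALOG_WAIT: getTimeout(2000),
  ELEMENT_VISIBLE: getTimeout(5000),
  NAVIGATION: getTimeout(5000),
  BUTTON_WAIT: getTimeout(2000),
  FORM_SUBMIT: getTimeout(3000),
  NETWORK_IDLE: getTimeout(10000),
} as const;
```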
Common Timeout Values¶
| Operation | Local | CI | Constant |
|---|---|---|---|
| Toast/notification | 3s | 6s | TimeoutConstants.TOAST_WAIT |
| Dialog/modal | 2s | 4s | TimeoutConstants.DIALOG_WAIT |
| Element visibility | 5s | 10s | TimeoutConstants.ELEMENT_VISIBLE |
| Page navigation | 5s | 10s | TimeoutConstants.NAVIGATION |
| Button/interactive | 2s | 4s | TimeoutConstants.BUTTON_WAIT |
| Form submission | 3s | 6s | TimeoutConstants.FORM_SUBMIT |
| Network idle | 10s | 20s | TimeoutConstants.NETWORK_IDLE |
Troubleshooting Timeout Issues¶
Test timing out in CI:
- Verify the timeout uses `getTimeout()` or `TimeoutConstants` (not hard-coded values)
- Check that the element selector is correct and specific
- Review CI logs for network delays or database issues
- Consider increasing the CI multiplier for a specific operation: `getTimeout(5000, 3)` for 3x
Test too slow locally:
- Reduce base timeout value if safe
- Replace `page.waitForTimeout()` with conditional waits (`waitFor`, `waitForURL`)
- Verify selectors are specific (avoid broad selectors that match multiple elements)
- Check for unnecessary sequential waits that could run in parallel
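The `waitForTimeout` replacement mentioned above typically looks like this (a sketch using standard Playwright locator APIs):

```ts
// Before: fixed delay - wastes time locally and can still be flaky in CI.
await page.waitForTimeout(getFixedDelay(1000));

// After: wait for the actual condition, bounded by an environment-aware timeout.
await page.getByRole('dialog').waitFor({
  state: 'visible',
  timeout: TimeoutConstants.DIALOG_WAIT,
});
```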
Flaky timeout failures:
- Use `TimeoutConstants` instead of hard-coded values
- Prefer waiting for specific conditions over fixed delays
- Add retry logic for network-dependent operations
- Check for race conditions between page load and element visibility
Related Files¶
Core Test Infrastructure¶
- `/packages/app/e2e/helpers/database-helpers.ts` - Database utilities and test setup
- `/packages/app/e2e/helpers/test-credentials.ts` - Test user credentials
- `/packages/app/e2e/helpers/auth-helpers.ts` - Authentication utilities
- `/packages/app/e2e/helpers/notification-helpers.ts` - Notification state management
- `/packages/app/e2e/global-setup.ts` - Global test setup
- `/packages/app/e2e/global-teardown.ts` - Global test cleanup
- `/packages/app/playwright.config.ts` - Test configuration
Test Suites¶
- `/packages/app/e2e/tests/auth/` - Authentication tests
- `/packages/app/e2e/tests/projects/` - Project-related tests
- `/packages/app/e2e/tests/user/` - User navigation and notification tests
- `/packages/app/e2e/tests/smoke/` - Critical path tests
Configuration¶
- `/packages/app/e2e/.env.e2e` - Test environment variables
- `/packages/app/e2e/.auth/` - Saved authentication state