Test Performance Optimization
Last Updated: 2026-01-28
Status: ✅ Active optimization program
Executive Summary
This document provides a comprehensive overview of test performance optimizations, risk assessments, and future opportunities. The test suite execution time has been reduced through systematic analysis and targeted optimizations.
Current Performance Metrics
| Metric | Value |
|---|---|
| Total Execution Time (without :slow tests) | ~368 seconds (~6.1 minutes) |
| Total Tests | 1,336 tests (+ 25 doctests) |
| Async Execution | 163.5 seconds |
| Sync Execution | 281.5 seconds |
| Slow Tests Excluded | 25 tests (tagged with @tag :slow) |
| Top 50 Slowest Tests | 121.9 seconds (27.4% of total time) |
Optimization Impact Summary
| Optimization | Tests Affected | Time Saved | Status |
|---|---|---|---|
| Seeds tests reduction | 13 → 4 tests | ~10-16s | ✅ Completed |
| Performance tests tagging | 9 tests | ~3-4s per run | ✅ Completed |
| Critical test query filtering | 1 test | ~8-10s | ✅ Completed |
| Full test suite via promotion | 25 tests | ~77s per run | ✅ Completed |
| Total Saved | | ~98-107s | |
Completed Optimizations
1. Seeds Test Suite Optimization
Date: 2026-01-28
Status: ✅ Completed
What Changed
- Reduced test count: From 13 tests to 4 tests (69% reduction)
- Reduced seeds executions: From 8-10 times to 5 times per test run
- Execution time: From 24-30 seconds to 13-17 seconds
- Time saved: ~10-16 seconds per test run (40-50% faster)
Removed Tests (9 tests)
Tests were removed because their functionality is covered by domain-specific test suites:
- "at least one member has no membership fee type assigned" → Covered by `membership_fees/*_test.exs`
- "each membership fee type has at least one member" → Covered by `membership_fees/*_test.exs`
- "members with fee types have cycles with various statuses" → Covered by `cycle_generator_test.exs`
- "creates all 5 authorization roles with correct permission sets" → Covered by `authorization/*_test.exs`
- "all roles have valid permission_set_names" → Covered by `authorization/permission_sets_test.exs`
- "does not change role of users who already have a role" → Merged into idempotency test
- "role creation is idempotent" (detailed) → Merged into general idempotency test
Retained Tests (4 tests)
Critical deployment requirements are still covered:
- ✅ Smoke Test: Seeds run successfully and create basic data
- ✅ Idempotency Test: Seeds can be run multiple times without duplicating data
- ✅ Admin Bootstrap: Admin user exists with Admin role (critical for initial access)
- ✅ System Role Bootstrap: Mitglied system role exists (critical for user registration)
Risk Assessment
| Removed Test Category | Alternative Coverage | Risk Level |
|---|---|---|
| Member/fee type distribution | `membership_fees/*_test.exs` | ⚠️ Low |
| Cycle status variations | `cycle_generator_test.exs` | ⚠️ Low |
| Detailed role configs | `authorization/*_test.exs` | ⚠️ Very Low |
| Permission set validation | `permission_sets_test.exs` | ⚠️ Very Low |
Overall Risk: ⚠️ Low - All removed tests have equivalent or better coverage in domain-specific test suites.
2. Full Test Suite via Promotion (@tag :slow)
Date: 2026-01-28
Status: ✅ Completed
What Changed
Tests with low risk and execution time >1 second are now tagged with @tag :slow and excluded from standard test runs. These tests are important but not critical for every commit and are run via promotion before merging to main.
Tagging Criteria
Tagged as @tag :slow when:
- ✅ Test execution time >1 second
- ✅ Low risk (not critical for catching regressions in core business logic)
- ✅ UI/Display tests (formatting, rendering)
- ✅ Workflow detail tests (not core functionality)
- ✅ Edge cases with large datasets
NOT tagged when:
- ❌ Core CRUD operations (Member/User Create/Update/Destroy)
- ❌ Basic Authentication/Authorization
- ❌ Critical Bootstrap (Admin user, system roles)
- ❌ Email Synchronization
- ❌ Representative tests per Permission Set + Action
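The criteria above map directly onto ExUnit's tagging mechanism. A minimal sketch of the convention (module and test names are illustrative, not actual files from the suite):

```elixir
defmodule MvWeb.MemberLive.DisplayFormattingTest do
  use MvWeb.ConnCase, async: true

  describe "custom field formatting" do
    # Tags every test in this describe block; preferred over @moduletag
    # so unrelated tests in the same module stay in the fast suite.
    @describetag :slow

    test "formats date custom field values correctly" do
      # ... rendering assertions ...
    end
  end

  # Tags a single test.
  @tag :slow
  test "displays very long custom field values correctly" do
    # ... rendering assertions ...
  end
end
```

These tags are what `mix test --exclude slow` and `mix test --only slow` filter on.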
Identified Tests for Full Test Suite (25 tests)
1. Seeds Tests (2 tests) - 18.1s
- "runs successfully and creates basic data" (9.0s)
- "is idempotent when run multiple times" (9.1s)
- Note: Critical bootstrap tests remain in fast suite
2. UserLive.ShowTest (3 tests) - 10.8s
- "mounts successfully with valid user ID" (4.2s)
- "displays linked member when present" (2.4s)
- "redirects to user list when viewing system actor user" (4.2s)
3. UserLive.IndexTest (5 tests) - 25.0s
- "displays users in a table" (1.0s)
- "initially sorts by email ascending" (2.2s)
- "can sort email descending by clicking sort button" (3.4s)
- "select all automatically checks when all individual users are selected" (2.0s)
- "displays linked member name in user list" (1.9s)
4. MemberLive.IndexCustomFieldsDisplayTest (3 tests) - 4.9s
- "displays custom field with show_in_overview: true" (1.6s)
- "formats date custom field values correctly" (1.5s)
- "formats email custom field values correctly" (1.8s)
5. MemberLive.IndexCustomFieldsEdgeCasesTest (3 tests) - 3.6s
- "displays custom field column even when no members have values" (1.1s)
- "displays very long custom field values correctly" (1.4s)
- "handles multiple custom fields with show_in_overview correctly" (1.2s)
6. RoleLive Tests (7 tests) - 7.7s
- `role_live_test.exs`: "mounts successfully" (1.5s), "deletes non-system role" (2.1s)
- `role_live/show_test.exs`: 5 tests >1s (mount, display, navigation)
7. MemberAvailableForLinkingTest (1 test) - 1.5s
- "limits results to 10 members even when more exist" (1.5s)
8. Performance Tests (1 test) - 3.8s
- "boolean filter performance with 150 members" (3.8s)
Total: 25 tests, ~77 seconds saved
Execution Commands
Fast Tests (Default):
```shell
just test-fast
# or
mix test --exclude slow
```
Slow Tests Only:
```shell
just test-slow
# or
mix test --only slow
```
All Tests:
```shell
just test
# or
mix test
```
CI/CD Integration
- Standard CI (`check-fast`): Runs `mix test --exclude slow --exclude ui` for faster feedback loops (~6 minutes)
- Full Test Suite (`check-full`): Triggered via promotion before merge, executes `mix test` (all tests, including slow and UI) for comprehensive coverage (~7.4 minutes)
- Pre-Merge: Full test suite (`mix test`) runs via promotion before merging to main
- Manual Execution: Promote build to `production` in Drone CI to trigger full test suite
Risk Assessment
Risk Level: ✅ Very Low
- All tagged tests have low risk - they don't catch critical regressions
- Core functionality remains tested (CRUD, Auth, Bootstrap)
- Standard test runs are faster (~6 minutes vs ~7.4 minutes)
- Full test suite runs via promotion before merge ensures comprehensive coverage
- No functionality is lost, only execution timing changed
Critical Tests Remain in Fast Suite:
- Core CRUD operations (Member/User Create/Update/Destroy)
- Basic Authentication/Authorization
- Critical Bootstrap (Admin user, system roles)
- Email Synchronization
- Representative Policy tests (one per Permission Set + Action)
3. Critical Test Optimization
Date: 2026-01-28
Status: ✅ Completed
Problem Identified
The test "respects show_in_overview config" was the slowest test in the suite:
- Isolated execution: 4.8 seconds
- In full test run: 14.7 seconds
- Difference: 9.9 seconds (test isolation issue)
Root Cause
The test loaded all members from the database, not just the 2 members from the test setup. In full test runs, many members from other tests were present in the database, significantly slowing down the query.
Solution Implemented
Query Filtering: Added search query parameter to filter to only the expected member.
Code Change:
```elixir
# Before:
{:ok, _view, html} = live(conn, "/members")

# After:
{:ok, _view, html} = live(conn, "/members?query=Alice")
```
Results
| Execution | Before | After | Improvement |
|---|---|---|---|
| Isolated | 4.8s | 1.1s | -77% (3.7s saved) |
| In Module | 4.2s | 0.4s | -90% (3.8s saved) |
| Expected in Full Run | 14.7s | ~4-6s | -65% to -73% (8-10s saved) |
Risk Assessment
Risk Level: ✅ Very Low
- Test functionality unchanged - only loads expected data
- All assertions still pass
- Test is now faster and more isolated
- No impact on test coverage
4. Full Test Suite Analysis and Categorization
Date: 2026-01-28
Status: ✅ Completed
Analysis Methodology
A comprehensive analysis was performed to identify tests suitable for the full test suite (via promotion) based on:
- Execution time: Tests taking >1 second
- Risk assessment: Tests that don't catch critical regressions
- Test category: UI/Display, workflow details, edge cases
Test Categorization
🔴 CRITICAL - Must Stay in Fast Suite:
- Core Business Logic (Member/User CRUD)
- Authentication & Authorization Basics
- Critical Bootstrap (Admin user, system roles)
- Email Synchronization
- Representative Policy Tests (one per Permission Set + Action)
🟡 LOW RISK - Moved to Full Test Suite (via Promotion):
- Seeds Tests (non-critical: smoke test, idempotency)
- LiveView Display/Formatting Tests
- UserLive.ShowTest (core functionality covered by Index/Form)
- UserLive.IndexTest UI Features (sorting, checkboxes, navigation)
- RoleLive Tests (role management, not core authorization)
- MemberLive Custom Fields Display Tests
- Edge Cases with Large Datasets
Risk Assessment Summary
| Category | Tests | Time Saved | Risk Level | Rationale |
|---|---|---|---|---|
| Seeds (non-critical) | 2 | 18.1s | ⚠️ Low | Critical bootstrap tests remain |
| UserLive.ShowTest | 3 | 10.8s | ⚠️ Low | Core CRUD covered by Index/Form |
| UserLive.IndexTest (UI) | 5 | 25.0s | ⚠️ Low | UI features, not core functionality |
| Custom Fields Display | 6 | 8.5s | ⚠️ Low | Formatting tests, visible in code review |
| RoleLive Tests | 7 | 7.7s | ⚠️ Low | Role management, not authorization |
| Edge Cases | 1 | 1.5s | ⚠️ Low | Edge case, not critical path |
| Performance Tests | 1 | 3.8s | ✅ Very Low | Explicit performance validation |
| Total | 25 | ~77s | ⚠️ Low | |
Overall Risk: ⚠️ Low - All moved tests have low risk and don't catch critical regressions. Core functionality remains fully tested.
Tests Excluded from Full Test Suite
The following tests were NOT moved to full test suite (via promotion) despite being slow:
- Policy Tests: Medium risk - kept in fast suite (representative tests remain)
- UserLive.FormTest: Medium risk - core CRUD functionality
- Tests <1s: Don't meet execution time threshold
- Critical Bootstrap Tests: High risk - deployment critical
Current Performance Analysis
Top 20 Slowest Tests (without :slow)
After implementing the full test suite via promotion, the remaining slowest tests are:
| Rank | Test | File | Time | Category |
|---|---|---|---|---|
| 1 | test Critical bootstrap invariants Mitglied system role exists | `seeds_test.exs` | 6.7s | Critical Bootstrap |
| 2 | test Critical bootstrap invariants Admin user has Admin role | `seeds_test.exs` | 5.0s | Critical Bootstrap |
| 3 | test normal_user permission set can read own user record | `user_policies_test.exs` | 2.6s | Policy Test |
| 4 | test normal_user permission set can create member | `member_policies_test.exs` | 2.5s | Policy Test |
| 5-20 | Various Policy and LiveView tests | Multiple files | 1.5-2.4s each | Policy/LiveView |
Total Top 20: ~44 seconds (12% of total time without :slow)
Note: Many previously slow tests (UserLive.IndexTest, UserLive.ShowTest, Display/Formatting tests) are now tagged with @tag :slow and excluded from standard runs.
Performance Hotspots Identified
1. Seeds Tests (~16.2s for 4 tests)
Status: ✅ Optimized (reduced from 13 tests)
Remaining Optimization Potential: 3-5 seconds
Opportunities:
- Settings update could potentially be moved to `setup_all` (if sandbox allows)
- Seeds execution could be further optimized (less data in test mode)
- Idempotency test could be optimized (only 1x seeds instead of 2x)
2. User LiveView Tests (~35.5s for 10 tests)
Status: ⏳ Identified for optimization
Optimization Potential: 15-20 seconds
Files:
- `test/mv_web/user_live/index_test.exs` (3 tests, ~10.2s)
- `test/mv_web/user_live/form_test.exs` (4 tests, ~15.0s)
- `test/mv_web/user_live/show_test.exs` (3 tests, ~10.3s)
Patterns:
- Many tests create user/member data
- LiveView mounts are expensive
- Form submissions with validations are slow
Recommended Actions:
- Move shared fixtures to `setup_all`
- Reduce test data volume (3-5 users instead of 10+)
- Consolidate recurring setup patterns
3. Policy Tests (~8.7s for 3 tests)
Status: ⏳ Identified for optimization
Optimization Potential: 5-8 seconds
Files:
- `test/mv/membership/member_policies_test.exs` (2 tests, ~6.1s)
- `test/mv/accounts/user_policies_test.exs` (1 test, ~2.6s)
Pattern:
- Each test creates new roles/users/members
- Roles are identical across tests
Recommended Actions:
- Create roles in `setup_all` (shared across tests)
- Reuse common fixtures
- Maintain test isolation while optimizing setup
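The role-sharing idea can be sketched with `setup_all`; the `create_role/1` helper below is hypothetical, and actual fixture names in the suite will differ:

```elixir
defmodule Mv.Membership.MemberPoliciesTest do
  use Mv.DataCase, async: true

  # Roles are identical across tests, so create them once per module
  # instead of in every test's setup. Note: with the Ecto SQL sandbox,
  # data created in setup_all must be visible to per-test transactions,
  # which may require a shared sandbox mode.
  setup_all do
    %{roles: Enum.map(["Admin", "Mitglied"], &create_role/1)}
  end

  test "normal_user permission set can create member", %{roles: roles} do
    # ... policy assertions reusing the shared roles ...
  end
end
```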
Future Optimization Opportunities
Priority 1: User LiveView Tests Optimization
Estimated Savings: 14-22 seconds
Status: 📋 Analysis Complete - Ready for Implementation
Analysis Summary
Analysis of User LiveView tests identified significant optimization opportunities:
- Framework functionality over-testing: ~30 tests test Phoenix/Ash/Gettext core features
- Redundant test data creation: Each test creates users/members independently
- Missing shared fixtures: No `setup_all` usage for common data
Current Performance
Top 20 Slowest Tests (User LiveView):
- `index_test.exs`: ~10.2s for 3 tests in Top 20
- `form_test.exs`: ~15.0s for 4 tests in Top 20
- `show_test.exs`: ~10.3s for 3 tests in Top 20
- Total: ~35.5 seconds for User LiveView tests
Optimization Opportunities
1. Remove Framework Functionality Tests (~30 tests, 8-12s saved)
- Remove translation tests (Gettext framework)
- Remove navigation tests (Phoenix LiveView framework)
- Remove validation tests (Ash framework)
- Remove basic HTML rendering tests (consolidate into smoke test)
- Remove password storage tests (AshAuthentication framework)
2. Implement Shared Fixtures (3-5s saved)
- Use `setup_all` for common test data in `index_test.exs` and `show_test.exs`
- Share users for sorting/checkbox tests
- Share common users/members across tests
- Note: `form_test.exs` uses `async: false`, preventing `setup_all` usage
3. Consolidate Redundant Tests (~10 tests → 3-4 tests, 2-3s saved)
- Merge basic display tests into smoke test
- Merge navigation tests into integration test
- Reduce sorting tests to 1 integration test
4. Optimize Test Data Volume (1-2s saved)
- Use minimum required data (2 users for sorting, 2 for checkboxes)
- Share data across tests via
setup_all
Tests to Keep (Business Logic)
Index Tests:
- "initially sorts by email ascending" - Tests default sort
- "can sort email descending by clicking sort button" - Tests sort functionality
- "select all automatically checks when all individual users are selected" - Business logic
- "does not show system actor user in list" - Business rule
- "displays linked member name in user list" - Business logic
- Edge case tests
Form Tests:
- "creates user without password" - Business logic
- "creates user with password when enabled" - Business logic
- "admin sets new password for user" - Business logic
- "selecting member and saving links member to user" - Business logic
- Member linking/unlinking workflow tests
Show Tests:
- "displays password authentication status" - Business logic
- "displays linked member when present" - Business logic
- "redirects to user list when viewing system actor user" - Business rule
Implementation Plan
Phase 1: Remove Framework Tests (1-2 hours, ⚠️ Very Low Risk)
- Remove translation, navigation, validation, and basic HTML rendering tests
- Consolidate remaining display tests into smoke test
Phase 2: Implement Shared Fixtures (2-3 hours, ⚠️ Low Risk)
- Add `setup_all` to `index_test.exs` and `show_test.exs`
- Update tests to use shared fixtures
- Verify test isolation is maintained
Phase 3: Consolidate Tests (1-2 hours, ⚠️ Very Low Risk)
- Merge basic display tests into smoke test
- Merge navigation tests into integration test
- Reduce sorting tests to 1 integration test
Risk Assessment: ⚠️ Low
- Framework functionality is tested by framework maintainers
- Business logic tests remain intact
- Shared fixtures maintain test isolation
- Consolidation preserves coverage
Priority 2: Policy Tests Optimization
Estimated Savings: 5.5-9 seconds
Status: 📋 Analysis Complete - Ready for Decision
Analysis Summary
Analysis of policy tests identified significant optimization opportunities:
- Redundant fixture creation: Roles and users created repeatedly across tests
- Framework functionality over-testing: Many tests verify Ash policy framework behavior
- Test duplication: Similar tests across different permission sets
Current Performance
Policy Test Files Performance:
- `member_policies_test.exs`: 24 tests, ~66s (top 20)
- `user_policies_test.exs`: 30 tests, ~66s (top 20)
- `custom_field_value_policies_test.exs`: 20 tests, ~66s (top 20)
- Total: 74 tests, ~152s total
Top 20 Slowest Policy Tests: ~66 seconds
Framework vs. Business Logic Analysis
Framework Functionality (Should NOT Test):
- Policy evaluation (how Ash evaluates policies)
- Permission lookup (how Ash looks up permissions)
- Scope filtering (how Ash applies scope filters)
- Auto-filter behavior (how Ash auto-filters queries)
- Forbidden vs NotFound (how Ash returns errors)
Business Logic (Should Test):
- Permission set definitions (what permissions each role has)
- Scope definitions (what scopes each permission set uses)
- Special cases (custom business rules)
- Permission set behavior (how our permission sets differ)
Optimization Opportunities
1. Remove Framework Functionality Tests (~22-34 tests, 3-4s saved)
- Remove "cannot" tests that verify error types (Forbidden, NotFound)
- Remove tests that verify auto-filter behavior (framework)
- Remove tests that verify permission evaluation (framework)
- Risk: ⚠️ Very Low - Framework functionality is tested by Ash maintainers
2. Consolidate Redundant Tests (~6-8 tests → 2-3 tests, 1-2s saved)
- Merge similar tests across permission sets
- Create integration tests that cover multiple permission sets
- Risk: ⚠️ Low - Same coverage, fewer tests
3. Share Admin User Across Describe Blocks (1-2s saved)
- Create admin user once in module-level `setup`
- Reuse admin user in helper functions
- Note: `async: false` prevents `setup_all`, but module-level `setup` works
- Risk: ⚠️ Low - Admin user is read-only in tests, safe to share
4. Reduce Test Data Volume (0.5-1s saved)
- Use minimum required data
- Share fixtures where possible
- Risk: ⚠️ Very Low - Still tests same functionality
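The admin-sharing approach (opportunity 3 above) can be sketched with a module-level `setup`; `create_admin_user/0` is a hypothetical helper standing in for the suite's real fixtures:

```elixir
defmodule Mv.Accounts.UserPoliciesTest do
  use Mv.DataCase, async: false

  # Module-level setup is shared by all describe blocks, so each
  # block no longer needs to create its own admin fixture.
  setup do
    %{admin: create_admin_user()}
  end

  describe "normal_user permission set" do
    test "can read own user record", %{admin: admin} do
      # ... create the user under test via `admin`, then assert policy ...
    end
  end
end
```

Helper functions would then accept the admin as a parameter instead of creating one internally.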
Test Classification Summary
Tests to Remove (Framework):
- `member_policies_test.exs`: ~10 tests (cannot create/destroy/update, auto-filter tests)
- `user_policies_test.exs`: ~16 tests (cannot read/update/create/destroy, auto-filter tests)
- `custom_field_value_policies_test.exs`: ~8 tests (similar "cannot" tests)
Tests to Consolidate (Redundant):
- `user_policies_test.exs`: 6 tests → 2 tests (can read/update own user record)
Tests to Keep (Business Logic):
- All "can" tests that verify permission set behavior
- Special case tests (e.g., "user can always READ linked member")
- AshAuthentication bypass tests (our integration)
Implementation Plan
Phase 1: Remove Framework Tests (1-2 hours, ⚠️ Very Low Risk)
- Identify all "cannot" tests that verify error types
- Remove tests that verify Ash auto-filter behavior
- Remove tests that verify permission evaluation (framework)
Phase 2: Consolidate Redundant Tests (1-2 hours, ⚠️ Low Risk)
- Identify similar tests across permission sets
- Create integration tests that cover multiple permission sets
- Remove redundant individual tests
Phase 3: Share Admin User (1-2 hours, ⚠️ Low Risk)
- Add module-level `setup` to create admin user
- Update helper functions to accept admin user parameter
- Update all `setup` blocks to use shared admin user
Risk Assessment: ⚠️ Low
- Framework functionality is tested by Ash maintainers
- Business logic tests remain intact
- Admin user sharing maintains test isolation (read-only)
- Consolidation preserves coverage
Priority 3: Seeds Tests Further Optimization
Estimated Savings: 3-5 seconds
Actions:
- Investigate if settings update can be moved to `setup_all`
- Introduce seeds mode for tests (less data in test mode)
- Optimize idempotency test (only 1x seeds instead of 2x)
Risk Assessment: ⚠️ Low to Medium
- Sandbox limitations may prevent `setup_all` usage
- Seeds mode would require careful implementation
- Idempotency test optimization needs to maintain test validity
Priority 4: Additional Test Isolation Improvements
Estimated Savings: Variable (depends on specific tests)
Actions:
- Review tests that load all records (similar to the critical test fix)
- Add query filters where appropriate
- Ensure proper test isolation in async tests
Risk Assessment: ⚠️ Very Low
- Similar to the critical test optimization (proven approach)
- Improves test isolation and reliability
Estimated Total Optimization Potential
| Priority | Optimization | Estimated Savings |
|---|---|---|
| 1 | User LiveView Tests | 14-22s |
| 2 | Policy Tests | 5.5-9s |
| 3 | Seeds Tests Further | 3-5s |
| 4 | Additional Isolation | Variable |
| | Total Potential | 22.5-36 seconds |
Projected Final Time: From ~368 seconds (fast suite) to ~332-345 seconds (~5.5-5.8 minutes) with remaining optimizations
Note: Detailed analysis documents available:
- User LiveView Tests: See "Priority 1: User LiveView Tests Optimization" section above
- Policy Tests: See "Priority 2: Policy Tests Optimization" section above
Risk Assessment Summary
Overall Risk Level: ⚠️ Low
All optimizations maintain test coverage while improving performance:
| Optimization | Risk Level | Mitigation |
|---|---|---|
| Seeds tests reduction | ⚠️ Low | Coverage mapped to domain tests |
| Performance tests tagging | ✅ Very Low | Tests still executed, just separately |
| Critical test optimization | ✅ Very Low | Functionality unchanged, better isolation |
| Future optimizations | ⚠️ Low | Careful implementation with verification |
Monitoring Plan
Success Criteria
- ✅ Seeds tests execute in <20 seconds consistently
- ✅ No increase in seeds-related deployment failures
- ✅ No regression in authorization or membership fee bugs
- ✅ Top 20 slowest tests: < 60 seconds (currently ~44s)
- ✅ Total execution time (without `:slow`): < 10 minutes (currently 6.1 min)
- ⏳ Slow tests execution time: < 2 minutes (currently ~1.3 min)
What to Watch For
- Production Seeds Failures:
  - Monitor deployment logs for seeds errors
  - If failures increase, consider restoring detailed tests
- Authorization Bugs After Seeds Changes:
  - If role/permission bugs appear after seeds modifications
  - May indicate need for more seeds-specific role validation
- Test Performance Regression:
  - Monitor test execution times in CI
  - Alert if times increase significantly
- Developer Feedback:
  - If developers report missing test coverage
  - Adjust based on real-world experience
Benchmarking and Analysis
How to Benchmark Tests
ExUnit Built-in Benchmarking:
The test suite is configured to show the slowest tests automatically:
```elixir
# test/test_helper.exs
ExUnit.start(
  slowest: 10 # Shows the 10 slowest tests at the end of the test run
)
```
Run Benchmark Analysis:
```shell
# Show slowest tests
mix test --slowest 20

# Exclude slow tests for faster feedback
mix test --exclude slow --slowest 20

# Run only slow tests
mix test --only slow --slowest 10

# Benchmark a specific test file
mix test test/mv_web/member_live/index_member_fields_display_test.exs --slowest 5
```
Benchmarking Best Practices
- Run benchmarks regularly (e.g., monthly) to catch performance regressions
- Compare isolated vs. full runs to identify test isolation issues
- Monitor CI execution times to track trends over time
- Document significant changes in test performance
Test Suite Structure
Test Execution Modes
Fast Tests (Default):
- Excludes slow tests (`@tag :slow`)
- Used for standard development workflow
- Execution time: ~6 minutes
- Command: `mix test --exclude slow` or `just test-fast`

Slow Tests:
- Tests tagged with `@tag :slow` or `@describetag :slow` (25 tests)
- Low risk, >1 second execution time
- UI/Display tests, workflow details, edge cases, performance tests
- Execution time: ~1.3 minutes
- Command: `mix test --only slow` or `just test-slow`
- Excluded from standard CI runs
Full Test Suite (via Promotion):
- Triggered by promoting a build to `production` in Drone CI
- Runs all tests (`mix test`) for comprehensive coverage
- Execution time: ~7.4 minutes
- Required before merging to `main` (enforced via branch protection)

All Tests:
- Includes both fast and slow tests
- Used for comprehensive validation (pre-merge)
- Execution time: ~7.4 minutes
- Command: `mix test` or `just test`
Test Organization
Tests are organized to mirror the lib/ directory structure:
```
test/
├── accounts/            # Accounts domain tests
├── membership/          # Membership domain tests
├── membership_fees/     # Membership fees domain tests
├── mv/                  # Core application tests
│   ├── accounts/        # User-related tests
│   ├── membership/      # Member-related tests
│   └── authorization/   # Authorization tests
├── mv_web/              # Web layer tests
│   ├── controllers/     # Controller tests
│   ├── live/            # LiveView tests
│   └── components/      # Component tests
└── support/             # Test helpers
    ├── conn_case.ex     # Controller test setup
    └── data_case.ex     # Database test setup
```
Best Practices for Test Performance
When Writing New Tests
- Use `async: true` when possible (for parallel execution)
- Filter queries to only load necessary data
- Share fixtures in `setup_all` when appropriate
- Tag performance tests with `@tag :slow` if they use large datasets
- Keep test data minimal - only create what's needed for the test
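A minimal sketch combining these practices (route, fixture helper, and module names are illustrative, not actual suite files):

```elixir
defmodule MvWeb.MemberLive.IndexQueryTest do
  use MvWeb.ConnCase, async: true  # runs in parallel with other modules

  setup do
    # Create only the data this test actually needs.
    %{member: create_member(name: "Alice Example")}
  end

  test "shows only the matching member", %{conn: conn, member: member} do
    # Filter the query so records created by other tests cannot
    # slow this one down in a full run.
    {:ok, _view, html} = live(conn, "/members?query=#{member.name}")
    assert html =~ member.name
  end
end
```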
When Optimizing Existing Tests
- Measure first - Use `mix test --slowest` to identify bottlenecks
- Compare isolated vs. full runs - Identify test isolation issues
- Optimize setup - Move shared data to `setup_all` where possible
- Filter queries - Only load data needed for the test
- Verify coverage - Ensure optimizations don't reduce test coverage
Test Tagging Guidelines
Tag as @tag :slow when:
- Performance Tests:
  - Explicitly testing performance characteristics
  - Using large datasets (50+ records)
  - Testing scalability or query optimization
  - Validating N+1 query prevention
- Low-Risk Tests (>1s):
  - UI/Display/Formatting tests (not critical for every commit)
  - Workflow detail tests (not core functionality)
  - Edge cases with large datasets
  - Show page tests (core functionality covered by Index/Form tests)
  - Non-critical seeds tests (smoke tests, idempotency)
Do NOT tag as @tag :slow when:
- ❌ Test is slow due to inefficient setup (fix the setup instead)
- ❌ Test is slow due to bugs (fix the bug instead)
- ❌ Core CRUD operations (Member/User Create/Update/Destroy)
- ❌ Basic Authentication/Authorization
- ❌ Critical Bootstrap (Admin user, system roles)
- ❌ Email Synchronization
- ❌ Representative Policy tests (one per Permission Set + Action)
- ❌ It's an integration test (use `@tag :integration` instead)
Changelog
2026-01-28: Initial Optimization Phase
Completed:
- ✅ Reduced seeds tests from 13 to 4 tests
- ✅ Tagged 9 performance tests with `@tag :slow`
- ✅ Created slow test suite infrastructure
- ✅ Updated CI/CD to exclude slow tests from standard runs
- ✅ Added promotion-based full test suite pipeline (`check-full`)
Time Saved: ~21-30 seconds per test run
2026-01-28: Full Test Suite via Promotion Implementation
Completed:
- ✅ Analyzed all tests for full test suite candidates
- ✅ Identified 36 tests with low risk and >1s execution time
- ✅ Tagged 25 tests with `@tag :slow` for full test suite (via promotion)
- ✅ Categorized tests by risk level and execution time
- ✅ Documented tagging criteria and guidelines
Tests Tagged:
- 2 Seeds tests (non-critical) - 18.1s
- 3 UserLive.ShowTest tests - 10.8s
- 5 UserLive.IndexTest tests - 25.0s
- 3 MemberLive.IndexCustomFieldsDisplayTest tests - 4.9s
- 3 MemberLive.IndexCustomFieldsEdgeCasesTest tests - 3.6s
- 7 RoleLive tests - 7.7s
- 1 MemberAvailableForLinkingTest - 1.5s
- 1 Performance test (already tagged) - 3.8s
Time Saved: ~77 seconds per test run
Total Optimization Impact:
- Before: ~445 seconds (7.4 minutes)
- After (fast suite): ~368 seconds (6.1 minutes)
- Time saved: ~77 seconds (17% reduction)
Next Steps:
- ⏳ Monitor full test suite execution via promotion in CI
- ⏳ Optimize remaining slow tests (Policy tests, etc.)
- ⏳ Further optimize Seeds tests (Priority 3)
References
- Testing Standards: `CODE_GUIDELINES.md` - Section 4 (Testing Standards)
- CI/CD Configuration: `.drone.yml`
- Test Helper: `test/test_helper.exs`
- Justfile Commands: `Justfile` (test-fast, test-slow, test-all)
Questions & Answers
Q: What if seeds create wrong data and break the system?
A: The smoke test will fail if seeds raise errors. Domain tests ensure business logic is correct regardless of seeds content.
Q: What if we add a new critical bootstrap requirement?
A: Add a new test to the "Critical bootstrap invariants" section in test/seeds_test.exs.
Q: How do we know the removed tests aren't needed?
A: Monitor for 2-3 months. If no seeds-related bugs appear that would have been caught by removed tests, they were redundant.
Q: Should we restore the tests for important releases?
A: Consider running the full test suite (including slow tests) before major releases. Daily development uses the optimized suite.
Q: How do I add a new performance test?
A: Tag it with @tag :slow for individual tests or @describetag :slow for describe blocks. Use @describetag instead of @moduletag to avoid tagging unrelated tests. Include measurable performance assertions (query counts, timing with tolerance, etc.). See the "Test Tagging Guidelines" section above.
Q: Can I run slow tests locally?
A: Yes, use just test-slow or mix test --only slow. They're excluded from standard runs for faster feedback.
Q: What is the "full test suite"?
A: The full test suite runs all tests (mix test), including slow and UI tests. Tests tagged with @tag :slow or @describetag :slow are excluded from standard CI runs (check-fast) for faster feedback, but are included when promoting a build to production (check-full) before merging to main.
Q: Which tests should I tag as :slow?
A: Tag tests with @tag :slow if they: (1) take >1 second, (2) have low risk (not critical for catching regressions), and (3) test UI/Display/Formatting or workflow details. See "Test Tagging Guidelines" section for details.
Q: What if a slow test fails in the full test suite?
A: If a test in the full test suite fails, investigate the failure. If it indicates a critical regression, consider moving it back to the fast suite. If it's a flaky test, fix the test itself. The merge will be blocked until all tests pass.