A Jest test reporter client for the Fern platform that captures and sends test execution data to fern-reporter, enabling comprehensive test reporting and analysis across your JavaScript/TypeScript test suites.
The Fern Jest Client provides seamless integration between Jest testing framework and the Fern platform's test reporting infrastructure. It automatically captures test results, execution metadata, and CI/CD information, then sends this data to the fern-reporter service for storage and analysis.
- Automatic Test Reporting: Captures Jest test results and sends them to Fern platform
- Git Integration: Automatically extracts git branch, commit SHA, and repository information
- CI/CD Support: Built-in support for GitHub Actions, GitLab CI, Jenkins, and CircleCI
- Test Tagging: Extract tags from test names and descriptions for categorization
- Retry Logic: Robust error handling with configurable retry mechanisms
- TypeScript Support: Full TypeScript support with comprehensive type definitions
- Zero Configuration: Works out of the box with sensible defaults
npm install --save-dev fern-jest-client

# or with yarn
yarn add --dev fern-jest-client

Add the Fern reporter to your Jest configuration:
// jest.config.js
module.exports = {
// ... other Jest configuration
reporters: [
'default',
['fern-jest-client', {
projectId: 'your-project-uuid', // UUID from fern-platform
projectName: 'my-project', // Optional: human-readable name
baseUrl: 'http://localhost:8080'
}]
]
};

Set the required environment variables:

export FERN_PROJECT_ID="my-project"
export FERN_REPORTER_BASE_URL="http://localhost:8080"

Then run your tests as usual:

npm test

Test results will automatically be sent to your Fern platform instance, where you can view comprehensive analytics and insights.
// jest.config.js
module.exports = {
reporters: [
'default',
['fern-jest-client', {
projectId: 'your-project-uuid', // Required: Your project UUID
projectName: 'my-project', // Optional: Project name (defaults to projectId)
baseUrl: 'http://localhost:8080', // Optional: Fern reporter URL
timeout: 30000, // Optional: Request timeout (ms)
enabled: true // Optional: Enable/disable reporting
}]
]
};

| Variable | Description | Default |
|---|---|---|
| `FERN_PROJECT_ID` | Project UUID from fern-platform | Required |
| `FERN_PROJECT_NAME` | Human-readable project name | Uses `FERN_PROJECT_ID` |
| `FERN_REPORTER_BASE_URL` | Fern reporter service URL | `http://localhost:8080` |
| `FERN_ENABLED` | Enable/disable reporting | `true` |
| `FERN_DEBUG` | Enable debug logging | `false` |
| `FERN_FAIL_ON_ERROR` | Fail tests if reporting fails | `false` |
| `GIT_REPO_PATH` | Custom git repository path | Current directory |
The client automatically detects and integrates with various CI/CD platforms:
# .github/workflows/test.yml
env:
FERN_PROJECT_ID: "my-project"
FERN_REPORTER_BASE_URL: "https://fern-reporter.example.com"# .gitlab-ci.yml
variables:
FERN_PROJECT_ID: "my-project"
FERN_REPORTER_BASE_URL: "https://fern-reporter.example.com"environment {
FERN_PROJECT_ID = 'my-project'
FERN_REPORTER_BASE_URL = 'https://fern-reporter.example.com'
}
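CircleCI is detected automatically as well (see Features); as with the other platforms, only the environment variables need to be set. A minimal config sketch, assuming a standard Node image:

# .circleci/config.yml (sketch)
version: 2.1
jobs:
  test:
    docker:
      - image: cimg/node:18.0
    environment:
      FERN_PROJECT_ID: my-project
      FERN_REPORTER_BASE_URL: https://fern-reporter.example.com
    steps:
      - checkout
      - run: npm ci
      - run: npm test
workflows:
  test-workflow:
    jobs:
      - test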
The Fern Jest Client supports test tagging for better categorization and filtering. Tags can be extracted from test names using various patterns:

describe('Math operations [unit]', () => {
it('should add numbers @fast', () => {
expect(2 + 2).toBe(4);
});
it('should multiply correctly #math #arithmetic', () => {
expect(3 * 4).toBe(12);
});
it('should handle edge cases [integration,edge-case]', () => {
// Test implementation
});
});

Three tag notations are supported (see the extraction sketch below):

- Bracket notation: [unit], [integration,slow]
- At notation: @fast, @slow, @unit
- Hash notation: #math, #edge-case
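The exact parsing lives inside the client; conceptually, though, the three notations reduce to patterns like the following (an illustrative sketch, not the library's actual implementation):

// Illustrative only: approximates how tags could be pulled from a test's full name
const TAG_PATTERNS = [
  /\[([^\]]+)\]/g, // bracket notation: [unit], [integration,slow]
  /@([\w-]+)/g,    // at notation: @fast
  /#([\w-]+)/g     // hash notation: #math
];

function extractTags(testName: string): string[] {
  const tags: string[] = [];
  for (const pattern of TAG_PATTERNS) {
    for (const match of testName.matchAll(pattern)) {
      // bracket groups may contain comma-separated lists
      tags.push(...match[1].split(',').map((t) => t.trim()));
    }
  }
  return tags;
}

// extractTags('should handle edge cases [integration,edge-case] @fast')
// => ['integration', 'edge-case', 'fast']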
import { createFernClient } from 'fern-jest-client';
const client = createFernClient('my-project', {
baseUrl: 'http://localhost:8080',
timeout: 30000
});
// Test connectivity
const isHealthy = await client.ping();
console.log('Fern API healthy:', isHealthy);

import { createFernClient, mapJestResultsToTestRun } from 'fern-jest-client';
import { getGitInfo, getCIInfo } from 'fern-jest-client';
const client = createFernClient('my-project');
// Get test results from Jest (this would typically be done automatically)
const jestResults = /* ... Jest aggregated results ... */;
// Get metadata
const [gitInfo, ciInfo] = await Promise.all([
getGitInfo(),
getCIInfo()
]);
// Convert and send
const testRun = mapJestResultsToTestRun(jestResults, 'my-project', gitInfo, ciInfo);
await client.report(testRun);
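The reporter has built-in retry logic (see Features), so this is normally handled for you; when reporting manually, though, you may want your own guard. A minimal sketch, with arbitrary retry count and backoff:

// Sketch only: retries client.report() with a simple linear backoff,
// using the client and TestRun from the example above
async function reportWithRetry(run: TestRun, attempts = 3): Promise<void> {
  for (let attempt = 1; attempt <= attempts; attempt++) {
    try {
      await client.report(run);
      return;
    } catch (err) {
      if (attempt === attempts) throw err;
      console.warn(`Fern report attempt ${attempt} failed, retrying...`);
      await new Promise((resolve) => setTimeout(resolve, 1000 * attempt));
    }
  }
}

// usage: await reportWithRetry(testRun);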
import { FernReporter } from 'fern-jest-client';

class CustomFernReporter extends FernReporter {
async onRunComplete(contexts: Set<any>, results: AggregatedResult): Promise<void> {
// Custom logic before reporting
console.log('Custom processing before Fern reporting');
// Call parent implementation
await super.onRunComplete(contexts, results);
// Custom logic after reporting
console.log('Custom processing after Fern reporting');
}
}
export default CustomFernReporter;

The main client for communicating with the Fern reporter API:
class FernApiClient {
constructor(projectId: string, options?: FernClientOptions);
async report(testRun: TestRun): Promise<void>;
async ping(): Promise<boolean>;
getBaseUrl(): string;
getProjectId(): string;
}

Jest custom reporter that integrates with the Fern platform:
class FernReporter extends BaseReporter {
constructor(globalConfig: Config.GlobalConfig, options: FernReporterConfig);
async onRunComplete(contexts: Set<any>, results: AggregatedResult): Promise<void>;
async testConnection(): Promise<boolean>;
}

// Get git repository information
async function getGitInfo(): Promise<GitInfo>;
// Get CI/CD environment information
function getCIInfo(): CIInfo;
// Find git repository root
function findGitRoot(startPath?: string): string | null;
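As a standalone example, the same metadata the reporter collects can be gathered directly (this assumes findGitRoot is exported alongside the other utilities; the exact fields on GitInfo and CIInfo come from the library's type definitions):

import { getGitInfo, getCIInfo, findGitRoot } from 'fern-jest-client';

async function printMetadata(): Promise<void> {
  const gitRoot = findGitRoot(process.cwd()); // null when not inside a git repo
  const ciInfo = getCIInfo();                 // synchronous (see signature above)
  const gitInfo = await getGitInfo();         // async (see signature above)
  console.log({ gitRoot, gitInfo, ciInfo });
}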
// Convert Jest results to Fern format
function mapJestResultsToTestRun(
results: JestAggregatedResult,
projectId: string,
gitInfo: GitInfo,
ciInfo: CIInfo
): TestRun;
// Generate test summary
function generateTestSummary(testRun: TestRun): string;
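For example, combined with the mapping step from the manual reporting example above (the summary's format is whatever the library produces):

const testRun = mapJestResultsToTestRun(jestResults, 'my-project', gitInfo, ciInfo);
console.log(generateTestSummary(testRun));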
interface TestRun {
  test_project_id: string;
test_seed: number;
start_time: string;
end_time: string;
git_branch: string;
git_sha: string;
build_trigger_actor: string;
build_url: string;
suite_runs: SuiteRun[];
}

interface SuiteRun {
suite_name: string;
start_time: string;
end_time: string;
spec_runs: SpecRun[];
}

interface SpecRun {
spec_description: string;
status: string;
message: string;
tags: Tag[];
start_time: string;
end_time: string;
}
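Putting the three interfaces together, a minimal populated payload might look like this (all values are illustrative; the Tag shape is defined by the library, so tags is left empty here):

const exampleRun: TestRun = {
  test_project_id: 'your-project-uuid',
  test_seed: 12345,
  start_time: '2024-01-15T10:00:00.000Z',
  end_time: '2024-01-15T10:00:42.000Z',
  git_branch: 'main',
  git_sha: 'a1b2c3d',
  build_trigger_actor: 'octocat',
  build_url: 'https://github.com/org/repo/actions/runs/1',
  suite_runs: [
    {
      suite_name: 'calculator.test.ts',
      start_time: '2024-01-15T10:00:01.000Z',
      end_time: '2024-01-15T10:00:05.000Z',
      spec_runs: [
        {
          spec_description: 'should add two numbers',
          status: 'passed',
          message: '',
          tags: [], // Tag shape is defined by the library
          start_time: '2024-01-15T10:00:01.000Z',
          end_time: '2024-01-15T10:00:01.200Z'
        }
      ]
    }
  ]
};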
A complete configuration example:

// jest.config.js
module.exports = {
preset: 'ts-jest',
testEnvironment: 'node',
reporters: [
'default',
['fern-jest-client', {
projectId: process.env.FERN_PROJECT_ID || 'my-project',
baseUrl: process.env.FERN_REPORTER_BASE_URL || 'http://localhost:8080',
enabled: process.env.CI === 'true' || process.env.FERN_ENABLED === 'true'
}]
],
testMatch: [
'**/__tests__/**/*.ts',
'**/?(*.)+(spec|test).ts'
]
};

// calculator.test.ts
describe('Calculator [unit] @math', () => {
describe('Basic operations #arithmetic', () => {
it('should add two numbers', () => {
expect(2 + 2).toBe(4);
});
it('should subtract numbers @fast', () => {
expect(5 - 3).toBe(2);
});
});
describe('Edge cases [integration] #edge-case', () => {
it('should handle division by zero', () => {
expect(() => 10 / 0).not.toThrow();
expect(10 / 0).toBe(Infinity);
});
it.skip('should handle very large numbers', () => {
// This test is skipped for demonstration
expect(Number.MAX_VALUE + 1).toBeGreaterThan(Number.MAX_VALUE);
});
});
});

name: Test with Fern Reporting
on: [push, pull_request]
jobs:
test:
runs-on: ubuntu-latest
services:
postgres:
image: postgres:13
env:
POSTGRES_PASSWORD: postgres
options: >-
--health-cmd pg_isready
--health-interval 10s
--health-timeout 5s
--health-retries 5
steps:
- uses: actions/checkout@v3
- name: Setup Node.js
uses: actions/setup-node@v3
with:
node-version: '18'
cache: 'npm'
- name: Install dependencies
run: npm ci
- name: Run tests with Fern reporting
env:
FERN_PROJECT_ID: my-project
FERN_REPORTER_BASE_URL: http://localhost:8080
FERN_ENABLED: true
run: npm test

For organizations with multiple projects, configure different project IDs:
// Project-specific configuration
const getJestConfig = (projectName) => ({
reporters: [
'default',
['fern-jest-client', {
projectId: `${process.env.COMPANY_PREFIX}-${projectName}`,
baseUrl: process.env.FERN_ENTERPRISE_URL,
enabled: process.env.NODE_ENV !== 'test'
}]
]
});
module.exports = getJestConfig('user-service');

Configure different reporting strategies for different environments:
// jest.config.js
const getFernConfig = () => {
const baseConfig = {
projectId: process.env.FERN_PROJECT_ID,
timeout: 30000
};
if (process.env.NODE_ENV === 'production') {
return {
...baseConfig,
baseUrl: 'https://fern-reporter.company.com',
enabled: true
};
}
if (process.env.NODE_ENV === 'staging') {
return {
...baseConfig,
baseUrl: 'https://fern-reporter-staging.company.com',
enabled: true
};
}
return {
...baseConfig,
enabled: false // Disable in development/test
};
};
module.exports = {
reporters: [
'default',
['fern-jest-client', getFernConfig()]
]
};

Implement sophisticated test categorization:
// tests/utils/testCategories.ts
export const TestCategories = {
// Functional categories
UNIT: '[unit]',
INTEGRATION: '[integration]',
E2E: '[e2e]',
// Performance categories
FAST: '@fast',
SLOW: '@slow',
PERFORMANCE: '#performance',
// Feature categories
API: '#api',
UI: '#ui',
DATABASE: '#database',
AUTH: '#auth',
// Risk categories
CRITICAL: '[critical]',
HIGH_RISK: '[high-risk]',
FLAKY: '@flaky-prone'
};
// usage in tests
describe(`User Authentication ${TestCategories.UNIT} ${TestCategories.AUTH}`, () => {
it(`should validate token ${TestCategories.FAST}`, () => {
// test implementation
});
it(`should handle expired tokens ${TestCategories.CRITICAL}`, () => {
// test implementation
});
});

Generate tags programmatically based on test context:
// tests/utils/dynamicTags.ts
export const createDynamicTags = (testContext: any) => {
const tags = [];
if (testContext.executionTime < 100) tags.push('@fast');
if (testContext.executionTime > 5000) tags.push('@slow');
if (testContext.usesDatabase) tags.push('#database');
if (testContext.usesAPI) tags.push('#api');
return tags.join(' ');
};
// usage (orderTestContext is whatever context object your test harness provides)
describe(`Order Processing ${createDynamicTags(orderTestContext)}`, () => {
// tests
});

name: Comprehensive Testing with Fern
on:
push:
branches: [main, develop]
pull_request:
branches: [main]
jobs:
test:
strategy:
matrix:
node-version: [16, 18, 20]
os: [ubuntu-latest, windows-latest, macos-latest]
runs-on: ${{ matrix.os }}
steps:
- uses: actions/checkout@v4
with:
fetch-depth: 0 # Get full git history for better analytics
- name: Setup Node.js ${{ matrix.node-version }}
uses: actions/setup-node@v4
with:
node-version: ${{ matrix.node-version }}
cache: 'npm'
- name: Install dependencies
run: npm ci
- name: Run tests with Fern reporting
env:
FERN_PROJECT_ID: my-app-${{ matrix.node-version }}-${{ matrix.os }}
FERN_REPORTER_BASE_URL: ${{ secrets.FERN_REPORTER_URL }}
FERN_API_KEY: ${{ secrets.FERN_API_KEY }}
FERN_ENABLED: true
FERN_DEBUG: ${{ runner.debug == '1' }}
# Additional context for analytics
FERN_BUILD_MATRIX: "node-${{ matrix.node-version }}-${{ matrix.os }}"
FERN_PR_NUMBER: ${{ github.event.number }}
FERN_BRANCH_NAME: ${{ github.head_ref || github.ref_name }}
run: |
npm test -- --coverage
npm run test:e2e
- name: Upload test artifacts
if: failure()
uses: actions/upload-artifact@v3
with:
name: test-results-${{ matrix.node-version }}-${{ matrix.os }}
path: |
coverage/
test-results/
screenshots/

# .gitlab-ci.yml
stages:
- test
- report
variables:
FERN_PROJECT_ID: "$CI_PROJECT_NAME"
FERN_REPORTER_BASE_URL: "$FERN_REPORTER_URL"
FERN_ENABLED: "true"
.test_template: &test_template
stage: test
before_script:
- npm ci
script:
- npm test
artifacts:
reports:
junit: test-results.xml
coverage_report:
coverage_format: cobertura
path: coverage/cobertura-coverage.xml
paths:
- coverage/
- test-results/
expire_in: 1 week
test:unit:
<<: *test_template
variables:
FERN_PROJECT_ID: "$CI_PROJECT_NAME-unit"
script:
- npm run test:unit
test:integration:
<<: *test_template
variables:
FERN_PROJECT_ID: "$CI_PROJECT_NAME-integration"
script:
- npm run test:integration
test:e2e:
<<: *test_template
variables:
FERN_PROJECT_ID: "$CI_PROJECT_NAME-e2e"
script:
- npm run test:e2e
fern_analytics_report:
stage: report
dependencies:
- test:unit
- test:integration
- test:e2e
script:
- echo "Test analytics available at $FERN_REPORTER_BASE_URL/projects/$CI_PROJECT_NAME"
only:
- main
- develop

Track custom performance metrics:
// tests/utils/performanceTracker.ts
import { performance } from 'perf_hooks';
export class PerformanceTracker {
private metrics: Map<string, number> = new Map();
startTimer(name: string): void {
this.metrics.set(`${name}_start`, performance.now());
}
endTimer(name: string): number {
const start = this.metrics.get(`${name}_start`);
if (!start) throw new Error(`Timer ${name} was not started`);
const duration = performance.now() - start;
this.metrics.set(name, duration);
return duration;
}
getMetrics(): Record<string, number> {
return Object.fromEntries(this.metrics);
}
}
// usage in tests
describe('API Performance Tests [performance]', () => {
const tracker = new PerformanceTracker();
beforeEach(() => {
tracker.startTimer('api_call');
});
afterEach(() => {
const duration = tracker.endTimer('api_call');
console.log(`API call took ${duration.toFixed(2)}ms`);
// Add performance tags based on duration
if (duration > 1000) {
console.log('Performance warning: API call exceeded 1000ms @slow #performance-issue');
}
});
it('should respond within performance threshold @performance', async () => {
const response = await apiClient.getData();
expect(response.status).toBe(200);
const duration = tracker.getMetrics().api_call;
expect(duration).toBeLessThan(500); // 500ms threshold
});
});

// tests/utils/memoryMonitor.ts
export class MemoryMonitor {
private initialMemory: NodeJS.MemoryUsage;
constructor() {
this.initialMemory = process.memoryUsage();
}
getCurrentUsage(): NodeJS.MemoryUsage {
return process.memoryUsage();
}
getMemoryDelta(): {
heapUsed: number;
heapTotal: number;
external: number;
} {
const current = this.getCurrentUsage();
return {
heapUsed: current.heapUsed - this.initialMemory.heapUsed,
heapTotal: current.heapTotal - this.initialMemory.heapTotal,
external: current.external - this.initialMemory.external
};
}
}
// usage
describe('Memory Usage Tests [unit] #memory', () => {
let memoryMonitor: MemoryMonitor;
beforeEach(() => {
memoryMonitor = new MemoryMonitor();
});
afterEach(() => {
const delta = memoryMonitor.getMemoryDelta();
if (delta.heapUsed > 10 * 1024 * 1024) { // 10MB
console.log('Memory warning: High memory usage detected @memory-leak #performance-issue');
}
});
it('should not leak memory during bulk operations @memory', () => {
// test implementation
const delta = memoryMonitor.getMemoryDelta();
expect(delta.heapUsed).toBeLessThan(5 * 1024 * 1024); // 5MB limit
});
});

// tests/utils/testOwnership.ts
export const TestOwners = {
AUTH: '@auth-team',
API: '@backend-team',
UI: '@frontend-team',
DATABASE: '@data-team',
PERFORMANCE: '@performance-team'
};
// usage
describe(`Authentication Service ${TestOwners.AUTH} [critical]`, () => {
it('should validate JWT tokens', () => {
// When this test fails, @auth-team gets notified
});
});

// tests/utils/customReporter.ts
import { FernReporter } from 'fern-jest-client';
export class TeamCustomReporter extends FernReporter {
async onRunComplete(contexts: Set<any>, results: any): Promise<void> {
// Custom team-specific logic
await this.generateTeamReport(results);
await this.notifyStakeholders(results);
// Call parent to send to Fern
await super.onRunComplete(contexts, results);
// Post-processing
await this.updateDashboards(results);
}
private async generateTeamReport(results: any): Promise<void> {
// Generate custom reports for different teams
const criticalFailures = this.extractCriticalFailures(results);
if (criticalFailures.length > 0) {
await this.alertOnCallTeam(criticalFailures);
}
}
private extractCriticalFailures(results: any): any[] {
return results.testResults
.flatMap((suite: any) => suite.testResults)
.filter((test: any) =>
test.status === 'failed' &&
test.fullName.includes('[critical]')
);
}
}

If tests are not being reported:

- Check Configuration: Ensure FERN_PROJECT_ID is set
- Verify URL: Confirm FERN_REPORTER_BASE_URL is correct
- Check Connectivity: Test if the Fern reporter service is accessible
- Enable Debug: Set FERN_DEBUG=true for detailed logging

If git information is missing:

- Check Git Repository: Ensure you're running tests in a git repository
- Set Git Path: Use the GIT_REPO_PATH environment variable if needed
- CI Environment: Verify CI environment variables are set correctly

If reporting errors cause test failures:

- Disable Failure on Error: Set FERN_FAIL_ON_ERROR=false
- Check Network: Ensure network connectivity to the Fern reporter
- Increase Timeout: Adjust the timeout option in the reporter configuration
Enable debug logging to troubleshoot issues:
FERN_DEBUG=true npm test

This will output detailed information about:
- Configuration loading
- Git information extraction
- API requests and responses
- Error details
Once your test data is sent to the fern-reporter service, you gain access to powerful analytics and insights through the Fern platform ecosystem. The fern-jest-client is designed to work seamlessly with the complete Fern testing analytics platform.
The fern-reporter service provides comprehensive test analytics including:
- Test Run Trends: Historical view of test execution patterns over time
- Success/Failure Rates: Track test reliability and identify problematic areas
- Performance Metrics: Test execution duration analysis and performance regression detection
- Flaky Test Detection: Automatically identify tests that pass/fail inconsistently
- Branch Comparison: Compare test results across different git branches
- CI/CD Integration Analytics: Track test performance across different CI environments
The Jest client automatically captures and sends the following metrics to fern-reporter:
- Total test count, passed, failed, and skipped tests
- Individual test execution times and suite durations
- Test start/end timestamps with millisecond precision
- Test retry counts and failure patterns
- Git branch, commit SHA, and repository information
- CI/CD environment details (GitHub Actions, GitLab CI, Jenkins, CircleCI)
- Build trigger information (manual, scheduled, PR-triggered)
- Node.js version and Jest configuration details
- Test file structure and suite hierarchy
- Custom tags for test categorization and filtering
- Test descriptions and failure messages
- Stack traces for failed tests
Success Rate Trends
├── Last 30 days: 94.2% → 96.1%
├── Test Coverage: 87% → 89%
├── Avg Duration: 2.3s → 1.9s
└── Flaky Tests: 12 → 8
- Slowest Tests: Identify performance bottlenecks
- Duration Regression: Detect tests that are getting slower over time
- Parallel Execution Analysis: Optimize test parallelization strategies
- Memory Usage Tracking: Monitor test memory consumption patterns
- Test Stability Score: Measure test reliability over time
- Flaky Test Reports: Detailed analysis of unstable tests
- Code Coverage Integration: Correlate test results with coverage data
- Technical Debt Indicators: Identify tests that need maintenance
Web Dashboard - fern-ui
Access the modern React-based dashboard to:
- View real-time test execution results
- Analyze historical trends and patterns
- Filter and search test results by tags, branches, or time periods
- Export test reports in multiple formats (PDF, CSV, JSON)
- Set up alerts for test failures or performance regressions
Query test data programmatically:
query GetTestRuns($projectId: String!, $branch: String) {
testRuns(projectId: $projectId, branch: $branch) {
id
startTime
endTime
status
totalTests
passedTests
failedTests
suiteRuns {
suiteName
duration
specRuns {
description
status
tags
}
}
}
}
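To execute this query from Node.js 18+ (which ships a global fetch), POST it to the reporter's GraphQL endpoint. The /graphql path is an assumption here; confirm it against your fern-reporter deployment:

// Sketch: execute the GetTestRuns query above against fern-reporter
async function fetchTestRuns(projectId: string, branch?: string) {
  const res = await fetch('http://fern-reporter.example.com/graphql', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      query: GET_TEST_RUNS, // the GetTestRuns document above, stored as a string
      variables: { projectId, branch }
    })
  });
  const { data, errors } = await res.json();
  if (errors) throw new Error(JSON.stringify(errors));
  return data.testRuns;
}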
Access data via REST endpoints:

# Get test run summary
curl -X GET "http://fern-reporter.example.com/api/v1/test-runs?project_id=my-project"
# Get detailed test run data
curl -X GET "http://fern-reporter.example.com/api/v1/test-runs/123/details"
# Get test analytics
curl -X GET "http://fern-reporter.example.com/api/v1/projects/my-project/analytics"-
Set up fern-reporter: Deploy the backend service following the fern-reporter setup guide
-
Configure the Jest client: Use this fern-jest-client to start sending test data
-
Access the dashboard: Open the fern-ui web interface to view your analytics
-
Customize your views: Set up custom dashboards and alerts based on your team's needs
Project: my-awesome-app
├── Success Rate: 96.3% (↑2.1%)
├── Avg Duration: 3.2m (↓15s)
├── Total Tests: 1,247 (+23)
├── Flaky Tests: 5 (-2)
└── Last Run: 2 minutes ago

Recent Trends:
┌─────────────────────────────────────┐
│ Success Rate (Last 30 days)         │
│ ██████████████████████████████ 96.3%│
│ Duration (Last 30 days)             │
│ ████████████████████████████ 3.2m   │
└─────────────────────────────────────┘

Test Suites Performance
├── auth.test.ts         ✓ 45/45   (1.2s)
├── api.test.ts          ✓ 78/78   (2.1s)
├── integration.test.ts  ✓ 32/34   (4.5s) ⚠ 2 flaky
├── utils.test.ts        ✓ 156/156 (0.8s)
└── e2e.test.ts          ✗ 12/15   (8.2s) 3 failed
Set up automated Slack alerts for test failures:
// webhook integration example
const webhook = new FernWebhook({
url: process.env.SLACK_WEBHOOK_URL,
events: ['test.failed', 'test.flaky']
});

Automatically create Jira tickets for failing tests:
// Jira integration example
const jiraIntegration = new FernJiraIntegration({
projectKey: 'TEST',
createTicketsFor: ['failed', 'flaky'],
assignTo: 'test-team'
});

- Bug Prevention: Identify flaky tests before they reach production
- Performance Optimization: Spot slow tests and optimize build times
- Quality Insights: Track test coverage and code quality metrics
- Continuous Improvement: Data-driven decisions for test strategy
- Team Productivity: Measure and improve development velocity

- Quality Gates: Enforce quality standards with automated reporting
- Trend Analysis: Long-term visibility into team and project health
- Cost Optimization: Identify and eliminate wasteful test patterns
- CI/CD Optimization: Improve pipeline reliability and speed
- Build Analysis: Deep insights into build failures and patterns
- Early Warning: Proactive alerts for quality degradation
- Compliance: Automated reporting for audit and compliance needs
"After implementing Fern analytics, we reduced our flaky test count by 75% and improved our build reliability from 85% to 98%."
- Senior DevOps Engineer, Tech Startup
"Fern's trend analysis helped us identify performance regressions 3 weeks earlier than traditional methods, saving us significant debugging time."
- Principal Engineer, Fortune 500 Company
"The team visibility features in Fern transformed how we approach test ownership and collaboration. Our mean time to resolution for test failures dropped by 60%."
- Engineering Manager, SaaS Platform
- Consistent Tagging: Use standardized tags across your test suites
- Meaningful Project IDs: Structure project IDs for easy filtering and grouping
- Regular Review: Schedule weekly analytics reviews with your team
- Action-Oriented: Set up alerts and workflows to act on insights
- Continuous Refinement: Regularly update your analytics strategy based on learnings
- Test Success Rate: Aim for >95% success rate on main branch
- Build Duration: Monitor and optimize for <10 minute feedback loops
- Flaky Test Ratio: Keep flaky tests <2% of total test suite
- Coverage Trends: Track coverage changes with each release
- Team Velocity: Measure tests written vs. bugs found in production
For detailed information about analytics capabilities and setup:
- fern-reporter Documentation: Complete setup guide, API reference, and deployment instructions
- fern-ui Setup Guide: Dashboard configuration, customization, and user management
- Analytics API Documentation: Comprehensive API reference with examples
- Deployment Guide: Production deployment best practices and scaling strategies
- Integration Examples: Real-world integration patterns and success stories
- Analytics Best Practices: Comprehensive guide to maximizing value from test analytics
- Troubleshooting Guide: Common issues and solutions
- Community Forum: Get help from the community and share experiences
- fern-reporter: The backend service that receives, stores, and analyzes test data with advanced analytics capabilities
- fern-ui: Modern React-based web interface for viewing test results, trends, and comprehensive analytics dashboards
- fern-ginkgo-client: Similar client for Go/Ginkgo tests with identical analytics integration
- fern-junit-client: Client for JUnit-based testing frameworks (Java, Kotlin, Scala)
- fern-platform: Complete testing analytics platform with AI-powered insights and recommendations
1. Fork the repository
2. Create a feature branch: git checkout -b feature/new-feature
3. Make your changes and add tests
4. Run tests: npm test
5. Run linting: npm run lint
6. Commit your changes: git commit -m 'Add new feature'
7. Push to the branch: git push origin feature/new-feature
8. Submit a pull request
# Clone the repository
git clone https://github.com/your-org/fern-platform.git
cd fern-platform/fern-jest-client
# Install dependencies
npm install
# Build the project
npm run build
# Run tests
npm test
# Run with the Fern platform
make test-with-fern

This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
- Documentation: Check this README and inline code documentation
- Issues: Report bugs and feature requests via GitHub Issues
- Discussions: Join the community discussions for questions and support