Official repo for adobe.com/express
- Install the Helix CLI: `sudo npm install -g @adobe/helix-cli`
- Run `aem up` in this repo's folder. (opens your browser at `http://localhost:3000`)
- Open this repo's folder in your favorite editor and start coding.
```sh
npm run test
```

or:

```sh
npm run test:watch
```

This will give you several options to debug tests. Note: coverage may not be accurate.
To run a specific test file with debugging enabled:

```sh
npx wtr --config ./web-test-runner.config.js --node-resolve --port=2000 "**/pricing.test.js" --debug
```

Replace `pricing.test.js` with your specific test file name. The `--debug` flag enables debugging capabilities.
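For orientation, a minimal test file for Web Test Runner might look like the sketch below. The block path, fixture markup, and the `@esm-bundle/chai` import are assumptions for illustration; match them to the actual block you are testing and the helpers this repo uses.

```js
// test/blocks/pricing/pricing.test.js — hypothetical sketch; adjust paths to this repo's layout
import { expect } from '@esm-bundle/chai';
import decorate from '../../../express/blocks/pricing/pricing.js'; // assumed block location

describe('pricing block', () => {
  let block;

  beforeEach(() => {
    // Build a minimal block DOM the way AEM would deliver it
    block = document.createElement('div');
    block.classList.add('pricing');
    block.innerHTML = '<div><div>Starter</div><div>US$9.99/mo</div></div>';
    document.body.appendChild(block);
  });

  afterEach(() => {
    block.remove();
  });

  it('decorates the block without throwing', async () => {
    await decorate(block);
    expect(block.querySelector('div')).to.exist;
  });
});
```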
- Ensure you have run `npm install` in the project root.
- You may also need to run `npx playwright install` to install all Playwright browsers.
- Nala tests can be executed using the following command:

```sh
npm run nala <env> [options]

# env: [main | stage | etc.]
# options:
#   browser=<chrome|firefox|webkit>    # Browser to use (default: chrome)
#   device=<desktop|mobile>            # Device (default: desktop)
#   test=<*.test.cjs>                  # Specific test file to run (runs all tests in the file)
#   -g, --g=<@tag>                     # Tag to filter tests by annotations, e.g. @test1, @ax-columns, @ax-marquee
#   mode=<headless|ui|debug|headed>    # Mode (default: headless)
#   config=<config-file>               # Configuration file (default: Playwright default)
#   project=<project-name>             # Project configuration (default: express-live-chromium)
#   milolibs=<main|stage|feat-branch>  # Milo libraries environment to run against
```
- Examples:

```sh
npm run nala stage @ax-columns                  # Run ax-columns block tests on express stage env on chrome browser
npm run nala stage @ax-columns browser=firefox  # Run ax-columns block tests on express stage env on firefox browser
npm run nala stage milolibs=stage               # Run express tests on stage env with Milo Stage libs
```

To view examples of how to use Nala commands with various options, you can run `npm run nala help`.
- Debug and UI Mode Caution: When using `debug` or `ui` mode, it is recommended to run only a single test using annotations (e.g., `@test1`). Running multiple tests in these modes (e.g., `npm run nala stage mode=debug` or `mode=ui`) will launch a separate browser or debugger window for each test, which can quickly become resource-intensive and challenging to manage.
- Tip: To effectively watch or debug, focus on one test at a time to avoid opening excessive browser instances or debugger windows.
- Run accessibility tests using the following command:

```sh
npm run a11y <env|url> [path] [options]

# env: [main | stage | branch-name | full URL]
# path: Optional relative path to append to the env or URL
# options:
#   -f, --file <filePath>          # Path to a file containing multiple URLs (one per line)
#   -s, --scope <scope>            # DOM scope to test (default: body)
#   -t, --tags <tag1,tag2>         # WCAG tags to include (e.g., wcag2a,wcag21aa)
#   -m, --max-violations <number>  # Max allowed violations before the test fails (default: 0)
#   -o, --output-dir <directory>   # Directory to save HTML reports (default: ./test-a11y-results)
```

- Examples:

```sh
npm run a11y https://adobe.com
npm run a11y stage /drafts/nala/blocks/ax-columns/ax-column-highlight
npm run a11y https://adobe.com -- -t 'wcag2a'
npm run a11y main -f urls.txt
```

For detailed guides and documentation on Nala, please visit the Nala GitHub Wiki.
This project uses automated workflows for merging to stage and main branches. Understanding the label system is crucial for getting your changes deployed.
- Purpose: Indicates PR is ready for code review
- Effect: PR will be reviewed by team members
- Requirement: Must be removed before auto-merge to stage
- Usage: Add when PR is complete and ready for feedback
- Purpose: Marks PR as ready for merge to stage environment
- Effect: PR will be included in next automated merge batch to stage
- Requirements:
  - Must have 2+ approvals (configurable)
  - Must pass all required checks
  - Must NOT have "Ready for Review" label
- Usage: Add after addressing all review feedback
- Purpose: Alternative to "Ready for Stage" indicating QA has tested the changes
- Effect: PR will be included in next automated merge batch to stage
- Requirements: Same as "Ready for Stage"
- Usage: Add when QA team has verified functionality
- Purpose: Indicates PR should be merged from stage to main (production)
- Effect: Stage-to-main PR will be auto-merged during next cycle
- Requirements:
  - Must be on stage-to-main PR
  - Must have sufficient approvals
  - Must pass all checks including PSI (PageSpeed Insights)
- Usage: Automatically managed by workflows
- Purpose: Marks changes as having no functional impact (docs, tests, etc.)
- Effect:
  - Can be merged alongside other PRs without file conflict concerns
  - Bypasses some restrictions in batch merging
- Usage: Add for documentation, test, or configuration changes that don't affect functionality. Always add "Ready for QA" when reviews are complete.
- Purpose: Prevents PR from being auto-merged
- Effect: PR will be skipped by all automated merge workflows
- Usage: Add when PR needs to be held back for any reason
- Purpose: Triggers automated Nala test execution
- Effect: Runs comprehensive end-to-end test suite against PR
- Usage: Add when you want to run full test coverage
- Frequency: Runs every 4 hours
- Batch Size: Up to 8 PRs per batch (configurable)
- Selection Criteria:
  - Has "Ready for Stage" OR "QA Approved" label
  - Does NOT have "Ready for Review" label
  - Has 2+ approvals
  - All checks passing
  - No file conflicts with other PRs in batch
- Frequency: Runs every 4 hours
- Target: Stage-to-main PRs only
- Requirements:
  - Stage-to-main PR exists
  - Has sufficient approvals
  - All checks passing (including PSI)
  - Within RCP (Release Control Period) guidelines
You can manually trigger workflows either via the GitHub UI or via repository dispatch:
https://github.com/adobecom/express-milo/actions
Dispatch:
```sh
# Trigger merge to stage
curl -X POST \
  -H "Authorization: token YOUR_TOKEN" \
  -H "Accept: application/vnd.github.v3+json" \
  https://api.github.com/repos/adobecom/express-milo/dispatches \
  -d '{"event_type":"merge-to-stage"}'

# Trigger merge to main
curl -X POST \
  -H "Authorization: token YOUR_TOKEN" \
  -H "Accept: application/vnd.github.v3+json" \
  https://api.github.com/repos/adobecom/express-milo/dispatches \
  -d '{"event_type":"merge-to-main"}'
```
- Start with "Ready for Review" when PR is complete
- Address all feedback before removing "Ready for Review"
- Add "Ready for Stage" only when truly ready for staging
- Use "Zero Impact" appropriately for non-functional changes
- Check file conflicts - similar files edited in same batch may conflict
- Use "QA Approved" instead of "Ready for Stage" always.
- Test on staging environment before approving stage-to-main
- Consider "Run Nala" for comprehensive test coverage
- Remove "Ready for Review" after thorough review
- Add "Do Not Merge" if changes need to be held
- Verify PageSpeed impact for performance-critical changes
- Check required labels are present
- Verify all checks are passing
- Ensure sufficient approvals (2+)
- Check for file conflicts with other PRs
- Remove "Ready for Review" if still present
- Verify PageSpeed Insights checks are passing
- Check if within RCP (Release Control Period)
- Ensure stage-to-main PR has proper approvals
- Check workflow logs in Actions tab
- Look for file conflicts between PRs
- Verify individual PR requirements
- Check if batch size limit exceeded
This project includes an intelligent development assistant powered by Cursor AI with specialized rules for AEM Edge Delivery Services development.
The project contains specialized AI rules located in `.cursor/rules/` that provide expert guidance for:
- Performance optimization following AEM Three-Phase Loading principles
- Block development with author-first design patterns
- CSS variable enforcement and design system consistency
- Unit testing standards based on Adobe best practices
- Code review guidelines for quality assurance
These rules are automatically applied to every query:
- Core Web Vitals Standards - Performance optimization guidance
- Resource Loading Strategy - AEM Three-Phase Loading (E-L-D) patterns (see the sketch after this list)
- DOM Manipulation Best Practices - Efficient DOM operations
- AEM Markup Sections & Blocks - Structure and authoring guidelines
- Image Optimization Requirements - Responsive image best practices
- Event Handling Performance - Efficient event management
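For context, the Three-Phase Loading (E-L-D) pattern referenced above splits page load into eager, lazy, and delayed phases. The sketch below is a generic illustration of that split and not this repo's actual `scripts.js`; the helper names and the 3-second delay are assumptions.

```js
// Generic E-L-D illustration — not this repo's actual scripts.js
function decorateFirstSection(doc) {
  // Placeholder: prepare only the first/hero section so the LCP can render quickly
  doc.querySelector('main > div')?.classList.add('section');
}

function decorateRemainingSections(doc) {
  // Placeholder: prepare everything below the first section
  doc.querySelectorAll('main > div:not(.section)').forEach((s) => s.classList.add('section'));
}

async function loadEager(doc) {
  // Eager phase: only what is needed to reach the LCP
  decorateFirstSection(doc);
}

async function loadLazy(doc) {
  // Lazy phase: the rest of the page (remaining blocks, fonts, header/footer)
  decorateRemainingSections(doc);
}

function loadDelayed() {
  // Delayed phase: third-party and non-essential work, pushed well past the LCP
  setTimeout(() => import('./delayed.js'), 3000); // './delayed.js' is an assumed module name
}

(async function loadPage() {
  await loadEager(document);
  await loadLazy(document);
  loadDelayed();
}());
```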
Activates unit testing guidance:
```
Write my tests for the hero-marquee block
```
- Complete test file structure with proper mocking
- Async block testing patterns for AEM utilities
- Performance-critical path validation
- Coverage guidelines and anti-patterns
Activates code review and quality standards:
```
Code review this grid-marquee implementation
```
- CSS variable linting enforcement
- Block pattern compliance checking
- Performance optimization recommendations
- Accessibility and SEO validation
Activates diagnostic rules for PageSpeed issues:

```
PageSpeed score is 75, LCP is 5+ seconds
```
- Lighthouse performance troubleshooting
- CSS render-blocking diagnosis
- Resource loading optimization
- AEM transformation speed improvements
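When reporting an LCP problem like the example above, it helps to confirm which element the browser treats as the LCP candidate. One way to check is a standard `PerformanceObserver` snippet run in the browser console (generic web API usage, not an AEM-specific utility):

```js
// Log LCP candidates and the element responsible (paste into the browser console)
new PerformanceObserver((list) => {
  list.getEntries().forEach((entry) => {
    console.log('LCP candidate at', Math.round(entry.startTime), 'ms:', entry.element);
  });
}).observe({ type: 'largest-contentful-paint', buffered: true });
```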
Get block-specific guidance:
```
Create a new pricing table block
```
- Author-first design principles
- Section metadata integration
- Progressive enhancement patterns
- Content preservation best practices
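For reference, AEM Edge Delivery blocks export a `decorate` function that receives the block element. The sketch below is a hypothetical, minimal pricing-table block illustrating the author-first idea of enhancing the authored rows in place; the class names and structure are assumptions, not this repo's actual implementation.

```js
// blocks/pricing-table/pricing-table.js — hypothetical sketch, not an existing block in this repo
export default function decorate(block) {
  // Each authored row arrives as a child div; preserve the content and only add styling hooks
  const rows = [...block.children];

  rows.forEach((row, i) => {
    row.classList.add(i === 0 ? 'pricing-table-header' : 'pricing-table-row');
    [...row.children].forEach((cell) => cell.classList.add('pricing-table-cell'));
  });
}
```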
Activates E2E test generation:
```
Generate Nala tests for the grid-marquee block
```
- Cross-browser test scenarios
- Accessibility validation tests
- Visual regression test setups
- Performance and responsive testing
- Page object model patterns
Example Generated Nala Test:
```js
// Auto-generated hero-marquee.test.js
import { test, expect } from '@playwright/test';

test.describe('Hero Marquee Block', () => {
  test('should display headline and CTA correctly', async ({ page }) => {
    await page.goto('/express/');

    const heroBlock = page.locator('.hero-marquee');
    await expect(heroBlock).toBeVisible();

    const headline = heroBlock.locator('h1');
    await expect(headline).toBeVisible();
    await expect(headline).toContainText('Adobe Express');

    const cta = heroBlock.locator('.cta a').first();
    await expect(cta).toBeVisible();
    await expect(cta).toHaveAttribute('href');
  });

  test('should be responsive across devices', async ({ page }) => {
    // Mobile viewport
    await page.setViewportSize({ width: 375, height: 667 });
    await page.goto('/express/');
    const heroBlock = page.locator('.hero-marquee');
    await expect(heroBlock).toBeVisible();

    // Desktop viewport
    await page.setViewportSize({ width: 1200, height: 800 });
    await page.reload();
    await expect(heroBlock).toBeVisible();
  });
});
```

- Be specific: "Code review the grid-marquee ratings variant" vs "check my code"
- Include context: "PageSpeed 75, LCP 5s, grid-marquee with 4 cards"
- Use trigger phrases: "Write my tests", "Code review", performance metrics
- Always mention specific PageSpeed/Lighthouse scores
- Include LCP timing and CWV metrics
- Specify which blocks or components are involved
- Reference the AEM Three-Phase Loading when asking about performance
- Describe the author experience and content structure
- Mention responsive requirements and device support
- Include accessibility and SEO considerations
- Reference existing blocks for consistency patterns
The AI rules can be customized by editing files in `.cursor/rules/`:
- Always-on rules: Marked with `APPLY: EVERY QUERY`
- Triggered rules: Marked with specific activation phrases
- Performance rules: Activated by PageSpeed/performance issues
- Testing rules: Activated by "Write my tests" commands
- AEM Edge Delivery Services - Official AEM EDS documentation
- Keeping it 100 - PageSpeed optimization guide
- Helix Block Design - Block development best practices
- Unit Testing in AEM - Adobe's unit testing guide
- Nala Testing Framework - End-to-end testing documentation
- Web Test Runner - JavaScript testing framework
- Adobe Experience Platform WebSDK - Client-side SDK documentation
- Adobe Analytics - Analytics implementation guide
- Adobe Target - Personalization and testing platform
- Adobecom Discussions - Start discussions with the larger group
- Adobe Developer Community - Developer forums and resources
- Experience League - Adobe's learning platform
- Core Web Vitals - Google's performance metrics guide
- Lighthouse - Performance auditing tool
- PageSpeed Insights - Real-world performance analysis
- GitHub Actions - CI/CD automation documentation
- Husky - Git hooks for code quality
- Adobe Code Sync Bot - Automated deployment tooling