## Table of Contents
- Description
- Technologies
- Setup
- Docker and Docker Compose
- API Usage and Supported Features
- Fragments UI Testing Web App
## Description

A scalable fragments management microservice API that supports creating, retrieving, converting, updating, and deleting fragments in various MIME types (e.g., text/plain, application/json, image/png). The API securely authenticates users via AWS Cognito, ensuring only authorized access to fragments. A CI/CD pipeline built with GitHub Actions automates linting, testing, building and pushing Docker images, and deploying to AWS ECS using the pre-built ECR Docker image. The project also leverages AWS services such as DynamoDB for fragment metadata, S3 for fragment data, and CloudWatch for monitoring and logging.
## Technologies

- Backend Framework & Language: JavaScript, Express.js
- Authentication & Security: AWS Cognito, Passport, Helmet
- Database & Storage: AWS DynamoDB, AWS S3, In-Memory Storage (used as a fallback when the AWS environment variables are not provided)
- Containerization, Deployment & Orchestration: Docker, Docker Hub, AWS ECR, AWS ECS
- CI/CD: GitHub Actions
- Testing: Jest, Supertest, Hurl
- Other Utilities: dotenv, sharp, markdown-it, pino, eslint, etc.
## Setup

1. Clone the repository

   Navigate to the directory where you want to clone the repository, then run:

   ```bash
   git clone https://github.com/zlinzz/fragments.git
   ```

2. Navigate to the fragments directory

   ```bash
   cd fragments
   ```

3. Install dependencies

   ```bash
   npm install
   ```

4. Add environment files

   Ensure you have the required environment variables set up. Create a `.env` file and any other configuration files you need (e.g., `.htpasswd`). Note: you can edit the `.htpasswd` file to set your own username and password for HTTP Basic Auth, letting you customize the authentication credentials for your development environment. A sample `.env` sketch is shown after this list.

   - Example 1 - use In-Memory Storage & the HTTP Basic Auth strategy:
     - PORT, LOG_LEVEL, HTPASSWD_FILE
   - Example 2 - run in a production-like environment, using AWS DynamoDB and S3 & AWS Cognito Auth (see step 8):
     - PORT, LOG_LEVEL
     - AWS_COGNITO_POOL_ID, AWS_COGNITO_CLIENT_ID, API_URL, AWS_REGION
     - AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_SESSION_TOKEN
5. Start the development server

   ```bash
   npm start
   ```

6. Run unit tests

   ```bash
   npm run test
   ```

7. Run integration tests

   ```bash
   npm run test:integration
   ```

8. Run the application in a production-like environment

   To run the application with Docker containers, follow the instructions in the Docker and Docker Compose section.
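For reference, here is a minimal `.env` sketch for Example 1 (In-Memory Storage with HTTP Basic Auth). The values and the `.htpasswd` path are placeholders, so adjust them to match your own setup:

```bash
# .env: local development with in-memory storage and HTTP Basic Auth (placeholder values)
PORT=8080                 # port the API listens on
LOG_LEVEL=debug           # pino log level (e.g., debug, info)
HTPASSWD_FILE=.htpasswd   # path to your HTTP Basic Auth credentials file
```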
## Docker and Docker Compose

This project includes a Docker image available on Docker Hub that you can pull and run easily. Follow these steps to get started:
- Pull the Docker image from Docker Hub:

  ```bash
  docker pull zlinzz/fragments:latest
  ```

- Use your own `.env` file (refer to step 4, Example 1, in the Setup section for guidance) and run:

  ```bash
  docker run --env-file .env -p 8080:8080 zlinzz/fragments:latest
  ```

  Or use the provided `env.jest` file:

  ```bash
  docker run --env-file env.jest -e LOG_LEVEL=debug -p 8080:8080 zlinzz/fragments:latest
  ```

- You are now ready to send requests! Refer to the Curl Command Examples section for guidance, or try the quick health check shown after this list.
- To stop the container:

  ```bash
  docker ps
  docker kill <id>
  ```
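If you want to confirm the container is running before sending authenticated requests, you can hit the unauthenticated health check route. This assumes the 8080:8080 port mapping used in the docker run commands above; the exact response fields may differ between versions:

```bash
# Health check against the running container
curl -i localhost:8080
# Expect an HTTP 200 with a small JSON status payload, e.g. {"status":"ok", ...}
```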
This project includes a multi-container setup managed by Docker Compose. It provides a fully functional offline development and testing environment, with a separate container for each service, using either docker-compose.yml or docker-compose.local.yml.

Use the docker-compose.yml file to set up the environment with DynamoDB Local as the DynamoDB backend and LocalStack as the S3 backend.
- Start the containers by running:

  ```bash
  docker compose up
  ```

  Note: if you make changes to your fragments source code and want to rebuild your Docker image, you can use the `--build` flag to force a rebuild: `docker compose up --build`. You should see log messages from all three services: `fragments`, `dynamodb-local`, and `localstack`.

- Make sure you can access all three services:

  ```bash
  $ curl localhost:8080
  {"status":"ok","author":"David Humphrey <[email protected]>","githubUrl":"https://github.com/humphd/fragments","version":"0.8.0"}

  $ curl localhost:8000
  {"__type":"com.amazonaws.dynamodb.v20120810#MissingAuthenticationToken","Message":"Request must contain either a valid (registered) AWS access key ID or X.509 certificate."}

  $ curl localhost:4566/_localstack/health
  {"services": {"acm": "available", "apigateway": "available", "cloudformation": "available", "cloudwatch": "available", "config": "available", "dynamodb": "available", "dynamodbstreams": "available", "ec2": "available", "es": "available", "events": "available", "firehose": "available", "iam": "available", "kinesis": "available", "kms": "available", "lambda": "available", "logs": "available", "opensearch": "available", "redshift": "available", "resource-groups": "available", "resourcegroupstaggingapi": "available", "route53": "available", "route53resolver": "available", "s3": "available", "s3control": "available", "secretsmanager": "available", "ses": "available", "sns": "available", "sqs": "available", "ssm": "available", "stepfunctions": "available", "sts": "available", "support": "available", "swf": "available", "transcribe": "available"}, "version": "2.0.0.dev"}
  ```

  Note: use the localhost:4566/_localstack/health route to access the LocalStack health check endpoint.

- Install the AWS CLI, which we'll use in the next step to run commands against the local AWS services.
- Make the local-aws-setup.sh script executable and try running it. It should be able to create the S3 bucket and DynamoDB table:

  ```bash
  $ chmod +x ./scripts/local-aws-setup.sh
  $ docker compose up -d
  $ ./scripts/local-aws-setup.sh
  Setting AWS environment variables for LocalStack
  AWS_ACCESS_KEY_ID=test
  AWS_SECRET_ACCESS_KEY=test
  AWS_SESSION_TOKEN=test
  AWS_DEFAULT_REGION=us-east-1
  Waiting for LocalStack S3...
  LocalStack S3 Ready
  Creating LocalStack S3 bucket: fragments
  {
      "Location": "/fragments"
  }
  Creating DynamoDB-Local DynamoDB table: fragments
  {
      "TableDescription": {
          "AttributeDefinitions": [
              { "AttributeName": "ownerId", "AttributeType": "S" },
              { "AttributeName": "id", "AttributeType": "S" }
          ],
          "TableName": "fragments",
          "KeySchema": [
              { "AttributeName": "ownerId", "KeyType": "HASH" },
              { "AttributeName": "id", "KeyType": "RANGE" }
          ],
          "TableStatus": "ACTIVE",
          "CreationDateTime": "2022-03-22T11:13:15.952000-04:00",
          "ProvisionedThroughput": {
              "LastIncreaseDateTime": "1969-12-31T19:00:00-05:00",
              "LastDecreaseDateTime": "1969-12-31T19:00:00-05:00",
              "NumberOfDecreasesToday": 0,
              "ReadCapacityUnits": 10,
              "WriteCapacityUnits": 5
          },
          "TableSizeBytes": 0,
          "ItemCount": 0,
          "TableArn": "arn:aws:dynamodb:ddblocal:000000000000:table/fragments"
      }
  }
  ```
- Now, you are good to send requests!

  You can use curl or any HTTP client to interact with the services running locally. For example, to create a fragment using curl (an optional AWS CLI sanity check for the stored data is shown after this list):

  ```bash
  curl -i -X POST -u [email protected]:fakepassword -H "Content-Type: text/plain" -d "This is a fragment" http://localhost:8080/v1/fragments
  ```

- To exit, run:

  ```bash
  docker compose down
  ```
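As an optional sanity check, you can point the AWS CLI at the local endpoints to confirm that fragment data and metadata were written. The bucket and table names (fragments) and the fake credentials follow the local-aws-setup.sh output above:

```bash
# Use the same fake credentials that local-aws-setup.sh exports
export AWS_ACCESS_KEY_ID=test AWS_SECRET_ACCESS_KEY=test AWS_SESSION_TOKEN=test AWS_DEFAULT_REGION=us-east-1

# List objects in the LocalStack S3 bucket (raw fragment data)
aws --endpoint-url=http://localhost:4566 s3api list-objects-v2 --bucket fragments

# Scan the DynamoDB Local table (fragment metadata)
aws --endpoint-url=http://localhost:8000 dynamodb scan --table-name fragments
```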
Use the docker-compose.local.yml file to set up the environment with MinIO as the S3 storage backend.
- Start your containers:

  ```bash
  cd fragments
  docker compose -f docker-compose.local.yml up -d
  ```

  Note: we are using a different filename for our docker-compose.yml, so we indicate that with the `-f` flag.

- Log in to the MinIO console (similar to the AWS S3 Console) by going to http://localhost:9001, and using the `MINIO_ROOT_USER` and `MINIO_ROOT_PASSWORD` values you entered in the docker-compose.local.yml file.
- Create a new bucket by clicking the Create Bucket button.
- Choose a name for your bucket, for example `fragments` (which we set as the default in docker-compose.local.yml above), and click Create Bucket.
- Add a file to your bucket by clicking the Upload button and choosing a file to upload.
- Look at the `minio/data` directory on your host; you should see a new folder with the same name as the bucket you just created, containing the file you uploaded. Anything you put in this bucket will get stored in this location (i.e., outside of the container).
- Add the `minio/` directory to your `.gitignore` file.
- Stop your containers:

  ```bash
  docker compose -f docker-compose.local.yml down
  ```

- Restart your containers:

  ```bash
  docker compose -f docker-compose.local.yml up -d
  ```

- Log in to the MinIO Console at http://localhost:9001 using the same username and password as before, and confirm that the bucket and object are still there.
You can now use S3 locally and have your data survive starting/stopping the containers. This is an ideal setup for local development, since you also get a console for viewing your data.
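If you prefer the command line to the web console, the AWS CLI can also talk to MinIO's S3-compatible API on port 9000. This is only a sketch: it assumes the fragments bucket created above and uses your MINIO_ROOT_USER and MINIO_ROOT_PASSWORD values as the credentials:

```bash
# Point the AWS CLI at MinIO's S3-compatible endpoint (credentials come from docker-compose.local.yml)
export AWS_ACCESS_KEY_ID=<MINIO_ROOT_USER> AWS_SECRET_ACCESS_KEY=<MINIO_ROOT_PASSWORD>

# List buckets, then list the contents of the fragments bucket
aws --endpoint-url=http://localhost:9000 s3 ls
aws --endpoint-url=http://localhost:9000 s3 ls s3://fragments
```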
The local services are available at the following URLs:

- Fragments API: http://localhost:8080
- DynamoDB Local: http://localhost:8000
- LocalStack Healthcheck: http://localhost:4566/_localstack/health (use this URL to check the status of the LocalStack services)
- MinIO:
  - API: http://localhost:9000
  - Web Console: http://localhost:9001 (for managing MinIO buckets and objects in the web UI)
## API Usage and Supported Features

| Route | Method | Description |
|---|---|---|
| `/` | GET | Health check route to confirm the API is running. |
| `v1/fragments` | POST | Creates a new fragment with the provided fragment data in the request body and the fragment type in the Content-Type header. |
| `v1/fragments` | GET | Retrieves all fragments belonging to the current user (i.e., the authenticated user). The response includes a fragments array of ids. |
| `v1/fragments/?expand=1` | GET | Retrieves all fragments belonging to the current user (i.e., the authenticated user), expanded to include a full representation of the fragments' metadata (i.e., not just the id). |
| `v1/fragments/:id` | GET | Gets an authenticated user's fragment data (i.e., raw binary data) with the given id. |
| `v1/fragments/:id.ext` | GET | Converts the fragment data to the type associated with the extension (ext refers to the extension, e.g., .txt or .png). |
| `v1/fragments/:id` | PUT | Updates the data for the authenticated user's existing fragment with the specified id (the Content-Type should be the same as the existing fragment's type). |
| `v1/fragments/:id/info` | GET | Gets the metadata for one of the authenticated user's existing fragments with the specified id. |
| `v1/fragments/:id` | DELETE | Deletes one of the authenticated user's existing fragments with the given id. |

Note: routes starting with `/v1` require user authentication.
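To illustrate the difference between the two list routes: a plain GET returns a fragments array of id strings, while ?expand=1 returns full metadata objects. The sketch below shows one possible expanded response; the id and ownerId keys match the DynamoDB schema shown earlier, but the remaining field names and the status wrapper are illustrative assumptions rather than a documented contract:

```json
{
  "status": "ok",
  "fragments": [
    {
      "id": "b9e7a264-630f-436d-a785-27f30233faea",
      "ownerId": "11d4c22e42c8f61feaba154683dea407b101cfd90e7ecfcbc8a46fbb8ff2c56f",
      "created": "2024-01-01T00:00:00.000Z",
      "updated": "2024-01-01T00:05:00.000Z",
      "type": "text/plain",
      "size": 18
    }
  ]
}
```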
The API supports creating fragments in the following MIME types:
- text/plain
- text/plain; charset=utf-8
- text/markdown
- text/html
- text/csv
- application/json
- application/yaml
- image/png
- image/jpeg
- image/webp
- image/avif
- image/gif
Note: we store the entire Content-Type (i.e., with the charset if present), but also allow using only the media type prefix (e.g., text/html vs. text/html; charset=iso-8859-1).
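As an example of the charset handling described in the note above, you can create a fragment with a full Content-Type (including the charset); the credentials and localhost URL below follow the local setup used elsewhere in this README:

```bash
# Create an HTML fragment; the full Content-Type (with charset) is stored as the fragment's type
curl -i -X POST -u [email protected]:fakepassword \
  -H "Content-Type: text/html; charset=utf-8" \
  -d "<h1>Hello</h1>" \
  http://localhost:8080/v1/fragments
```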
This is the current list of valid conversions for each fragment type (others may be added in the future):
| Type | Valid Conversion Extensions |
|---|---|
| text/plain | .txt |
| text/markdown | .md, .html, .txt |
| text/html | .html, .txt |
| text/csv | .csv, .txt, .json |
| application/json | .json, .yaml, .yml, .txt |
| application/yaml | .yaml, .txt |
| image/png | .png, .jpg, .webp, .gif, .avif |
| image/jpeg | .png, .jpg, .webp, .gif, .avif |
| image/webp | .png, .jpg, .webp, .gif, .avif |
| image/avif | .png, .jpg, .webp, .gif, .avif |
| image/gif | .png, .jpg, .webp, .gif, .avif |
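For example, per the table above, a text/csv fragment can be retrieved as JSON simply by appending the .json extension to its id; the host and id below are placeholders:

```bash
# Retrieve an existing text/csv fragment converted to JSON
curl -i -u [email protected]:fakepassword http://localhost:8080/v1/fragments/<id>.json
```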
### Curl Command Examples

- To create a new text/plain fragment (POST v1/fragments):

  ```bash
  curl -i -X POST -u [email protected]:fakepassword -H "Content-Type: text/plain" -d "This is a fragment" http(s)://fragments-api.com/v1/fragments
  ```

- To POST a binary file using `--data-binary <filename>` (POST v1/fragments):

  ```bash
  curl -i -X POST -u [email protected]:fakepassword -H "Content-Type: image/png" --data-binary @filepath http(s)://fragments-api.com/v1/fragments
  ```
- To retrieve all fragments (GET v1/fragments):

  ```bash
  curl -i -u [email protected]:fakepassword http(s)://fragments-api.com/v1/fragments
  ```

- To retrieve all fragments expanded (GET v1/fragments/?expand=1):

  ```bash
  curl -i -u [email protected]:fakepassword http(s)://fragments-api.com/v1/fragments?expand=1
  ```

- To retrieve a specific fragment by id (GET v1/fragments/:id):

  ```bash
  curl -i -u [email protected]:fakepassword http(s)://fragments-api.com/v1/fragments/<id>
  ```

- To retrieve a fragment's data and store it locally (GET v1/fragments/:id):

  ```bash
  curl -u [email protected]:fakepassword -o filepath http(s)://fragments-api.com/v1/fragments/<id>
  ```

  Note: when you retrieve and save an image, don't include -i; it would add the response headers to the output, so the saved file would no longer be the same binary.
- To convert a fragment to HTML (GET v1/fragments/:id.ext):

  ```bash
  curl -i -u [email protected]:fakepassword http(s)://fragments-api.com/v1/fragments/<id>.html
  ```

- To update a text/plain fragment's data by id (PUT v1/fragments/:id):

  ```bash
  curl -i -X PUT -u [email protected]:fakepassword -H "Content-Type: text/plain" -d "This is updated data" http(s)://fragments-api.com/v1/fragments/<id>
  ```

- To get a fragment's metadata by id (GET v1/fragments/:id/info):

  ```bash
  curl -i -u [email protected]:fakepassword http(s)://fragments-api.com/v1/fragments/<id>/info
  ```

- To delete a specific fragment by id (DELETE v1/fragments/:id):

  ```bash
  curl -i -X DELETE -u [email protected]:fakepassword http(s)://fragments-api.com/v1/fragments/<id>
  ```
## Fragments UI Testing Web App

You can find the UI testing repository for the API's front-end at: https://github.com/zlinzz/fragments-ui