A microservice that provides time-related functionality, deployed on AWS infrastructure using Terraform.
This project consists of two main components:
- A time service application
- AWS infrastructure managed by Terraform
The infrastructure is deployed in AWS using Terraform, with state management in S3 and DynamoDB for state locking.
Before you begin, ensure you have the following installed:
- Terraform (>= 1.11.1)
- AWS CLI
- kubectl (if you need to interact with the Kubernetes cluster)
- Node.js (for local development)
- Docker (for containerized local development)
The infrastructure is deployed with the following architecture:
- A VPC with:
- 2 public subnets for internet-facing resources
- 2 private subnets for internal resources
- Load balancer deployed in public subnets for external access
- EKS nodes deployed in private subnets for security
- An EKS cluster deployed within the VPC
- EKS service resource to run the application container
- Node groups configured in private subnets
- S3 bucket for Terraform state management
- DynamoDB table for state locking
- State management details:
  - Bucket: `terraform-state-522814697098-us-east-2`
  - Key: `simple-time-service/terraform.tfstate`
  - Region: `us-east-2`
  - State locking is managed via the DynamoDB table `terraform-locks-522814697098`
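Given the values above, the S3 backend is presumably configured along these lines (a sketch based on the listed bucket, key, region, and lock table; the actual block in the `terraform/` directory may differ, and `encrypt = true` is an assumption):

```hcl
terraform {
  backend "s3" {
    bucket         = "terraform-state-522814697098-us-east-2"
    key            = "simple-time-service/terraform.tfstate"
    region         = "us-east-2"
    dynamodb_table = "terraform-locks-522814697098" # state locking
    encrypt        = true                           # assumption: encrypt state at rest
  }
}
```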
- Navigate to the app directory: `cd app`
- Install dependencies: `npm install`
- Start the application: `npm start`

The application will start on http://localhost:3000 by default.
- Build the Docker image: `docker build -t simple-time-service .`
- Run the container: `docker run -p 3000:3000 simple-time-service`

The application will be available at http://localhost:3000.
Once the application is running, you can test it by:

- Opening a web browser and navigating to http://localhost:3000
- Using curl: `curl http://localhost:3000`

The service will return the current time and your IP address.
To deploy the infrastructure, you need to authenticate with AWS. There are two ways to do this:

Option 1: Configure the AWS CLI

- Install the AWS CLI
- Run `aws configure`
- Enter your AWS Access Key ID, Secret Access Key, default region (us-east-2), and output format (json)

Option 2: Set the following environment variables:

- `export AWS_ACCESS_KEY_ID="your_access_key_id"`
- `export AWS_SECRET_ACCESS_KEY="your_secret_access_key"`
- `export AWS_DEFAULT_REGION="us-east-2"`
- Navigate to the terraform directory: `cd terraform`
- Initialize Terraform: `terraform init`
- Review the planned changes: `terraform plan`
- Apply the infrastructure: `terraform apply`
After the infrastructure is deployed, Terraform will output the load balancer endpoint. You can access the application using:

`http://<load-balancer-endpoint>`

For example, if the load balancer endpoint is `my-loadbalancer-1234567890.us-east-2.elb.amazonaws.com`, you would access the application at http://my-loadbalancer-1234567890.us-east-2.elb.amazonaws.com. You can also get the load balancer endpoint at any time by running:

`terraform output load_balancer_endpoint`
Note: The load balancer may take 2-5 minutes to become fully operational after deployment. During this time, you might receive connection errors or timeouts. Please wait a few minutes and try accessing the endpoint again.
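For reference, the `load_balancer_endpoint` output queried above presumably exposes the load balancer's DNS name, roughly like this (the resource address `aws_lb.app` is a hypothetical placeholder, not necessarily the name used in the repo):

```hcl
output "load_balancer_endpoint" {
  description = "Public DNS name of the application load balancer"
  value       = aws_lb.app.dns_name # hypothetical resource address
}
```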
The project includes a GitHub Actions workflow that automates the deployment process. The pipeline consists of three main jobs:
- Docker Image Build & Push
  - Builds the application Docker image
  - Pushes the image to Docker Hub
  - Tags the image with version v1.0.0
- Terraform Plan
  - Initializes Terraform
  - Validates the configuration
  - Creates and saves a plan
  - Uploads the plan as an artifact
- Terraform Apply
  - Downloads the saved plan
  - Applies the infrastructure changes
  - Only runs on the main branch or via manual trigger

The pipeline can be triggered in two ways:

- Automatic: on push to the main branch
- Manual: through the GitHub Actions interface
The pipeline requires the following secrets to be configured in your GitHub repository:
- `DOCKERHUB_USERNAME`: Your Docker Hub username
- `DOCKERHUB_TOKEN`: Your Docker Hub access token
- `AWS_ACCESS_KEY_ID`: AWS access key
- `AWS_SECRET_ACCESS_KEY`: AWS secret key
- Ensure all required secrets are configured in your GitHub repository
- Push changes to the main branch to trigger automatic deployment
- Or manually trigger the workflow through GitHub Actions interface
- Navigate to the terraform directory: `cd terraform`
- Review what will be destroyed: `terraform plan -destroy`
- Destroy the infrastructure: `terraform destroy`
Warning: This will permanently delete all AWS resources created by Terraform.
The project includes a separate GitHub Actions workflow for infrastructure destruction.
- Navigate to the GitHub Actions tab in your repository
- Select the "Destroy Terraform" workflow
- Click "Run workflow" to trigger the destruction process
Note: The destruction workflow requires the same AWS credentials as the deployment workflow.
If you encounter issues:
- Check that your AWS credentials are properly configured
- Verify you have the necessary IAM permissions
- Ensure the S3 bucket and DynamoDB table exist
- Check Terraform version compatibility