This project is a highly scalable financial transaction system designed for high throughput, strong consistency, and heavy concurrent user activity. While the application supports only basic financial operations (account creation, deposits, withdrawals, and transfers), the primary objective is to study and implement production-grade scalability, concurrency control, and fault-tolerant system design.
The system leverages PostgreSQL with PgBouncer, Redis Cluster, Apache Kafka, and Nginx to achieve over 3600 requests per second with 100% transactional correctness under stress testing.
This project serves as a distributed systems case study, demonstrating how real-world financial systems maintain correctness under scale.
Financial systems operate under strict correctness and reliability constraints:
- Account balances must never become inconsistent
- Transactions must not be lost or reordered
- Concurrent operations must not lead to race conditions
- The system must scale horizontally without compromising correctness
Naive designs that directly update databases from multiple concurrent requests often suffer from:
- Database connection exhaustion
- Lost updates and race conditions
- Inconsistent balances under high concurrency
- Poor fault tolerance
This project was built to explore how these challenges are solved in real-world systems using queues, partitioning, caching, and controlled concurrency.
- High throughput under heavy concurrent load
- Strict sequential processing of transactions per user/account
- Strong consistency for balance updates
- Horizontal scalability
- Fault tolerance and graceful degradation
- Clear separation of responsibilities across system components
- Nginx – Load balances incoming traffic across multiple application servers
- Node.js Application Servers – Validate requests and publish transactions
- Apache Kafka – Enforces strict ordering of transactions per user/account
- PostgreSQL + PgBouncer – Durable storage with efficient connection pooling
- Redis Cluster – Low-latency caching and session management
Financial transactions require ACID guarantees, particularly for balance updates. PostgreSQL provides strong transactional semantics, while PgBouncer prevents connection exhaustion by pooling database connections efficiently.
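As a minimal sketch of what such an update looks like (assuming the `pg` client; the table, columns, and credentials are illustrative, and the connection points at PgBouncer's default port 6432 rather than at PostgreSQL directly):

```ts
import { Pool } from "pg";

// Connect through PgBouncer so application connections are pooled
// instead of exhausting PostgreSQL's own connection slots.
const pool = new Pool({
  connectionString: "postgres://app:secret@localhost:6432/microfin",
});

// Illustrative transfer: both balance updates commit or roll back together.
async function transfer(from: string, to: string, amount: number): Promise<void> {
  const client = await pool.connect();
  try {
    await client.query("BEGIN");
    // Debit only if funds are sufficient; the row stays locked until COMMIT.
    const debit = await client.query(
      "UPDATE accounts SET balance = balance - $1 WHERE id = $2 AND balance >= $1",
      [amount, from]
    );
    if (debit.rowCount !== 1) throw new Error("insufficient funds");
    await client.query(
      "UPDATE accounts SET balance = balance + $1 WHERE id = $2",
      [amount, to]
    );
    await client.query("COMMIT");
  } catch (err) {
    await client.query("ROLLBACK");
    throw err;
  } finally {
    client.release();
  }
}
```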
NoSQL databases were avoided because they typically trade strong consistency for availability and scalability, which is unacceptable for financial systems where correctness is critical.
Kafka was chosen because it provides:
- Guaranteed ordering within a partition
- Horizontal scalability through partitions
- Fault tolerance via broker replication
Each transaction is routed to a Kafka partition based on a hash of the user/account ID, ensuring that all transactions for a given user are processed sequentially.
Kafka producers are configured with idempotency enabled, ensuring that duplicate messages are not written to Kafka even in the presence of retries or transient network failures. This provides exactly-once message production semantics per producer session and preserves ordering within each partition.
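A minimal producer sketch, assuming the kafkajs client (broker address and topic name are illustrative):

```ts
import { Kafka } from "kafkajs";

const kafka = new Kafka({ clientId: "app-server", brokers: ["localhost:9092"] });

// idempotent: true lets the broker deduplicate producer retries;
// maxInFlightRequests: 1 keeps retried batches from reordering.
const producer = kafka.producer({ idempotent: true, maxInFlightRequests: 1 });

export async function publishTransaction(accountId: string, tx: object): Promise<void> {
  await producer.connect(); // resolves immediately if already connected
  await producer.send({
    topic: "transactions", // illustrative topic name
    messages: [
      {
        // Messages with the same key always hash to the same partition,
        // which is what serializes all of one account's transactions.
        key: accountId,
        value: JSON.stringify(tx),
      },
    ],
  });
}
```

Keying by account ID is what ties idempotence to ordering: the broker deduplicates retries, and the key pins every retry to the same partition.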
Redis-based queues were avoided because they lack Kafka's partitioned ordering and consumer-group semantics at scale, offer weaker durability guarantees, and can introduce single points of failure.
Redis provides sub-millisecond access times and is used to cache frequently accessed data and manage user sessions. A clustered setup with replicas ensures high availability and scalability beyond what a single-node cache could provide.
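A sketch of a cache-aside read against the cluster, assuming the ioredis client (node addresses, key scheme, and TTL are illustrative):

```ts
import { Cluster } from "ioredis";

// Connect to any subset of cluster nodes; ioredis discovers the rest.
const cache = new Cluster([
  { host: "192.168.1.2", port: 6379 },
  { host: "192.168.1.4", port: 6379 },
]);

// Cache a balance with a short TTL; PostgreSQL remains the source of truth.
async function getBalance(
  accountId: string,
  loadFromDb: () => Promise<string>
): Promise<string> {
  const key = `balance:${accountId}`; // illustrative key scheme
  const cached = await cache.get(key);
  if (cached !== null) return cached;

  const fresh = await loadFromDb();
  await cache.set(key, fresh, "EX", 30); // expire after 30 seconds
  return fresh;
}
```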
Nginx efficiently distributes traffic across multiple application servers and handles a large number of concurrent connections with minimal overhead, allowing application servers to focus solely on business logic.
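For illustration, a minimal upstream configuration of this shape (server addresses, ports, and the balancing policy are assumptions, not the project's actual config):

```nginx
# Illustrative load-balancing config across four app servers.
upstream app_servers {
    least_conn;              # route to the server with the fewest active connections
    server 127.0.0.1:3000;
    server 127.0.0.1:3001;
    server 127.0.0.1:3002;
    server 127.0.0.1:3003;
}

server {
    listen 80;
    location / {
        proxy_pass http://app_servers;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```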
A key challenge in financial systems is ensuring that multiple concurrent requests from the same user do not corrupt system state.
If a user rapidly clicks an action (e.g., “Deposit”) or retries a request, multiple logically similar requests may reach the backend concurrently. Each request is treated as a distinct operation.
- Every transaction is associated with a user ID / account number
- A hash of this identifier determines the Kafka partition
- Each partition is consumed by exactly one worker
- All transactions for a given user are processed strictly sequentially
This design guarantees correctness by eliminating race conditions without relying on database-level locks. Even when multiple similar requests are generated (e.g., due to double clicks), they are executed deterministically and sequentially.
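A worker sketch, again assuming kafkajs: each consumer in the group owns its partitions exclusively, and `eachMessage` is invoked one message at a time per partition, so per-account order holds without any explicit locking:

```ts
import { Kafka } from "kafkajs";

const kafka = new Kafka({ clientId: "worker-1", brokers: ["localhost:9092"] });
const consumer = kafka.consumer({ groupId: "transaction-workers" });

async function main(): Promise<void> {
  await consumer.connect();
  await consumer.subscribe({ topics: ["transactions"] });

  await consumer.run({
    // kafkajs delivers messages from a single partition one at a time,
    // so every account's transactions are applied strictly in order.
    eachMessage: async ({ message }) => {
      const tx = JSON.parse(message.value!.toString());
      await applyTransaction(tx); // e.g. the ACID balance update shown earlier
    },
  });
}

// Placeholder for the database step; the real worker runs the SQL transaction.
async function applyTransaction(tx: unknown): Promise<void> {
  /* ... */
}

main().catch(console.error);
```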
- Client sends a transaction request
- Nginx routes the request to an application server
- The server validates the request and publishes it to Kafka
- Kafka assigns the message to a partition using user/account hashing
- A worker consumes messages sequentially from the partition
- PostgreSQL updates occur within ACID transactions
- Redis is updated as a performance optimization layer
- A response is returned to the client
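Putting the validation and publish steps together, an application-server handler might look like the following (Express is assumed, the producer module path is hypothetical, and validation is simplified):

```ts
import express from "express";
import { publishTransaction } from "./kafka/producer"; // hypothetical module path

const app = express();
app.use(express.json());

// Simplified deposit handler: validate, enqueue, acknowledge.
app.post("/api/deposit/", async (req, res) => {
  try {
    const { accountNumber, amount } = req.body ?? {};
    if (typeof accountNumber !== "string" || !(Number(amount) > 0)) {
      return res.status(400).json({ error: "invalid request" });
    }
    // Publishing keyed by account number hands ordering off to Kafka.
    await publishTransaction(accountNumber, { type: "deposit", accountNumber, amount });
    // Acknowledge acceptance; a worker applies the balance change.
    res.status(202).json({ status: "queued" });
  } catch {
    res.status(500).json({ error: "failed to enqueue" });
  }
});

app.listen(3000);
```

Note this sketch acknowledges with 202 before the worker commits; whether the real handlers wait for the worker's result is an application-level choice.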
- Kafka is configured with an equal number of partitions and workers (see the topic-creation sketch below)
- Transactions are hashed by account number to preserve ordering
- Kafka runs with 3 brokers for fault tolerance
- Redis runs as a 9-node cluster (3 masters, 2 replicas per master)
- PostgreSQL uses PgBouncer for efficient connection pooling
For development, the main server and workers share the same environment to reduce unnecessary complexity.
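As a sketch of pinning the partition count to the worker count at setup time (kafkajs admin API; the topic name and counts are illustrative):

```ts
import { Kafka } from "kafkajs";

const kafka = new Kafka({ clientId: "setup", brokers: ["localhost:9092"] });

// Create the transactions topic with one partition per worker so that
// each worker owns exactly one partition.
async function createTopic(numWorkers: number): Promise<void> {
  const admin = kafka.admin();
  await admin.connect();
  await admin.createTopics({
    topics: [
      {
        topic: "transactions",     // illustrative topic name
        numPartitions: numWorkers, // one partition per worker
        replicationFactor: 3,      // matches the 3-broker setup
      },
    ],
  });
  await admin.disconnect();
}
```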
- Node.js & npm
- Docker & Docker Compose
Clone the repository and install dependencies:

git clone https://github.com/plebsicle/microfin.git
cd microfin
npm install

Start PostgreSQL and PgBouncer:

cd database
docker-compose up -d

Start the Redis cluster containers, then initialize the cluster:

cd ../redis
docker-compose up -d
redis-cli --cluster create 192.168.1.2:6379 192.168.1.4:6379 192.168.1.5:6379 192.168.1.6:6379 192.168.1.7:6379 192.168.1.8:6379 192.168.1.9:6379 192.168.1.10:6379 192.168.1.11:6379 --cluster-replicas 2

Start Kafka:

cd ../kafka
docker-compose up -d

Start Nginx:

cd ../nginx
docker-compose up -d

Finally, start the application servers from the project root, each in its own terminal:

cd ..
npm run dev
npm run dev1
npm run dev2
npm run dev3

- Users can perform basic financial transactions
- The system guarantees correctness under concurrent load
- Traffic is distributed efficiently across servers
- PostgreSQL and PgBouncer configuration: `database/db.ts`
- Kafka and Redis configuration: `config/`
The system was stress-tested using K6 with scenarios including:
- Account creation
- User sign-ins
- Deposits, withdrawals, and transfers
All tests achieved a 100% success rate with no data inconsistencies.
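For reference, a trimmed K6 scenario of this kind (the endpoint, payload shape, and load profile are illustrative, not the exact test scripts):

```javascript
import http from "k6/http";
import { check } from "k6";

export const options = {
  vus: 200,        // concurrent virtual users (illustrative)
  duration: "2m",
};

export default function () {
  // Hypothetical payload; the real schema lives in the application code.
  const res = http.post(
    "http://localhost/api/deposit/",
    JSON.stringify({ accountNumber: "ACC-1001", amount: 50 }),
    { headers: { "Content-Type": "application/json" } }
  );
  check(res, { "deposit succeeded": (r) => r.status === 200 });
}
```

The headline numbers from these runs are summarized below.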
| Metric | Value |
|---|---|
| Requests Per Second | 3626.18/s |
| Average Response Time | 515.29 ms |
| Account Creation Duration | Avg: 553.57 ms, p95: 1048.81 ms |
| Deposit Duration | Avg: 624.92 ms, p95: 1230.79 ms |
| Transfer Duration | Avg: 558.36 ms, p95: 1131.11 ms |
| Withdraw Duration | Avg: 344.32 ms, p95: 773.48 ms |
Prometheus and Grafana are used to monitor application-level server metrics, including request rate, latency, CPU usage, and memory consumption.
Infrastructure-level metrics for PostgreSQL, Kafka, and Redis are not instrumented in the current version of the system.
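Instrumentation along these lines (a sketch assuming prom-client with Express; metric and label names are illustrative):

```ts
import express from "express";
import client from "prom-client";

const app = express();
client.collectDefaultMetrics(); // CPU, memory, event-loop lag, etc.

// Illustrative request-latency histogram, labeled by route and status.
const httpDuration = new client.Histogram({
  name: "http_request_duration_seconds",
  help: "HTTP request latency",
  labelNames: ["route", "status"],
});

app.use((req, res, next) => {
  const end = httpDuration.startTimer();
  res.on("finish", () => end({ route: req.path, status: String(res.statusCode) }));
  next();
});

// Prometheus scrapes this endpoint.
app.get("/metrics", async (_req, res) => {
  res.set("Content-Type", client.register.contentType);
  res.end(await client.register.metrics());
});
```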
The system exposes the following API endpoints:

- /api/signup/
- /api/signin/
- /api/accountGeneration/
- /api/deposit/
- /api/withdraw/
- /api/transfer/
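For example, a transfer call might look like this (the request body shown is hypothetical; consult the route handlers for the actual schema):

```ts
// Hypothetical request shape; requires Node 18+ for the global fetch API.
const res = await fetch("http://localhost/api/transfer/", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    fromAccount: "ACC-1001",
    toAccount: "ACC-1002",
    amount: 25.0,
  }),
});
console.log(res.status, await res.json());
```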
- No distributed transaction protocol between PostgreSQL and Redis
- Observability is limited to application servers
- Infrastructure-level monitoring for Kafka, Redis, and PostgreSQL
- Horizontal database sharding
- Advanced failure recovery strategies