A High-Performance, Thread-Safe, Sharded Cache Implementation in Rust: An Analysis of LRU Eviction Under Concurrent Loads
Author: Navin Ruas, et al. Date: January 5, 2026 Version: 1.1.0
This project presents CacheLRU, a concurrent, in-memory caching system designed to mitigate lock contention in multi-threaded environments. By implementing a Sharded Lock Strategy combined with a Least Recently Used (LRU) eviction policy, the system aims to provide high throughput under concurrent load while keeping memory usage bounded.
In concurrent computing, shared state management often becomes a bottleneck. Traditional caching implementations that rely on a single `Mutex` or `RwLock` suffer from Lock Contention, where the time threads spend waiting for access permissions exceeds the time spent on actual computation. This phenomenon degrades system scalability, as predicted by Amdahl's Law.
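As a brief refresher, Amdahl's Law bounds the speedup attainable on $N$ threads when a fraction $s$ of the work (here, the serialized critical section behind a global lock) cannot run in parallel:

$$S(N) = \frac{1}{s + (1 - s)/N} \le \frac{1}{s}$$

Sharding attacks $s$ directly by splitting the single critical section into many smaller, mostly independent ones.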
This implementation focuses on:
- Concurrency Control: Utilizing `parking_lot` primitives for efficient thread synchronization.
- Memory Management: Implementing a strict LRU eviction policy to bound memory usage.
- Temporal Validity: Providing Time-To-Live (TTL) mechanisms with lazy expiration.
- Observability: Atomic metrics collection for runtime analysis.
The system employs a Sharded Architecture: the key space is partitioned across $N$ independent shards, with a key $k$ routed to the shard at index $\mathrm{hash}(k) \bmod N$.
Each shard operates as an independent cache unit protected by its own `Mutex`. For uniformly distributed keys, this reduces the probability of two threads contending for the same lock from certainty under a single global lock to approximately $1/N$.
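A minimal sketch of this routing step (the helper name `shard_index` is illustrative rather than the crate's actual API; `DefaultHasher` matches the hash function named in the analysis below):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Illustrative routing step: map a key to one of `num_shards` shards.
/// A power-of-two shard count would allow a cheap bitmask instead of `%`.
fn shard_index<K: Hash>(key: &K, num_shards: usize) -> usize {
    let mut hasher = DefaultHasher::new();
    key.hash(&mut hasher);
    (hasher.finish() as usize) % num_shards
}
```

Within each shard, the core components are: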
- Storage: A `HashMap` is used for $O(1)$ key lookups.
- Eviction: A generic `EvictionPolicy` trait governs the removal of entries. The default implementation uses a `VecDeque` (acting as a doubly-linked-list equivalent) to maintain access order.
- Entries: Data is encapsulated in a `CacheEntry` structure containing the value and an optional `Instant` for expiration (see the sketch below).
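A minimal sketch of how these pieces might fit together, assuming `parking_lot::Mutex` per the concurrency goals above; field and method names are guesses, not the crate's confirmed API:

```rust
use std::collections::{HashMap, VecDeque};
use std::time::Instant;
use parking_lot::Mutex;

/// A stored value together with its optional expiration deadline.
struct CacheEntry<V> {
    value: V,
    expires_at: Option<Instant>, // None = no TTL
}

/// Strategy Pattern hook: decides which key to remove when a shard
/// is full. (Method names here are guesses, not the crate's API.)
trait EvictionPolicy<K> {
    fn record_access(&mut self, key: &K);
    fn select_victim(&mut self) -> Option<K>;
}

/// One shard: an independent LRU unit guarded by its own lock.
struct Shard<K, V> {
    map: HashMap<K, CacheEntry<V>>, // O(1) key lookups
    order: VecDeque<K>,             // access order: front = LRU
    capacity: usize,
}

/// The cache itself is a fixed set of shards.
struct CacheLRU<K, V> {
    shards: Vec<Mutex<Shard<K, V>>>,
}
```

Operations on a shard then follow the LRU discipline: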
- Access (`get`): Upon access, the item is moved to the back of the queue (most recently used). If the TTL has elapsed, the item is atomically removed (Lazy Expiration).
- Insertion (`put`): New items are pushed to the back. If the shard capacity is exceeded, the item at the front (least recently used) is evicted (see the sketch below).
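A sketch of both paths under those rules, a simplification of whatever the crate actually does:

```rust
use std::collections::{HashMap, VecDeque};
use std::hash::Hash;
use std::time::Instant;

// Types as in the previous sketch, compressed for brevity.
struct CacheEntry<V> { value: V, expires_at: Option<Instant> }
struct Shard<K, V> { map: HashMap<K, CacheEntry<V>>, order: VecDeque<K>, capacity: usize }

impl<K: Hash + Eq + Clone, V: Clone> Shard<K, V> {
    /// Lookup with lazy expiration and move-to-back on hit.
    fn get(&mut self, key: &K) -> Option<V> {
        let expired = match self.map.get(key) {
            Some(entry) => entry.expires_at.map_or(false, |t| t <= Instant::now()),
            None => return None,
        };
        if expired {
            // TTL elapsed: drop the entry under the same shard lock.
            self.map.remove(key);
            self.order.retain(|k| k != key);
            return None;
        }
        // Mark as most recently used. Note: repositioning inside a
        // VecDeque is an O(n) scan; a true linked list avoids this.
        self.order.retain(|k| k != key);
        self.order.push_back(key.clone());
        self.map.get(key).map(|entry| entry.value.clone())
    }

    /// Insert at the back; evict from the front when over capacity.
    fn put(&mut self, key: K, value: V, expires_at: Option<Instant>) {
        if self.map.insert(key.clone(), CacheEntry { value, expires_at }).is_some() {
            // Key already existed: remove its stale position first.
            self.order.retain(|k| k != &key);
        }
        self.order.push_back(key);
        while self.map.len() > self.capacity {
            if let Some(lru) = self.order.pop_front() {
                self.map.remove(&lru); // least recently used goes first
            }
        }
    }
}
```

An index-aware structure (or the intrusive linked list used by production LRU crates) would restore $O(1)$ repositioning; the `VecDeque` keeps the sketch simple.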
| Operation | Time Complexity (Amortized) | Space Complexity |
|---|---|---|
| Insert | $O(1)$ | $O(1)$ |
| Lookup | $O(1)$ | $O(1)$ |
| Evict | $O(1)$ | $O(1)$ |
Preliminary benchmarks using `criterion` indicate that increasing the shard count improves throughput under contended multi-threaded workloads, as threads operating on disjoint keys rarely block one another.
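A hedged sketch of what such a `criterion` benchmark might look like; the `CacheLRU::new(shards, capacity)` constructor and the `put`/`get` signatures are assumptions based on the design above, not confirmed API:

```rust
use criterion::{criterion_group, criterion_main, Criterion};

fn bench_shard_counts(c: &mut Criterion) {
    for shards in [1usize, 4, 16] {
        // Hypothetical constructor and methods; adjust to the real API.
        let cache = cache_lru::CacheLRU::new(shards, 10_000);
        c.bench_function(&format!("get_put/{shards}_shards"), |b| {
            b.iter(|| {
                cache.put(42u64, "value", None);
                cache.get(&42u64)
            });
        });
    }
}

criterion_group!(benches, bench_shard_counts);
criterion_main!(benches);
```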
The decoupling of shards allows for "pseudo-lock-free" behavior for disjoint keys. However, the system relies heavily on the quality of the hash function (currently `DefaultHasher`). Poor hashing distribution could lead to "hot shards," negating the benefits of partitioning.
- Lazy Expiration: Expired items consume memory until they are accessed or the cache fills up.
- Scan Operations: Operations requiring iteration over the entire cache (e.g., `clear`, global metrics) require acquiring locks on all shards, which is expensive (see the sketch below).
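For instance, a whole-cache `clear` might look like the following sketch, based on the assumed `shards: Vec<Mutex<Shard<K, V>>>` layout; locking shards one at a time in index order avoids deadlock between concurrent whole-cache operations:

```rust
use std::collections::{HashMap, VecDeque};
use parking_lot::Mutex;

// Types compressed from the earlier sketch.
struct Shard<K, V> { map: HashMap<K, V>, order: VecDeque<K> }
struct CacheLRU<K, V> { shards: Vec<Mutex<Shard<K, V>>> }

impl<K, V> CacheLRU<K, V> {
    /// Empties every shard, locking one shard at a time in index
    /// order so concurrent whole-cache operations cannot deadlock.
    fn clear(&self) {
        for shard in &self.shards {
            let mut guard = shard.lock();
            guard.map.clear();
            guard.order.clear();
        }
    }
}
```

Holding all guards simultaneously before clearing would give an atomic snapshot instead, at the price of blocking every shard at once.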
The CacheLRU system demonstrates that sharded locking significantly outperforms global locking for concurrent cache workloads. The integration of the Strategy Pattern for eviction policies allows for future extensibility without architectural refactoring.
- Herlihy, M., & Shavit, N. (2012). The Art of Multiprocessor Programming. Morgan Kaufmann.
- Rust Language Team. (2021). The Rust Programming Language.
- d'Antras, A. (2016). `parking_lot`: Compact and efficient synchronization primitives.
```bash
# Clone the repository
git clone https://github.com/navinBRuas/CacheLRU.git

# Run the test suite
cargo test

# Execute benchmarks
cargo bench
```

Please refer to GOVERNANCE.md and CONTRIBUTING.md for the project's coding standards, including:
- Type Safety: No `unsafe` blocks unless mathematically proven necessary.
- Documentation: All public APIs must have doc comments describing arguments, returns, and complexity (see the sketch below).
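For instance, a public method documented to this standard might look like the following (hypothetical signature, illustrative only):

```rust
use std::marker::PhantomData;

pub struct CacheLRU<K, V> {
    _marker: PhantomData<(K, V)>,
}

impl<K, V: Clone> CacheLRU<K, V> {
    /// Returns a clone of the value stored under `key`, marking it
    /// most recently used within its shard.
    ///
    /// # Arguments
    /// * `key` - the key to look up.
    ///
    /// # Returns
    /// `Some(value)` if the key is present and unexpired, else `None`.
    ///
    /// # Complexity
    /// Expected amortized O(1), plus one shard lock acquisition.
    pub fn get(&self, _key: &K) -> Option<V> {
        unimplemented!("illustrative doc-comment example only")
    }
}
```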
This project is licensed under the MIT License. See LICENSE for details.