
A High-Performance, Thread-Safe, Sharded Cache Implementation in Rust: An Analysis of LRU Eviction Under Concurrent Loads

Author: Navin Ruas, et al.
Date: January 5, 2026
Version: 1.1.0


1. Abstract

This project presents CacheLRU, a concurrent, in-memory caching system designed to mitigate lock contention in multi-threaded environments. By implementing a Sharded Lock Strategy combined with a Least Recently Used (LRU) eviction policy, the system aims to provide $O(1)$ amortized time complexity for insertion and retrieval operations while ensuring thread safety. Performance metrics suggest that horizontal partitioning (sharding) significantly reduces mutex wait times compared to global locking mechanisms, making this implementation suitable for high-throughput enterprise systems.

2. Introduction

2.1 Problem Statement

In concurrent computing, shared state management often becomes a bottleneck. Traditional caching implementations that guard all state with a single Mutex or RwLock suffer from Lock Contention: threads spend more time waiting to acquire the lock than performing useful computation. This phenomenon degrades system scalability, as predicted by Amdahl's Law.
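For reference, Amdahl's Law bounds the speedup attainable on $C$ cores when only a fraction $p$ of the work can run in parallel; the serialized critical section behind a global lock keeps $1 - p$ large and therefore caps scalability:

$$ S(C) = \frac{1}{(1 - p) + \frac{p}{C}} \leq \frac{1}{1 - p} $$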

2.2 Scope

This implementation focuses on:

  1. Concurrency Control: Utilizing parking_lot primitives for efficient thread synchronization.
  2. Memory Management: Implementing a strict LRU eviction policy to bound memory usage.
  3. Temporal Validity: Providing Time-To-Live (TTL) mechanisms with lazy expiration.
  4. Observability: Atomic metrics collection for runtime analysis.
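As an illustration of the observability item above, hit/miss counters can be maintained as relaxed atomics so that metrics collection never takes a lock. The sketch below is a minimal example; the struct and method names are assumptions, not the crate's actual API.

use std::sync::atomic::{AtomicU64, Ordering};

/// Illustrative lock-free metrics block (names are assumptions).
#[derive(Default)]
struct CacheMetrics {
    hits: AtomicU64,
    misses: AtomicU64,
}

impl CacheMetrics {
    /// Relaxed ordering is sufficient: the counters are monotonic and are
    /// only read for reporting, never used for synchronization.
    fn record_hit(&self) {
        self.hits.fetch_add(1, Ordering::Relaxed);
    }

    fn record_miss(&self) {
        self.misses.fetch_add(1, Ordering::Relaxed);
    }

    fn hit_ratio(&self) -> f64 {
        let hits = self.hits.load(Ordering::Relaxed) as f64;
        let misses = self.misses.load(Ordering::Relaxed) as f64;
        if hits + misses == 0.0 { 0.0 } else { hits / (hits + misses) }
    }
}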

3. Methodology

3.1 Architectural Design

The system employs a Sharded Architecture. The key space $\mathcal{K}$ is partitioned into $N$ distinct segments (shards), where $N$ is a power of two. The target shard $S_i$ for a given key $k$ is determined by:

$$ i = \text{Hash}(k) \pmod N $$

Each shard operates as an independent cache unit protected by its own Mutex. Under a single global lock, any two concurrent operations contend with probability $1$; with sharding, they contend only when their keys map to the same shard, which occurs with probability $\frac{1}{N}$ under a uniform hash distribution.
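A minimal sketch of the shard-selection step, assuming DefaultHasher (see Section 5.1) and a power-of-two shard count so that the modulo reduces to a bitmask; the function name is illustrative.

use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Map a key to a shard index. Because the shard count is a power of two,
/// the modulo reduces to a bitmask: Hash(k) & (N - 1).
fn shard_index<K: Hash>(key: &K, num_shards: usize) -> usize {
    debug_assert!(num_shards.is_power_of_two());
    let mut hasher = DefaultHasher::new();
    key.hash(&mut hasher);
    (hasher.finish() as usize) & (num_shards - 1)
}

fn main() {
    // Two distinct keys usually land on different shards, so threads
    // touching them contend with probability roughly 1/N.
    println!("{}", shard_index(&"user:42", 16));
    println!("{}", shard_index(&"user:43", 16));
}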

3.2 Data Structures

  • Storage: A HashMap is used for $O(1)$ key lookups.
  • Eviction: A generic EvictionPolicy trait governs the removal of entries. The default implementation uses a VecDeque (acting as a doubly-linked list equivalent) to maintain access order.
  • Entries: Data is encapsulated in a CacheEntry structure containing the value and an optional Instant for expiration.
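A minimal sketch of such an entry, assuming the expiration deadline is stored as an Option<Instant>; apart from the CacheEntry name, the field and method names are illustrative.

use std::time::{Duration, Instant};

/// Sketch of an entry as described above: the stored value plus an
/// optional expiration deadline used for lazy TTL checks.
struct CacheEntry<V> {
    value: V,
    expires_at: Option<Instant>, // None => the entry never expires
}

impl<V> CacheEntry<V> {
    fn new(value: V, ttl: Option<Duration>) -> Self {
        CacheEntry {
            value,
            expires_at: ttl.map(|d| Instant::now() + d),
        }
    }

    /// Lazy-expiration predicate: only evaluated when the entry is touched.
    fn is_expired(&self) -> bool {
        self.expires_at.map_or(false, |t| Instant::now() >= t)
    }
}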

3.3 Algorithms

  • Access ($Get$): Upon access, the item is moved to the back of the queue (most recently used). If its TTL has elapsed, the item is instead removed within the same locked operation and the lookup is treated as a miss (Lazy Expiration). A single-shard sketch of this flow follows the list.
  • Insertion ($Put$): New items are pushed to the back of the queue. If the shard's capacity is exceeded, the item at the front (least recently used) is evicted.
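The following single-shard sketch illustrates the Get/Put flow above using a HashMap plus VecDeque. It is a simplified illustration rather than the project's code: the EvictionPolicy trait is bypassed, the method names are assumptions, and repositioning a key in the VecDeque uses a linear scan (a genuine doubly-linked list or index-based list would make that step $O(1)$).

use std::collections::{HashMap, VecDeque};
use std::time::{Duration, Instant};

/// Simplified single-shard sketch of the Get/Put flow described above.
struct Shard<V> {
    map: HashMap<String, (V, Option<Instant>)>, // value + optional deadline
    order: VecDeque<String>,                    // front = LRU, back = MRU
    capacity: usize,
}

impl<V> Shard<V> {
    fn new(capacity: usize) -> Self {
        Shard { map: HashMap::new(), order: VecDeque::new(), capacity }
    }

    /// Move `key` to the back of the order queue (most recently used).
    /// Note: a linear scan; a doubly-linked list would make this O(1).
    fn touch(&mut self, key: &str) {
        if let Some(pos) = self.order.iter().position(|k| k == key) {
            let k = self.order.remove(pos).unwrap();
            self.order.push_back(k);
        }
    }

    fn get(&mut self, key: &str) -> Option<&V> {
        // Lazy expiration: an expired entry is removed on access
        // and the lookup is reported as a miss.
        let expired = match self.map.get(key) {
            Some((_, Some(deadline))) => Instant::now() >= *deadline,
            Some((_, None)) => false,
            None => return None,
        };
        if expired {
            self.map.remove(key);
            self.order.retain(|k| k != key);
            return None;
        }
        self.touch(key);
        self.map.get(key).map(|(v, _)| v)
    }

    fn put(&mut self, key: String, value: V, ttl: Option<Duration>) {
        let deadline = ttl.map(|d| Instant::now() + d);
        if self.map.insert(key.clone(), (value, deadline)).is_some() {
            self.touch(&key); // existing key: refresh its position
        } else {
            self.order.push_back(key);
            // Over capacity: evict the least recently used entry (front).
            if self.map.len() > self.capacity {
                if let Some(lru) = self.order.pop_front() {
                    self.map.remove(&lru);
                }
            }
        }
    }
}

fn main() {
    let mut shard = Shard::new(2);
    shard.put("a".into(), 1, None);
    shard.put("b".into(), 2, Some(Duration::from_secs(30)));
    let _ = shard.get("a");          // "a" becomes most recently used
    shard.put("c".into(), 3, None);  // capacity exceeded: "b" is evicted
    assert!(shard.get("b").is_none());
    assert_eq!(shard.get("a"), Some(&1));
}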

4. Results and Performance

4.1 Theoretical Complexity

Operation   Time Complexity (Amortized)   Space Complexity
Insert      $O(1)$                        $O(n)$
Lookup      $O(1)$                        $O(n)$
Evict       $O(1)$                        $O(1)$

Here $n$ denotes the number of cached entries, distinct from the shard count $N$ used in Section 3.

4.2 Empirical Benchmarks

Preliminary benchmarks using criterion indicate that, under high contention, write throughput scales approximately linearly with the shard count ($N$) until CPU saturation is reached.
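For context, a parameterized criterion harness that sweeps the shard count might look like the sketch below. This is not the project's bench suite: it inserts into a bare Vec<Mutex<HashMap>> from a single thread (assuming criterion and parking_lot as dev-dependencies), so it only illustrates the harness shape; a real contention benchmark would drive the cache from multiple threads.

use std::collections::HashMap;

use criterion::{criterion_group, criterion_main, BenchmarkId, Criterion};
use parking_lot::Mutex;

/// Sweep the shard count and measure single-threaded insert throughput.
fn bench_sharded_insert(c: &mut Criterion) {
    let mut group = c.benchmark_group("sharded_insert");
    for &n in &[1usize, 4, 16, 64] {
        group.bench_with_input(BenchmarkId::from_parameter(n), &n, |b, &n| {
            let shards: Vec<Mutex<HashMap<u64, u64>>> =
                (0..n).map(|_| Mutex::new(HashMap::new())).collect();
            let mut key = 0u64;
            b.iter(|| {
                key = key.wrapping_add(1);
                let idx = (key as usize) & (n - 1); // n is a power of two
                std::hint::black_box(shards[idx].lock().insert(key, key));
            });
        });
    }
    group.finish();
}

criterion_group!(benches, bench_sharded_insert);
criterion_main!(benches);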

5. Discussion

5.1 Interpretation

The decoupling of shards allows for "pseudo-lock-free" behavior for disjoint keys. However, the system relies heavily on the quality of the hash function (currently DefaultHasher). Poor hashing distribution could lead to "hot shards," negating the benefits of partitioning.

5.2 Limitations

  • Lazy Expiration: Expired items consume memory until they are accessed or the cache fills up.
  • Scan Operations: Operations requiring iteration over the entire cache (e.g., clear, global metrics) require acquiring locks on all shards, which is expensive.
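As a sketch of that scan cost, a full clear must visit every shard and take each lock in turn; the function below is illustrative, not the crate's API.

use std::collections::HashMap;

use parking_lot::Mutex;

/// Clear every shard. Each shard is locked and cleared in turn, so the cost
/// is one lock acquisition per shard and the clear is not atomic across shards.
fn clear_all<K, V>(shards: &[Mutex<HashMap<K, V>>]) {
    for shard in shards {
        shard.lock().clear();
    }
}

fn main() {
    let shards: Vec<Mutex<HashMap<String, u64>>> =
        (0..4).map(|_| Mutex::new(HashMap::new())).collect();
    clear_all(&shards);
}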

6. Conclusion

The CacheLRU system demonstrates that sharded locking significantly outperforms global locking for concurrent cache workloads. The integration of the Strategy Pattern for eviction policies allows for future extensibility without architectural refactoring.

7. References

  1. Herlihy, M., & Shavit, N. (2012). The Art of Multiprocessor Programming. Morgan Kaufmann.
  2. Rust Language Team. (2021). The Rust Programming Language.
  3. Amanieu d'Antras. (2016). parking_lot: Compact and efficient synchronization primitives.

8. Development & Contribution

8.1 Setup

# Clone the repository
git clone https://github.com/navinBRuas/CacheLRU.git
cd CacheLRU

# Run test suite
cargo test

# Execute benchmarks
cargo bench

8.2 Standards

Please refer to GOVERNANCE.md and CONTRIBUTING.md for strict adherence to coding standards, including:

  • Type Safety: No unsafe blocks unless mathematically proven necessary.
  • Documentation: All public APIs must have docstrings describing arguments, returns, and complexity.

9. License

This project is licensed under the MIT License. See LICENSE for details.
