Want to make your app faster without spending on extra infrastructure? Learn how you can do both with HA-store!
HA-store is a generic wrapper for your data queries. It features:
- Smart TLRU cache for 'hot' information
- Request coalescing, batching and retrying
- Insightful stats and events
- Lightweight, configurable, battle-tested
```
npm install ha-store
```
```js
const store = require('ha-store');

const itemStore = store({
  resolver: getItems, // Your resolver can be an async function or a function that returns a Promise
  uniqueParams: ['language'],
});

// Anywhere in your application
itemStore.get('123', { language: 'fr' })
  .then(item => { /* The item you requested */ });

itemStore.get(['123', '456'], { language: 'en' })
  .then(items => { /* All the items you requested */ });
```
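Here, `getItems` stands for the resolver being wrapped. A minimal sketch of what such a resolver could look like is shown below; the inline data lookup is a placeholder you would swap for your own API or database call.

```js
// Sketch of a resolver: receives the batched ids plus the unique params
// and returns a Promise for the matching records.
async function getItems(ids, params) {
  // Placeholder lookup; replace with your own API or database call.
  const rows = ids.map((id) => ({ id, language: params.language, name: `Item ${id}` }));
  // An array of objects carrying an `id` property works with the default responseParser,
  // as does a collection indexed by id.
  return rows;
}
```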
| Name | Required | Default | Description |
|---|---|---|---|
| resolver | true | - | The method to wrap, and how to interpret the returned data. Uses the format `function(ids, params)`. |
| responseParser | false | (system) | The method that formats the results from the resolver into an indexed collection. Accepts indexed collections or arrays of objects with an `id` property. Uses the format `function(response, requestedIds, params)`. |
| uniqueParams | false | `[]` | The list of parameters that, when passed, generate unique results, e.g. `'language'`, `'view'`, `'fields'`, `'country'`. These generate different combinations of cache keys. |
| timeout | false | `null` | The maximum time allowed for the resolver to resolve. |
| cache | false | `{ limit: 5000, ttl: 300000 }` | Caching options for the data: `limit` is the maximum number of records and `ttl` is the time-to-live for a record. |
| batch | false | `{ tick: 50, max: 100 }` | Batching options for the requests. |
| retry | false | `{ base: 5, step: 3, limit: 5000, curve: <function(progress, start, end)> }` | Retry options for the requests. |
*All time options are in milliseconds (ms).
*Scaling options are represented via an exponential curve, with `base` and `limit` being the two edge values and `step` being the number of events over that curve.
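For reference, a store configured with explicit cache, batch and retry options might look like the sketch below. The values are purely illustrative (the defaults are listed in the table above), `getItems` refers to the resolver sketched earlier, and the inline comments reflect one interpretation of these options rather than official documentation.

```js
const store = require('ha-store');

// Illustrative values only; see the options table above for the defaults.
const tunedStore = store({
  resolver: getItems,                       // resolver sketched earlier
  uniqueParams: ['language'],
  timeout: 1000,                            // give the resolver up to 1s to resolve
  cache: { limit: 5000, ttl: 300000 },      // keep up to 5000 records, 5 minutes each
  batch: { tick: 50, max: 100 },            // batching window (ms) and maximum batch size
  retry: { base: 5, step: 3, limit: 5000 }, // exponential backoff between 5 ms and 5000 ms over 3 steps
});
```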
HA-store emits events to track cache hits, misses and outbound requests.
| Event | Description |
|---|---|
| cacheHit | When the requested item is present in the microcache, or is already being fetched. Prevents another request from being created. |
| cacheMiss | When the requested item is not present in the microcache and is not currently being fetched. A new request will be made. |
| coalescedHit | When a record query successfully hooks onto the promise of the same record already in transit. |
| query | When a batch of requests is about to be sent. |
| queryFailed | Indicates that the batch has failed. Retry policy will dictate if it should be re-attempted. |
| retryCancelled | Indicates that the batch has reached the allowed number of retries and is now being abandoned. |
| querySuccess | Indicates that the batch request was successful. |
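To feed these events into your metrics or logging, you can subscribe to them on the store instance. This is a minimal sketch that assumes the store exposes a Node.js EventEmitter-style `on` method and ignores the event payloads:

```js
// Log every ha-store event by name; payloads are ignored in this sketch.
[
  'cacheHit', 'cacheMiss', 'coalescedHit',
  'query', 'querySuccess', 'queryFailed', 'retryCancelled',
].forEach((eventName) => {
  itemStore.on(eventName, () => console.log(`ha-store event: ${eventName}`));
});
```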
You may also want to track the number of contexts and records stored via the `size` method.
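For example, a periodic check might look like this sketch; the exact shape of the object returned by `size` is not described here, so it is simply logged as-is:

```js
// Report how much the store is holding once a minute.
setInterval(() => {
  console.log('ha-store size:', itemStore.size());
}, 60000);
```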
Run the tests and benchmarks with:

```
npm test
npm run bench
```
Please do! This is an open-source project. If there is something you would like to see added or fixed, open an issue or file a pull request.
I am always looking for more maintainers, as well.
Apache 2.0 (c) 2019 Frederic Charette