Instead of the simple majority strategy (``ConsensusStrategy``) an
``UnanimousStrategy`` can be used to require the lock to be acquired in all
the stores.

.. caution::

    In order to get high availability when using the ``ConsensusStrategy``, the
    minimum cluster size must be three servers. This allows the cluster to keep
    working when a single server fails (because this strategy requires that the
    lock is acquired in more than half of the servers).
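
As an illustration, a minimal sketch of such a three-server setup (the host
names are placeholders, and the classes are combined as described in the
``CombinedStore`` section below)::

    use Symfony\Component\Lock\Factory;
    use Symfony\Component\Lock\Store\CombinedStore;
    use Symfony\Component\Lock\Store\RedisStore;
    use Symfony\Component\Lock\Strategy\ConsensusStrategy;

    $stores = [];
    // three independent Redis servers, so that the cluster keeps working
    // when any single one of them fails
    foreach (['redis1.example.com', 'redis2.example.com', 'redis3.example.com'] as $server) {
        $redis = new \Redis();
        $redis->connect($server);

        $stores[] = new RedisStore($redis);
    }

    // the lock is acquired when at least two of the three stores agree
    $store = new CombinedStore($stores, new ConsensusStrategy());
    $factory = new Factory($store);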

Reliability
-----------

The component guarantees that the same resource can't be locked twice as long
as the component is used in the following way.
272+
273+ Remote Stores
274+ ~~~~~~~~~~~~~
275+
276+ Remote stores (:ref: `MemcachedStore <lock-store-memcached >` and
277+ :ref: `RedisStore <lock-store-redis >`) use an unique token to recognize the true
278+ owner of the lock. This token is stored in the
279+ :class: `Symfony\\ Component\\ Lock\\ Key ` object and is used internally by the
280+ ``Lock ``, therefore this key must not be shared between processes (session,
281+ caching, fork, ...).
282+
283+ .. caution ::
284+
285+ Do not share a key between processes.
286+
287+ Every concurrent process must store the ``Lock `` in the same server. Otherwise two
288+ different machines may allow two different processes to acquire the same ``Lock ``.

.. caution::

    To guarantee that the same server is always used, do not use Memcached
    behind a LoadBalancer, a cluster or round-robin DNS. Even if the main
    server is down, the calls must not be forwarded to a backup or failover
    server.
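
For example, every process could be configured to connect to one explicitly
defined server (the host name below is only a placeholder)::

    use Symfony\Component\Lock\Factory;
    use Symfony\Component\Lock\Store\RedisStore;

    // every concurrent process connects to this exact server, never to a
    // load-balanced or failover endpoint
    $redis = new \Redis();
    $redis->connect('redis-lock.example.com', 6379);

    $factory = new Factory(new RedisStore($redis));

    // each process creates its own Lock and thus its own internal Key token
    $lock = $factory->createLock('invoice-publication');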

Expiring Stores
~~~~~~~~~~~~~~~

Expiring stores (:ref:`MemcachedStore <lock-store-memcached>` and
:ref:`RedisStore <lock-store-redis>`) guarantee that the lock is acquired
only for the defined duration of time. If the task takes longer to be
accomplished, then the lock can be released by the store and acquired by
someone else.

The ``Lock`` provides several methods to check its health. The ``isExpired()``
method checks whether or not its lifetime is over and the
``getRemainingLifetime()`` method returns its time to live in seconds.

Using the above methods, a more robust code would be::

    // ...
    $lock = $factory->createLock('invoice-publication', 30);

    $lock->acquire();
    while (!$finished) {
        if ($lock->getRemainingLifetime() <= 5) {
            if ($lock->isExpired()) {
                // lock was lost, perform a rollback or send a notification
                throw new \RuntimeException('Lock lost during the overall process');
            }

            $lock->refresh();
        }

        // Perform the task whose duration MUST be less than 5 seconds
    }

.. caution::

    Choose wisely the lifetime of the ``Lock`` and check whether its remaining
    time to live is enough to perform the task.

.. caution::

    Storing a ``Lock`` usually takes a few milliseconds, but network conditions
    may increase that time a lot (up to a few seconds). Take that into account
    when choosing the right TTL.

By design, locks are stored in servers with a defined lifetime. If the date or
time of the machine changes, a lock could be released sooner than expected.

.. caution::

    To guarantee that the date won't change, the NTP service should be
    disabled and the date only updated while the service is stopped.

FlockStore
~~~~~~~~~~

By using the file system, this ``Store`` is reliable as long as concurrent
processes use the same physical directory to store locks.

Processes must run on the same machine, virtual machine or container.
Be careful when updating a Kubernetes or Swarm service because, for a short
period of time, there can be two running containers in parallel.

The absolute path to the directory must remain the same. Be careful of
symlinks that could change at any time: Capistrano and blue/green deployments
often use that trick. Be careful when the path to that directory changes
between two deployments.

Some file systems (such as some types of NFS) do not support locking.

.. caution::

    All concurrent processes must use the same physical file system by running
    on the same machine and using the same absolute path to the locks
    directory.

By definition, usage of ``FlockStore`` in an HTTP context is incompatible
with multiple front servers, unless you ensure that the same resource will
always be locked on the same machine or use a well configured shared file
system.
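
For instance, a minimal sketch that follows these rules, assuming
``/var/lock/my-app`` is a real (non-symlinked, non-volatile) directory shared
by all local processes::

    use Symfony\Component\Lock\Factory;
    use Symfony\Component\Lock\Store\FlockStore;

    // a fixed absolute path, identical for every process on this machine
    // ('/var/lock/my-app' is only an example)
    $store = new FlockStore('/var/lock/my-app');
    $factory = new Factory($store);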

Files on the file system can be removed during maintenance operations, for
instance when cleaning up the ``/tmp`` directory or after a machine reboot
when the directory uses tmpfs. This is not an issue if the lock is released
when the process ends, but it is if the ``Lock`` is reused between requests.

.. caution::

    Do not store locks on a volatile file system if they have to be reused in
    several requests.
383+
384+ MemcachedStore
385+ ~~~~~~~~~~~~~~
386+
387+ The way Memcached works is to store items in memory. That means that by using
388+ the :ref: `MemcachedStore <lock-store-memcached >` the locks are not persisted
389+ and may disappear by mistake at anytime.
390+
391+ If the Memcached service or the machine hosting it restarts, every lock would
392+ be lost without notifying the running processes.
393+
394+ .. caution ::
395+
396+ To avoid that someone else acquires a lock after a restart, it's recommended
397+ to delay service start and wait at least as long as the longest lock TTL.

By default Memcached uses an LRU mechanism to remove old entries when the
service needs space to add new items.

.. caution::

    The number of items stored in Memcached must be kept under control. If
    that's not possible, LRU should be disabled and locks should be stored in
    a dedicated Memcached service away from the cache.

When the Memcached service is shared and used for multiple purposes, locks
could be removed by mistake. For instance, some implementations of the PSR-6
``clear()`` method use Memcached's ``flush()`` method, which purges and
removes everything.

.. caution::

    The ``flush()`` method must not be called, or locks should be stored in a
    dedicated Memcached service away from the cache.
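
For example, locks could use their own connection, distinct from the one used
by the cache (the host name is a placeholder)::

    use Symfony\Component\Lock\Store\MemcachedStore;

    // a Memcached service dedicated to locks: it is never flushed by the
    // cache and can be sized to avoid LRU evictions
    $memcached = new \Memcached();
    $memcached->addServer('memcached-lock.example.com', 11211);

    $store = new MemcachedStore($memcached);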

RedisStore
~~~~~~~~~~

The way Redis works is to store items in memory. That means that by using
the :ref:`RedisStore <lock-store-redis>` the locks are not persisted
and may disappear by mistake at any time.

If the Redis service or the machine hosting it restarts, every lock would
be lost without notifying the running processes.

.. caution::

    To avoid letting someone else acquire a lock after a restart, it's
    recommended to delay the service start and wait at least as long as the
    longest lock TTL.

.. tip::

    Redis can be configured to persist items on disk, but this option would
    slow down writes on the service. This could go against other uses of the
    server.

When the Redis service is shared and used for multiple purposes, locks could
be removed by mistake.

.. caution::

    The command ``FLUSHDB`` must not be called, or locks should be stored in a
    dedicated Redis service away from the cache.
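
When a fully dedicated service is not available, a lighter alternative is a
dedicated Redis database for locks, since ``FLUSHDB`` only purges the
currently selected database (the host name and database number below are
arbitrary)::

    use Symfony\Component\Lock\Store\RedisStore;

    $redis = new \Redis();
    $redis->connect('redis.example.com', 6379);
    // select a database dedicated to locks: a FLUSHDB issued on the cache
    // database (e.g. database 0) then can't remove the locks stored here
    $redis->select(7);

    $store = new RedisStore($redis);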

CombinedStore
~~~~~~~~~~~~~

Combined stores allow storing locks across several backends. It's a common
mistake to think that the lock mechanism will be more reliable. This is wrong:
the ``CombinedStore`` will be, at best, as reliable as the least reliable of
all managed stores. As soon as one managed store returns erroneous
information, the ``CombinedStore`` won't be reliable.

.. caution::

    All concurrent processes must use the same configuration, with the same
    number of managed stores and the same endpoints.

.. tip::

    Instead of using a cluster of Redis or Memcached servers, it's better to
    use a ``CombinedStore`` with a single server per managed store.

SemaphoreStore
~~~~~~~~~~~~~~

Semaphores are handled at the kernel level. In order to be reliable, processes
must run on the same machine, virtual machine or container. Be careful when
updating a Kubernetes or Swarm service because, for a short period of time,
there can be two running containers in parallel.
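
A minimal sketch; the store needs no configuration because it relies on the
local kernel::

    use Symfony\Component\Lock\Factory;
    use Symfony\Component\Lock\Store\SemaphoreStore;

    // semaphores only exist in this machine's kernel, so every process
    // using this factory must run on the same host
    $store = new SemaphoreStore();
    $factory = new Factory($store);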

.. caution::

    All concurrent processes must use the same machine. Before starting a
    concurrent process on a new machine, check that other processes are
    stopped on the old one.

Overall
~~~~~~~

Changing the configuration of stores should be done very carefully, for
instance during the deployment of a new version. Processes with the new
configuration must not be started while old processes with the old
configuration are still running.

.. _`locks`: https://en.wikipedia.org/wiki/Lock_(computer_science)
.. _Packagist: https://packagist.org/packages/symfony/lock
.. _`PHP semaphore functions`: http://php.net/manual/en/book.sem.php