session save path order matter? #1214


Open

AnoopAlias opened this issue Jul 27, 2017 · 3 comments

AnoopAlias commented Jul 27, 2017

Expected behaviour

Probably this is the intended behavior, but it's not mentioned anywhere in the docs!

session.save_path = "tcp://127.0.0.1:9501?database=2&timeout=3,tcp://127.0.0.1:6379?database=2&timeout=3"

With the above setting, PHP throws a Redis error if port 9501 is down, but if we reverse the server order it works. That is, when multiple servers are given and the first one in the list is down, an error is thrown; this does not happen when the last one is down.

Actual behaviour

If there are multiple servers, the error should be thrown only when all of them are down.

I'm seeing this behaviour on

  • OS: CentOS7
  • Redis: redis-3.2.3-1
  • PHP: php5.6
  • phpredis: 3.1.3

Steps to reproduce, backtrace or example script

; Load redis php extension
extension=redis.so

; Redis session store backend
session.save_handler = redis
session.save_path = "tcp://127.0.0.1:9501?database=2&timeout=3,tcp://127.0.0.1:6379?database=2&timeout=3"
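As an aside, the multi-server `session.save_path` value is just a comma-separated list of URLs with query parameters. A minimal sketch of how it could be split apart (Python here for illustration only; this is not phpredis's actual parser):

```python
from urllib.parse import urlparse, parse_qs

def parse_save_path(save_path):
    """Split a comma-separated session.save_path string into
    (host, port, params) tuples, one per server."""
    servers = []
    for spec in save_path.split(","):
        url = urlparse(spec.strip())
        # parse_qs yields lists; keep the first value of each parameter
        params = {k: v[0] for k, v in parse_qs(url.query).items()}
        servers.append((url.hostname, url.port, params))
    return servers

servers = parse_save_path(
    "tcp://127.0.0.1:9501?database=2&timeout=3,"
    "tcp://127.0.0.1:6379?database=2&timeout=3"
)
```

With the save_path from the report, this yields two entries, ports 9501 and 6379, each with `database=2` and `timeout=3`.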

I've checked

  • There is no similar issue from other users
  • Issue isn't fixed in develop branch
@jasminmistry

Please fix the issue.

@michael-grunder michael-grunder self-assigned this Nov 20, 2019
michael-grunder added a commit that referenced this issue Nov 21, 2019
This commit modifies our weighted session selection algorithm by adding
a scratch array so we can mark servers that we've picked but were unable
to connect to.

All we really do is treat any server that has failed as if it had a
weight of zero, meaning it will never be picked again.

Addresses #1214
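The approach described in the commit message can be sketched as a small model (Python here for illustration; the real logic lives in the extension's C code, and `try_connect` is a hypothetical stand-in for the actual connection attempt):

```python
import random

def pick_server(servers, weights, failed, rng=random):
    """Weighted random pick that treats any server in `failed` as
    having weight zero, so it can never be chosen again."""
    effective = [0 if s in failed else w for s, w in zip(servers, weights)]
    total = sum(effective)
    if total == 0:
        return None  # every server has already failed
    r = rng.uniform(0, total)
    upto = 0
    candidate = None
    for server, w in zip(servers, effective):
        if w == 0:
            continue  # skip failed (or zero-weight) servers
        candidate = server
        upto += w
        if r <= upto:
            return server
    return candidate  # float-rounding fallback: last viable server

def connect_with_retry(servers, weights, try_connect):
    """Keep picking until a connection succeeds, or return None
    once all servers have been marked failed."""
    failed = set()
    while True:
        server = pick_server(servers, weights, failed)
        if server is None:
            return None
        if try_connect(server):
            return server
        failed.add(server)
```

The `failed` set plays the role of the scratch array from the commit: a server that refuses a connection is excluded from every subsequent pick, so an error surfaces only when the whole list is exhausted.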

michael-grunder commented Nov 22, 2019

I've implemented a bit of experimental logic to try again if the first server we pick is down (and continue trying until we run out of servers to try).

This does present a problem of what to do if a session is initially created on the second choice (because the first pick was down) and then that first server comes back up. We'd go looking for the session and not find it.

@yatsukhnenko

@michael-grunder if I understand correctly, it's not a bug that an error is thrown when a shard is not up. The main problem here is the inconsistency when we swap the servers in the config, so perhaps we can handle that and make the behavior more predictable?
