Remove slave to jedisPool mapping #2504
base: master
Conversation
@wxjackie Methods like
The problem is that the slave nodes in the `nodes` map are never checked or updated after initialization, so after I performed the following operations, this phenomenon occurred:

Thanks for the reply @sazzad16. Please see my test screenshot below: JedisCluster does cache the invalid slave information and may continue to use it if the address
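To make this easier to reproduce without the screenshot, here is a minimal sketch (not the reporter's actual test; the seed address, ports, and timing are assumptions) that inspects the cached node map through `JedisCluster.getClusterNodes()` before and after a slave is taken down:

```java
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

import redis.clients.jedis.HostAndPort;
import redis.clients.jedis.JedisCluster;

public class StaleNodeInspection {
    public static void main(String[] args) throws Exception {
        // Assumed local cluster seed node; adjust host/port for your setup.
        Set<HostAndPort> seeds = new HashSet<>(
                Collections.singletonList(new HostAndPort("127.0.0.1", 7000)));

        try (JedisCluster cluster = new JedisCluster(seeds)) {
            // getClusterNodes() exposes the cached "host:port" -> JedisPool map.
            System.out.println("Cached nodes at startup: "
                    + cluster.getClusterNodes().keySet());

            // Manually stop one slave here (e.g. redis-cli -p 7005 shutdown nosave),
            // then give the cluster some time before inspecting the cache again.
            Thread.sleep(30_000);

            // The dead slave's host:port typically still appears, since only the
            // slot/master information gets refreshed, not the slave pool entries.
            System.out.println("Cached nodes after slave down: "
                    + cluster.getClusterNodes().keySet());
        }
    }
}
```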
This PR solves redis#2504 and redis#2550: when renewing the slot cache (renewSlotCache), we also remove dead nodes according to the latest query.
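As a rough illustration of that idea (a hedged sketch only, not the actual diff: the class and method names are made up for this example, the `host:port` -> `JedisPool` map shape is assumed, and the raw `CLUSTER SLOTS` reply layout can differ between Jedis versions):

```java
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

import redis.clients.jedis.JedisPool;

public class SlotCachePruner {

    /**
     * Remove cached pools for nodes that are absent from the latest CLUSTER SLOTS reply.
     *
     * @param nodes      cached "host:port" -> JedisPool map (assumed structure)
     * @param slotsReply raw reply of CLUSTER SLOTS, e.g. from Jedis#clusterSlots()
     */
    @SuppressWarnings("unchecked")
    public static void pruneDeadNodes(Map<String, JedisPool> nodes, List<Object> slotsReply) {
        Set<String> liveNodeKeys = new HashSet<>();

        for (Object slotInfoObj : slotsReply) {
            List<Object> slotInfo = (List<Object>) slotInfoObj;
            // Elements 0 and 1 are the slot range; the rest describe the master and replicas.
            for (int i = 2; i < slotInfo.size(); i++) {
                List<Object> hostInfo = (List<Object>) slotInfo.get(i);
                String host = new String((byte[]) hostInfo.get(0));
                long port = (Long) hostInfo.get(1);
                liveNodeKeys.add(host + ":" + port);
            }
        }

        // Evict and close every cached pool whose node is no longer reported by the cluster.
        nodes.entrySet().removeIf(entry -> {
            if (!liveNodeKeys.contains(entry.getKey())) {
                entry.getValue().close();
                return true;
            }
            return false;
        });
    }
}
```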
The JedisInfoCache never gets refreshed automatically, even when a failover, resharding, cluster vertical upgrade (blue-green), or auto-scaling occurs for the Redis cluster. For example, when a failover occurs, a slave is promoted to master and a new slave node is initialized and added to the shard, so the cluster node IPs change.
This is marked as "will not fix" but can we get clarity on what that means?
Right now Jedis in cluster mode does not seem to handle failover as described by @rustlingwind
@quinlam Not sure if you noticed, but this is submitted as a pull request (PR), not as an issue. Irrespective of the issue, the change this PR proposes is not accepted. Ideally there would be a separate issue and PR; in that case we could simply close the PR. But this specific PR is kept open to attract more capable eyes and brains toward a better solution, possibly even another PR. That's where the label comes from, which is not ideal, but the situation is not ideal in the first place.
We recently ran into such a problem in our project: we deploy and manage several Redis clusters on K8s. After the business service had been running for some time against one of these clusters (cluster A), one of cluster A's shards failed, and the business service then connected to another cluster (cluster B) through JedisCluster.
In some environments, such as Redis on K8s or some cloud service platforms, a Redis node's address may be reused by another Redis cluster, as in the scenario above.
But the `nodes` map in `JedisInfoCache` saves the mapping of all master and slave nodes to `JedisPool`. However, the slave -> `JedisPool` entries are never updated or reset after JedisCluster is initialized. If a slave goes down and its address is later reused by another cluster, and this cluster then fails, a `JedisPool` is selected at random from the `nodes` map for service discovery; the other cluster may be discovered instead, causing read and write requests to be sent to the wrong cluster. So I think the slave -> `JedisPool` mapping should not be saved, because these entries are never checked after initialization but may still be selected for discovery after a failure.