Ignore stale gossip packets that arrive out of order #1777

Open
wants to merge 1 commit into base: unstable
Conversation

enjoy-binbin
Member

There is a failure in the daily CI:

```
=== ASSERTION FAILED ===
==> cluster_legacy.c:6588 'primary->replicaof == ((void *)0)' is not true
```

These are the logs:

```
- I am fd4318562665b4490ccc86e7f7988017cf960371 and I became a replica,
- 763c0167232dae95cdcc0a1568cd5368ac3b99f5 is the new primary
27867:M 24 Feb 2025 00:19:11.011 * Failover auth granted to 763c0167232dae95cdcc0a1568cd5368ac3b99f5 () for epoch 9
27867:M 24 Feb 2025 00:19:11.039 * Configuration change detected. Reconfiguring myself as a replica of node 763c0167232dae95cdcc0a1568cd5368ac3b99f5 () in shard c5f6b2a9c74cabd4d1e54d1130dc9cb9419bf76f
27867:S 24 Feb 2025 00:19:11.039 * Before turning into a replica, using my own primary parameters to synthesize a cached primary: I may be able to synchronize with the new primary with just a partial transfer.
27867:S 24 Feb 2025 00:19:11.039 * Connecting to PRIMARY 127.0.0.1:23654
27867:S 24 Feb 2025 00:19:11.039 * PRIMARY <-> REPLICA sync started

- here myself got a stale message, but we still processed the packet, which caused this issue
27867:S 24 Feb 2025 00:19:11.040 * Ignore stale message from 763c0167232dae95cdcc0a1568cd5368ac3b99f5 () in shard c5f6b2a9c74cabd4d1e54d1130dc9cb9419bf76f; gossip config epoch: 8, current config epoch: 9
27867:S 24 Feb 2025 00:19:11.040 * Node 763c0167232dae95cdcc0a1568cd5368ac3b99f5 () is now a replica of node fd4318562665b4490ccc86e7f7988017cf960371 () in shard c5f6b2a9c74cabd4d1e54d1130dc9cb9419bf76f
```

We can see that myself got a stale message, but we still processed it, changed
the role, and caused a primary-replica chain loop.
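Conceptually, a stale packet like this should be skipped outright rather than only logged and then processed anyway. Below is a minimal C sketch of that epoch check; the `clusterNode`/`clusterMsg` fields are simplified assumptions for illustration, not the actual cluster_legacy.c structures or the exact change in this PR.

```c
/* Minimal sketch (not the actual valkey code): drop role/slot claims from a
 * packet whose announced config epoch is older than what we already know. */
#include <stdint.h>
#include <stdio.h>

typedef struct clusterNode {
    uint64_t configEpoch;          /* highest config epoch seen for this node */
    struct clusterNode *replicaof; /* NULL when the node is a primary */
} clusterNode;

typedef struct clusterMsg {
    uint64_t configEpoch;          /* config epoch the sender claims in this packet */
    int senderClaimsReplica;       /* nonzero if the sender says it is a replica */
} clusterMsg;

/* Return 1 if the packet should be processed, 0 if it is stale. */
static int packetIsFresh(const clusterNode *sender, const clusterMsg *hdr) {
    if (hdr->configEpoch < sender->configEpoch) {
        printf("Ignore stale message: gossip config epoch %llu, current config epoch %llu\n",
               (unsigned long long)hdr->configEpoch,
               (unsigned long long)sender->configEpoch);
        return 0; /* skip the stale role/slot claims entirely */
    }
    return 1;
}

int main(void) {
    clusterNode sender = {.configEpoch = 9, .replicaof = NULL};
    clusterMsg stale = {.configEpoch = 8, .senderClaimsReplica = 1};
    if (packetIsFresh(&sender, &stale)) {
        /* Only a fresh packet should be allowed to change the sender's
         * role or slot ownership; the stale one is dropped above. */
    }
    return 0;
}
```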

The reason is explained below; this text is copied from #651.

In some rare cases, slot config updates (via either PING/PONG or UPDATE)
can be delivered out of order, as illustrated below:

1. To keep the discussion simple, let's assume we have 2 shards, shard a
   and shard b. Let's also assume there are two slots in total with shard
   a owning slot 1 and shard b owning slot 2.
2. Shard a has two nodes: primary A and replica A*; shard b has primary
   B and replica B*.
3. A manual failover was initiated on A* and A* just won the election.
4. A* announces to the world that it now owns slot 1 using PING messages.
   These PING messages are queued in the outgoing buffer to every other
   node in the cluster, namely, A, B, and B*.
5. Keep in mind that there is no ordering in the delivery of these PING
   messages. For the stale PING message to appear, we need the following
   events in the exact order as they are laid out.
a. An old PING message, sent before A* became the new primary, is still
   queued in A*'s outgoing buffer to A. This later becomes the stale
   message, which says A* is a replica of A. It is followed by A*'s
   election-winning announcement PING message.
b. B or B* processes A*'s election-winning announcement PING message
   and sets slots[1]=A*.
c. A sends a PING message to B (or B*). Since A hasn't learnt that A*
   won the election, it claims that it owns slot 1, but with a lower
   epoch than B has for slot 1. This leads to B sending an UPDATE to
   A directly, saying that A* is the new owner of slot 1 with a higher epoch.
d. A receives the UPDATE from B and executes clusterUpdateSlotsConfigWith.
   A now realizes that it is a replica of A* hence setting myself->replicaof
   to A*.
e. Finally, the pre-failover PING message queued in A*'s outgoing
   buffer to A is delivered and processed by A, out of order.
f. This stale PING message creates the replication loop (see the sketch
   after this list).
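To make the end state concrete, here is a toy model (assumed and heavily simplified, not valkey code) of steps d and e: once the stale PING is applied on top of the UPDATE, A and A* each believe the other is their primary, which is exactly the chain loop that trips the `primary->replicaof == ((void *)0)` assertion.

```c
/* Toy model of steps d and e: applying the stale pre-failover PING after
 * the UPDATE closes a replicaof cycle between A and A*. */
#include <assert.h>
#include <stddef.h>
#include <stdio.h>

typedef struct node {
    const char *name;
    struct node *replicaof; /* NULL means the node believes it is a primary */
} node;

int main(void) {
    node A = {"A", NULL}, Astar = {"A*", NULL};

    /* Step d: A processes B's UPDATE and demotes itself under A*. */
    A.replicaof = &Astar;

    /* Step e: the stale pre-failover PING from A* still says "A* is a
     * replica of A"; processing it blindly creates the chain loop. */
    Astar.replicaof = &A;

    /* Now A's primary (A*) is itself a replica: the invariant that a
     * primary has replicaof == NULL no longer holds. */
    node *primary = A.replicaof;
    printf("%s -> %s -> %s\n", A.name, primary->name,
           primary->replicaof ? primary->replicaof->name : "(none)");
    assert(primary->replicaof == NULL); /* fires, like cluster_legacy.c:6588 */
    return 0;
}
```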

Signed-off-by: Binbin <[email protected]>