DRIVERS-2884 Avoid connection churn when operations timeout #1675
base: master
Conversation
source/client-side-operations-timeout/tests/connection-churn.yml
```yaml
# after maxTimeMS, whereas mongod returns it after
# max(blockTimeMS, maxTimeMS). Until this ticket is resolved, these tests
# will not pass on sharded clusters.
topologies: ["standalone", "replicaset"]
```
`standalone` -> `single`
```yaml
- name: findOne
  object: *collection
  arguments:
    timeoutMS: 50
```
In Python this timeout is too small and causes this find to fail before sending anything to the server. The same problem exists in the other tests too. Perhaps all of these tests should run a setup command (e.g. ping) to ensure a connection is created and available in the pool, then run the finds. What do you think?
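One way that suggestion might look in the unified test format (a hypothetical sketch; the `*database` entity name and the placement of the setup step are assumptions, not part of this PR):

```yaml
# Hypothetical setup step: run a cheap command first so a connection is
# already established and sitting in the pool before the deliberately
# small timeoutMS comes into play.
- name: runCommand
  object: *database
  arguments:
    commandName: ping
    command: { ping: 1 }
- name: findOne
  object: *collection
  arguments:
    timeoutMS: 50
    filter: { _id: 1 }
```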
Changes to the unified test format LGTM.
@prestonvasquez as per our conversation around where to add the missing event names in #1782, this schema version would be an ideal candidate as it already adds new events to the list.
source/connection-monitoring-and-pooling/connection-monitoring-and-pooling.md
```typescript
connectionId: int64;

/**
 * The time it took to complete the pending read.
```
Agreed. Can we clarify that in the description of `duration`? We can take inspiration from the definitions of `duration` for the checkout-failed and checkout-succeeded events. Ex:
```typescript
/**
 * The time it took to establish the connection.
 * In accordance with the definition of establishment of a connection
 * specified by `ConnectionPoolOptions.maxConnecting`,
 * it is the time elapsed between emitting a `ConnectionCreatedEvent`
 * and emitting this event as part of the same checking out.
 *
 * Naturally, when establishing a connection is part of checking out,
 * this duration is not greater than
 * `ConnectionCheckedOutEvent`/`ConnectionCheckOutFailedEvent.duration`.
 *
 * A driver MAY choose the type idiomatic to the driver.
 * If the type chosen does not convey units, e.g., `int64`,
 * then the driver MAY include units in the name, e.g., `durationMS`.
 */
duration: Duration;
```
So, maybe something like:
```typescript
/**
 * The time it took to complete the pending read.
 * This duration is defined as the time elapsed between emitting a `PendingResponseStarted` event
 * and emitting this event as part of the same checking out.
 *
 * A driver MAY choose the type idiomatic to the driver.
 * If the type chosen does not convey units, e.g., `int64`,
 * then the driver MAY include units in the name, e.g., `durationMS`.
 */
duration: Duration;
```
```typescript
connectionId: int64;

/**
 * The time it took to complete the pending read.
```
(same comment for other definitions of duration in this PR).
```yaml
- connectionCheckedInEvent: {} # Second find succeeds.
# If the connection is closed server-side while draining the response, the
# driver must close the connection.
- description: "connection closed server-side while draining response"
```
This seems to be a sufficient check that if the `awaitPendingResponse` function fails with a non-timeout error, the connection should be closed.
```yaml
timeoutMS: 50
filter: {_id: 1}
expectError:
  isTimeoutError: false
```
Shouldn't this error be considered retryable under the retryable read/write specs?
CMAP only makes the pool-cleared error retryable at check-out. Since retryable reads occur at the operation layer and this particular network error happens at the connection pool layer (before a read command goes on the wire), I think we would have to extend the CMAP spec to say that network errors while checking out qualify as retryable.
Checkout errors should already be retryable. For example, a network error when establishing a new connection will cause an automatic retry.
The only error type we tag as retryable during checkOut is `PoolClearedError`. Nothing else in that layer is marked as retryable, which is why the test in question passes in the Go Driver. Am I missing something in the CMAP spec?
Yes, that is a bug in the Go driver, see https://jira.mongodb.org/browse/DRIVERS-746

And the retryable writes spec:

> When the driver encounters a network error establishing an initial connection to a server, it MUST add a RetryableWriteError label to that error if the MongoClient performing the operation has the retryWrites configuration option set to true.
This isn't a network error that occurs during a handshake, it's a network error encountered when trying to drain data from an established connection.
Correct, but my point is that it is the same case in spirit. We can't introduce this new error mode without making it retryable.
I've updated the retryable reads and writes specifications to retry for network errors when checking out a connection.
```diff
@@ -176,6 +176,11 @@ The RetryableWriteError label might be added to an error in a variety of ways:
   RetryableWriteError label to that error if the MongoClient performing the operation has the retryWrites
   configuration option set to true.

+- When the driver encounters a network error checking out a connection, it MUST add a RetryableWriteError label to that
+  error if the MongoClient performing the operation has the retryWrites configuration option set to true. For example,
+  a network error encountered when checking out a connection that must attempt to discard a pending response from the
```
> a network error encountered when checking out a connection that must attempt to discard a pending response

Is this sentence correct? I'm getting tripped up by the "that must attempt". Should it be:

> a network error encountered when reading a pending response during connection checkout.
```yaml
- connectionCheckedOutEvent: {}
- connectionCheckedInEvent: {} # Ping finishes.
- connectionCheckedOutEvent: {}
- connectionCheckedInEvent: {} # Insert fails.
- connectionPendingResponseStarted: {} # Pending read fails on first find
- connectionPendingResponseFailed:
    reason: error
- connectionClosedEvent:
    reason: error
- connectionCheckedOutEvent: {}
- connectionCheckedInEvent: {} # Find finishes.
```
Could we add server selection events to this list (maybe we could use logging tests for this, even though not all drivers have implemented the CLAM spec)? It would be nice to clarify that the retry happens because the error returned is retryable and goes through the existing retry mechanism, not that we retry checkout directly. Basically:
```yaml
- connectionPendingResponseStarted: {} # Pending read fails on first find
- connectionPendingResponseFailed:
    reason: error
- connectionClosedEvent:
    reason: error
- serverSelectionStarted
- serverSelectionFinished
- connectionCheckedOutEvent: {}
- connectionCheckedInEvent: {} # Find finishes.
```
This PR implements the design for connection pooling improvements described in DRIVERS-2884, based on the CSOT (Client-Side Operation Timeout) spec. It addresses connection churn caused by network timeouts during operations, especially in environments with low client-side timeouts and high latency.
When a connection is checked out after a network timeout, the driver now attempts to resume and complete reading any pending server response (instead of closing and discarding the connection). This may require multiple checkouts.
Each pending response read is subject to a cumulative 3-second static timeout. The timeout is refreshed after each successful read, acknowledging that progress is being made. If no data is read and the timeout is exceeded, the connection is closed.
To reduce unnecessary latency, if the timeout has expired while the connection was idle in the pool, a non-blocking single-byte read is performed; if no data is available, the connection is closed immediately.
This update introduces new CMAP events and logging messages (PendingResponseStarted, PendingResponseSucceeded, PendingResponseFailed) to improve observability of this path.