Kopf does not restart after 429 too many requests error #1108

Comments
I'm seeing the same issue.

I see the same problem. It would be great if one of the following could be achieved: …

@nolar Any input or update on this? If you know where the potential issue may be, can you give some pointers so that I or someone else can attempt a fix?
Same issue when starting kopf with a namespace wildcard scope:
With kopf handlers on N resources (mostly CRDs) across M namespaces, kopf appears to immediately start N × M watchers, most of which get 429 responses from the Kubernetes API. kopf keeps running, but the handlers for most resources are never fired. A sketch of this setup follows.
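As a rough illustration (a minimal sketch; the group/kind names and the wildcard pattern are made up for this example), this is the kind of setup that multiplies watch streams:

```python
import kopf

# Handlers on several custom resources. When the operator is started as, e.g.,
#   kopf run --namespace='team-*' handlers.py
# kopf resolves the wildcard to M matching namespaces and opens one watch
# stream per (resource, namespace) pair -- i.e. N resources x M namespaces.
@kopf.on.event('example.com', 'v1', 'widgets')
def on_widget_event(event, **_):
    pass

@kopf.on.event('example.com', 'v1', 'gadgets')
def on_gadget_event(event, **_):
    pass
```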
We have been seeing this problem when the Kubernetes API server restarts during cluster upgrades. Our workaround for now is to restart the operator pods after each upgrade.
Long story short
We have kopf 1.37.1 running a watch stream on a single custom resource, cluster-wide. Eventually we received a 429 Too Many Requests error from the kube-apiserver. The kopf watch did not restart after this error: it shut down, while the process kept serving the kopf liveness and readiness probe endpoints.
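For reference, the probes in question are the ones kopf serves when started with a liveness URL (a minimal sketch following kopf's documented probe mechanism; the probe id and port are illustrative):

```python
import datetime
import kopf

# Served by kopf's built-in HTTP endpoint when the operator is started as:
#   kopf run --liveness=http://0.0.0.0:8080/healthz handlers.py
@kopf.on.probe(id='now')
def get_current_time(**_):
    return datetime.datetime.utcnow().isoformat()
```

These probes only confirm that the process and its event loop are alive; they say nothing about whether the watch streams are still running, which is why nothing ever restarted the pod.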
Based on the logs, it does not look like there were too many requests at all.
We had the following updates to the CR list:
Line 813: 2024-02-26T06:02:22.849697098Z
Line 1256: 2024-02-26T07:02:27.741847287Z
Line 1383: 2024-02-26T07:18:02.638854440Z
Line 1706: 2024-02-26T08:02:29.404285926Z
Line 2148: 2024-02-26T09:02:30.995018345Z
Line 2583: 2024-02-26T10:02:32.614030753Z
Line 3059: 2024-02-26T11:02:36.687348782Z
Line 3249: 2024-02-26T11:27:30.696478118Z
Line 3479: 2024-02-26T11:54:46.913868429Z
Line 3958: 2024-02-26T12:54:51.819996162Z
Then we hit the following error from kopf.
All of the available watching timeout settings have been set, but no restart occurs. The failure is silent: the kopf process does not exit, which makes it impossible to recover automatically.
Is there a setting or something else we are missing to deal with this issue? Are there any operational recommendations to avoid it?
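For concreteness, the watching timeout settings we mean are along these lines (a minimal sketch using kopf's documented operator settings; the numeric values are illustrative, not our production values):

```python
import kopf

@kopf.on.startup()
def configure(settings: kopf.OperatorSettings, **_):
    # Ask the API server to close each watch stream after this many seconds,
    # so that kopf reconnects regularly instead of idling forever.
    settings.watching.server_timeout = 300
    # Client-side cap on a single watch request, slightly above the server's.
    settings.watching.client_timeout = 330
    # Pause before re-establishing a watch connection after it ends.
    settings.watching.reconnect_backoff = 1.0
```

Even with these in place, the watch task that hit the 429 never came back.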
Kopf version
1.37.1
Kubernetes version
v1.27.8+4fab27b (Red Hat OpenShift 4.14.8)
Python version
3.11
Code
Our kopf configuration. We have never been able to reproduce this issue reliably.
Logs
Additional information
No response