
clone on read from cache #321


Merged

merged 1 commit into master from clone-on-read-from-cache on Jan 25, 2021
Conversation

@csviri (Collaborator) commented Jan 24, 2021

No description provided.

@csviri (Collaborator, Author) commented Jan 25, 2021

@metacosm @adam-sandor @ppatierno So, as discussed here:
#319
this PR should fix the mentioned issue and possibly others. Can we merge this, and close the issue and the other PRs?

@ppatierno (Contributor)

@csviri sorry, I checked the wrong link! I have tested my PR, but not yours yet.

@metacosm (Collaborator)

Hmm, cloning by serializing and deserializing might have quite a performance impact and I'm not sure it wouldn't actually be faster to always retrieve the last version from the cluster…

@csviri (Collaborator, Author) commented Jan 25, 2021

Hmm, cloning by serializing and deserializing might have quite a performance impact and I'm not sure it wouldn't actually be faster to always retrieve the last version from the cluster…

@metacosm
Retrieving from the cluster also involves deserialization. But note that these are small objects (usually not huge JSON docs, think megabytes), so the cloning can be measured in microseconds (we could actually log it at trace level). (https://www.ericdecanini.com/2020/10/13/benchmarking-gson-vs-jackson-vs-moshi-2020/)
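
For a rough sense of the cost, here is a minimal, hypothetical timing sketch of a Jackson serialize/deserialize clone; `CloneTiming`, `MyResource`, and the `jsonClone` helper are illustrative stand-ins, not code from this PR:

```java
import com.fasterxml.jackson.databind.ObjectMapper;

public class CloneTiming {

    // Small illustrative POJO standing in for a typical custom resource spec.
    public static class MyResource {
        public String name = "example";
        public int replicas = 3;
    }

    private static final ObjectMapper MAPPER = new ObjectMapper();

    // Deep-copies an object by writing it to JSON and reading it back.
    static <T> T jsonClone(T object, Class<T> type) throws Exception {
        return MAPPER.readValue(MAPPER.writeValueAsString(object), type);
    }

    public static void main(String[] args) throws Exception {
        MyResource original = new MyResource();
        long start = System.nanoTime();
        MyResource copy = jsonClone(original, MyResource.class);
        long micros = (System.nanoTime() - start) / 1_000;
        System.out.println("clone of " + copy.name + " took ~" + micros + " µs");
    }
}
```

(The very first call includes Jackson's reflection warm-up; steady-state clones of small objects like these are typically in the microsecond range.)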

Making an API call on every event we receive / every controller execution is not a good idea. You would have to make or maintain a TCP connection to the K8s API server, while we always have the latest resource from the watch anyway. Also note that we try to minimize calls to the K8s API, so we don't mount a DoS attack on ourselves.
(This would mean an API call per controller execution, where the execution might not even be triggered by a custom resource change. Imagine you have 1000 CRs and receive an event from a Deployment event source for each; you would make 1000 requests to the K8s API while no request is needed at all, since the CustomResource has not changed.)

I think it's quite high priority not to poll the K8s API in general.

There might be faster ways of cloning, but at this stage, to fix this problem, I think cloning is definitely the way to go. In the long term we can discuss a different strategy. But this PR fixes the problem, and the performance impact is negligible.
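
As a sketch of what "clone on read" means here (names and structure are illustrative and assume a Jackson-based deep copy; this is not the SDK's actual cache implementation):

```java
import com.fasterxml.jackson.databind.ObjectMapper;
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;

public class CloneOnReadCache<T> {

    private final ObjectMapper mapper = new ObjectMapper();
    private final Map<String, T> resources = new ConcurrentHashMap<>();
    private final Class<T> type;

    public CloneOnReadCache(Class<T> type) {
        this.type = type;
    }

    // Writes store the resource exactly as received from the watch.
    public void cache(String uid, T resource) {
        resources.put(uid, resource);
    }

    // Reads hand out a deep copy so callers cannot mutate the cached instance.
    public Optional<T> getLatest(String uid) {
        return Optional.ofNullable(resources.get(uid)).map(this::deepClone);
    }

    private T deepClone(T resource) {
        try {
            return mapper.readValue(mapper.writeValueAsString(resource), type);
        } catch (Exception e) {
            throw new IllegalStateException("Failed to clone cached resource", e);
        }
    }
}
```

The point of handing out a copy on read is that a controller mutating the resource during reconciliation cannot corrupt the cached state, while writes stay cheap and the watch remains the single source of truth.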

@csviri (Collaborator, Author) commented Jan 25, 2021

@csviri sorry, I checked the wrong link! I have tested my PR, but not yours yet.

@ppatierno
Ugh, I only saw this now. Could you test with this PR, please?

@metacosm (Collaborator)

@csviri retrieving from the cluster does indeed deserialize, but here we're serializing then deserializing… Also, the process is not a perfect clone, i.e. not all state is necessarily serialized (though the controller should be able to reconstruct that state from the "clean" copy).

I do agree that we should merge this fix if it indeed addresses the issue, but I'm not convinced that this caching strategy is the proper one, and we might indeed need to re-think some things.

@csviri (Collaborator, Author) commented Jan 25, 2021

@metacosm Sure, happy to discuss alternatives. Please organise a meeting if needed.

Although I think caches also have a significant role here in event sources, to avoid polling.

Hopefully I can finish the blog post this week; this topic is touched on there too. Eh, I can't find enough time :/

@csviri csviri merged commit 03b6517 into master Jan 25, 2021
@metacosm metacosm deleted the clone-on-read-from-cache branch March 11, 2021 08:51

Successfully merging this pull request may close these issues.

Controller create/update handling called even if exception when applying finalizer