feat: Synchronous Metrics Reader and Exporter #4549
Comments
Could you explain a little more the use case for queueing and batching? We don't have any additional batching in the spec for PeriodicExportingMetricReader because metrics are effectively already batched via aggregation. If you get 1000
Hi @aabmass, Today, my options are:
I mentioned on the PR that if this puts us in a tough spot, then I am happy to drop the feature request or make it experimental to see if there is any interest. But I don't want to cause an issue with the specification by trying to implement something outside the agreed readers we already have. I ended up creating PR #4559 to add an example for users that need synchronous gauge collection, using the method we discussed on the last SIG call. I thought it also might be useful in a CI/CD pipeline where you might want to explicitly capture gauge readings and write them in batches before shutdown.
Thanks for explaining. I agree OTel metrics don't support this well today since they aggregate to reduce data volume. Would generating a histogram work for you?
You wouldn't keep every single measurement along the way, but you could sample more frequently without generating a ton of points. Or is it super important to keep every point? Also, have you considered emitting Events (logs) with individual measurements?
Oh, agreed, I think a histogram is a good alternative, but it locks you into that data representation. If you have the option to save the raw gauge measurements, you can transform those results later while retaining the exact metric values generated at the time of collection. I was thinking of a three-stage Kafka processing pipeline that captures telemetry measurements at each stage; you may want to capture message size, transformation logic, etc. Though these could be stored in a histogram as well. With the workaround, I am happy to close out the issue, and if it pops up again, we can revisit.
I have closed #4542 as out of scope, as I agree with the above options and workarounds. Now that we have documented an example of the workaround, it should help future users with similar use cases. I mentioned in the closed PR that if this feature is requested again in the future, I am happy for the PR to be reopened or the code reused in another PR. Thanks for your analysis @aabmass and @lzchen. I shall see you at the next Python SIG call :)
Great, thank you again @Jayclifford345 |
Is your feature request related to a problem?
Summary
Add a SynchronousExportingMetricReader to complement the existing PeriodicExportingMetricReader, supporting on-demand batch exporting of metrics rather than timer-based collection.

Problem Statement
The current PeriodicExportingMetricReader uses fixed intervals, which doesn't fit all use cases. Some scenarios require explicit control over when metrics are collected and exported. It also leaves a feature gap: logs and traces support batch exporting, but metrics do not.

Describe the solution you'd like
Solution
A SynchronousExportingMetricReader provides:

- Queueing of collected metrics between exports
- Batch export on demand via explicit collect() calls

Real-World Use Cases
Example Usage
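A minimal standalone sketch of the proposed queue-and-batch semantics follows. Note that SynchronousExportingMetricReader was not merged into the SDK, so the class below, along with the Measurement type and the record/collect/export_fn names, is purely illustrative and not part of the OpenTelemetry API:

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical sketch: these names are illustrative, not OpenTelemetry SDK API.

@dataclass
class Measurement:
    name: str
    value: float

class SynchronousExportingMetricReader:
    """Queues measurements and exports them only on explicit collect() calls."""

    def __init__(self, export_fn: Callable[[List[Measurement]], None],
                 max_batch_size: int = 512):
        self._export_fn = export_fn
        self._max_batch_size = max_batch_size
        self._queue: List[Measurement] = []

    def record(self, measurement: Measurement) -> None:
        # No timer: measurements simply accumulate until collect() is called.
        self._queue.append(measurement)

    def collect(self) -> int:
        """Export queued measurements in batches; return the number exported."""
        exported = 0
        while self._queue:
            batch = self._queue[:self._max_batch_size]
            self._queue = self._queue[self._max_batch_size:]
            self._export_fn(batch)
            exported += len(batch)
        return exported

# Usage: queue two gauge readings, then export them in one explicit batch.
batches: List[List[Measurement]] = []
reader = SynchronousExportingMetricReader(batches.append)
reader.record(Measurement("queue_depth", 12))
reader.record(Measurement("queue_depth", 9))
reader.collect()
```

The key design point is that nothing leaves the reader until the caller decides; a CI/CD job, for example, could record readings throughout a run and flush them once before shutdown.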
Describe alternatives you've considered
Alternatives Considered
Similar behaviour could be achieved with the PeriodicExportingMetricReader by setting the collection interval to infinite and calling collect() manually. However, that approach does not explicitly implement a queue-and-batch system. This feature would bring metrics to parity with logs and traces.

Additional Context
References
PR with new SynchronousExportingMetricReader: #4542

Would you like to implement a fix?
Yes