[Bug] OpenTelemetry interceptors report errors #160
Thanks for the reproduction! That really helps. I will hopefully get to this soon.
@anhdle14 - While it takes me some time to get around to this, there is https://github.com/temporalio/samples-python/tree/main/open_telemetry that does work fine. Can you confirm? Also note that an OTLP package that uses an older version of protobuf may not work with ours at this time.
I'm seeing this as well in my tests with
@nathanielobrown - Can you describe what you're trying to do exactly? Are you trying to use OpenTelemetry from a workflow or in your workflow file? Can you pass through the import?
I'm importing:

```python
def get_worker_interceptors() -> Sequence[Interceptor]:
    return [SentryInterceptor(), TracingInterceptor()]

def get_client_interceptors() -> Sequence[ClientInterceptor]:
    return [TracingInterceptor()]
```

I haven't got tracing working well yet, so I'm no expert and still a bit lost in the OTel stuff, but I can say I see this log output in tests that run a workflow. Maybe for all tests that run workflows, not 100% sure.
Is that in a workflow file? Have you made sure you pass through the import?
Not sure I totally follow, but I can say I've been using
Yes, if you're not using the sandbox then it must be some other issue. We have unit tests for OTel at https://github.com/temporalio/sdk-python/blob/main/tests/contrib/test_opentelemetry.py, but the original user's sample recreates the tracer twice, and yours uses Sentry + OTel, which shouldn't matter. I will try to create a small reproducible test case when I can get to this issue.
Just wanted to jot down that I think this might just be an issue on worker exit. This happens all over the place in my tests, but I'm creating and shutting down workers all over the place as well. In production I see a lot of these errors when a worker is killed.
Yeah, there's an issue with detaching the context on generator exit (i.e. when the event loop is GC'd). May be similar to open-telemetry/opentelemetry-python#2606. I'll probably have to just swallow the detachment failure.
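The workaround described here (swallowing the detachment failure) can be sketched without any OTel dependency; `detach` below is a stand-in for `opentelemetry.context.detach`, and the helper name is hypothetical:

```python
# Generic sketch of the proposed fix: swallow a failure from context
# detachment when a generator is finalized late (event loop GC'd).
# `detach` is a stand-in callable, not the real OTel function.
import logging

logger = logging.getLogger(__name__)

def detach_ignoring_errors(detach, token) -> bool:
    """Detach `token`; return True on success, False if a failure was swallowed."""
    try:
        detach(token)
        return True
    except Exception:
        # When the event loop is GC'd mid-generator, the context may already
        # be torn down; log at debug level instead of raising.
        logger.debug("failed to detach context", exc_info=True)
        return False

def broken_detach(token):
    # Simulates the error seen when the token's context is already gone.
    raise ValueError("token was created in a different Context")
```

With this shape, worker shutdown no longer turns every late generator finalization into an error-level log line.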
Sorry, I don't follow. Should I not create the tracer twice?
Correct. You are already setting it globally in
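For context, OpenTelemetry's global tracer provider is meant to be installed exactly once at process start; later `set_tracer_provider` calls are rejected with a warning rather than overriding the first. A dependency-free sketch of that configure-once behavior (the names mirror, but are not, the OTel API):

```python
# Illustrative configure-once global, mirroring how OTel's
# trace.set_tracer_provider behaves: the first call wins, later calls no-op.
_provider = None

def set_tracer_provider(provider) -> bool:
    """Install the global provider; return False if one was already set."""
    global _provider
    if _provider is not None:
        # The real OTel implementation logs a warning and keeps the original.
        return False
    _provider = provider
    return True

def get_tracer_provider():
    """Return whatever provider was installed first (or None)."""
    return _provider
```

This is why recreating the tracer provider twice is at best a no-op and at worst a source of confusing, half-configured pipelines.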
What are you really trying to do?
I am currently using OpenTelemetry and Temporal Interceptors to send traces to Grafana Tempo via Grafana Agent.
Describe the bug
It just returns errors like this without any more detail.
Minimal Reproduction
Environment/Versions
OS and processor: AWS | EKS | Docker | Ubuntu
Temporal version:
temporalio = "^0.1b2"
OTel version:
Are you using Docker or Kubernetes or building Temporal from source?
Yes
Additional context
N/A