Closed
Description
If I set the log level to ERROR or anything higher, I see an OverflowError in the producer's I/O thread.
Code
import time
import logging
from kafka import KafkaProducer
logging.basicConfig(level=logging.ERROR)
producer = KafkaProducer(bootstrap_servers=["kafka-in-docker:9092"], api_version=(7,3,1))
producer.send("foobar", b"hello world")
time.sleep(3)
Error
ERROR:kafka.producer.sender:Uncaught error in kafka producer I/O thread
Traceback (most recent call last):
File "/usr/local/lib/gravity/poetry/venvs/hpe-cnx-ctb-b6oAM6xe-py3.10/lib/python3.10/site-packages/kafka/producer/sender.py", line 60, in run
self.run_once()
File "/usr/local/lib/gravity/poetry/venvs/hpe-cnx-ctb-b6oAM6xe-py3.10/lib/python3.10/site-packages/kafka/producer/sender.py", line 160, in run_once
self._client.poll(timeout_ms=poll_timeout_ms)
File "/usr/local/lib/gravity/poetry/venvs/hpe-cnx-ctb-b6oAM6xe-py3.10/lib/python3.10/site-packages/kafka/client_async.py", line 600, in poll
self._poll(timeout / 1000)
File "/usr/local/lib/gravity/poetry/venvs/hpe-cnx-ctb-b6oAM6xe-py3.10/lib/python3.10/site-packages/kafka/client_async.py", line 634, in _poll
ready = self._selector.select(timeout)
File "/opt/pyenv/versions/3.10.12/lib/python3.10/selectors.py", line 469, in select
fd_event_list = self._selector.poll(timeout, max_ev)
OverflowError: timeout is too large
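The traceback bottoms out in the selector: on Linux, `selectors.EpollSelector.select()` passes its timeout down to `epoll_wait()`, which takes the timeout in milliseconds as a C int, so CPython rejects any value above INT_MAX milliseconds (roughly 24.8 days) before making the syscall. A minimal sketch reproducing the same limit, independent of kafka-python, using the closely related `select.poll()` API (same C-int millisecond timeout); the implication is that the producer's sender loop is computing an oversized poll timeout:

```python
import select

# poll()/epoll_wait() receive their timeout in milliseconds as a C int;
# CPython raises OverflowError for values outside the int range rather
# than passing them to the kernel.
poller = select.poll()
try:
    poller.poll(2**31 * 1000)  # milliseconds, far beyond INT_MAX
except OverflowError as exc:
    print(exc)
```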
Platform
I am running this code
with kafka-python==2.0.5
on Python 3.10.12
on Ubuntu 22.04.5 LTS (Jammy Jellyfish)
in Docker Desktop 4.38.0 (181591)
on macOS Sequoia 15.3.1 (24D70)