What happened (please include outputs or screenshots):
When I run a watch on k8s Jobs (this probably applies to other objects) and the k8s control plane gets upgraded in the meantime, the watcher silently stops producing events. No exception is raised.
What you expected to happen:
Either an exception is raised or events continue to arrive.
How to reproduce it (as minimally and precisely as possible):
- start watcher:
```python
from kubernetes import client, config, watch

config.load_incluster_config()
batch_v1_api = client.BatchV1Api()
watcher = watch.Watch()
for event in watcher.stream(
    func=batch_v1_api.list_namespaced_job,
    namespace="mynamespace",
    timeout_seconds=0,
):
    print(event)
```
- upgrade k8s control plane
- create some new jobs
- new events do not come in and no exception is raised
Anything else we need to know?:
Reproduced on GKE stable channel with the last two control plane upgrades
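As a workaround sketch (not a fix), the watch can be wrapped in a restart loop so a stream that ends or goes silent is reopened. The `make_stream` factory below is a toy stand-in for the real `watcher.stream(...)` call, and `TimeoutError` stands in for whatever the client raises on a dead connection; assumptions, not the library's actual behavior. In the real client one would likely also pass a client-side `_request_timeout` so a silent connection eventually raises instead of blocking forever.

```python
import itertools

def resilient_stream(make_stream):
    """Yield events from a watch-style stream, reopening the stream
    whenever it ends or the connection goes silent (raises TimeoutError)."""
    while True:
        try:
            for event in make_stream():
                yield event
        except TimeoutError:
            pass  # dead connection detected: fall through and reopen

# Toy stand-in for watcher.stream(...): the first connection dies
# mid-stream, the second delivers the remaining event.
calls = {"n": 0}
def make_stream():
    calls["n"] += 1
    if calls["n"] == 1:
        yield {"type": "ADDED", "object": "job-1"}
        raise TimeoutError("connection went silent")
    yield {"type": "ADDED", "object": "job-2"}

events = list(itertools.islice(resilient_stream(make_stream), 2))
print([e["object"] for e in events])  # → ['job-1', 'job-2']
```

In practice `make_stream` would reopen `watcher.stream(batch_v1_api.list_namespaced_job, ...)`; since each restart re-lists, duplicate events may need to be filtered (e.g. by tracking `resourceVersion`).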
Environment:
- Kubernetes version (`kubectl version`): 1.33.5-gke.1080000
- OS (e.g., MacOS 10.13.6): linux
- Python version (`python --version`): 3.12.12
- Python client version (`pip list | grep kubernetes`): 33.1.0