2025-12-06 01:00:37,488 - distributed.worker - ERROR - Exception during execution of task ('getitem-0d51c5c3085c0ba9edb097d16d739d58', 2).
Traceback (most recent call last):
  File "/opt/conda/lib/python3.10/site-packages/distributed/worker.py", line 2382, in _prepare_args_for_execution
    data[k] = self.data[k]
  File "/opt/conda/lib/python3.10/site-packages/distributed/spill.py", line 226, in __getitem__
    return super().__getitem__(key)
  File "/opt/conda/lib/python3.10/site-packages/zict/buffer.py", line 108, in __getitem__
    raise KeyError(key)
KeyError: "('shuffle-p2p-d1ae0c101f210d7c5f8ea8c7dc3cf426', 2)"

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/conda/lib/python3.10/site-packages/distributed/worker.py", line 2259, in execute
    args2, kwargs2 = self._prepare_args_for_execution(ts, args, kwargs)
  File "/opt/conda/lib/python3.10/site-packages/distributed/worker.py", line 2386, in _prepare_args_for_execution
    data[k] = Actor(type(self.state.actors[k]), self.address, k, self)
KeyError: "('shuffle-p2p-d1ae0c101f210d7c5f8ea8c7dc3cf426', 2)"
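Editor's note: the missing key above is a 'shuffle-p2p-*' partition, i.e. an output of Dask's P2P shuffle that this worker no longer holds in its (spilled) data store. A minimal sketch of a commonly used mitigation, assuming a Dask version recent enough to expose the `dataframe.shuffle.method` config key, is to fall back to the task-based shuffle instead of P2P; this is a workaround, not a root-cause fix:

```python
# Sketch: fall back from the P2P shuffle to the task-based shuffle.
# Assumption: dask is installed and recent enough that the
# "dataframe.shuffle.method" config key exists (values: "p2p", "tasks", "disk").
import dask

# Applies process-wide; can also be scoped with `with dask.config.set(...):`
dask.config.set({"dataframe.shuffle.method": "tasks"})

print(dask.config.get("dataframe.shuffle.method"))
```

Given the 3.73 GiB worker memory reported below, heavy spilling during the shuffle is a plausible contributing factor; retrying with more memory per worker is another option.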
2025-12-05 08:40:49,197 - distributed.worker - WARNING - Compute Failed
Key:       ('assign-96356a2d6db04b819cb16e4ccb058d2b', 2)
Function:  subgraph_callable-5c713e64-b051-4a08-80cc-a7aaae13
args:      (     source_id                                  geometry
784   2794848552  POINT Z (444409.679 5895517.174 0.000))
kwargs:    {}
Exception: "GEOSException('IllegalArgumentException: point array must contain 0 or >1 elements\n')"
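Editor's note: GEOS raises "point array must contain 0 or >1 elements" when a linear geometry is built from exactly one coordinate. The failing partition's args show a single POINT Z row, so a plausible (assumed, not confirmed by the log) cause is a task that constructs a `LineString` from grouped points and hits a one-point group. A minimal sketch reproducing the error and the usual guard:

```python
# Sketch: reproduce the GEOSException from the log and guard against it.
# Assumption: the failing task builds a LineString from per-group points
# (e.g. per source_id); shapely 2.x is installed.
from shapely.geometry import LineString, Point

pts = [Point(444409.679, 5895517.174, 0.0)]  # a single POINT Z, as in args above

# GEOS rejects a 1-point coordinate array for a LineString:
try:
    LineString(pts)
except Exception as exc:
    print(type(exc).__name__, exc)

# Guard: only build a line when the group has at least two points.
line = LineString(pts) if len(pts) > 1 else None
print(line)
```

In a dask-geopandas pipeline the same guard would sit inside the function applied per group, e.g. filtering groups with fewer than two points before constructing geometries.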
2025-12-05 08:03:44,992 - distributed.worker - INFO - -------------------------------------------------
2025-12-05 08:03:44,992 - distributed.worker - INFO - Registered to: tcp://dask-scheduler:8786
2025-12-05 08:03:44,540 - distributed.worker - INFO - -------------------------------------------------
2025-12-05 08:03:44,540 - distributed.worker - INFO - Local Directory: /tmp/dask-worker-space/worker-nox7vk6i
2025-12-05 08:03:44,540 - distributed.worker - INFO - Memory: 3.73 GiB
2025-12-05 08:03:44,540 - distributed.worker - INFO - Threads: 1
2025-12-05 08:03:44,540 - distributed.worker - INFO - -------------------------------------------------
2025-12-05 08:03:44,540 - distributed.worker - INFO - Waiting to connect to: tcp://dask-scheduler:8786
2025-12-05 08:03:44,540 - distributed.worker - INFO - dashboard at: 172.21.12.191:8790
2025-12-05 08:03:44,539 - distributed.worker - INFO - Listening to: tcp://172.21.12.191:42709
2025-12-05 08:03:44,539 - distributed.worker - INFO - Start worker at: tcp://172.21.12.191:42709