2025-12-06 01:00:50,925 - distributed.worker - WARNING - Compute Failed
Key:       ('shuffle-p2p-86bd619d06d55badca637dddcb6987c5', 5)
Function:  shuffle_unpack
args:      ('d4f2bcb88bd283c797b0c8a2e03a2799', 5, 21324)
kwargs:    {}
Exception: "RuntimeError('shuffle_unpack failed during shuffle d4f2bcb88bd283c797b0c8a2e03a2799')"
2025-12-06 01:00:17,645 - distributed.worker - ERROR - Exception during execution of task ('max-2e03880cd50e79e35b0c0be573cac2f7', 5).
Traceback (most recent call last):
  File "/opt/conda/lib/python3.10/site-packages/distributed/worker.py", line 2382, in _prepare_args_for_execution
    data[k] = self.data[k]
  File "/opt/conda/lib/python3.10/site-packages/distributed/spill.py", line 226, in __getitem__
    return super().__getitem__(key)
  File "/opt/conda/lib/python3.10/site-packages/zict/buffer.py", line 108, in __getitem__
    raise KeyError(key)
KeyError: "('getitem-91c858666facca416c7bde7bd9060128', 5)"

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/conda/lib/python3.10/site-packages/distributed/worker.py", line 2259, in execute
    args2, kwargs2 = self._prepare_args_for_execution(ts, args, kwargs)
  File "/opt/conda/lib/python3.10/site-packages/distributed/worker.py", line 2386, in _prepare_args_for_execution
    data[k] = Actor(type(self.state.actors[k]), self.address, k, self)
KeyError: "('getitem-91c858666facca416c7bde7bd9060128', 5)"
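Note: the KeyError above is raised while _prepare_args_for_execution looks up a dependency in the worker's spill buffer, which usually means the intermediate result was evicted or lost under memory pressure; with 1 thread and 3.73 GiB per worker (see the startup banner at the bottom), this is plausible. A minimal sketch of loosening the worker memory thresholds is shown below; the numbers are purely illustrative, the config keys assume a recent distributed release, and workers only pick this up if the config is present in their environment at startup (not just on the client).

import dask
from dask.distributed import Client

# Illustrative values only; tune against the 3.73 GiB per-worker limit above.
# Workers read these keys at startup, e.g. via DASK_* env vars or worker config files.
dask.config.set({
    "distributed.worker.memory.target": 0.60,     # start spilling to disk earlier
    "distributed.worker.memory.spill": 0.70,
    "distributed.worker.memory.pause": 0.85,      # pause new tasks before the hard limit
    "distributed.worker.memory.terminate": 0.95,  # nanny restarts a runaway worker
})

client = Client("tcp://dask-scheduler:8786")  # scheduler address taken from the startup banner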
2025-12-05 04:10:11,502 - distributed.worker - WARNING - Compute Failed
Key:       ('assign-a1ca1211d630b21e5b87038005d3d25a', 3)
Function:  subgraph_callable-33c01b52-9d77-47e4-add5-39bfea9a
args:      (      source_id                                 geometry
280  2794848536  POINT Z (431620.306 5887784.897 0.000)
416  2794848534  POINT Z (442638.501 5895338.032 0.000)
417  2794848534  POINT Z (442635.386 5895335.600 0.000)
419  2794848534  POINT Z (442632.324 5895333.177 0.000)
420  2794848534  POINT Z (442629.464 5895330.898 0.000)
..          ...                                      ...
642  2794848534  POINT Z (442319.795 5895108.949 0.000)
644  2794848534  POINT Z (442317.212 5895106.911 0.000)
648  2794848534  POINT Z (442306.259 5895098.832 0.000)
651  2794848534  POINT Z (442298.311 5895092.920 0.000)
653  2794848534  POINT Z (442294.806 5895090.337 0.000)
[128 rows x 2 columns])
kwargs:    {}
Exception: "GEOSException('IllegalArgumentException: point array must contain 0 or >1 elements\\n')"
2025-12-04 18:10:45,274 - distributed.worker - WARNING - Compute Failed
Key:       ('assign-7f9d3741f6b65ad9f6b5deabb3451288', 13)
Function:  subgraph_callable-a17b2dbf-4a28-42f2-94b1-147f374c
args:      (      source_id                                 geometry
191  2794848586  POINT Z (444155.528 5894830.608 0.000))
kwargs:    {}
Exception: "GEOSException('IllegalArgumentException: point array must contain 0 or >1 elements\\n')"
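Note: the GEOSException above is the error GEOS raises when a LineString (or LinearRing) is built from exactly one coordinate; the failing partitions each contain a single POINT per source_id. A minimal guard is sketched below, assuming the pipeline builds per-source lines from point geometries; `points_ddf` and the grouping are placeholders inferred from the log, not the pipeline's actual code.

import dask.dataframe as dd
from shapely.geometry import LineString

def to_line(df):
    # GEOS rejects a 1-point coordinate array ("0 or >1 elements"),
    # so skip groups that cannot form a line.
    if len(df) < 2:
        return None
    return LineString(df.geometry.tolist())

lines = (
    points_ddf.groupby("source_id")        # hypothetical input, one row per point
    .apply(to_line, meta=("geometry", "object"))
    .dropna()
)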
2025-12-04 08:30:58,859 - distributed.worker - ERROR - Worker stream died during communication: tcp://172.21.25.101:40477
Traceback (most recent call last):
  File "/opt/conda/lib/python3.10/site-packages/tornado/iostream.py", line 869, in _read_to_buffer
    bytes_read = self.read_from_fd(buf)
  File "/opt/conda/lib/python3.10/site-packages/tornado/iostream.py", line 1138, in read_from_fd
    return self.socket.recv_into(buf, len(buf))
ConnectionResetError: [Errno 104] Connection reset by peer

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/opt/conda/lib/python3.10/site-packages/distributed/worker.py", line 2066, in gather_dep
    response = await get_data_from_worker(
  File "/opt/conda/lib/python3.10/site-packages/distributed/worker.py", line 2892, in get_data_from_worker
    response = await send_recv(
  File "/opt/conda/lib/python3.10/site-packages/distributed/core.py", line 1024, in send_recv
    response = await comm.read(deserializers=deserializers)
  File "/opt/conda/lib/python3.10/site-packages/distributed/comm/tcp.py", line 241, in read
    convert_stream_closed_error(self, e)
  File "/opt/conda/lib/python3.10/site-packages/distributed/comm/tcp.py", line 142, in convert_stream_closed_error
    raise CommClosedError(f"in {obj}: {exc.__class__.__name__}: {exc}") from exc
distributed.comm.core.CommClosedError: in <TCP (closed) Ephemeral Worker->Worker for gather local=tcp://172.21.159.199:59966 remote=tcp://172.21.25.101:40477>: ConnectionResetError: [Errno 104] Connection reset by peer
2025-12-03 14:32:34,500 - distributed.worker - WARNING - Compute Failed
Key:       shuffle-barrier-9c85337d9ebdbf4376cc35340c7ff7ef
Function:  shuffle_barrier
args:      ('9c85337d9ebdbf4376cc35340c7ff7ef', [18319, 18319, 18319, 18319, 18319, 18319, 18319, 18319, 18319, 18319, 18319, 18319, 18319, 18319, 18319, 18319])
kwargs:    {}
Exception: "RuntimeError('shuffle_barrier failed during shuffle 9c85337d9ebdbf4376cc35340c7ff7ef')"
2025-12-03 07:32:23,333 - distributed.worker - ERROR - Worker stream died during communication: tcp://172.21.25.102:36683
Traceback (most recent call last):
  File "/opt/conda/lib/python3.10/site-packages/tornado/iostream.py", line 869, in _read_to_buffer
    bytes_read = self.read_from_fd(buf)
  File "/opt/conda/lib/python3.10/site-packages/tornado/iostream.py", line 1138, in read_from_fd
    return self.socket.recv_into(buf, len(buf))
ConnectionResetError: [Errno 104] Connection reset by peer

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/opt/conda/lib/python3.10/site-packages/distributed/worker.py", line 2066, in gather_dep
    response = await get_data_from_worker(
  File "/opt/conda/lib/python3.10/site-packages/distributed/worker.py", line 2892, in get_data_from_worker
    response = await send_recv(
  File "/opt/conda/lib/python3.10/site-packages/distributed/core.py", line 1024, in send_recv
    response = await comm.read(deserializers=deserializers)
  File "/opt/conda/lib/python3.10/site-packages/distributed/comm/tcp.py", line 241, in read
    convert_stream_closed_error(self, e)
  File "/opt/conda/lib/python3.10/site-packages/distributed/comm/tcp.py", line 142, in convert_stream_closed_error
    raise CommClosedError(f"in {obj}: {exc.__class__.__name__}: {exc}") from exc
distributed.comm.core.CommClosedError: in <TCP (closed) Ephemeral Worker->Worker for gather local=tcp://172.21.159.199:57996 remote=tcp://172.21.25.102:36683>: ConnectionResetError: [Errno 104] Connection reset by peer
2025-12-03 07:31:51,464 - distributed.worker - WARNING - Compute Failed
Key:       shuffle-barrier-59dd3d058fa856abd328333a7fc8616b
Function:  shuffle_barrier
args:      ('59dd3d058fa856abd328333a7fc8616b', [17772, 17772, 17772, 17772, 17772, 17772, 17772, 17772, 17772, 17772, 17772, 17772, 17772, 17772, 17772, 17772])
kwargs:    {}
Exception: "RuntimeError('shuffle_barrier failed during shuffle 59dd3d058fa856abd328333a7fc8616b')"
2025-12-03 05:31:06,953 - distributed.worker - WARNING - Compute Failed
Key:       shuffle-barrier-da7fec5ecb0608e2668fa35aad021412
Function:  shuffle_barrier
args:      ('da7fec5ecb0608e2668fa35aad021412', [17514, 17514, 17514, 17514, 17514, 17514, 17514, 17514, 17514, 17514, 17514, 17514, 17514, 17514, 17514, 17514])
kwargs:    {}
Exception: "RuntimeError('shuffle_barrier failed during shuffle da7fec5ecb0608e2668fa35aad021412')"
2025-12-03 04:00:55,938 - distributed.worker - ERROR - Worker stream died during communication: tcp://172.21.12.150:43201
Traceback (most recent call last):
  File "/opt/conda/lib/python3.10/site-packages/tornado/iostream.py", line 869, in _read_to_buffer
    bytes_read = self.read_from_fd(buf)
  File "/opt/conda/lib/python3.10/site-packages/tornado/iostream.py", line 1138, in read_from_fd
    return self.socket.recv_into(buf, len(buf))
ConnectionResetError: [Errno 104] Connection reset by peer

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/opt/conda/lib/python3.10/site-packages/distributed/worker.py", line 2066, in gather_dep
    response = await get_data_from_worker(
  File "/opt/conda/lib/python3.10/site-packages/distributed/worker.py", line 2892, in get_data_from_worker
    response = await send_recv(
  File "/opt/conda/lib/python3.10/site-packages/distributed/core.py", line 1024, in send_recv
    response = await comm.read(deserializers=deserializers)
  File "/opt/conda/lib/python3.10/site-packages/distributed/comm/tcp.py", line 241, in read
    convert_stream_closed_error(self, e)
  File "/opt/conda/lib/python3.10/site-packages/distributed/comm/tcp.py", line 142, in convert_stream_closed_error
    raise CommClosedError(f"in {obj}: {exc.__class__.__name__}: {exc}") from exc
distributed.comm.core.CommClosedError: in <TCP (closed) Ephemeral Worker->Worker for gather local=tcp://172.21.159.199:51484 remote=tcp://172.21.12.150:43201>: ConnectionResetError: [Errno 104] Connection reset by peer
2025-12-02 14:34:30,004 - distributed.worker - WARNING - Compute Failed
Key:       ('assign-e5bb50facab4e66370d80f691c72c46e', 7)
Function:  subgraph_callable-5fbe17e6-e66f-4c7a-9158-2f3224bd
args:      (      source_id                                 geometry
140  2794848524  POINT Z (445821.963 5896421.125 0.000))
kwargs:    {}
Exception: "GEOSException('IllegalArgumentException: point array must contain 0 or >1 elements\\n')"
2025-12-01 10:31:35,572 - distributed.worker - ERROR - failed during get data with tcp://172.21.159.199:36327 -> tcp://172.21.25.102:38633
Traceback (most recent call last):
  File "/opt/conda/lib/python3.10/site-packages/tornado/iostream.py", line 869, in _read_to_buffer
    bytes_read = self.read_from_fd(buf)
  File "/opt/conda/lib/python3.10/site-packages/tornado/iostream.py", line 1138, in read_from_fd
    return self.socket.recv_into(buf, len(buf))
ConnectionResetError: [Errno 104] Connection reset by peer

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/opt/conda/lib/python3.10/site-packages/distributed/worker.py", line 1793, in get_data
    response = await comm.read(deserializers=serializers)
  File "/opt/conda/lib/python3.10/site-packages/distributed/comm/tcp.py", line 241, in read
    convert_stream_closed_error(self, e)
  File "/opt/conda/lib/python3.10/site-packages/distributed/comm/tcp.py", line 142, in convert_stream_closed_error
    raise CommClosedError(f"in {obj}: {exc.__class__.__name__}: {exc}") from exc
distributed.comm.core.CommClosedError: in <TCP (closed) local=tcp://172.21.159.199:36327 remote=tcp://172.21.25.102:36082>: ConnectionResetError: [Errno 104] Connection reset by peer
2025-12-01 10:30:42,786 - distributed.worker - WARNING - Compute Failed
Key:       shuffle-barrier-028f4d8dbd6b7ff3f755e6895e818cbc
Function:  shuffle_barrier
args:      ('028f4d8dbd6b7ff3f755e6895e818cbc', [15005, 15005, 15005, 15005, 15005, 15005, 15005, 15005, 15005, 15005, 15005, 15005, 15005, 15005, 15005, 15005])
kwargs:    {}
Exception: "RuntimeError('shuffle_barrier failed during shuffle 028f4d8dbd6b7ff3f755e6895e818cbc')"
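Note: the shuffle_barrier and shuffle_transfer RuntimeErrors in this log consistently follow a "Connection reset by peer" from another worker, i.e. a peer died mid-shuffle (plausibly OOM-killed given the 3.73 GiB limit in the startup banner below) and took its p2p shuffle buffers with it. A sketch of two workarounds follows; the config keys assume a recent dask/distributed release and should be verified against the installed version.

import dask

# Sketch, not a definitive fix: either fall back to the task-based shuffle so a
# lost worker only costs retried tasks, or keep p2p but let the scheduler
# tolerate more failures while workers churn.
dask.config.set({
    "dataframe.shuffle.method": "tasks",          # avoid the p2p engine for this join/shuffle
    "distributed.scheduler.allowed-failures": 10, # default is 3
})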
2025-11-29 06:03:36,888 - distributed.worker - WARNING - Compute Failed
Key:       ('hash-join-7ef5b3f2b603285b80084a8d4552bc03', 3)
Function:  merge_unpack
args:      ('9890fa55dea6f795459504f4d2b757c7', '3ec60cf056de4a75622ab028bf893f01', 3, 12302, 12304, 'inner', 'hashed_source_and_date', 'hashed_source_and_date', <distributed.protocol.serialize.Serialized object at 0x7fa7cce612a0>, ['_x', '_y'])
kwargs:    {}
Exception: "RuntimeError('Worker tcp://172.21.12.197:38387 left during active shuffle 9890fa55dea6f795459504f4d2b757c7')"
2025-11-29 06:01:36,285 - distributed.worker - WARNING - Compute Failed
Key:       shuffle-barrier-cb5fee42f69c39b58db1afd88e40ae26
Function:  shuffle_barrier
args:      ('cb5fee42f69c39b58db1afd88e40ae26', [12234, 12234, 12234, 12234, 12234, 12234, 12234, 12234, 12234, 12234, 12234, 12234, 12234, 12234, 12234, 12234])
kwargs:    {}
Exception: "RuntimeError('shuffle_barrier failed during shuffle cb5fee42f69c39b58db1afd88e40ae26')"
2025-11-29 01:00:58,271 - distributed.worker - ERROR - Exception during execution of task ('max-4031e9a17e9d67cc822b23e2b405af39', 5).
Traceback (most recent call last):
  File "/opt/conda/lib/python3.10/site-packages/distributed/worker.py", line 2382, in _prepare_args_for_execution
    data[k] = self.data[k]
  File "/opt/conda/lib/python3.10/site-packages/distributed/spill.py", line 226, in __getitem__
    return super().__getitem__(key)
  File "/opt/conda/lib/python3.10/site-packages/zict/buffer.py", line 108, in __getitem__
    raise KeyError(key)
KeyError: "('getitem-69409d0612acb88106181d8760fe41c0', 5)"

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/conda/lib/python3.10/site-packages/distributed/worker.py", line 2259, in execute
    args2, kwargs2 = self._prepare_args_for_execution(ts, args, kwargs)
  File "/opt/conda/lib/python3.10/site-packages/distributed/worker.py", line 2386, in _prepare_args_for_execution
    data[k] = Actor(type(self.state.actors[k]), self.address, k, self)
KeyError: "('getitem-69409d0612acb88106181d8760fe41c0', 5)"
2025-11-28 07:33:15,042 - distributed.worker - WARNING - Compute Failed
Key:       ('shuffle-transfer-259c364d1e73869684ab3e539f8b625b', 5)
Function:  shuffle_transfer
args:      (          source_id      timestamp  ...  date_diff_ms  _partitions
0        2794848547  1764314371573  ...   417.0  0
1        2794848523  1764314377391  ...   207.0  7
2        2794848523  1764314379839  ...   203.0  7
3        2794848547  1764314387364  ...  1884.0  0
4        2794848547  1764314388776  ...   731.0  0
...             ...            ...  ...     ...  ...
1680664  2794848523  1764314719099  ...   166.0  7
1680665  2794848523  1764314720381  ...    38.0  7
1680666  2794848523  1764314721588  ...   123.0  7
1680667  2794848523  1764314724913  ...    68.0  7
1680668  2794848523  1764314728625  ...    34.0  7
[1680669 rows x 5 columns], '259c364d1e73869684ab3e539f8b625b', 5, 16, '_partitions')
kwargs:    {}
Exception: "RuntimeError('shuffle_transfer failed during shuffle 259c364d1e73869684ab3e539f8b625b')"
2025-11-28 07:33:14,536 - distributed.worker - WARNING - Compute Failed
Key:       ('hash-join-transfer-ed667638928568cb3f804bf82c468225', 5)
Function:  merge_transfer
args:      (         schema_version  detector_id  ...  date_diff_ms  __hash_partition
0        1  2794848467  ...   417.0  5
1        1  2794848503  ...   207.0  5
2        1  2794848503  ...   203.0  5
3        1  2794848467  ...  1884.0  5
4        1  2794848467  ...   731.0  5
...    ...         ...  ...     ...  ...
1680664  1  2794848418  ...   166.0  5
1680665  1  2794848517  ...    38.0  5
1680666  1  2794848418  ...   123.0  5
1680667  1  2794848418  ...    68.0  5
1680668  1  2794848517  ...    34.0  5
[1680669 rows x 28 columns], 'ed667638928568cb3f804bf82c468225', 5, 16)
kwargs:    {}
Exception: "RuntimeError('shuffle_transfer failed during shuffle ed667638928568cb3f804bf82c468225')"
2025-11-28 07:33:02,694 - distributed.worker - WARNING - Compute Failed
Key:       ('shuffle-transfer-064ac4c02fbaebdb940e8300e19fec0b', 5)
Function:  shuffle_transfer
args:      (          source_id      timestamp  ...  date_diff_ms  _partitions
0        2794848523  1764314529359  ...   13.0  7
1        2794848523  1764314529996  ...    9.0  7
2        2794848523  1764314530750  ...  143.0  7
3        2794848547  1764314530847  ...  826.0  0
4        2794848523  1764314533100  ...  205.0  7
...             ...            ...  ...    ...  ...
1680664  2794848523  1764313959873  ...  296.0  7
1680665  2794848523  1764313961394  ...    6.0  7
1680666  2794848523  1764313961701  ...    8.0  7
1680667  2794848523  1764313965356  ...  302.0  7
1680668  2794848523  1764313966569  ...  295.0  7
[1680669 rows x 5 columns], '064ac4c02fbaebdb940e8300e19fec0b', 5, 16, '_partitions')
kwargs:    {}
Exception: "RuntimeError('shuffle_transfer failed during shuffle 064ac4c02fbaebdb940e8300e19fec0b')"
2025-11-28 07:33:02,091 - distributed.worker - WARNING - Compute Failed
Key:       ('hash-join-transfer-c90fdcb7a0fc62a9a2b6ab8228823ed9', 5)
Function:  merge_transfer
args:      (         schema_version  detector_id  ...  date_diff_ms  __hash_partition
0        1  2794848518  ...   13.0  5
1        1  2794848518  ...    9.0  5
2        1  2794848518  ...  143.0  5
3        1  2794848516  ...  826.0  5
4        1  2794848518  ...  205.0  5
...    ...         ...  ...    ...  ...
1680664  1  2794848417  ...  296.0  5
1680665  1  2794848443  ...    6.0  5
1680666  1  2794848443  ...    8.0  5
1680667  1  2794848443  ...  302.0  5
1680668  1  2794848417  ...  295.0  5
[1680669 rows x 28 columns], 'c90fdcb7a0fc62a9a2b6ab8228823ed9', 5, 16)
kwargs:    {}
Exception: "RuntimeError('shuffle_transfer failed during shuffle c90fdcb7a0fc62a9a2b6ab8228823ed9')"
2025-11-28 07:32:23,385 - distributed.worker - WARNING - Compute Failed
Key:       ('shuffle-transfer-d0d0afbbdb9503b06df5e2f228b9700e', 5)
Function:  shuffle_transfer
args:      (          source_id      timestamp  ...  date_diff_ms  _partitions
0        2794848615  1764314450684  ...  418.0  1
1        2794848615  1764314451625  ...  543.0  1
2        2794848523  1764314452230  ...  204.0  7
3        2794848615  1764314452617  ...  510.0  1
4        2794848523  1764314452840  ...  203.0  7
...             ...            ...  ...    ...  ...
1680664  2794848547  1764313883275  ...  611.0  0
1680665  2794848523  1764313889819  ...  304.0  7
1680666  2794848523  1764313890113  ...  289.0  7
1680667  2794848523  1764313893188  ...   17.0  7
1680668  2794848615  1764313893687  ...  275.0  1
[1680669 rows x 5 columns], 'd0d0afbbdb9503b06df5e2f228b9700e', 5, 16, '_partitions')
kwargs:    {}
Exception: "RuntimeError('shuffle_transfer failed during shuffle d0d0afbbdb9503b06df5e2f228b9700e')"
2025-11-28 07:32:22,880 - distributed.worker - WARNING - Compute Failed
Key:       ('hash-join-transfer-5f05dc4c9f119a2e3195104fa850f5b3', 5)
Function:  merge_transfer
args:      (         schema_version  detector_id  ...  date_diff_ms  __hash_partition
0        1  2794848504  ...  418.0  5
1        1  2794848504  ...  543.0  5
2        1  2794848422  ...  204.0  5
3        1  2794848504  ...  510.0  5
4        1  2794848422  ...  203.0  5
...    ...         ...  ...    ...  ...
1680664  1  2794848471  ...  611.0  5
1680665  1  2794848424  ...  304.0  5
1680666  1  2794848417  ...  289.0  5
1680667  1  2794848424  ...   17.0  5
1680668  1  2794848467  ...  275.0  5
[1680669 rows x 28 columns], '5f05dc4c9f119a2e3195104fa850f5b3', 5, 16)
kwargs:    {}
Exception: "RuntimeError('shuffle_transfer failed during shuffle 5f05dc4c9f119a2e3195104fa850f5b3')"
2025-11-28 03:31:45,946 - distributed.worker - WARNING - Compute Failed
Key:       shuffle-barrier-dc2823feafb278d9a143a75cbab28542
Function:  shuffle_barrier
args:      ('dc2823feafb278d9a143a75cbab28542', [10797, 10797, 10797, 10797, 10797, 10797, 10797, 10797, 10797, 10797, 10797, 10797, 10797, 10797, 10797, 10797])
kwargs:    {}
Exception: "RuntimeError('shuffle_barrier failed during shuffle dc2823feafb278d9a143a75cbab28542')"
2025-11-27 08:02:34,650 - distributed.worker - WARNING - Compute Failed
Key:       ('hash-join-982d3c008ad7f99d414984d86793f0e7', 6)
Function:  merge_unpack
args:      ('4adefae537d0c50b5ba522a592b6dc84', 'dff51af84b99a105febd33c00f36797d', 6, 9918, 9920, 'inner', 'hashed_source_and_date', 'hashed_source_and_date', <distributed.protocol.serialize.Serialized object at 0x7fa8602e1ae0>, ['_x', '_y'])
kwargs:    {}
Exception: "RuntimeError('Worker tcp://172.21.12.165:42389 left during active shuffle 4adefae537d0c50b5ba522a592b6dc84')"
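Note: merge_unpack only succeeds if every worker holding buffered shuffle partitions is still alive; when one leaves mid-shuffle, the whole hash join has to be rerun. A sketch of resubmitting with smaller partitions and task retries is shown below; `left`, `right`, and the 100 MB target are placeholders (the join key and 'inner' mode are taken from the log), and retries alone may not be enough if workers keep dying from memory pressure.

# Shrink partitions so each worker holds less join/shuffle state, then retry
# tasks that fail transiently when a worker leaves. Values are illustrative.
left = left.repartition(partition_size="100MB")
right = right.repartition(partition_size="100MB")

joined = left.merge(right, on="hashed_source_and_date", how="inner")
result = joined.compute(retries=5)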
2025-11-27 08:01:23,387 - distributed.worker - ERROR - Worker stream died during communication: tcp://172.21.159.210:41855
Traceback (most recent call last):
  File "/opt/conda/lib/python3.10/site-packages/tornado/iostream.py", line 869, in _read_to_buffer
    bytes_read = self.read_from_fd(buf)
  File "/opt/conda/lib/python3.10/site-packages/tornado/iostream.py", line 1138, in read_from_fd
    return self.socket.recv_into(buf, len(buf))
ConnectionResetError: [Errno 104] Connection reset by peer

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/opt/conda/lib/python3.10/site-packages/distributed/worker.py", line 2066, in gather_dep
    response = await get_data_from_worker(
  File "/opt/conda/lib/python3.10/site-packages/distributed/worker.py", line 2892, in get_data_from_worker
    response = await send_recv(
  File "/opt/conda/lib/python3.10/site-packages/distributed/core.py", line 1024, in send_recv
    response = await comm.read(deserializers=deserializers)
  File "/opt/conda/lib/python3.10/site-packages/distributed/comm/tcp.py", line 241, in read
    convert_stream_closed_error(self, e)
  File "/opt/conda/lib/python3.10/site-packages/distributed/comm/tcp.py", line 142, in convert_stream_closed_error
    raise CommClosedError(f"in {obj}: {exc.__class__.__name__}: {exc}") from exc
distributed.comm.core.CommClosedError: in <TCP (closed) Ephemeral Worker->Worker for gather local=tcp://172.21.159.199:42930 remote=tcp://172.21.159.210:41855>: ConnectionResetError: [Errno 104] Connection reset by peer
2025-11-27 08:01:23,260 - distributed.worker - WARNING - Compute Failed
Key:       ('hash-join-transfer-91041821273392b9ef4c3b0f2894d983', 5)
Function:  merge_transfer
args:      (     schema_version  detector_id  ...  date_diff_ms  __hash_partition
0    1  2794848450  ...  625.0  5
1    1  2794848463  ...  675.0  5
2    1  2794848463  ...  662.0  5
3    1  2794848463  ...  580.0  5
4    1  2794848487  ...  532.0  5
..  ..         ...  ...    ...  ...
201  1  2794848474  ...    5.0  5
202  1  2794848474  ...  604.0  5
203  1  2794848474  ...  611.0  5
204  1  2794848474  ...    3.0  5
205  1  2794848516  ...  512.0  5
[206 rows x 22 columns], '91041821273392b9ef4c3b0f2894d983', 5, 16)
kwargs:    {}
Exception: "RuntimeError('shuffle_transfer failed during shuffle 91041821273392b9ef4c3b0f2894d983')"
2025-11-27 05:04:02,857 - distributed.worker - WARNING - Compute Failed
Key:       ('hash-join-2a60ba395db53340c682de36ce611246', 5)
Function:  merge_unpack
args:      ('69aaa2da1747969857e89a6fbd98fe6f', '97f5dd9a58ab10440a67c0140d82ebf0', 5, 9685, 9686, 'inner', 'hashed_source_and_date', 'hashed_source_and_date', <distributed.protocol.serialize.Serialized object at 0x7fa8404df8b0>, ['_x', '_y'])
kwargs:    {}
Exception: "KeyError('97f5dd9a58ab10440a67c0140d82ebf0')"
2025-11-27 05:00:59,565 - distributed.worker - ERROR - Exception during execution of task ('hash-join-transfer-4edd5dc1008abdcbc3c63c1adc314cb3', 5).
Traceback (most recent call last):
  File "/opt/conda/lib/python3.10/site-packages/distributed/worker.py", line 2382, in _prepare_args_for_execution
    data[k] = self.data[k]
  File "/opt/conda/lib/python3.10/site-packages/distributed/spill.py", line 226, in __getitem__
    return super().__getitem__(key)
  File "/opt/conda/lib/python3.10/site-packages/zict/buffer.py", line 108, in __getitem__
    raise KeyError(key)
KeyError: "('assign-3bdc3c0ea92c304fd4ed58082e87f778', 5)"

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/conda/lib/python3.10/site-packages/distributed/worker.py", line 2259, in execute
    args2, kwargs2 = self._prepare_args_for_execution(ts, args, kwargs)
  File "/opt/conda/lib/python3.10/site-packages/distributed/worker.py", line 2386, in _prepare_args_for_execution
    data[k] = Actor(type(self.state.actors[k]), self.address, k, self)
KeyError: "('assign-3bdc3c0ea92c304fd4ed58082e87f778', 5)"
2025-11-27 03:40:18,570 - distributed.worker - WARNING - Compute Failed
Key:       ('assign-4adaddd4e1ceeb4bc1ec528e451160e7', 13)
Function:  subgraph_callable-25524dc7-9f08-4e38-9fe5-07dd89f2
args:      (    source_id                                 geometry
0  2794848586  POINT Z (449412.099 5901774.957 0.000))
kwargs:    {}
Exception: "GEOSException('IllegalArgumentException: point array must contain 0 or >1 elements\\n')"
2025-11-26 16:32:05,041 - distributed.worker - INFO - -------------------------------------------------
2025-11-26 16:32:05,041 - distributed.worker - INFO - Registered to: tcp://dask-scheduler:8786
2025-11-26 16:32:04,622 - distributed.worker - INFO - -------------------------------------------------
2025-11-26 16:32:04,622 - distributed.worker - INFO - Local Directory: /tmp/dask-worker-space/worker-rn9zm_ta
2025-11-26 16:32:04,621 - distributed.worker - INFO - Memory: 3.73 GiB
2025-11-26 16:32:04,621 - distributed.worker - INFO - Threads: 1
2025-11-26 16:32:04,621 - distributed.worker - INFO - -------------------------------------------------
2025-11-26 16:32:04,621 - distributed.worker - INFO - Waiting to connect to: tcp://dask-scheduler:8786
2025-11-26 16:32:04,621 - distributed.worker - INFO - dashboard at: 172.21.159.199:8790
2025-11-26 16:32:04,621 - distributed.worker - INFO - Listening to: tcp://172.21.159.199:36327
2025-11-26 16:32:04,621 - distributed.worker - INFO - Start worker at: tcp://172.21.159.199:36327