2025-12-05 12:20:27,255 - distributed.worker - WARNING - Compute Failed
Key:       ('lambda-fac61fb1513c9ecdae17b261810b86b4', 0)
Function:  subgraph_callable-839d778c-a793-4e87-bc2e-9eb4cea3
args:      (      schema_version  detector_id   source_id  ...  parking    length_m  overtaking
1                  1   2794848456  2794848555  ...    False  246.097723        True
2                  1   2794848483  2794848555  ...    False  246.097723        True
3                  1   2794848456  2794848555  ...    False  246.097723        True
4                  1   2794848483  2794848555  ...    False  246.097723        True
6                  1   2794848456  2794848555  ...    False  246.097723        True
...              ...          ...         ...  ...      ...         ...         ...
2512               1   2794848516  2794848555  ...    False   20.517361        True
2515               1   2794848516  2794848555  ...    False   20.517361        True
2517               1   2794848516  2794848555  ...    False   20.517361        True
2519               1   2794848516  2794848555  ...    False   20.517361        True
2524               1   2794848516  2794848555  ...    False   20.517361        True

[8
kwargs:    {}
Exception: "ValueError('Found array with 1 sample(s) (shape=(1, 1)) while a minimum of 2 is required by AgglomerativeClustering.')"
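Note on the ValueError above: AgglomerativeClustering refuses to fit fewer than two samples, so any per-group or per-partition clustering step fails on groups that contain a single row, as happens here. The sketch below is hypothetical (the name cluster_detections and the choice of length_m as the feature are assumptions, not taken from the pipeline); only the n_samples >= 2 guard follows directly from the exception.

    import pandas as pd
    from sklearn.cluster import AgglomerativeClustering

    def cluster_detections(df: pd.DataFrame, col: str = "length_m") -> pd.DataFrame:
        # Hypothetical per-group clustering step; guards the degenerate case
        # that triggers the ValueError logged above.
        out = df.copy()
        X = out[[col]].to_numpy()
        if len(X) < 2:
            # AgglomerativeClustering needs at least 2 samples; give
            # single-row groups a trivial label instead of failing.
            out["cluster"] = 0
            return out
        out["cluster"] = AgglomerativeClustering(n_clusters=2).fit_predict(X)
        return out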
2025-12-05 07:30:45,849 - distributed.worker - WARNING - Compute Failed
Key:       ('shuffle-p2p-8c0c543644f0e53db6e051b942a01544', 7)
Function:  shuffle_unpack
args:      ('9ec91486a56c2c7ef771082ed2149971', 7, 20445)
kwargs:    {}
Exception: "RuntimeError('shuffle_unpack failed during shuffle 9ec91486a56c2c7ef771082ed2149971')"
2025-12-05 02:51:01,859 - distributed.worker - WARNING - Compute Failed
Key:       ('assign-5d394e4e49343a449512234e2973692d', 3)
Function:  subgraph_callable-949ea42d-c47b-48ca-a59d-7f6fe71a
args:      (    source_id                                 geometry
0  2794848536  POINT Z (447892.846 5900368.596 0.000))
kwargs:    {}
Exception: "GEOSException('IllegalArgumentException: point array must contain 0 or >1 elements\\n')"
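Note on the GEOSException above: "point array must contain 0 or >1 elements" is what GEOS raises when a multi-coordinate geometry such as a LineString is built from a single point, which matches the one-row GeoDataFrame in the failing partition. The helper below is a hypothetical sketch (the name points_to_lines and the grouping by source_id are assumptions); it only illustrates dropping single-point groups before constructing lines.

    import geopandas as gpd
    from shapely.geometry import LineString

    def points_to_lines(points: gpd.GeoDataFrame) -> gpd.GeoDataFrame:
        # GEOS rejects a LineString built from one coordinate, so keep only
        # source_ids that contribute at least two points.
        sizes = points.groupby("source_id")["geometry"].transform("size")
        usable = points[sizes >= 2]
        lines = usable.groupby("source_id")["geometry"].apply(lambda g: LineString(g.tolist()))
        return gpd.GeoDataFrame(
            lines.rename("geometry").reset_index(), geometry="geometry", crs=points.crs
        )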
2025-12-04 08:30:58,856 - distributed.worker - ERROR - Worker stream died during communication: tcp://172.21.25.101:40477
Traceback (most recent call last):
  File "/opt/conda/lib/python3.10/site-packages/tornado/iostream.py", line 869, in _read_to_buffer
    bytes_read = self.read_from_fd(buf)
  File "/opt/conda/lib/python3.10/site-packages/tornado/iostream.py", line 1138, in read_from_fd
    return self.socket.recv_into(buf, len(buf))
ConnectionResetError: [Errno 104] Connection reset by peer

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/opt/conda/lib/python3.10/site-packages/distributed/worker.py", line 2066, in gather_dep
    response = await get_data_from_worker(
  File "/opt/conda/lib/python3.10/site-packages/distributed/worker.py", line 2892, in get_data_from_worker
    response = await send_recv(
  File "/opt/conda/lib/python3.10/site-packages/distributed/core.py", line 1024, in send_recv
    response = await comm.read(deserializers=deserializers)
  File "/opt/conda/lib/python3.10/site-packages/distributed/comm/tcp.py", line 241, in read
    convert_stream_closed_error(self, e)
  File "/opt/conda/lib/python3.10/site-packages/distributed/comm/tcp.py", line 142, in convert_stream_closed_error
    raise CommClosedError(f"in {obj}: {exc.__class__.__name__}: {exc}") from exc
distributed.comm.core.CommClosedError: in <TCP (closed) Ephemeral Worker->Worker for gather local=tcp://172.21.25.104:54760 remote=tcp://172.21.25.101:40477>: ConnectionResetError: [Errno 104] Connection reset by peer
2025-12-03 12:00:52,320 - distributed.worker - WARNING - Compute Failed
Key:       shuffle-barrier-accfd2c78b98ab833598e5f444758bf0
Function:  shuffle_barrier
args:      ('accfd2c78b98ab833598e5f444758bf0', [18125, 18125, 18125, 18125, 18125, 18125, 18125, 18125, 18125, 18125, 18125, 18125, 18125, 18125, 18125, 18125])
kwargs:    {}
Exception: "RuntimeError('shuffle_barrier failed during shuffle accfd2c78b98ab833598e5f444758bf0')"
2025-12-03 07:03:07,244 - distributed.worker - WARNING - Compute Failed
Key:       ('hash-join-94374102b41c6d37ee09da60b10f93fd', 4)
Function:  merge_unpack
args:      ('4623a32eb3e199f35eab3c309ce2c655', '54fde937d8413dce4b75f81b4f087e05', 4, 17737, 17739, 'inner', 'hashed_source_and_date', 'hashed_source_and_date', <distributed.protocol.serialize.Serialized object at 0x7fdb831a8b20>, ['_x', '_y'])
kwargs:    {}
Exception: "RuntimeError('Worker tcp://172.21.12.199:39093 left during active shuffle 4623a32eb3e199f35eab3c309ce2c655')"
2025-12-03 07:02:54,841 - distributed.worker - WARNING - Compute Failed
Key:       ('hash-join-aae28ec3b952674bc0aebcf7e2e61033', 4)
Function:  merge_unpack
args:      ('eaca08679c208a99b4015bf3c3a49e09', '81082f0ad39fc6be7c4d43c7654ff7e3', 4, 17723, 17725, 'inner', 'hashed_source_and_date', 'hashed_source_and_date', <distributed.protocol.serialize.Serialized object at 0x7fdc7922e110>, ['_x', '_y'])
kwargs:    {}
Exception: "RuntimeError('Worker tcp://172.21.12.199:46413 left during active shuffle eaca08679c208a99b4015bf3c3a49e09')"
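Note on the two merge_unpack failures above (and the similar ones later in the log): a peer-to-peer ("p2p") shuffle aborts as soon as any participating worker leaves, so worker restarts or evictions poison the whole hash join. If the worker churn itself cannot be fixed, one mitigation to try is falling back to the task-based shuffle, which the scheduler can retry at task granularity. The config key below assumes a recent Dask release; treat this as a sketch and confirm the key and keyword names against the installed version.

    import dask

    # Prefer the task-based shuffle over p2p for DataFrame joins/shuffles.
    dask.config.set({"dataframe.shuffle.method": "tasks"})

    # Or per call (the keyword has been `shuffle=` in older releases and
    # `shuffle_method=` in newer ones):
    # joined = left.merge(right, on="hashed_source_and_date", how="inner",
    #                     shuffle_method="tasks")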
2025-12-03 05:32:38,893 - distributed.worker - WARNING - Compute Failed
Key:       ('hash-join-transfer-e0442b1e2a5a00939a222a799698c518', 12)
Function:  merge_transfer
args:      (         schema_version  detector_id  ...  date_diff_ms  __hash_partition
0                     1   2794848471  ...         526.0                12
1                     1   2794848471  ...         421.0                12
2                     1   2794848489  ...         996.0                12
3                     1   2794848507  ...         238.0                12
4                     1   2794848504  ...         659.0                12
...                 ...          ...  ...           ...               ...
1680580               1   2794848504  ...         432.0                12
1680581               1   2794848507  ...           3.0                12
1680582               1   2794848456  ...           8.0                12
1680583               1   2794848504  ...         523.0                12
1680584               1   2794848456  ...         306.0                12

[1680585 rows x 28 columns], 'e0442b1e2a5a00939a222a799698c518', 12, 16)
kwargs:    {}
Exception: "RuntimeError('shuffle_transfer failed during shuffle e0442b1e2a5a00939a222a799698c518')"
2025-12-03 05:32:27,561 - distributed.worker.memory - WARNING - Worker is at 57% memory usage. Resuming worker. Process memory: 2.16 GiB -- Worker memory limit: 3.73 GiB
2025-12-03 05:32:27,447 - distributed.worker.memory - WARNING - Worker is at 80% memory usage. Pausing worker. Process memory: 3.01 GiB -- Worker memory limit: 3.73 GiB
2025-12-03 05:32:25,799 - distributed.worker.memory - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker-memory.html#memory-not-released-back-to-the-os for more information. -- Unmanaged memory: 2.67 GiB -- Worker memory limit: 3.73 GiB
2025-12-03 05:32:11,450 - distributed.worker.memory - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker-memory.html#memory-not-released-back-to-the-os for more information. -- Unmanaged memory: 2.96 GiB -- Worker memory limit: 3.73 GiB
2025-12-03 05:31:36,438 - distributed.worker - WARNING - Compute Failed
Key:       ('hash-join-transfer-849559c0082f84220cf3361bcb8dcb7b', 12)
Function:  merge_transfer
args:      (         schema_version  detector_id  ...  date_diff_ms  __hash_partition
0                     1   2794848514  ...         423.0                12
1                     1   2794848516  ...         469.0                12
2                     1   2794848516  ...         518.0                12
3                     1   2794848514  ...         733.0                12
4                     1   2794848514  ...         210.0                12
...                 ...          ...  ...           ...               ...
1680580               1   2794848514  ...         488.0                12
1680581               1   2794848470  ...          71.0                12
1680582               1   2794848516  ...         409.0                12
1680583               1   2794848514  ...         357.0                12
1680584               1   2794848516  ...         560.0                12

[1680585 rows x 28 columns], '849559c0082f84220cf3361bcb8dcb7b', 12, 16)
kwargs:    {}
Exception: "RuntimeError('shuffle_transfer failed during shuffle 849559c0082f84220cf3361bcb8dcb7b')"
2025-12-03 05:31:35,971 - distributed.worker.memory - WARNING - Unmanaged memory use is high. This may indicate a memory leak or the memory may not be released to the OS; see https://distributed.dask.org/en/latest/worker-memory.html#memory-not-released-back-to-the-os for more information. -- Unmanaged memory: 2.61 GiB -- Worker memory limit: 3.73 GiB
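Note on the memory warnings above: the worker hits the pause threshold at roughly 80% of its 3.73 GiB limit, with most of the usage reported as unmanaged, which is consistent with large shuffle buffers or memory not being returned to the OS (see the linked docs). Besides giving workers more memory or using smaller partitions, the spill/pause/terminate fractions are configurable. The values below are illustrative, not a recommendation, and must be set in the workers' environment (e.g. via DASK_* environment variables or the worker config file) before the workers start; setting them only in the client process has no effect on remote workers.

    import dask

    dask.config.set({
        "distributed.worker.memory.target": 0.60,     # start spilling to disk
        "distributed.worker.memory.spill": 0.70,      # spill more aggressively
        "distributed.worker.memory.pause": 0.85,      # pause new task execution
        "distributed.worker.memory.terminate": 0.95,  # restart the worker
    })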
2025-12-03 04:00:54,235 - distributed.worker - WARNING - Compute Failed
Key:       ('hash-join-transfer-172949d1a63c679eeac0e91f99849060', 10)
Function:  merge_transfer
args:      (      schema_version  ...  __hash_partition
6874               1  ...                 5
6875               1  ...                 2
6876               1  ...                 4
6877               1  ...                12
6878               1  ...                13
...              ...  ...               ...
7556               1  ...                11
7557               1  ...                 1
7558               1  ...                 6
7559               1  ...                 4
7560               1  ...                 6

[687 rows x 18 columns], '172949d1a63c679eeac0e91f99849060', 10, 16)
kwargs:    {}
Exception: "RuntimeError('shuffle_transfer failed during shuffle 172949d1a63c679eeac0e91f99849060')"
2025-12-03 04:00:50,961 - distributed.worker - WARNING - Compute Failed
Key:       ('hash-join-330856b8cc6b0cf704dd953710d8345f', 14)
Function:  merge_unpack
args:      ('6cea3e1345b31246120a47cc45a8fc4b', '20b01e1c625e5358f7fdfd24d63738f2', 14, 17392, 17397, 'inner', 'hashed_source_and_date', 'hashed_source_and_date', <distributed.protocol.serialize.Serialized object at 0x7fdb9e909ed0>, ['_x', '_y'])
kwargs:    {}
Exception: "RuntimeError('Worker tcp://172.21.25.6:45767 left during active shuffle 6cea3e1345b31246120a47cc45a8fc4b')"
2025-12-02 14:00:47,001 - distributed.worker - WARNING - Compute Failed
Key:       ('hash-join-ba4449f81416d8d387235ba5c185d6de', 1)
Function:  merge_unpack
args:      ('55dd516f22eb63c02d2ae41da0c3d703', '7f45f1459f1f2db2e517c12645891e9d', 1, 16634, 16639, 'inner', 'hashed_source_and_date', 'hashed_source_and_date', <distributed.protocol.serialize.Serialized object at 0x7fdb83f546d0>, ['_x', '_y'])
kwargs:    {}
Exception: "RuntimeError('Worker tcp://172.21.12.165:40271 left during active shuffle 55dd516f22eb63c02d2ae41da0c3d703')"
2025-12-02 07:30:47,159 - distributed.worker - ERROR - Worker stream died during communication: tcp://172.21.159.252:41957
Traceback (most recent call last):
  File "/opt/conda/lib/python3.10/site-packages/tornado/iostream.py", line 869, in _read_to_buffer
    bytes_read = self.read_from_fd(buf)
  File "/opt/conda/lib/python3.10/site-packages/tornado/iostream.py", line 1138, in read_from_fd
    return self.socket.recv_into(buf, len(buf))
ConnectionResetError: [Errno 104] Connection reset by peer

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/opt/conda/lib/python3.10/site-packages/distributed/worker.py", line 2066, in gather_dep
    response = await get_data_from_worker(
  File "/opt/conda/lib/python3.10/site-packages/distributed/worker.py", line 2892, in get_data_from_worker
    response = await send_recv(
  File "/opt/conda/lib/python3.10/site-packages/distributed/core.py", line 1024, in send_recv
    response = await comm.read(deserializers=deserializers)
  File "/opt/conda/lib/python3.10/site-packages/distributed/comm/tcp.py", line 241, in read
    convert_stream_closed_error(self, e)
  File "/opt/conda/lib/python3.10/site-packages/distributed/comm/tcp.py", line 142, in convert_stream_closed_error
    raise CommClosedError(f"in {obj}: {exc.__class__.__name__}: {exc}") from exc
distributed.comm.core.CommClosedError: in <TCP (closed) Ephemeral Worker->Worker for gather local=tcp://172.21.25.104:46598 remote=tcp://172.21.159.252:41957>: ConnectionResetError: [Errno 104] Connection reset by peer
2025-12-01 15:01:48,330 - distributed.worker - ERROR - Worker stream died during communication: tcp://172.21.25.97:37027
Traceback (most recent call last):
  File "/opt/conda/lib/python3.10/site-packages/tornado/iostream.py", line 869, in _read_to_buffer
    bytes_read = self.read_from_fd(buf)
  File "/opt/conda/lib/python3.10/site-packages/tornado/iostream.py", line 1138, in read_from_fd
    return self.socket.recv_into(buf, len(buf))
ConnectionResetError: [Errno 104] Connection reset by peer

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/opt/conda/lib/python3.10/site-packages/distributed/worker.py", line 2066, in gather_dep
    response = await get_data_from_worker(
  File "/opt/conda/lib/python3.10/site-packages/distributed/worker.py", line 2892, in get_data_from_worker
    response = await send_recv(
  File "/opt/conda/lib/python3.10/site-packages/distributed/core.py", line 1024, in send_recv
    response = await comm.read(deserializers=deserializers)
  File "/opt/conda/lib/python3.10/site-packages/distributed/comm/tcp.py", line 241, in read
    convert_stream_closed_error(self, e)
  File "/opt/conda/lib/python3.10/site-packages/distributed/comm/tcp.py", line 142, in convert_stream_closed_error
    raise CommClosedError(f"in {obj}: {exc.__class__.__name__}: {exc}") from exc
distributed.comm.core.CommClosedError: in <TCP (closed) Ephemeral Worker->Worker for gather local=tcp://172.21.25.104:44050 remote=tcp://172.21.25.97:37027>: ConnectionResetError: [Errno 104] Connection reset by peer
2025-11-29 19:31:13,227 - distributed.worker - ERROR - Exception during execution of task ('min-00d25a13d8261ba621577edcbd467689', 10).
Traceback (most recent call last):
  File "/opt/conda/lib/python3.10/site-packages/distributed/worker.py", line 2382, in _prepare_args_for_execution
    data[k] = self.data[k]
  File "/opt/conda/lib/python3.10/site-packages/distributed/spill.py", line 226, in __getitem__
    return super().__getitem__(key)
  File "/opt/conda/lib/python3.10/site-packages/zict/buffer.py", line 108, in __getitem__
    raise KeyError(key)
KeyError: "('getitem-b49a4622923f17edd0e872dae000d813', 10)"

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/opt/conda/lib/python3.10/site-packages/distributed/worker.py", line 2259, in execute
    args2, kwargs2 = self._prepare_args_for_execution(ts, args, kwargs)
  File "/opt/conda/lib/python3.10/site-packages/distributed/worker.py", line 2386, in _prepare_args_for_execution
    data[k] = Actor(type(self.state.actors[k]), self.address, k, self)
KeyError: "('getitem-b49a4622923f17edd0e872dae000d813', 10)"
2025-11-29 06:03:26,619 - distributed.worker - WARNING - Compute Failed
Key:       ('hash-join-be4d0f70e39e8b6b0a96d720c5e09995', 3)
Function:  merge_unpack
args:      ('7f690a7bc348472c9d341e0595859edf', '799944f2d6b7e481933498acc8427d97', 3, 12288, 12290, 'inner', 'hashed_source_and_date', 'hashed_source_and_date', <distributed.protocol.serialize.Serialized object at 0x7fdbb52d7610>, ['_x', '_y'])
kwargs:    {}
Exception: "RuntimeError('Worker tcp://172.21.12.197:37319 left during active shuffle 7f690a7bc348472c9d341e0595859edf')"
2025-11-29 06:01:06,094 - distributed.worker - ERROR - Worker stream died during communication: tcp://172.21.12.197:35449
Traceback (most recent call last):
  File "/opt/conda/lib/python3.10/site-packages/tornado/iostream.py", line 869, in _read_to_buffer
    bytes_read = self.read_from_fd(buf)
  File "/opt/conda/lib/python3.10/site-packages/tornado/iostream.py", line 1138, in read_from_fd
    return self.socket.recv_into(buf, len(buf))
ConnectionResetError: [Errno 104] Connection reset by peer

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/opt/conda/lib/python3.10/site-packages/distributed/worker.py", line 2066, in gather_dep
    response = await get_data_from_worker(
  File "/opt/conda/lib/python3.10/site-packages/distributed/worker.py", line 2892, in get_data_from_worker
    response = await send_recv(
  File "/opt/conda/lib/python3.10/site-packages/distributed/core.py", line 1024, in send_recv
    response = await comm.read(deserializers=deserializers)
  File "/opt/conda/lib/python3.10/site-packages/distributed/comm/tcp.py", line 241, in read
    convert_stream_closed_error(self, e)
  File "/opt/conda/lib/python3.10/site-packages/distributed/comm/tcp.py", line 142, in convert_stream_closed_error
    raise CommClosedError(f"in {obj}: {exc.__class__.__name__}: {exc}") from exc
distributed.comm.core.CommClosedError: in <TCP (closed) Ephemeral Worker->Worker for gather local=tcp://172.21.25.104:50382 remote=tcp://172.21.12.197:35449>: ConnectionResetError: [Errno 104] Connection reset by peer
2025-11-27 08:01:23,392 - distributed.worker - ERROR - Worker stream died during communication: tcp://172.21.12.165:43927
Traceback (most recent call last):
  File "/opt/conda/lib/python3.10/site-packages/tornado/iostream.py", line 869, in _read_to_buffer
    bytes_read = self.read_from_fd(buf)
  File "/opt/conda/lib/python3.10/site-packages/tornado/iostream.py", line 1138, in read_from_fd
    return self.socket.recv_into(buf, len(buf))
ConnectionResetError: [Errno 104] Connection reset by peer

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/opt/conda/lib/python3.10/site-packages/distributed/worker.py", line 2066, in gather_dep
    response = await get_data_from_worker(
  File "/opt/conda/lib/python3.10/site-packages/distributed/worker.py", line 2892, in get_data_from_worker
    response = await send_recv(
  File "/opt/conda/lib/python3.10/site-packages/distributed/core.py", line 1024, in send_recv
    response = await comm.read(deserializers=deserializers)
  File "/opt/conda/lib/python3.10/site-packages/distributed/comm/tcp.py", line 241, in read
    convert_stream_closed_error(self, e)
  File "/opt/conda/lib/python3.10/site-packages/distributed/comm/tcp.py", line 142, in convert_stream_closed_error
    raise CommClosedError(f"in {obj}: {exc.__class__.__name__}: {exc}") from exc
distributed.comm.core.CommClosedError: in <TCP (closed) Ephemeral Worker->Worker for gather local=tcp://172.21.25.104:37092 remote=tcp://172.21.12.165:43927>: ConnectionResetError: [Errno 104] Connection reset by peer
2025-11-27 08:01:23,207 - distributed.worker - WARNING - Compute Failed
Key:       ('hash-join-transfer-91041821273392b9ef4c3b0f2894d983', 12)
Function:  merge_transfer
args:      (     schema_version  detector_id  ...  date_diff_ms  __hash_partition
0                 1   2794848450  ...         606.0                12
1                 1   2794848450  ...         616.0                12
2                 1   2794848450  ...         521.0                12
3                 1   2794848463  ...         625.0                12
4                 1   2794848487  ...         594.0                12
..              ...          ...  ...           ...               ...
248               1   2794848516  ...        1015.0                12
249               1   2794848516  ...        1026.0                12
250               1   2794848510  ...         614.0                12
251               1   2794848510  ...         613.0                12
252               1   2794848510  ...         625.0                12

[253 rows x 22 columns], '91041821273392b9ef4c3b0f2894d983', 12, 16)
kwargs:    {}
Exception: "RuntimeError('shuffle_transfer failed during shuffle 91041821273392b9ef4c3b0f2894d983')"
2025-11-26 16:32:04,906 - distributed.worker - INFO - -------------------------------------------------
2025-11-26 16:32:04,906 - distributed.worker - INFO - Registered to: tcp://dask-scheduler:8786
2025-11-26 16:32:04,416 - distributed.worker - INFO - -------------------------------------------------
2025-11-26 16:32:04,416 - distributed.worker - INFO - Local Directory: /tmp/dask-worker-space/worker-vnel3uyw
2025-11-26 16:32:04,416 - distributed.worker - INFO - Memory: 3.73 GiB
2025-11-26 16:32:04,416 - distributed.worker - INFO - Threads: 1
2025-11-26 16:32:04,416 - distributed.worker - INFO - -------------------------------------------------
2025-11-26 16:32:04,416 - distributed.worker - INFO - Waiting to connect to: tcp://dask-scheduler:8786
2025-11-26 16:32:04,416 - distributed.worker - INFO - dashboard at: 172.21.25.104:8790
2025-11-26 16:32:04,416 - distributed.worker - INFO - Listening to: tcp://172.21.25.104:39025
2025-11-26 16:32:04,416 - distributed.worker - INFO - Start worker at: tcp://172.21.25.104:39025
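For reference, a minimal sketch of starting a worker equivalent to the banner above (1 thread, 3.73 GiB memory limit, registering with tcp://dask-scheduler:8786). This only reconstructs the settings visible in the log and is not the deployment's actual launch code.

    import asyncio
    from distributed import Worker

    async def main():
        # Settings mirror the startup banner above; adjust to the real deployment.
        async with Worker(
            "tcp://dask-scheduler:8786",      # scheduler address from the log
            nthreads=1,                       # Threads: 1
            memory_limit="3.73GiB",           # Memory: 3.73 GiB
            local_directory="/tmp/dask-worker-space",
        ) as worker:
            await worker.finished()

    asyncio.run(main())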