Implement async support for open_datatree #10742
base: main
Conversation
This looks great! Would it be possible to make the sync path reuse the async methods internally? This would help reduce duplication, increase test coverage, and speed up sync workflows.
Thanks for the suggestion @shoyer! I explored implementing sync-to-async reuse using a universal coroutine runner. The main challenge is handling environments where an event loop is already running (such as Jupyter notebooks): asyncio.run() fails with a RuntimeError there, so the coroutine has to be run on a background thread instead. However, this approach raises some design concerns:
The tradeoff is between code deduplication on one hand and user control and predictable behavior on the other. Other major Python libraries (like httpx and requests-async) often keep separate sync/async implementations for similar reasons. What's your take on the threading tradeoff vs. the deduplication benefits? CC @TomNicholas
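A minimal sketch of the kind of "universal coroutine runner" described above, assuming a hypothetical helper name `_run_coroutine`; this is not xarray's actual implementation, just an illustration of the fallback-to-background-thread approach:

```python
import asyncio
import concurrent.futures
from typing import Any, Coroutine, TypeVar

T = TypeVar("T")


def _run_coroutine(coro: Coroutine[Any, Any, T]) -> T:
    """Run a coroutine from sync code, even if an event loop is already running."""
    try:
        asyncio.get_running_loop()
    except RuntimeError:
        # No loop is running (plain scripts): asyncio.run() is safe here.
        return asyncio.run(coro)
    # A loop is already running (e.g. in Jupyter): asyncio.run() would raise
    # RuntimeError, so run the coroutine to completion on a background thread.
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        return pool.submit(asyncio.run, coro).result()
```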
I'm pretty sure Zarr v3 uses async internally to implement sync methods. It may be worth taking a look at how Zarr does things, especially given the strong overlap in the contributor communities. Launching a few threads is not particularly resource-intensive, so I'm not worried about that. Thread safety is a potential concern, but we do already take care to ensure that Xarray is thread safe internally, especially for IO backends. I think we can safely say that the vast majority of Xarray users are not familiar with async programming models, so I think they could really benefit from having this work by default. This is quite different from the user base for the web programming libraries you mention.
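For context, a simplified sketch of the pattern shoyer is referring to: zarr v3 keeps a long-lived event loop on a background thread and submits coroutines to it from sync code (see `zarr.core.sync`). The names and structure below are illustrative, not zarr's actual code:

```python
import asyncio
import threading
from typing import Any, Coroutine, TypeVar

T = TypeVar("T")

# One shared event loop, kept running on a daemon thread for the process lifetime.
_loop = asyncio.new_event_loop()
_thread = threading.Thread(target=_loop.run_forever, daemon=True)
_thread.start()


def sync(coro: Coroutine[Any, Any, T]) -> T:
    """Block the calling (sync) thread until ``coro`` completes on the shared loop."""
    future = asyncio.run_coroutine_threadsafe(coro, _loop)
    return future.result()
```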
@shoyer did you see #10622? I raised that issue to discuss the general problem of how these libraries interact with each other when it comes to concurrency.
Yes, zarr manages its own thread pool.
OK, let's try to reach some initial resolution about the async strategy for Xarray over in #10622 first!
Changes:
- Refactor open_datatree() to use zarr_sync() with an async implementation for concurrent dataset and index creation across groups
- Add _open_datatree_from_stores_async() helper that opens datasets and creates indexes concurrently using asyncio.gather, with a semaphore to limit concurrency (avoids deadlocks with stores like Icechunk)
- Add open_datatree_async() method for an explicit async API
- Remove duplicate _maybe_create_default_indexes_async from zarr.py; it is now imported from api.py (single source of truth)

This significantly improves performance when opening DataTrees from high-latency storage backends (e.g., ~2 seconds vs. sequential loading).
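A hedged sketch of the gather-with-semaphore pattern this commit describes. The helper signature and the `open_store` callable are assumptions made so the example is self-contained; the actual PR code differs:

```python
import asyncio
from typing import Any, Callable, Mapping


async def _open_datatree_from_stores_async(
    stores: Mapping[str, Any],
    open_store: Callable[[Any], Any],
    max_concurrency: int = 10,
) -> dict[str, Any]:
    """Open all group stores concurrently, bounded by a semaphore."""
    semaphore = asyncio.Semaphore(max_concurrency)

    async def open_one(path: str, store: Any) -> tuple[str, Any]:
        async with semaphore:
            # The blocking open runs on a worker thread so the event loop stays free.
            ds = await asyncio.to_thread(open_store, store)
            return path, ds

    pairs = await asyncio.gather(
        *(open_one(path, store) for path, store in stores.items())
    )
    return dict(pairs)
```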
Remove the asyncio.Semaphore that was limiting concurrency to 10 concurrent operations. Investigation showed:
- Zarr already has built-in concurrency control (async.concurrency=10)
- The semaphore only applied to asyncio.to_thread() calls, not zarr I/O
- Removing it improves performance by ~30-40% (~2s -> ~1.2-1.4s)

The semaphore was defensive code for a problem that doesn't exist: zarr and icechunk handle their own concurrency limits internally.
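The built-in limit the commit message refers to is zarr v3's own configuration. A small sketch, assuming zarr v3's donfig-based config and its `async.concurrency` key (defaults may differ between zarr releases):

```python
import zarr

# Adjust zarr's internal async concurrency limit instead of wrapping calls
# in an extra semaphore on the xarray side.
zarr.config.set({"async.concurrency": 10})
print(zarr.config.get("async.concurrency"))
```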
Hey @TomNicholas and @shoyer, I've updated the async DataTree implementation based on our previous discussions. Key changes:
- The user-facing API remains synchronous (no await needed): users just call the normal sync API.
- How it works internally: open_datatree() runs the async group opening and index creation via zarr_sync() behind the scenes.

Please let me know your thoughts on this.
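To make the "no await needed" point concrete, this is what the call looks like from the user's side; `"s3://bucket/store.zarr"` is a placeholder path, not a real dataset:

```python
import xarray as xr

# Plain synchronous call; for zarr v3 stores the groups are loaded
# concurrently behind the scenes.
tree = xr.open_datatree("s3://bucket/store.zarr", engine="zarr")
print(tree)
```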
The async implementation uses zarr.core.sync, which only exists in zarr v3. Add a conditional check using _zarr_v3() to:
- Use the async path with zarr_sync() for zarr v3 (concurrent loading)
- Fall back to sequential loading for zarr v2

This fixes CI failures on the min-versions environment, which uses zarr v2.
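A hedged sketch of that guard. `_zarr_v3()` is an internal helper in xarray's zarr backend; the standalone version below only approximates it, and the branch bodies stand in for the PR's zarr_sync()-based path and the sequential fallback:

```python
from importlib.metadata import version as _pkg_version


def _zarr_v3() -> bool:
    """Return True when zarr-python 3.x (which ships zarr.core.sync) is installed."""
    return int(_pkg_version("zarr").split(".")[0]) >= 3


if _zarr_v3():
    ...  # zarr v3: run the concurrent async group loading via zarr_sync()
else:
    ...  # zarr v2: open each group sequentially, as before
```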
My understanding of that issue is that people thought it should be zarr's responsibility to offer an API that xarray could use (e.g. ...).
@aladinor have you benchmarked this at scale? Creating a graph like this one would be really interesting.
- Add helper methods _build_group_members and _create_stores_from_members to reduce code duplication between sync and async store opening
- Use zarr_sync() to run async index creation in _datatree_from_backend_datatree for the zarr engine, making open_datatree fully async behind the scenes
- Fix missing chunks validation and source encoding in open_datatree_async
- Add tests for chunks validation, source encoding, and the chunks parameter
68093dc to c3ec77e
- Add type annotations to nested async functions in _datatree_from_backend_datatree to fix mypy annotation-unchecked notes breaking pytest-mypy-plugins tests
- Use os.path.join and os.path.normpath in test_async_source_encoding for cross-platform compatibility on Windows
Add type annotations to _maybe_create_default_indexes_async and its nested functions (load_var, create_index, _create) to satisfy mypy's annotation-unchecked checks. Also add Variable and Hashable imports to the TYPE_CHECKING block. This fixes pytest-mypy-plugins tests that were failing due to mypy emitting annotation-unchecked notes for untyped nested functions.
- Remove open_datatree_async() from api.py (public API)
- Remove open_datatree_async() from zarr.py (backend method)
- Keep the internal async optimization in _datatree_from_backend_datatree()
- Use _zarr_v3() for a proper zarr version check instead of ImportError
- Update tests to only test internal async functionality
- Add a test to verify that sync open_datatree uses async internally for zarr v3

The async optimization is now internal only: users call the sync open_datatree(), which automatically uses async index creation for zarr v3 backends.
keewis left a comment
I'm not an expert on the zarr backend, but this mostly looks good to me.
This is not done for the sync versions either, so I don't think this has to be done in this PR, but logically I think xr.open_datatree(...) == xr.DataTree.from_dict(xr.open_groups(...)), so it might make sense to have _open_datatree_async call _open_groups_as_dict_async?
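Spelling out the equivalence keewis describes; both calls should produce the same tree for a store the zarr backend can read (`"store.zarr"` is a placeholder path):

```python
import xarray as xr

tree_direct = xr.open_datatree("store.zarr", engine="zarr")
tree_rebuilt = xr.DataTree.from_dict(xr.open_groups("store.zarr", engine="zarr"))

# The directly opened tree and the one rebuilt from per-group datasets should match.
assert tree_direct.identical(tree_rebuilt)
```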
Co-authored-by: Justus Magin <[email protected]>
Benchmarking showed async index creation provides no measurable benefit since it's CPU-bound work. Simplified to sync loop per reviewer feedback.
for more information, see https://pre-commit.ci
@keewis, you're right that open_datatree ≈ DataTree.from_dict(open_groups_as_dict(...)) and refactoring to share code would reduce duplication. Since this affects both sync and async paths, I'll address it in a follow-up PR to keep this one focused. I've noted the differences (semaphore, index creation) that need to be unified.
Co-authored-by: Justus Magin <[email protected]>
for more information, see https://pre-commit.ci
Co-authored-by: Justus Magin <[email protected]>
for more information, see https://pre-commit.ci
open_dataset creates default indexes sequentially, causing significant latency in cloud high-latency stores #10579 and #12
whats-new.rst