GDAL Virtual Rasters #166
I'm not an expert on VRTs but I think it could work. It could potentially be useful if you want to create a dataset from rasters which overlap, where the VRT represents an already de-duplicated version of the data (assuming the deduplication logic is appropriate). Mostly, I'm not sure how useful this functionality would be, because I'm not familiar with VRTs that are made publicly available or published for general use. I have heard of VRTs being used for on-the-fly definition of mosaics. I'm also going to tag my colleagues @wildintellect and @vincentsarago, who have more experience with VRTs than I do and may be able to think of reasons this may or may not work. |
@abarciauskas-bgse converting a VRT to a reference file for Zarr seems fine. I'm not sure the VRT would contain all the chunk information you need, so the source files may also need to be scanned. At that point it's not super different from just being given a list of files to include in a manifest. Example:
Fun, I didn't know about https://gdal.org/drivers/raster/vrt_multidimensional.html; not sure I've ever seen one of these. To be clear, a VRT does not de-duplicate anything. When using a VRT with GDAL:
So up to you if you'd want a VRT which takes effort, or would rather just be passed a list of files to include in a mosaiced reference file. Here's a great one you can experiment with https://github.com/scottstanie/sardem/blob/master/sardem/data/cop_global.vrt |
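Since the alternative floated above is "just scan the VRT for its source files," it's worth noting that a VRT is plain XML, so harvesting the file list needs nothing beyond the standard library. A minimal sketch (the paths and dataset shape here are made up, and real VRTs may use `ComplexSource` or relative paths that need resolving):

```python
# Hypothetical sketch: pull the list of source files out of a VRT,
# which is just XML, using only the Python standard library.
import xml.etree.ElementTree as ET

VRT_XML = """<VRTDataset rasterXSize="200" rasterYSize="100">
  <VRTRasterBand dataType="Float32" band="1">
    <SimpleSource>
      <SourceFilename relativeToVRT="0">/data/tile_a.tif</SourceFilename>
    </SimpleSource>
    <SimpleSource>
      <SourceFilename relativeToVRT="0">/data/tile_b.tif</SourceFilename>
    </SimpleSource>
  </VRTRasterBand>
</VRTDataset>"""

def vrt_source_files(vrt_xml: str) -> list[str]:
    root = ET.fromstring(vrt_xml)
    # SourceFilename elements sit under SimpleSource/ComplexSource nodes
    return [el.text for el in root.iter("SourceFilename")]

print(vrt_source_files(VRT_XML))  # ['/data/tile_a.tif', '/data/tile_b.tif']
```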
Interesting, thanks @wildintellect. Thanks for clearing that up about de-duplication. I was under the impression that VRTs could represent a mosaic after deduplication of source files (e.g. spatial overlap is resolved through logic while building the VRT). But I suppose that use case would be choosing overlapping-data preference at the block level, not the pixel level. |
Thanks for the ping @TomNicholas! Some good points have already been mentioned. I think I just brought up VRTs because they are another example of lightweight sidecar metadata that simplifies the user experience of data management :) ... I haven't thought too much about integrations with virtualizarr, but some ideas below:

I suppose in the same way you create a reference file for NetCDF/DMR++ to bypass HDF and use Zarr instead, you could do the same for TIFF/VRT to bypass GDAL. You'd probably want to do some benchmarking there, because unlike HDF, GDAL is pretty good at using overviews and efficiently figuring out range requests during reads (for the common case of a VRT pointing at cloud-optimized GeoTIFFs).

I think another connection here is: what is the serialization format for virtualizarr, and what is its scope? My understanding is that the eventual goal is to save directly to Zarr v3 format, and I'm sure there are lots of existing discussions that I'm not up to speed on. But my mental model is that VRT, STAC, Zarr, and KerchunkJSON are all lightweight metadata mappings that can encode many things (file and byte locations, arbitrary metadata, "on read" computations like scale and offset, subset, reprojection). It seems these lightweight mappings work well up to a limit, and then you encounter the need for some sort of spatial index or database system :) So again, my mapping becomes (KerchunkJSON -> Parquet, VRT -> GTI, STAC -> pgSTAC, ZARR -> Earthmover?). |
Thanks @scottyhq !
I see the chunk manifest as exclusively dealing with file and byte locations, and everything else in that list should live elsewhere in zarr (e.g. codecs or metadata following a certain convention). I would be very curious to hear @mdsumner's thoughts on all the above. |
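For concreteness, a chunk manifest entry in the sense being discussed is just a mapping from chunk key to a file path, byte offset, and length, with codecs and other metadata living elsewhere in the Zarr metadata. A sketch (paths and numbers made up):

```json
{
  "0.0": { "path": "s3://bucket/tile_a.tif", "offset": 4096, "length": 65536 },
  "0.1": { "path": "s3://bucket/tile_b.tif", "offset": 4096, "length": 65536 }
}
```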
I think @scottyhq captured my stance well, and I'm glad to see GTI mentioned here - that's really important, and new.

I actually see this completely in the opposite direction, and I wish there was more use of GDAL and VRT itself; it's amazing - but there are these heavy lenses in R and Python over the actual API (though we have very good support in {gdalraster} and in osgeo.gdal already) - that's a story for elsewhere. VRT is already an extremely lightweight virtualization, and I went looking for a similar serialization/description for xarray and ended up here. kerchunk/virtualizarr is perfect for hdf/grib IMO, but not for the existing GDAL suite. Apart from harvesting filepaths, urls, and connections (database strings, vrt:// strings, /vsi* protocols), I don't see what the point would be. There certainly could be a Zarr description of a mosaic, but I'd be adding that as a feature to GDAL, as the software to convert it from VRT or from a WMTS connection, etc., not trying to bypass it. VRT can mix formats too; it's a very general way to craft a virtual dataset from disparate and even depauperate sources.

If you want to bypass GDAL for TIFF I think you've already got what's needed, but to support VRT you would need to recreate GDAL in large part. How would it take a subset/decimation/rescaling/set-missing-metadata description for a file? I don't think you can sensibly write reference byte ranges for parts of native tiles.

All that said, I'm extremely interested in the relationship between image-tile-servers/GTI/VRT and the various vrt:// and /vsi* abstractions, and how Zarr and its virtualizations work. There are gaps in both, but together they cover a huge gamut of capability, and I'm exploring as fast as I can to be able to talk more sensibly about all that. |
oh one technical point on the mention of "byte locations", which I misplaced in my first read:

That is not a general VRT thing (I think that also wasn't being suggested, but still I think it's worth adding more here), apart from being able to describe "raw" sources. You can craft a VRT that wraps a blob in a file or in memory, described by shape, bbox, crs, dtype, address, and size for example, but it's not something that's used for formats with an official driver. The documentation is here:

- Virtual raster: https://gdal.org/drivers/raster/vrt.html
- Virtual file systems: https://gdal.org/user/virtual_file_systems.html
- VRT for raw binary files: https://gdal.org/drivers/raster/vrt.html#vrt-descriptions-for-raw-files
- MEM or in-memory raster:

I think it's interesting in its relationship to how virtualizarr/kerchunk works and there's a lot of potential crossover. |
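For reference, a hedged sketch of what such a raw-blob VRT description looks like, following the "VRT descriptions for raw files" docs linked above (the filename, shape, and offsets here are all made up):

```xml
<VRTDataset rasterXSize="512" rasterYSize="512">
  <VRTRasterBand dataType="Byte" band="1" subClass="VRTRawRasterBand">
    <SourceFilename relativeToVRT="1">blob.bin</SourceFilename>
    <ImageOffset>0</ImageOffset>   <!-- byte address of the first pixel -->
    <PixelOffset>1</PixelOffset>   <!-- bytes between successive pixels -->
    <LineOffset>512</LineOffset>   <!-- bytes between successive scanlines -->
    <ByteOrder>LSB</ByteOrder>
  </VRTRasterBand>
</VRTDataset>
```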
I don't think the idea here would be to replace or add a new layer, but instead to create tools that can easily translate VRT to virtual Zarr or possibly vice versa. See the issues on DMR++ for a similar idea for another virtualization format. |
One thing to keep in mind: GDAL already has support for reading Zarr (I think including Zarr v3). Once the Chunk Manifest ZEP is stabilized, hopefully we can get it added to the GDAL reader, which would open up these chunk manifests to everything built on GDAL.
|
Coming back to this conversation from #432, and in a post-Icechunk world. The recommended serialisation format for VirtualiZarr is now effectively Icechunk. But the points @mdsumner and @scottyhq raise above seem like very significant differences between the data models of Zarr and VRT, which preclude turning VRTs into Icechunk Zarr stores for anything but very specific cases (e.g. those with no overlap). However, maybe if you used Icechunk's transactional engine to version-control non-zarr data you could get something like that... cc @abarciauskas-bgse @sharkinsspatial Regardless, reading Icechunk data from GDAL as @TomAugspurger suggested still sounds useful, but would require binding to the Icechunk Rust client somehow. |
I think there are two separable discussions happening: 1. Can/should virtualizarr support virtualizing existing VRTs? and 2. Can GDAL work with virtualized Zarr V3? I'm going to focus on 2 :)
Interesting, I haven't been following virtualizarr discussions recently so this is news to me!
I was hoping https://virtualizarr.readthedocs.io/en/stable/usage.html#writing-as-zarr would be achievable, b/c if I understand correctly, writing to-spec Zarr V3 is what would enable the "automatic" compatibility with GDAL... Icechunk seems great and I'm totally in favor of using it, but for smaller datasets it adds cognitive load (sessions? commits?) and I'm still a bit confused about how GDAL could interact with Icechunk stores.

Ideal simple code (but doesn't work :):

```python
combined_vds.virtualize.to_zarr('combined_v3.zarr')
ds = xr.open_dataset('combined_v3.zarr')

# As mentioned above, if it's written as "compliant zarr v3",
# other software like GDAL could work with it:
!gdalinfo combined_v3.zarr
```

Alternative code (not clear to me how GDAL can understand an icechunk store):

```python
from icechunk import Repository, local_filesystem_storage

storage = local_filesystem_storage("/tmp/icechunk/store")
repo = Repository.create(storage=storage)
session = repo.writable_session("main")  # Note typo in docs: writeable -> writable
combined_vds.virtualize.to_icechunk(session.store)
snapshot = session.commit("my first virtualized dataset")
ds = xr.open_zarr(session.store, consolidated=False)

# Should this work? How can gdal 'read' an icechunk store / recognize valid zarr v3 format?
# session.store.get('zarr.json')  # TypeError: IcechunkStore.get() missing 1 required positional argument: 'prototype'
!gdalinfo /tmp/icechunk/store ?
```
|
I don't think anything but Rust can understand an Icechunk store. GDAL can't read virtualized Zarr. It can read the metadata, but that doesn't help unless you can decode the references and point your reader at them (certainly the ZARR driver could be repurposed for that, and I think we're not the only ones talking about it). But with Icechunk it seems like it's not just creators that need the Rust bindings; that's starting to loom as another problem to me. (... but probably it's just the references that are encoded more opaquely; presumably the actual chunks are as generic as ever). |
(It would be nice to discuss this on the icechunk issue tracker instead, because it's not a question that's actually related to the VirtualiZarr package.)
A little more context here. We could still make a lightweight format for virtual zarr, but someone has to really want it, and the icechunk dev team would prefer everyone just get aboard the icechunk train 😁
Wait why is that still in the docs!? It was removed in #426.
You'll never get "automatic" ability to read virtual zarr of any type; it's always going to require some change to a reader to teach it to understand a chunk manifest format.
You're not wrong. But it also brings incredibly powerful version control features that a bare manifest doesn't, and whose implementation is complementary to even having a manifest.
GDAL would have to call an icechunk client to interact with icechunk stores. Currently the only client is the one written in rust (with python bindings) and maintained by Earthmover.
But Icechunk has an open spec, so if you really really wanted to avoid the Rust dependency you could write your own Icechunk client in any language. Basically Icechunk uses the zarr data model and the zarr-python library, but doesn't use the "native zarr" format, which really is an implementation detail of the zarr-python library.
Yes the chunk storage is still fairly straightforward, but supporting time-travel means you don't know which chunks to read without also having an implementation of the version-control layer. |
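The point above can be illustrated with a toy model (this is not the real Icechunk implementation; names and structures here are invented for illustration): each commit produces an immutable snapshot mapping chunk keys to byte references, so resolving "which bytes is chunk (0,0)?" depends on which snapshot, or branch tip, you are reading from.

```python
# Conceptual toy model of versioned chunk references (NOT real Icechunk):
# every snapshot is an immutable manifest; a branch is a pointer to one.
snapshots = {
    "s1": {"temp/c/0/0": ("s3://bucket/a.tif", 0, 100)},
    "s2": {"temp/c/0/0": ("s3://bucket/b.tif", 512, 100)},  # chunk rewritten
}
branches = {"main": "s2"}  # branch tip points at the latest snapshot

def resolve(branch_or_snapshot, key):
    """Find a chunk's (path, offset, length); needs the version layer."""
    snap_id = branches.get(branch_or_snapshot, branch_or_snapshot)
    return snapshots[snap_id][key]

print(resolve("main", "temp/c/0/0"))  # current tip of "main"
print(resolve("s1", "temp/c/0/0"))    # time-travel to the older snapshot
```

Even in this toy form, a reader that only understands bare manifests cannot pick the right one without also implementing the branch/snapshot resolution step.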
all good, lots of my lingering qs answered there! |
FWIW I have been recommending Icechunk serialization of virtual datasets for easily updatable workflows by nimble/forward-looking teams, and suggesting people with larger, production-based use cases prepare to use Icechunk after it reaches v1 (likely within the next couple months). I wouldn't recommend integrating Icechunk into production right now across the board. The pilot project in https://github.com/earth-mover/icechunk-nasa is still finding pain points so that other people don't have to, if they would rather wait for a stable release. |
From https://docs.csc.fi/support/tutorials/gis/virtual-rasters/ (emphasis mine):
That sounds a lot like a set of reference files doesn't it... Maybe we could ingest those virtual raster files and turn them into chunk manifests, like we're doing with DMR++ in #113?
Also, we can definitely open cloud-optimized GeoTIFFs now (since #162).
Thanks to @scottyhq for mentioning this idea. Maybe he, @abarciauskas-bgse, or someone else who knows more about GDAL can say whether they think this idea might actually work.
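The "ingest virtual raster files and turn them into chunk manifests" idea can be sketched in a few lines. This assumes the simplest possible case: each source file in the mosaic becomes exactly one chunk, referenced as a whole-file byte range (as the comments above note, real files have internal tiling, so a real implementation would also need to scan each file's chunks). All paths, sizes, and the variable name are made up:

```python
# Hypothetical sketch of "VRT mosaic -> kerchunk-style chunk references",
# assuming one whole source file per chunk.
tiles = {
    # (row, col) in the mosaic -> (made-up path, made-up file size in bytes)
    (0, 0): ("/data/N00E000.tif", 1234),
    (0, 1): ("/data/N00E010.tif", 5678),
}

def to_references(tiles, var="elevation"):
    """Emit kerchunk-style references: chunk key -> [path, offset, length]."""
    refs = {}
    for (row, col), (path, size) in tiles.items():
        refs[f"{var}/{row}.{col}"] = [path, 0, size]
    return refs

print(to_references(tiles))
```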