
GDAL Virtual Rasters #166

Open
TomNicholas opened this issue Jun 29, 2024 · 15 comments
Labels
enhancement New feature or request help wanted Extra attention is needed references generation Reading byte ranges from archival files

Comments

@TomNicholas
Member

From https://docs.csc.fi/support/tutorials/gis/virtual-rasters/ (emphasis mine):

Virtual rasters is useful GDAL concept for managing large raster datasets that are split into not overlapping map sheets. Virtual rasters are not useful for managing time-series or overlapping rasters, for example remote sensing tiles.

Technically a virtual raster is just a small xml file that tells GDAL where the actual data files are, but from user's point of view virtual rasters can be treated much like any other raster format. Virtual rasters can include raster data in any file format GDAL supports. Virtual rasters are useful because they allow handling of large datasets as if they were a single file eliminating need for locating correct files.

It is possible to use virtual rasters so, that only the small xml-file is stored locally and the big raster files are in Allas, Amazon S3, publicly on server or any other place supported by GDAL virtual drivers. The data is moved to local only for the area and zoom level requested when the virtual raster is opened. The best performing format to save your raster data in remote service is Cloud optimized GeoTIFF, but other formats are also possible.

That sounds a lot like a set of reference files, doesn't it... Maybe we could ingest those virtual raster files and turn them into chunk manifests, like we're doing with DMR++ in #113?

Also, we can definitely open Cloud-optimized GeoTIFFs now (since #162).

Thanks to @scottyhq for mentioning this idea. Maybe him, @abarciauskas-bgse, or someone else who knows more about GDAL can say whether they think this idea might actually work or not.

@TomNicholas TomNicholas added enhancement New feature or request help wanted Extra attention is needed references generation Reading byte ranges from archival files labels Jun 29, 2024
@abarciauskas-bgse
Collaborator

I'm not an expert on VRTs, but I think it could work. It could potentially be useful if you want to create a dataset from overlapping rasters where the VRT represents an already-deduplicated version of the data (assuming the deduplication logic is appropriate). Mostly, I'm not sure how useful this functionality would be, because I'm not familiar with VRTs that are made publicly available or published for general use. I have heard of VRTs being used for on-the-fly definition of mosaics.

I am also going to tag my colleagues @wildintellect and @vincentsarago who have more experience with VRTs than I do and may be able to think of reasons this may or may not work.

@wildintellect

@abarciauskas-bgse converting a VRT to a reference file for Zarr seems fine. I'm not sure the VRT would contain all the chunk information you need, so the source files may also need to be scanned. At that point it's not very different from just being given a list of files to include in a manifest.

Example:

<VRTDataset rasterXSize="512" rasterYSize="512">
    <GeoTransform>440720.0, 60.0, 0.0, 3751320.0, 0.0, -60.0</GeoTransform>
    <VRTRasterBand dataType="Byte" band="1">
        <ColorInterp>Gray</ColorInterp>
        <SimpleSource>
        <SourceFilename relativeToVRT="1">utm.tif</SourceFilename>
        <SourceBand>1</SourceBand>
        <SrcRect xOff="0" yOff="0" xSize="512" ySize="512"/>
        <DstRect xOff="0" yOff="0" xSize="512" ySize="512"/>
        </SimpleSource>
    </VRTRasterBand>
</VRTDataset>
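
As a rough illustration of how little chunk-level information a VRT like the one above carries, its SimpleSource entries can be harvested with the Python standard library alone. This is only a sketch (real VRTs can nest sources, use relative paths, mix drivers, etc.), and it deliberately stops where a manifest builder would have to start scanning the source files themselves:

```python
import xml.etree.ElementTree as ET

VRT = """<VRTDataset rasterXSize="512" rasterYSize="512">
    <GeoTransform>440720.0, 60.0, 0.0, 3751320.0, 0.0, -60.0</GeoTransform>
    <VRTRasterBand dataType="Byte" band="1">
        <ColorInterp>Gray</ColorInterp>
        <SimpleSource>
        <SourceFilename relativeToVRT="1">utm.tif</SourceFilename>
        <SourceBand>1</SourceBand>
        <SrcRect xOff="0" yOff="0" xSize="512" ySize="512"/>
        <DstRect xOff="0" yOff="0" xSize="512" ySize="512"/>
        </SimpleSource>
    </VRTRasterBand>
</VRTDataset>"""

def harvest_sources(vrt_xml):
    """Collect (filename, band, dst_window) tuples from every SimpleSource.

    Note what is NOT here: byte offsets/lengths of chunks. Those still
    require opening each source file.
    """
    sources = []
    root = ET.fromstring(vrt_xml)
    for band in root.iter("VRTRasterBand"):
        for src in band.iter("SimpleSource"):
            fname = src.findtext("SourceFilename")
            dst = src.find("DstRect").attrib
            window = (int(dst["xOff"]), int(dst["yOff"]),
                      int(dst["xSize"]), int(dst["ySize"]))
            sources.append((fname, band.get("band"), window))
    return sources

print(harvest_sources(VRT))
# → [('utm.tif', '1', (0, 0, 512, 512))]
```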

Fun, I didn't know about https://gdal.org/drivers/raster/vrt_multidimensional.html; not sure I've ever seen one of these.

To be clear, a VRT does not de-duplicate anything. When using a VRT with GDAL:

If there is some amount of spatial overlapping between files, the order of files appearing in the list of source matter: files that are listed at the end are the ones from which the content will be fetched
https://gdal.org/programs/gdalbuildvrt.html
https://gdal.org/drivers/raster/vrt.html

So it's up to you whether you want a VRT (which takes effort to build), or would rather just be passed a list of files to include in a mosaicked reference file.
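
That "files listed last win" overlap rule can be pictured with a toy compositing loop. This is plain Python, no GDAL, and purely conceptual; it just paints sources onto a 1-D canvas in list order so later sources overwrite earlier ones:

```python
def composite(canvas_size, sources):
    """Paint each (name, start, length) source onto a 1-D canvas in
    list order, so later sources overwrite earlier ones where they
    overlap -- mimicking gdalbuildvrt's overlap rule."""
    canvas = [None] * canvas_size
    for name, start, length in sources:
        for i in range(start, start + length):
            canvas[i] = name
    return canvas

# Two overlapping "files": b is listed last, so it wins in the overlap.
print(composite(8, [("a", 0, 5), ("b", 3, 5)]))
# → ['a', 'a', 'a', 'b', 'b', 'b', 'b', 'b']
```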

Here's a great one you can experiment with https://github.com/scottstanie/sardem/blob/master/sardem/data/cop_global.vrt
It shows nested VRTs and points to a public dataset on AWS: a global DEM with no overlaps, one projection, only one band, and one time point. So in some ways it's the simplest possible scenario.

@abarciauskas-bgse
Collaborator

Interesting, thanks @wildintellect.

Thanks for clearing that up about de-duplication. I was under the impression that VRTs could represent a mosaic after deduplication of source files (e.g. spatial overlap is resolved through logic while building the VRT). But I suppose that use case would be choosing overlapping-data preference at the block level, not the pixel level.

@scottyhq
Contributor

scottyhq commented Jul 3, 2024

Thanks for the ping @TomNicholas! Some good points have already been mentioned. I think I just brought up VRTs because they are another example of lightweight sidecar metadata that simplifies the user experience of data management :) ... I haven't thought too much about integrations with virtualizarr, but some ideas below:

I suppose in the same way you create a reference file for NetCDF/DMR++ to bypass HDF and use Zarr instead, you could do the same for TIFF/VRT to bypass GDAL. You would probably want to do some benchmarking there, because unlike HDF, GDAL is pretty good at using overviews and efficiently figuring out range requests during reads (for the common case of a VRT pointing at cloud-optimized GeoTIFFs).

I think another connection here is: what is the serialization format for virtualizarr, and what is its scope? My understanding is that the eventual goal is to save directly to Zarr V3 format, and there are I'm sure lots of existing discussions that I'm not up to speed on. But my mental model is that VRT, STAC, Zarr, and Kerchunk JSON are all lightweight metadata mappings that can encode many things (file and byte locations, arbitrary metadata, "on read" computations like scale and offset, subset, reprojection).

It seems these lightweight mappings work well up to a limit, and then you encounter the need for some sort of spatial index or database system :) So again, my mapping becomes: KerchunkJSON -> Parquet, VRT -> GTI, STAC -> pgSTAC, Zarr -> Earthmover?

@TomNicholas
Member Author

Thanks @scottyhq !

lightweight metadata mappings that can encode many things (file and byte locations, arbitrary metadata, "on read" computations like scale and offset, subset, reprojection).

I see the chunk manifest as exclusively dealing with file and byte locations, and everything else in that list should live elsewhere in zarr (e.g. codecs or metadata following a certain convention).
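
For concreteness, a chunk manifest in this narrow sense is just a mapping from chunk key to (path, offset, length) and nothing more. The sketch below is illustrative only (the actual VirtualiZarr/Icechunk on-disk encodings differ, and the paths here are made up):

```python
# One entry per Zarr chunk: where its bytes live. Everything else
# (codecs, scale/offset, CRS conventions) lives in Zarr metadata.
manifest = {
    "0.0": {"path": "s3://bucket/tile_00.tif", "offset": 8192, "length": 262144},
    "0.1": {"path": "s3://bucket/tile_01.tif", "offset": 8192, "length": 262144},
}

def byte_range(key):
    """Resolve a chunk key to the (path, start, stop) range to fetch."""
    entry = manifest[key]
    return entry["path"], entry["offset"], entry["offset"] + entry["length"]

print(byte_range("0.1"))
# → ('s3://bucket/tile_01.tif', 8192, 270336)
```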

I would be very curious to hear @mdsumner's thoughts on all the above.

@mdsumner
Contributor

mdsumner commented Aug 9, 2024

I think @scottyhq captured my stance well, and I'm glad to see GTI mentioned here - that's really important, and new.

I actually see this from completely the opposite direction, and I wish there were more use of GDAL and VRT itself; it's amazing. But there are these heavy lenses in R and Python over the actual API (though we already have very good support in {gdalraster} and osgeo.gdal). That's a story for elsewhere.

VRT is already an extremely lightweight virtualization, and I went looking for a similar serialization/description for xarray and ended up here. kerchunk/virtualizarr is perfect for HDF/GRIB IMO, but not for the existing GDAL suite. Apart from harvesting filepaths, URLs, and connections (database strings, vrt:// strings, /vsi* protocols), I don't see what the point would be. There certainly could be a Zarr description of a mosaic, but I'd be adding that as a feature to GDAL as the software to convert it from VRT or from a WMTS connection, etc., not trying to bypass it. VRT can mix formats too; it's a very general way to craft a virtual dataset from disparate and even depauperate sources.

If you want to bypass GDAL for TIFF I think you've already got what's needed, but to support VRT you would need to recreate GDAL in large part. How would it take a subset/decimation/rescaling/set-missing-metadata description for a file? I don't think you can sensibly write reference byte ranges for parts of native tiles.

All that said, I'm extremely interested in the relationship between image-tile-servers/GTI/VRT and the various vrt:// and /vsi* abstractions, and how Zarr and its virtualizations work. There are gaps in both, but together they cover a huge gamut of capability and I'm exploring as fast as I can to be able to talk more sensibly about all that.

@mdsumner
Contributor

mdsumner commented Aug 9, 2024

Oh, one technical point on the mention of "byte locations", which I missed in my first read:

lightweight metadata mappings that can encode many things (file and byte locations, arbitrary metadata, "on read" computations like scale and offset, subset, reprojection).

That is not a general VRT thing. (I think that also wasn't being suggested, but I still think it's worth adding more here.)

VRT can also describe "raw" sources: you can craft a VRT that wraps a blob in a file or in memory, described by shape, bbox, crs, dtype, address, and size, for example, but that's not something used for formats with an official driver.

The documentation is here:

Virtual raster:

https://gdal.org/drivers/raster/vrt.html

Virtual file systems:

https://gdal.org/user/virtual_file_systems.html

VRT for raw binary files:

https://gdal.org/drivers/raster/vrt.html#vrt-descriptions-for-raw-files

MEM or in-memory raster:

https://gdal.org/drivers/raster/mem.html

I think it's interesting in its relationship to how virtualizarr/kerchunk works and there's a lot of potential crossover.
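
That "blob described by shape/dtype/address/size" idea reduces to a seek-and-reinterpret read, which is also exactly what a byte-range reference enables. A minimal sketch using only the standard library (the header layout and field names here are invented for illustration; a raw-file VRT expresses the same parameters as ImageOffset/PixelOffset/etc.):

```python
import struct
import tempfile

def read_raw_band(path, offset, n_values, fmt="<f"):
    """Read n_values little-endian float32 samples starting at a byte
    offset: the way a raw-file VRT (or a byte-range reference) lets a
    reader reinterpret part of a file without a format driver."""
    size = struct.calcsize(fmt)
    with open(path, "rb") as f:
        f.seek(offset)
        data = f.read(n_values * size)
    return [v for (v,) in struct.iter_unpack(fmt, data)]

# Fake "file": a 4-byte header followed by raw float32 pixels.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"HDR!")
    f.write(struct.pack("<3f", 1.0, 2.0, 3.0))
    path = f.name

print(read_raw_band(path, offset=4, n_values=3))
# → [1.0, 2.0, 3.0]
```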

@TomNicholas
Member Author

VRT is already an extremely lightweight virtualization

I don't think the idea here would be to replace or add a new layer, but instead to create tools that can easily translate VRT to virtual Zarr or possibly vice versa. See the issues on DMR++ for a similar idea for another virtualization format.

@TomAugspurger
Contributor

TomAugspurger commented Aug 9, 2024 via email

@TomNicholas
Member Author

Coming back to this conversation from #432 and in a post-icechunk world.

The recommended serialisation format for VirtualiZarr is now effectively Icechunk. But the points @mdsumner and @scottyhq raised above suggest very significant differences between the data models of Zarr and VRT, which preclude turning VRTs into Icechunk Zarr stores in anything but very specific cases (e.g. those with no overlap).

However, maybe if you used Icechunk's transactional engine to version-control non-Zarr data you could get something like that... cc @abarciauskas-bgse @sharkinsspatial

Regardless, reading Icechunk data from GDAL as @TomAugspurger suggested still sounds useful, but it would require binding to the Icechunk Rust client somehow.

@scottyhq
Contributor

scottyhq commented Feb 26, 2025

I think there are two separable discussions happening: 1. Can/should virtualizarr support virtualizing existing VRTs? and 2. Can GDAL work with virtualized Zarr V3? I'm going to focus on 2 :)

The recommended serialisation format for VirtualiZarr is now effectively Icechunk.

Interesting, I haven't been following virtualizarr discussions recently so this is news to me!

reading Icechunk data from GDAL as @TomAugspurger suggested still sounds useful, but would require binding to the Icechunk rust client somehow.

I was hoping https://virtualizarr.readthedocs.io/en/stable/usage.html#writing-as-zarr would be achievable, because if I understand correctly, writing to-spec Zarr V3 is what would enable the "automatic" compatibility with GDAL...

Icechunk seems great and I'm totally in favor of using it, but for smaller datasets it adds cognitive load (sessions? commits?) and I'm still a bit confused about how GDAL could interact with icechunk stores:

Ideal simple code (but doesn't work :):

combined_vds.virtualize.to_zarr('combined_v3.zarr')

ds = xr.open_dataset('combined_v3.zarr') 

# As mentioned above, if it's written as "compliant zarr v3" other software like gdal could work with it:
!gdalinfo combined_v3.zarr

Alternative code (not clear to me how GDAL can understand an icechunk store):

from icechunk import Repository, local_filesystem_storage

storage = local_filesystem_storage("/tmp/icechunk/store")
repo = Repository.create(storage=storage)
session = repo.writable_session("main") # Note typo in docs: writeable -> writable 
combined_vds.virtualize.to_icechunk(session.store)
snapshot = session.commit("my first virtualized dataset")

ds = xr.open_zarr(session.store, consolidated=False)

# Should this work? How can gdal 'read' an icechunk store / recognize valid zarr v3 format ?
# session.store.get('zarr.json') # TypeError: IcechunkStore.get() missing 1 required positional argument: 'prototype'
!gdalinfo /tmp/icechunk/store ?

@mdsumner
Contributor

mdsumner commented Feb 26, 2025

I don't think anything but Rust can understand an Icechunk store. GDAL can't read virtualized Zarr. It can read the metadata, but that doesn't help unless you can decode the references and point your reader at them (certainly the ZARR driver could be repurposed for that, and I think we're not the only ones talking about it). But with Icechunk it seems like it's not just creators that need the Rust bindings; that's starting to loom as another problem to me (though probably it's just the references that are encoded more opaquely; presumably the actual chunks are as generic as ever).

@TomNicholas
Member Author

TomNicholas commented Feb 26, 2025

Can GDAL work with virtualized Zarr V3? I'm going to focus on 2 :)

(It would be nice to discuss this on the icechunk issue tracker instead, because it's not a question that's actually related to the VirtualiZarr package.)

The recommended serialisation format for VirtualiZarr is now effectively Icechunk.

Interesting, I haven't been following virtualizarr discussions recently so this is news to me!

A little more context here. We could still make a lightweight format for virtual zarr, but someone has to really want it, and the icechunk dev team would prefer everyone just get aboard the icechunk train 😁

https://virtualizarr.readthedocs.io/en/stable/usage.html#writing-as-zarr

Wait, why is that still in the docs!? It was removed in #426.

If I understand correctly writing a to-spec Zarr V3 is what would enable the "automatic" compatibility with GDAL...

You'll never get "automatic" ability to read virtual Zarr of any type; it is always going to require some change to a reader to teach it to understand a manifest.json / Icechunk store. The question is just how big a change it requires, whether or not that change brings in dependencies, and whether the format you're reading has a stable spec.

Icechunk seems great and I'm totally in favor of using it, but for smaller datasets it adds cognitive load (sessions? commits?)

You're not wrong. But it also brings incredibly powerful version control features that a bare manifest doesn't, and whose implementation is complementary to even having a manifest.

and I'm still a bit confused about how GDAL could interact with icechunk stores:

GDAL would have to call an icechunk client to interact with icechunk stores. Currently the only client is the one written in rust (with python bindings) and maintained by Earthmover.

I don't think anything but Rust can understand an Icechunk store.

But Icechunk has an open spec, so if you really really wanted to avoid the rust dependency you could write your own icechunk client in any language. Basically icechunk uses the zarr data model and the zarr-python library, but doesn't use the "native zarr" format that really is an implementation detail of the zarr-python FsspecStore.

but probably it's just the references that are encoded more opaquely, presumably the actual chunks are as generic as ever

Yes, the chunk storage is still fairly straightforward, but supporting time travel means you don't know which chunks to read without also having an implementation of the version-control layer.
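
A toy model of why that is: the same chunk key can resolve to different byte ranges depending on which snapshot you read from, so a reader must walk branch -> snapshot -> manifest before it can fetch anything. This is purely conceptual and not the Icechunk spec's actual layout; all names and numbers are invented:

```python
# Each snapshot carries its own manifest of chunk references.
snapshots = {
    "snap-1": {"temp/c/0/0": ("data.nc", 0, 4096)},
    "snap-2": {"temp/c/0/0": ("data_v2.nc", 512, 4096)},  # chunk rewritten
}
branches = {"main": "snap-2"}  # a branch is just a pointer to a snapshot

def resolve(branch, chunk_key):
    """Follow branch -> snapshot -> manifest to find the bytes to fetch."""
    return snapshots[branches[branch]][chunk_key]

print(resolve("main", "temp/c/0/0"))
# → ('data_v2.nc', 512, 4096)
```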

@mdsumner
Contributor

All good, lots of my lingering questions answered there!

@maxrjones
Member

The recommended serialisation format for VirtualiZarr is now effectively Icechunk

FWIW, I have been recommending Icechunk serialization of virtual datasets for easily updatable workflows by nimble/forward-looking teams, and suggesting people with larger, production-based use cases prepare to use Icechunk after it reaches v1 (likely within the next couple of months). I wouldn't recommend integrating Icechunk into production right now across the board. The pilot project in https://github.com/earth-mover/icechunk-nasa is still finding pain points so that other people don't have to, if they would rather wait for a stable release.
