Merge branch 'main' of https://github.com/IPL-UV/supers2
csaybar committed Dec 2, 2024
2 parents 4b3361b + 7c48e02 commit 0e67aa4
Showing 6 changed files with 144,586 additions and 20 deletions.
1 change: 1 addition & 0 deletions .gitignore
@@ -1,5 +1,6 @@
docs/source
demo.py
images
# From https://raw.githubusercontent.com/github/gitignore/main/Python.gitignore

# Byte-compiled / optimized / DLL files
2 changes: 1 addition & 1 deletion CHANGELOG.md
@@ -16,6 +16,6 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
### Fixed
- Fixed minor bugs in the data processing module related to edge cases.

## [0.0.5] - 2024-10-24
## [0.0.7] - 2024-10-24
### Added
- First public release with support for enhancing Sentinel-2 spatial resolution to 2.5 meters.
131 changes: 116 additions & 15 deletions README.md
@@ -1,7 +1,7 @@
#

<p align="center">
<img src="./assets/images/banner_supers2.png" width="50%">
<img src="https://raw.githubusercontent.com/IPL-UV/supers2/refs/heads/main/assets/images/banner_supers2.png" width="50%">
</p>

<p align="center">
@@ -50,7 +50,7 @@ pip install supers2

## **How to use** 🛠️

### **Basic usage: enhancing spatial resolution of Sentinel-2 images** 🌍
### **Load libraries**

```python
import matplotlib.pyplot as plt
@@ -59,7 +59,14 @@ import supers2
import torch
import cubo

## Download Sentinel-2 L2A cube
import supers2

```

### **Download Sentinel-2 L2A cube**

```python
# Create a Sentinel-2 L2A data cube for a specific location and date range
da = cubo.create(
lat=4.31,
lon=-76.2,
@@ -71,6 +78,17 @@ da = cubo.create(
resolution=10
)
```

### **Prepare the data (CPU and GPU usage)**

When converting the NumPy array to a PyTorch tensor, calling `.cuda()` is optional and depends on whether a GPU is available:

- **GPU:** If a GPU is available and CUDA is installed, transfer the tensor to the GPU with `.cuda()`. This speeds up processing, especially for large datasets and deep learning models.

- **CPU:** If no GPU is available, the tensor is processed on the CPU, which is PyTorch's default behavior. In this case, simply omit the `.cuda()` call.

Here is how to handle both scenarios dynamically:

```python
# Convert the data array to NumPy and scale
original_s2_numpy = (da[11].compute().to_numpy() / 10_000).astype("float32")

@@ -96,29 +114,109 @@ ax[1].set_title("Enhanced Resolution S2")
plt.show()
```
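The later configuration snippets pass `device=device`, so the device-agnostic pattern described above can be sketched as follows; the tensor here is a random placeholder standing in for the scaled Sentinel-2 array:

```python
import torch

# Pick the GPU when CUDA is available; otherwise fall back to the CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Random placeholder standing in for the scaled Sentinel-2 bands
X = torch.rand(10, 128, 128)

# .to(device) works in both cases, unlike a hard-coded .cuda()
X = X.to(device)
```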

<p align="center">
<img src="./assets/images/example1.png" width="100%">
</p>
### **Configuring the Spatial Resolution Enhancement Model**

In **supers2**, you can choose from several types of models to enhance the spatial resolution of Sentinel-2 images. Below are the configurations for each model type and their respective [size options](https://github.com/IPL-UV/supers2/releases/tag/v0.1.0). Each model is configured using `supers2.setmodel`, where the `sr_model_snippet` argument defines the super-resolution model, and `fusionx2_model_snippet` and `fusionx4_model_snippet` correspond to additional fusion models.

## Change the model settings 🛠️
### **Available Models:**

At the end of the document, you can find a table with the available models and their characteristics.
#### **1. CNN Models**
CNN-based models are available in the following sizes: `lightweight`, `small`, `medium`, `expanded`, and `large`.

```python
# Set up the model to enhance the spatial resolution
# Example configuration for a CNN model
models = supers2.setmodel(
resolution = "2.5m", # Set the desired resolution
sr_model_snippet = "sr__opensrbaseline__cnn__medium__l1", # RGBN model from 10m to 2.5m
fusionx2_model_snippet = "fusionx2__opensrbaseline__cnn__large__l1", # RedESWIR model from 20m to 10m
fusionx4_model_snippet = "fusionx4__opensrbaseline__cnn__large__l1", #RedESWIR model from 10m to 2.5m
weights_path = None, # Path to the weights file
device = "cpu" # Use the CPU
sr_model_snippet="sr__opensrbaseline__cnn__lightweight__l1",
fusionx2_model_snippet="fusionx2__opensrbaseline__cnn__lightweight__l1",
fusionx4_model_snippet="fusionx4__opensrbaseline__cnn__lightweight__l1",
resolution="2.5m",
device=device
)

# Apply spatial resolution enhancement
superX = supers2.predict(X, models=models, resolution="2.5m")
```
Model size options (replace `lightweight` in the snippet names with the desired size):

- `lightweight`
- `small`
- `medium`
- `expanded`
- `large`
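
The snippet identifiers follow a regular naming pattern, so the size variants listed above can be generated programmatically (a sketch based only on the names shown in this README):

```python
sizes = ["lightweight", "small", "medium", "expanded", "large"]

# Names follow the pattern <task>__opensrbaseline__cnn__<size>__l1
cnn_snippets = [f"sr__opensrbaseline__cnn__{size}__l1" for size in sizes]

print(cnn_snippets[2])  # sr__opensrbaseline__cnn__medium__l1
```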

#### **2. SWIN Models**
SWIN (shifted-window transformer) models are available in the sizes `lightweight`, `small`, `medium`, and `expanded`; the fusion models remain CNN-based.

```python
# Example configuration for a SWIN model
models = supers2.setmodel(
sr_model_snippet="sr__opensrbaseline__swin__lightweight__l1",
fusionx2_model_snippet="fusionx2__opensrbaseline__cnn__lightweight__l1",
fusionx4_model_snippet="fusionx4__opensrbaseline__cnn__lightweight__l1",
resolution="2.5m",
device=device
)
```

Available sizes:

- `lightweight`
- `small`
- `medium`
- `expanded`

#### **3. MAMBA Models**
MAMBA models come in the same sizes as SWIN: `lightweight`, `small`, `medium`, and `expanded`.

```python
# Example configuration for a MAMBA model
models = supers2.setmodel(
sr_model_snippet="sr__opensrbaseline__mamba__lightweight__l1",
fusionx2_model_snippet="fusionx2__opensrbaseline__cnn__lightweight__l1",
fusionx4_model_snippet="fusionx4__opensrbaseline__cnn__lightweight__l1",
resolution="2.5m",
device=device
)
```

Available sizes:

- `lightweight`
- `small`
- `medium`
- `expanded`


#### **4. Diffusion Model**
The `opensrdiffusion` model is available only in the `large` size. It is suited for high-quality resolution enhancement and needs no additional configuration.

```python
# Configuration for the Diffusion model
models = supers2.setmodel(
sr_model_snippet="sr__opensrdiffusion__large__l1",
fusionx2_model_snippet="fusionx2__opensrbaseline__cnn__lightweight__l1",
fusionx4_model_snippet="fusionx4__opensrbaseline__cnn__lightweight__l1",
resolution="2.5m",
device=device
)
```

#### **5. Simple Models (Bilinear and Bicubic)**
For fast, non-learned upsampling, bilinear and bicubic interpolation models are available. They require no configuration and serve as quick baselines for the enhanced resolution.

```python
from supers2.models.simple import BilinearSR, BicubicSR

# Bilinear Interpolation Model
bilinear_model = BilinearSR(device=device, scale_factor=4).to(device)
super_bilinear = bilinear_model(X[None])

# Bicubic Interpolation Model
bicubic_model = BicubicSR(device=device, scale_factor=4).to(device)
super_bicubic = bicubic_model(X[None])
```
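The same quick baselines can also be reproduced with plain `torch.nn.functional.interpolate`, without any `supers2` models (a sketch using a random placeholder tensor for a 4-band 10 m chip):

```python
import torch
import torch.nn.functional as F

# Random placeholder for a 4-band (RGBN) 10 m Sentinel-2 chip
X = torch.rand(4, 64, 64)

# scale_factor=4 maps 10 m pixels to 2.5 m pixels
super_bilinear = F.interpolate(X[None], scale_factor=4, mode="bilinear", align_corners=False)
super_bicubic = F.interpolate(X[None], scale_factor=4, mode="bicubic", align_corners=False)

print(super_bilinear.shape)  # torch.Size([1, 4, 256, 256])
```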

### **Apply spatial resolution enhancement**

### **Predict only RGBNIR bands** 🌍

@@ -151,6 +249,9 @@ ax[1].set_title("Standard Deviation")
plt.show()
```

<p align="center">
<img src="https://raw.githubusercontent.com/IPL-UV/supers2/refs/heads/main/assets/images/example1.png" width="100%">
</p>

### Estimate the Local Attention Map of the model 📊

