Commit: Remove Async examples

lukasbindreiter committed Dec 2, 2024
1 parent f684f97 commit 12a8c79
Showing 39 changed files with 107 additions and 1,026 deletions.
api-reference/datasets/accessing-collection.mdx (1 addition, 7 deletions)

@@ -11,16 +11,10 @@ information for it.

 <RequestExample>

-```python Python (Sync)
+```python Python
 collections = dataset.collection("My-collection")
 ```

-```python Python (Async)
-collections = dataset.collection("My-collection")
-# just creates a collection object, no network calls are made
-# so no await required
-```
-
 </RequestExample>

 ## Parameters
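An aside on the async example removed above: its comment notes that `dataset.collection(...)` only creates a collection object and makes no network calls, which is why no `await` was needed. That lazy-handle pattern can be sketched in plain Python (a hypothetical stand-in, not the Tilebox implementation; `CollectionHandle` and `fake_fetch` are invented names):

```python
# Illustrative sketch (not the Tilebox implementation): a collection handle
# that defers all network I/O until a method that needs data is called.
class CollectionHandle:
    def __init__(self, name, fetch):
        self.name = name     # stored locally, no I/O in the constructor
        self._fetch = fetch  # callable invoked only on demand

    def info(self):
        # "network" access happens only here, not when the handle is created
        return self._fetch(self.name)


calls = []

def fake_fetch(name):
    calls.append(name)
    return {"name": name, "count": 42}

handle = CollectionHandle("My-collection", fake_fetch)
assert calls == []  # constructing the handle triggered no fetch
info = handle.info()
assert calls == ["My-collection"]  # the fetch happened only on info()
```

This is why, in the old async client, only methods like `info()` or `load()` needed `await`, while `dataset.collection(...)` did not.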
api-reference/datasets/accessing-dataset.mdx (1 addition, 6 deletions)

@@ -8,12 +8,7 @@ Once you have listed all available datasets, you can access a specific dataset b

 <RequestExample>

-```python Python (Sync)
-dataset = datasets.open_data.copernicus.sentinel1_sar
-# or any other dataset available to you
-```
-
-```python Python (Async)
+```python Python
 dataset = datasets.open_data.copernicus.sentinel1_sar
 # or any other dataset available to you
 ```
api-reference/datasets/collection-info.mdx (1 addition, 5 deletions)

@@ -8,14 +8,10 @@ You can access information such as availability and number of available datapoin

 <RequestExample>

-```python Python (Sync)
+```python Python
 info = collection.info()
 ```

-```python Python (Async)
-info = await collection.info()
-```
-
 </RequestExample>

 ## Errors
api-reference/datasets/listing-collection.mdx (1 addition, 8 deletions)

@@ -8,20 +8,13 @@ You can list all the collections available for a dataset using the `collections`

 <RequestExample>

-```python Python (Sync)
+```python Python
 collections = dataset.collections(
     availability = True,
     count = False,
 )
 ```

-```python Python (Async)
-collections = await dataset.collections(
-    availability = True,
-    count = False,
-)
-```
-
 </RequestExample>

 ## Parameters
api-reference/datasets/listing-datasets.mdx (1 addition, 8 deletions)

@@ -8,18 +8,11 @@ All available datasets can be listed using the datasets method on your Tilebox d

 <RequestExample>

-```python Python (Sync)
+```python Python
 from tilebox.datasets import Client

 client = Client()
 datasets = client.datasets()
 ```

-```python Python (Async)
-from tilebox.datasets.aio import Client
-
-client = Client()
-datasets = await client.datasets()
-```
-
 </RequestExample>
api-reference/datasets/loading-data.mdx (1 addition, 27 deletions)

@@ -14,7 +14,7 @@ Tilebox as time. Currently this includes either strings in ISO 8601 format or py

 <RequestExample>

-```python Python (Sync)
+```python Python
 from datetime import datetime
 from tilebox.clients.core.data import TimeInterval

@@ -40,32 +40,6 @@ meta_data = collection.load(..., skip_data=True)
 first_50 = collection.load(meta_data.time[:50], skip_data=False)
 ```

-```python Python (Async)
-from datetime import datetime
-from tilebox.clients.core.data import TimeInterval
-
-# loading a specific time
-time = "2023-05-01 12:45:33.423"
-data = await collection.load(time)
-
-# loading a time interval
-interval = ("2023-05-01", "2023-08-01")
-data = await collection.load(interval, show_progress=True)
-
-# loading a time interval alternative equivalent to the above example
-interval = TimeInterval(
-    start = datetime(2023, 5, 1),
-    end = datetime(2023, 8, 1),
-    start_exclusive = False,
-    end_inclusive = False,
-)
-data = await collection.load(interval, show_progress=True)
-
-# loading with an iterable
-meta_data = await collection.load(..., skip_data=True)
-first_50 = await collection.load(meta_data.time[:50], skip_data=False)
-```
-
 </RequestExample>

 ## Parameters
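The `TimeInterval(start, end, start_exclusive, end_inclusive)` construction in this file's examples hints at configurable interval boundaries. The containment logic those flags suggest can be sketched with the stdlib only (an illustrative stand-in, not the real `tilebox` `TimeInterval` class; the flag semantics shown here are an assumption):

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical stand-in for tilebox's TimeInterval: shows how
# start_exclusive / end_inclusive flags could map onto containment.
@dataclass
class Interval:
    start: datetime
    end: datetime
    start_exclusive: bool = False
    end_inclusive: bool = False

    def contains(self, t: datetime) -> bool:
        after_start = t > self.start if self.start_exclusive else t >= self.start
        before_end = t <= self.end if self.end_inclusive else t < self.end
        return after_start and before_end


interval = Interval(datetime(2023, 5, 1), datetime(2023, 8, 1))
assert interval.contains(datetime(2023, 5, 1))      # start inclusive by default
assert not interval.contains(datetime(2023, 8, 1))  # end exclusive by default
# ISO 8601 strings like the "2023-05-01 12:45:33.423" in the example parse
# directly with the stdlib:
assert interval.contains(datetime.fromisoformat("2023-05-01 12:45:33.423"))
```

With both flags at their defaults this gives the half-open `[start, end)` behaviour that the "alternative equivalent" example in the diff (both flags `False`) appears to encode.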
api-reference/datasets/loading-datapoint.mdx (1 addition, 9 deletions)

@@ -8,22 +8,14 @@ To load a single data point from a collection using its id, use the find method

 <RequestExample>

-```python Python (Sync)
+```python Python
 datapoint_id = "0186d6b6-66cc-fcfd-91df-bbbff72499c3"
 data = collection.find(
     datapoint_id,
     skip_data = False,
 )
 ```

-```python Python (Async)
-datapoint_id = "0186d6b6-66cc-fcfd-91df-bbbff72499c3"
-data = await collection.find(
-    datapoint_id,
-    skip_data = False,
-)
-```
-
 </RequestExample>

 ## Parameters
api-reference/storage-providers/creating-storage-client.mdx (1 addition, 13 deletions)

@@ -10,7 +10,7 @@ For a complete example look at the [Accessing Open Data](/datasets/open-data#sam

 <RequestExample>

-```python Python (Sync)
+```python Python
 from pathlib import Path
 from tilebox.storage import ASFStorageClient
 # or UmbraStorageClient

@@ -22,18 +22,6 @@ storage_client = ASFStorageClient(
 )
 ```

-```python Python (Async)
-from pathlib import Path
-from tilebox.storage.aio import ASFStorageClient
-# or UmbraStorageClient
-# or CopernicusStorageClient
-
-storage_client = ASFStorageClient(
-    "ASF_USERNAME", "ASF_PASSWORD",
-    cache_directory=Path("./data")
-)
-```
-
 </RequestExample>

 ## Parameters
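The `cache_directory` parameter above separates a caching client (`Path("./data")`) from direct storage access (`cache_directory=None`, as in the direct-access page below). One plausible reading of that switch, sketched with the stdlib only (hypothetical logic and names such as `fake_fetch`; the actual Tilebox client may behave differently):

```python
import tempfile
from pathlib import Path
from typing import Optional

# Hypothetical sketch of cache_directory semantics: download once, then
# serve repeat requests for the same product from the local cache.
def download(name: str, cache_directory: Optional[Path], fetch) -> bytes:
    if cache_directory is None:
        return fetch(name)  # direct access: never cached
    cached = cache_directory / name
    if cached.exists():
        return cached.read_bytes()  # cache hit: no remote access
    data = fetch(name)
    cache_directory.mkdir(parents=True, exist_ok=True)
    cached.write_bytes(data)
    return data


calls = []

def fake_fetch(name):
    calls.append(name)
    return b"granule-bytes"

with tempfile.TemporaryDirectory() as tmp:
    cache = Path(tmp)
    assert download("product.zip", cache, fake_fetch) == b"granule-bytes"
    assert download("product.zip", cache, fake_fetch) == b"granule-bytes"
    assert calls == ["product.zip"]  # second call served from the cache
```

Under this reading, passing `cache_directory=None` (see the direct storage access page) simply bypasses the cache lookup entirely.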
api-reference/storage-providers/deleting-cache.mdx (1 addition, 6 deletions)

@@ -8,14 +8,9 @@ To delete the entire download cache you can use the `destroy_cache` method.

 <RequestExample>

-```python Python (Sync)
+```python Python
 # careful, this will delete the entire cache directory
 storage_client.destroy_cache()
 ```

-```python Python (Async)
-# careful, this will delete the entire cache directory
-await storage_client.destroy_cache()
-```
-
 </RequestExample>
api-reference/storage-providers/deleting-products.mdx (1 addition, 6 deletions)

@@ -8,16 +8,11 @@ To delete downloaded products or images again you can use the `delete` method.

 <RequestExample>

-```python Python (Sync)
+```python Python
 storage_client.delete(path_to_data)
 storage_client.delete(path_to_image)
 ```

-```python Python (Async)
-await storage_client.delete(path_to_data)
-await storage_client.delete(path_to_image)
-```
-
 </RequestExample>

 ## Parameters
api-reference/storage-providers/direct-storage-access.mdx (1 addition, 20 deletions)

@@ -9,7 +9,7 @@ It does not cache any files and expects an `output_dir` parameter for all downlo

 <RequestExample>

-```python Python (Sync)
+```python Python
 from pathlib import Path
 from tilebox.storage import ASFStorageClient
 # or UmbraStorageClient

@@ -28,23 +28,4 @@ path_to_data = direct_storage_client.download(
 )
 ```

-```python Python (Async)
-from pathlib import Path
-from tilebox.storage.aio import ASFStorageClient
-# or UmbraStorageClient
-# or CopernicusStorageClient
-
-direct_storage_client = ASFStorageClient(
-    "ASF_USERNAME", "ASF_PASSWORD",
-    cache_directory=None
-)
-path_to_data = await direct_storage_client.download(
-    datapoint,
-    output_dir=Path("./data"),
-    verify=True,
-    extract=True,
-    show_progress=True,
-)
-```
-
 </RequestExample>
api-reference/storage-providers/downloading-products.mdx (1 addition, 10 deletions)

@@ -8,7 +8,7 @@ You can download the product file for a given data point using the download meth

 <RequestExample>

-```python Python (Sync)
+```python Python
 path_to_data = storage_client.download(
     datapoint,
     verify=True,

@@ -17,15 +17,6 @@ path_to_data = storage_client.download(
 )
 ```

-```python Python (Async)
-path_to_data = await storage_client.download(
-    datapoint,
-    verify=True,
-    extract=True,
-    show_progress=True,
-)
-```
-
 </RequestExample>

 ## Parameters
@@ -8,18 +8,12 @@ In case a storage provider offers quicklook images for products you can download

 <RequestExample>

-```python Python (Sync)
+```python Python
 path_to_image = storage_client.download_quicklook(
     datapoint
 )
 ```

-```python Python (Async)
-path_to_image = await storage_client.download_quicklook(
-    datapoint
-)
-```
-
 </RequestExample>

 ## Parameters
@@ -8,20 +8,13 @@ In interactive environments you can also display quicklook images directly in th

 <RequestExample>

-```python Python (Sync)
+```python Python
 image = storage_client.quicklook(
     datapoint
 )
 image # display the image as the cell output
 ```

-```python Python (Async)
-image = await storage_client.quicklook(
-    datapoint
-)
-image # display the image as the cell output
-```
-
 </RequestExample>

 ## Parameters
api-reference/workflows/cache-access.mdx (1 addition, 13 deletions)

@@ -10,8 +10,7 @@ Make sure to specify dependencies between tasks to ensure that certain cache key
 been written to.

 <RequestExample>
-
-```python Python (Sync)
+```python Python
 class WriterTask(Task):
     def execute(self, context: ExecutionContext):
         context.job_cache["some-key"] = b"my-value"

@@ -20,15 +19,4 @@ class ReaderTask(Task):
     def execute(self, context: ExecutionContext):
         data = context.job_cache["some-key"]
 ```
-
-```python Python (Async)
-class WriterTask(Task):
-    def execute(self, context: ExecutionContext):
-        context.job_cache["some-key"] = b"my-value"
-
-class ReaderTask(Task):
-    def execute(self, context: ExecutionContext):
-        data = context.job_cache["some-key"]
-```
-
 </RequestExample>
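The writer/reader pair above only works if the writer task runs before the reader, which is why the surrounding docs stress declaring dependencies between tasks. The mechanics can be mimicked with a plain dict (a hypothetical `ExecutionContext` stand-in for illustration, not the Tilebox workflows API):

```python
# Minimal sketch of the job-cache idea: a shared per-job mapping that a
# writer task fills and a downstream reader task consumes.
class ExecutionContext:
    def __init__(self, job_cache: dict):
        self.job_cache = job_cache


class WriterTask:
    def execute(self, context: ExecutionContext):
        context.job_cache["some-key"] = b"my-value"


class ReaderTask:
    def execute(self, context: ExecutionContext):
        return context.job_cache["some-key"]


cache: dict = {}
ctx = ExecutionContext(cache)
WriterTask().execute(ctx)  # must run first, or the key is missing
result = ReaderTask().execute(ctx)
assert result == b"my-value"
```

Running the reader before the writer would raise a `KeyError` here, which mirrors why the cache key's writer must be declared as a dependency of its readers.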
api-reference/workflows/cancelling-job.mdx (1 addition, 5 deletions)

@@ -10,14 +10,10 @@ If after cancelling a job you want to resume it, you can [retry](/api-reference/

 <RequestExample>

-```python Python (Sync)
+```python Python
 job_client.cancel(job)
 ```

-```python Python (Async)
-await job_client.cancel(job)
-```
-
 </RequestExample>

 ## Parameters
api-reference/workflows/cluster-management.mdx (2 additions, 16 deletions)

@@ -8,11 +8,11 @@ You can use an instance of the `ClusterClient` to find, list, create, and delete

 <RequestExample>

-```python Python (Sync)
+```python Python
 from tilebox.workflows import Client

 client = Client()
-cluster_client = client.clusters()
+cluster_client = client.clusters()

 # Find, List, Create and Delete clusters
 cluster = cluster_client.find("my-cluster-EdsdUozYprBJDL") # cluster-slug

@@ -22,18 +22,4 @@ cluster = cluster_client.create("My Cluster")
 cluster_client.delete("my-cluster-EdsdUozYprBJDL")
 ```

-```python Python (Async)
-from tilebox.workflows import Client
-
-client = Client()
-cluster_client = await client.clusters()
-
-# Find, List, Create and Delete clusters
-cluster = await cluster_client.find("my-cluster-EdsdUozYprBJDL") # cluster-slug
-all_clusters = await cluster_client.all()
-# will generate a new cluster slug from the provided name
-cluster = await cluster_client.create("My Cluster")
-await cluster_client.delete("my-cluster-EdsdUozYprBJDL")
-```
-
 </RequestExample>