262 docs rfccreate a page automate hypercore using jobs #3728

Merged
90 commits
7bc034a
chore: space to validate initial build.
billy-the-fish Jan 9, 2025
b7e966e
chore: add first hypercore files.
billy-the-fish Jan 9, 2025
3899a6e
chore: add api ref files and alter_table to the build.
billy-the-fish Jan 9, 2025
bd9946c
chore: add remove_columnstore_policy to the build.
billy-the-fish Jan 9, 2025
5ad76e2
chore: add convert_to_columnstore to the build.
billy-the-fish Jan 9, 2025
993cf07
chore: add convert_to_rowstore to the build.
billy-the-fish Jan 9, 2025
d1fde35
chore: updates for build.
billy-the-fish Jan 9, 2025
f727241
chore: updates for build.
billy-the-fish Jan 9, 2025
67c638f
chore: remove links from steps in a list.
billy-the-fish Jan 9, 2025
39c8f8c
chore: remove links from steps in a list.
billy-the-fish Jan 9, 2025
193f0d6
Merge branch 'latest' into 235-docs-rfc-update-the-api-ref-for-hypers…
billy-the-fish Jan 9, 2025
af34323
chore: hopefully this will work now.
billy-the-fish Jan 9, 2025
a80f250
chore: add deprecation for compression.
billy-the-fish Jan 9, 2025
c03cae9
chore: simplify workflow, keep code for main docs.
billy-the-fish Jan 9, 2025
e58a7c9
chore: updates on review. .
billy-the-fish Jan 9, 2025
338bac1
Merge branch 'latest' into 235-docs-rfc-update-the-api-ref-for-hypers…
billy-the-fish Jan 10, 2025
1409dfd
chore: updates on review.
billy-the-fish Jan 10, 2025
d8008e0
Merge branch 'latest' into 235-docs-rfc-update-the-api-ref-for-hypers…
billy-the-fish Jan 13, 2025
a4bf7f1
Merge branch 'latest' into 235-docs-rfc-update-the-api-ref-for-hypers…
billy-the-fish Jan 14, 2025
3aeab3e
feat: update structure for hypercore
billy-the-fish Jan 6, 2025
fe98f85
chore: keep on writing.
billy-the-fish Jan 14, 2025
95f5921
chore: policies doc first draft.
billy-the-fish Jan 14, 2025
89db9df
chore: policies doc first draft.
billy-the-fish Jan 14, 2025
e978d7f
chore: more writing.
billy-the-fish Jan 15, 2025
de1c436
feat: update structure for hypercore
billy-the-fish Jan 6, 2025
834a406
chore: add replacedby link. Has to be hardcoded so will not work unti…
billy-the-fish Jan 15, 2025
e969ec0
Merge branch 'latest' into 235-docs-rfc-update-the-api-ref-for-hypers…
billy-the-fish Jan 15, 2025
fe6d956
Apply suggestions from code review
billy-the-fish Jan 15, 2025
8c87b3c
chore: update on review.
billy-the-fish Jan 15, 2025
ea73f6a
chore: add replacedby link. Has to be hardcoded so will not work unti…
billy-the-fish Jan 15, 2025
0547899
Flatten the structure of the integrations section (#3681)
atovpeko Jan 15, 2025
3bf038e
Apply suggestions from code review
billy-the-fish Jan 15, 2025
cd18d3d
chore: update on review.
billy-the-fish Jan 15, 2025
257bf02
chore: policies doc first draft.
billy-the-fish Jan 14, 2025
cc2ae52
chore: update on review.
billy-the-fish Jan 16, 2025
3a1f706
chore: update on review.
billy-the-fish Jan 16, 2025
25a2bcc
chore: add enable_segmentwise_recompression.
billy-the-fish Jan 16, 2025
c849517
chore: add enable_segmentwise_recompression.
billy-the-fish Jan 16, 2025
5d4ea6e
chore: update on review.
billy-the-fish Jan 16, 2025
be89feb
Merge branch 'latest' into 235-docs-rfc-update-the-api-ref-for-hypers…
billy-the-fish Jan 16, 2025
93ebdde
Merge branch 'latest' into 235-docs-rfc-update-the-api-ref-for-hypers…
billy-the-fish Jan 17, 2025
9c09b7c
Apply suggestions from code review
billy-the-fish Jan 15, 2025
d94b137
chore: add enable_segmentwise_recompression.
billy-the-fish Jan 16, 2025
35bf330
chore: add enable_segmentwise_recompression.
billy-the-fish Jan 16, 2025
2f85716
Remove deprecated flag (#3708)
MetalBlueberry Jan 16, 2025
d62bf57
Integration template added
atovpeko Jan 16, 2025
0d2cd30
Sortout integrations troubleshooting (#3712)
billy-the-fish Jan 16, 2025
b8f1db3
Links cleanup
atovpeko Jan 16, 2025
8167561
3688 docs rfc writing test for anagha (#3689)
billy-the-fish Jan 17, 2025
0ba6e44
chore: policies doc first draft.
billy-the-fish Jan 14, 2025
289952e
chore: update on review.
billy-the-fish Jan 16, 2025
492b206
chore: updates on review.
billy-the-fish Jan 17, 2025
e4a4a57
chore: update deprecation notices.
billy-the-fish Jan 17, 2025
8025d11
chore: update deprecation notices.
billy-the-fish Jan 17, 2025
ce662f7
chore: remove stuff breaking the build.
billy-the-fish Jan 17, 2025
33fd723
feat: add api reference for Hypercore TAM
mkindahl Jan 17, 2025
b1af92d
chore: nearly there.
billy-the-fish Jan 19, 2025
8b875d3
chore: nearly there.
billy-the-fish Jan 20, 2025
cc5030d
Merge branch 'latest' into 235-docs-rfc-update-the-api-ref-for-hypers…
billy-the-fish Jan 20, 2025
868aa6d
Merge branch 'latest' into 262-docs-rfccreate-a-page-automate-hyperco…
billy-the-fish Jan 20, 2025
7ca04c9
chore: remove stuff breaking the build.
billy-the-fish Jan 18, 2025
f8a1b47
chore: update deprecation notices.
billy-the-fish Jan 17, 2025
73285c8
chore: update deprecation notices.
billy-the-fish Jan 17, 2025
4760c11
chore: remove stuff breaking the build.
billy-the-fish Jan 17, 2025
7c55f55
chore: policies doc first draft.
billy-the-fish Jan 14, 2025
4892a68
chore: update on review.
billy-the-fish Jan 16, 2025
562bb5d
chore: updates on review.
billy-the-fish Jan 17, 2025
7e935d9
chore: update on review.
billy-the-fish Jan 16, 2025
03b2b58
chore: updates on review.
billy-the-fish Jan 17, 2025
f796bb0
Merge branch '235-docs-rfc-update-the-api-ref-for-hyperstore-and-comp…
billy-the-fish Jan 20, 2025
6fe303b
Add user manual for indexing Hypercore tables.
mkindahl Jan 24, 2025
f8241c1
Add missing page reference
mkindahl Jan 24, 2025
15219b9
Hypercore add indexing to optimize and modify pages (#3773)
billy-the-fish Feb 3, 2025
2f480c4
Merge branch 'release-2.18.0-main' into 262-docs-rfccreate-a-page-aut…
billy-the-fish Feb 3, 2025
b7b1882
chore: add convert_to_columnstore to the build.
billy-the-fish Jan 9, 2025
493d963
chore: add deprecation for compression.
billy-the-fish Jan 9, 2025
52ba188
chore: updates on review.
billy-the-fish Jan 10, 2025
7c2fb90
feat: update structure for hypercore
billy-the-fish Jan 6, 2025
bc00b56
chore: policies doc first draft.
billy-the-fish Jan 14, 2025
b7f11b4
chore: policies doc first draft.
billy-the-fish Jan 14, 2025
396ead1
chore: add replacedby link. Has to be hardcoded so will not work unti…
billy-the-fish Jan 15, 2025
e1736a5
Flatten the structure of the integrations section (#3681)
atovpeko Jan 15, 2025
d9e1286
chore: update on review.
billy-the-fish Jan 16, 2025
1f6e543
3688 docs rfc writing test for anagha (#3689)
billy-the-fish Jan 17, 2025
4557238
chore: updates on review.
billy-the-fish Jan 17, 2025
18fb1e2
feat: add api reference for Hypercore TAM
mkindahl Jan 17, 2025
35c0e10
chore: review updates.
billy-the-fish Feb 4, 2025
4858775
chore: review updates.
billy-the-fish Feb 4, 2025
7e313bb
Hypercore add indexing to optimize and modify pages (#3773)
billy-the-fish Feb 3, 2025
556c3b8
chore: review updates.
billy-the-fish Feb 4, 2025
89 changes: 89 additions & 0 deletions _partials/_cloud_self_configuration.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,89 @@
import EarlyAccess from "versionContent/_partials/_early_access.mdx";

## Policies

### `timescaledb.max_background_workers (int)`

The maximum number of background worker processes allocated to TimescaleDB. Set this to at least 1 +
the number of databases that load the TimescaleDB extension in the PostgreSQL
instance. The default value is 16.
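For example, on self-hosted TimescaleDB you can raise the limit with `ALTER SYSTEM`. This is a sketch; pick a value that matches your instance, and note that the new value only takes effect after a server restart:

```sql
-- One PostgreSQL instance with 3 TimescaleDB databases: 1 + 3, plus headroom
ALTER SYSTEM SET timescaledb.max_background_workers = 8;
```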

### `timescaledb.enable_tiered_reads (bool)`

Enable [tiered reads][enabling-data-tiering] so that you can query your data normally even when it's distributed across different storage tiers.
Your hypertable is spread across the tiers, so queries and `JOIN`s work and fetch the same data as usual.

By default, tiered data is not accessed by queries. Querying tiered data may slow down query performance
as the data is not stored locally on Timescale's high-performance storage tier.
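For example, a session-level sketch, where `metrics` is a hypothetical hypertable with tiered data:

```sql
-- Opt this session in to reading tiered data
SET timescaledb.enable_tiered_reads = true;

-- This query now transparently includes rows stored in the low-cost tier
SELECT time, device_id, cpu
FROM metrics
WHERE time > now() - INTERVAL '1 year';
```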

## Hypercore features

### `timescaledb.default_hypercore_use_access_method (bool)`

The default value of `hypercore_use_access_method` for functions that take this parameter. This setting is in `user` context, meaning that any user can set it for the session. The default value is `false`.

<EarlyAccess />
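For example, to make functions that take this parameter use the hypercore table access method by default for the current session, a minimal sketch:

```sql
SET timescaledb.default_hypercore_use_access_method = true;
```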

## $SERVICE_LONG tuning

### `timescaledb.disable_load (bool)`

Disable loading of the extension.

### `timescaledb.enable_cagg_reorder_groupby (bool)`
Enable group by reordering

### `timescaledb.enable_chunk_append (bool)`
Enable chunk append node

### `timescaledb.enable_constraint_aware_append (bool)`
Enable constraint-aware append scans

### `timescaledb.enable_constraint_exclusion (bool)`
Enable constraint exclusion

### `timescaledb.enable_job_execution_logging (bool)`
Enable job execution logging

### `timescaledb.enable_optimizations (bool)`
Enable TimescaleDB query optimizations

### `timescaledb.enable_ordered_append (bool)`
Enable ordered append scans

### `timescaledb.enable_parallel_chunk_append (bool)`
Enable parallel chunk append node

### `timescaledb.enable_runtime_exclusion (bool)`
Enable runtime chunk exclusion

### `timescaledb.enable_tiered_reads (bool)`

Enable [tiered reads][enabling-data-tiering] so that you can query your data normally even when it's distributed across different storage tiers.
Your hypertable is spread across the tiers, so queries and `JOIN`s work and fetch the same data as usual.

By default, tiered data is not accessed by queries. Querying tiered data may slow down query performance
as the data is not stored locally on Timescale's high-performance storage tier.


### `timescaledb.enable_transparent_decompression (bool)`
Enable transparent decompression


### `timescaledb.restoring (bool)`
Stop any background workers that could be performing tasks. This is especially useful when you
migrate data to your [$SERVICE_LONG][pg-dump-and-restore] or [self-hosted database][migrate-entire].

### `timescaledb.max_cached_chunks_per_hypertable (int)`
Maximum cached chunks

### `timescaledb.max_open_chunks_per_insert (int)`
Maximum open chunks per insert

### `timescaledb.max_tuples_decompressed_per_dml_transaction (int)`

The maximum number of tuples that can be decompressed during an `INSERT`, `UPDATE`, or `DELETE`.
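For example, a sketch that raises the limit for a large batch update in the current session:

```sql
-- Allow up to 500k tuples to be decompressed in one DML transaction
SET timescaledb.max_tuples_decompressed_per_dml_transaction = 500000;
```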

[enabling-data-tiering]: /use-timescale/:currentVersion:/data-tiering/enabling-data-tiering/
[pg-dump-and-restore]: /migrate/:currentVersion:/pg-dump-and-restore/
[migrate-entire]: /self-hosted/:currentVersion:/migration/entire-database/
6 changes: 1 addition & 5 deletions _partials/_early_access.md
@@ -1,5 +1 @@
<Highlight type="important">
This feature is early access. Early access features might be subject to billing
changes in the future. If you have feedback, reach out to your customer success
manager, or [contact us](https://www.timescale.com/contact/).
</Highlight>
<Tag variant="hollow">Early access: TimescaleDB v2.18.0</Tag>
21 changes: 21 additions & 0 deletions _partials/_hypercore-conversion-overview.md
@@ -0,0 +1,21 @@
When you convert chunks from the rowstore to the columnstore, multiple records are grouped into a single row.
The columns of this row hold an array-like structure that stores all the data. For example, data in the following
rowstore chunk:

| Timestamp | Device ID | Device Type | CPU |Disk IO|
|---|---|---|---|---|
|12:00:01|A|SSD|70.11|13.4|
|12:00:01|B|HDD|69.70|20.5|
|12:00:02|A|SSD|70.12|13.2|
|12:00:02|B|HDD|69.69|23.4|
|12:00:03|A|SSD|70.14|13.0|
|12:00:03|B|HDD|69.70|25.2|

is converted and compressed into arrays in a single row in the columnstore:

|Timestamp|Device ID|Device Type|CPU|Disk IO|
|-|-|-|-|-|
|[12:00:01, 12:00:01, 12:00:02, 12:00:02, 12:00:03, 12:00:03]|[A, B, A, B, A, B]|[SSD, HDD, SSD, HDD, SSD, HDD]|[70.11, 69.70, 70.12, 69.69, 70.14, 69.70]|[13.4, 20.5, 13.2, 23.4, 13.0, 25.2]|

Because a single row takes up less disk space, you can reduce your chunk size by more than 90%, and can also
speed up your queries. This saves on storage costs, and keeps your queries operating at lightning speed.
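One way to see the effect is to compare hypertable size before and after converting older chunks. The following is a sketch: `metrics` is a hypothetical hypertable with columnstore enabled, and `convert_to_columnstore` is called once per chunk returned by `show_chunks`:

```sql
SELECT pg_size_pretty(hypertable_size('metrics')) AS size_before;

-- Convert every chunk older than 7 days to the columnstore
DO $$
DECLARE c REGCLASS;
BEGIN
  FOR c IN SELECT show_chunks('metrics', older_than => INTERVAL '7 days')
  LOOP
    CALL convert_to_columnstore(c);
  END LOOP;
END $$;

SELECT pg_size_pretty(hypertable_size('metrics')) AS size_after;
```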
54 changes: 54 additions & 0 deletions _partials/_hypercore_manual_workflow.md
@@ -0,0 +1,54 @@
import EarlyAccess from "versionContent/_partials/_early_access.mdx";

1. **Stop the jobs that are automatically adding chunks to the columnstore**

Retrieve the list of jobs from the [timescaledb_information.jobs][informational-views] view
to find the job you need to [alter_job][alter_job].

``` sql
SELECT alter_job(JOB_ID, scheduled => false);
```

1. **Convert the chunk you want to update back to the rowstore**

``` sql
CALL convert_to_rowstore('_timescaledb_internal._hyper_2_2_chunk');
```

1. **Update the data in the chunk you added to the rowstore**

Best practice is to structure your [INSERT][insert] statement to include appropriate
partition key values, such as the timestamp. TimescaleDB adds the data to the correct chunk:

``` sql
INSERT INTO metrics (time, value)
VALUES ('2025-01-01T00:00:00', 42);
```

1. **Convert the updated chunks back to the columnstore**

``` sql
CALL convert_to_columnstore('_timescaledb_internal._hyper_1_2_chunk');
```

* <EarlyAccess /> To enable indexing over the specific chunk you are adding to the
columnstore, enable the Hypercore table access method:

``` sql
CALL convert_to_columnstore('_timescaledb_internal._hyper_1_2_chunk',
hypercore_use_access_method => true);
```
You must [enable columnstore on a hypertable][setup-hypercore] before you apply `hypercore_use_access_method`
to a chunk. You can also do this using [ALTER TABLE][compression_alter-table].

1. **Restart the jobs that are automatically converting chunks to the columnstore**

``` sql
SELECT alter_job(JOB_ID, scheduled => true);
```

[alter_job]: /api/:currentVersion:/actions/alter_job/
[informational-views]: /api/:currentVersion:/informational-views/jobs/
[insert]: /use-timescale/:currentVersion:/write-data/insert/
[setup-hypercore]: /use-timescale/:currentVersion:/hypercore/real-time-analytics-in-hypercore/
[compression_alter-table]: /api/:currentVersion:/hypercore/alter_table/
66 changes: 58 additions & 8 deletions _partials/_hypercore_policy_workflow.md
@@ -1,14 +1,38 @@
1. **Enable columnstore**
import EarlyAccess from "versionContent/_partials/_early_access.mdx";

1. **Connect to your $SERVICE_LONG**

In [$CONSOLE][services-portal] open an [SQL editor][in-console-editors]. You can also connect to your service using [psql][connect-using-psql].

1. **Enable columnstore on a hypertable**

Create a [job][job] that automatically moves chunks in a hypertable to the columnstore at a specific time interval.
By default, your table is ordered by the time column. For efficient queries on columnstore data, set
`timescaledb.segmentby` to the column you will use most often to filter your data:

* [Use `ALTER TABLE` for a hypertable][alter_table_hypercore]
```sql
ALTER TABLE stocks_real_time SET (timescaledb.enable_columnstore = true, timescaledb.segmentby = 'symbol');
ALTER TABLE stocks_real_time SET (
timescaledb.enable_columnstore = true,
timescaledb.segmentby = 'symbol');
```
* [Use ALTER MATERIALIZED VIEW for a continuous aggregate][compression_continuous-aggregate]
```sql
ALTER MATERIALIZED VIEW stock_candlestick_daily set (timescaledb.enable_columnstore = true, timescaledb.segmentby = 'symbol' );
ALTER MATERIALIZED VIEW stock_candlestick_daily set (
timescaledb.enable_columnstore = true,
timescaledb.segmentby = 'symbol' );
```
This works because a continuous aggregate is a specialized hypertable.

* <EarlyAccess /> Enable indexing over all data in the rowstore and columnstore:

```sql
ALTER TABLE stocks_real_time
  SET ACCESS METHOD hypercore,
  SET (timescaledb.enable_columnstore = true, timescaledb.segmentby = 'symbol');
```

This is also early access for continuous aggregates.

1. **Add a policy to move chunks to the columnstore at a specific time interval**

For example, 60 days after the data was added to the table:
@@ -17,6 +41,16 @@
```
See [add_columnstore_policy][add_columnstore_policy].

* <EarlyAccess /> To enable indexing over data in the rowstore and the columnstore, tell the policy
to use the Hypercore table access method.

``` sql
CALL add_columnstore_policy(
'older_stock_prices',
after => INTERVAL '60d',
hypercore_use_access_method => true);
```

1. **View the policies that you set or the policies that already exist**

``` sql
@@ -27,8 +61,13 @@

1. **Pause a columnstore policy**

If you need to modify or add a lot of data to a chunk in the columnstore, best practice is to stop any jobs moving
chunks to the columnstore, [convert the chunk back to the rowstore][convert_to_rowstore], then modify the data.
After the update, [convert the chunk to the columnstore][convert_to_columnstore] and restart the jobs.

``` sql
SELECT * FROM timescaledb_information.jobs where proc_name = 'policy_compression' AND relname = 'stocks_real_time'
SELECT * FROM timescaledb_information.jobs where
proc_name = 'policy_compression' AND relname = 'stocks_real_time'

-- Select the JOB_ID from the results

@@ -37,16 +76,19 @@
See [alter_job][alter_job].

1. **Restart a columnstore policy**

``` sql
SELECT alter_job(JOB_ID, scheduled => true);
```
See [alter_job][alter_job].

1. **Remove a columnstore policy**

``` sql
CALL remove_columnstore_policy('older_stock_prices');
```
See [remove_columnstore_policy][remove_columnstore_policy].

1. **Disable columnstore**

If your table has chunks in the columnstore, you have to
@@ -57,9 +99,17 @@
See [alter_table_hypercore][alter_table_hypercore].


[job]: /api/:currentVersion:/actions/add_job/
[alter_table_hypercore]: /api/:currentVersion:/hypercore/alter_table/
[alter_job]: /api/:currentVersion:/actions/alter_job/
[add_columnstore_policy]: /api/:currentVersion:/hypercore/add_columnstore_policy/
[compression_continuous-aggregate]: /api/:currentVersion:/hypercore/alter_materialized_view/
[convert_to_rowstore]: /api/:currentVersion:/hypercore/convert_to_rowstore/
[convert_to_columnstore]: /api/:currentVersion:/hypercore/convert_to_columnstore/
[informational-views]: /api/:currentVersion:/informational-views/jobs/
[add_columnstore_policy]: /api/:currentVersion:/hypercore/add_columnstore_policy/
[hypercore_workflow]: /api/:currentVersion:/hypercore/#hypercore-workflow
[alter_job]: /api/:currentVersion:/actions/alter_job/
[remove_columnstore_policy]: /api/:currentVersion:/hypercore/remove_columnstore_policy/
[convert_to_rowstore]: /api/:currentVersion:/hypercore/convert_to_rowstore/
[in-console-editors]: /getting-started/:currentVersion:/run-queries-from-console/
[services-portal]: https://console.cloud.timescale.com/dashboard/services
[connect-using-psql]: /use-timescale/:currentVersion:/integrations/query-admin/psql#connect-to-your-service
[insert]: /use-timescale/:currentVersion:/write-data/insert/
8 changes: 8 additions & 0 deletions _partials/_prereqs-cloud-and-self.md
@@ -0,0 +1,8 @@
To follow the procedure on this page, you need to:

* Create a [target $SERVICE_LONG][create-service]

This procedure also works for [self-hosted $TIMESCALE_DB][enable-timescaledb].

[create-service]: /getting-started/:currentVersion:/services/
[enable-timescaledb]: /self-hosted/:currentVersion:/install/
5 changes: 5 additions & 0 deletions _partials/_prereqs-cloud-only.md
@@ -0,0 +1,5 @@
To follow the procedure on this page, you need to:

* Create a [target $SERVICE_LONG][create-service]

[create-service]: /getting-started/:currentVersion:/services/
4 changes: 2 additions & 2 deletions _partials/_usage-based-storage-intro.md
@@ -1,9 +1,9 @@
$CLOUD_LONG charges are based on the amount of storage you use. You don't pay for
fixed storage size, and you don't need to worry about scaling disk size as your
data grows; we handle it all for you. To reduce your data costs further,
use [compression][compression], a [data retention policy][data-retention], and
use [Hypercore][hypercore], a [data retention policy][data-retention], and
[tiered storage][data-tiering].

[compression]: /use-timescale/:currentVersion:/compression/about-compression
[hypercore]: /api/:currentVersion:/hypercore/
[data-retention]: /use-timescale/:currentVersion:/data-retention/
[data-tiering]: /use-timescale/:currentVersion:/data-tiering/
6 changes: 4 additions & 2 deletions api/add_policies.md
@@ -25,7 +25,8 @@ timescaledb_experimental.add_policies(
refresh_start_offset "any" = NULL,
refresh_end_offset "any" = NULL,
compress_after "any" = NULL,
drop_after "any" = NULL
drop_after "any" = NULL,
hypercore_use_access_method BOOL = NULL)
) RETURNS BOOL
```

@@ -52,14 +53,15 @@ If you would like to set this add your policies manually (see [`add_continuous_a
|`refresh_end_offset`|`INTERVAL` or `INTEGER`|The end of the continuous aggregate refresh window, expressed as an offset from the policy run time. Must be greater than `refresh_start_offset`.|
|`compress_after`|`INTERVAL` or `INTEGER`|Continuous aggregate chunks are compressed if they exclusively contain data older than this interval.|
|`drop_after`|`INTERVAL` or `INTEGER`|Continuous aggregate chunks are dropped if they exclusively contain data older than this interval.|
|`hypercore_use_access_method`|`BOOLEAN`|Set to `true` to use the hypercore table access method. If set to `NULL`, the value of `timescaledb.default_hypercore_use_access_method` is used. Defaults to `NULL`.|

For arguments that could be either an `INTERVAL` or an `INTEGER`, use an
`INTERVAL` if your time bucket is based on timestamps. Use an `INTEGER` if your
time bucket is based on integers.

## Returns

Returns true if successful.
Returns `true` if successful.

## Sample usage

4 changes: 4 additions & 0 deletions api/compression/add_compression_policy.md
@@ -8,6 +8,7 @@ api:
license: community
type: function
---

import Deprecated2180 from "versionContent/_partials/_deprecated_2_18_0.mdx";

# add_compression_policy() <Tag type="community" content="community" />
@@ -53,6 +54,9 @@ on the type of the time column of the hypertable or continuous aggregate:
|`initial_start`|TIMESTAMPTZ|Time the policy is first run. Defaults to NULL. If omitted, then the schedule interval is the interval from the finish time of the last execution to the next start. If provided, it serves as the origin with respect to which the next_start is calculated |
|`timezone`|TEXT|A valid time zone. If `initial_start` is also specified, subsequent executions of the compression policy are aligned on its initial start. However, daylight savings time (DST) changes may shift this alignment. Set to a valid time zone if this is an issue you want to mitigate. If omitted, UTC bucketing is performed. Defaults to `NULL`.|
|`if_not_exists`|BOOLEAN|Setting to `true` causes the command to fail with a warning instead of an error if a compression policy already exists on the hypertable. Defaults to false.|
|`hypercore_use_access_method`|BOOLEAN|Set to `true` to use the hypercore table access method. If set to `NULL`, the value of `timescaledb.default_hypercore_use_access_method` is used. Defaults to `NULL`.|
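For example, a sketch that compresses chunks older than seven days and opts in to the hypercore table access method, where `metrics` is a hypothetical hypertable with compression enabled:

```sql
SELECT add_compression_policy(
  'metrics',
  compress_after => INTERVAL '7 days',
  hypercore_use_access_method => true);
```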


<!-- vale Google.Acronyms = YES -->
<!-- vale Vale.Spelling = YES -->

1 change: 1 addition & 0 deletions api/compression/alter_table_compression.md
@@ -8,6 +8,7 @@ api:
license: community
type: command
---

import Deprecated2180 from "versionContent/_partials/_deprecated_2_18_0.mdx";

# ALTER TABLE (Compression) <Tag type="community" content="community" />
1 change: 1 addition & 0 deletions api/compression/chunk_compression_stats.md
@@ -8,6 +8,7 @@ api:
license: community
type: function
---

import Deprecated2180 from "versionContent/_partials/_deprecated_2_18_0.mdx";

# chunk_compression_stats() <Tag type="community">Community</Tag>
2 changes: 1 addition & 1 deletion api/compression/compress_chunk.md
@@ -31,7 +31,6 @@ You can get a list of chunks belonging to a hypertable using the
[`show_chunks` function](/api/latest/hypertable/show_chunks/).
</Highlight>


### Required arguments

|Name|Type|Description|
@@ -43,6 +42,7 @@
|Name|Type|Description|
|---|---|---|
| `if_not_compressed` | BOOLEAN | Disabling this will make the function error out on chunks that are already compressed. Defaults to true.|
| `hypercore_use_access_method` | BOOLEAN | Set to `true` to use the hypercore table access method. If set to `NULL`, the value of `timescaledb.default_hypercore_use_access_method` is used. Defaults to `NULL`.|
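For example, a sketch that compresses all chunks older than seven days using the hypercore table access method, where `metrics` is a hypothetical hypertable with compression enabled:

```sql
SELECT compress_chunk(c, hypercore_use_access_method => true)
FROM show_chunks('metrics', older_than => INTERVAL '7 days') AS c;
```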

### Returns
