diff --git a/_partials/_deprecated_2_18_0.md b/_partials/_deprecated_2_18_0.md
new file mode 100644
index 0000000000..7b09a2ef7a
--- /dev/null
+++ b/_partials/_deprecated_2_18_0.md
@@ -0,0 +1 @@
+Old API from [TimescaleDB v2.18.0](https://github.com/timescale/timescaledb/releases/tag/2.18.0)
diff --git a/_partials/_hypercore_policy_workflow.md b/_partials/_hypercore_policy_workflow.md
new file mode 100644
index 0000000000..503bf63971
--- /dev/null
+++ b/_partials/_hypercore_policy_workflow.md
@@ -0,0 +1,65 @@
+1. **Enable columnstore**
+
+ * [Use `ALTER TABLE` for a hypertable][alter_table_hypercore]
+ ```sql
+ ALTER TABLE stocks_real_time SET (timescaledb.enable_columnstore = true, timescaledb.segmentby = 'symbol');
+ ```
+ * [Use `ALTER MATERIALIZED VIEW` for a continuous aggregate][compression_continuous-aggregate]
+ ```sql
+ ALTER MATERIALIZED VIEW stock_candlestick_daily SET (timescaledb.enable_columnstore = true, timescaledb.segmentby = 'symbol');
+ ```
+
+1. **Add a policy to move chunks to the columnstore at a specific time interval**
+
+ For example, 60 days after the data was added to the table:
+ ``` sql
+ CALL add_columnstore_policy('stocks_real_time', after => INTERVAL '60d');
+ ```
+ See [add_columnstore_policy][add_columnstore_policy].
+
+1. **View the policies that you set or the policies that already exist**
+
+ ``` sql
+ SELECT * FROM timescaledb_information.jobs
+ WHERE proc_name='policy_compression';
+ ```
+ See [timescaledb_information.jobs][informational-views].
+
+1. **Pause a columnstore policy**
+
+ ``` sql
+ SELECT * FROM timescaledb_information.jobs WHERE proc_name = 'policy_compression' AND hypertable_name = 'stocks_real_time';
+
+ -- Select the JOB_ID from the results
+
+ SELECT alter_job(JOB_ID, scheduled => false);
+ ```
+ See [alter_job][alter_job].
+
+1. **Restart a columnstore policy**
+ ``` sql
+ SELECT alter_job(JOB_ID, scheduled => true);
+ ```
+ See [alter_job][alter_job].
+
+1. **Remove a columnstore policy**
+ ``` sql
+ CALL remove_columnstore_policy('stocks_real_time');
+ ```
+ See [remove_columnstore_policy][remove_columnstore_policy].
+1. **Disable columnstore**
+
+ If your table has chunks in the columnstore, you have to
+ [convert the chunks back to the rowstore][convert_to_rowstore] before you disable the columnstore.
+ ``` sql
+ ALTER TABLE stocks_real_time SET (timescaledb.enable_columnstore = false);
+ ```
+ See [alter_table_hypercore][alter_table_hypercore].
+
+
+[alter_table_hypercore]: /api/:currentVersion:/hypercore/alter_table/
+[alter_job]: /api/:currentVersion:/actions/alter_job/
+[add_columnstore_policy]: /api/:currentVersion:/hypercore/add_columnstore_policy/
+[informational-views]: /api/:currentVersion:/informational-views/jobs/
+[remove_columnstore_policy]: /api/:currentVersion:/hypercore/remove_columnstore_policy/
+[convert_to_rowstore]: /api/:currentVersion:/hypercore/convert_to_rowstore/
diff --git a/_partials/_multi-node-deprecation.md b/_partials/_multi-node-deprecation.md
index a5e1eda837..31f68e1bd0 100644
--- a/_partials/_multi-node-deprecation.md
+++ b/_partials/_multi-node-deprecation.md
@@ -1,9 +1,10 @@
-[Multi-node support is deprecated][multi-node-deprecation].
+[Multi-node support is sunsetted][multi-node-deprecation].
TimescaleDB v2.13 is the last release that includes multi-node support for PostgreSQL
versions 13, 14, and 15.
+
[multi-node-deprecation]: https://github.com/timescale/timescaledb/blob/main/docs/MultiNodeDeprecation.md
diff --git a/_partials/_since_2_18_0.md b/_partials/_since_2_18_0.md
new file mode 100644
index 0000000000..19a03870be
--- /dev/null
+++ b/_partials/_since_2_18_0.md
@@ -0,0 +1 @@
+Since [TimescaleDB v2.18.0](https://github.com/timescale/timescaledb/releases/tag/2.18.0)
diff --git a/api/alter_table_compression.md b/api/alter_table_compression.md
deleted file mode 100644
index c2990aa5ac..0000000000
--- a/api/alter_table_compression.md
+++ /dev/null
@@ -1,77 +0,0 @@
----
-api_name: ALTER TABLE (Compression)
-excerpt: Change compression settings on a compressed hypertable
-topics: [compression]
-keywords: [compression]
-tags: [settings, hypertables, alter, change]
-api:
- license: community
- type: command
----
-
-# ALTER TABLE (Compression)
-
-'ALTER TABLE' statement is used to turn on compression and set compression
-options.
-
-By itself, this `ALTER` statement alone does not compress a hypertable. To do so, either create a
-compression policy using the [add_compression_policy][add_compression_policy] function or manually
-compress a specific hypertable chunk using the [compress_chunk][compress_chunk] function.
-
-The syntax is:
-
-``` sql
-ALTER TABLE SET (timescaledb.compress,
- timescaledb.compress_orderby = ' [ASC | DESC] [ NULLS { FIRST | LAST } ] [, ...]',
- timescaledb.compress_segmentby = ' [, ...]',
- timescaledb.compress_chunk_time_interval='interval'
-);
-```
-
-## Required arguments
-
-|Name|Type|Description|
-|-|-|-|
-|`timescaledb.compress`|BOOLEAN|Enable or disable compression|
-
-## Optional arguments
-
-|Name|Type|Description|
-|-|-|-|
-|`timescaledb.compress_orderby`|TEXT|Order used by compression, specified in the same way as the ORDER BY clause in a SELECT query. The default is the descending order of the hypertable's time column.|
-|`timescaledb.compress_segmentby`|TEXT|Column list on which to key the compressed segments. An identifier representing the source of the data such as `device_id` or `tags_id` is usually a good candidate. The default is no `segment by` columns.|
-|`timescaledb.compress_chunk_time_interval`|TEXT|EXPERIMENTAL: Set compressed chunk time interval used to roll chunks into. This parameter compresses every chunk, and then irreversibly merges it into a previous adjacent chunk if possible, to reduce the total number of chunks in the hypertable. Note that chunks will not be split up during decompression. It should be set to a multiple of the current chunk interval. This option can be changed independently of other compression settings and does not require the `timescaledb.compress` argument.|
-
-## Parameters
-
-|Name|Type|Description|
-|-|-|-|
-|`table_name`|TEXT|Hypertable that supports compression|
-|`column_name`|TEXT|Column used to order by or segment by|
-|`interval`|TEXT|Time interval used to roll compressed chunks into|
-
-## Sample usage
-
-Configure a hypertable that ingests device data to use compression. Here, if the hypertable
-is often queried about a specific device or set of devices, the compression should be
-segmented using the `device_id` for greater performance.
-
-```sql
-ALTER TABLE metrics SET (timescaledb.compress, timescaledb.compress_orderby = 'time DESC', timescaledb.compress_segmentby = 'device_id');
-```
-
-You can also specify compressed chunk interval without changing other
-compression settings:
-
-```sql
-ALTER TABLE metrics SET (timescaledb.compress_chunk_time_interval = '24 hours');
-```
-
-To disable the previously set option, set the interval to 0:
-
-```sql
-ALTER TABLE metrics SET (timescaledb.compress_chunk_time_interval = '0');
-```
-
-[add_compression_policy]: /api/:currentVersion:/compression/add_compression_policy/
-[compress_chunk]: /api/:currentVersion:/compression/compress_chunk/
diff --git a/api/add_compression_policy.md b/api/compression/add_compression_policy.md
similarity index 95%
rename from api/add_compression_policy.md
rename to api/compression/add_compression_policy.md
index 74210e4142..66637f260f 100644
--- a/api/add_compression_policy.md
+++ b/api/compression/add_compression_policy.md
@@ -8,9 +8,12 @@ api:
license: community
type: function
---
+import Deprecated2180 from "versionContent/_partials/_deprecated_2_18_0.mdx";
# add_compression_policy()
+ Replaced by `add_columnstore_policy()`.
+
Allows you to set a policy by which the system compresses a chunk
automatically in the background after it reaches a given age.
@@ -89,3 +92,4 @@ SELECT add_compression_policy('cpu_weekly', INTERVAL '8 weeks');
[compression_continuous-aggregate]: /api/:currentVersion:/continuous-aggregates/alter_materialized_view/
[set_integer_now_func]: /api/:currentVersion:/hypertable/set_integer_now_func
[informational-views]: /api/:currentVersion:/informational-views/jobs/
+
diff --git a/api/compression/alter_table_compression.md b/api/compression/alter_table_compression.md
new file mode 100644
index 0000000000..1b6827e308
--- /dev/null
+++ b/api/compression/alter_table_compression.md
@@ -0,0 +1,82 @@
+---
+api_name: ALTER TABLE (Compression)
+excerpt: Change compression settings on a compressed hypertable
+topics: [compression]
+keywords: [compression]
+tags: [settings, hypertables, alter, change]
+api:
+ license: community
+ type: command
+---
+import Deprecated2180 from "versionContent/_partials/_deprecated_2_18_0.mdx";
+
+# ALTER TABLE (Compression)
+
+ Replaced by `ALTER TABLE` (Hypercore).
+
+The `ALTER TABLE` statement is used to turn on compression and set compression
+options.
+
+This `ALTER` statement alone does not compress a hypertable. To do so, either create a
+compression policy using the [add_compression_policy][add_compression_policy] function or manually
+compress a specific hypertable chunk using the [compress_chunk][compress_chunk] function.
+
+The syntax is:
+
+``` sql
+ALTER TABLE SET (timescaledb.compress,
+ timescaledb.compress_orderby = ' [ASC | DESC] [ NULLS { FIRST | LAST } ] [, ...]',
+ timescaledb.compress_segmentby = ' [, ...]',
+ timescaledb.compress_chunk_time_interval='interval'
+);
+```
+
+## Required arguments
+
+|Name|Type|Description|
+|-|-|-|
+|`timescaledb.compress`|BOOLEAN|Enable or disable compression|
+
+## Optional arguments
+
+|Name|Type| Description |
+|-|-|--|
+|`timescaledb.compress_orderby`|TEXT| Order used by compression, specified in the same way as the ORDER BY clause in a SELECT query. The default is the descending order of the hypertable's time column. |
+|`timescaledb.compress_segmentby`|TEXT| Column list on which to key the compressed segments. An identifier representing the source of the data such as `device_id` or `tags_id` is usually a good candidate. The default is no `segment by` columns. |
+|`timescaledb.enable_segmentwise_recompression`|TEXT| Set to `OFF` to disable segmentwise recompression on compressed chunks. This can be beneficial for some user workloads where segmentwise recompression is slow, and full recompression is more performant. The default is `ON`. |
+|`timescaledb.compress_chunk_time_interval`|TEXT| EXPERIMENTAL: Set compressed chunk time interval used to roll chunks into. This parameter compresses every chunk, and then irreversibly merges it into a previous adjacent chunk if possible, to reduce the total number of chunks in the hypertable. Note that chunks will not be split up during decompression. It should be set to a multiple of the current chunk interval. This option can be changed independently of other compression settings and does not require the `timescaledb.compress` argument. |
+
+
+## Parameters
+
+|Name|Type|Description|
+|-|-|-|
+|`table_name`|TEXT|Hypertable that supports compression|
+|`column_name`|TEXT|Column used to order by or segment by|
+|`interval`|TEXT|Time interval used to roll compressed chunks into|
+
+## Sample usage
+
+Configure a hypertable that ingests device data to use compression. Here, if the hypertable
+is often queried about a specific device or set of devices, the compression should be
+segmented using the `device_id` for greater performance.
+
+```sql
+ALTER TABLE metrics SET (timescaledb.compress, timescaledb.compress_orderby = 'time DESC', timescaledb.compress_segmentby = 'device_id');
+```
+
+You can also specify compressed chunk interval without changing other
+compression settings:
+
+```sql
+ALTER TABLE metrics SET (timescaledb.compress_chunk_time_interval = '24 hours');
+```
+
+To disable the previously set option, set the interval to 0:
+
+```sql
+ALTER TABLE metrics SET (timescaledb.compress_chunk_time_interval = '0');
+```
+
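+To disable segmentwise recompression, set `timescaledb.enable_segmentwise_recompression` to `OFF`. A minimal
+sketch, based on the optional arguments table above and reusing the `metrics` hypertable from the samples:
+
+```sql
+ALTER TABLE metrics SET (timescaledb.enable_segmentwise_recompression = 'OFF');
+```
+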
+[add_compression_policy]: /api/:currentVersion:/compression/add_compression_policy/
+[compress_chunk]: /api/:currentVersion:/compression/compress_chunk/
diff --git a/api/chunk_compression_stats.md b/api/compression/chunk_compression_stats.md
similarity index 94%
rename from api/chunk_compression_stats.md
rename to api/compression/chunk_compression_stats.md
index 5f7b995937..989a015053 100644
--- a/api/chunk_compression_stats.md
+++ b/api/compression/chunk_compression_stats.md
@@ -8,9 +8,12 @@ api:
license: community
type: function
---
+import Deprecated2180 from "versionContent/_partials/_deprecated_2_18_0.mdx";
# chunk_compression_stats() Community
+ Replaced by `chunk_columnstore_stats()`.
+
Get chunk-specific statistics related to hypertable compression.
All sizes are in bytes.
diff --git a/api/compress_chunk.md b/api/compression/compress_chunk.md
similarity index 88%
rename from api/compress_chunk.md
rename to api/compression/compress_chunk.md
index d8c4962439..d0779260de 100644
--- a/api/compress_chunk.md
+++ b/api/compression/compress_chunk.md
@@ -9,8 +9,12 @@ api:
type: function
---
+import Deprecated2180 from "versionContent/_partials/_deprecated_2_18_0.mdx";
+
# compress_chunk() Community
+ Replaced by `convert_to_columnstore()`.
+
The `compress_chunk` function is used to compress (or recompress, if necessary)
a specific chunk. This is most often used instead of the
[`add_compression_policy`][add_compression_policy] function, when a user
@@ -27,6 +31,7 @@ You can get a list of chunks belonging to a hypertable using the
[`show_chunks` function](/api/latest/hypertable/show_chunks/).
+
### Required arguments
|Name|Type|Description|
diff --git a/api/decompress_chunk.md b/api/compression/decompress_chunk.md
similarity index 86%
rename from api/decompress_chunk.md
rename to api/compression/decompress_chunk.md
index 8d5bf8a33e..5c8c0c9b4d 100644
--- a/api/decompress_chunk.md
+++ b/api/compression/decompress_chunk.md
@@ -7,9 +7,12 @@ api:
license: community
type: function
---
+import Deprecated2180 from "versionContent/_partials/_deprecated_2_18_0.mdx";
# decompress_chunk() Community
+ Replaced by `convert_to_rowstore()`.
+
If you need to modify or add a lot of data to a chunk that has already been
compressed, you should decompress the chunk first. This is especially
useful for backfilling old data.
diff --git a/api/hypertable_compression_stats.md b/api/compression/hypertable_compression_stats.md
similarity index 92%
rename from api/hypertable_compression_stats.md
rename to api/compression/hypertable_compression_stats.md
index ebc37ba065..8e244e968f 100644
--- a/api/hypertable_compression_stats.md
+++ b/api/compression/hypertable_compression_stats.md
@@ -8,9 +8,12 @@ api:
license: community
type: function
---
+import Deprecated2180 from "versionContent/_partials/_deprecated_2_18_0.mdx";
# hypertable_compression_stats() Community
+ Replaced by `hypertable_columnstore_stats()`.
+
Get statistics related to hypertable compression. All sizes are in bytes.
For more information about using hypertables, including chunk size partitioning,
diff --git a/api/compression.md b/api/compression/index.md
similarity index 87%
rename from api/compression.md
rename to api/compression/index.md
index e802dd4a8c..fa9f8e8c83 100644
--- a/api/compression.md
+++ b/api/compression/index.md
@@ -4,8 +4,13 @@ excerpt: Compress your hypertable
keywords: [compression]
tags: [hypertables]
---
+import Deprecated2180 from "versionContent/_partials/_deprecated_2_18_0.mdx";
-# Compression Community
+# Compression (Old API, use Hypercore) Community
+
+ Replaced by Hypercore.
+
+Compression functionality is included in Hypercore.
Before you set up compression, you need to
[configure the hypertable for compression][configure-compression] and then
@@ -30,6 +35,7 @@ Compressed chunks have the following limitations:
after constraint creation.
* [Timescale SkipScan][skipscan] does not currently work on compressed chunks.
+
## Restrictions
In general, compressing a hypertable imposes some limitations on the types
@@ -59,3 +65,4 @@ You can also use advanced insert statements like `ON CONFLICT` and `RETURNING`.
[compress_chunk]: /api/:currentVersion:/compression/compress_chunk/
[configure-compression]: /api/:currentVersion:/compression/alter_table_compression/
[skipscan]: /use-timescale/:currentVersion:/query-data/skipscan/
+[hypercore]: /api/:currentVersion:/hypercore/
diff --git a/api/recompress_chunk.md b/api/compression/recompress_chunk.md
similarity index 92%
rename from api/recompress_chunk.md
rename to api/compression/recompress_chunk.md
index 85d6274881..367c7ee9b2 100644
--- a/api/recompress_chunk.md
+++ b/api/compression/recompress_chunk.md
@@ -9,8 +9,12 @@ api:
type: function
---
+import Deprecated2180 from "versionContent/_partials/_deprecated_2_18_0.mdx";
+
# recompress_chunk()
+ Replaced by `convert_to_columnstore()`.
+
Recompresses a compressed chunk that had more data inserted after compression.
```sql
@@ -41,6 +45,7 @@ the procedure with `CALL`. Don't use a `SELECT` statement.
chunk for the first time, use [`compress_chunk`](/api/latest/compression/compress_chunk/).
+
## Required arguments
|Name|Type|Description|
diff --git a/api/remove_compression_policy.md b/api/compression/remove_compression_policy.md
similarity index 84%
rename from api/remove_compression_policy.md
rename to api/compression/remove_compression_policy.md
index 489d5ec052..de0174c56c 100644
--- a/api/remove_compression_policy.md
+++ b/api/compression/remove_compression_policy.md
@@ -8,9 +8,12 @@ api:
license: community
type: function
---
+import Deprecated2180 from "versionContent/_partials/_deprecated_2_18_0.mdx";
# remove_compression_policy()
+ Replaced by `remove_columnstore_policy()`.
+
If you need to remove the compression policy. To restart policy-based
compression you need to add the policy again. To view the policies that
already exist, see [informational views][informational-views].
diff --git a/api/distributed-hypertables.md b/api/distributed-hypertables.md
index 9919824880..78f727e849 100644
--- a/api/distributed-hypertables.md
+++ b/api/distributed-hypertables.md
@@ -1,5 +1,5 @@
---
-title: Distributed hypertables
+title: Distributed hypertables (Sunsetted v2.14.x)
excerpt: Create and manage distributed hypertables
keywords: [distributed hypertables]
---
@@ -8,7 +8,7 @@ import MultiNodeDeprecation from "versionContent/_partials/_multi-node-deprecati
-# Distributed Hypertables Community
+# Distributed hypertables (Sunsetted v2.14.x) Community
Distributed hypertables are an extension of regular hypertables, available when
using a [multi-node installation][getting-started-multi-node] of TimescaleDB.
diff --git a/api/hypercore/add_columnstore_policy.md b/api/hypercore/add_columnstore_policy.md
new file mode 100644
index 0000000000..2832822d47
--- /dev/null
+++ b/api/hypercore/add_columnstore_policy.md
@@ -0,0 +1,108 @@
+---
+api_name: add_columnstore_policy()
+excerpt: Set a policy to automatically move chunks in a hypertable to the columnstore when they reach a given age.
+topics: [hypercore, columnstore, jobs]
+keywords: [columnstore, hypercore, policies]
+tags: [scheduled jobs, background jobs, automation framework]
+products: [cloud, self_hosted]
+api:
+ license: community
+ type: procedure
+---
+
+import Since2180 from "versionContent/_partials/_since_2_18_0.mdx";
+
+# add_columnstore_policy()
+
+Create a [job][job] that automatically moves chunks in a hypertable to the columnstore after a
+specific time interval.
+
+You enable the columnstore on a hypertable or continuous aggregate before you create a columnstore policy.
+You do this by calling `ALTER TABLE` for hypertables and `ALTER MATERIALIZED VIEW` for continuous aggregates.
+
+To view the policies that you set or the policies that already exist,
+see [informational views][informational-views]. To remove a policy, see [remove_columnstore_policy][remove_columnstore_policy].
+
+
+
+## Samples
+
+To create a columnstore job:
+
+
+
+1. **Enable columnstore**
+
+ * [Use `ALTER TABLE` for a hypertable][compression_alter-table]
+ ```sql
+ ALTER TABLE stocks_real_time SET (timescaledb.enable_columnstore = true, timescaledb.segmentby = 'symbol');
+ ```
+ * [Use `ALTER MATERIALIZED VIEW` for a continuous aggregate][compression_continuous-aggregate]
+ ```sql
+ ALTER MATERIALIZED VIEW stock_candlestick_daily SET (timescaledb.enable_columnstore = true, timescaledb.segmentby = 'symbol');
+ ```
+
+1. **Add a policy to move chunks to the columnstore at a specific time interval**
+
+ For example:
+
+ * 60 days after the data was added to the table:
+ ``` sql
+ CALL add_columnstore_policy('stocks_real_time', after => INTERVAL '60d');
+ ```
+ * 3 months prior to the moment you run the query:
+
+ ``` sql
+ CALL add_columnstore_policy('stocks_real_time', created_before => INTERVAL '3 months');
+ ```
+ * With an integer-based time column:
+
+ ``` sql
+ CALL add_columnstore_policy('table_with_bigint_time', BIGINT '600000');
+ ```
+ * Older than eight weeks:
+
+ ``` sql
+ CALL add_columnstore_policy('cpu_weekly', INTERVAL '8 weeks');
+ ```
+
+1. **View the policies that you set or the policies that already exist**
+
+ ``` sql
+ SELECT * FROM timescaledb_information.jobs
+ WHERE proc_name='policy_compression';
+ ```
+ See [timescaledb_information.jobs][informational-views].
+
+
+
+## Arguments
+
+Calls to `add_columnstore_policy` require either `after` or `created_before`, but cannot have both.
+
+
+
+
+| Name | Type | Default | Required | Description |
+|--|--|--|--|--|
+| `hypertable` |REGCLASS| - | ✔ | Name of the hypertable or continuous aggregate to run this [job][job] on.|
+| `after` |INTERVAL or INTEGER|- | ✖ | Add chunks containing data older than `now - {after}::interval` to the columnstore. Use an object type that matches the time column type in `hypertable`: for a TIMESTAMP, TIMESTAMPTZ, or DATE column, use an INTERVAL; for integer-based timestamps, use an integer type and set the [integer_now_func][set_integer_now_func]. `after` is mutually exclusive with `created_before`. |
+| `created_before` |INTERVAL| NULL | ✖ | Add chunks with a creation time of `now() - created_before` to the columnstore. `created_before` is not supported for continuous aggregates, and is mutually exclusive with `after`. |
+| `schedule_interval` |INTERVAL| 12 hours when [chunk_time_interval][chunk_time_interval] >= `1 day` for `hypertable`. Otherwise `chunk_time_interval` / `2`. | ✖ | Set the interval between the finish time of the last execution of this policy and the next start.|
+| `initial_start` |TIMESTAMPTZ| The interval from the finish time of the last execution to the [next_start][next-start].| ✖| Set the time this job is first run. This is also the time that `next_start` is calculated from.|
+| `timezone` |TEXT| UTC. However, daylight savings time (DST) changes may shift this alignment. | ✖ | Set to a valid time zone to mitigate DST shifting. If `initial_start` is set, subsequent executions of this policy are aligned on `initial_start`.|
+| `if_not_exists` |BOOLEAN| `false` | ✖ | Set to `true` so this job fails with a warning rather than an error if a columnstore policy already exists on `hypertable` |
+
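+For example, a minimal sketch, assuming the `stocks_real_time` hypertable from the samples above, that combines
+`after` with the scheduling arguments described in this table:
+
+``` sql
+CALL add_columnstore_policy(
+  'stocks_real_time',
+  after => INTERVAL '60d',
+  schedule_interval => INTERVAL '6 hours',
+  initial_start => TIMESTAMPTZ '2025-01-01 00:00:00+00',
+  timezone => 'UTC'
+);
+```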
+
+
+
+
+
+[compression_alter-table]: /api/:currentVersion:/hypercore/alter_table/
+[compression_continuous-aggregate]: /api/:currentVersion:/continuous-aggregates/alter_materialized_view/
+[set_integer_now_func]: /api/:currentVersion:/hypertable/set_integer_now_func
+[informational-views]: /api/:currentVersion:/informational-views/jobs/
+[chunk_time_interval]: /api/:currentVersion:/hypertable/set_chunk_time_interval/
+[next-start]: /api/:currentVersion:/informational-views/jobs/#arguments
+[job]: /api/:currentVersion:/actions/add_job/
+[remove_columnstore_policy]: /api/:currentVersion:/hypercore/remove_columnstore_policy/
diff --git a/api/hypercore/alter_table.md b/api/hypercore/alter_table.md
new file mode 100644
index 0000000000..dd42cd4006
--- /dev/null
+++ b/api/hypercore/alter_table.md
@@ -0,0 +1,81 @@
+---
+api_name: ALTER TABLE (Hypercore)
+excerpt: Enable the columnstore for a hypertable.
+topics: [hypercore, columnstore]
+keywords: [columnstore, hypercore]
+tags: [settings, hypertables, alter, change]
+api:
+ license: community
+ type: command
+products: [cloud, self_hosted]
+---
+import Since2180 from "versionContent/_partials/_since_2_18_0.mdx";
+
+# ALTER TABLE (Hypercore)
+
+Enable the columnstore for a hypertable.
+
+After you have enabled the columnstore, either:
+- [add_columnstore_policy][add_columnstore_policy]: create a [job][job] that automatically moves chunks in a hypertable to the columnstore at a
+ specific time interval.
+- [convert_to_columnstore][convert_to_columnstore]: manually add a specific chunk in a hypertable to the columnstore.
+
+
+
+## Samples
+
+To enable the columnstore:
+
+- **Configure a hypertable that ingests device data to use the columnstore**:
+
+ In this example, the `metrics` hypertable is often queried about a specific device or set of devices.
+ Segment the hypertable by `device_id` to improve query performance.
+
+ ```sql
+ ALTER TABLE metrics SET (timescaledb.enable_columnstore, timescaledb.orderby = 'time DESC', timescaledb.segmentby = 'device_id');
+ ```
+
+- **Specify the chunk interval without changing other columnstore settings**:
+
+ - Set the time interval when chunks are added to the columnstore:
+
+ ```sql
+ ALTER TABLE metrics SET (timescaledb.compress_chunk_time_interval = '24 hours');
+ ```
+
+ - To disable the option you set previously, set the interval to 0:
+
+ ```sql
+ ALTER TABLE metrics SET (timescaledb.compress_chunk_time_interval = '0');
+ ```
+
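+- **Disable the columnstore**:
+
+  If the hypertable has chunks in the columnstore, [convert them back to the rowstore][convert_to_rowstore]
+  before you disable the columnstore:
+
+  ```sql
+  ALTER TABLE metrics SET (timescaledb.enable_columnstore = false);
+  ```
+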
+## Arguments
+
+The syntax is:
+
+``` sql
+ALTER TABLE SET (timescaledb.enable_columnstore,
+ timescaledb.orderby = ' [ASC | DESC] [ NULLS { FIRST | LAST } ] [, ...]',
+ timescaledb.segmentby = ' [, ...]',
+ timescaledb.compress_chunk_time_interval='interval',
+ timescaledb.enable_segmentwise_recompression = 'ON' | 'OFF'
+);
+```
+
+| Name | Type | Default | Required | Description |
+|--|--|------------------------------------------------------|--|--|
+|`table_name`|TEXT| - | ✖ | The hypertable to enable the columnstore for. |
+|`timescaledb.enable_columnstore`|BOOLEAN| `true` | ✖ | Enable columnstore. |
+|`timescaledb.orderby`|TEXT| Descending order on the time column in `table_name`. | ✖| The order in which items are used in the columnstore. Specified in the same way as an `ORDER BY` clause in a `SELECT` query. |
+|`timescaledb.segmentby`|TEXT| No segmentation by column. | ✖| Set the list of columns used to segment data in the columnstore for `table`. An identifier representing the source of the data such as `device_id` or `tags_id` is usually a good candidate. |
+|`column_name`|TEXT| - | ✖ | The name of the column to `orderby` or `segmentby`. |
+|`timescaledb.compress_chunk_time_interval`|TEXT| - | ✖ | EXPERIMENTAL: reduce the total number of chunks in the columnstore for `table`. If you set `compress_chunk_time_interval`, chunks added to the columnstore are merged with the previous adjacent chunk within `chunk_time_interval` whenever possible. These chunks are irreversibly merged. If you call [convert_to_rowstore][convert_to_rowstore], merged chunks are not split up. You can set `compress_chunk_time_interval` independently of other compression settings; `timescaledb.enable_columnstore` is not required. |
+|`interval`|TEXT| - | ✖ | Set to a multiple of the [chunk_time_interval][chunk_time_interval] for `table`.|
+|`timescaledb.enable_segmentwise_recompression`|TEXT| ON | ✖| Set to `OFF` to disable segmentwise recompression on chunks in the columnstore. This can be beneficial for some user workloads where segmentwise recompression is slow, and full recompression is more performant. |
+
+
+[chunk_time_interval]: /api/:currentVersion:/hypertable/set_chunk_time_interval/
+[add_columnstore_policy]: /api/:currentVersion:/hypercore/add_columnstore_policy/
+[convert_to_columnstore]: /api/:currentVersion:/hypercore/convert_to_columnstore/
+[convert_to_rowstore]: /api/:currentVersion:/hypercore/convert_to_rowstore/
+[job]: /api/:currentVersion:/actions/add_job/
diff --git a/api/hypercore/chunk_columnstore_settings.md b/api/hypercore/chunk_columnstore_settings.md
new file mode 100644
index 0000000000..4940ca2599
--- /dev/null
+++ b/api/hypercore/chunk_columnstore_settings.md
@@ -0,0 +1,57 @@
+---
+api_name: timescaledb_information.chunk_columnstore_settings
+excerpt: Get information about settings on each chunk in the columnstore
+topics: [hypercore, information, columnstore, chunk]
+keywords: [columnstore, hypercore, chunk, information]
+tags: [chunk, columnstore settings]
+api:
+ license: community
+ type: view
+---
+import Since2180 from "versionContent/_partials/_since_2_18_0.mdx";
+
+# timescaledb_information.chunk_columnstore_settings
+
+Retrieve the compression settings for each chunk in the columnstore.
+
+
+
+## Samples
+
+To retrieve information about settings:
+
+* **Show settings for all chunks in the columnstore**:
+
+ ```sql
+ SELECT * FROM timescaledb_information.chunk_columnstore_settings;
+ ```
+ Returns:
+ ```sql
+ hypertable | chunk | segmentby | orderby
+ ------------+-------+-----------+---------
+ measurements | _timescaledb_internal._hyper_1_1_chunk| | "time" DESC
+ ```
+
+* **Find all chunk columnstore settings for a specific hypertable**:
+
+ ```sql
+ SELECT * FROM timescaledb_information.chunk_columnstore_settings WHERE hypertable::TEXT LIKE 'metrics';
+ ```
+ Returns:
+ ```sql
+ hypertable | chunk | segmentby | orderby
+ ------------+-------+-----------+---------
+ metrics | _timescaledb_internal._hyper_2_3_chunk | metric_id | "time"
+ ```
+
+## Returns
+
+|Column|Type| Description |
+|-|-|-|
+|`hypertable`|`REGCLASS`| The name of a hypertable in the columnstore |
+|`chunk`|`REGCLASS`| The name of a chunk in `hypertable` |
+|`segmentby`|`TEXT`| A list of columns used to segment `hypertable` |
+|`orderby`|`TEXT`| A list of columns used to order data in `hypertable`, along with ordering and NULL ordering information |
+
diff --git a/api/hypercore/chunk_columnstore_stats.md b/api/hypercore/chunk_columnstore_stats.md
new file mode 100644
index 0000000000..c903a95760
--- /dev/null
+++ b/api/hypercore/chunk_columnstore_stats.md
@@ -0,0 +1,109 @@
+---
+api_name: chunk_columnstore_stats()
+excerpt: Get statistics about chunks in the columnstore
+topics: [hypercore, columnstore]
+keywords: [columnstore, hypercore, statistics, chunks, information]
+tags: [disk space, schemas, size]
+api:
+ license: community
+ type: procedure
+---
+import Since2180 from "versionContent/_partials/_since_2_18_0.mdx";
+
+# chunk_columnstore_stats() Community
+
+Retrieve statistics about the chunks in the columnstore.
+
+`chunk_columnstore_stats` returns the size of chunks in the columnstore. These values are computed when you call either:
+- [add_columnstore_policy][add_columnstore_policy]: create a [job][job] that automatically moves chunks in a hypertable to the columnstore at a
+ specific time interval.
+- [convert_to_columnstore][convert_to_columnstore]: manually add a specific chunk in a hypertable to the columnstore.
+
+
+Inserting into a chunk in the columnstore does not change the chunk size. For more information about how to compute
+chunk sizes, see [chunks_detailed_size][chunks_detailed_size].
+
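+For example, a minimal sketch that lists per-chunk sizes for the `conditions` hypertable used in the samples below:
+
+```sql
+SELECT chunk_name, total_bytes FROM chunks_detailed_size('conditions');
+```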
+
+
+## Samples
+
+To retrieve statistics about chunks:
+
+- **Show the status of the first two chunks in the `conditions` hypertable**:
+ ```sql
+ SELECT * FROM chunk_columnstore_stats('conditions')
+ ORDER BY chunk_name LIMIT 2;
+ ```
+ Returns:
+ ```sql
+ -[ RECORD 1 ]------------------+----------------------
+ chunk_schema | _timescaledb_internal
+ chunk_name | _hyper_1_1_chunk
+ compression_status | Uncompressed
+ before_compression_table_bytes |
+ before_compression_index_bytes |
+ before_compression_toast_bytes |
+ before_compression_total_bytes |
+ after_compression_table_bytes |
+ after_compression_index_bytes |
+ after_compression_toast_bytes |
+ after_compression_total_bytes |
+ node_name |
+ -[ RECORD 2 ]------------------+----------------------
+ chunk_schema | _timescaledb_internal
+ chunk_name | _hyper_1_2_chunk
+ compression_status | Compressed
+ before_compression_table_bytes | 8192
+ before_compression_index_bytes | 32768
+ before_compression_toast_bytes | 0
+ before_compression_total_bytes | 40960
+ after_compression_table_bytes | 8192
+ after_compression_index_bytes | 32768
+ after_compression_toast_bytes | 8192
+ after_compression_total_bytes | 49152
+ node_name |
+ ```
+
+- **Use `pg_size_pretty` to return a more human-friendly format**:
+
+ ```sql
+ SELECT pg_size_pretty(after_compression_total_bytes) AS total
+ FROM chunk_columnstore_stats('conditions')
+ WHERE compression_status = 'Compressed';
+ ```
+ Returns:
+ ```sql
+ -[ RECORD 1 ]--+------
+ total | 48 kB
+ ```
+
+
+## Arguments
+
+| Name | Type | Default | Required | Description |
+|--|--|--|--|--|
+|`hypertable`|`REGCLASS`|-|✖| The name of a hypertable |
+
+
+## Returns
+
+|Column|Type| Description |
+|-|-|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+|`chunk_schema`|TEXT| Schema name of the chunk. |
+|`chunk_name`|TEXT| Name of the chunk. |
+|`compression_status`|TEXT| Current compression status of the chunk. |
+|`before_compression_table_bytes`|BIGINT| Size of the heap before compression. Returns `NULL` if `compression_status` == `Uncompressed`. |
+|`before_compression_index_bytes`|BIGINT| Size of all the indexes before compression. Returns `NULL` if `compression_status` == `Uncompressed`. |
+|`before_compression_toast_bytes`|BIGINT| Size the TOAST table before compression. Returns `NULL` if `compression_status` == `Uncompressed`. |
+|`before_compression_total_bytes`|BIGINT| Size of the entire chunk table (`before_compression_table_bytes` + `before_compression_index_bytes` + `before_compression_toast_bytes`) before compression. Returns `NULL` if `compression_status` == `Uncompressed`.|
+|`after_compression_table_bytes`|BIGINT| Size of the heap after compression. Returns `NULL` if `compression_status` == `Uncompressed`. |
+|`after_compression_index_bytes`|BIGINT| Size of all the indexes after compression. Returns `NULL` if `compression_status` == `Uncompressed`. |
+|`after_compression_toast_bytes`|BIGINT| Size the TOAST table after compression. Returns `NULL` if `compression_status` == `Uncompressed`. |
+|`after_compression_total_bytes`|BIGINT| Size of the entire chunk table (`after_compression_table_bytes` + `after_compression_index_bytes `+ `after_compression_toast_bytes`) after compression. Returns `NULL` if `compression_status` == `Uncompressed`. |
+|`node_name`|TEXT| **DEPRECATED**: nodes the chunk is located on, applicable only to distributed hypertables. |
+
+
+[add_columnstore_policy]: /api/:currentVersion:/hypercore/add_columnstore_policy/
+[convert_to_columnstore]: /api/:currentVersion:/hypercore/convert_to_columnstore/
+[job]: /api/:currentVersion:/actions/add_job/
+[chunks_detailed_size]: /api/:currentVersion:/hypertable/chunks_detailed_size/
diff --git a/api/hypercore/convert_to_columnstore.md b/api/hypercore/convert_to_columnstore.md
new file mode 100644
index 0000000000..08b79dc5f2
--- /dev/null
+++ b/api/hypercore/convert_to_columnstore.md
@@ -0,0 +1,55 @@
+---
+api_name: convert_to_columnstore()
+excerpt: Manually add a chunk to the columnstore
+topics: [hypercore, columnstore, rowstore]
+keywords: [columnstore, rowstore, hypercore]
+tags: [chunks, hypercore]
+api:
+ license: community
+ type: procedure
+---
+import Since2180 from "versionContent/_partials/_since_2_18_0.mdx";
+
+# convert_to_columnstore() Community
+
+Manually convert a specific chunk in the hypertable rowstore to the columnstore.
+
+Although `convert_to_columnstore` gives you more fine-grained control, best practice is to use
+[`add_columnstore_policy`][add_columnstore_policy]. You can also add chunks to the columnstore at a specific time
+by manually [running the job associated with your columnstore policy][run-job].
+
+To move a chunk from the columnstore back to the rowstore, use [`convert_to_rowstore`][convert_to_rowstore].
+
+
+
+## Samples
+
+To convert a single chunk to columnstore:
+
+``` sql
+CALL convert_to_columnstore('_timescaledb_internal._hyper_1_2_chunk');
+```
+
+To retrieve the chunks belonging to a hypertable, call [`show_chunks`](/api/latest/hypertable/show_chunks/).
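+
+For example, a minimal sketch that lists chunks containing data older than three days in a hypothetical
+`conditions` hypertable:
+
+``` sql
+SELECT show_chunks('conditions', older_than => INTERVAL '3 days');
+```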
+
+
+## Arguments
+
+| Name | Type | Default | Required | Description |
+|----------------------|--|---------|--|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| `chunk` | REGCLASS | - |✔| Name of the chunk to add to the columnstore. |
+| `if_not_columnstore` | BOOLEAN | `true` |✖| Set to `false` so this job fails with an error rather than a warning if `chunk` is already in the columnstore. |
+| `recompress` | BOOLEAN | `false` |✖| Set to `true` to recompress data that was partially compressed as a result of modifications to `chunk`. This is usually more efficient, but in some cases it can result in a more expensive operation. Set to `false` to completely decompress and recompress the data in `chunk`. |
+
+## Returns
+
+Calls to `convert_to_columnstore` return:
+
+| Column | Type | Description |
+|-------------------|--------------------|----------------------------------------------------------------------------------------------------|
+| `chunk name` or `table` | REGCLASS or String | The name of the chunk added to the columnstore, or a table-like result set with zero or more rows. |
+
+
+[add_columnstore_policy]: /api/:currentVersion:/hypercore/add_columnstore_policy/
+[run-job]: /api/:currentVersion:/actions/run_job/
+[convert_to_rowstore]: /api/:currentVersion:/hypercore/convert_to_rowstore/
diff --git a/api/hypercore/convert_to_rowstore.md b/api/hypercore/convert_to_rowstore.md
new file mode 100644
index 0000000000..f388470442
--- /dev/null
+++ b/api/hypercore/convert_to_rowstore.md
@@ -0,0 +1,82 @@
+---
+api_name: convert_to_rowstore()
+excerpt: Move a chunk from the columnstore to the rowstore
+topics: [hypercore, columnstore]
+keywords: [columnstore, hypercore, rowstore, chunks, backfilling]
+api:
+ license: community
+ type: procedure
+---
+
+import Since2180 from "versionContent/_partials/_since_2_18_0.mdx";
+
+# convert_to_rowstore() Community
+
+Manually convert a specific chunk in the hypertable columnstore to the rowstore.
+
+If you need to modify or add a lot of data to a chunk in the columnstore, best practice is to stop
+any [jobs][job] moving chunks to the columnstore, convert the chunk back to the rowstore, then modify the
+data. After the update, [convert the chunk to the columnstore][convert_to_columnstore] and restart the jobs.
+This workflow is especially useful if you need to backfill old data.
+
+
+
+## Samples
+
+To modify or add a lot of data to a chunk:
+
+
+
+1. **Stop the jobs that are automatically adding chunks to the columnstore**
+
+ Retrieve the list of jobs from the [timescaledb_information.jobs][informational-views] view
+ to find the job you need to [alter_job][alter_job].
+
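+ For example, a minimal sketch that lists candidate jobs, using columns from the jobs view:
+
+ ``` sql
+ SELECT job_id, proc_name, hypertable_name
+ FROM timescaledb_information.jobs
+ WHERE proc_name = 'policy_compression';
+ ```
+
+ Then pause the job you found:
+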
+ ``` sql
+ SELECT alter_job(JOB_ID, scheduled => false);
+ ```
+
+1. **Convert the chunks to update back to the rowstore**
+
+ ``` sql
+ CALL convert_to_rowstore('_timescaledb_internal._hyper_2_2_chunk');
+ ```
+
+1. **Update the data in the chunk you added to the rowstore**
+
+ Best practice is to structure your [INSERT][insert] statement to include appropriate
+ partition key values, such as the timestamp. TimescaleDB adds the data to the correct chunk:
+
+ ``` sql
+ INSERT INTO metrics (time, value)
+ VALUES ('2025-01-01T00:00:00', 42);
+ ```
+
+1. **Convert the updated chunks back to the columnstore**
+
+ ``` sql
+ CALL convert_to_columnstore('_timescaledb_internal._hyper_2_2_chunk');
+ ```
+
+1. **Restart the jobs that are automatically converting chunks to the columnstore**
+
+ ``` sql
+ SELECT alter_job(JOB_ID, scheduled => true);
+ ```
+
+
+
+## Arguments
+
+| Name | Type | Default | Required | Description|
+|--|----------|---------|----------|-|
+|`chunk`| REGCLASS | - | ✔ | Name of the chunk to be moved to the rowstore. |
+|`if_compressed`| BOOLEAN | `true` | ✖ | Set to `false` so this job fails with an error rather than a warning if `chunk` is not in the columnstore. |
+
+[job]: /api/:currentVersion:/actions/
+[alter_job]: /api/:currentVersion:/actions/alter_job/
+[convert_to_columnstore]: /api/:currentVersion:/hypercore/convert_to_columnstore/
+[informational-views]: /api/:currentVersion:/informational-views/jobs/
+[insert]: /use-timescale/:currentVersion:/write-data/insert/
+
+
diff --git a/api/hypercore/hypertable_columnstore_settings.md b/api/hypercore/hypertable_columnstore_settings.md
new file mode 100644
index 0000000000..3752e7b8ae
--- /dev/null
+++ b/api/hypercore/hypertable_columnstore_settings.md
@@ -0,0 +1,62 @@
+---
+api_name: timescaledb_information.hypertable_columnstore_settings
+excerpt: Get information about columnstore settings for all hypertables
+topics: [hypercore, information, columnstore, hypertable]
+keywords: [columnstore, hypercore, hypertable, information]
+tags: [hypertable columnstore, columnstore settings]
+api:
+ license: community
+ type: view
+---
+import Since2180 from "versionContent/_partials/_since_2_18_0.mdx";
+
+# timescaledb_information.hypertable_columnstore_settings
+
+Retrieve information about the settings for all hypertables in the columnstore.
+
+
+
+## Samples
+
+To retrieve information about settings:
+
+- **Show columnstore settings for all hypertables**:
+
+ ```sql
+ SELECT * FROM timescaledb_information.hypertable_columnstore_settings;
+ ```
+ Returns:
+ ```sql
+ hypertable | measurements
+ segmentby |
+ orderby | "time" DESC
+ compress_interval_length |
+ ```
+
+- **Retrieve columnstore settings for a specific hypertable**:
+
+ ```sql
+ SELECT * FROM timescaledb_information.hypertable_columnstore_settings WHERE hypertable::TEXT LIKE 'metrics';
+ ```
+ Returns:
+ ```sql
+ hypertable | metrics
+ segmentby | metric_id
+ orderby | "time"
+ compress_interval_length |
+ ```
+
+## Returns
+
+|Name|Type| Description |
+|-|-|---------------------------------------------------------------------------------------------------------------------|
+|`hypertable`|`REGCLASS`| A hypertable which has the [columnstore enabled][compression_alter-table]. |
+|`segmentby`|`TEXT`| The list of columns used to segment data |
+|`orderby`|`TEXT`| List of columns used to order the data, along with ordering and NULL ordering information |
+|`compress_interval_length`|`TEXT`| Interval used for [rolling up chunks during compression][rollup-compression] |
+
+
+
+[rollup-compression]: /use-timescale/:currentVersion:/compression/manual-compression/#roll-up-uncompressed-chunks-when-compressing
+[compression_alter-table]: /api/:currentVersion:/hypercore/alter_table/
+
diff --git a/api/hypercore/hypertable_columnstore_stats.md b/api/hypercore/hypertable_columnstore_stats.md
new file mode 100644
index 0000000000..a6558e3a59
--- /dev/null
+++ b/api/hypercore/hypertable_columnstore_stats.md
@@ -0,0 +1,82 @@
+---
+api_name: hypertable_columnstore_stats()
+excerpt: Get compression statistics for the columnstore
+topics: [hypercore, columnstore]
+keywords: [hypercore, columnstore, hypertables, information]
+tags: [statistics, size]
+api:
+ license: community
+ type: procedure
+---
+import Since2180 from "versionContent/_partials/_since_2_18_0.mdx";
+
+# hypertable_columnstore_stats() Community
+
+Retrieve compression statistics for the columnstore.
+
+For more information about using hypertables, including chunk size partitioning,
+see [hypertables][hypertable-docs].
+
+
+
+## Samples
+
+To retrieve compression statistics:
+
+- **Show the compression status of the `conditions` hypertable**:
+
+ ```sql
+ SELECT * FROM hypertable_columnstore_stats('conditions');
+ ```
+ Returns:
+ ```sql
+ -[ RECORD 1 ]------------------+------
+ total_chunks | 4
+ number_compressed_chunks | 1
+ before_compression_table_bytes | 8192
+ before_compression_index_bytes | 32768
+ before_compression_toast_bytes | 0
+ before_compression_total_bytes | 40960
+ after_compression_table_bytes | 8192
+ after_compression_index_bytes | 32768
+ after_compression_toast_bytes | 8192
+ after_compression_total_bytes | 49152
+ node_name |
+ ```
+
+- **Use `pg_size_pretty` to get the output in a more human-friendly format**:
+
+ ```sql
+ SELECT pg_size_pretty(after_compression_total_bytes) as total
+ FROM hypertable_columnstore_stats('conditions');
+ ```
+ Returns:
+ ```sql
+ -[ RECORD 1 ]--+------
+ total | 48 kB
+ ```
+
+## Arguments
+
+|Name|Type|Description|
+|-|-|-|
+|`hypertable`|REGCLASS|Hypertable to show statistics for|
+
+## Returns
+
+|Column|Type|Description|
+|-|-|-|
+|`total_chunks`|BIGINT|The number of chunks used by the hypertable. Returns `NULL` if `compression_status` == `Uncompressed`. |
+|`number_compressed_chunks`|INTEGER|The number of chunks used by the hypertable that are currently compressed. Returns `NULL` if `compression_status` == `Uncompressed`. |
+|`before_compression_table_bytes`|BIGINT|Size of the heap before compression. Returns `NULL` if `compression_status` == `Uncompressed`. |
+|`before_compression_index_bytes`|BIGINT|Size of all the indexes before compression. Returns `NULL` if `compression_status` == `Uncompressed`. |
+|`before_compression_toast_bytes`|BIGINT|Size the TOAST table before compression. Returns `NULL` if `compression_status` == `Uncompressed`. |
+|`before_compression_total_bytes`|BIGINT|Size of the entire table (`before_compression_table_bytes` + `before_compression_index_bytes` + `before_compression_toast_bytes`) before compression. Returns `NULL` if `compression_status` == `Uncompressed`.|
+|`after_compression_table_bytes`|BIGINT|Size of the heap after compression. Returns `NULL` if `compression_status` == `Uncompressed`. |
+|`after_compression_index_bytes`|BIGINT|Size of all the indexes after compression. Returns `NULL` if `compression_status` == `Uncompressed`. |
+|`after_compression_toast_bytes`|BIGINT|Size the TOAST table after compression. Returns `NULL` if `compression_status` == `Uncompressed`. |
+|`after_compression_total_bytes`|BIGINT|Size of the entire table (`after_compression_table_bytes` + `after_compression_index_bytes `+ `after_compression_toast_bytes`) after compression. Returns `NULL` if `compression_status` == `Uncompressed`. |
+|`node_name`|TEXT|nodes on which the hypertable is located, applicable only to distributed hypertables. Returns `NULL` if `compression_status` == `Uncompressed`. |
+
+[hypertable-docs]: /use-timescale/:currentVersion:/hypertables/
+
diff --git a/api/hypercore/index.md b/api/hypercore/index.md
new file mode 100644
index 0000000000..599ef5810f
--- /dev/null
+++ b/api/hypercore/index.md
@@ -0,0 +1,92 @@
+---
+title: Hypercore
+excerpt: Reference information about the TimescaleDB hybrid row-columnar storage engine
+keywords: [hypercore]
+tags: [hypercore]
+products: [cloud, self_hosted]
+api:
+ license: community
+---
+
+import Since2180 from "versionContent/_partials/_since_2_18_0.mdx";
+
+# Hypercore
+
+Hypercore is the $TIMESCALE_DB hybrid row-columnar storage engine, designed specifically for
+real-time analytics and powered by time-series data. The advantage of hypercore is its ability
+to seamlessly switch between row-oriented and column-oriented storage. This flexibility enables
+$CLOUD_LONG to deliver the best of both worlds, solving the key challenges in real-time analytics.
+
+Hypercore’s hybrid approach combines the benefits of row-oriented and column-oriented formats
+in each $CLOUD_LONG service:
+
+- **Fast ingest with rowstore**: new data is initially written to the rowstore, which is optimized for
+ high-speed inserts and updates.
+
+- **Efficient analytics with columnstore**: you create [columnstore_policies][hypercore_workflow]
+ that automatically move your data to the columnstore as it _cools_. In columnstore conversion, hypertable
+ chunks are compressed and organized for efficient, large-scale queries more suitable for analytics.
+
+- **Full mutability with transactional semantics**: regardless of where data is stored,
+ hypercore provides full ACID support.
+
+
+
+## Hypercore workflow
+
+Best practice for using Hypercore is to:
+
+
+
+1. **Enable columnstore**
+
+ * [Use `ALTER TABLE` for a hypertable][alter_table_hypercore]
+ ```sql
+ ALTER TABLE stocks_real_time SET (timescaledb.enable_columnstore = true, timescaledb.segmentby = 'symbol');
+ ```
+ * [Use `ALTER MATERIALIZED VIEW` for a continuous aggregate][compression_continuous-aggregate]
+ ```sql
+ ALTER MATERIALIZED VIEW stock_candlestick_daily SET (timescaledb.enable_columnstore = true, timescaledb.segmentby = 'symbol');
+ ```
+
+1. **Add a policy to move chunks to the columnstore at a specific time interval**
+
+ For example, 60 days after the data was added to the table:
+ ``` sql
+ CALL add_columnstore_policy('stocks_real_time', after => INTERVAL '60d');
+ ```
+ See [add_columnstore_policy][add_columnstore_policy].
+
+1. **View the policies that you set or the policies that already exist**
+
+ ``` sql
+ SELECT * FROM timescaledb_information.jobs
+ WHERE proc_name='policy_compression';
+ ```
+ See [timescaledb_information.jobs][informational-views].
+
+
+
+You can also [convert_to_columnstore][convert_to_columnstore] and [convert_to_rowstore][convert_to_rowstore] manually
+for more fine-grained control over your data.
+
+## Limitations
+
+Chunks in the columnstore have the following limitations:
+
+* `ROW LEVEL SECURITY` is not supported on chunks in the columnstore.
+* To add unique constraints on chunks in the columnstore, [convert the chunk to the rowstore][convert_to_rowstore],
+  add the constraints to your data, then [convert the chunk back to the columnstore][convert_to_columnstore], as
+  shown in the sketch after this list.
+* [SkipScan][skipscan] does not currently work on chunks in the columnstore.
+
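+A minimal sketch of the unique-constraint workflow, assuming a hypothetical `metrics` hypertable whose only
+columnstore chunk is `_timescaledb_internal._hyper_1_1_chunk`:
+
+```sql
+CALL convert_to_rowstore('_timescaledb_internal._hyper_1_1_chunk');
+ALTER TABLE metrics ADD CONSTRAINT metrics_unique UNIQUE (device_id, "time");
+CALL convert_to_columnstore('_timescaledb_internal._hyper_1_1_chunk');
+```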
+
+[alter_table_hypercore]: /api/:currentVersion:/hypercore/alter_table/
+[compression_continuous-aggregate]: /api/:currentVersion:/continuous-aggregates/alter_materialized_view/
+[convert_to_rowstore]: /api/:currentVersion:/hypercore/convert_to_rowstore/
+[convert_to_columnstore]: /api/:currentVersion:/hypercore/convert_to_columnstore/
+[informational-views]: /api/:currentVersion:/informational-views/jobs/
+[skipscan]: /use-timescale/:currentVersion:/query-data/skipscan/
+[add_columnstore_policy]: /api/:currentVersion:/hypercore/add_columnstore_policy/
+[hypercore_workflow]: /api/:currentVersion:/hypercore/#hypercore-workflow
+[alter_job]: /api/:currentVersion:/actions/alter_job/
+[remove_columnstore_policy]: /api/:currentVersion:/hypercore/remove_columnstore_policy/
diff --git a/api/hypercore/page-index/page-index.js b/api/hypercore/page-index/page-index.js
new file mode 100644
index 0000000000..4a79ed6439
--- /dev/null
+++ b/api/hypercore/page-index/page-index.js
@@ -0,0 +1,55 @@
+module.exports = [
+ {
+ title: "Hypercore",
+ href: "hypercore",
+ excerpt:
+ "Seamlessly switch between row-oriented and column-oriented storage",
+ children: [
+ {
+ title: "add_columnstore_policy",
+ href: "add_columnstore_policy",
+ excerpt: "Convert a chunk to columnstore automatically in the background after it reaches a given age",
+ },
+ {
+ title: "chunk_columnstore_settings",
+ href: "chunk_columnstore_settings",
+ excerpt: "Show the columnstore settings for each chunk that is a columnstore",
+ },
+ {
+ title: "chunk_columnstore_stats",
+ href: "chunk_columnstore_stats",
+ excerpt: "Get statistics for columnstore chunks",
+ },
+ {
+ title: "columnstore_settings",
+ href: "columnstore_settings",
+ excerpt: "Get information about columnstore-related settings for hypertables",
+ },
+ {
+ title: "convert_to_columnstore",
+ href: "convert_to_columnstore",
+ excerpt: "Convert a specific chunk from rowstore to columnstore",
+ },
+ {
+ title: "convert_to_rowstore",
+ href: "convert_to_rowstore",
+ excerpt: "Convert a specific chunk from columnstore to rowstore",
+ },
+ {
+ title: "hypertable_columnstore_settings",
+ href: "hypertable_columnstore_settings",
+ excerpt: "Returns information about the columnstore settings for each hypertable",
+ },
+ {
+ title: "hypertable_columnstore_stats",
+ href: "hypertable_columnstore_stats",
+ excerpt: "Get columnstore statistics for hypertables",
+ },
+ {
+ title: "remove_columnstore_policy",
+ href: "remove_columnstore_policy",
+ excerpt: "Remove the columnstore policy",
+ },
+ ],
+ },
+];
diff --git a/api/hypercore/remove_columnstore_policy.md b/api/hypercore/remove_columnstore_policy.md
new file mode 100644
index 0000000000..08bb519745
--- /dev/null
+++ b/api/hypercore/remove_columnstore_policy.md
@@ -0,0 +1,47 @@
+---
+api_name: remove_columnstore_policy()
+excerpt: Remove a columnstore policy from a hypertable
+topics: [hypercore, columnstore, jobs]
+keywords: [hypercore, columnstore, policies, remove]
+tags: [delete, drop]
+api:
+ license: community
+ type: procedure
+---
+import Since2180 from "versionContent/_partials/_since_2_18_0.mdx";
+
+# remove_columnstore_policy()
+
+Remove a columnstore policy from a hypertable or continuous aggregate.
+
+To restart automatic chunk migration to the columnstore, you need to call
+[add_columnstore_policy][add_columnstore_policy] again.
+
+
+
+## Samples
+
+You see the columnstore policies in the [informational views][informational-views].
+
+- **Remove the columnstore policy from the `cpu` table**:
+
+ ``` sql
+ CALL remove_columnstore_policy('cpu');
+ ```
+
+- **Remove the columnstore policy from the `cpu_weekly` continuous aggregate**:
+
+ ``` sql
+ CALL remove_columnstore_policy('cpu_weekly');
+ ```
+
+## Arguments
+
+| Name | Type | Default | Required | Description |
+|--|--|--|--|-|
+|`hypertable`|REGCLASS|-|✔| Name of the hypertable or continuous aggregate to remove the policy from|
+| `if_exists` | BOOLEAN | `false` |✖| Set to `true` so this job fails with a warning rather than an error if a columnstore policy does not exist on `hypertable` |
+
+
+[informational-views]: /api/:currentVersion:/informational-views/jobs/
+[add_columnstore_policy]: /api/:currentVersion:/hypercore/add_columnstore_policy/
diff --git a/api/merge_chunks.md b/api/merge_chunks.md
new file mode 100644
index 0000000000..73924abae3
--- /dev/null
+++ b/api/merge_chunks.md
@@ -0,0 +1,56 @@
+---
+api_name: merge_chunks()
+excerpt: Merge two or more chunks into one chunk
+topics: [hypertables]
+keywords: [hypertables, chunk, merge]
+api:
+ license: community
+ type: procedure
+---
+
+# merge_chunks()
+
+Merge two or more chunks into one.
+
+The partition boundaries for the new chunk are the union of all partitions of the merged chunks.
+The new chunk retains the name, constraints, and triggers of the _first_ chunk in the partition order.
+
+You can only merge chunks that have directly adjacent partitions. It is not possible to merge
+chunks that have another chunk or an empty range between them in any of the partitioning
+dimensions.
+
+In this first release, chunk merging has the following limitations. You cannot:
+
+* Merge compressed chunks
+* Merge chunks using table access methods other than heap
+* Merge chunks with tiered data
+* Read or write from the chunks while they are being merged
+
+
+
+## Samples
+
+- Merge two chunks:
+
+ ```sql
+ CALL merge_chunks('_timescaledb_internal._hyper_1_1_chunk', '_timescaledb_internal._hyper_1_2_chunk');
+ ```
+
+- Merge more than two chunks:
+
+ ```sql
+ CALL merge_chunks('{_timescaledb_internal._hyper_1_1_chunk, _timescaledb_internal._hyper_1_2_chunk, _timescaledb_internal._hyper_1_3_chunk}');
+ ```
+
+
+## Arguments
+
+You can merge either two chunks, or an arbitrary number of chunks specified as an array of chunk identifiers.
+When you call `merge_chunks`, you must specify either `chunk1` and `chunk2`, or `chunks`. You cannot use both
+arguments.
+
+
+| Name | Type | Default | Required | Description |
+|--------------------|-------------|--|--|------------------------------------------------|
+| `chunk1`, `chunk2` | REGCLASS | - | ✖ | The two chunks to merge, in partition order |
+| `chunks` | REGCLASS[] |- | ✖ | The array of chunks to merge in partition order |
diff --git a/api/page-index/page-index.js b/api/page-index/page-index.js
index 408913ed06..34dcf872bd 100644
--- a/api/page-index/page-index.js
+++ b/api/page-index/page-index.js
@@ -34,6 +34,10 @@ module.exports = [
title: "reorder_chunk",
href: "reorder_chunk",
},
+ {
+ title: "merge_chunks",
+ href: "merge_chunks",
+ },
{
title: "move_chunk",
href: "move_chunk",
@@ -117,102 +121,54 @@ module.exports = [
],
},
{
- title: "Distributed hypertables",
- type: "directory",
- href: "distributed-hypertables",
+ title: "Hypercore",
+ excerpt: "Seamlessly switch between fast row-oriented storage and efficient column-oriented storage",
+ href: "hypercore",
children: [
{
- title: "create_distributed_hypertable",
- href: "create_distributed_hypertable",
- },
- {
- title: "add_data_node",
- href: "add_data_node",
- },
- {
- title: "attach_data_node",
- href: "attach_data_node",
- },
- {
- title: "alter_data_node",
- href: "alter_data_node",
- },
- {
- title: "detach_data_node",
- href: "detach_data_node",
- },
- {
- title: "delete_data_node",
- href: "delete_data_node",
- },
- {
- title: "distributed_exec",
- href: "distributed_exec",
- },
- {
- title: "set_number_partitions",
- href: "set_number_partitions",
- },
- {
- title: "set_replication_factor",
- href: "set_replication_factor",
- },
- {
- title: "copy_chunk",
- href: "copy_chunk_experimental",
- },
- {
- title: "move_chunk",
- href: "move_chunk_experimental",
- },
- {
- title: "cleanup_copy_chunk_operation",
- href: "cleanup_copy_chunk_operation_experimental",
+ title: "ALTER TABLE",
+ href: "alter_table",
+ excerpt: "Enable the columnstore for a hypertable.",
},
{
- title: "create_distributed_restore_point",
- href: "create_distributed_restore_point",
- },
- ],
- },
- {
- title: "Compression",
- type: "directory",
- href: "compression",
- description:
- "We highly recommend reading the blog post and tutorial about compression before trying to set it up for the first time.",
- children: [
- {
- title: "ALTER TABLE (Compression)",
- href: "alter_table_compression",
+ title: "add_columnstore_policy",
+ href: "add_columnstore_policy",
+ excerpt: "Automatically convert chunks in the hypertable rowstore to the columnstore after a specific time interval",
},
{
- title: "add_compression_policy",
- href: "add_compression_policy",
+ title: "remove_columnstore_policy",
+ href: "remove_columnstore_policy",
+ excerpt: "Remove a columnstore policy from a hypertable or continuous aggregate",
},
{
- title: "remove_compression_policy",
- href: "remove_compression_policy",
+ title: "convert_to_columnstore",
+ href: "convert_to_columnstore",
+ excerpt: "Manually convert a specific chunk in the hypertable rowstore to the columnstore",
},
{
- title: "compress_chunk",
- href: "compress_chunk",
+ title: "convert_to_rowstore",
+ href: "convert_to_rowstore",
+ excerpt: "Manually convert a specific chunk in the hypertable columnstore to the rowstore",
},
{
- title: "decompress_chunk",
- href: "decompress_chunk",
+ title: "hypertable_columnstore_settings",
+ href: "hypertable_columnstore_settings",
+ excerpt: "Retrieve information about the settings for all hypertables in the columnstore",
},
{
- title: "recompress_chunk",
- href: "recompress_chunk",
+ title: "hypertable_columnstore_stats",
+ href: "hypertable_columnstore_stats",
+ excerpt: "Retrieve compression statistics for the columnstore",
},
{
- title: "hypertable_compression_stats",
- href: "hypertable_compression_stats",
+ title: "chunk_columnstore_settings",
+ href: "chunk_columnstore_settings",
+ excerpt: "Retrieve the compression settings for each chunk in the columnstore",
},
{
- title: "chunk_compression_stats",
- href: "chunk_compression_stats",
+ title: "chunk_columnstore_stats",
+ href: "chunk_columnstore_stats",
+ excerpt: "Retrieve statistics about the chunks in the columnstore",
},
],
},
@@ -607,6 +563,105 @@ module.exports = [
description:
"An overview of what different tags represent in the API section of Timescale Documentation.",
},
+ {
+ title: "Compression (Old API, use Hypercore)",
+ href: "compression",
+ description:
+ "We highly recommend reading the blog post and tutorial about compression before trying to set it up for the first time.",
+ children: [
+ {
+ title: "ALTER TABLE (Compression)",
+ href: "alter_table_compression",
+ },
+ {
+ title: "add_compression_policy",
+ href: "add_compression_policy",
+ },
+ {
+ title: "remove_compression_policy",
+ href: "remove_compression_policy",
+ },
+ {
+ title: "compress_chunk",
+ href: "compress_chunk",
+ },
+ {
+ title: "decompress_chunk",
+ href: "decompress_chunk",
+ },
+ {
+ title: "recompress_chunk",
+ href: "recompress_chunk",
+ },
+ {
+ title: "hypertable_compression_stats",
+ href: "hypertable_compression_stats",
+ },
+ {
+ title: "chunk_compression_stats",
+ href: "chunk_compression_stats",
+ },
+ ],
+ },
+ {
+ title: "Distributed hypertables (Sunsetted v2.14.x)",
+ type: "directory",
+ href: "distributed-hypertables",
+ children: [
+ {
+ title: "create_distributed_hypertable",
+ href: "create_distributed_hypertable",
+ },
+ {
+ title: "add_data_node",
+ href: "add_data_node",
+ },
+ {
+ title: "attach_data_node",
+ href: "attach_data_node",
+ },
+ {
+ title: "alter_data_node",
+ href: "alter_data_node",
+ },
+ {
+ title: "detach_data_node",
+ href: "detach_data_node",
+ },
+ {
+ title: "delete_data_node",
+ href: "delete_data_node",
+ },
+ {
+ title: "distributed_exec",
+ href: "distributed_exec",
+ },
+ {
+ title: "set_number_partitions",
+ href: "set_number_partitions",
+ },
+ {
+ title: "set_replication_factor",
+ href: "set_replication_factor",
+ },
+ {
+ title: "copy_chunk",
+ href: "copy_chunk_experimental",
+ },
+ {
+ title: "move_chunk",
+ href: "move_chunk_experimental",
+ },
+ {
+ title: "cleanup_copy_chunk_operation",
+ href: "cleanup_copy_chunk_operation_experimental",
+ },
+ {
+ title: "create_distributed_restore_point",
+ href: "create_distributed_restore_point",
+ },
+ ],
+ },
],
},
];