diff --git a/cloud-account/change-your-password.md b/cloud-account/change-your-password.md index c3bba9d66..b18310dd7 100644 --- a/cloud-account/change-your-password.md +++ b/cloud-account/change-your-password.md @@ -8,7 +8,7 @@ applies: # Change your password [ec-change-password] -If you created a password when you signed up for a {{ecloud}} account, or you added the password-based login method to your account, then you can change your password if needed. +If you created a password when you signed up for an {{ecloud}} account, or you added the password-based login method to your account, then you can change your password if needed. If you know your current password: diff --git a/deploy-manage/api-keys/elastic-cloud-enterprise-api-keys.md b/deploy-manage/api-keys/elastic-cloud-enterprise-api-keys.md index c58e3a977..47e2088a8 100644 --- a/deploy-manage/api-keys/elastic-cloud-enterprise-api-keys.md +++ b/deploy-manage/api-keys/elastic-cloud-enterprise-api-keys.md @@ -70,5 +70,5 @@ To create a bearer token: { "token": "eyJ0eXa......MgBmsw4s" } ``` -2. Specify the bearer token in the Authentication header of your API requests. To learn more, check [accessing the API from the command line](asciidocalypse://docs/cloud/docs/reference/cloud/cloud-enterprise/ece-api-command-line.md). +2. Specify the bearer token in the Authentication header of your API requests. To learn more, check [accessing the API from the command line](asciidocalypse://docs/cloud/docs/reference/cloud-enterprise/ece-api-command-line.md). diff --git a/deploy-manage/autoscaling/ec-autoscaling-api-example.md b/deploy-manage/autoscaling/ec-autoscaling-api-example.md index c80c59824..92e391558 100644 --- a/deploy-manage/autoscaling/ec-autoscaling-api-example.md +++ b/deploy-manage/autoscaling/ec-autoscaling-api-example.md @@ -5,9 +5,9 @@ mapped_pages: # Autoscaling through the API [ec-autoscaling-api-example] -This example demonstrates how to use the Elasticsearch Service RESTful API to create a deployment with autoscaling enabled. +This example demonstrates how to use the {{ecloud}} RESTful API to create a deployment with autoscaling enabled. -The example deployment has a hot data and content tier, warm data tier, cold data tier, and a machine learning node, all of which will scale within the defined parameters. To learn about the autoscaling settings, check [Deployment autoscaling](../autoscaling.md) and [Autoscaling example](ec-autoscaling-example.md). For more information about using the Elasticsearch Service API in general, check [RESTful API](asciidocalypse://docs/cloud/docs/reference/cloud/cloud-hosted/ec-api-restful.md). +The example deployment has a hot data and content tier, warm data tier, cold data tier, and a machine learning node, all of which will scale within the defined parameters. To learn about the autoscaling settings, check [Deployment autoscaling](../autoscaling.md) and [Autoscaling example](ec-autoscaling-example.md). For more information about using the {{ecloud}} API in general, check [RESTful API](asciidocalypse://docs/cloud/docs/reference/cloud-hosted/ec-api-restful.md). ## Requirements [ec_requirements] @@ -46,7 +46,7 @@ $$$ec-autoscaling-api-example-requirements-table$$$ + ✕ = Do not include the property. -+ These rules match the behavior of the Elasticsearch Service user console. ++ These rules match the behavior of the {{ecloud}} Console. + * The `elasticsearch` object must contain the property `"autoscaling_enabled": true`. 
diff --git a/deploy-manage/autoscaling/ec-autoscaling-example.md b/deploy-manage/autoscaling/ec-autoscaling-example.md
index 80d7787ba..7ca8f25a5 100644
--- a/deploy-manage/autoscaling/ec-autoscaling-example.md
+++ b/deploy-manage/autoscaling/ec-autoscaling-example.md
@@ -5,7 +5,7 @@ mapped_pages:
# Autoscaling example [ec-autoscaling-example]
-To help you better understand the available autoscaling settings, this example describes a typical autoscaling workflow on sample Elasticsearch Service deployment.
+To help you better understand the available autoscaling settings, this example describes a typical autoscaling workflow on a sample {{ech}} deployment.
1. Enable autoscaling:
diff --git a/deploy-manage/autoscaling/ec-autoscaling.md b/deploy-manage/autoscaling/ec-autoscaling.md
index 999e7bd6b..ace565330 100644
--- a/deploy-manage/autoscaling/ec-autoscaling.md
+++ b/deploy-manage/autoscaling/ec-autoscaling.md
@@ -45,7 +45,7 @@ Currently, autoscaling behavior is as follows:
::::{note}
-For any Elasticsearch Service Elasticsearch component the number of availability zones is not affected by autoscaling. You can always set the number of availability zones manually and the autoscaling mechanism will add or remove capacity per availability zone.
+The number of availability zones for each component of your {{ech}} deployments is not affected by autoscaling. You can always set the number of availability zones manually and the autoscaling mechanism will add or remove capacity per availability zone.
::::
@@ -85,10 +85,10 @@ The following are known limitations and restrictions with autoscaling:
To enable or disable autoscaling on a deployment:
-1. Log in to the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body).
+1. Log in to the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body).
2. On the **Deployments** page, select your deployment.
-    On the deployments page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list.
+    On the **Deployments** page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list.
3. In your deployment menu, select **Edit**.
4. Select desired autoscaling configuration for this deployment using **Enable Autoscaling for:** dropdown menu.
@@ -103,10 +103,10 @@ When autoscaling has been disabled, you need to adjust the size of data tiers an
Each autoscaling setting is configured with a default value. You can adjust these if necessary, as follows:
-1. Log in to the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body).
+1. Log in to the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body).
2. On the **Deployments** page, select your deployment.
-    On the deployments page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list.
+    On the **Deployments** page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list.
3. In your deployment menu, select **Edit**.
4.
To update a data tier: diff --git a/deploy-manage/autoscaling/ece-autoscaling-api-example.md b/deploy-manage/autoscaling/ece-autoscaling-api-example.md index dfdeb0cde..881ac18ff 100644 --- a/deploy-manage/autoscaling/ece-autoscaling-api-example.md +++ b/deploy-manage/autoscaling/ece-autoscaling-api-example.md @@ -7,7 +7,7 @@ mapped_pages: This example demonstrates how to use the Elastic Cloud Enterprise RESTful API to create a deployment with autoscaling enabled. -The example deployment has a hot data and content tier, warm data tier, cold data tier, and a machine learning node, all of which will scale within the defined parameters. To learn about the autoscaling settings, check [Deployment autoscaling](../autoscaling.md) and [Autoscaling example](ece-autoscaling-example.md). For more information about using the Elastic Cloud Enterprise API in general, check [RESTful API](asciidocalypse://docs/cloud/docs/reference/cloud/cloud-enterprise/restful-api.md). +The example deployment has a hot data and content tier, warm data tier, cold data tier, and a machine learning node, all of which will scale within the defined parameters. To learn about the autoscaling settings, check [Deployment autoscaling](../autoscaling.md) and [Autoscaling example](ece-autoscaling-example.md). For more information about using the Elastic Cloud Enterprise API in general, check [RESTful API](asciidocalypse://docs/cloud/docs/reference/cloud-enterprise/restful-api.md). ## Requirements [ece_requirements_3] diff --git a/deploy-manage/cloud-organization.md b/deploy-manage/cloud-organization.md index e4c04687d..d78661682 100644 --- a/deploy-manage/cloud-organization.md +++ b/deploy-manage/cloud-organization.md @@ -1,9 +1,10 @@ --- mapped_pages: - https://www.elastic.co/guide/en/cloud/current/ec-organizations.html -applies: +applies_to: + deployment: + ess: all serverless: all - hosted: all --- # Manage your Cloud organization [ec-organizations] diff --git a/deploy-manage/cloud-organization/billing.md b/deploy-manage/cloud-organization/billing.md index f547c4b58..894d1d589 100644 --- a/deploy-manage/cloud-organization/billing.md +++ b/deploy-manage/cloud-organization/billing.md @@ -2,9 +2,10 @@ mapped_urls: - https://www.elastic.co/guide/en/cloud/current/ec-billing.html - https://www.elastic.co/guide/en/serverless/current/general-manage-billing.html -applies: +applies_to: + deployment: + ess: all serverless: all - hosted: all --- # Billing diff --git a/deploy-manage/cloud-organization/billing/add-billing-details.md b/deploy-manage/cloud-organization/billing/add-billing-details.md index 786ec8bd4..c927164f7 100644 --- a/deploy-manage/cloud-organization/billing/add-billing-details.md +++ b/deploy-manage/cloud-organization/billing/add-billing-details.md @@ -1,8 +1,9 @@ --- mapped_pages: - https://www.elastic.co/guide/en/cloud/current/ec-billing-details.html -applies: - hosted: all +applies_to: + deployment: + ess: all serverless: all --- diff --git a/deploy-manage/cloud-organization/billing/billing-faq.md b/deploy-manage/cloud-organization/billing/billing-faq.md index c4f4a06a6..76318deff 100644 --- a/deploy-manage/cloud-organization/billing/billing-faq.md +++ b/deploy-manage/cloud-organization/billing/billing-faq.md @@ -1,8 +1,9 @@ --- mapped_pages: - https://www.elastic.co/guide/en/cloud/current/ec-faq-billing.html -applies: - hosted: all +applies_to: + deployment: + ess: all serverless: all --- diff --git a/deploy-manage/cloud-organization/billing/billing-models.md b/deploy-manage/cloud-organization/billing/billing-models.md 
index 68b83724a..d7472a670 100644 --- a/deploy-manage/cloud-organization/billing/billing-models.md +++ b/deploy-manage/cloud-organization/billing/billing-models.md @@ -1,8 +1,9 @@ --- mapped_urls: - https://www.elastic.co/guide/en/cloud/current/ec-billing-models.html -applies: - hosted: all +applies_to: + deployment: + ess: all serverless: all --- diff --git a/deploy-manage/cloud-organization/billing/cloud-hosted-deployment-billing-dimensions.md b/deploy-manage/cloud-organization/billing/cloud-hosted-deployment-billing-dimensions.md index 84aec88cf..4e426bafb 100644 --- a/deploy-manage/cloud-organization/billing/cloud-hosted-deployment-billing-dimensions.md +++ b/deploy-manage/cloud-organization/billing/cloud-hosted-deployment-billing-dimensions.md @@ -2,8 +2,9 @@ navigation_title: "Hosted billing dimensions" mapped_pages: - https://www.elastic.co/guide/en/cloud/current/ec-billing-dimensions.html -applies: - hosted: all +applies_to: + deployment: + ess: all --- # Cloud Hosted deployment billing dimensions [ec-billing-dimensions] @@ -89,7 +90,7 @@ As is common with Cloud providers, we meter and bill snapshot storage using two ### How can I control the storage cost? [ec_how_can_i_control_the_storage_cost] -Snapshots in Elasticsearch Service save data incrementally at each snapshot event. This means that the effective snapshot size may be larger than the size of the current indices. The snapshot size increases as data is added or updated in the cluster, and deletions do not reduce the snapshot size until the snapshot containing that data is removed. +Snapshots in {{ech}} save data incrementally at each snapshot event. This means that the effective snapshot size may be larger than the size of the current indices. The snapshot size increases as data is added or updated in the cluster, and deletions do not reduce the snapshot size until the snapshot containing that data is removed. API requests are executed every time a snapshot is taken or restored, affecting usage costs. In the event that you have any automated processes that use the Elasticsearch API to create or restore snapshots, these should be set so as to avoid unexpected charges. 
diff --git a/deploy-manage/cloud-organization/billing/ecu.md b/deploy-manage/cloud-organization/billing/ecu.md index e51da997c..ab6428922 100644 --- a/deploy-manage/cloud-organization/billing/ecu.md +++ b/deploy-manage/cloud-organization/billing/ecu.md @@ -1,8 +1,9 @@ --- mapped_urls: - https://www.elastic.co/guide/en/cloud/current/ec-billing-ecu.html -applies: - hosted: all +applies_to: + deployment: + ess: all serverless: all --- diff --git a/deploy-manage/cloud-organization/billing/elastic-observability-billing-dimensions.md b/deploy-manage/cloud-organization/billing/elastic-observability-billing-dimensions.md index d6cbdc358..d84a05778 100644 --- a/deploy-manage/cloud-organization/billing/elastic-observability-billing-dimensions.md +++ b/deploy-manage/cloud-organization/billing/elastic-observability-billing-dimensions.md @@ -2,7 +2,7 @@ navigation_title: "Elastic for Observability" mapped_pages: - https://www.elastic.co/guide/en/serverless/current/observability-billing.html -applies: +applies_to: serverless: all --- diff --git a/deploy-manage/cloud-organization/billing/elasticsearch-billing-dimensions.md b/deploy-manage/cloud-organization/billing/elasticsearch-billing-dimensions.md index b8a0744f9..e6833198f 100644 --- a/deploy-manage/cloud-organization/billing/elasticsearch-billing-dimensions.md +++ b/deploy-manage/cloud-organization/billing/elasticsearch-billing-dimensions.md @@ -2,7 +2,7 @@ navigation_title: "Elasticsearch" mapped_pages: - https://www.elastic.co/guide/en/serverless/current/elasticsearch-billing.html -applies: +applies_to: serverless: all --- diff --git a/deploy-manage/cloud-organization/billing/manage-subscription.md b/deploy-manage/cloud-organization/billing/manage-subscription.md index c28dbbde1..346b95958 100644 --- a/deploy-manage/cloud-organization/billing/manage-subscription.md +++ b/deploy-manage/cloud-organization/billing/manage-subscription.md @@ -3,8 +3,10 @@ mapped_urls: - https://www.elastic.co/guide/en/serverless/current/general-check-subscription.html - https://www.elastic.co/guide/en/cloud/current/ec-subscription-overview.html - https://www.elastic.co/guide/en/cloud/current/ec-select-subscription-level.html -applies: - hosted: all + - https://www.elastic.co/guide/en/cloud/current/ec-licensing.html +applies_to: + deployment: + ess: all serverless: all --- diff --git a/deploy-manage/cloud-organization/billing/monitor-analyze-usage.md b/deploy-manage/cloud-organization/billing/monitor-analyze-usage.md index d7fb57df7..2383d3d9c 100644 --- a/deploy-manage/cloud-organization/billing/monitor-analyze-usage.md +++ b/deploy-manage/cloud-organization/billing/monitor-analyze-usage.md @@ -2,8 +2,9 @@ mapped_urls: - https://www.elastic.co/guide/en/cloud/current/ec-account-usage.html - https://www.elastic.co/guide/en/serverless/current/general-monitor-usage.html -applies: - hosted: all +applies_to: + deployment: + ess: all serverless: all --- diff --git a/deploy-manage/cloud-organization/billing/security-billing-dimensions.md b/deploy-manage/cloud-organization/billing/security-billing-dimensions.md index ec6a723a4..6552a7481 100644 --- a/deploy-manage/cloud-organization/billing/security-billing-dimensions.md +++ b/deploy-manage/cloud-organization/billing/security-billing-dimensions.md @@ -2,7 +2,7 @@ navigation_title: "Elastic for Security" mapped_pages: - https://www.elastic.co/guide/en/serverless/current/security-billing.html -applies: +applies_to: serverless: all --- diff --git a/deploy-manage/cloud-organization/billing/serverless-project-billing-dimensions.md 
b/deploy-manage/cloud-organization/billing/serverless-project-billing-dimensions.md index f414cda15..89a9538a1 100644 --- a/deploy-manage/cloud-organization/billing/serverless-project-billing-dimensions.md +++ b/deploy-manage/cloud-organization/billing/serverless-project-billing-dimensions.md @@ -2,7 +2,7 @@ navigation_title: "Serverless billing dimensions" mapped_pages: - https://www.elastic.co/guide/en/serverless/current/general-serverless-billing.html -applies: +applies_to: serverless: all --- diff --git a/deploy-manage/cloud-organization/billing/update-billing-operational-contacts.md b/deploy-manage/cloud-organization/billing/update-billing-operational-contacts.md index 546aea201..23650d95d 100644 --- a/deploy-manage/cloud-organization/billing/update-billing-operational-contacts.md +++ b/deploy-manage/cloud-organization/billing/update-billing-operational-contacts.md @@ -1,8 +1,9 @@ --- mapped_pages: - https://www.elastic.co/guide/en/cloud/current/ec-billing-contacts.html -applies: - hosted: all +applies_to: + deployment: + ess: all serverless: all --- diff --git a/deploy-manage/cloud-organization/billing/view-billing-history.md b/deploy-manage/cloud-organization/billing/view-billing-history.md index 058d79e5e..6766833a8 100644 --- a/deploy-manage/cloud-organization/billing/view-billing-history.md +++ b/deploy-manage/cloud-organization/billing/view-billing-history.md @@ -2,8 +2,9 @@ mapped_urls: - https://www.elastic.co/guide/en/cloud/current/ec-billing-history.html - https://www.elastic.co/guide/en/serverless/current/general-billing-history.html -applies: - hosted: all +applies_to: + deployment: + ess: all serverless: all --- diff --git a/deploy-manage/cloud-organization/operational-emails.md b/deploy-manage/cloud-organization/operational-emails.md index edb4dfc2c..33c40d46e 100644 --- a/deploy-manage/cloud-organization/operational-emails.md +++ b/deploy-manage/cloud-organization/operational-emails.md @@ -1,8 +1,9 @@ --- mapped_pages: - https://www.elastic.co/guide/en/cloud/current/ec-operational-emails.html -applies: - hosted: all +applies_to: + deployment: + ess: all --- # Operational emails [ec-operational-emails] @@ -18,4 +19,4 @@ We also send an email alert if one of the nodes in your cluster is restarted due These alerts are sent to all users within an {{ecloud}} organization, as well as to the email addresses listed as operational email contacts. This means that an external distribution list or automated service can receive notifications without the need to be added to the organization directly. -To configure recipients external to an {{ecloud}} organization for these notifications Elasticsearch Service, update the list of [operational email contacts](/deploy-manage/cloud-organization/billing/update-billing-operational-contacts.md). +To configure recipients external to an {{ecloud}} organization for these notifications, update the list of [operational email contacts](/deploy-manage/cloud-organization/billing/update-billing-operational-contacts.md). 
diff --git a/deploy-manage/cloud-organization/service-status.md b/deploy-manage/cloud-organization/service-status.md index 738b64d44..67c69039f 100644 --- a/deploy-manage/cloud-organization/service-status.md +++ b/deploy-manage/cloud-organization/service-status.md @@ -4,8 +4,9 @@ mapped_urls: - https://www.elastic.co/guide/en/cloud/current/ec_subscribe_to_individual_regionscomponents.html - https://www.elastic.co/guide/en/cloud/current/ec_service_status_api.html - https://www.elastic.co/guide/en/serverless/current/general-serverless-status.html -applies: - hosted: all +applies_to: + deployment: + ess: all serverless: all --- diff --git a/deploy-manage/cloud-organization/tools-and-apis.md b/deploy-manage/cloud-organization/tools-and-apis.md index 7d5946690..e1a502ebf 100644 --- a/deploy-manage/cloud-organization/tools-and-apis.md +++ b/deploy-manage/cloud-organization/tools-and-apis.md @@ -5,30 +5,30 @@ mapped_pages: # Tools and APIs [ec-work-with-apis] -Most Elastic resources can be accessed and managed through RESTful APIs. While the Elasticsearch Service API is used to manage your deployments and their components, the Elasticsearch and Kibana APIs provide direct access to your data and your visualizations, respectively. +Most Elastic resources can be accessed and managed through RESTful APIs. While the {{ecloud}} API is used to manage your deployments and their components, the Elasticsearch and Kibana APIs provide direct access to your data and your visualizations, respectively. -Elasticsearch Service API -: You can use the Elasticsearch Service API to manage your deployments and all of the resources associated with them. This includes performing deployment CRUD operations, scaling or autoscaling resources, and managing traffic filters, deployment extensions, remote clusters, and Elastic Stack versions. You can also access cost data by deployment and by organization. +{{ecloud}} API +: You can use the {{ecloud}} API to manage your deployments and all of the resources associated with them. This includes performing deployment CRUD operations, scaling or autoscaling resources, and managing traffic filters, deployment extensions, remote clusters, and Elastic Stack versions. You can also access cost data by deployment and by organization. - To learn more about the Elasticsearch Service API, read through the [API overview](asciidocalypse://docs/cloud/docs/reference/cloud/cloud-hosted/ec-api-restful.md), try out some [getting started examples](asciidocalypse://docs/cloud/docs/reference/cloud/cloud-hosted/api-examples.md), and check our [API reference documentation](https://www.elastic.co/docs/api/doc/cloud). + To learn more about the {{ecloud}} API, read through the [API overview](asciidocalypse://docs/cloud/docs/reference/cloud-hosted/ec-api-restful.md), try out some [getting started examples](asciidocalypse://docs/cloud/docs/reference/cloud-hosted/ec-api-examples.md), and check our [API reference documentation](https://www.elastic.co/docs/api/doc/cloud). - Calls to the Elasticsearch Service API are subject to [Rate limiting](asciidocalypse://docs/cloud/docs/reference/cloud/cloud-hosted/ec-api-rate-limiting.md). + Calls to the {{ecloud}} API are subject to [Rate limiting](asciidocalypse://docs/cloud/docs/reference/cloud-hosted/ec-api-rate-limiting.md). Elasticsearch APIs : This set of APIs allows you to interact directly with the Elasticsearch nodes in your deployment. You can ingest data, run search queries, check the health of your clusters, manage snapshots, and more. 
- To use these APIs in Elasticsearch Service read our topic [Access the Elasticsearch API console](asciidocalypse://docs/cloud/docs/reference/cloud/cloud-hosted/ec-api-console.md), and to learn about all of the available endpoints check the [Elasticsearch API reference documentation](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/rest-apis/index.md). + To use these APIs on {{ecloud}} read our topic [Access the API console](asciidocalypse://docs/cloud/docs/reference/cloud-hosted/ec-api-console.md), and to learn about all of the available endpoints check the [Elasticsearch API reference documentation](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/rest-apis/index.md). - Some [restrictions](../deploy/elastic-cloud/restrictions-known-problems.md#ec-restrictions-apis-elasticsearch) apply when using the Elasticsearch APIs in Elasticsearch Service. + Some [restrictions](../deploy/elastic-cloud/restrictions-known-problems.md#ec-restrictions-apis-elasticsearch) apply when using the Elasticsearch APIs on {{ecloud}}. Kibana APIs : Many Kibana features can be accessed through these APIs, including Kibana objects, patterns, and dashboards, as well as user roles and user sessions. You can use these APIs to configure alerts and actions, and to access health details for the Kibana Task Manager. - The Kibana APIs cannot be accessed directly from the Elasticsearch Service console; you need to use `curl` or another HTTP tool to connect. Check the [Kibana API reference documentation](https://www.elastic.co/guide/en/kibana/current/api.html) to learn about using the APIs and for details about all available endpoints. + The Kibana APIs cannot be accessed directly from the {{ecloud}} Console; you need to use `curl` or another HTTP tool to connect. Check the [Kibana API reference documentation](https://www.elastic.co/guide/en/kibana/current/api.html) to learn about using the APIs and for details about all available endpoints. - Some [restrictions](../deploy/elastic-cloud/restrictions-known-problems.md#ec-restrictions-apis-kibana) apply when using these APIs with an Elasticsearch Service Kibana instance as compared to an on-prem installation. + Some [restrictions](../deploy/elastic-cloud/restrictions-known-problems.md#ec-restrictions-apis-kibana) apply when using these APIs with Kibana on {{ecloud}} as compared to an on-prem installation. Other Products diff --git a/deploy-manage/deploy.md b/deploy-manage/deploy.md index 2802fa92f..52702cced 100644 --- a/deploy-manage/deploy.md +++ b/deploy-manage/deploy.md @@ -101,4 +101,4 @@ Learn more about [versioning and availability](/get-started/versioning-availabil :::::{tip} For a detailed comparison of features and capabilities across deployment types, see the [Deployment comparison reference](./deploy/deployment-comparison.md). 
-::::: \ No newline at end of file +::::: diff --git a/deploy-manage/deploy/cloud-enterprise.md b/deploy-manage/deploy/cloud-enterprise.md index ebb43529d..41258c4f4 100644 --- a/deploy-manage/deploy/cloud-enterprise.md +++ b/deploy-manage/deploy/cloud-enterprise.md @@ -1,28 +1,74 @@ --- +applies_to: + deployment: + ece: all mapped_urls: - https://www.elastic.co/guide/en/cloud-enterprise/current/index.html - https://www.elastic.co/guide/en/cloud-enterprise/current/Elastic-Cloud-Enterprise-overview.html - - https://www.elastic.co/guide/en/cloud-enterprise/current/ece-administering-ece.html --- -# Elastic Cloud Enterprise +# Elastic Cloud Enterprise [Elastic-Cloud-Enterprise-overview] -% What needs to be done: Refine +{{ece}} (ECE) is an Elastic self-managed solution for deploying, orchestrating, and managing {{es}} clusters at scale. It provides a centralized platform that allows organizations to run {{es}}, {{kib}}, and other {{stack}} components across multiple machines. -% GitHub issue: https://github.com/elastic/docs-projects/issues/339 +ECE evolves from the Elastic hosted Cloud SaaS offering into a standalone product. You can deploy ECE on public or private clouds, virtual machines, or your own premises. -% Scope notes: Ensure the landing page makes sense and its aligned with the section overview and the overview about orchestators. What content should be in deployment types overview or in the main overview and what in the ECE landing page... +With {{ece}}, you can: -% Use migrated content from existing pages that map to this page: +* Host your regulated or sensitive data on your internal network. +* Reuse your existing investment in on-premise infrastructure and reduce total cost. +* Maximize the hardware utilization for the various clusters. +* Centralize the management of multiple Elastic deployments across teams or geographies. -% - [ ] ./raw-migrated-files/cloud/cloud-enterprise/Elastic-Cloud-Enterprise-overview.md -% Notes: 2 child docs -% - [ ] ./raw-migrated-files/cloud/cloud-enterprise/ece-administering-ece.md -% Notes: redirect only +Refer to [](./cloud-enterprise/ece-architecture.md) for details about the ECE platform architecture and the technologies used. -⚠️ **This page is a work in progress.** ⚠️ +## ECE features -The documentation team is working to combine content pulled from the following pages: +- **Automated scaling & orchestration**: Handles cluster provisioning, scaling, and upgrades automatically. +- **High availability & resilience**: Ensures uptime through multiple Availability Zones, data replication, and automated restore and snapshot. +- **Centralized monitoring & logging**: Provides insights into cluster performance, resource usage, and logs. +- **Single Sign-On (SSO) & role-based access control (RBAC)**: Allows organizations to manage access and security policies. +- **API & UI management**: Offers a web interface and API to create and manage clusters easily. +- **Air-gapped installations**: Support for off-line installations. +- **Microservices architecture**: All services are containerized through Docker. 
-* [/raw-migrated-files/cloud/cloud-enterprise/Elastic-Cloud-Enterprise-overview.md](/raw-migrated-files/cloud/cloud-enterprise/Elastic-Cloud-Enterprise-overview.md) -* [/raw-migrated-files/cloud/cloud-enterprise/ece-administering-ece.md](/raw-migrated-files/cloud/cloud-enterprise/ece-administering-ece.md) \ No newline at end of file +Check the [glossary](asciidocalypse:///docs-content/docs/reference/glossary.md) to get familiar with the terminology for ECE as well as other Elastic products and solutions. + +## Section overview + +This section focuses on deploying the ECE platform, as well as orchestrating and configuring {{es}} clusters, referred to as deployments. + +In ECE, a deployment is a managed {{stack}} environment that provides users with an {{es}} cluster along with supporting components such as {{kib}} and other optional services like APM and {{fleet}}. + +The section covers the following tasks: + +* [Deploy ECE orchestrator](./cloud-enterprise/deploy-an-orchestrator.md) + - [Prepare the environment](./cloud-enterprise/prepare-environment.md) + - [Install ECE](./cloud-enterprise/install.md) + - [Air gapped installations](./cloud-enterprise/air-gapped-install.md) + - [Configure ECE](./cloud-enterprise/configure.md) + +* [Work with deployments](./cloud-enterprise/working-with-deployments.md) + - Use [](./cloud-enterprise/deployment-templates.md) to [](./cloud-enterprise/create-deployment.md) + - [](./cloud-enterprise/customize-deployment.md) + - Use the deployment [Cloud ID](./cloud-enterprise/find-cloud-id.md) and [Endpoint URLs](./cloud-enterprise/find-endpoint-url.md) for clients connection + +* Learn about [](./cloud-enterprise/tools-apis.md) that you can use with ECE + +Other sections of the documentation provide guidance on additional important tasks related to ECE: + +* Platform security and management: + * [Secure your ECE installation](../security/secure-your-elastic-cloud-enterprise-installation.md) + * [Users and roles](../users-roles/cloud-enterprise-orchestrator.md) + * [ECE platform maintenance operations](../maintenance/ece.md) + * [Manage licenses](../license/manage-your-license-in-ece.md) + +* Deployments security and management: + * [Secure your deployments](../security/secure-your-cluster-deployment.md) + * [Manage snapshot repositories](../tools/snapshot-and-restore.md) + +To learn about other deployment options, refer to [](../deploy.md). + +## Supported versions [ece-supported-versions] + +Refer to the [Elastic Support Matrix](https://www.elastic.co/support/matrix#elastic-cloud-enterprise) for more information about supported Operating Systems, Docker, and Podman versions. diff --git a/deploy-manage/deploy/cloud-enterprise/add-custom-bundles-plugins.md b/deploy-manage/deploy/cloud-enterprise/add-custom-bundles-plugins.md new file mode 100644 index 000000000..74e917ddd --- /dev/null +++ b/deploy-manage/deploy/cloud-enterprise/add-custom-bundles-plugins.md @@ -0,0 +1,415 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/cloud-enterprise/current/ece-add-custom-bundle-plugin.html +navigation_title: "Custom bundles and plugins" +applies_to: + deployment: + ece: +--- + +# Add custom bundles and plugins to your deployment [ece-add-custom-bundle-plugin] + +Follow these steps to upload custom bundles and plugins to your Elasticsearch clusters, so that it uses your custom bundles or plugins. 
+ +* Update your Elasticsearch cluster in the [advanced configuration editor](../../../deploy-manage/deploy/cloud-enterprise/advanced-cluster-configuration.md): +* For bundles, modify the `resources.elasticsearch.plan.elasticsearch.user_bundles` JSON attribute. +* For plugins, modify the `resources.elasticsearch.plan.elasticsearch.user_plugins` JSON attribute. + +We’ve provided some examples, including [LDAP bundles](../../../solutions/search/full-text/search-with-synonyms.md#ece-add-custom-bundle-example-LDAP), [SAML bundles](../../../solutions/search/full-text/search-with-synonyms.md#ece-add-custom-bundle-example-SAML), [synonym bundles](../../../solutions/search/full-text/search-with-synonyms.md#ece-add-custom-bundle-example-synonyms), and adding a [JVM trustore](../../../solutions/search/full-text/search-with-synonyms.md#ece-add-custom-bundle-example-cacerts). + +::::{tip} +As part of the ECE [high availability](../../../deploy-manage/deploy/cloud-enterprise/ece-ha.md) strategy, it’s a good idea to make sure that any web servers hosting custom bundles or plugins required by ECE are available to all allocators, so that they can continue to be accessed in the event of a network partition or zone outage. An instance that requires custom bundles or plugins will be unable to start in the event that it can’t access the plugin web server. +:::: + +## Add custom plugins to your deployment [ece-add-custom-plugin] + +Custom plugins can include the official Elasticsearch plugins not provided with Elastic Cloud Enterprise, any of the community-sourced plugins, or plugins that you write yourself. + +1. [Log into the Cloud UI](../../../deploy-manage/deploy/cloud-enterprise/log-into-cloud-ui.md). +2. From the **Deployments** page, select your deployment. + + Narrow the list by name, ID, or choose from several other filters. To further define the list, use a combination of filters. + +3. In the left side navigation select **Edit** from your deployment menu, then go to the bottom of the page and select [**Advanced Edit**](../../../deploy-manage/deploy/cloud-enterprise/advanced-cluster-configuration.md). +4. Within the **Deployment configuration** JSON find the section: + + `resources` > `elasticsearch` > `plan` > `elasticsearch` + + If there is an existing `user_plugins` section, then add the new plugin there, otherwise add a `user_plugins` section. + + ```sh + { + ... + "resources": { + "elasticsearch": [ + ... + "plan": { + ... + "elasticsearch": { + ... + "user_bundles": [ + { + .... + } ] , + "user_plugins": [ + { + "url" : "", <1> + "name" : "plugin_name", + "elasticsearch_version" : "" <2> + }, + { + "url": "http://www.MYURL.com/my-custom-plugin.zip", + "name": "my-custom-plugin", + "elasticsearch_version": "7.17.1" + } + ] + } + ``` + + 1. The URL for the plugin must be always available. Make sure you host the plugin artifacts internally in a highly available environment. The URL must use the scheme `http` or `https` + 2. The version must match exactly your Elasticsearch version, such as `7.17.1`. Wildcards (*) are not allowed. + + + ::::{important} + If the plugin URL becomes unreachable (if the URL changes at remote end, or connectivity to the remote web server has issues) you might encounter boot loops. + :::: + + + ::::{important} + Don’t use the same URL to serve newer versions of the plugin. This may result in different nodes of the same cluster running different plugin versions. 
+ :::: + + + ::::{important} + A plugin URL that uses an `https` endpoint with a certificate signed by an internal Certificate Authority (CA) is not supported. Either use a publicly trusted certificate, or fall back to the `http` scheme. + :::: + +5. Save your changes. +6. To verify that all nodes have the plugins installed, use one of these commands: `GET /_nodes/plugins?filter_path=nodes.*.plugins` or `GET _cat/plugins?v` + + +## Example: Custom LDAP bundle [ece-add-custom-bundle-example-LDAP] + +This example adds a custom LDAP bundle for deployment level role-based access control (RBAC). To set platform level RBAC, check [Configure RBAC](../../../deploy-manage/users-roles/cloud-enterprise-orchestrator/manage-users-roles.md). + +1. Prepare a custom bundle as a ZIP file that contains your keystore file with the private key and certificate inside of a `truststore` folder [in the same way that you would on Elastic Cloud](/deploy-manage/deploy/elastic-cloud/upload-custom-plugins-bundles.md). This bundle allows all Elasticsearch containers to access the same keystore file through your `ssl.truststore` settings. +2. In the [advanced configuration editor](../../../deploy-manage/deploy/cloud-enterprise/advanced-cluster-configuration.md), update your new Elasticsearch cluster with the custom bundle you have just created. Modify the `user_bundles` JSON attribute of **each** Elasticsearch instance type as shown in the following example: + + ```sh + { + ... + "resources": { + "elasticsearch": [ + ... + "plan": { + ... + "elasticsearch": { + ... + "user_bundles": [ + { + "name": "ldap-cert", + "url": "http://www.MYURL.com/ldapcert.zip", <1> + "elasticsearch_version": "*" + } + ] + } + ... + ``` + + 1. The URLs for the bundle ZIP files (`ldapcert.zip`) must be always available. Make sure you host the plugin artifacts internally in a highly available environment. + + + ::::{important} + If the bundle URL becomes unreachable (if the URL changes at remote end, or connectivity to the remote web server has issues) you might encounter boot loops. + :::: + + + ::::{important} + Don’t use the same URL to serve newer versions of the bundle. This may result in different nodes of the same cluster running different bundle versions. + :::: + +3. Custom bundles are unzipped in `/app/config/BUNDLE_DIRECTORY_STRUCTURE`, where `BUNDLE_DIRECTORY_STRUCTURE` is the directory structure within the bundle ZIP file itself. These file locations are needed in the next step. + + ```sh + $ tree . + . + └── truststore + └── keystore.ks + ``` + + In this example, the unzipped keystore file gets placed under `/app/config/truststore/keystore.ks`. + + + +## Example: Custom SAML bundle [ece-add-custom-bundle-example-SAML] + +This example adds a custom SAML bundle for deployment level role-based access control (RBAC). To set platform level RBAC, check [Configure RBAC](../../../deploy-manage/users-roles/cloud-enterprise-orchestrator/manage-users-roles.md). + +1. If your Identity Provider doesn’t publish its SAML metadata at an HTTP URL, or if your Elasticsearch cluster cannot reach that URL, you can upload the SAML metadata as a file. + + 1. Prepare a ZIP file with a [custom bundle](../../../solutions/search/full-text/search-with-synonyms.md) that contains your Identity Provider’s metadata (`metadata.xml`) and store it in the `saml` folder. + + This bundle allows all Elasticsearch containers to access the metadata file. + + 2. 
In the [advanced configuration editor](../../../deploy-manage/deploy/cloud-enterprise/advanced-cluster-configuration.md), update your Elasticsearch cluster configuration with the bundle you prepared in the previous step. Modify the `user_bundles` JSON attribute of **each** Elasticsearch instance type as shown in the following example: + + ```text + { + ... + "resources": { + "elasticsearch": [ + ... + "plan": { + ... + "elasticsearch": { + ... + "user_bundles": [ + { + "name": "saml-metadata", + "url": "http://www.MYURL.com/saml-metadata.zip", <1> + "elasticsearch_version": "*" + } + ] + } + ... + ``` + + 1. The URL for the bundle ZIP file must be always available. Make sure you host the plugin artifacts internally in a highly available environment.::::{important} + If the bundle URL becomes unreachable (if the URL changes at remote end, or connectivity to the remote web server has issues) you might encounter boot loops. + :::: + + + + + ::::{important} + Don’t use the same URL to serve newer versions of the bundle. This may result in different nodes of the same cluster running different bundle versions. + :::: + + + Custom bundles are unzipped in `/app/config/BUNDLE_DIRECTORY_STRUCTURE`, where `BUNDLE_DIRECTORY_STRUCTURE` is the directory structure within the ZIP file itself. These file locations are needed in the next step. + + In this example, the SAML metadata file is located in the path `/app/config/saml/metadata.xml`: + + ```sh + $ tree . + . + └── saml + └── metadata.xml + ``` + + 3. Adjust your `saml` realm configuration accordingly: + + ```sh + idp.metadata.path: /app/config/saml/metadata.xml <1> + ``` + + 1. The path to the SAML metadata file that was uploaded + + + +## Example: Custom JVM trust store bundle [ece-add-custom-bundle-example-cacerts] + +If you are using SSL certificates signed by non-public certificate authorities, Elasticsearch is not able to communicate with the services using those certificates unless you import a custom JVM trust store containing the certificates of your signing authority into your Elastic Cloud Enterprise installation. You’ll need the trust store to access snapshot repositories like Minio, for your Elastic Cloud Enterprise proxy, or to reindex from remote. + +To import a JVM trust store: + +1. Prepare the custom JVM trust store: + + 1. Pull the certificate from the service you want to make accessible: + + ```sh + openssl s_client -connect -showcerts <1> + ``` + + 1. The server address (name and port number) of the service that you want Elasticsearch to be able to access. This command prints the entire certificate chain to `stdout`. You can choose a certificate at any level to be added to the trust store. + + 2. Save it to a file with as a PEM extension. + 3. Locate your JRE’s default trust store, and copy it to the current directory: + + ```sh + cp cacerts <1> + ``` + + 1. Default JVM trust store is typically located in `$JAVA_HOME/jre/libs/security/cacerts` + + + ::::{tip} + Default trust store contains certificates of many well known root authorities that are trusted by default. If you only want to include a limited list of CAs to trust, skip this step, and simply import specific certificates you want to trust into an empty store as shown next + :::: + + 4. Use keytool command from your JRE to import certificate(s) into the keystore: + + ```sh + $JAVA_HOME/bin/keytool -keystore cacerts -storepass changeit -noprompt -importcert -file .pem -alias <1> + ``` + + 1. 
The file where you saved the certificate to import, and an alias you assign to it, that is descriptive of the origin of the certificate + + + ::::{important} + We recommend that you keep file name and password for the trust store as JVM defaults (`cacerts` and `changeit` respectively). If you need to use different values, you need to add extra configuration, as detailed later in this document, in addition to adding the bundle. + :::: + + + You can have multiple certificates to the trust store, repeating the same command. There is only one trust store per cluster currently supported. You cannot, for example, add multiple bundles with different trust stores to the same cluster, they will not get merged. Add all certificates to be trusted to the same trust store + +2. Create the bundle: + + ```sh + zip cacerts.zip cacerts <1> + ``` + + 1. The name of the zip archive is not significant + + + ::::{tip} + A bundle may contain other contents beyond the trust store if you prefer, but we recommend creating separate bundles for different purposes. + :::: + +3. In the [advanced configuration editor](../../../deploy-manage/deploy/cloud-enterprise/advanced-cluster-configuration.md), update your Elasticsearch cluster configuration with the bundle you prepared in the previous step. Modify the `user_bundles` JSON attribute of **each** Elasticsearch instance type as shown in the following example: + + ```sh + { + ... + "resources": { + "elasticsearch": [ + ... + "plan": { + ... + "elasticsearch": { + ... + "user_bundles": [ + { + "name": "custom-ca-certs", + "url": "http://www.MYURL.com/cacerts.zip", <1> + "elasticsearch_version": "*" <2> + } + ] + } + ... + ``` + + 1. The URL for the bundle ZIP file must be always available. Make sure you host the plugin artefacts internally in a highly available environment. + 2. Wildcards are allowed here, since the certificates are independent from the Elasticsearch version. + + + ::::{important} + If the bundle URL becomes unreachable (if the URL changes at remote end, or connectivity to the remote web server has issues) you might encounter boot loops. + :::: + + +::::{important} +Don’t use the same URL to serve newer versions of the bundle, i.e. when updating certificates. This may result in different nodes of the same cluster running different bundle versions. +:::: + + +1. If you prefer to use a different file name and/or password for the trust store, you also need to add an additional configuration section to the cluster metadata before adding the bundle. This configuration should be added to the `Elasticsearch cluster data` section of the page: + + ```sh + "jvm_trust_store": { + "name": "", + "password": "" + } + ``` + + +1. The name of the trust store must match the filename included into the archive +2. Password used to create the trust store::::{important} +Use only alphanumeric characters, dashes, and underscores in both file name and password. +:::: + + +You do not need to do this step if you are using default filename and password (`cacerts` and `changeit` respectively) in your bundle. + + + + +## Example: Custom GeoIP database bundle [ece-add-custom-bundle-example-geoip] + +1. Prepare a ZIP file with a [custom bundle](../../../solutions/search/full-text/search-with-synonyms.md) that contains a: [GeoLite2 database](https://dev.maxmind.com/geoip/geoip2/geolite2). 
The folder has to be named `ingest-geoip`, and the file name can be anything that is appended `-(City|Country|ASN)` with the `mmdb` file extension, and it must have a different name than the original name `GeoLite2-City.mmdb`. + + The file `my-geoip-file.zip` should look like this: + + ```sh + $ tree . + . + └── ingest-geoip + └── MyGeoLite2-City.mmdb + ``` + +2. Copy the ZIP file to a webserver that is reachable from any allocator in your environment. +3. In the [advanced configuration editor](../../../deploy-manage/deploy/cloud-enterprise/advanced-cluster-configuration.md), update your Elasticsearch cluster configuration with the bundle you prepared in the previous step. Modify the `user_bundles` JSON attribute of **each** Elasticsearch instance type as shown in the following example. + + ```sh + { + ... + "resources": { + "elasticsearch": [ + ... + "plan": { + ... + "elasticsearch": { + ... + "user_bundles": [ + { + "name": "custom-geoip-db", + "url": "http://www.MYURL.com/my-geoip-file.zip", + "elasticsearch_version": "*" + } + ] + } + ``` + +4. To use this bundle, you can refer it in the [GeoIP processor](asciidocalypse://docs/elasticsearch/docs/reference/ingestion-tools/enrich-processor/geoip-processor.md) of an ingest pipeline as `MyGeoLite2-City.mmdb` under `database_file` such as: + + ```sh + ... + { + "geoip": { + "field": ... + "database_file": "MyGeoLite2-City.mmdb", + ... + } + } + ... + ``` + + + +## Example: Custom synonyms bundle [ece-add-custom-bundle-example-synonyms] + +1. Prepare a ZIP file with a [custom bundle](../../../solutions/search/full-text/search-with-synonyms.md) that contains a dictionary of synonyms in a text file. + + The file `synonyms.zip` should look like this: + + ```sh + $ tree . + . + └── dictionaries + └── synonyms.txt + ``` + +2. Copy the ZIP file to a webserver that is reachable from any allocator in your environment. +3. In the [advanced configuration editor](../../../deploy-manage/deploy/cloud-enterprise/advanced-cluster-configuration.md), update your Elasticsearch cluster configuration with the bundle you prepared in the previous step. Modify the `user_bundles` JSON attribute of **each** Elasticsearch instance type as shown in the following example. + + ```sh + { + ... + "resources": { + "elasticsearch": [ + ... + "plan": { + ... + "elasticsearch": { + ... + "user_bundles": [ + { + "name": "custom-synonyms", + "url": "http://www.MYURL.com/synonyms.zip", + "elasticsearch_version": "*" + } + ] + } + ``` + + diff --git a/deploy-manage/deploy/cloud-enterprise/add-plugins.md b/deploy-manage/deploy/cloud-enterprise/add-plugins.md new file mode 100644 index 000000000..632fc4395 --- /dev/null +++ b/deploy-manage/deploy/cloud-enterprise/add-plugins.md @@ -0,0 +1,43 @@ +--- +applies_to: + deployment: + ece: +--- + +# Add plugins and extensions [ece-adding-plugins] + +Plugins extend the core functionality of {{es}}. {{ece}} makes it easy to add plugins to your deployment by providing a number of plugins that work with your version of {{es}}. One advantage of these plugins is that you generally don’t have to worry about upgrading plugins when upgrading to a new {{es}} version, unless there are breaking changes. The plugins are upgraded along with the rest of your deployment. + +Adding plugins to a deployment is as simple as selecting it from the list of available plugins, but different versions of {{es}} support different plugins. 
Plugins are available for different purposes, such as: + +* National language support, phonetic analysis, and extended unicode support +* Ingesting attachments in common formats and ingesting information about the geographic location of IP addresses +* Adding new field datatypes to {{es}} + +Additional plugins might be available. If a plugin is listed for your version of {{es}}, it can be used. + +You can also [create](asciidocalypse://elasticsearch/docs/extend/create-elasticsearch-plugins.md) and add custom plugins. + +To add plugins when creating a new deployment: + +1. [Log into the Cloud UI](/deploy-manage/deploy/cloud-enterprise/log-into-cloud-ui.md) and select **Create deployment**. +2. Make your initial deployment selections, then select **Customize Deployment**. +3. Beneath the {{es}} master node, expand the **Manage plugins and settings** caret. +4. Select the plugins you want. +5. Select **Create deployment**. + +The deployment spins up with the plugins installed. + +To add plugins to an existing deployment: + +1. [Log into the Cloud UI](/deploy-manage/deploy/cloud-enterprise/log-into-cloud-ui.md). +2. On the **Deployments** page, select your deployment. + + Narrow the list by name, ID, or choose from several other filters. To further define the list, use a combination of filters. + +3. From your deployment menu, go to the **Edit** page. +4. Beneath the {{es}} master node, expand the **Manage plugins and settings** caret. +5. Select the plugins that you want. +6. Select **Save changes**. + +There is no downtime when adding plugins to highly available deployments. The deployment is updated with new nodes that have the plugins installed. \ No newline at end of file diff --git a/deploy-manage/deploy/cloud-enterprise/ce-add-support-for-node-roles-autoscaling.md b/deploy-manage/deploy/cloud-enterprise/ce-add-support-for-node-roles-autoscaling.md index e6b3adca8..d3c4db11e 100644 --- a/deploy-manage/deploy/cloud-enterprise/ce-add-support-for-node-roles-autoscaling.md +++ b/deploy-manage/deploy/cloud-enterprise/ce-add-support-for-node-roles-autoscaling.md @@ -12,7 +12,7 @@ System owned deployment templates have already been updated to support both data ## Adding support for node_roles [ece_adding_support_for_node_roles] -The `node_roles` field defines the roles that an Elasticsearch topology element can have, which is used in place of `node_type` when a new feature such as autoscaling is enabled, or when a new data tier is added. This field is supported on [Elastic stack versions 7.10 and above](asciidocalypse://docs/cloud/docs/reference/cloud/cloud-enterprise/changes-to-index-allocation-api.md). +The `node_roles` field defines the roles that an Elasticsearch topology element can have, which is used in place of `node_type` when a new feature such as autoscaling is enabled, or when a new data tier is added. This field is supported on [Elastic stack versions 7.10 and above](asciidocalypse://docs/cloud/docs/reference/cloud-enterprise/changes-to-index-allocation-api.md). 
There are a number of fields that need to be added to each Elasticsearch node in order to support `node_roles`: diff --git a/deploy-manage/deploy/cloud-enterprise/configure.md b/deploy-manage/deploy/cloud-enterprise/configure.md index 2ad081b96..7a71714ea 100644 --- a/deploy-manage/deploy/cloud-enterprise/configure.md +++ b/deploy-manage/deploy/cloud-enterprise/configure.md @@ -1,9 +1,15 @@ --- +applies_to: + deployment: + ece: all mapped_pages: - https://www.elastic.co/guide/en/cloud-enterprise/current/ece-configuring-ece.html + - https://www.elastic.co/guide/en/cloud-enterprise/current/ece-administering-ece.html --- -# Configure [ece-configuring-ece] +# Configure ECE [ece-configuring-ece] + +⚠️ **This page is a work in progress.** ⚠️ Now that you have Elastic Cloud Enterprise up and running, take a look at some of the additional features that you can configure: @@ -16,3 +22,17 @@ Now that you have Elastic Cloud Enterprise up and running, take a look at some o * [Change allocator disconnect timeout](change-allocator-disconnect-timeout.md) - Configure how long ECE waits before considering allocators to be disconnected. * [Migrate ECE on Podman hosts to SELinux in enforcing mode](migrate-ece-on-podman-hosts-to-selinux-enforce.md) - Migrate ECE to SELinux in `enforcing` mode using Podman. +## Administering your installation [ece-administering-ece] + +Now that you have Elastic Cloud Enterprise up and running, take a look at the things you can do to keep your installation humming along, from adding more capacity to dealing with hosts that require maintenance or have failed. They are all presented in the [](../../maintenance.md) section. + +* [Scale Out Your Installation](../../../deploy-manage/maintenance/ece/scale-out-installation.md) - Need to add more capacity? Here’s how. +* [Assign Roles to Hosts](../../../deploy-manage/deploy/cloud-enterprise/assign-roles-to-hosts.md) - Make sure new hosts can be used for their intended purpose after you install ECE on them. +* [Enable Maintenance Mode](../../../deploy-manage/maintenance/ece/enable-maintenance-mode.md) - Perform administrative actions on allocators safely by putting them into maintenance mode first. +* [Move Nodes From Allocators](../../../deploy-manage/maintenance/ece/move-nodes-instances-from-allocators.md) - Moves all Elasticsearch clusters and Kibana instances to another allocator, so that the allocator is no longer used for handling user requests. +* [Delete Hosts](../../../deploy-manage/maintenance/ece/delete-ece-hosts.md) - Remove a host from your ECE installation, either because it is no longer needed or because it is faulty. +* [Perform Host Maintenance](../../../deploy-manage/maintenance/ece/perform-ece-hosts-maintenance.md) - Apply operating system patches and other maintenance to hosts safely without removing them from your ECE installation. +* [Manage Elastic Stack Versions](../../../deploy-manage/deploy/cloud-enterprise/manage-elastic-stack-versions.md) - View, add, or update versions of the Elastic Stack that are available on your ECE installation. +* [Upgrade Your Installation](../../../deploy-manage/upgrade/orchestrator/upgrade-cloud-enterprise.md) - A new version of Elastic Cloud Enterprise is available and you want to upgrade. Here’s how. 
+
+
diff --git a/deploy-manage/deploy/cloud-enterprise/deploy-an-orchestrator.md b/deploy-manage/deploy/cloud-enterprise/deploy-an-orchestrator.md
index 4ba2431a8..a23a4873d 100644
--- a/deploy-manage/deploy/cloud-enterprise/deploy-an-orchestrator.md
+++ b/deploy-manage/deploy/cloud-enterprise/deploy-an-orchestrator.md
@@ -1,9 +1,54 @@
+---
+applies_to:
+  deployment:
+    ece: all
+---
# Deploy an orchestrator
-% What needs to be done: Write from scratch
+Elastic Cloud Enterprise (ECE) provides a centralized platform that allows organizations to run Elasticsearch, Kibana, and other Elastic Stack components across multiple machines, whether in a private or public cloud, virtual machines, or your own premises.
-% GitHub issue: https://github.com/elastic/docs-projects/issues/339
+::::{note}
+This section focuses on deploying the ECE orchestrator. If you want to deploy {{es}}, {{kib}}, or other {{stack}} applications on ECE, refer to [](./working-with-deployments.md).
+::::
-% Scope notes: Introduction about the content of this big section (which covers install and configuration possibilities of the orchestrator)
+## Deployment tasks
-⚠️ **This page is a work in progress.** ⚠️
\ No newline at end of file
+This section provides step-by-step guidance on:
+
+* [Prepare the environment](./prepare-environment.md): Review the hardware, software, and networking prerequisites before the installation.
+
+* [Install ECE](./install.md): Identify the deployment scenario that best fits your needs, choose an installation method, and complete the setup.
+  * [Install ECE on a public cloud](./install-ece-on-public-cloud.md)
+  * [Install ECE on your own premises](./install-ece-on-own-premises.md)
+  * [Alternative: install ECE with Ansible](./alternative-install-ece-with-ansible.md)
+
+* [Air-gapped installations](./air-gapped-install.md): Review the different options for air-gapped environments.
+  * [With your private Docker registry](./ece-install-offline-with-registry.md)
+  * [Without any Docker registry](./ece-install-offline-no-registry.md)
+
+* [Configure ECE](./configure.md): Explore the most common tasks to configure your ECE platform.
+  * [System deployments configuration](./system-deployments-configuration.md)
+  * [Configure deployment templates](./deployment-templates.md)
+  * [Configure endpoint URLs](./change-endpoint-urls.md)
+  * [Manage {{stack}} versions](./manage-elastic-stack-versions.md)
+
+## Additional topics
+
+After deploying the ECE platform, you may need to configure custom proxy certificates, manage snapshot repositories, or perform maintenance operations, among other tasks. Refer to the following sections for more details:
+
+* [Secure your ECE installation](../../security/secure-your-elastic-cloud-enterprise-installation.md)
+* [](/deploy-manage/security/secure-your-cluster-deployment.md)
+* [Users and roles](../../users-roles/cloud-enterprise-orchestrator.md)
+* [Manage snapshot repositories](../../tools/snapshot-and-restore.md)
+* [Manage licenses](../../license/manage-your-license-in-ece.md)
+* [ECE platform maintenance operations](../../maintenance/ece.md)
+
+To start orchestrating your {{es}} clusters, refer to [](./working-with-deployments.md).
+ +## Advanced tasks + +The following tasks are only needed on certain circumstances: + +* [Migrate ECE to Podman hosts](./migrate-ece-to-podman-hosts.md) +* [Migrate ECE on Podman hosts to SELinux enforce](./migrate-ece-on-podman-hosts-to-selinux-enforce.md) +* [Change allocator disconnect timeout](./change-allocator-disconnect-timeout.md) diff --git a/deploy-manage/deploy/cloud-enterprise/ece-architecture.md b/deploy-manage/deploy/cloud-enterprise/ece-architecture.md index 6aa1d3256..cb3235be0 100644 --- a/deploy-manage/deploy/cloud-enterprise/ece-architecture.md +++ b/deploy-manage/deploy/cloud-enterprise/ece-architecture.md @@ -1,6 +1,10 @@ --- +applies_to: + deployment: + ece: all mapped_pages: - https://www.elastic.co/guide/en/cloud-enterprise/current/ece-architecture.html + - https://www.elastic.co/guide/en/cloud-enterprise/current/ece-containerization.html --- # Service-oriented architecture [ece-architecture] @@ -15,7 +19,6 @@ Elastic Cloud Enterprise has a service-oriented architecture that lets you: :alt: Elastic Cloud Enterprise high level architecture ::: - ## Control plane [ece_control_plane] The *control plane* of ECE include the following management services: @@ -65,3 +68,19 @@ Provide web and API access for administrators to manage and monitor the ECE inst * Advertise the memory capacity of the underlying host machine to ZooKeeper so that the Constructor can make an informed decision on where to deploy. +## Services as Docker containers [ece-containerization] + +Services are deployed as Docker containers, which simplifies the operational effort and makes it easy to provision similar environments for development and staging. Using Docker containers has the following advantages: + +* **Shares of resources** + + Each cluster node is run within a Docker container to make sure that all of the nodes have access to a guaranteed share of host resources. This mitigates the *noisy neighbor effect* where one busy deployment can overwhelm the entire host. The CPU resources are relative to the size of the Elasticsearch cluster they get assigned to. For example, a cluster with 32GB of RAM gets assigned twice as many CPU resources as a cluster with 16GB of RAM. + +* **Better security** + + On the assumption that any cluster can be compromised, containers are given no access to the platform. The same is true for the services: each service can read or write only those parts of the system state that are relevant to it. Even if some services are compromised, the attacker won’t get hold of the keys to the rest of them and will not compromise the whole platform. + +* **Secure communication through Stunnel** + + Docker containers communicate securely with one another through Transport Layer Security, provided by [Stunnel](https://www.stunnel.org/) (as not all of the services or components support TLS natively). Tunneling all traffic between containers makes sure that it is not possible to eavesdrop, even when someone else has access to the underlying cloud or network infrastructure. 
+ diff --git a/deploy-manage/deploy/cloud-enterprise/ece-configure-templates-index-management.md b/deploy-manage/deploy/cloud-enterprise/ece-configure-templates-index-management.md index bf00eb5bf..0c0525241 100644 --- a/deploy-manage/deploy/cloud-enterprise/ece-configure-templates-index-management.md +++ b/deploy-manage/deploy/cloud-enterprise/ece-configure-templates-index-management.md @@ -39,7 +39,7 @@ To configure index management when you create a deployment template: Index curation : Creates new indices on hot nodes first and moves them to warm nodes later on, based on the data views (formerly *index patterns*) you specify. Also manages replica counts for you, so that all shards of an index can fit on the right data nodes. Compared to index lifecycle management, index curation for time-based indices supports only one action, to move indices from nodes on one data configuration to another, but it is more straightforward to set up initially and all setup can be done directly from the Cloud UI. - If your user need to delete indices once they are no longer useful to them, they can run [Curator](asciidocalypse://docs/curator/docs/reference/elasticsearch/elasticsearch-client-curator/index.md) on-premise to manage indices for Elasticsearch clusters hosted on Elastic Cloud Enterprise. + If your user need to delete indices once they are no longer useful to them, they can run [Curator](asciidocalypse://docs/curator/docs/reference/index.md) on-premise to manage indices for Elasticsearch clusters hosted on Elastic Cloud Enterprise. To configure index curation: diff --git a/deploy-manage/deploy/cloud-enterprise/ece-configuring-ece-tag-allocators.md b/deploy-manage/deploy/cloud-enterprise/ece-configuring-ece-tag-allocators.md index 0ce8fc9a6..5e44fd9a2 100644 --- a/deploy-manage/deploy/cloud-enterprise/ece-configuring-ece-tag-allocators.md +++ b/deploy-manage/deploy/cloud-enterprise/ece-configuring-ece-tag-allocators.md @@ -39,7 +39,7 @@ $$$allocator-sample-tags$$$Tags are simple key-value pairs. A small sampling of : Indicates allocators that can run CPU-intensive workloads faster than others. `instanceFamily: i3`, `instanceFamily: m5` -: Indicates the host type, used extensively on our hosted Elasticsearch Service to identify hosts with specific hardware characteristics. If you run your own hardware on-premise and have standardized on several specific host configurations, you could use similar tags. If you are deploying ECE on another cloud platform, you could use the instance type or machine type names from your provider. +: Indicates the host type, used extensively on {{ech}} to identify hosts with specific hardware characteristics. If you run your own hardware on-premise and have standardized on several specific host configurations, you could use similar tags. If you are deploying ECE on another cloud platform, you could use the instance type or machine type names from your provider. Avoid tags that describe a particular use case or an Elastic Stack component you plan to run on these allocators. Examples of tags to avoid include `elasticsearch: false` or `kibana: true`. You should define the intended use at the level of instance configurations instead and tag your allocators only to describe hardware characteristics. 
diff --git a/deploy-manage/deploy/cloud-enterprise/ece-containerization.md b/deploy-manage/deploy/cloud-enterprise/ece-containerization.md deleted file mode 100644 index d17e19ea3..000000000 --- a/deploy-manage/deploy/cloud-enterprise/ece-containerization.md +++ /dev/null @@ -1,20 +0,0 @@ ---- -mapped_pages: - - https://www.elastic.co/guide/en/cloud-enterprise/current/ece-containerization.html ---- - -# Services as Docker containers [ece-containerization] - -Services are deployed as Docker containers, which simplifies the operational effort and makes it easy to provision similar environments for development and staging. Using Docker containers has the following advantages: - -* **Shares of resources** - - Each cluster node is run within a Docker container to make sure that all of the nodes have access to a guaranteed share of host resources. This mitigates the *noisy neighbor effect* where one busy deployment can overwhelm the entire host. The CPU resources are relative to the size of the Elasticsearch cluster they get assigned to. For example, a cluster with 32GB of RAM gets assigned twice as many CPU resources as a cluster with 16GB of RAM. - -* **Better security** - - On the assumption that any cluster can be compromised, containers are given no access to the platform. The same is true for the services: each service can read or write only those parts of the system state that are relevant to it. Even if some services are compromised, the attacker won’t get hold of the keys to the rest of them and will not compromise the whole platform. - -* **Secure communication through Stunnel** - - Docker containers communicate securely with one another through Transport Layer Security, provided by [Stunnel](https://www.stunnel.org/) (as not all of the services or components support TLS natively). Tunneling all traffic between containers makes sure that it is not possible to eavesdrop, even when someone else has access to the underlying cloud or network infrastructure. diff --git a/deploy-manage/deploy/cloud-enterprise/ece-manage-capacity.md b/deploy-manage/deploy/cloud-enterprise/ece-manage-capacity.md index d7db12cfb..ab8440506 100644 --- a/deploy-manage/deploy/cloud-enterprise/ece-manage-capacity.md +++ b/deploy-manage/deploy/cloud-enterprise/ece-manage-capacity.md @@ -36,7 +36,7 @@ curl -X PUT \ -d '{"capacity":}' ``` -For more information on how to use API keys for authentication, check the section [Access the API from the Command Line](asciidocalypse://docs/cloud/docs/reference/cloud/cloud-enterprise/ece-api-command-line.md). +For more information on how to use API keys for authentication, check the section [Access the API from the Command Line](asciidocalypse://docs/cloud/docs/reference/cloud-enterprise/ece-api-command-line.md). ::::{important} Prior to ECE 3.5.0, regardless of the use of this API, the [CPU quota](#ece-alloc-cpu) used the memory specified at installation time. 
diff --git a/deploy-manage/deploy/cloud-enterprise/ece-regional-deployment-aliases.md b/deploy-manage/deploy/cloud-enterprise/ece-regional-deployment-aliases.md index a91b23dbd..e0e152b0c 100644 --- a/deploy-manage/deploy/cloud-enterprise/ece-regional-deployment-aliases.md +++ b/deploy-manage/deploy/cloud-enterprise/ece-regional-deployment-aliases.md @@ -99,5 +99,5 @@ While the `TransportClient` is deprecated, your custom endpoint aliases still wo ``` -For more information on configuring the `TransportClient`, see [Configure the Java Transport Client](asciidocalypse://docs/elasticsearch-java/docs/reference/elasticsearch/elasticsearch-client-java-api-client/index.md). +For more information on configuring the `TransportClient`, see [Configure the Java Transport Client](asciidocalypse://docs/elasticsearch-java/docs/reference/index.md). diff --git a/deploy-manage/deploy/cloud-enterprise/find-cloud-id.md b/deploy-manage/deploy/cloud-enterprise/find-cloud-id.md index 26208d832..15cd0a4bc 100644 --- a/deploy-manage/deploy/cloud-enterprise/find-cloud-id.md +++ b/deploy-manage/deploy/cloud-enterprise/find-cloud-id.md @@ -41,14 +41,14 @@ To use the Cloud ID, you need: * The unique Cloud ID for your deployment, available from the deployment overview page. * A user ID and password that has permission to send data to your cluster. - In our examples, we use the `elastic` superuser that every Elasticsearch cluster comes with. The password for the `elastic` user is provided when you create a deployment (and can also be [reset](../../users-roles/cluster-or-deployment-auth/built-in-users.md) if you forget it). On a production system, you should adapt these examples by creating a user that can write to and access only the minimally required indices. For each Beat, review the specific feature and role table, similar to the one in [Metricbeat](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-metricbeat/feature-roles.md) documentation. + In our examples, we use the `elastic` superuser that every Elasticsearch cluster comes with. The password for the `elastic` user is provided when you create a deployment (and can also be [reset](../../users-roles/cluster-or-deployment-auth/built-in-users.md) if you forget it). On a production system, you should adapt these examples by creating a user that can write to and access only the minimally required indices. For each Beat, review the specific feature and role table, similar to the one in [Metricbeat](asciidocalypse://docs/beats/docs/reference/metricbeat/feature-roles.md) documentation. ## Configure Beats with your Cloud ID [ece-cloud-id-beats] The following example shows how you can send operational data from Metricbeat to {{ece}} by using the Cloud ID. Any of the available Beats will work, but we had to pick one for this example. ::::{tip} -For others, you can learn more about [getting started](asciidocalypse://docs/beats/docs/reference/ingestion-tools/index.md) with each Beat. +For others, you can learn more about [getting started](asciidocalypse://docs/beats/docs/reference/index.md) with each Beat. :::: To get started with Metricbeat and {{ece}}: @@ -56,8 +56,8 @@ To get started with Metricbeat and {{ece}}: 1. [Log into the Cloud UI](log-into-cloud-ui.md). 2. [Create a new deployment](create-deployment.md) and copy down the password for the `elastic` user. 3. On the deployment overview page, copy down the Cloud ID. -4. 
Set up the Beat of your choice, such as [Metricbeat](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-metricbeat/metricbeat-installation-configuration.md). -5. [Configure the Beat output to send to Elastic Cloud](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-metricbeat/configure-cloud-id.md). +4. Set up the Beat of your choice, such as [Metricbeat](asciidocalypse://docs/beats/docs/reference/metricbeat/metricbeat-installation-configuration.md). +5. [Configure the Beat output to send to Elastic Cloud](asciidocalypse://docs/beats/docs/reference/metricbeat/configure-cloud-id.md). ::::{note} Make sure you replace the values for `cloud.id` and `cloud.auth` with your own information. diff --git a/deploy-manage/deploy/cloud-enterprise/tools-apis.md b/deploy-manage/deploy/cloud-enterprise/tools-apis.md index dc9b34e50..e427c5852 100644 --- a/deploy-manage/deploy/cloud-enterprise/tools-apis.md +++ b/deploy-manage/deploy/cloud-enterprise/tools-apis.md @@ -4,4 +4,14 @@ % GitHub issue: https://github.com/elastic/docs-projects/issues/310 -⚠️ **This page is a work in progress.** ⚠️ \ No newline at end of file + ⚠️ **This page is a work in progress.** ⚠️ + +You can use these tools and APIs to interact with the following {{ece}} features: + +* [{{ecloud}} Control (ecctl)](asciidocalypse://docs/ecctl/docs/reference/index.md): Wraps typical operations commonly needed by operators within a single command line tool. +* [ECE scripts](asciidocalypse://docs/cloud/docs/reference/cloud-enterprise/scripts.md): Use the `elastic-cloud-enterprise.sh` script to install {{ece}} or modify your installation. +* [ECE diagnostics tool](/troubleshoot/deployments/cloud-enterprise/run-ece-diagnostics-tool.md): Collect logs and metrics that you can send to Elastic Support for troubleshooting and investigation purposes. + + + + \ No newline at end of file diff --git a/deploy-manage/deploy/cloud-on-k8s.md b/deploy-manage/deploy/cloud-on-k8s.md index 15c26f1e5..23d535a50 100644 --- a/deploy-manage/deploy/cloud-on-k8s.md +++ b/deploy-manage/deploy/cloud-on-k8s.md @@ -1,6 +1,7 @@ --- -applies: - eck: all +applies_to: + deployment: + eck: all mapped_urls: - https://www.elastic.co/guide/en/cloud-on-k8s/current/index.html - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-overview.html diff --git a/deploy-manage/deploy/cloud-on-k8s/accessing-services.md b/deploy-manage/deploy/cloud-on-k8s/accessing-services.md index deb0a8086..564a1c02d 100644 --- a/deploy-manage/deploy/cloud-on-k8s/accessing-services.md +++ b/deploy-manage/deploy/cloud-on-k8s/accessing-services.md @@ -1,6 +1,7 @@ --- -applies: - eck: all +applies_to: + deployment: + eck: all mapped_urls: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-accessing-elastic-services.html - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-request-elasticsearch-endpoint.html diff --git a/deploy-manage/deploy/cloud-on-k8s/advanced-configuration-logstash.md b/deploy-manage/deploy/cloud-on-k8s/advanced-configuration-logstash.md index 037676cab..eb5304d47 100644 --- a/deploy-manage/deploy/cloud-on-k8s/advanced-configuration-logstash.md +++ b/deploy-manage/deploy/cloud-on-k8s/advanced-configuration-logstash.md @@ -1,6 +1,7 @@ --- -applies: - eck: all +applies_to: + deployment: + eck: all mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-logstash-advanced-configuration.html --- @@ -36,7 +37,7 @@ spec: You can specify sensitive settings with Kubernetes secrets. 
ECK automatically injects these settings into the keystore before it starts Logstash. The ECK operator continues to watch the secrets for changes and will restart Logstash Pods when it detects a change. -The Logstash Keystore can be password protected by setting an environment variable called `LOGSTASH_KEYSTORE_PASS`. Check out [Logstash Keystore](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/keystore.md#keystore-password) documentation for details. +The Logstash Keystore can be password protected by setting an environment variable called `LOGSTASH_KEYSTORE_PASS`. Check out [Logstash Keystore](asciidocalypse://docs/logstash/docs/reference/keystore.md#keystore-password) documentation for details. ```yaml apiVersion: v1 diff --git a/deploy-manage/deploy/cloud-on-k8s/advanced-configuration-maps-server.md b/deploy-manage/deploy/cloud-on-k8s/advanced-configuration-maps-server.md index b17813f2f..58a724876 100644 --- a/deploy-manage/deploy/cloud-on-k8s/advanced-configuration-maps-server.md +++ b/deploy-manage/deploy/cloud-on-k8s/advanced-configuration-maps-server.md @@ -1,6 +1,7 @@ --- -applies: - eck: all +applies_to: + deployment: + eck: all mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-maps-advanced-configuration.html --- diff --git a/deploy-manage/deploy/cloud-on-k8s/advanced-configuration.md b/deploy-manage/deploy/cloud-on-k8s/advanced-configuration.md index 302fcb396..6acdc8a8d 100644 --- a/deploy-manage/deploy/cloud-on-k8s/advanced-configuration.md +++ b/deploy-manage/deploy/cloud-on-k8s/advanced-configuration.md @@ -1,6 +1,7 @@ --- -applies: - eck: all +applies_to: + deployment: + eck: all mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-apm-advanced-configuration.html --- diff --git a/deploy-manage/deploy/cloud-on-k8s/advanced-elasticsearch-node-scheduling.md b/deploy-manage/deploy/cloud-on-k8s/advanced-elasticsearch-node-scheduling.md index e2f58e549..8e0e8d2c8 100644 --- a/deploy-manage/deploy/cloud-on-k8s/advanced-elasticsearch-node-scheduling.md +++ b/deploy-manage/deploy/cloud-on-k8s/advanced-elasticsearch-node-scheduling.md @@ -1,6 +1,7 @@ --- -applies: - eck: all +applies_to: + deployment: + eck: all mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-advanced-node-scheduling.html --- diff --git a/deploy-manage/deploy/cloud-on-k8s/air-gapped-install.md b/deploy-manage/deploy/cloud-on-k8s/air-gapped-install.md index 2da23b5f8..8af7b3778 100644 --- a/deploy-manage/deploy/cloud-on-k8s/air-gapped-install.md +++ b/deploy-manage/deploy/cloud-on-k8s/air-gapped-install.md @@ -1,7 +1,8 @@ --- navigation_title: Air gapped environments -applies: - eck: all +applies_to: + deployment: + eck: all mapped_urls: - https://www.elastic.co/guide/en/elastic-stack/current/air-gapped-install.html - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-air-gapped.html diff --git a/deploy-manage/deploy/cloud-on-k8s/apm-server.md b/deploy-manage/deploy/cloud-on-k8s/apm-server.md index 288d73905..1f07e4937 100644 --- a/deploy-manage/deploy/cloud-on-k8s/apm-server.md +++ b/deploy-manage/deploy/cloud-on-k8s/apm-server.md @@ -1,6 +1,7 @@ --- -applies: - eck: all +applies_to: + deployment: + eck: all mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-apm-server.html --- diff --git a/deploy-manage/deploy/cloud-on-k8s/beats.md b/deploy-manage/deploy/cloud-on-k8s/beats.md index 88685012e..d6494cb83 100644 --- a/deploy-manage/deploy/cloud-on-k8s/beats.md +++ 
b/deploy-manage/deploy/cloud-on-k8s/beats.md @@ -1,6 +1,7 @@ --- -applies: - eck: all +applies_to: + deployment: + eck: all mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-beat.html --- diff --git a/deploy-manage/deploy/cloud-on-k8s/configuration-beats.md b/deploy-manage/deploy/cloud-on-k8s/configuration-beats.md index e66eb7c66..e0913e4c2 100644 --- a/deploy-manage/deploy/cloud-on-k8s/configuration-beats.md +++ b/deploy-manage/deploy/cloud-on-k8s/configuration-beats.md @@ -1,6 +1,7 @@ --- -applies: - eck: all +applies_to: + deployment: + eck: all mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-beat-configuration.html --- @@ -72,7 +73,7 @@ stringData: hosts: ["quickstart-es-http.default.svc:9200"] ``` -For more details, check the [Beats configuration](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-libbeat/config-file-format.md) section. +For more details, check the [Beats configuration](asciidocalypse://docs/beats/docs/reference/libbeat/config-file-format.md) section. ## Customize the connection to an Elasticsearch cluster [k8s-beat-connect-es] @@ -153,7 +154,7 @@ stringData: AGENT_NAME_VAR: id_007 ``` -Check [Beats documentation](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-filebeat/keystore.md) for more details. +Check [Beats documentation](asciidocalypse://docs/beats/docs/reference/filebeat/keystore.md) for more details. ## Set Beat output [k8s-beat-set-beat-output] @@ -203,7 +204,7 @@ Consider picking the `Recreate` strategy if you are using a `hostPath` volume as ## Role Based Access Control for Beats [k8s-beat-role-based-access-control-for-beats] -Some Beats features (such as [autodiscover](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-filebeat/configuration-autodiscover.md) or Kubernetes module [metricsets](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-metricbeat/metricbeat-metricset-kubernetes-apiserver.md)) require that Beat Pods interact with Kubernetes APIs. Specific permissions are needed to allow this functionality. Standard Kubernetes [RBAC](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) rules apply. For example, to allow for autodiscover: +Some Beats features (such as [autodiscover](asciidocalypse://docs/beats/docs/reference/filebeat/configuration-autodiscover.md) or Kubernetes module [metricsets](asciidocalypse://docs/beats/docs/reference/metricbeat/metricbeat-metricset-kubernetes-apiserver.md)) require that Beat Pods interact with Kubernetes APIs. Specific permissions are needed to allow this functionality. Standard Kubernetes [RBAC](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) rules apply. 
For example, to allow for autodiscover: ```yaml apiVersion: beat.k8s.elastic.co/v1beta1 diff --git a/deploy-manage/deploy/cloud-on-k8s/configuration-examples-beats.md b/deploy-manage/deploy/cloud-on-k8s/configuration-examples-beats.md index 7eca5d1e1..d5277b2ec 100644 --- a/deploy-manage/deploy/cloud-on-k8s/configuration-examples-beats.md +++ b/deploy-manage/deploy/cloud-on-k8s/configuration-examples-beats.md @@ -1,6 +1,7 @@ --- -applies: - eck: all +applies_to: + deployment: + eck: all mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-beat-configuration-examples.html --- diff --git a/deploy-manage/deploy/cloud-on-k8s/configuration-examples-fleet.md b/deploy-manage/deploy/cloud-on-k8s/configuration-examples-fleet.md index bb9f2388c..425917579 100644 --- a/deploy-manage/deploy/cloud-on-k8s/configuration-examples-fleet.md +++ b/deploy-manage/deploy/cloud-on-k8s/configuration-examples-fleet.md @@ -1,6 +1,7 @@ --- -applies: - eck: all +applies_to: + deployment: + eck: all mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-elastic-agent-fleet-configuration-examples.html --- diff --git a/deploy-manage/deploy/cloud-on-k8s/configuration-examples-logstash.md b/deploy-manage/deploy/cloud-on-k8s/configuration-examples-logstash.md index 9640071f3..f9536eac5 100644 --- a/deploy-manage/deploy/cloud-on-k8s/configuration-examples-logstash.md +++ b/deploy-manage/deploy/cloud-on-k8s/configuration-examples-logstash.md @@ -1,6 +1,7 @@ --- -applies: - eck: all +applies_to: + deployment: + eck: all mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-logstash-configuration-examples.html --- diff --git a/deploy-manage/deploy/cloud-on-k8s/configuration-examples-standalone.md b/deploy-manage/deploy/cloud-on-k8s/configuration-examples-standalone.md index b83d77e0b..a673a2e54 100644 --- a/deploy-manage/deploy/cloud-on-k8s/configuration-examples-standalone.md +++ b/deploy-manage/deploy/cloud-on-k8s/configuration-examples-standalone.md @@ -1,6 +1,7 @@ --- -applies: - eck: all +applies_to: + deployment: + eck: all mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-elastic-agent-configuration-examples.html --- diff --git a/deploy-manage/deploy/cloud-on-k8s/configuration-fleet.md b/deploy-manage/deploy/cloud-on-k8s/configuration-fleet.md index 05bf3bad7..35c3857f5 100644 --- a/deploy-manage/deploy/cloud-on-k8s/configuration-fleet.md +++ b/deploy-manage/deploy/cloud-on-k8s/configuration-fleet.md @@ -1,6 +1,7 @@ --- -applies: - eck: all +applies_to: + deployment: + eck: all mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-elastic-agent-fleet-configuration.html --- diff --git a/deploy-manage/deploy/cloud-on-k8s/configuration-logstash.md b/deploy-manage/deploy/cloud-on-k8s/configuration-logstash.md index eaa6135de..e6461f4b1 100644 --- a/deploy-manage/deploy/cloud-on-k8s/configuration-logstash.md +++ b/deploy-manage/deploy/cloud-on-k8s/configuration-logstash.md @@ -1,6 +1,7 @@ --- -applies: - eck: all +applies_to: + deployment: + eck: all mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-logstash-configuration.html --- diff --git a/deploy-manage/deploy/cloud-on-k8s/configuration-standalone.md b/deploy-manage/deploy/cloud-on-k8s/configuration-standalone.md index 269778b1f..b2656f8c7 100644 --- a/deploy-manage/deploy/cloud-on-k8s/configuration-standalone.md +++ b/deploy-manage/deploy/cloud-on-k8s/configuration-standalone.md @@ -1,6 +1,7 @@ --- -applies: - eck: all +applies_to: + deployment: + 
eck: all mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-elastic-agent-configuration.html --- diff --git a/deploy-manage/deploy/cloud-on-k8s/configure-deployments.md b/deploy-manage/deploy/cloud-on-k8s/configure-deployments.md index ec4294b64..80c25b604 100644 --- a/deploy-manage/deploy/cloud-on-k8s/configure-deployments.md +++ b/deploy-manage/deploy/cloud-on-k8s/configure-deployments.md @@ -1,6 +1,7 @@ --- -applies: - eck: all +applies_to: + deployment: + eck: all mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-orchestrating-elastic-stack-applications.html - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-update-deployment.html diff --git a/deploy-manage/deploy/cloud-on-k8s/configure-eck.md b/deploy-manage/deploy/cloud-on-k8s/configure-eck.md index 798fc1f2a..9dc95ca9b 100644 --- a/deploy-manage/deploy/cloud-on-k8s/configure-eck.md +++ b/deploy-manage/deploy/cloud-on-k8s/configure-eck.md @@ -1,7 +1,8 @@ --- navigation_title: Apply configuration settings -applies: - eck: all +applies_to: + deployment: + eck: all mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-operator-config.html --- @@ -11,7 +12,7 @@ mapped_pages: This page explains the various methods for configuring and applying ECK settings. ::::{tip} -For a detailed list and description of all available settings in ECK, refer to [ECK configuration flags](asciidocalypse://docs/cloud-on-k8s/docs/reference/cloud/cloud-on-k8s/eck-configuration-flags.md). +For a detailed list and description of all available settings in ECK, refer to [ECK configuration flags](asciidocalypse://docs/cloud-on-k8s/docs/reference/eck-configuration-flags.md). :::: By default, the ECK installation includes a [ConfigMap](https://kubernetes.io/docs/concepts/configuration/configmap/) with an `eck.yaml` key where you can add, remove, or update configuration settings. This ConfigMap is mounted into the operator’s container as a file, and provided to the application through the `--config` flag. @@ -55,7 +56,7 @@ If you installed ECK using the manifests and the commands listed in [Deploy ECK] You can update the ConfigMap directly using the command `kubectl edit configmap elastic-operator -n elastic-operator` or modify the installation manifests and reapply them with `kubectl apply -f `. -The following shows the default `elastic-operator` ConfigMap, for reference purposes. Refer to [ECK configuration flags](asciidocalypse://docs/cloud-on-k8s/docs/reference/cloud/cloud-on-k8s/eck-configuration-flags.md) for a complete list of available settings. +The following shows the default `elastic-operator` ConfigMap, for reference purposes. Refer to [ECK configuration flags](asciidocalypse://docs/cloud-on-k8s/docs/reference/eck-configuration-flags.md) for a complete list of available settings. 
```yaml apiVersion: v1 diff --git a/deploy-manage/deploy/cloud-on-k8s/configure-validating-webhook.md b/deploy-manage/deploy/cloud-on-k8s/configure-validating-webhook.md index fb35b29fe..009b3b505 100644 --- a/deploy-manage/deploy/cloud-on-k8s/configure-validating-webhook.md +++ b/deploy-manage/deploy/cloud-on-k8s/configure-validating-webhook.md @@ -1,6 +1,7 @@ --- -applies: - eck: all +applies_to: + deployment: + eck: all mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-webhook.html --- diff --git a/deploy-manage/deploy/cloud-on-k8s/configure.md b/deploy-manage/deploy/cloud-on-k8s/configure.md index d7c8f7f22..e2e8be8ea 100644 --- a/deploy-manage/deploy/cloud-on-k8s/configure.md +++ b/deploy-manage/deploy/cloud-on-k8s/configure.md @@ -1,7 +1,8 @@ --- navigation_title: Configure -applies: - eck: all +applies_to: + deployment: + eck: all mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-operating-eck.html --- diff --git a/deploy-manage/deploy/cloud-on-k8s/connect-to-apm-server.md b/deploy-manage/deploy/cloud-on-k8s/connect-to-apm-server.md index 83bb20453..98e878fe8 100644 --- a/deploy-manage/deploy/cloud-on-k8s/connect-to-apm-server.md +++ b/deploy-manage/deploy/cloud-on-k8s/connect-to-apm-server.md @@ -1,6 +1,7 @@ --- -applies: - eck: all +applies_to: + deployment: + eck: all mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-apm-connecting.html --- diff --git a/deploy-manage/deploy/cloud-on-k8s/connect-to-external-elastic-resources.md b/deploy-manage/deploy/cloud-on-k8s/connect-to-external-elastic-resources.md index 9ec084e1f..b2b36f1d5 100644 --- a/deploy-manage/deploy/cloud-on-k8s/connect-to-external-elastic-resources.md +++ b/deploy-manage/deploy/cloud-on-k8s/connect-to-external-elastic-resources.md @@ -1,6 +1,7 @@ --- -applies: - eck: all +applies_to: + deployment: + eck: all mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-connect-to-unmanaged-resources.html --- diff --git a/deploy-manage/deploy/cloud-on-k8s/create-custom-images.md b/deploy-manage/deploy/cloud-on-k8s/create-custom-images.md index e76e6d603..806783180 100644 --- a/deploy-manage/deploy/cloud-on-k8s/create-custom-images.md +++ b/deploy-manage/deploy/cloud-on-k8s/create-custom-images.md @@ -1,6 +1,7 @@ --- -applies: - eck: all +applies_to: + deployment: + eck: all mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-custom-images.html --- diff --git a/deploy-manage/deploy/cloud-on-k8s/custom-configuration-files-plugins.md b/deploy-manage/deploy/cloud-on-k8s/custom-configuration-files-plugins.md index 2895c5527..4e68b62a3 100644 --- a/deploy-manage/deploy/cloud-on-k8s/custom-configuration-files-plugins.md +++ b/deploy-manage/deploy/cloud-on-k8s/custom-configuration-files-plugins.md @@ -1,6 +1,7 @@ --- -applies: - eck: all +applies_to: + deployment: + eck: all mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-bundles-plugins.html --- diff --git a/deploy-manage/deploy/cloud-on-k8s/customize-pods.md b/deploy-manage/deploy/cloud-on-k8s/customize-pods.md index 3bee22931..facb6be64 100644 --- a/deploy-manage/deploy/cloud-on-k8s/customize-pods.md +++ b/deploy-manage/deploy/cloud-on-k8s/customize-pods.md @@ -1,6 +1,7 @@ --- -applies: - eck: all +applies_to: + deployment: + eck: all mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-customize-pods.html --- diff --git a/deploy-manage/deploy/cloud-on-k8s/deploy-an-orchestrator.md 
b/deploy-manage/deploy/cloud-on-k8s/deploy-an-orchestrator.md index b64f75198..3ac01e3b4 100644 --- a/deploy-manage/deploy/cloud-on-k8s/deploy-an-orchestrator.md +++ b/deploy-manage/deploy/cloud-on-k8s/deploy-an-orchestrator.md @@ -1,6 +1,7 @@ --- -applies: - eck: all +applies_to: + deployment: + eck: all mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-quickstart.html --- diff --git a/deploy-manage/deploy/cloud-on-k8s/deploy-eck-on-gke-autopilot.md b/deploy-manage/deploy/cloud-on-k8s/deploy-eck-on-gke-autopilot.md index 077fd7069..a5eef9d16 100644 --- a/deploy-manage/deploy/cloud-on-k8s/deploy-eck-on-gke-autopilot.md +++ b/deploy-manage/deploy/cloud-on-k8s/deploy-eck-on-gke-autopilot.md @@ -1,6 +1,7 @@ --- -applies: - eck: all +applies_to: + deployment: + eck: all mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-autopilot.html - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-autopilot-setting-virtual-memory.html diff --git a/deploy-manage/deploy/cloud-on-k8s/deploy-eck-on-openshift.md b/deploy-manage/deploy/cloud-on-k8s/deploy-eck-on-openshift.md index f9c67282f..59c9736ae 100644 --- a/deploy-manage/deploy/cloud-on-k8s/deploy-eck-on-openshift.md +++ b/deploy-manage/deploy/cloud-on-k8s/deploy-eck-on-openshift.md @@ -1,6 +1,7 @@ --- -applies: - eck: all +applies_to: + deployment: + eck: all mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-openshift.html --- diff --git a/deploy-manage/deploy/cloud-on-k8s/deploy-elastic-maps-server.md b/deploy-manage/deploy/cloud-on-k8s/deploy-elastic-maps-server.md index 96bf3c19a..08fb206bd 100644 --- a/deploy-manage/deploy/cloud-on-k8s/deploy-elastic-maps-server.md +++ b/deploy-manage/deploy/cloud-on-k8s/deploy-elastic-maps-server.md @@ -1,6 +1,7 @@ --- -applies: - eck: all +applies_to: + deployment: + eck: all mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-maps-es.html --- diff --git a/deploy-manage/deploy/cloud-on-k8s/deploy-fips-compatible-version-of-eck.md b/deploy-manage/deploy/cloud-on-k8s/deploy-fips-compatible-version-of-eck.md index e52a3607b..e0377646a 100644 --- a/deploy-manage/deploy/cloud-on-k8s/deploy-fips-compatible-version-of-eck.md +++ b/deploy-manage/deploy/cloud-on-k8s/deploy-fips-compatible-version-of-eck.md @@ -1,7 +1,8 @@ --- navigation_title: FIPS compatibility -applies: - eck: all +applies_to: + deployment: + eck: all mapped_urls: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-fips.html - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s_installation.html diff --git a/deploy-manage/deploy/cloud-on-k8s/elastic-maps-server.md b/deploy-manage/deploy/cloud-on-k8s/elastic-maps-server.md index e4aa0ce30..4a119d231 100644 --- a/deploy-manage/deploy/cloud-on-k8s/elastic-maps-server.md +++ b/deploy-manage/deploy/cloud-on-k8s/elastic-maps-server.md @@ -1,6 +1,7 @@ --- -applies: - eck: all +applies_to: + deployment: + eck: all mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-maps.html --- diff --git a/deploy-manage/deploy/cloud-on-k8s/elastic-stack-configuration-policies.md b/deploy-manage/deploy/cloud-on-k8s/elastic-stack-configuration-policies.md index b6d64309c..89016b078 100644 --- a/deploy-manage/deploy/cloud-on-k8s/elastic-stack-configuration-policies.md +++ b/deploy-manage/deploy/cloud-on-k8s/elastic-stack-configuration-policies.md @@ -1,6 +1,7 @@ --- -applies: - eck: all +applies_to: + deployment: + eck: all mapped_pages: - 
https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-stack-config-policy.html --- diff --git a/deploy-manage/deploy/cloud-on-k8s/elasticsearch-configuration.md b/deploy-manage/deploy/cloud-on-k8s/elasticsearch-configuration.md index a0a1f099a..0ba4474c6 100644 --- a/deploy-manage/deploy/cloud-on-k8s/elasticsearch-configuration.md +++ b/deploy-manage/deploy/cloud-on-k8s/elasticsearch-configuration.md @@ -1,6 +1,7 @@ --- -applies: - eck: all +applies_to: + deployment: + eck: all mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-elasticsearch-specification.html --- diff --git a/deploy-manage/deploy/cloud-on-k8s/elasticsearch-deployment-quickstart.md b/deploy-manage/deploy/cloud-on-k8s/elasticsearch-deployment-quickstart.md index 55c1879cb..4a7ac2b47 100644 --- a/deploy-manage/deploy/cloud-on-k8s/elasticsearch-deployment-quickstart.md +++ b/deploy-manage/deploy/cloud-on-k8s/elasticsearch-deployment-quickstart.md @@ -1,7 +1,8 @@ --- navigation_title: Deploy an Elasticsearch cluster -applies: - eck: all +applies_to: + deployment: + eck: all mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-deploy-elasticsearch.html --- @@ -43,7 +44,7 @@ The cluster that you deployed in this quickstart guide only allocates a persiste :::: -For a full description of each `CustomResourceDefinition` (CRD), refer to the [*API Reference*](asciidocalypse://docs/cloud-on-k8s/docs/reference/cloud/cloud-on-k8s/k8s-api-reference.md) or view the CRD files in the [project repository](https://github.com/elastic/cloud-on-k8s/tree/2.16/config/crds). You can also retrieve information about a CRD from the cluster. For example, describe the {{es}} CRD specification with [`describe`](https://kubernetes.io/docs/reference/kubectl/generated/kubectl_describe/): +For a full description of each `CustomResourceDefinition` (CRD), refer to the [*API Reference*](asciidocalypse://docs/cloud-on-k8s/docs/reference/k8s-api-reference.md) or view the CRD files in the [project repository](https://github.com/elastic/cloud-on-k8s/tree/2.16/config/crds). You can also retrieve information about a CRD from the cluster. 
For example, describe the {{es}} CRD specification with [`describe`](https://kubernetes.io/docs/reference/kubectl/generated/kubectl_describe/): ```sh kubectl describe crd elasticsearch diff --git a/deploy-manage/deploy/cloud-on-k8s/fleet-managed-elastic-agent.md b/deploy-manage/deploy/cloud-on-k8s/fleet-managed-elastic-agent.md index b3fb75721..e12036cc8 100644 --- a/deploy-manage/deploy/cloud-on-k8s/fleet-managed-elastic-agent.md +++ b/deploy-manage/deploy/cloud-on-k8s/fleet-managed-elastic-agent.md @@ -1,6 +1,7 @@ --- -applies: - eck: all +applies_to: + deployment: + eck: all mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-elastic-agent-fleet.html --- diff --git a/deploy-manage/deploy/cloud-on-k8s/http-configuration.md b/deploy-manage/deploy/cloud-on-k8s/http-configuration.md index 6974660fe..3b1f11268 100644 --- a/deploy-manage/deploy/cloud-on-k8s/http-configuration.md +++ b/deploy-manage/deploy/cloud-on-k8s/http-configuration.md @@ -1,6 +1,7 @@ --- -applies: - eck: all +applies_to: + deployment: + eck: all mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-maps-http-configuration.html --- diff --git a/deploy-manage/deploy/cloud-on-k8s/init-containers-for-plugin-downloads.md b/deploy-manage/deploy/cloud-on-k8s/init-containers-for-plugin-downloads.md index df09d3adf..147337098 100644 --- a/deploy-manage/deploy/cloud-on-k8s/init-containers-for-plugin-downloads.md +++ b/deploy-manage/deploy/cloud-on-k8s/init-containers-for-plugin-downloads.md @@ -1,6 +1,7 @@ --- -applies: - eck: all +applies_to: + deployment: + eck: all mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-init-containers-plugin-downloads.html --- diff --git a/deploy-manage/deploy/cloud-on-k8s/install-using-helm-chart.md b/deploy-manage/deploy/cloud-on-k8s/install-using-helm-chart.md index 1a71f2a6a..917858358 100644 --- a/deploy-manage/deploy/cloud-on-k8s/install-using-helm-chart.md +++ b/deploy-manage/deploy/cloud-on-k8s/install-using-helm-chart.md @@ -2,8 +2,9 @@ navigation_title: Helm chart mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-install-helm.html -applies: - eck: all +applies_to: + deployment: + eck: all --- # Install using a Helm chart [k8s-install-helm] diff --git a/deploy-manage/deploy/cloud-on-k8s/install-using-yaml-manifest-quickstart.md b/deploy-manage/deploy/cloud-on-k8s/install-using-yaml-manifest-quickstart.md index df39d205c..3b8c740e3 100644 --- a/deploy-manage/deploy/cloud-on-k8s/install-using-yaml-manifest-quickstart.md +++ b/deploy-manage/deploy/cloud-on-k8s/install-using-yaml-manifest-quickstart.md @@ -3,8 +3,9 @@ navigation_title: YAML manifests mapped_urls: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-install-yaml-manifests.html - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-deploy-eck.html -applies: - eck: all +applies_to: + deployment: + eck: all --- # Install ECK using the YAML manifests [k8s-install-yaml-manifests] diff --git a/deploy-manage/deploy/cloud-on-k8s/install.md b/deploy-manage/deploy/cloud-on-k8s/install.md index 4968136ce..972a5901e 100644 --- a/deploy-manage/deploy/cloud-on-k8s/install.md +++ b/deploy-manage/deploy/cloud-on-k8s/install.md @@ -1,7 +1,8 @@ --- navigation_title: Install -applies: - eck: all +applies_to: + deployment: + eck: all mapped_urls: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-installing-eck.html --- diff --git a/deploy-manage/deploy/cloud-on-k8s/k8s-kibana-advanced-configuration.md 
b/deploy-manage/deploy/cloud-on-k8s/k8s-kibana-advanced-configuration.md index a495a73ab..89ad871f1 100644 --- a/deploy-manage/deploy/cloud-on-k8s/k8s-kibana-advanced-configuration.md +++ b/deploy-manage/deploy/cloud-on-k8s/k8s-kibana-advanced-configuration.md @@ -1,6 +1,7 @@ --- -applies: - eck: all +applies_to: + deployment: + eck: all mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-kibana-advanced-configuration.html --- diff --git a/deploy-manage/deploy/cloud-on-k8s/k8s-kibana-es.md b/deploy-manage/deploy/cloud-on-k8s/k8s-kibana-es.md index 3caf76f2f..edd8270ec 100644 --- a/deploy-manage/deploy/cloud-on-k8s/k8s-kibana-es.md +++ b/deploy-manage/deploy/cloud-on-k8s/k8s-kibana-es.md @@ -1,6 +1,7 @@ --- -applies: - eck: all +applies_to: + deployment: + eck: all mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-kibana-es.html --- diff --git a/deploy-manage/deploy/cloud-on-k8s/k8s-kibana-http-configuration.md b/deploy-manage/deploy/cloud-on-k8s/k8s-kibana-http-configuration.md index c7e629c97..9335203fa 100644 --- a/deploy-manage/deploy/cloud-on-k8s/k8s-kibana-http-configuration.md +++ b/deploy-manage/deploy/cloud-on-k8s/k8s-kibana-http-configuration.md @@ -1,6 +1,7 @@ --- -applies: - eck: all +applies_to: + deployment: + eck: all mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-kibana-http-configuration.html --- diff --git a/deploy-manage/deploy/cloud-on-k8s/k8s-kibana-plugins.md b/deploy-manage/deploy/cloud-on-k8s/k8s-kibana-plugins.md index eaef57d59..87c9d926a 100644 --- a/deploy-manage/deploy/cloud-on-k8s/k8s-kibana-plugins.md +++ b/deploy-manage/deploy/cloud-on-k8s/k8s-kibana-plugins.md @@ -1,6 +1,7 @@ --- -applies: - eck: all +applies_to: + deployment: + eck: all mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-kibana-plugins.html --- diff --git a/deploy-manage/deploy/cloud-on-k8s/k8s-kibana-secure-settings.md b/deploy-manage/deploy/cloud-on-k8s/k8s-kibana-secure-settings.md index 4083fc31a..ddbf5aaa1 100644 --- a/deploy-manage/deploy/cloud-on-k8s/k8s-kibana-secure-settings.md +++ b/deploy-manage/deploy/cloud-on-k8s/k8s-kibana-secure-settings.md @@ -1,6 +1,7 @@ --- -applies: - eck: all +applies_to: + deployment: + eck: all mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-kibana-secure-settings.html --- diff --git a/deploy-manage/deploy/cloud-on-k8s/k8s-openshift-agent.md b/deploy-manage/deploy/cloud-on-k8s/k8s-openshift-agent.md index 1cb906b87..52ac57e07 100644 --- a/deploy-manage/deploy/cloud-on-k8s/k8s-openshift-agent.md +++ b/deploy-manage/deploy/cloud-on-k8s/k8s-openshift-agent.md @@ -1,6 +1,7 @@ --- -applies: - eck: all +applies_to: + deployment: + eck: all mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-openshift-agent.html --- diff --git a/deploy-manage/deploy/cloud-on-k8s/k8s-openshift-anyuid-workaround.md b/deploy-manage/deploy/cloud-on-k8s/k8s-openshift-anyuid-workaround.md index bef18244f..8b3b92dc2 100644 --- a/deploy-manage/deploy/cloud-on-k8s/k8s-openshift-anyuid-workaround.md +++ b/deploy-manage/deploy/cloud-on-k8s/k8s-openshift-anyuid-workaround.md @@ -1,6 +1,7 @@ --- -applies: - eck: all +applies_to: + deployment: + eck: all mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-openshift-anyuid-workaround.html --- diff --git a/deploy-manage/deploy/cloud-on-k8s/k8s-openshift-beats.md b/deploy-manage/deploy/cloud-on-k8s/k8s-openshift-beats.md index 9d041c506..ff171d0c0 100644 --- 
a/deploy-manage/deploy/cloud-on-k8s/k8s-openshift-beats.md +++ b/deploy-manage/deploy/cloud-on-k8s/k8s-openshift-beats.md @@ -1,6 +1,7 @@ --- -applies: - eck: all +applies_to: + deployment: + eck: all mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-openshift-beats.html --- diff --git a/deploy-manage/deploy/cloud-on-k8s/k8s-openshift-deploy-elasticsearch.md b/deploy-manage/deploy/cloud-on-k8s/k8s-openshift-deploy-elasticsearch.md index 242fe6ec3..7b2be0826 100644 --- a/deploy-manage/deploy/cloud-on-k8s/k8s-openshift-deploy-elasticsearch.md +++ b/deploy-manage/deploy/cloud-on-k8s/k8s-openshift-deploy-elasticsearch.md @@ -1,6 +1,7 @@ --- -applies: - eck: all +applies_to: + deployment: + eck: all mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-openshift-deploy-elasticsearch.html --- diff --git a/deploy-manage/deploy/cloud-on-k8s/k8s-openshift-deploy-kibana.md b/deploy-manage/deploy/cloud-on-k8s/k8s-openshift-deploy-kibana.md index 1175d6127..afe5fb293 100644 --- a/deploy-manage/deploy/cloud-on-k8s/k8s-openshift-deploy-kibana.md +++ b/deploy-manage/deploy/cloud-on-k8s/k8s-openshift-deploy-kibana.md @@ -1,6 +1,7 @@ --- -applies: - eck: all +applies_to: + deployment: + eck: all mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-openshift-deploy-kibana.html --- diff --git a/deploy-manage/deploy/cloud-on-k8s/k8s-openshift-deploy-operator.md b/deploy-manage/deploy/cloud-on-k8s/k8s-openshift-deploy-operator.md index 80fdc2d14..90ce478b6 100644 --- a/deploy-manage/deploy/cloud-on-k8s/k8s-openshift-deploy-operator.md +++ b/deploy-manage/deploy/cloud-on-k8s/k8s-openshift-deploy-operator.md @@ -1,6 +1,7 @@ --- -applies: - eck: all +applies_to: + deployment: + eck: all mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-openshift-deploy-the-operator.html --- diff --git a/deploy-manage/deploy/cloud-on-k8s/k8s-service-mesh-istio.md b/deploy-manage/deploy/cloud-on-k8s/k8s-service-mesh-istio.md index 34813dc22..ff5f06c4d 100644 --- a/deploy-manage/deploy/cloud-on-k8s/k8s-service-mesh-istio.md +++ b/deploy-manage/deploy/cloud-on-k8s/k8s-service-mesh-istio.md @@ -1,6 +1,7 @@ --- -applies: - eck: all +applies_to: + deployment: + eck: all mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-service-mesh-istio.html --- diff --git a/deploy-manage/deploy/cloud-on-k8s/k8s-service-mesh-linkerd.md b/deploy-manage/deploy/cloud-on-k8s/k8s-service-mesh-linkerd.md index 7a19e6762..84a30062e 100644 --- a/deploy-manage/deploy/cloud-on-k8s/k8s-service-mesh-linkerd.md +++ b/deploy-manage/deploy/cloud-on-k8s/k8s-service-mesh-linkerd.md @@ -1,6 +1,7 @@ --- -applies: - eck: all +applies_to: + deployment: + eck: all mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-service-mesh-linkerd.html --- @@ -42,7 +43,7 @@ kubectl annotate namespace elastic-stack linkerd.io/inject=enabled Any Elasticsearch, Kibana, or APM Server resources deployed to a namespace with the above annotation will automatically join the mesh. 
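If you prefer to manage the namespace declaratively rather than running `kubectl annotate`, a minimal sketch of an equivalent manifest might look like the following; the `elastic-stack` namespace name is simply taken from the command above.

```yaml
# Sketch only: a namespace carrying the same Linkerd injection annotation
# that the `kubectl annotate` command above applies imperatively.
apiVersion: v1
kind: Namespace
metadata:
  name: elastic-stack
  annotations:
    linkerd.io/inject: enabled
```

Apply it with `kubectl apply -f`, and any Elastic resources created in that namespace are injected into the mesh in the same way.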
-Alternatively, if you only want specific resources to join the mesh, add the `linkerd.io/inject: enabled` annotation to the `podTemplate` (check [API documentation](asciidocalypse://docs/cloud-on-k8s/docs/reference/cloud/cloud-on-k8s/k8s-api-reference.md)) of the resource as follows: +Alternatively, if you only want specific resources to join the mesh, add the `linkerd.io/inject: enabled` annotation to the `podTemplate` (check [API documentation](asciidocalypse://docs/cloud-on-k8s/docs/reference/k8s-api-reference.md)) of the resource as follows: ```yaml podTemplate: diff --git a/deploy-manage/deploy/cloud-on-k8s/kibana-configuration.md b/deploy-manage/deploy/cloud-on-k8s/kibana-configuration.md index 086b015f6..569de652b 100644 --- a/deploy-manage/deploy/cloud-on-k8s/kibana-configuration.md +++ b/deploy-manage/deploy/cloud-on-k8s/kibana-configuration.md @@ -1,6 +1,7 @@ --- -applies: - eck: all +applies_to: + deployment: + eck: all mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-kibana.html --- diff --git a/deploy-manage/deploy/cloud-on-k8s/kibana-instance-quickstart.md b/deploy-manage/deploy/cloud-on-k8s/kibana-instance-quickstart.md index 43393b7a9..9364c9067 100644 --- a/deploy-manage/deploy/cloud-on-k8s/kibana-instance-quickstart.md +++ b/deploy-manage/deploy/cloud-on-k8s/kibana-instance-quickstart.md @@ -1,7 +1,8 @@ --- navigation_title: Deploy a Kibana instance -applies: - eck: all +applies_to: + deployment: + eck: all mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-deploy-kibana.html --- @@ -65,7 +66,7 @@ To deploy a simple [{{kib}}](/get-started/the-stack.md#stack-components-kibana) ``` -For a full description of each `CustomResourceDefinition` (CRD), refer to the [*API Reference*](asciidocalypse://docs/cloud-on-k8s/docs/reference/cloud/cloud-on-k8s/k8s-api-reference.md) or view the CRD files in the [project repository](https://github.com/elastic/cloud-on-k8s/tree/2.16/config/crds). You can also retrieve information about a CRD from the instance. For example, describe the {{kib}} CRD specification with [`describe`](https://kubernetes.io/docs/reference/kubectl/generated/kubectl_describe/): +For a full description of each `CustomResourceDefinition` (CRD), refer to the [*API Reference*](asciidocalypse://docs/cloud-on-k8s/docs/reference/k8s-api-reference.md) or view the CRD files in the [project repository](https://github.com/elastic/cloud-on-k8s/tree/2.16/config/crds). You can also retrieve information about a CRD from the instance. 
For example, describe the {{kib}} CRD specification with [`describe`](https://kubernetes.io/docs/reference/kubectl/generated/kubectl_describe/): ```sh kubectl describe crd kibana diff --git a/deploy-manage/deploy/cloud-on-k8s/known-limitations.md b/deploy-manage/deploy/cloud-on-k8s/known-limitations.md index d1acac522..db927f1c5 100644 --- a/deploy-manage/deploy/cloud-on-k8s/known-limitations.md +++ b/deploy-manage/deploy/cloud-on-k8s/known-limitations.md @@ -1,6 +1,7 @@ --- -applies: - eck: all +applies_to: + deployment: + eck: all mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-elastic-agent-fleet-known-limitations.html --- diff --git a/deploy-manage/deploy/cloud-on-k8s/logstash-plugins.md b/deploy-manage/deploy/cloud-on-k8s/logstash-plugins.md index 28aa0c985..0fce2f887 100644 --- a/deploy-manage/deploy/cloud-on-k8s/logstash-plugins.md +++ b/deploy-manage/deploy/cloud-on-k8s/logstash-plugins.md @@ -1,13 +1,14 @@ --- -applies: - eck: all +applies_to: + deployment: + eck: all mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-logstash-plugins.html --- # Logstash plugins [k8s-logstash-plugins] -The power of {{ls}} is in the plugins--[inputs](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/input-plugins.md), [outputs](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/output-plugins.md), [filters,]((asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/filter-plugins.md) and [codecs](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/codec-plugins.md). +The power of {{ls}} is in the plugins--[inputs](asciidocalypse://docs/logstash/docs/reference/input-plugins.md), [outputs](asciidocalypse://docs/logstash/docs/reference/output-plugins.md), [filters](asciidocalypse://docs/logstash/docs/reference/filter-plugins.md), and [codecs](asciidocalypse://docs/logstash/docs/reference/codec-plugins.md). In {{ls}} on ECK, you can use the same plugins that you use for other {{ls}} instances—​including Elastic-supported, community-supported, and custom plugins. However, you may have other factors to consider, such as how you configure your {{k8s}} resources, how you specify additional resources, and how you scale your {{ls}} installation. @@ -89,7 +90,7 @@ spec: **Static read-only files** -Some plugins require or allow access to small static read-only files. You can use these for a variety of reasons. Examples include adding custom `grok` patterns for [`logstash-filter-grok`](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/plugins-filters-grok.md) to use for lookup, source code for [`logstash-filter-ruby`], a dictionary for [`logstash-filter-translate`](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/plugins-filters-translate.md) or the location of a SQL statement for [`logstash-input-jdbc`](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/plugins-inputs-jdbc.md). Make these files available to the {{ls}} resource in your manifest. +Some plugins require or allow access to small static read-only files. You can use these for a variety of reasons. 
Examples include adding custom `grok` patterns for [`logstash-filter-grok`](asciidocalypse://docs/logstash/docs/reference/plugins-filters-grok.md) to use for lookup, source code for [`logstash-filter-ruby`], a dictionary for [`logstash-filter-translate`](asciidocalypse://docs/logstash/docs/reference/plugins-filters-translate.md) or the location of a SQL statement for [`logstash-input-jdbc`](asciidocalypse://docs/logstash/docs/reference/plugins-inputs-jdbc.md). Make these files available to the {{ls}} resource in your manifest. ::::{tip} In the plugin documentation, these plugin settings are typically identified by `path` or an `array` of `paths`. @@ -98,7 +99,7 @@ In the plugin documentation, these plugin settings are typically identified by ` To use these in your manifest, create a ConfigMap or Secret representing the asset, a Volume in your `podTemplate.spec` containing the ConfigMap or Secret, and mount that Volume with a VolumeMount in your `podTemplateSpec.container` section of your {{ls}} resource. -This example illustrates configuring a ConfigMap from a ruby source file, and including it in a [`logstash-filter-ruby`](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/plugins-filters-ruby.md) plugin. +This example illustrates configuring a ConfigMap from a ruby source file, and including it in a [`logstash-filter-ruby`](asciidocalypse://docs/logstash/docs/reference/plugins-filters-ruby.md) plugin. First, create the ConfigMap. @@ -142,7 +143,7 @@ spec: ### Larger read-only assets (1 MiB+) [k8s-logstash-working-with-plugins-large-ro] -Some plugins require or allow access to static read-only files that exceed the 1 MiB (mebibyte) limit imposed by ConfigMap and Secret. For example, you may need JAR files to load drivers when using a JDBC or JMS plugin, or a large [`logstash-filter-translate`](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/plugins-filters-translate.md) dictionary. +Some plugins require or allow access to static read-only files that exceed the 1 MiB (mebibyte) limit imposed by ConfigMap and Secret. For example, you may need JAR files to load drivers when using a JDBC or JMS plugin, or a large [`logstash-filter-translate`](asciidocalypse://docs/logstash/docs/reference/plugins-filters-translate.md) dictionary. You can add files using: @@ -238,7 +239,7 @@ After you build and deploy the custom image, include it in the {{ls}} manifest. ### Writable storage [k8s-logstash-working-with-plugins-writable] -Some {{ls}} plugins need access to writable storage. This could be for checkpointing to keep track of events already processed, a place to temporarily write events before sending a batch of events, or just to actually write events to disk in the case of [`logstash-output-file`](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/plugins-outputs-file.md). +Some {{ls}} plugins need access to writable storage. This could be for checkpointing to keep track of events already processed, a place to temporarily write events before sending a batch of events, or just to actually write events to disk in the case of [`logstash-output-file`](asciidocalypse://docs/logstash/docs/reference/plugins-outputs-file.md). {{ls}} on ECK by default supplies a small 1.5 GiB (gibibyte) default persistent volume to each pod. This volume is called `logstash-data` and is located at `/usr/logstash/data`, and is typically the default location for most plugin use cases. 
This volume is stable across restarts of {{ls}} pods and is suitable for many use cases. @@ -332,7 +333,7 @@ spec: ::::{admonition} Horizontal scaling for {{ls}} plugins * Not all {{ls}} deployments can be scaled horizontally by increasing the number of {{ls}} Pods defined in the {{ls}} resource. Depending on the types of plugins in a {{ls}} installation, increasing the number of pods may cause data duplication, data loss, incorrect data, or may waste resources with pods unable to be utilized correctly. -* The ability of a {{ls}} installation to scale horizontally is bound by its most restrictive plugin(s). Even if all pipelines are using [`logstash-input-elastic_agent`](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/plugins-inputs-elastic_agent.md) or [`logstash-input-beats`](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/plugins-inputs-beats.md) which should enable full horizontal scaling, introducing a more restrictive input or filter plugin forces the restrictions for pod scaling associated with that plugin. +* The ability of a {{ls}} installation to scale horizontally is bound by its most restrictive plugin(s). Even if all pipelines are using [`logstash-input-elastic_agent`](asciidocalypse://docs/logstash/docs/reference/plugins-inputs-elastic_agent.md) or [`logstash-input-beats`](asciidocalypse://docs/logstash/docs/reference/plugins-inputs-beats.md) which should enable full horizontal scaling, introducing a more restrictive input or filter plugin forces the restrictions for pod scaling associated with that plugin. :::: @@ -344,12 +345,12 @@ spec: * They **must** specify `pipeline.workers=1` for any pipelines that use them. * The number of pods cannot be scaled above 1. -Examples of aggregating filters include [`logstash-filter-aggregate`](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/plugins-filters-aggregate.md), [`logstash-filter-csv`](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/plugins-filters-csv.md) when `autodetect_column_names` set to `true`, and any [`logstash-filter-ruby`](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/plugins-filters-ruby.md) implementations that perform aggregations. +Examples of aggregating filters include [`logstash-filter-aggregate`](asciidocalypse://docs/logstash/docs/reference/plugins-filters-aggregate.md), [`logstash-filter-csv`](asciidocalypse://docs/logstash/docs/reference/plugins-filters-csv.md) when `autodetect_column_names` set to `true`, and any [`logstash-filter-ruby`](asciidocalypse://docs/logstash/docs/reference/plugins-filters-ruby.md) implementations that perform aggregations. ### Input plugins: events pushed to {{ls}} [k8s-logstash-inputs-data-pushed] -{{ls}} installations with inputs that enable {{ls}} to receive data should be able to scale freely and have load spread across them horizontally. These plugins include [`logstash-input-beats`](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/plugins-inputs-beats.md), [`logstash-input-elastic_agent`](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/plugins-inputs-elastic_agent.md), [`logstash-input-tcp`](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/plugins-inputs-tcp.md), and [`logstash-input-http`](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/plugins-inputs-http.md). 
+{{ls}} installations with inputs that enable {{ls}} to receive data should be able to scale freely and have load spread across them horizontally. These plugins include [`logstash-input-beats`](asciidocalypse://docs/logstash/docs/reference/plugins-inputs-beats.md), [`logstash-input-elastic_agent`](asciidocalypse://docs/logstash/docs/reference/plugins-inputs-elastic_agent.md), [`logstash-input-tcp`](asciidocalypse://docs/logstash/docs/reference/plugins-inputs-tcp.md), and [`logstash-input-http`](asciidocalypse://docs/logstash/docs/reference/plugins-inputs-http.md). ### Input plugins: {{ls}} maintains state [k8s-logstash-inputs-local-checkpoints] @@ -360,16 +361,16 @@ Note that plugins that retrieve data from external sources, and require some lev Input plugins that include configuration settings such as `sincedb`, `checkpoint` or `sql_last_run_metadata` may fall into this category. -Examples of these plugins include [`logstash-input-jdbc`](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/plugins-inputs-jdbc.md) (which has no automatic way to split queries across {{ls}} instances), [`logstash-input-s3`](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/plugins-inputs-s3.md) (which has no way to split which buckets to read across {{ls}} instances), or [`logstash-input-file`](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/plugins-inputs-file.md). +Examples of these plugins include [`logstash-input-jdbc`](asciidocalypse://docs/logstash/docs/reference/plugins-inputs-jdbc.md) (which has no automatic way to split queries across {{ls}} instances), [`logstash-input-s3`](asciidocalypse://docs/logstash/docs/reference/plugins-inputs-s3.md) (which has no way to split which buckets to read across {{ls}} instances), or [`logstash-input-file`](asciidocalypse://docs/logstash/docs/reference/plugins-inputs-file.md). ### Input plugins: external source stores state [k8s-logstash-inputs-external-state] {{ls}} installations that use input plugins that retrieve data from an external source, and **rely on the external source to store state** can scale based on the parameters of the external source. -For example, a {{ls}} installation that uses a [`logstash-input-kafka`](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/plugins-inputs-kafka.md) plugin to retrieve data can scale the number of pods up to the number of partitions used, as a partition can have at most one consumer belonging to the same consumer group. Any pods created beyond that threshold cannot be scheduled to receive data. +For example, a {{ls}} installation that uses a [`logstash-input-kafka`](asciidocalypse://docs/logstash/docs/reference/plugins-inputs-kafka.md) plugin to retrieve data can scale the number of pods up to the number of partitions used, as a partition can have at most one consumer belonging to the same consumer group. Any pods created beyond that threshold cannot be scheduled to receive data. -Examples of these plugins include [`logstash-input-kafka`](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/plugins-inputs-kafka.md), [`logstash-input-azure_event_hubs`](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/plugins-inputs-azure_event_hubs.md), and [`logstash-input-kinesis`](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/plugins-inputs-kinesis.md). 
+Examples of these plugins include [`logstash-input-kafka`](asciidocalypse://docs/logstash/docs/reference/plugins-inputs-kafka.md), [`logstash-input-azure_event_hubs`](asciidocalypse://docs/logstash/docs/reference/plugins-inputs-azure_event_hubs.md), and [`logstash-input-kinesis`](asciidocalypse://docs/logstash/docs/reference/plugins-inputs-kinesis.md). @@ -389,12 +390,12 @@ Use these guidelines *in addition* to the general guidelines provided in [Scalin ### {{ls}} integration plugin [k8s-logstash-plugin-considerations-ls-integration] -When your pipeline uses the [`Logstash integration`](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/plugins-integrations-logstash.md) plugin, add `keepalive=>false` to the [logstash-output](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/plugins-outputs-logstash.md) definition to ensure that load balancing works correctly rather than keeping affinity to the same pod. +When your pipeline uses the [`Logstash integration`](asciidocalypse://docs/logstash/docs/reference/plugins-integrations-logstash.md) plugin, add `keepalive=>false` to the [logstash-output](asciidocalypse://docs/logstash/docs/reference/plugins-outputs-logstash.md) definition to ensure that load balancing works correctly rather than keeping affinity to the same pod. ### Elasticsearch output plugin [k8s-logstash-plugin-considerations-es-output] -The [`elasticsearch output`](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/plugins-outputs-elasticsearch.md) plugin requires certain roles to be configured in order to enable {{ls}} to communicate with {{es}}. +The [`elasticsearch output`](asciidocalypse://docs/logstash/docs/reference/plugins-outputs-elasticsearch.md) plugin requires certain roles to be configured in order to enable {{ls}} to communicate with {{es}}. You can customize roles in {{es}}. Check out [creating custom roles](../../users-roles/cluster-or-deployment-auth/native.md) @@ -418,7 +419,7 @@ stringData: ### Elastic_integration filter plugin [k8s-logstash-plugin-considerations-integration-filter] -The [`elastic_integration filter`](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/plugins-filters-elastic_integration.md) plugin allows the use of [`ElasticsearchRef`](configuration-logstash.md#k8s-logstash-esref) and environment variables. +The [`elastic_integration filter`](asciidocalypse://docs/logstash/docs/reference/plugins-filters-elastic_integration.md) plugin allows the use of [`ElasticsearchRef`](configuration-logstash.md#k8s-logstash-esref) and environment variables. ```json elastic_integration { @@ -447,7 +448,7 @@ stringData: ### Elastic Agent input and Beats input plugins [k8s-logstash-plugin-considerations-agent-beats] -When you use the [Elastic Agent input](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/plugins-inputs-elastic_agent.md) or the [Beats input](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/plugins-inputs-beats.md), set the [`ttl`](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-filebeat/logstash-output.md#_ttl) value on the Agent or Beat to ensure that load is distributed appropriately. 
+When you use the [Elastic Agent input](asciidocalypse://docs/logstash/docs/reference/plugins-inputs-elastic_agent.md) or the [Beats input](asciidocalypse://docs/logstash/docs/reference/plugins-inputs-beats.md), set the [`ttl`](asciidocalypse://docs/beats/docs/reference/filebeat/logstash-output.md#_ttl) value on the Agent or Beat to ensure that load is distributed appropriately. @@ -455,7 +456,7 @@ When you use the [Elastic Agent input](asciidocalypse://docs/logstash/docs/refer If you need plugins in addition to those included in the standard {{ls}} distribution, you can add them. Create a custom Docker image that includes the installed plugins, using the `bin/logstash-plugin install` utility to add more plugins to the image so that they can be used by {{ls}} pods. -This sample Dockerfile installs the [`logstash-filter-tld`](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/plugins-filters-tld.md) plugin to the official {{ls}} Docker image: +This sample Dockerfile installs the [`logstash-filter-tld`](asciidocalypse://docs/logstash/docs/reference/plugins-filters-tld.md) plugin to the official {{ls}} Docker image: ```shell FROM docker.elastic.co/logstash/logstash:8.16.1 diff --git a/deploy-manage/deploy/cloud-on-k8s/logstash.md b/deploy-manage/deploy/cloud-on-k8s/logstash.md index 74b078da4..c3edd5be1 100644 --- a/deploy-manage/deploy/cloud-on-k8s/logstash.md +++ b/deploy-manage/deploy/cloud-on-k8s/logstash.md @@ -1,6 +1,7 @@ --- -applies: - eck: all +applies_to: + deployment: + eck: all mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-logstash.html --- diff --git a/deploy-manage/deploy/cloud-on-k8s/manage-compute-resources.md b/deploy-manage/deploy/cloud-on-k8s/manage-compute-resources.md index d0bdbe21e..31103f59b 100644 --- a/deploy-manage/deploy/cloud-on-k8s/manage-compute-resources.md +++ b/deploy-manage/deploy/cloud-on-k8s/manage-compute-resources.md @@ -1,6 +1,7 @@ --- -applies: - eck: all +applies_to: + deployment: + eck: all mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-managing-compute-resources.html --- diff --git a/deploy-manage/deploy/cloud-on-k8s/manage-deployments.md b/deploy-manage/deploy/cloud-on-k8s/manage-deployments.md index 6e3f51aae..f261c4924 100644 --- a/deploy-manage/deploy/cloud-on-k8s/manage-deployments.md +++ b/deploy-manage/deploy/cloud-on-k8s/manage-deployments.md @@ -1,6 +1,7 @@ --- -applies: - eck: all +applies_to: + deployment: + eck: all --- # Manage deployments diff --git a/deploy-manage/deploy/cloud-on-k8s/managing-deployments-using-helm-chart.md b/deploy-manage/deploy/cloud-on-k8s/managing-deployments-using-helm-chart.md index 883c9d780..46a8daa5d 100644 --- a/deploy-manage/deploy/cloud-on-k8s/managing-deployments-using-helm-chart.md +++ b/deploy-manage/deploy/cloud-on-k8s/managing-deployments-using-helm-chart.md @@ -1,7 +1,8 @@ --- navigation_title: Elastic Stack Helm chart -applies: - eck: all +applies_to: + deployment: + eck: all mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-stack-helm-chart.html --- diff --git a/deploy-manage/deploy/cloud-on-k8s/map-data.md b/deploy-manage/deploy/cloud-on-k8s/map-data.md index 2915a9a8c..e3f9bfc3b 100644 --- a/deploy-manage/deploy/cloud-on-k8s/map-data.md +++ b/deploy-manage/deploy/cloud-on-k8s/map-data.md @@ -1,6 +1,7 @@ --- -applies: - eck: all +applies_to: + deployment: + eck: all mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-maps-data.html --- diff --git 
a/deploy-manage/deploy/cloud-on-k8s/network-policies.md b/deploy-manage/deploy/cloud-on-k8s/network-policies.md index 2c26e0633..b6483e644 100644 --- a/deploy-manage/deploy/cloud-on-k8s/network-policies.md +++ b/deploy-manage/deploy/cloud-on-k8s/network-policies.md @@ -1,6 +1,7 @@ --- -applies: - eck: all +applies_to: + deployment: + eck: all mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-network-policies.html - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s_prerequisites.html diff --git a/deploy-manage/deploy/cloud-on-k8s/node-configuration.md b/deploy-manage/deploy/cloud-on-k8s/node-configuration.md index 49fa301aa..ddd7e5064 100644 --- a/deploy-manage/deploy/cloud-on-k8s/node-configuration.md +++ b/deploy-manage/deploy/cloud-on-k8s/node-configuration.md @@ -1,6 +1,7 @@ --- -applies: - eck: all +applies_to: + deployment: + eck: all mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-node-configuration.html --- diff --git a/deploy-manage/deploy/cloud-on-k8s/nodes-orchestration.md b/deploy-manage/deploy/cloud-on-k8s/nodes-orchestration.md index b1bf5835f..11384843e 100644 --- a/deploy-manage/deploy/cloud-on-k8s/nodes-orchestration.md +++ b/deploy-manage/deploy/cloud-on-k8s/nodes-orchestration.md @@ -1,6 +1,7 @@ --- -applies: - eck: all +applies_to: + deployment: + eck: all mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-orchestration.html --- diff --git a/deploy-manage/deploy/cloud-on-k8s/pod-disruption-budget.md b/deploy-manage/deploy/cloud-on-k8s/pod-disruption-budget.md index f19af44c6..8de50d320 100644 --- a/deploy-manage/deploy/cloud-on-k8s/pod-disruption-budget.md +++ b/deploy-manage/deploy/cloud-on-k8s/pod-disruption-budget.md @@ -1,6 +1,7 @@ --- -applies: - eck: all +applies_to: + deployment: + eck: all mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-pod-disruption-budget.html --- diff --git a/deploy-manage/deploy/cloud-on-k8s/pod-prestop-hook.md b/deploy-manage/deploy/cloud-on-k8s/pod-prestop-hook.md index 627bc5734..63e905808 100644 --- a/deploy-manage/deploy/cloud-on-k8s/pod-prestop-hook.md +++ b/deploy-manage/deploy/cloud-on-k8s/pod-prestop-hook.md @@ -1,6 +1,7 @@ --- -applies: - eck: all +applies_to: + deployment: + eck: all mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-prestop.html --- diff --git a/deploy-manage/deploy/cloud-on-k8s/quickstart-beats.md b/deploy-manage/deploy/cloud-on-k8s/quickstart-beats.md index e55d5e6e5..73d895c4e 100644 --- a/deploy-manage/deploy/cloud-on-k8s/quickstart-beats.md +++ b/deploy-manage/deploy/cloud-on-k8s/quickstart-beats.md @@ -1,6 +1,7 @@ --- -applies: - eck: all +applies_to: + deployment: + eck: all mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-beat-quickstart.html --- diff --git a/deploy-manage/deploy/cloud-on-k8s/quickstart-fleet.md b/deploy-manage/deploy/cloud-on-k8s/quickstart-fleet.md index 4d64c41a6..e79d53a8c 100644 --- a/deploy-manage/deploy/cloud-on-k8s/quickstart-fleet.md +++ b/deploy-manage/deploy/cloud-on-k8s/quickstart-fleet.md @@ -1,6 +1,7 @@ --- -applies: - eck: all +applies_to: + deployment: + eck: all mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-elastic-agent-fleet-quickstart.html --- diff --git a/deploy-manage/deploy/cloud-on-k8s/quickstart-logstash.md b/deploy-manage/deploy/cloud-on-k8s/quickstart-logstash.md index e382b877f..2573b4103 100644 --- a/deploy-manage/deploy/cloud-on-k8s/quickstart-logstash.md +++ 
b/deploy-manage/deploy/cloud-on-k8s/quickstart-logstash.md @@ -1,6 +1,7 @@ --- -applies: - eck: all +applies_to: + deployment: + eck: all mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-logstash-quickstart.html --- diff --git a/deploy-manage/deploy/cloud-on-k8s/quickstart-standalone.md b/deploy-manage/deploy/cloud-on-k8s/quickstart-standalone.md index 8ec7c87b7..ce4dafe4d 100644 --- a/deploy-manage/deploy/cloud-on-k8s/quickstart-standalone.md +++ b/deploy-manage/deploy/cloud-on-k8s/quickstart-standalone.md @@ -1,6 +1,7 @@ --- -applies: - eck: all +applies_to: + deployment: + eck: all mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-elastic-agent-quickstart.html --- diff --git a/deploy-manage/deploy/cloud-on-k8s/readiness-probe.md b/deploy-manage/deploy/cloud-on-k8s/readiness-probe.md index 760f12c06..8b39c91c2 100644 --- a/deploy-manage/deploy/cloud-on-k8s/readiness-probe.md +++ b/deploy-manage/deploy/cloud-on-k8s/readiness-probe.md @@ -1,6 +1,7 @@ --- -applies: - eck: all +applies_to: + deployment: + eck: all mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-readiness.html --- diff --git a/deploy-manage/deploy/cloud-on-k8s/recipes.md b/deploy-manage/deploy/cloud-on-k8s/recipes.md index d4fb87b4b..c4ae1a0ab 100644 --- a/deploy-manage/deploy/cloud-on-k8s/recipes.md +++ b/deploy-manage/deploy/cloud-on-k8s/recipes.md @@ -1,6 +1,7 @@ --- -applies: - eck: all +applies_to: + deployment: + eck: all mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-recipes.html --- diff --git a/deploy-manage/deploy/cloud-on-k8s/requests-routing-to-elasticsearch-nodes.md b/deploy-manage/deploy/cloud-on-k8s/requests-routing-to-elasticsearch-nodes.md index 8a1e6becd..7626121c8 100644 --- a/deploy-manage/deploy/cloud-on-k8s/requests-routing-to-elasticsearch-nodes.md +++ b/deploy-manage/deploy/cloud-on-k8s/requests-routing-to-elasticsearch-nodes.md @@ -1,6 +1,7 @@ --- -applies: - eck: all +applies_to: + deployment: + eck: all mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-traffic-splitting.html --- diff --git a/deploy-manage/deploy/cloud-on-k8s/required-rbac-permissions.md b/deploy-manage/deploy/cloud-on-k8s/required-rbac-permissions.md index d13ec1070..e5ec89ca3 100644 --- a/deploy-manage/deploy/cloud-on-k8s/required-rbac-permissions.md +++ b/deploy-manage/deploy/cloud-on-k8s/required-rbac-permissions.md @@ -1,6 +1,7 @@ --- -applies: - eck: all +applies_to: + deployment: + eck: all mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-eck-permissions.html --- diff --git a/deploy-manage/deploy/cloud-on-k8s/restrict-cross-namespace-resource-associations.md b/deploy-manage/deploy/cloud-on-k8s/restrict-cross-namespace-resource-associations.md index 60b96bbda..c9238c673 100644 --- a/deploy-manage/deploy/cloud-on-k8s/restrict-cross-namespace-resource-associations.md +++ b/deploy-manage/deploy/cloud-on-k8s/restrict-cross-namespace-resource-associations.md @@ -1,6 +1,7 @@ --- -applies: - eck: all +applies_to: + deployment: + eck: all mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-restrict-cross-namespace-associations.html --- diff --git a/deploy-manage/deploy/cloud-on-k8s/securing-logstash-api.md b/deploy-manage/deploy/cloud-on-k8s/securing-logstash-api.md index 37e462c38..371177c3a 100644 --- a/deploy-manage/deploy/cloud-on-k8s/securing-logstash-api.md +++ b/deploy-manage/deploy/cloud-on-k8s/securing-logstash-api.md @@ -1,6 +1,7 @@ --- -applies: - eck: all 
+applies_to: + deployment: + eck: all mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-logstash-securing-api.html --- @@ -44,7 +45,7 @@ spec: 1. Store the username and password in a Secret. 2. Map the username and password to the environment variables of the Pod. -3. At Logstash startup, `${API_USERNAME}` and `${API_PASSWORD}` are replaced by the value of environment variables. Check [using environment variables](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/environment-variables.md) for more details. +3. At Logstash startup, `${API_USERNAME}` and `${API_PASSWORD}` are replaced by the value of environment variables. Check [using environment variables](asciidocalypse://docs/logstash/docs/reference/environment-variables.md) for more details. An alternative is to set up [keystore](advanced-configuration-logstash.md#k8s-logstash-keystore) to resolve `${API_USERNAME}` and `${API_PASSWORD}` diff --git a/deploy-manage/deploy/cloud-on-k8s/security-context.md b/deploy-manage/deploy/cloud-on-k8s/security-context.md index 3abb59085..d0a06b3f0 100644 --- a/deploy-manage/deploy/cloud-on-k8s/security-context.md +++ b/deploy-manage/deploy/cloud-on-k8s/security-context.md @@ -1,6 +1,7 @@ --- -applies: - eck: all +applies_to: + deployment: + eck: all mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-security-context.html --- diff --git a/deploy-manage/deploy/cloud-on-k8s/service-meshes.md b/deploy-manage/deploy/cloud-on-k8s/service-meshes.md index 4530b8dd3..12918acef 100644 --- a/deploy-manage/deploy/cloud-on-k8s/service-meshes.md +++ b/deploy-manage/deploy/cloud-on-k8s/service-meshes.md @@ -1,6 +1,7 @@ --- -applies: - eck: all +applies_to: + deployment: + eck: all mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-service-meshes.html --- diff --git a/deploy-manage/deploy/cloud-on-k8s/settings-managed-by-eck.md b/deploy-manage/deploy/cloud-on-k8s/settings-managed-by-eck.md index 617935e8b..208e5abcb 100644 --- a/deploy-manage/deploy/cloud-on-k8s/settings-managed-by-eck.md +++ b/deploy-manage/deploy/cloud-on-k8s/settings-managed-by-eck.md @@ -1,6 +1,7 @@ --- -applies: - eck: all +applies_to: + deployment: + eck: all mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-reserved-settings.html --- diff --git a/deploy-manage/deploy/cloud-on-k8s/standalone-elastic-agent.md b/deploy-manage/deploy/cloud-on-k8s/standalone-elastic-agent.md index 7806e6469..00dbeb3e1 100644 --- a/deploy-manage/deploy/cloud-on-k8s/standalone-elastic-agent.md +++ b/deploy-manage/deploy/cloud-on-k8s/standalone-elastic-agent.md @@ -1,6 +1,7 @@ --- -applies: - eck: all +applies_to: + deployment: + eck: all mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-elastic-agent.html --- diff --git a/deploy-manage/deploy/cloud-on-k8s/storage-recommendations.md b/deploy-manage/deploy/cloud-on-k8s/storage-recommendations.md index 2a3f64f97..8d26f6783 100644 --- a/deploy-manage/deploy/cloud-on-k8s/storage-recommendations.md +++ b/deploy-manage/deploy/cloud-on-k8s/storage-recommendations.md @@ -1,6 +1,7 @@ --- -applies: - eck: all +applies_to: + deployment: + eck: all mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-storage-recommendations.html --- diff --git a/deploy-manage/deploy/cloud-on-k8s/tools-apis.md b/deploy-manage/deploy/cloud-on-k8s/tools-apis.md index dc9b34e50..514087354 100644 --- a/deploy-manage/deploy/cloud-on-k8s/tools-apis.md +++ 
b/deploy-manage/deploy/cloud-on-k8s/tools-apis.md @@ -4,4 +4,8 @@ % GitHub issue: https://github.com/elastic/docs-projects/issues/310 -⚠️ **This page is a work in progress.** ⚠️ \ No newline at end of file +⚠️ **This page is a work in progress.** ⚠️ + +You can use these tools and APIs to interact with the following {{eck}} features: + +* [ECK diagnostics tool](/troubleshoot/deployments/cloud-on-k8s/run-eck-diagnostics.md): Use the `eck-diagnostics` command line tool to create a diagnostic archive to help troubleshoot issues with ECK. \ No newline at end of file diff --git a/deploy-manage/deploy/cloud-on-k8s/transport-settings.md b/deploy-manage/deploy/cloud-on-k8s/transport-settings.md index 28db6b0bd..e0887773e 100644 --- a/deploy-manage/deploy/cloud-on-k8s/transport-settings.md +++ b/deploy-manage/deploy/cloud-on-k8s/transport-settings.md @@ -1,6 +1,7 @@ --- -applies: - eck: all +applies_to: + deployment: + eck: all mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-transport-settings.html --- diff --git a/deploy-manage/deploy/cloud-on-k8s/troubleshooting-beats.md b/deploy-manage/deploy/cloud-on-k8s/troubleshooting-beats.md index 3380fd1ea..7bee4a856 100644 --- a/deploy-manage/deploy/cloud-on-k8s/troubleshooting-beats.md +++ b/deploy-manage/deploy/cloud-on-k8s/troubleshooting-beats.md @@ -1,6 +1,7 @@ --- -applies: - eck: all +applies_to: + deployment: + eck: all mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-beat-troubleshooting.html --- @@ -14,7 +15,7 @@ When `kibanaRef` is specified, Beat tries to connect to the Kibana instance. If ## Configuration containing key: null is malformed [k8s-beat-configuration-containing-key-null-is-malformed] -When `kubectl` is used to modify a resource, it calculates the diff between the user applied and the existing configuration. This diff has special [semantics](https://tools.ietf.org/html/rfc7396#section-1) that forces the removal of keys if they have special values. For example, if the user-applied configuration contains `some_key: null` (or equivalent `some_key: ~`), this is interpreted as an instruction to remove `some_key`. In Beats configurations, this is often a problem when it comes to defining things like [processors](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-filebeat/add-cloud-metadata.md). To avoid this problem: +When `kubectl` is used to modify a resource, it calculates the diff between the user applied and the existing configuration. This diff has special [semantics](https://tools.ietf.org/html/rfc7396#section-1) that forces the removal of keys if they have special values. For example, if the user-applied configuration contains `some_key: null` (or equivalent `some_key: ~`), this is interpreted as an instruction to remove `some_key`. In Beats configurations, this is often a problem when it comes to defining things like [processors](asciidocalypse://docs/beats/docs/reference/filebeat/add-cloud-metadata.md). To avoid this problem: * Use `some_key: {}` (empty map) or `some_key: []` (empty array) instead of `some_key: null` if doing so does not affect the behaviour. This might not be possible in all cases as some applications distinguish between null values and empty values and behave differently. * Instead of using `config` to define configuration inline, use `configRef` and store the configuration in a Secret. 
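For example, here is a minimal sketch of a Filebeat resource whose configuration keeps the `add_cloud_metadata` processor as an empty map rather than `null`, so that `kubectl apply` does not treat the key as a removal. The resource names, version, and input paths are illustrative only; a real setup would normally also mount the host log directories into the Pod.

```sh
cat <<'EOF' | kubectl apply -f -
apiVersion: beat.k8s.elastic.co/v1beta1
kind: Beat
metadata:
  name: quickstart
spec:
  type: filebeat
  version: 8.16.1
  elasticsearchRef:
    name: quickstart   # illustrative: an existing Elasticsearch resource in the same namespace
  config:
    filebeat.inputs:
      - type: container
        paths:
          - /var/log/containers/*.log
    processors:
      # Use an empty map ({}) instead of null (~) so the key survives the merge patch.
      - add_cloud_metadata: {}
  daemonSet: {}   # sketch only: runs with the default Pod template
EOF
```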
diff --git a/deploy-manage/deploy/cloud-on-k8s/update-deployments.md b/deploy-manage/deploy/cloud-on-k8s/update-deployments.md index 8ce6fabaa..c837ae56f 100644 --- a/deploy-manage/deploy/cloud-on-k8s/update-deployments.md +++ b/deploy-manage/deploy/cloud-on-k8s/update-deployments.md @@ -1,7 +1,8 @@ --- navigation_title: Applying updates -applies: - eck: all +applies_to: + deployment: + eck: all mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-update-deployment.html --- diff --git a/deploy-manage/deploy/cloud-on-k8s/update-strategy-logstash.md b/deploy-manage/deploy/cloud-on-k8s/update-strategy-logstash.md index 6a93577fe..72ce5855b 100644 --- a/deploy-manage/deploy/cloud-on-k8s/update-strategy-logstash.md +++ b/deploy-manage/deploy/cloud-on-k8s/update-strategy-logstash.md @@ -1,6 +1,7 @@ --- -applies: - eck: all +applies_to: + deployment: + eck: all mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-logstash-update-strategy.html --- diff --git a/deploy-manage/deploy/cloud-on-k8s/update-strategy.md b/deploy-manage/deploy/cloud-on-k8s/update-strategy.md index 09d1c393f..3b375f058 100644 --- a/deploy-manage/deploy/cloud-on-k8s/update-strategy.md +++ b/deploy-manage/deploy/cloud-on-k8s/update-strategy.md @@ -1,6 +1,7 @@ --- -applies: - eck: all +applies_to: + deployment: + eck: all mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-update-strategy.html --- diff --git a/deploy-manage/deploy/cloud-on-k8s/use-an-elasticsearch-cluster-managed-by-eck.md b/deploy-manage/deploy/cloud-on-k8s/use-an-elasticsearch-cluster-managed-by-eck.md index 8810ca43e..c831df1fe 100644 --- a/deploy-manage/deploy/cloud-on-k8s/use-an-elasticsearch-cluster-managed-by-eck.md +++ b/deploy-manage/deploy/cloud-on-k8s/use-an-elasticsearch-cluster-managed-by-eck.md @@ -1,6 +1,7 @@ --- -applies: - eck: all +applies_to: + deployment: + eck: all mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-apm-eck-managed-es.html --- diff --git a/deploy-manage/deploy/cloud-on-k8s/virtual-memory.md b/deploy-manage/deploy/cloud-on-k8s/virtual-memory.md index b8221ccc2..fad49f599 100644 --- a/deploy-manage/deploy/cloud-on-k8s/virtual-memory.md +++ b/deploy-manage/deploy/cloud-on-k8s/virtual-memory.md @@ -1,6 +1,7 @@ --- -applies: - eck: all +applies_to: + deployment: + eck: all mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-virtual-memory.html --- diff --git a/deploy-manage/deploy/cloud-on-k8s/volume-claim-templates.md b/deploy-manage/deploy/cloud-on-k8s/volume-claim-templates.md index 635452c03..5b54945e1 100644 --- a/deploy-manage/deploy/cloud-on-k8s/volume-claim-templates.md +++ b/deploy-manage/deploy/cloud-on-k8s/volume-claim-templates.md @@ -1,6 +1,7 @@ --- -applies: - eck: all +applies_to: + deployment: + eck: all mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-volume-claim-templates.html --- diff --git a/deploy-manage/deploy/cloud-on-k8s/webhook-namespace-selectors.md b/deploy-manage/deploy/cloud-on-k8s/webhook-namespace-selectors.md index 661a32e7e..6f9fd8efa 100644 --- a/deploy-manage/deploy/cloud-on-k8s/webhook-namespace-selectors.md +++ b/deploy-manage/deploy/cloud-on-k8s/webhook-namespace-selectors.md @@ -1,6 +1,7 @@ --- -applies: - eck: all +applies_to: + deployment: + eck: all mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-webhook-namespace-selectors.html --- diff --git a/deploy-manage/deploy/deployment-comparison.md 
b/deploy-manage/deploy/deployment-comparison.md index c4e4bec29..d7113f32e 100644 --- a/deploy-manage/deploy/deployment-comparison.md +++ b/deploy-manage/deploy/deployment-comparison.md @@ -3,55 +3,57 @@ This reference provides detailed comparisons of features and capabilities across Elastic's deployment options: self-managed deployments, Elastic Cloud Hosted, and Serverless. For a high-level overview of deployment types and guidance on choosing between them, see the [overview](../deploy.md). -## Security features +For more details about feature availability in Serverless, check [](elastic-cloud/differences-from-other-elasticsearch-offerings.md#elasticsearch-differences-serverless-feature-categories). + +## Security | Feature/capability | Self-managed | Elastic Cloud Hosted | Serverless | |-------------------|-------------|--------------------------------|-------------------------| -| Custom security configurations | Yes | Limited | No | -| Authentication realms and custom roles | Yes | Yes | No | -| Audit logging | Yes | Yes | No | +| [Security configurations](/deploy-manage/security.md) | Full control | Limited control | Limited control | +| [Authentication realms](/deploy-manage/users-roles.md) | Available | Available | Available, through Elastic Cloud only | +| [Custom roles](/deploy-manage/users-roles.md) | Available | Available | Available | +| [Audit logging](/deploy-manage/monitor/logging-configuration/configuring-audit-logs.md) | Available | Available | No | -## Management features +## Infrastructure and cluster management | Feature/capability | Self-managed | Elastic Cloud Hosted | Serverless | |-------------------|-------------|--------------------------------|-------------------------| -| Full control over configuration | Yes | Limited | No | -| Infrastructure flexibility | Yes | No | No | -| Autoscaling | No | Yes | Yes | -| Data tiers management | No | Yes | No | -| Snapshot management | No | Yes | No | -| High availability and disaster recovery | Yes | Yes | Yes | -| Multi-cloud support | No | Yes | Yes | -| Shard management and replicas | Yes | Yes | No | +| Hosting | Any infrastructure | Elastic Cloud through AWS, Azure, or GCP | Elastic Cloud through AWS or Azure | +| Hardware configuration | Full control | Limited control | Managed by Elastic | +| Autoscaling | No | Available | Automatic | +| Data tiers management | Through ILM policies | Available | No data tiers | +| Snapshot management | Custom | Available | Managed by Elastic | +| High availability and disaster recovery | Available | Available | Managed by Elastic | +| Shard management and replicas | Available | Available | Managed by Elastic | -## Monitoring features +## Monitoring | Feature/capability | Self-managed | Elastic Cloud Hosted | Serverless | |-------------------|-------------|--------------------------------|-------------------------| -| Watcher | Yes | Yes | No | +| [Deployment health monitoring](/deploy-manage/monitor.md) | Monitoring cluster | AutoOps or monitoring cluster | Managed by Elastic | +| [Alerting](/explore-analyze/alerts-cases.md) | Watcher or Kibana alerts | Watcher or Kibana alerts | Alerts ([why?](/deploy-manage/deploy/elastic-cloud/differences-from-other-elasticsearch-offerings.md#elasticsearch-differences-serverless-features-replaced)) | -## Data lifecycle features +## Data lifecycle | Feature/capability | Self-managed | Elastic Cloud Hosted | Serverless | |-------------------|-------------|--------------------------------|-------------------------| -| Index lifecycle management 
(ILM) | Yes | Yes | No (uses data streams) | -| Data tiers management | No | Yes | No | -| Snapshot management | No | Yes | No | +| [Data lifecycle management](/manage-data/lifecycle.md) | ILM, data tiers, data stream lifecycle | ILM, data tiers, data stream lifecycle | Data stream lifecycle ([why?](/deploy-manage/deploy/elastic-cloud/differences-from-other-elasticsearch-offerings.md#elasticsearch-differences-serverless-features-replaced)) | +| [Snapshot management](/deploy-manage/tools/snapshot-and-restore.md) | Custom | Available | Managed by Elastic | -## Integration features +## Integrations and extensions | Feature/capability | Self-managed | Elastic Cloud Hosted | Serverless | |-------------------|-------------|--------------------------------|-------------------------| -| Custom plugins | Yes | No | No | -| Self-managed connectors | Yes | No | Limited | -| Elasticsearch-Hadoop integration | Yes | Yes | No | -| Cross cluster search (CCS) | Yes | Yes | No | -| Cross cluster replication | Yes | Yes | Yes | +| Custom plugins and bundles | Available | Available | No | +| Self-managed connectors | Available | Limited | Limited | +| Elasticsearch-Hadoop integration | Available | Available | No | +| Cross cluster search (CCS) | Available | Available | [Planned](/deploy-manage/deploy/elastic-cloud/differences-from-other-elasticsearch-offerings.md#elasticsearch-differences-serverless-feature-planned) | +| Cross cluster replication | Available | Available | [Planned](/deploy-manage/deploy/elastic-cloud/differences-from-other-elasticsearch-offerings.md#elasticsearch-differences-serverless-feature-planned) | ## Development and testing features | Feature/capability | Self-managed | Elastic Cloud Hosted | Serverless | |-------------------|-------------|--------------------------------|-------------------------| -| Advanced testing and development | Yes | No | No | -| Java (JVM) customization | Yes | No | No | +| Advanced testing and development | Available | No | No | +| Java (JVM) customization | Available | No | No | diff --git a/deploy-manage/deploy/elastic-cloud.md b/deploy-manage/deploy/elastic-cloud.md index 3b2ceab15..c5e1ccfc0 100644 --- a/deploy-manage/deploy/elastic-cloud.md +++ b/deploy-manage/deploy/elastic-cloud.md @@ -1,44 +1,42 @@ --- +applies_to: + serverless: ga + deployment: + ess: ga mapped_pages: - https://www.elastic.co/guide/en/serverless/current/intro.html#general-what-is-serverless-elastic-differences-between-serverless-projects-and-hosted-deployments-on-ecloud --- # Elastic Cloud [intro] -{{serverless-full}} is a fully managed solution that allows you to deploy and use Elastic for your use cases without managing the underlying infrastructure. It represents a shift in how you interact with {{es}} - instead of managing clusters, nodes, data tiers, and scaling, you create **serverless projects** that are fully managed and automatically scaled by Elastic. This abstraction of infrastructure decisions allows you to focus solely on gaining value and insight from your data. +{{ecloud}} allows you to centrally manage [hosted deployments](elastic-cloud/cloud-hosted.md) of the {{stack}} and [serverless projects](elastic-cloud/serverless.md) for your Observability, Security, and Search use cases. -{{serverless-full}} automatically provisions, manages, and scales your {{es}} resources based on your actual usage. 
Unlike traditional deployments where you need to predict and provision resources in advance, serverless adapts to your workload in real-time, ensuring optimal performance while eliminating the need for manual capacity planning. +These hosted deployments and serverless projects are hosted on Elastic Cloud, through the cloud provider and regions of your choice, and are tied to your organization account. -Serverless projects use the core components of the {{stack}}, such as {{es}} and {{kib}}, and are based on an architecture that decouples compute and storage. Search and indexing operations are separated, which offers high flexibility for scaling your workloads while ensuring a high level of performance. +You can check the operational status of {{ecloud}} at any time from [status.elastic.co](https://status.elastic.co/). -Elastic provides three serverless solutions available on {{ecloud}}: +## Sign up -* **/solutions/search.md[{{es-serverless}}]**: Build powerful applications and search experiences using a rich ecosystem of vector search capabilities, APIs, and libraries. -% See solutions/search/serverless-elasticsearch-get-started.md -* **/solutions/observability.md[{{obs-serverless}}]**: Monitor your own platforms and services using powerful machine learning and analytics tools with your logs, metrics, traces, and APM data. -* **/solutions/security/elastic-security-serverless.md[{{sec-serverless}}]**: Detect, investigate, and respond to threats with SIEM, endpoint protection, and AI-powered analytics capabilities. +You can get started by creating an {{ecloud}} organization on [cloud.elastic.co](https://cloud.elastic.co/registration). -[Learn more about {{serverless-full}} in our blog](https://www.elastic.co/blog/elastic-cloud-serverless). +For more details on the available sign up options and trial information, go to [](elastic-cloud/create-an-organization.md). +## Benefits of {{ecloud}} -## Benefits of serverless projects [_benefits_of_serverless_projects] +Some of the unique benefits of {{ecloud}} include: -**Management free.** Elastic manages the underlying Elastic cluster, so you can focus on your data. With serverless projects, Elastic is responsible for automatic upgrades, data backups, and business continuity. +- Regular updates and improvements automatically deployed or made available. +- Built-in security, including encryption at rest. +- Central management of billing and licensing. +- Built-in tools for monitoring and scaling your {{ecloud}} resources. +- Central management of users, roles, and authentication, including integration with SSO providers. -**Autoscaled.** To meet your performance requirements, the system automatically adjusts to your workloads. For example, when you have a short time spike on the data you ingest, more resources are allocated for that period of time. When the spike is over, the system uses less resources, without any action on your end. +For more information, refer to [](/deploy-manage/cloud-organization.md). -**Optimized data storage.** Your data is stored in cost-efficient, general storage. A cache layer is available on top of the general storage for recent and frequently queried data that provides faster search speed. The size of the cache layer and the volume of data it holds depend on [settings](elastic-cloud/project-settings.md) that you can configure for each project. 
+## Differences between serverless projects and hosted deployments[general-what-is-serverless-elastic-differences-between-serverless-projects-and-hosted-deployments-on-ecloud] -**Dedicated experiences.** All serverless solutions are built on the Elastic Search Platform and include the core capabilities of the Elastic Stack. They also each offer a distinct experience and specific capabilities that help you focus on your data, goals, and use cases. +You can have multiple hosted deployments and serverless projects in the same {{ecloud}} organization, and each deployment type has its own specificities. -**Pay per usage.** Each serverless project type includes product-specific and usage-based pricing. - -**Data and performance control**. Control your project data and query performance against your project data. * Data. Choose the data you want to ingest and the method to ingest it. By default, data is stored indefinitely in your project, and you define the retention settings for your data streams. * Performance. For granular control over costs and query performance against your project data, serverless projects come with a set of predefined settings you can edit. - - -## Differences between serverless projects and hosted deployments on {{ecloud}} [general-what-is-serverless-elastic-differences-between-serverless-projects-and-hosted-deployments-on-ecloud] - -You can run [hosted deployments](/deploy-manage/deploy/elastic-cloud/cloud-hosted.md) of the {{stack}} on {{ecloud}}. These hosted deployments provide more provisioning and advanced configuration options. | | | | | --- | --- | --- | @@ -54,39 +52,8 @@ You can run [hosted deployments](/deploy-manage/deploy/elastic-cloud/cloud-hoste | **Backups** | Projects automatically backed up by Elastic. | Your responsibility with Snapshot & Restore. | | **Data retention** | Editable on data streams. | Index Lifecycle Management. | +## APIs -## Answers to common serverless questions [general-what-is-serverless-elastic-answers-to-common-serverless-questions] - -**Is there migration support between hosted deployments and serverless projects?** - -Migration paths between hosted deployments and serverless projects are currently unsupported. - -**How can I move data to or from serverless projects?** - -We are working on data migration tools! In the interim, [use Logstash](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/index.md) with Elasticsearch input and output plugins to move data to and from serverless projects. - -**How does serverless ensure compatibility between software versions?** - -Connections and configurations are unaffected by upgrades. To ensure compatibility between software versions, quality testing and API versioning are used. - -**Can I convert a serverless project into a hosted deployment, or a hosted deployment into a serverless project?** - -Projects and deployments are based on different architectures, and you are unable to convert. - -**Can I convert a serverless project into a project of a different type?** - -You are unable to convert projects into different project types, but you can create as many projects as you’d like. You will be charged only for your usage. - -**How can I create serverless service accounts?** - -Create API keys for service accounts in your serverless projects. Options to automate the creation of API keys with tools such as Terraform will be available in the future. - -To raise a Support case with Elastic, raise a case for your subscription the same way you do today. 
In the body of the case, make sure to mention you are working in serverless to ensure we can provide the appropriate support. - -**Where can I learn about pricing for serverless?** - -See serverless pricing information for [Search](https://www.elastic.co/pricing/serverless-search), [Observability](https://www.elastic.co/pricing/serverless-observability), and [Security](https://www.elastic.co/pricing/serverless-security). - -**Can I request backups or restores for my projects?** +{{ecloud}} offers APIs to manage your organization and its resources. Check the [{{ecloud}}](https://www.elastic.co/docs/api/doc/cloud/) and [{{ecloud}} serverless](https://www.elastic.co/docs/api/doc/elastic-cloud-serverless/) APIs. -It is not currently possible to request backups or restores for projects, but we are working on data migration tools to better support this. +More tools are available for you to make the most of your {{ecloud}} organization and {{es}}. Refer to [](/deploy-manage/deploy/elastic-cloud/tools-apis.md). \ No newline at end of file diff --git a/deploy-manage/deploy/elastic-cloud/access-kibana.md b/deploy-manage/deploy/elastic-cloud/access-kibana.md index 28af45ef2..cd9ec6d9d 100644 --- a/deploy-manage/deploy/elastic-cloud/access-kibana.md +++ b/deploy-manage/deploy/elastic-cloud/access-kibana.md @@ -1,11 +1,14 @@ --- +applies_to: + deployment: + ess: ga mapped_urls: - https://www.elastic.co/guide/en/cloud/current/ec-access-kibana.html - https://www.elastic.co/guide/en/cloud-heroku/current/ech-access-kibana.html - https://www.elastic.co/guide/en/cloud-heroku/current/ech-enable-kibana2.html --- -# Access Kibana +# Access Kibana [ec-access-kibana] % What needs to be done: Lift-and-shift @@ -20,8 +23,29 @@ mapped_urls: $$$ec-enable-kibana2$$$ -**This page is a work in progress.** The documentation team is working to combine content pulled from the following pages: -* [/raw-migrated-files/cloud/cloud/ec-access-kibana.md](/raw-migrated-files/cloud/cloud/ec-access-kibana.md) -* [/raw-migrated-files/cloud/cloud-heroku/ech-access-kibana.md](/raw-migrated-files/cloud/cloud-heroku/ech-access-kibana.md) -* [/raw-migrated-files/cloud/cloud-heroku/ech-enable-kibana2.md](/raw-migrated-files/cloud/cloud-heroku/ech-enable-kibana2.md) \ No newline at end of file +Kibana is an open source analytics and visualization platform designed to search, view, and interact with data stored in Elasticsearch indices. The use of Kibana is included with your subscription. + +For new Elasticsearch clusters, we automatically create a Kibana instance for you. + +To access Kibana: + +1. Log in to the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). +2. On the **Deployments** page, select your deployment. + + On the **Deployments** page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. + +3. Under **Applications**, select the Kibana **Launch** link and wait for Kibana to open. + + ::::{note} + Both ports 443 and 9243 can be used to access Kibana. SSO only works with 9243 on older deployments, where you will see an option in the Cloud UI to migrate the default to port 443. In addition, any version upgrade will automatically migrate the default port to 443. + :::: + +4. Log into Kibana. Single sign-on (SSO) is enabled between your Cloud account and the Kibana instance. If you’re logged in already, then Kibana opens without requiring you to log in again. 
However, if your token has expired, choose from one of these methods to log in: + + * Select **Login with Cloud**. You’ll need to log in with your Cloud account credentials and then you’ll be redirected to Kibana. + * Log in with the `elastic` superuser. The password was provided when you created your cluster or [can be reset](../../../deploy-manage/users-roles/cluster-or-deployment-auth/built-in-users.md). + * Log in with any users you created in Kibana already. + + +In production systems, you might need to control what Elasticsearch data users can access through Kibana, so you need to create credentials that can be used to access the necessary Elasticsearch resources. This means granting read access to the necessary indexes, as well as access to update the `.kibana` index. \ No newline at end of file diff --git a/deploy-manage/deploy/elastic-cloud/add-plugins-extensions.md b/deploy-manage/deploy/elastic-cloud/add-plugins-extensions.md index 29117dce5..65598caf1 100644 --- a/deploy-manage/deploy/elastic-cloud/add-plugins-extensions.md +++ b/deploy-manage/deploy/elastic-cloud/add-plugins-extensions.md @@ -1,6 +1,10 @@ --- +applies_to: + deployment: + ess: ga mapped_pages: - https://www.elastic.co/guide/en/cloud/current/ec-adding-plugins.html + - https://www.elastic.co/guide/en/cloud-heroku/current/ech-adding-elastic-plugins.html --- # Add plugins and extensions [ec-adding-plugins] @@ -13,16 +17,16 @@ Plugins extend the core functionality of {{es}}. There are many suitable plugins Plugins can come from different sources: the official ones created or at least maintained by Elastic, community-sourced plugins from other users, and plugins that you provide. Some of the official plugins are always provided with our service, and can be [enabled per deployment](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch-plugins/cloud/ec-adding-elastic-plugins.md). -There are two ways to add plugins to a deployment in Elasticsearch Service: +There are two ways to add plugins to a hosted deployment in {{ecloud}}: -* [Enable one of the official plugins already available in Elasticsearch Service](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch-plugins/cloud/ec-adding-elastic-plugins.md). +* [Enable one of the official plugins already available in {{ecloud}}](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch-plugins/cloud/ec-adding-elastic-plugins.md). * [Upload a custom plugin and then enable it per deployment](upload-custom-plugins-bundles.md). -Custom plugins can include the official {{es}} plugins not provided with Elasticsearch Service, any of the community-sourced plugins, or [plugins that you write yourself](asciidocalypse://docs/elasticsearch/docs/extend/create-elasticsearch-plugins/index.md). Uploading custom plugins is available only to Gold, Platinum, and Enterprise subscriptions. For more information, check [Upload custom plugins and bundles](upload-custom-plugins-bundles.md). +Custom plugins can include the official {{es}} plugins not provided with {{ecloud}}, any of the community-sourced plugins, or [plugins that you write yourself](asciidocalypse://docs/elasticsearch/docs/extend/index.md). Uploading custom plugins is available only to Gold, Platinum, and Enterprise subscriptions. For more information, check [Upload custom plugins and bundles](upload-custom-plugins-bundles.md).
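Custom bundle and plugin management can also be scripted against the {{ecloud}} API. The snippet below is only a sketch: it assumes the `deployments/extensions` endpoint documented in [Managing plugins and extensions through the API](manage-plugins-extensions-through-api.md) and an API key stored in `EC_API_KEY`; verify the exact paths and request bodies in that guide.

```sh
# List the extensions (custom plugins and bundles) registered for your organization.
# EC_API_KEY is a placeholder for an Elastic Cloud API key.
curl -s \
  -H "Authorization: ApiKey $EC_API_KEY" \
  "https://api.elastic.co/api/v1/deployments/extensions"

# Creating a new extension and uploading its zip archive use POST and PUT requests
# against the same endpoint; check the linked guide for the exact payloads.
```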
To learn more about the official and community-sourced plugins, refer to [{{es}} Plugins and Integrations](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch-plugins/index.md). -For a detailed guide with examples of using the Elasticsearch Service API to create, get information about, update, and delete extensions and plugins, check [Managing plugins and extensions through the API](manage-plugins-extensions-through-api.md). +For a detailed guide with examples of using the {{ecloud}} API to create, get information about, update, and delete extensions and plugins, check [Managing plugins and extensions through the API](manage-plugins-extensions-through-api.md). Plugins are not supported for {{kib}}. To learn more, check [Restrictions for {{es}} and {{kib}} plugins](restrictions-known-problems.md#ec-restrictions-plugins). diff --git a/deploy-manage/deploy/elastic-cloud/add-plugins-provided-with-elastic-cloud-hosted.md b/deploy-manage/deploy/elastic-cloud/add-plugins-provided-with-elastic-cloud-hosted.md deleted file mode 100644 index f2545619e..000000000 --- a/deploy-manage/deploy/elastic-cloud/add-plugins-provided-with-elastic-cloud-hosted.md +++ /dev/null @@ -1,21 +0,0 @@ ---- -mapped_urls: - - https://www.elastic.co/guide/en/cloud-heroku/current/ech-adding-plugins.html - - https://www.elastic.co/guide/en/cloud-heroku/current/ech-adding-elastic-plugins.html ---- - -# Add plugins provided with Elastic Cloud Hosted - -% What needs to be done: Lift-and-shift - -% Use migrated content from existing pages that map to this page: - -% - [ ] ./raw-migrated-files/cloud/cloud-heroku/ech-adding-plugins.md -% - [ ] ./raw-migrated-files/cloud/cloud-heroku/ech-adding-elastic-plugins.md - -⚠️ **This page is a work in progress.** ⚠️ - -The documentation team is working to combine content pulled from the following pages: - -* [/raw-migrated-files/cloud/cloud-heroku/ech-adding-plugins.md](/raw-migrated-files/cloud/cloud-heroku/ech-adding-plugins.md) -* [/raw-migrated-files/cloud/cloud-heroku/ech-adding-elastic-plugins.md](/raw-migrated-files/cloud/cloud-heroku/ech-adding-elastic-plugins.md) \ No newline at end of file diff --git a/deploy-manage/deploy/elastic-cloud/available-stack-versions.md b/deploy-manage/deploy/elastic-cloud/available-stack-versions.md index d8126081e..fa8628954 100644 --- a/deploy-manage/deploy/elastic-cloud/available-stack-versions.md +++ b/deploy-manage/deploy/elastic-cloud/available-stack-versions.md @@ -1,11 +1,14 @@ --- +applies_to: + deployment: + ess: ga mapped_pages: - https://www.elastic.co/guide/en/cloud/current/ec-version-policy.html --- # Available stack versions [ec-version-policy] -This section describes our version policy for Elasticsearch Service, including: +This section describes our version policy for {{ech}}, including: * [What Elastic Stack versions are available](#ec-version-policy-available) * [When we make new Elastic Stack versions available](#ec-version-policy-new) @@ -18,14 +21,14 @@ This section describes our version policy for Elasticsearch Service, including: Elastic Stack uses a versions code that is constructed of three numbers separated by dots: the leftmost number is the number of the major release, the middle number is the number of the minor release and the rightmost number is the number of the maintenance release (e.g., 8.3.2 means major release 8, minor release 3 and maintenance release 2). 
-You might sometimes notice additional versions listed in the user interface beyond the versions we currently support and maintain, such as [release candidate builds](#ec-release-builds) and older versions. If a version is listed in the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body), it can be deployed. +You might sometimes notice additional versions listed in the user interface beyond the versions we currently support and maintain, such as [release candidate builds](#ec-release-builds) and older versions. If a version is listed in the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body), it can be deployed. ## New Elastic Stack versions [ec-version-policy-new] Whenever a new Elastic Stack version is released, we do our best to provide the new version on our hosted service at the same time. We send you an email and add a notice to the console, recommending an upgrade. You’ll need to decide whether to upgrade to the new version with new features and bug fixes or to stay with a version you know works for you a while longer. -There can be [breaking changes](asciidocalypse://docs/elasticsearch/docs/release-notes/breaking-changes/elasticsearch.md) in some new versions of Elasticsearch that break what used to work in older versions. Before upgrading, you’ll want to check if the new version introduces any changes that might affect your applications. A breaking change might be a function that was previously deprecated and that has been removed in the latest version, for example. If you have an application that depends on the removed function, the application will need to be updated to continue working with the new version of Elasticsearch. +There can be [breaking changes](asciidocalypse://docs/elasticsearch/docs/release-notes/breaking-changes.md) in some new versions of Elasticsearch that break what used to work in older versions. Before upgrading, you’ll want to check if the new version introduces any changes that might affect your applications. A breaking change might be a function that was previously deprecated and that has been removed in the latest version, for example. If you have an application that depends on the removed function, the application will need to be updated to continue working with the new version of Elasticsearch. To learn more about upgrading to newer versions of the Elastic Stack on our hosted service, check [Upgrade Versions](../../upgrade/deployment-or-cluster.md). @@ -44,7 +47,7 @@ A forced upgrade or restart might become necessary in a situation that: ## Release candidates and cutting-edge releases [ec-release-builds] -Interested in kicking the tires of Elasticsearch releases at the cutting edge? We sometimes make release candidate builds and other cutting-edge releases available in Elasticsearch Service for you to try out. +Interested in kicking the tires of Elasticsearch releases at the cutting edge? We sometimes make release candidate builds and other cutting-edge releases available in {{ecloud}} for you to try out. ::::{warning} Remember that cutting-edge releases are used to test new function fully. These releases might still have issues and might be less stable than the GA version. There’s also no guaranteed upgrade path to the GA version when it becomes available. @@ -58,4 +61,4 @@ Cutting-edge releases do not remain available forever. 
Once the GA version of El ## Version Policy and Product End of Life [ec-version-policy-eol] -For Elasticsearch Service, we follow the [Elastic Version Maintenance and Support Policy](https://www.elastic.co/support/eol), which defines the support and maintenance policy of the Elastic Stack. +For {{ecloud}}, we follow the [Elastic Version Maintenance and Support Policy](https://www.elastic.co/support/eol), which defines the support and maintenance policy of the Elastic Stack. diff --git a/deploy-manage/deploy/elastic-cloud/aws-marketplace.md b/deploy-manage/deploy/elastic-cloud/aws-marketplace.md index ac3e419d8..044e4dad6 100644 --- a/deploy-manage/deploy/elastic-cloud/aws-marketplace.md +++ b/deploy-manage/deploy/elastic-cloud/aws-marketplace.md @@ -1,27 +1,31 @@ --- +applies_to: + deployment: + ess: ga + serverless: ga mapped_pages: - https://www.elastic.co/guide/en/cloud/current/ec-billing-aws.html --- # AWS Marketplace [ec-billing-aws] -7-Day Free Trial Sign-Up: On the [Elasticsearch Service AWS marketplace page](https://aws.amazon.com/marketplace/pp/prodview-voru33wi6xs7k), click **View purchase options**, sign into your AWS account, then start using Elastic Cloud. +7-Day Free Trial Sign-Up: On the [{{ecloud}} AWS marketplace page](https://aws.amazon.com/marketplace/pp/prodview-voru33wi6xs7k), click **View purchase options**, sign into your AWS account, then start using Elastic Cloud. ::::{tip} The free trial includes provisioning of a single deployment and you are not charged for the first 7 days. Billing starts automatically after the 7-day trial period ends. Get started today! :::: -You can subscribe to Elasticsearch Service directly from the AWS Marketplace. You then have the convenience of viewing your Elasticsearch Service subscription as part of your AWS bill, and you do not have to supply any additional billing information to Elastic. +You can subscribe to {{ecloud}} directly from the AWS Marketplace. You then have the convenience of viewing your {{ecloud}} subscription as part of your AWS bill, and you do not have to supply any additional billing information to Elastic. -Some differences exist when you subscribe to Elasticsearch Service through the AWS Marketplace: +Some differences exist when you subscribe to {{ecloud}} through the AWS Marketplace: * Billing starts automatically after the 7-day trial period. -* Previous Elasticsearch Service accounts cannot be converted to use the AWS Marketplace. If you already have an account, you must use a different email address when you sign up for a subscription through the AWS Marketplace. +* Previous {{ecloud}} accounts cannot be converted to use the AWS Marketplace. If you already have an account, you must use a different email address when you sign up for a subscription through the AWS Marketplace. * Pricing is based on the AWS region, the size of your deployment, as well as some other parameters such as data transfer out, data transfer internode, snapshot storage, and snapshot APIs. For more details, check [Billing Dimensions](../../cloud-organization/billing/cloud-hosted-deployment-billing-dimensions.md). -* The consolidated charges for your Elasticsearch Service subscription display in the AWS Marketplace billing console. It can take a day or two before new charges show up. +* The consolidated charges for your {{ecloud}} subscription display in the AWS Marketplace billing console. It can take a day or two before new charges show up. 
* Regardless of where your deployment is hosted (visible in the Elastic Cloud console), the AWS Marketplace charges for all AWS regions are metered in US East (Northern Virginia). As a result, US East (Northern Virginia) is listed as the region in the AWS Marketplace console. -* To get a detailed breakdown of your charges by deployment or by product, open the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body) and go to **Account & Billing** > **Usage**. +* To get a detailed breakdown of your charges by deployment or by product, open the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body) and go to **Account & Billing** > **Usage**. * To end your trial or unsubscribe from the service, delete your deployment(s). * Elastic provides different [subscription levels](https://www.elastic.co/subscriptions/cloud). During your 7-day trial you will automatically have an Enterprise level subscription. After the trial you can choose the subscription level. @@ -31,14 +35,14 @@ Some differences exist when you subscribe to Elasticsearch Service through the A Note the following items before you subscribe: * You cannot use an email address that already has an Elastic Cloud account. If you want to use the same account email address with AWS Marketplace billing, you must first change the email address on your existing account before setting up your new AWS Marketplace subscription. For instructions on how to change your email address in Elastic Cloud, check [update your email address](../../../cloud-account/update-your-email-address.md). -* If you want to manage deployments on the existing Elasticsearch Service account with your AWS MP billing account, you must migrate your deployments over to the new MP billing account. To migrate, use a [custom repository](../../tools/snapshot-and-restore/elastic-cloud-hosted.md) to take a snapshot and then restore that snapshot to a new deployment under your AWS Marketplace account. +* If you want to manage deployments on the existing {{ecloud}} account with your AWS MP billing account, you must migrate your deployments over to the new MP billing account. To migrate, use a [custom repository](../../tools/snapshot-and-restore/elastic-cloud-hosted.md) to take a snapshot and then restore that snapshot to a new deployment under your AWS Marketplace account. -## Subscribe to Elasticsearch Service through the AWS Marketplace [ec_subscribe_to_elasticsearch_service_through_the_aws_marketplace] +## Subscribe to {{ecloud}} through the AWS Marketplace [ec_subscribe_to_elasticsearch_service_through_the_aws_marketplace] -To subscribe to Elasticsearch Service through the AWS Marketplace: +To subscribe to {{ecloud}} through the AWS Marketplace: -1. Go to [Elasticsearch Service on the AWS Marketplace](https://aws.amazon.com/marketplace/pp/B01N6YCISK) and click **View purchase options**. +1. Go to [{{ecloud}} on the AWS Marketplace](https://aws.amazon.com/marketplace/pp/B01N6YCISK) and click **View purchase options**. 2. Click **Subscribe** and then **Set Up Your Account** to continue. 3. Follow the steps displayed to complete the signup process. 
diff --git a/deploy-manage/deploy/elastic-cloud/azure-marketplace-pricing.md b/deploy-manage/deploy/elastic-cloud/azure-marketplace-pricing.md index 75ab51bbd..d6d381f1d 100644 --- a/deploy-manage/deploy/elastic-cloud/azure-marketplace-pricing.md +++ b/deploy-manage/deploy/elastic-cloud/azure-marketplace-pricing.md @@ -1,4 +1,8 @@ --- +applies_to: + deployment: + ess: ga + serverless: preview mapped_pages: - https://www.elastic.co/guide/en/cloud/current/ec-azure-marketplace-pricing.html --- diff --git a/deploy-manage/deploy/elastic-cloud/azure-native-isv-service.md b/deploy-manage/deploy/elastic-cloud/azure-native-isv-service.md index c215be569..fd394f1a2 100644 --- a/deploy-manage/deploy/elastic-cloud/azure-native-isv-service.md +++ b/deploy-manage/deploy/elastic-cloud/azure-native-isv-service.md @@ -1,4 +1,8 @@ --- +applies_to: + deployment: + ess: ga + serverless: preview mapped_pages: - https://www.elastic.co/guide/en/cloud/current/ec-azure-marketplace-native.html --- @@ -34,7 +38,7 @@ Note the following terms: * **Azure Marketplace SaaS ID**: This is a unique identifier that’s generated one time by Microsoft Commercial Marketplace when a user creates their first Elastic resource (deployment) using the Microsoft Azure (Portal, API, SDK, or Terraform). This is mapped to a User ID and Azure Subscription ID * **{{ecloud}} organization**: An [organization](../../users-roles/cloud-organization.md) is the foundational construct under which everything in {{ecloud}} is grouped and managed. An organization is created as a step during the creation of your first Elastic resource (deployment), whether that’s done through Microsoft Azure (Portal, API, SDK, or Terraform). The initial member of the {{ecloud}} organization can then invite other users. -* **Elastic resource (deployment)**: An {{ecloud}} deployment helps you manage an {{es}} cluster and instances of other Elastic products in one place. You can work with Elastic deployments from within the Azure ecosystem. Multiple users in the {{ecloud}} organization can create different deployments from different Azure subscriptions. They can also create deployments from the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body). +* **Elastic resource (deployment)**: An {{ecloud}} deployment helps you manage an {{es}} cluster and instances of other Elastic products in one place. You can work with Elastic deployments from within the Azure ecosystem. Multiple users in the {{ecloud}} organization can create different deployments from different Azure subscriptions. They can also create deployments from the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). The following diagram shows the mapping between Microsoft Azure IDs, {{ecloud}} organization IDs, and your Elastic resources (deployments). @@ -139,7 +143,7 @@ $$$azure-integration-pricing$$$What is the pricing for this offer? $$$azure-integration-regions$$$Which Azure regions are supported? -: Here is the [list of available Azure regions](asciidocalypse://docs/cloud/docs/reference/cloud/cloud-hosted/ec-regions-templates-instances.md#ec-azure_regions) supported in {{ecloud}}. +: Here is the [list of available Azure regions](asciidocalypse://docs/cloud/docs/reference/cloud-hosted/ec-regions-templates-instances.md#ec-azure_regions) supported in {{ecloud}}. $$$azure-integration-subscription-levels$$$Which {{ecloud}} subscription levels are available? 
: The subscription defaults to the Enterprise subscription, granting immediate access to advanced {{stack}} features like machine learning, and premium support response time SLAs. {{ecloud}} offers a number of different [subscription levels](https://elastic.co/pricing). @@ -175,7 +179,7 @@ $$$azure-integration-azure-user-management$$$Is the {{ecloud}} Azure Native ISV :alt: Error message displayed in the {{ecloud}} console: To access the resource {resource-name} ::: - Share deployment resources directly with other Azure users by [configuring Active Directory single sign-on with the {{es}} cluster](../../users-roles/cluster-or-deployment-auth/openid-connect.md#ec-securing-oidc-azure). + Share deployment resources directly with other Azure users by [configuring Active Directory single sign-on with the {{es}} cluster](/deploy-manage/users-roles/cluster-or-deployment-auth/oidc-examples.md#ec-securing-oidc-azure). $$$azure-integration-azure-rbac$$$Does {{ecloud}} Azure Native ISV Service support recently introduced {{ecloud}} RBAC capability? @@ -185,11 +189,11 @@ $$$azure-integration-prior-cloud-account$$$I already have an {{ecloud}} account, : Yes. If you already have an {{ecloud}} account with the same email address as your Azure account you may need to contact `support@elastic.co`. $$$azure-integration-convert-trial$$$Can I sign up for an {{ecloud}} trial account and then convert to the {{ecloud}} Azure Native ISV Service? -: Yes. You can start a [free Elasticsearch Service trial](https://cloud.elastic.co/registration?page=docs&placement=docs-body) and then convert your account over to Azure. There are a few requirements: +: Yes. You can start a [free {{ecloud}} trial](https://cloud.elastic.co/registration?page=docs&placement=docs-body) and then convert your account over to Azure. There are a few requirements: * Make sure when creating deployments in the trial account you specify Azure as the cloud provider. * To convert your trial to the Azure marketplace you need to create a deployment in the Azure console. Just delete the new deployment if you don’t need it. After you create the new deployment your marketplace subscription is ready. - * Any deployments created during your trial won’t show up in the Azure console, since they weren’t created in Azure, but they are still accessible through the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body) and you are billed for their usage. + * Any deployments created during your trial won’t show up in the Azure console, since they weren’t created in Azure, but they are still accessible through the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body) and you are billed for their usage. $$$azure-integration-azure-tenant$$$Does {{es}} get deployed into my tenant in Azure? 
@@ -235,8 +239,8 @@ $$$azure-integration-cli-api$$$What other methods are available to deploy {{es}} * **Deploy using {{ecloud}}** * The {{ecloud}} [console](https://cloud.elastic.co?page=docs&placement=docs-body) - * The {{ecloud}} [REST API](asciidocalypse://docs/cloud/docs/reference/cloud/cloud-hosted/ec-api-restful.md) - * The {{ecloud}} [command line tool](asciidocalypse://docs/ecctl/docs/reference/cloud/ecctl/index.md) + * The {{ecloud}} [REST API](asciidocalypse://docs/cloud/docs/reference/cloud-hosted/ec-api-restful.md) + * The {{ecloud}} [command line tool](asciidocalypse://docs/ecctl/docs/reference/index.md) * The {{ecloud}} [Terraform provider](https://registry.terraform.io/providers/elastic/ec/latest/docs) Note that when you use any of the {{ecloud}} methods, the {{es}} deployment will not be available in Azure. @@ -253,7 +257,7 @@ $$$azure-integration-migrate$$$How do I migrate my data from the classic Azure m 6. In the new {{es}} resource, follow the steps in [Restore from a snapshot](../../../manage-data/migrate.md#ec-restore-snapshots) to register the custom snapshot repository from Step 1. 7. In the same set of steps, restore the snapshot data from the snapshot repository that you registered. 8. Confirm the data has moved successfully into your new {{es}} resource on Azure. - 9. To remove the old Azure subscription and the old deployments, go to the [Azure SaaS page](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.SaaS%2Fresources) and unsubscribe from the `{{ecloud}} ({{es}})` marketplace subscription. This action triggers the existing deployments termination. + 9. To remove the old Azure subscription and the old deployments, go to the [Azure SaaS page](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.SaaS%2Fresources) and unsubscribe from the {{ecloud}} ({{es}}) marketplace subscription. This action triggers the existing deployments termination. $$$azure-integration-no-inbox$$$Can I invite users to my organization, even if they cannot receive emails? @@ -272,7 +276,7 @@ $$$azure-integration-billing-elastic-costs$$$Why can’t I see Elastic resources : The costs associated with Elastic resources (deployments) are reported under unassigned in the Azure Portal. Refer to [Understand your Azure external services charges](https://learn.microsoft.com/en-us/azure/cost-management-billing/understand/understand-azure-marketplace-charges) in the Microsoft Documentation to understand Elastic resources/deployments costs. For granular Elastic resources costs, refer to [Monitor and analyze your acccount usage](../../cloud-organization/billing/monitor-analyze-usage.md). $$$azure-integration-billing-deployments$$$Why don’t I see my individual Elastic resources (deployments) in the Azure Marketplace Invoice? -: The way Azure Marketplace Billing Integration works today, the costs for Elastic resources (deployments) are reported for an {{ecloud}} organization as a single line item, reported against the Marketplace SaaS ID. This includes the Elastic deployments created using the Azure Portal, API, SDK, or CLI, and also the Elastic deployments created directly from the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body) in the respective {{ecloud}} organization. For granular Elastic resources costs refer to [Monitor and analyze your acccount usage](../../cloud-organization/billing/monitor-analyze-usage.md). 
As well, for more detail refer to [Integrated billing](#ec-azure-integration-billing-summary). +: The way Azure Marketplace Billing Integration works today, the costs for Elastic resources (deployments) are reported for an {{ecloud}} organization as a single line item, reported against the Marketplace SaaS ID. This includes the Elastic deployments created using the Azure Portal, API, SDK, or CLI, and also the Elastic deployments created directly from the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body) in the respective {{ecloud}} organization. For granular Elastic resources costs refer to [Monitor and analyze your acccount usage](../../cloud-organization/billing/monitor-analyze-usage.md). As well, for more detail refer to [Integrated billing](#ec-azure-integration-billing-summary). :::{image} ../../../images/cloud-ec-azure-billing-example.png :alt: Example billing report in the {{ecloud}} console @@ -323,7 +327,7 @@ $$$azure-integration-modify-deployment$$$How can I modify my {{ecloud}} deployme * [Add or remove custom plugins](add-plugins-extensions.md). * [Configure IP filtering](../../security/traffic-filtering.md). * [Monitor your {{ecloud}} deployment](../../monitor/stack-monitoring/elastic-cloud-stack-monitoring.md) to ensure it remains healthy. - * Add or remove API keys to use the [REST API](asciidocalypse://docs/cloud/docs/reference/cloud/cloud-hosted/ec-api-restful.md). + * Add or remove API keys to use the [REST API](asciidocalypse://docs/cloud/docs/reference/cloud-hosted/ec-api-restful.md). * [And more](cloud-hosted.md) @@ -380,7 +384,7 @@ Note that following restrictions for logging: * Only logs from non-compute Azure services are ingested as part of the configuration detailed in this document. Logs from compute services, such as Virtual Machines, into the {{stack}} will be added in a future release. -* The Azure services must be in one of the [supported regions](asciidocalypse://docs/cloud/docs/reference/cloud/cloud-hosted/ec-regions-templates-instances.md#ec-azure_regions). All regions will be supported in the future. +* The Azure services must be in one of the [supported regions](asciidocalypse://docs/cloud/docs/reference/cloud-hosted/ec-regions-templates-instances.md#ec-azure_regions). All regions will be supported in the future. :::: @@ -476,7 +480,7 @@ $$$azure-integration-deployment-failed-traffic-filter$$$My {{ecloud}} deployment Follow these steps to resolve the problem: - 1. Login to the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body). + 1. Login to the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). 2. Go to the [Traffic filters page](https://cloud.elastic.co/deployment-features/traffic-filters). 3. Edit the traffic filter and disable the **Include by default** option. @@ -497,10 +501,10 @@ $$$azure-integration-failed-sso$$$I can’t SSO into my {{ecloud}} deployment. $$$azure-integration-cant-see-deployment$$$I see some deployments in the {{ecloud}} console but not in the Azure Portal. -: Elastic Deployments created using the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body), the [{{es}} Service API](asciidocalypse://docs/cloud/docs/reference/cloud/cloud-hosted/ec-api-restful.md), or the [{{ecloud}} Terraform provider](https://registry.terraform.io/providers/elastic/ec/latest/docs) are only visible through the {{ecloud}} Console. 
To have the necessary metadata to be visible in the Azure Portal, {{ecloud}} deployments need to be created in Microsoft Azure. +: Elastic Deployments created using the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body), the [{{es}} Service API](asciidocalypse://docs/cloud/docs/reference/cloud-hosted/ec-api-restful.md), or the [{{ecloud}} Terraform provider](https://registry.terraform.io/providers/elastic/ec/latest/docs) are only visible through the {{ecloud}} Console. To have the necessary metadata to be visible in the Azure Portal, {{ecloud}} deployments need to be created in Microsoft Azure. ::::{note} -Mimicking this metadata by manually adding tags to an {{ecloud}} deployment will not work around this limitation. Instead, it will prevent you from being able to delete the deployment using the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body). +Mimicking this metadata by manually adding tags to an {{ecloud}} deployment will not work around this limitation. Instead, it will prevent you from being able to delete the deployment using the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). :::: diff --git a/deploy-manage/deploy/elastic-cloud/change-hardware.md b/deploy-manage/deploy/elastic-cloud/change-hardware.md index 8df62da98..6fbe0958d 100644 --- a/deploy-manage/deploy/elastic-cloud/change-hardware.md +++ b/deploy-manage/deploy/elastic-cloud/change-hardware.md @@ -1,11 +1,14 @@ --- +applies_to: + deployment: + ess: ga mapped_pages: - https://www.elastic.co/guide/en/cloud/current/ec-change-hardware-for-a-specific-resource.html --- # Change hardware [ec-change-hardware-for-a-specific-resource] -The virtual hardware on which Elastic stack deployments run is defined by instance configurations. To learn more about what an instance configuration is, refer to [Instance configurations](asciidocalypse://docs/cloud/docs/reference/cloud/cloud-hosted/hardware.md#ec-getting-started-configurations). +The virtual hardware on which Elastic stack deployments run is defined by instance configurations. To learn more about what an instance configuration is, refer to [Instance configurations](asciidocalypse://docs/cloud/docs/reference/cloud-hosted/hardware.md#ec-getting-started-configurations). When a deployment is created, each Elasticsearch tier and stateless resource (e.g., Kibana) gets an instance configuration assigned to it, based on the hardware profile used. The combination of instance configurations defined within each hardware profile is designed to provide the best possible outcome for each use case. Therefore, it is not advisable to use instance configurations that are not specified on the hardware profile, except in specific situations in which we may need to migrate an Elasticsearch tier or stateless resource to a different hardware type. An example of such a scenario is when a cloud provider stops supporting a hardware type in a specific region. @@ -20,8 +23,8 @@ Prerequisites: Follow these steps to migrate to a different instance configuration, replacing the default `$EC_API_KEY` value with your actual API key: -1. From the [list of instance configurations available for each region](asciidocalypse://docs/cloud/docs/reference/cloud/cloud-hosted/ec-regions-templates-instances.md), select the target instance configuration you want to migrate to. -2. 
Get the deployment update payload from the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body) **Edit** page, by selecting **Equivalent API request**, and store it in a file called `migrate_instance_configuration.json`. +1. From the [list of instance configurations available for each region](asciidocalypse://docs/cloud/docs/reference/cloud-hosted/ec-regions-templates-instances.md), select the target instance configuration you want to migrate to. +2. Get the deployment update payload from the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body) **Edit** page, by selecting **Equivalent API request**, and store it in a file called `migrate_instance_configuration.json`. Example payload containing relevant data for migrating the hot Elasticsearch tier: @@ -78,6 +81,6 @@ Having an instance configuration mismatch between the deployment and the hardwar ## Deprecated instance configurations (ICs) and deployment templates (DTs) [ec-deprecated-icdt] -A list of deprecated and valid ICs/DTs can be found on the [Available regions, deployment templates and instance configurations](asciidocalypse://docs/cloud/docs/reference/cloud/cloud-hosted/ec-regions-templates-instances.md) page, as well as through the API, using `hide_deprecated` to return valid ICs/DTs. For example, to return valid ICs/DTs the following request can be used: `https://api.elastic-cloud.com/api/v1/deployments/templates?region=us-west-2&hide_deprecated=true`. To list only the deprecated ones, this can be used: `https://api.elastic-cloud.com/api/v1/deployments/templates?region=us-west-2&metadata=legacy:true`. +A list of deprecated and valid ICs/DTs can be found on the [Available regions, deployment templates and instance configurations](asciidocalypse://docs/cloud/docs/reference/cloud-hosted/ec-regions-templates-instances.md) page, as well as through the API, using `hide_deprecated` to return valid ICs/DTs. For example, to return valid ICs/DTs the following request can be used: `https://api.elastic-cloud.com/api/v1/deployments/templates?region=us-west-2&hide_deprecated=true`. To list only the deprecated ones, this can be used: `https://api.elastic-cloud.com/api/v1/deployments/templates?region=us-west-2&metadata=legacy:true`. If a deprecated IC/DT is already in use, it can continue to be used. However, creating or migrating to a deprecated IC/DT is no longer possible and will result in a plan failing. In order to migrate to a valid IC/DT, navigate to the **Edit hardware profile** option in the Cloud UI or use the [Deployment API](https://www.elastic.co/docs/api/doc/cloud/operation/operation-migrate-deployment-template). 
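For reference, these calls can also be made from the command line. The following is a minimal sketch, assuming the {{ecloud}} RESTful API with the `Authorization: ApiKey $EC_API_KEY` header used above; `$DEPLOYMENT_ID` is a placeholder for your deployment ID, and `migrate_instance_configuration.json` is the payload saved from the **Equivalent API request** view.

```sh
# List the valid (non-deprecated) deployment templates and instance configurations for a region
curl -s -H "Authorization: ApiKey $EC_API_KEY" \
  "https://api.elastic-cloud.com/api/v1/deployments/templates?region=us-west-2&hide_deprecated=true"

# List only the deprecated ones
curl -s -H "Authorization: ApiKey $EC_API_KEY" \
  "https://api.elastic-cloud.com/api/v1/deployments/templates?region=us-west-2&metadata=legacy:true"

# Submit the payload saved earlier as migrate_instance_configuration.json
# to the deployment update endpoint ($DEPLOYMENT_ID is a placeholder for your deployment ID)
curl -s -X PUT \
  -H "Authorization: ApiKey $EC_API_KEY" \
  -H "Content-Type: application/json" \
  --data-binary @migrate_instance_configuration.json \
  "https://api.elastic-cloud.com/api/v1/deployments/$DEPLOYMENT_ID"
```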
diff --git a/deploy-manage/deploy/elastic-cloud/cloud-hosted.md b/deploy-manage/deploy/elastic-cloud/cloud-hosted.md index fdc47345b..c2dfcce57 100644 --- a/deploy-manage/deploy/elastic-cloud/cloud-hosted.md +++ b/deploy-manage/deploy/elastic-cloud/cloud-hosted.md @@ -1,14 +1,15 @@ --- +applies_to: + deployment: + ess: ga mapped_urls: - https://www.elastic.co/guide/en/cloud/current/index.html - https://www.elastic.co/guide/en/cloud/current/ec-getting-started.html - - https://www.elastic.co/guide/en/cloud/current/ec-prepare-production.html - https://www.elastic.co/guide/en/cloud/current/ec-faq-getting-started.html - https://www.elastic.co/guide/en/cloud/current/ec-about.html - - https://www.elastic.co/guide/en/cloud-heroku/current/ech-configure.html --- -# Cloud Hosted +# Elastic Cloud Hosted % What needs to be done: Refine @@ -51,10 +52,152 @@ $$$faq-where$$$ $$$faq-x-pack$$$ -**This page is a work in progress.** The documentation team is working to combine content pulled from the following pages: +**{{ech}} is the Elastic Stack, managed through {{ecloud}} deployments.** -* [/raw-migrated-files/cloud/cloud/ec-getting-started.md](/raw-migrated-files/cloud/cloud/ec-getting-started.md) -* [/raw-migrated-files/cloud/cloud/ec-prepare-production.md](/raw-migrated-files/cloud/cloud/ec-prepare-production.md) -* [/raw-migrated-files/cloud/cloud/ec-faq-getting-started.md](/raw-migrated-files/cloud/cloud/ec-faq-getting-started.md) -* [/raw-migrated-files/cloud/cloud/ec-about.md](/raw-migrated-files/cloud/cloud/ec-about.md) -* [/raw-migrated-files/cloud/cloud-heroku/ech-configure.md](/raw-migrated-files/cloud/cloud-heroku/ech-configure.md) \ No newline at end of file +It is also formerly known as Elasticsearch Service. + +{{ech}} allows you to manage one or more instances of the Elastic Stack through **deployments**. These deployments are hosted on {{ecloud}}, through the cloud provider and regions of your choice, and are tied to your organization account. + +A **hosted deployment** helps you manage an Elasticsearch cluster and instances of other Elastic products, like Kibana or APM instances, in one place. Spin up, scale, upgrade, and delete your Elastic Stack products without having to manage each one separately. In a deployment, everything works together. + +::::{note} +{{ech}} is one of the two deployment options available on {{ecloud}}. [Depending on your needs](../elastic-cloud.md), you can also run [Elastic Cloud Serverless projects](/deploy-manage/deploy/elastic-cloud/serverless.md). +:::: + + +**Hardware profiles to optimize deployments for your usage.** + +You can optimize the configuration and performance of a deployment by selecting a **hardware profile** that matches your usage. + +*Hardware profiles* are presets that provide a unique blend of storage, memory and vCPU for each component of a deployment. They support a specific purpose, such as a hot-warm architecture that helps you manage your data storage retention. + +You can use these presets, or start from them to get the unique configuration you need. They can vary slightly from one cloud provider or region to another to align with the available virtual hardware. + +**Solutions to help you make the most out of your data in each deployment.** + +Building a rich search experience, gaining actionable insight into your environment, or protecting your systems and endpoints? You can implement each of these major use cases, and more, with the solutions that are pre-built in each Elastic deployment. 
+ +:::{image} ../../../images/cloud-ec-stack-components.png +:alt: Elastic Stack components and solutions with Enterprise Search +:width: 75% +::: + +:::{important} +Enterprise Search is not available in {{stack}} 9.0+. +::: + +These solutions help you accomplish your use cases: Ingest data into the deployment and set up specific capabilities of the Elastic Stack. + +Of course, you can choose to follow your own path and use Elastic components available in your deployment to ingest, visualize, and analyze your data independently from solutions. + + +## How to operate {{ech}}? [ec_how_to_operate_elasticsearch_service] + +**Where to start?** + +* Learn the basics of {{es}}, the {{stack}}, and its solutions in [Get started](/get-started/index.md). +* Sign up using your preferred method: + + * [Sign Up for a Trial](/deploy-manage/deploy/elastic-cloud/create-an-organization.md) - Sign up, check what your free trial includes and when we require a credit card. + * [Sign Up from Marketplace](/deploy-manage/deploy/elastic-cloud/subscribe-from-marketplace.md) - Consolidate billing portals by signing up through one of the available marketplaces. + +* [Create a deployment](/deploy-manage/deploy/elastic-cloud/create-an-elastic-cloud-hosted-deployment.md) - Get up and running very quickly. Select your desired configuration and let Elastic deploy Elasticsearch, Kibana, and the Elastic products that you need for you. In a deployment, everything works together, everything runs on hardware that is optimized for your use case. +* [Connect your data to your deployment](/manage-data/ingest.md) - Ingest and index the data you want, from a variety of sources, and take action on it. + +**Adjust the capacity and capabilities of your deployments for production** + +There are a few things that can help you make sure that your production deployments remain available, healthy, and ready to handle your data in a scalable way over time, with the expected level of performance. Check [](/deploy-manage/production-guidance/plan-for-production-elastic-cloud.md). + +**Secure your environment** + +Control which users and services can access your deployments by [securing your environment](/deploy-manage/security/secure-your-cluster-deployment.md). [Add authentication mechanisms](/deploy-manage/users-roles.md), configure [traffic filtering](/deploy-manage/security/traffic-filtering.md) for private link, encrypt your deployment data and snapshots at rest [with your own key](/deploy-manage/security/encrypt-deployment-with-customer-managed-encryption-key.md), [manage trust](/deploy-manage/remote-clusters.md) with {{es}} clusters from other environments, and more. + +**Monitor your deployments and keep them healthy** + +{{ech}} provides several ways to monitor your deployments, anticipate and prevent issues, or fix them when they occur. Check [Monitoring your deployment](/deploy-manage/monitor.md) to get more details. 
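If you want to confirm that data is flowing before you set up a full ingest pipeline, you can index a test document directly against the {{es}} endpoint of your deployment. The following is a minimal sketch; the endpoint URL, index name, and API key are placeholders for values from your own deployment.

```sh
# Placeholders: set ES_URL to your deployment's Elasticsearch endpoint and
# ES_API_KEY to an API key created for that deployment.
ES_URL="https://<your-elasticsearch-endpoint>"

# Index one test document into an example index
curl -s -X POST "$ES_URL/my-sample-index/_doc" \
  -H "Authorization: ApiKey $ES_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"message": "hello from my deployment", "@timestamp": "2025-01-01T00:00:00Z"}'

# Confirm the document is searchable
curl -s -H "Authorization: ApiKey $ES_API_KEY" \
  "$ES_URL/my-sample-index/_search?q=message:hello"
```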
+ +## More about {{ech}} [ec-about] + +Find more information about {{ech}} on the following pages: + +* [Subscription Levels](/deploy-manage/license.md) +* [Version Policy](/deploy-manage/deploy/elastic-cloud/available-stack-versions.md) +* [{{ech}} Hardware](asciidocalypse://docs/cloud/docs/reference/cloud-hosted/hardware.md) +* [{{ech}} Regions](asciidocalypse://docs/cloud/docs/reference/cloud-hosted/regions.md) +* [Service Status](/deploy-manage/cloud-organization/service-status.md) +* [Getting help](/troubleshoot/index.md) +* [Restrictions and known problems](/deploy-manage/deploy/elastic-cloud/restrictions-known-problems.md) + +:::{dropdown} {{ech}} FAQ + +$$$ec-faq-getting-started$$$ + +This frequently-asked-questions list helps you with common questions while you get {{ech}} up and running for the first time. For questions about {{ech}} configuration options or billing, check the [Technical FAQ](/deploy-manage/index.md) and the [Billing FAQ](/deploy-manage/cloud-organization/billing/billing-faq.md). + +* [What is {{ech}}?](/deploy-manage/deploy/elastic-cloud/cloud-hosted.md#faq-what) +* [Is {{ech}}, formerly known as Elasticsearch Service, the same as Amazon’s {{es}} Service?](/deploy-manage/deploy/elastic-cloud/cloud-hosted.md#faq-aws-difference) +* [Can I run the full Elastic Stack in {{ech}}?](/deploy-manage/deploy/elastic-cloud/cloud-hosted.md#faq-full-stack) +* [Can I try {{ech}} for free?](/deploy-manage/deploy/elastic-cloud/cloud-hosted.md#faq-trial) +* [What if I need to change the size of my {{es}} cluster at a later time?](/deploy-manage/deploy/elastic-cloud/cloud-hosted.md#faq-config) +* [Do you offer support subscriptions?](/deploy-manage/deploy/elastic-cloud/cloud-hosted.md#faq-subscriptions) +* [Where are deployments hosted?](/deploy-manage/deploy/elastic-cloud/cloud-hosted.md#faq-where) +* [What is the difference between {{ech}} and the Amazon {{es}} Service?](/deploy-manage/deploy/elastic-cloud/cloud-hosted.md#faq-vs-aws) +* [Can I use {{ech}} on platforms other than AWS?](/deploy-manage/deploy/elastic-cloud/cloud-hosted.md#faq-aws) +* [Do you offer Elastic’s commercial products?](/deploy-manage/deploy/elastic-cloud/cloud-hosted.md#faq-elastic) +* [Is my {{es}} cluster protected by X-Pack?](/deploy-manage/deploy/elastic-cloud/cloud-hosted.md#faq-x-pack) +* [Is there a limit on the number of documents or indexes I can have in my cluster?](/deploy-manage/deploy/elastic-cloud/cloud-hosted.md#faq-limit) + +$$$faq-what$$$**What is {{ech}}?** +: {{ech}} is hosted and managed {{es}} and {{kib}} brought to you by the creators of {{es}}. {{ech}} is part of Elastic Cloud and ships with features that you can only get from the company behind {{es}}, {{kib}}, {{beats}}, and {{ls}}. {{es}} is a full text search engine that suits a range of uses, from search on websites to big data analytics and more. + +$$$faq-aws-difference$$$**Is {{ech}}, formerly known as Elasticsearch Service, the same as Amazon’s {{es}} Service?** +: {{ech}} is not the same as the Amazon {{es}} service. To learn more about the differences, check our [AWS {{es}} Service](https://www.elastic.co/aws-elasticsearch-service) comparison. + +$$$faq-full-stack$$$**Can I run the full Elastic Stack in {{ech}}?** +: Many of the products that are part of the Elastic Stack are readily available in {{ech}}, including {{es}}, {{kib}}, plugins, and features such as monitoring and security. Use other Elastic Stack products directly with {{ech}}. For example, both Logstash and Beats can send their data to {{ech}}. 
What is run is determined by the [subscription level](https://www.elastic.co/cloud/as-a-service/subscriptions). + +$$$faq-trial$$$**Can I try {{ech}} for free?** +: Yes, sign up for a 14-day free trial. The trial starts the moment a cluster is created. During the free trial period get access to a deployment to explore Elastic solutions for Search, Observability, Security, or the latest version of the Elastic Stack. + + +$$$faq-config$$$**What if I need to change the size of my {{es}} cluster at a later time?** +: Scale your clusters both up and down from the user console, whenever you like. The resizing of the cluster is transparently done in the background, and highly available clusters are resized without any downtime. If you scale your cluster down, make sure that the downsized cluster can handle your {{es}} memory requirements. Read more about sizing and memory in [Sizing {{es}}](https://www.elastic.co/blog/found-sizing-elasticsearch). + +$$$faq-subscriptions$$$**Do you offer support?** +: Yes, all subscription levels for {{ech}} include support, handled by email or through the Elastic Support Portal. Different subscription levels include different levels of support. For the Standard subscription level, there is no service-level agreement (SLA) on support response times. Gold and Platinum subscription levels include an SLA on response times to tickets and dedicated resources. To learn more, check [Getting Help](/troubleshoot/index.md). + +$$$faq-where$$$**Where are deployments hosted?** +: We host our {{es}} clusters on Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure. Check out which [regions we support](https://www.elastic.co/guide/en/cloud/current/ec-reference-regions.html) and what [hardware we use](https://www.elastic.co/guide/en/cloud/current/ec-reference-hardware.html). New data centers are added all the time. + +$$$faq-vs-aws$$$**What is the difference between {{ech}} and the Amazon {{es}} Service?** +: {{ech}} is the only hosted and managed {{es}} service built, managed, and supported by the company behind {{es}}, {{kib}}, {{beats}}, and {{ls}}. With {{ech}}, you always get the latest versions of the software. Our service is built on best practices and years of experience hosting and managing thousands of {{es}} clusters in the Cloud and on premise. For more information, check the following Amazon and Elastic {{es}} Service [comparison page](https://www.elastic.co/aws-elasticsearch-service). + + Please note that there is no formal partnership between Elastic and Amazon Web Services (AWS), and Elastic does not provide any support on the AWS {{es}} Service. + + +$$$faq-aws$$$**Can I use {{ech}} on platforms other than AWS?** +: Yes, create deployments on the Google Cloud Platform and Microsoft Azure. + +$$$faq-elastic$$$**Do you offer Elastic’s commercial products?** +: Yes, all {{ech}} customers have access to basic authentication, role-based access control, and monitoring. + + {{ecloud}} Gold, Platinum and Enterprise customers get complete access to all the capabilities in X-Pack: + + * Security + * Alerting + * Monitoring + * Reporting + * Graph Analysis & Visualization + + [Contact us](https://www.elastic.co/cloud/contact) to learn more. + + +$$$faq-x-pack$$$**Is my Elasticsearch cluster protected by X-Pack?** +: Yes, X-Pack security features offer the full power to protect your {{ech}} deployment with basic authentication and role-based access control. 
+ +$$$faq-limit$$$**Is there a limit on the number of documents or indexes I can have in my cluster?** +: No. We do not enforce any artificial limit on the number of indexes or documents you can store in your cluster. + + That said, there is a limit to how many indexes Elasticsearch can cope with. Every shard of every index is a separate Lucene index, which in turn comprises several files. A process cannot have an unlimited number of open files. Also, every shard has its associated control structures in memory. So, while we will let you make as many indexes as you want, there are limiting factors. Our larger plans provide your processes with more dedicated memory and CPU-shares, so they are capable of handling more indexes. The number of indexes or documents you can fit in a given plan therefore depends on their structure and use. + +::: \ No newline at end of file diff --git a/deploy-manage/deploy/elastic-cloud/configure.md b/deploy-manage/deploy/elastic-cloud/configure.md index de5a2762d..5cfef00d5 100644 --- a/deploy-manage/deploy/elastic-cloud/configure.md +++ b/deploy-manage/deploy/elastic-cloud/configure.md @@ -1,21 +1,63 @@ --- +applies_to: + deployment: + ess: ga mapped_urls: - https://www.elastic.co/guide/en/cloud/current/ec-customize-deployment.html - https://www.elastic.co/guide/en/cloud-heroku/current/ech-configure-settings.html + - https://www.elastic.co/guide/en/cloud-heroku/current/ech-configure.html --- # Configure -% What needs to be done: Refine +You might want to change the configuration of your deployment to: -% Use migrated content from existing pages that map to this page: +* Add features, such as machine learning or APM (application performance monitoring). +* Increase or decrease capacity by changing the amount of reserved memory and storage for different parts of your deployment. -% - [ ] ./raw-migrated-files/cloud/cloud/ec-customize-deployment.md -% - [ ] ./raw-migrated-files/cloud/cloud-heroku/ech-configure-settings.md + ::::{note} + During the free trial, {{ech}} deployments are restricted to a limited size. You can increase the size of your deployments when your trial is converted to a paid subscription. + :::: -⚠️ **This page is a work in progress.** ⚠️ +* Enable [autoscaling](../../../deploy-manage/autoscaling.md) so that the available resources for deployment components, such as data tiers and machine learning nodes, adjust automatically as the demands on them change over time. +* Enable high availability, also known as fault tolerance, by adjusting the number of data center availability zones that parts of your deployment run on. +* Upgrade to new versions of {{es}}. You can upgrade from one major version to another, such as from 6.8.23 to 7.17.27, or from one minor version to another, such as 6.1 to 6.2. You can’t downgrade versions. +* Change what plugins are available on your {{es}} cluster. -The documentation team is working to combine content pulled from the following pages: +With the exception of major version upgrades for Elastic Stack products, {{ech}} can perform configuration changes without having to interrupt your deployment. You can continue searching and indexing. The changes can also be done in bulk. For example: in one action, you can add more memory, upgrade, adjust the number of {{es}} plugins and adjust the number of availability zones. 
-* [/raw-migrated-files/cloud/cloud/ec-customize-deployment.md](/raw-migrated-files/cloud/cloud/ec-customize-deployment.md)
-* [/raw-migrated-files/cloud/cloud-heroku/ech-configure-settings.md](/raw-migrated-files/cloud/cloud-heroku/ech-configure-settings.md)
\ No newline at end of file
+We perform all of these changes by creating instances with the new configurations that join your existing deployment before removing the old ones. For example: if you are changing your {{es}} cluster configuration, we create new {{es}} nodes, recover your indexes, and start routing requests to the new nodes. Only when all new {{es}} nodes are ready, do we bring down the old ones.
+
+By doing it this way, we reduce the risk of making configuration changes. If any of the new instances have a problem, the old ones are still there, processing requests.
+
+::::{note}
+If you use a Platform-as-a-Service provider like Heroku, the administration console is slightly different and does not allow you to make changes that will affect the price. That must be done in the platform provider’s add-on system. You can still do things like change {{es}} version or plugins.
+::::
+
+
+To change your deployment:
+
+1. Log in to the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body).
+2. Find your deployment on the home page in the **Hosted deployments** card and select **Manage** to access it directly. Or, select **Hosted deployments** to go to the **Deployments** page to view all of your deployments.
+
+    On the **Deployments** page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list.
+
+3. From the deployment menu, select **Edit**.
+4. Let the user interface guide you through the configuration of your cluster.
+
+    If you are changing an existing deployment, you can make multiple changes to your {{es}} cluster with a single configuration update, such as changing the capacity and upgrading to a new {{es}} version in one step.
+
+5. Save your changes. The new configuration takes a few moments to create.
+
+Review the changes to your configuration on the **Activity** page, with a tab for {{es}} and one for {{kib}}.
+
+::::{tip}
+If you are creating a new deployment, select **Edit settings** to change the cloud provider, region, hardware profile, and stack version; or select **Advanced settings** for more complex configuration settings.
+::::
+
+
+That’s it! If you haven’t already, [start exploring with {{kib}}](../../../deploy-manage/deploy/elastic-cloud/access-kibana.md), our visualization tool. If you’re not familiar with adding data yet, {{kib}} can show you how to index your data into {{es}}, or try our basic steps for working with [{{es}}](../../../manage-data/data-store/manage-data-from-the-command-line.md).
+
+::::{tip}
+Some features are not available during the 14-day free trial. If a feature is greyed out, [add a credit card](../../../deploy-manage/cloud-organization/billing/add-billing-details.md) to unlock the feature.
+:::: diff --git a/deploy-manage/deploy/elastic-cloud/create-an-elastic-cloud-hosted-deployment.md b/deploy-manage/deploy/elastic-cloud/create-an-elastic-cloud-hosted-deployment.md index 700e29d2c..6bdc5625f 100644 --- a/deploy-manage/deploy/elastic-cloud/create-an-elastic-cloud-hosted-deployment.md +++ b/deploy-manage/deploy/elastic-cloud/create-an-elastic-cloud-hosted-deployment.md @@ -1,45 +1,50 @@ --- +applies_to: + deployment: + ess: ga mapped_pages: - https://www.elastic.co/guide/en/cloud/current/ec-create-deployment.html + - https://www.elastic.co/guide/en/cloud/current/ec-prepare-production.html + - https://www.elastic.co/guide/en/cloud/current/ec-configure-deployment-settings.html + - https://www.elastic.co/guide/en/cloud-heroku/current/ech-configure-deployment-settings.html --- # Create an Elastic Cloud Hosted deployment [ec-create-deployment] An Elastic Cloud deployment includes Elastic Stack components such as Elasticsearch, Kibana, and other features, allowing you to store, search, and analyze your data. You can spin up a proof-of-concept deployment to learn more about what Elastic can do for you. -::::{note} -To explore Elasticsearch Service and its solutions, create your first deployment by following one of these [getting started guides](https://www.elastic.co/guide/en/starting-with-the-elasticsearch-platform-and-its-solutions/current/getting-started-guides.html). If you are instead interested in serverless Elastic Cloud, check the [serverless documentation](https://docs.elastic.co/serverless). -:::: - - +:::{note} You can also create a deployment using the [Elastic Cloud API](https://www.elastic.co/docs/api/doc/cloud/group/endpoint-deployments). This can be an interesting alternative for more advanced needs, such as for [creating a deployment encrypted with your own key](../../security/encrypt-deployment-with-customer-managed-encryption-key.md). +::: -1. Log in to your [cloud.elastic.co](https://cloud.elastic.co/login) account and select **Create deployment** from the Elasticsearch Service main page: +1. Log in to your [cloud.elastic.co](https://cloud.elastic.co/login) account and select **Create deployment** from the {{ecloud}} main page: :::{image} ../../../images/cloud-ec-login-first-deployment.png :alt: Log in to create a deployment ::: +1. Select a solution view for your deployment. Solution views define the navigation and set of features that will be first available in your deployment. You can change it later, or [create different spaces](/deploy-manage/manage-spaces.md) with different solution views within your deployment. -Once you are on the **Create deployment** page, you can create the deployment with the defaults assigned, where you can edit the basic settings, or configure more advanced settings. + To learn more about what each solution offers, check [Elasticsearch](/solutions/search/get-started.md), [Observability](/solutions/observability/get-started.md), and [Security](/solutions/security/get-started.md). 1. From the main **Settings**, you can change the cloud provider and region that host your deployment, the stack version, and the hardware profile, or restore data from another deployment (**Restore snapshot data**): :::{image} ../../../images/cloud-ec-create-deployment.png :alt: Create deployment + :width: 50% ::: - Cloud provider - : The cloud platform where you’ll deploy your deployment. We support: Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure. You do not need to provide your own keys. 
+ **Cloud provider**: The cloud platform where you’ll deploy your deployment. We support: Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure. You do not need to provide your own keys. + + **Region**: The cloud platform’s region your deployment will live. If you have compliance or latency requirements, you can create your deployment in any of our [supported regions](asciidocalypse://docs/cloud/docs/reference/cloud-hosted/regions.md). The region should be as close as possible to the location of your data. - Region - : The cloud platform’s region your deployment will live. If you have compliance or latency requirements, you can create your deployment in any of our [supported regions](asciidocalypse://docs/cloud/docs/reference/cloud/cloud-hosted/regions.md). The region should be as close as possible to the location of your data. + **Hardware profile**: This allows you to configure the underlying virtual hardware that you’ll deploy your Elastic Stack on. Each hardware profile provides a unique blend of storage, RAM and vCPU sizes. You can select a hardware profile that’s best suited for your use case. For example CPU Optimized if you have a search-heavy use case that’s bound by compute resources. For more details, check the [hardware profiles](ec-change-hardware-profile.md) section. You can also view the [virtual hardware details](asciidocalypse://docs/cloud/docs/reference/cloud-hosted/hardware.md) which powers hardware profiles. With the **Advanced settings** option, you can configure the underlying virtual hardware associated with each profile. - Hardware profile - : This allows you to configure the underlying virtual hardware that you’ll deploy your Elastic Stack on. Each hardware profile provides a unique blend of storage, RAM and vCPU sizes. You can select a hardware profile that’s best suited for your use case. For example CPU Optimized if you have a search-heavy use case that’s bound by compute resources. For more details, check the [hardware profiles](ec-configure-deployment-settings.md#ec-hardware-profiles) section. You can also view the [virtual hardware details](asciidocalypse://docs/cloud/docs/reference/cloud/cloud-hosted/hardware.md) which powers hardware profiles. With the **Advanced settings** option, you can configure the underlying virtual hardware associated with each profile. + **Version**: The Elastic Stack version that will get deployed. Defaults to the latest version. Our [version policy](available-stack-versions.md) describes which versions are available to deploy. - Version - : The Elastic Stack version that will get deployed. Defaults to the latest version. Our [version policy](available-stack-versions.md) describes which versions are available to deploy. + **Snapshot source**: To create a deployment from a snapshot, select a snapshot source. You need to [configure snapshots](../../tools/snapshot-and-restore.md) and establish a snapshot lifecycle management policy and repository before you can restore from a snapshot. The snapshot options depend on the stack version the deployment is running. + + **Name**: This setting allows you to assign a more human-friendly name to your cluster which will be used for future reference in the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). Common choices are dev, prod, test, or something more domain specific. 2. Expand **Advanced settings** to configure your deployment for encryption using a customer-managed key, autoscaling, storage, memory, and vCPU. 
Check [Customize your deployment](configure.md) for more details. @@ -56,4 +61,15 @@ Once you are on the **Create deployment** page, you can create the deployment wi :alt: ESS Deployment main page ::: +## Preparing a deployment for production [ec-prepare-production] + +To make sure you’re all set for production, consider the following actions: + +* [Plan for your expected workloads](/deploy-manage/production-guidance/plan-for-production-elastic-cloud.md) and consider how many availability zones you’ll need. +* [Create a deployment](/deploy-manage/deploy/elastic-cloud/create-an-elastic-cloud-hosted-deployment.md) on the region you need and with a hardware profile that matches your use case. +* [Change your configuration](/deploy-manage/deploy/elastic-cloud/ec-customize-deployment-components.md) by turning on autoscaling, adding high availability, or adjusting components of the Elastic Stack. +* [Add extensions and plugins](/deploy-manage/deploy/elastic-cloud/add-plugins-extensions.md) to use Elastic supported extensions or add your own custom dictionaries and scripts. +* [Edit settings and defaults](/deploy-manage/deploy/elastic-cloud/edit-stack-settings.md) to fine tune the performance of specific features. +* [Manage your deployment](/deploy-manage/deploy/elastic-cloud/manage-deployments.md) as a whole to restart, upgrade, stop routing, or delete. +* [Set up monitoring](/deploy-manage/monitor/stack-monitoring/elastic-cloud-stack-monitoring.md) to learn how to configure your deployments for observability, which includes metric and log collection, troubleshooting views, and cluster alerts to automate performance monitoring. diff --git a/deploy-manage/deploy/elastic-cloud/create-an-organization.md b/deploy-manage/deploy/elastic-cloud/create-an-organization.md index a67a47361..15ee964e1 100644 --- a/deploy-manage/deploy/elastic-cloud/create-an-organization.md +++ b/deploy-manage/deploy/elastic-cloud/create-an-organization.md @@ -1,11 +1,16 @@ --- +applies_to: + deployment: + ess: ga + serverless: ga +navigation_title: Sign up mapped_urls: - https://www.elastic.co/guide/en/cloud/current/ec-getting-started-trial.html - https://www.elastic.co/guide/en/serverless/current/general-sign-up-trial.html - https://www.elastic.co/guide/en/cloud/current/ec-getting-started-existing-email.html --- -# Create an organization +# Sign up and create an organization % What needs to be done: Refine @@ -23,8 +28,87 @@ mapped_urls: $$$general-sign-up-trial-what-is-included-in-my-trial$$$ -**This page is a work in progress.** The documentation team is working to combine content pulled from the following pages: +To sign up: -* [/raw-migrated-files/cloud/cloud/ec-getting-started-trial.md](/raw-migrated-files/cloud/cloud/ec-getting-started-trial.md) -* [/raw-migrated-files/docs-content/serverless/general-sign-up-trial.md](/raw-migrated-files/docs-content/serverless/general-sign-up-trial.md) -* [/raw-migrated-files/cloud/cloud/ec-getting-started-existing-email.md](/raw-migrated-files/cloud/cloud/ec-getting-started-existing-email.md) \ No newline at end of file +1. Go to the [Elastic Cloud Sign Up](https://cloud.elastic.co/registration?page=docs&placement=docs-body) page. +2. Choose one of the available sign up methods. You can register with your email address and a password, use a Google or Microsoft account, or [subscribe from a Marketplace](../../../deploy-manage/deploy/elastic-cloud/subscribe-from-marketplace.md). + +:::{note} +You can only belong to one {{ecloud}} organization at a time. 
If you want to create or join another organization, you must leave the previous one or use a different email address.
+:::
+
+When you first sign up, you create an organization and start with a trial license.
+
+This organization is the umbrella for all of your Elastic Cloud resources, users, and account settings. Every organization has a unique identifier. Bills are invoiced according to the billing contact and details that you set for your organization. For more details on how to manage your organization, refer to [](/deploy-manage/cloud-organization.md).
+
+
+## Trial information [general-sign-up-trial-what-is-included-in-my-trial]
+
+Your free 14-day trial includes:
+
+**One hosted deployment**
+
+A deployment lets you explore Elastic solutions for Search, Observability, and Security. Trial deployments run on the latest version of the Elastic Stack. They include 8 GB of RAM spread out over two availability zones, and enough storage space to get you started. If you’re looking to evaluate a smaller workload, you can scale down your trial deployment. Each deployment includes Elastic features such as Maps, SIEM, machine learning, advanced security, and much more. You have some sample data sets to play with and tutorials that describe how to add your own data.
+
+For more information, check the [{{ech}} documentation](cloud-hosted.md).
+
+**One serverless project**
+
+Serverless projects package Elastic Stack features by type of solution:
+
+* [{{es}}](../../../solutions/search.md)
+* [Observability](../../../solutions/observability.md)
+* [Security](../../../solutions/security/elastic-security-serverless.md)
+
+When you create a project, you select the project type applicable to your use case, so only the relevant and impactful applications and features are easily accessible to you.
+
+For more information, check the [{{serverless-short}} documentation](serverless.md).
+
+
+### Trial limitations [general-sign-up-trial-what-limits-are-in-place-during-a-trial]
+
+During the free 14-day trial, Elastic provides access to one hosted deployment and one serverless project. If all you want to do is try out Elastic, the trial includes more than enough to get you started. During the trial period, some limitations apply.
+
+**Hosted deployments**
+
+* You can have one active deployment at a time
+* The deployment size is limited to 8GB RAM and approximately 360GB of storage, depending on the specified hardware profile
+* Machine learning nodes are available up to 4GB RAM
+* Custom {{es}} plugins are not enabled
+
+For more information, check the [{{ech}} documentation](cloud-hosted.md).
+
+**Serverless projects**
+
+* You can have one active serverless project at a time.
+* Search Power is limited to 100. This setting only exists in {{es-serverless}} projects
+* Search Boost Window is limited to 7 days. This setting only exists in {{es-serverless}} projects
+* Scaling is limited for serverless projects in trials. Failures might occur if the workload requires memory or compute beyond what the above search power and search boost window setting limits can provide.
+
+**Remove limitations**
+
+Subscribe to [Elastic Cloud](/deploy-manage/cloud-organization/billing/add-billing-details.md) for the following benefits:
+
+* Increased memory or storage for deployment components, such as {{es}} clusters, machine learning nodes, and APM server.
+* As many deployments and projects as you need.
+* Third availability zone for your deployments.
+* Access to additional features, such as cross-cluster search and cross-cluster replication.
+
+You can subscribe to Elastic Cloud at any time during your trial. [Billing](../../../deploy-manage/cloud-organization/billing/serverless-project-billing-dimensions.md) starts when you subscribe. To maximize the benefits of your trial, subscribe at the end of the free period. To monitor charges, anticipate future costs, and adjust your usage, check your [account usage](/deploy-manage/cloud-organization/billing/monitor-analyze-usage.md) and [billing history](/deploy-manage/cloud-organization/billing/view-billing-history.md).
+
+
+### Get started with your trial [general-sign-up-trial-how-do-i-get-started-with-my-trial]
+
+Start by checking out some common approaches for [moving data into Elastic Cloud](https://www.elastic.co/guide/en/cloud/current/ec-cloud-ingest-data.html).
+
+
+### Maintain access to your trial projects and data [general-sign-up-trial-what-happens-at-the-end-of-the-trial]
+
+When your trial expires, the deployment and project that you created during the trial period are suspended until you subscribe to [Elastic Cloud](/deploy-manage/cloud-organization/billing/add-billing-details.md). When you subscribe, you can resume your deployment and serverless project, and regain access to the ingested data. After your trial expires, you have 30 days to subscribe. After 30 days, your deployment, serverless project, and ingested data are permanently deleted.
+
+If you’re interested in other ways to subscribe to Elastic Cloud, don’t hesitate to [contact us](https://www.elastic.co/contact).
+
+
+## How do I get help? [ec_how_do_i_get_help]
+
+We’re here to help. If you have any questions, feel free to reach out to [Support](https://cloud.elastic.co/support). 
diff --git a/deploy-manage/deploy/elastic-cloud/create-monthly-pay-as-you-go-subscription-on-aws-marketplace.md b/deploy-manage/deploy/elastic-cloud/create-monthly-pay-as-you-go-subscription-on-aws-marketplace.md index 532645aa5..0c0977367 100644 --- a/deploy-manage/deploy/elastic-cloud/create-monthly-pay-as-you-go-subscription-on-aws-marketplace.md +++ b/deploy-manage/deploy/elastic-cloud/create-monthly-pay-as-you-go-subscription-on-aws-marketplace.md @@ -1,4 +1,8 @@ --- +applies_to: + deployment: + ess: ga + serverless: ga mapped_pages: - https://www.elastic.co/guide/en/cloud/current/ec-aws-marketplace-conversion.html --- diff --git a/deploy-manage/deploy/elastic-cloud/create-monthly-pay-as-you-go-subscription-on-gcp-marketplace.md b/deploy-manage/deploy/elastic-cloud/create-monthly-pay-as-you-go-subscription-on-gcp-marketplace.md index 1569c2dd1..d04022d9b 100644 --- a/deploy-manage/deploy/elastic-cloud/create-monthly-pay-as-you-go-subscription-on-gcp-marketplace.md +++ b/deploy-manage/deploy/elastic-cloud/create-monthly-pay-as-you-go-subscription-on-gcp-marketplace.md @@ -1,4 +1,8 @@ --- +applies_to: + deployment: + ess: ga + serverless: unavailable mapped_pages: - https://www.elastic.co/guide/en/cloud/current/ec-gcp-marketplace-conversion.html --- diff --git a/deploy-manage/deploy/elastic-cloud/create-serverless-project.md b/deploy-manage/deploy/elastic-cloud/create-serverless-project.md index 3a29622d2..09b65b3f6 100644 --- a/deploy-manage/deploy/elastic-cloud/create-serverless-project.md +++ b/deploy-manage/deploy/elastic-cloud/create-serverless-project.md @@ -1,6 +1,8 @@ --- mapped_pages: - https://www.elastic.co/guide/en/serverless/current/serverless-get-started.html +applies_to: + serverless: --- # Create a serverless project [serverless-get-started] @@ -15,8 +17,7 @@ Choose the type of project that matches your needs and we’ll help you get star | | | | --- | --- | | | | -| ![elasticsearch](https://www.elastic.co/docs/assets/images/elasticsearch.png "") | Elasticsearch
Build custom search applications with Elasticsearch.

[**View guide →**](../../../solutions/search.md)
| -| ![observability](https://www.elastic.co/docs/assets/images/observability.png "") | Observability
Monitor applications and systems with Elastic Observability.

[**View guide →**](../../../solutions/observability.md)
| -| ![security](https://www.elastic.co/docs/assets/images/security.png "") | Security
Detect, investigate, and respond to threats with Elastic Security.

[**View guide →**](../../../solutions/security/elastic-security-serverless.md)
| -| | | - +| ![elasticsearch](https://www.elastic.co/docs/assets/images/elasticsearch.png "elasticsearch =50%") | **Elasticsearch**
Build custom search applications with Elasticsearch.

[**View guide →**](/solutions/search/serverless-elasticsearch-get-started.md)
| +| ![observability](https://www.elastic.co/docs/assets/images/observability.png "observability =50%") | **Observability**
Monitor applications and systems with Elastic Observability.

[**View guide →**](/solutions/observability/get-started/create-an-observability-project.md)
| +| ![security](https://www.elastic.co/docs/assets/images/security.png "security =50%") | **Security**
Detect, investigate, and respond to threats with Elastic Security.

[**View guide →**](/solutions/security/get-started/create-security-project.md)
| +| | | \ No newline at end of file diff --git a/deploy-manage/deploy/elastic-cloud/custom-endpoint-aliases.md b/deploy-manage/deploy/elastic-cloud/custom-endpoint-aliases.md index e463ea00d..ba9f9ce4d 100644 --- a/deploy-manage/deploy/elastic-cloud/custom-endpoint-aliases.md +++ b/deploy-manage/deploy/elastic-cloud/custom-endpoint-aliases.md @@ -1,4 +1,7 @@ --- +applies_to: + deployment: + ess: ga mapped_urls: - https://www.elastic.co/guide/en/cloud/current/ec-regional-deployment-aliases.html - https://www.elastic.co/guide/en/cloud-heroku/current/ech-regional-deployment-aliases.html @@ -13,9 +16,151 @@ mapped_urls: % - [ ] ./raw-migrated-files/cloud/cloud/ec-regional-deployment-aliases.md % - [ ] ./raw-migrated-files/cloud/cloud-heroku/ech-regional-deployment-aliases.md -⚠️ **This page is a work in progress.** ⚠️ -The documentation team is working to combine content pulled from the following pages: +Custom aliases for your deployment endpoints on {{ech}} allow you to have predictable, human-readable URLs that can be shared easily. An alias is unique to only one deployment within a region. + + +## Create a custom endpoint alias for a deployment [ec-create-regional-deployment-alias] + +::::{note} +New deployments are assigned a default alias derived from the deployment name. This alias can be modified later, if needed. +:::: + + +To add an alias to an existing deployment: + +1. From the **Deployments** menu, select a deployment. +2. Under **Custom endpoint alias**, select **Edit**. +3. Define a new alias. Make sure you choose something meaningful to you. + + ::::{tip} + Make the alias as unique as possible to avoid collisions. Aliases might have been already claimed by other users for deployments in the region. + :::: + +4. Select **Update alias**. + + +## Remove a custom endpoint alias [ec-delete-regional-deployment-alias] + +To remove an alias from your deployment, or if you want to re-assign an alias to another deployment, follow these steps: + +1. From the **Deployments** menu, select a deployment. +2. Under **Custom endpoint alias**, select **Edit**. +3. Remove the text from the **Custom endpoint alias** text box. +4. Select **Update alias**. + +::::{note} +After removing an alias, your organisation’s account will hold a claim on it for 30 days. After that period, other users can re-use this alias. +:::: + + + +## Using the custom endpoint URL [ec-using-regional-deployment-alias] + +To use your new custom endpoint URL to access your Elastic products, note that each has its own alias to use in place of the default application UUID. For example, if you configured the custom endpoint alias for your deployment to be `test-alias`, the corresponding alias for the Elasticsearch cluster in that deployment is `test-alias.es`. + +::::{note} +You can get the application-specific custom endpoint alias by selecting **Copy endpoint** for that product. It should contain a subdomain for each application type, for example `es`, `kb`, `apm`, or `ent`. +:::: + + + +### With the REST Client [ec-rest-regional-deployment-alias] + +* As part of the host name: + + After configuring your custom endpoint alias, select **Copy endpoint** on the deployment overview page, which gives you the fully qualified custom endpoint URL for that product. + +* As an HTTP request header: + + Alternatively, you can reach your application by passing the application-specific custom endpoint alias, for example, `test-alias.es`, as the value for the `X-Found-Cluster` HTTP header. 
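+
+For illustration, here’s a minimal `curl` sketch of the header approach. It reuses the example alias `test-alias.es` from above; the endpoint host, port, and credentials shown are placeholders, so substitute the values for your own deployment:
+
+```sh
+# Hypothetical endpoint and credentials: replace them with your own values.
+curl -u "elastic:$ELASTIC_PASSWORD" \
+  -H "X-Found-Cluster: test-alias.es" \
+  "https://<your-region-endpoint>:9243/_cluster/health"
+```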
+
+
+
+### With the `TransportClient` [ec-transport-regional-deployment-alias]
+
+While the `TransportClient` is deprecated, your custom endpoint aliases still work with it. Similar to the REST Client, there are two ways to use your custom endpoint alias with the `TransportClient`:
+
+* As part of the host name:
+
+    Similar to HTTP, you can find the fully qualified host on the deployment overview page by selecting **Copy endpoint** next to Elasticsearch. Make sure to remove the unnecessary `https://` prefix as well as the trailing HTTP port.
+
+* As part of the **Settings**:
+
+    Include the application-specific custom endpoint alias as the value for the `request.headers.X-Found-Cluster` setting in place of the `clusterId`:
+
+    ```java
+    // Build the settings for our client.
+    String alias = "test-alias.es"; // Your application-specific custom endpoint alias here
+    String region = "us-east-1"; // Your region here
+    boolean enableSsl = true;
+
+    Settings settings = Settings.settingsBuilder()
+        .put("transport.ping_schedule", "5s")
+        //.put("transport.sniff", false) // Disabled by default and *must* be disabled.
+        .put("action.bulk.compress", false)
+        .put("shield.transport.ssl", enableSsl)
+        .put("request.headers.X-Found-Cluster", alias)
+        .put("shield.user", "username:password") // your shield username and password
+        .build();
+
+    String hostname = alias + "." + region + ".aws.found.io";
+    // Instantiate a TransportClient and add the cluster to the list of addresses to connect to.
+    // Only port 9343 (SSL-encrypted) is currently supported.
+    Client client = TransportClient.builder()
+        .addPlugin(ShieldPlugin.class)
+        .settings(settings)
+        .build()
+        .addTransportAddress(new InetSocketTransportAddress(InetAddress.getByName(hostname), 9343));
+    ```
+
+
+For more information on configuring the `TransportClient`, check the {{es}} Java `TransportClient` documentation.
+
+
+## Create a custom domain with NGINX [ec-custom-domains-with-nginx]
+
+If you don’t get the level of domain customization you’re looking for by using the [custom endpoint aliases](../../../deploy-manage/deploy/elastic-cloud/custom-endpoint-aliases.md), you might consider creating a CNAME record that points to your Elastic Cloud endpoints. However, that can lead to some issues. Instead, setting up your own proxy could provide the desired level of customization.
+
+::::{important}
+The setup described in the following sections is not supported by Elastic, and if your proxy cannot connect to the endpoint, but curl can, we may not be able to help.
+::::
+
+
+
+### Avoid creating CNAMEs [ec_avoid_creating_cnames]
+
+To achieve a fully custom domain, you can add a CNAME that points to your Elastic Cloud endpoint. However, this will lead to invalid certificate errors, and moreover, may simply not work. Your Elastic Cloud endpoints already point to a proxy internal to Elastic Cloud, which may not resolve your configured CNAME in the desired way.
+
+So what should you do instead?
+
+
+### Setting up a proxy [ec_setting_up_a_proxy]
+
+Here we’ll show you an example of proxying with NGINX, but the same approach can be adapted to HAProxy or another proxy server.
+
+You need to set `proxy_pass` and `proxy_set_header`, and include the `X-Found-Cluster` header with the cluster’s UUID. You can get the cluster ID by clicking the `Copy cluster ID` link on your deployment’s main page. 
+
+```
+server {
+    listen 443 ssl;
+    server_name elasticsearch.example.com;
+
+    include /etc/nginx/tls.conf;
+
+    location / {
+        proxy_pass https://<cluster-id>.eu-west-1.aws.elastic-cloud.com/;
+        proxy_set_header X-Found-Cluster <cluster-id>;
+    }
+}
+```
+
+This should work for all of your applications, not just {{es}}. To set it up for {{kib}}, for example, you can select `Copy cluster ID` next to {{kib}} on your deployment’s main page to get the correct UUID.
+
+::::{note}
+Doing this for {{kib}} won't work with Cloud SSO.
+::::
+
+
+To configure `tls.conf` in this example, check out [https://ssl-config.mozilla.org/](https://ssl-config.mozilla.org/) for more fields.
-* [/raw-migrated-files/cloud/cloud/ec-regional-deployment-aliases.md](/raw-migrated-files/cloud/cloud/ec-regional-deployment-aliases.md)
-* [/raw-migrated-files/cloud/cloud-heroku/ech-regional-deployment-aliases.md](/raw-migrated-files/cloud/cloud-heroku/ech-regional-deployment-aliases.md) \ No newline at end of file
diff --git a/deploy-manage/deploy/elastic-cloud/differences-from-other-elasticsearch-offerings.md b/deploy-manage/deploy/elastic-cloud/differences-from-other-elasticsearch-offerings.md
index da1221ccd..d4b8d56f7 100644
--- a/deploy-manage/deploy/elastic-cloud/differences-from-other-elasticsearch-offerings.md
+++ b/deploy-manage/deploy/elastic-cloud/differences-from-other-elasticsearch-offerings.md
@@ -2,6 +2,8 @@
 navigation_title: "Serverless differences"
 mapped_pages:
   - https://www.elastic.co/guide/en/serverless/current/elasticsearch-differences.html
+applies_to:
+  serverless:
 ---
@@ -151,7 +153,7 @@ The following features are planned for future support in all {{serverless-full}}
 The following features are not available in {{es-serverless}} and are not planned for future support:
 * [Custom plugins and bundles](/deploy-manage/deploy/elastic-cloud/upload-custom-plugins-bundles.md)
-* [{{es}} for Apache Hadoop](asciidocalypse://docs/elasticsearch-hadoop/docs/reference/ingestion-tools/elasticsearch-hadoop/elasticsearch-for-apache-hadoop.md)
+* [{{es}} for Apache Hadoop](asciidocalypse://docs/elasticsearch-hadoop/docs/reference/elasticsearch-for-apache-hadoop.md)
 * [Scripted metric aggregations](asciidocalypse://docs/elasticsearch/docs/reference/data-analysis/aggregations/search-aggregations-metrics-scripted-metric-aggregation.md)
 * Managed web crawler: You can use the [self-managed web crawler](https://github.com/elastic/crawler) instead.
 * Managed Search connectors: You can use [self-managed Search connectors](asciidocalypse://docs/elasticsearch/docs/reference/ingestion-tools/search-connectors/self-managed-connectors.md) instead. \ No newline at end of file
diff --git a/deploy-manage/deploy/elastic-cloud/ec-change-hardware-profile.md b/deploy-manage/deploy/elastic-cloud/ec-change-hardware-profile.md
index 7eca1f53f..b7eb011fc 100644
--- a/deploy-manage/deploy/elastic-cloud/ec-change-hardware-profile.md
+++ b/deploy-manage/deploy/elastic-cloud/ec-change-hardware-profile.md
@@ -1,11 +1,14 @@
 ---
+applies_to:
+  deployment:
+    ess: ga
 mapped_pages:
   - https://www.elastic.co/guide/en/cloud/current/ec-change-hardware-profile.html
 ---
 
 # Change hardware profiles [ec-change-hardware-profile]
 
-Deployment [hardware profiles](ec-configure-deployment-settings.md#ec-hardware-profiles) deploy the Elastic Stack on virtual hardware. Each hardware profile has a different blend of storage, RAM, and vCPU.
+Deployment hardware profiles deploy the Elastic Stack on virtual hardware. Each hardware profile has a different blend of storage, RAM, and vCPU. 
Elastic Cloud regularly introduces new hardware profiles to provide: @@ -27,6 +30,7 @@ Note that if there’s no indication that a newer version is available, that mea :::{image} ../../../images/cloud-ec-new-hardware-profile-version.png :alt: Badge indicating new hardware profile version + :width: 50% ::: 2. Preview the changes for the new hardware profile version. @@ -115,7 +119,7 @@ Replace those values with your actual API key and deployment ID in the following "region":"gcp-us-central1", ``` -3. Check the [hardware profiles available](asciidocalypse://docs/cloud/docs/reference/cloud/cloud-hosted/ec-regions-templates-instances.md) for the region that your deployment is in and find the template ID of the deployment hardware profile you’d like to use. +3. Check the [hardware profiles available](asciidocalypse://docs/cloud/docs/reference/cloud-hosted/ec-regions-templates-instances.md) for the region that your deployment is in and find the template ID of the deployment hardware profile you’d like to use. ::::{tip} If you wish to update your hardware profile to the latest version available for that same profile, locate the template ID corresponding to the `deployment_template` you retrieved at step 2, but without the version information. For example, if your deployment’s current hardware profile is `gcp-cpu-optimized-v5`, use `gcp-cpu-optimized` as a template ID to update your deployment. @@ -143,7 +147,7 @@ Replace those values with your actual API key and deployment ID in the following ### Storage optimized [ec-profiles-storage] -Your Elasticsearch data nodes are optimized for high I/O throughput. Use this profile if you are new to Elasticsearch or don’t need to run a more specialized workload. You can find the exact storage, memory, and vCPU allotment on the [hardware details page](asciidocalypse://docs/cloud/docs/reference/cloud/cloud-hosted/hardware.md#ec-getting-started-configurations) for each cloud provider. +Your Elasticsearch data nodes are optimized for high I/O throughput. Use this profile if you are new to Elasticsearch or don’t need to run a more specialized workload. You can find the exact storage, memory, and vCPU allotment on the [hardware details page](asciidocalypse://docs/cloud/docs/reference/cloud-hosted/hardware.md#ec-getting-started-configurations) for each cloud provider. **Ideal use case** @@ -152,7 +156,7 @@ Good for most ingestion use cases with 7-10 days of data available for fast acce ### Storage optimized (dense) [ec-profiles-storage-dense] -Your Elasticsearch data nodes are optimized for high I/O throughput. You can find the exact storage, memory, and vCPU allotment on the [hardware details page](asciidocalypse://docs/cloud/docs/reference/cloud/cloud-hosted/hardware.md#ec-getting-started-configurations) for each cloud provider. +Your Elasticsearch data nodes are optimized for high I/O throughput. You can find the exact storage, memory, and vCPU allotment on the [hardware details page](asciidocalypse://docs/cloud/docs/reference/cloud-hosted/hardware.md#ec-getting-started-configurations) for each cloud provider. **Ideal use case** @@ -161,7 +165,7 @@ Ideal for ingestion use cases with more than 10 days of data available for fast ### CPU optimized [ec-profiles-compute-optimized] -This profile runs CPU-intensive workloads faster. You can find the exact storage, memory, and vCPU allotment on the [hardware details page](asciidocalypse://docs/cloud/docs/reference/cloud/cloud-hosted/hardware.md#ec-getting-started-configurations) for each cloud provider. 
+This profile runs CPU-intensive workloads faster. You can find the exact storage, memory, and vCPU allotment on the [hardware details page](asciidocalypse://docs/cloud/docs/reference/cloud-hosted/hardware.md#ec-getting-started-configurations) for each cloud provider.
 
 **Ideal use case**
 
@@ -170,7 +174,7 @@ Consider this configuration for ingestion use cases with 1-4 days of data availa
 
 ### CPU optimized (ARM) [ec-profiles-compute-optimized-arm]
 
-This profile is similar to CPU optimized profile but is powered by AWS Graviton2 instances. You can find the exact storage, memory, and vCPU allotment on the [hardware details page](asciidocalypse://docs/cloud/docs/reference/cloud/cloud-hosted/hardware.md#ec-getting-started-configurations) for each cloud provider.
+This profile is similar to the CPU optimized profile but is powered by AWS Graviton2 instances. You can find the exact storage, memory, and vCPU allotment on the [hardware details page](asciidocalypse://docs/cloud/docs/reference/cloud-hosted/hardware.md#ec-getting-started-configurations) for each cloud provider.
 
 **Ideal use case**
 
@@ -179,7 +183,7 @@ Consider this configuration for ingestion use cases with 1-4 days of data availa
 
 ### Vector search optimized (ARM) [ec-profiles-vector-search]
 
-This profile is suited for Vector search, Generative AI and Semantic search optimized workloads. You can find the exact storage, memory, and vCPU allotment on the [hardware details page](asciidocalypse://docs/cloud/docs/reference/cloud/cloud-hosted/hardware.md#ec-getting-started-configurations) for each cloud provider.
+This profile is suited for Vector search, Generative AI and Semantic search optimized workloads. You can find the exact storage, memory, and vCPU allotment on the [hardware details page](asciidocalypse://docs/cloud/docs/reference/cloud-hosted/hardware.md#ec-getting-started-configurations) for each cloud provider.
 
 **Ideal use case**
 
@@ -188,7 +192,7 @@ Optimized for applications that leverage Vector Search and/or Generative AI. Als
 
 ### General purpose [ec-profiles-general-purpose]
 
-This profile runs CPU-intensive workloads faster . You can find the exact storage, memory, and vCPU allotment on the [hardware details page](asciidocalypse://docs/cloud/docs/reference/cloud/cloud-hosted/hardware.md#ec-getting-started-configurations) for each cloud provider.
+This profile runs CPU-intensive workloads faster. You can find the exact storage, memory, and vCPU allotment on the [hardware details page](asciidocalypse://docs/cloud/docs/reference/cloud-hosted/hardware.md#ec-getting-started-configurations) for each cloud provider.
 
 **Ideal use case**
 
@@ -197,7 +201,7 @@ Suitable for ingestion use cases with 5-7 days of data available for fast access
 
 ### General purpose (ARM) [ec-profiles-general-purpose-arm]
 
-This profile is similar to the General purpose profile but is powered by AWS Graviton2 instances. You can find the exact storage, memory, and vCPU allotment on the [hardware details page](asciidocalypse://docs/cloud/docs/reference/cloud/cloud-hosted/hardware.md#ec-getting-started-configurations) for each cloud provider.
**Ideal use case** diff --git a/deploy-manage/deploy/elastic-cloud/ec-configure-deployment-settings.md b/deploy-manage/deploy/elastic-cloud/ec-configure-deployment-settings.md deleted file mode 100644 index 362f782d1..000000000 --- a/deploy-manage/deploy/elastic-cloud/ec-configure-deployment-settings.md +++ /dev/null @@ -1,67 +0,0 @@ ---- -mapped_pages: - - https://www.elastic.co/guide/en/cloud/current/ec-configure-deployment-settings.html ---- - -# What deployment settings are available? [ec-configure-deployment-settings] - -The following deployment settings are available: - - -## Cloud provider [ec_cloud_provider] - -Selects a cloud platform where your {{es}} clusters and {{kib}} instances will be hosted. Elasticsearch Service currently supports Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure. - - -## Region [ec_region] - -Regions represent data centers in a geographic location, where your deployment will be located. When choosing a region, the general rule is to choose one as close to your application servers as possible in order to minimize network delays. - -::::{tip} -You can select your cloud platform and region only when you create a new deployment, so pick ones that works for you. They cannot be changed later. Different deployments can use different platforms and regions. -:::: - - - -## Hardware profile [ec-hardware-profiles] - -Elastic Cloud deploys Elastic Stack components into a *hardware profile* which provides a unique blend of storage, memory and vCPU. This gives you more flexibility to choose the hardware profile that best fits for your use case. For example, *Compute Optimized* deploys Elasticsearch on virtual hardware that provides high [vCPU](../../monitor/monitoring-data/ec-vcpu-boost-instance.md) which can help search-heavy use cases return queries quickly. - -Under the covers, hardware profiles leverage virtualized instances from a cloud provider, such as Amazon Web Services, Google Compute Platform, and Microsoft Azure. You don’t interact with the cloud provider directly, but we do document what we use for your reference. To learn more, check [Elasticsearch Service Hardware](asciidocalypse://docs/cloud/docs/reference/cloud/cloud-hosted/hardware.md). - -The components of the Elastic Stack that we support as part of a deployment are called *instances* and include: - -* Elasticsearch data tiers and master nodes -* Machine Learning (ML) nodes -* Kibana instances -* APM and Fleet instances -* Integrations Server instances - -When you [create your deployment](create-an-elastic-cloud-hosted-deployment.md), you can choose the hardware profile that best fits your needs, and configure it with the **Advanced settings** option. Depending on the cloud provider that you select, you can adjust the size of Elasticsearch nodes, or configure your Kibana and APM & Fleet instances. As your usage evolves, you can [change the hardware profile](ec-change-hardware-profile.md) of your deployment. - -::::{note} -Elastic Agent, Beats, and Logstash are components of the Elastic Stack that are not included in the hardware profiles as they are installed outside of Elastic Cloud. -:::: - - - -## Version [ec_version] - -Elastic Stack uses a versions code that is constructed of three numbers separated by dots: the leftmost number is the number of the major release, the middle number is the number of the minor release and the rightmost number is the number of the maintenance release (e.g., 8.3.2 means major release 8, minor release 3 and maintenance release 2). 
- -You might sometimes notice additional versions listed in the user interface beyond the versions we currently support and maintain, such as [release candidate builds](available-stack-versions.md#ec-release-builds) and older versions. If a version is listed in the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body), it can be deployed. - -To learn about how we support {{es}} versions in Elasticsearch Service, check [Version Policy](available-stack-versions.md). - -You can always upgrade {{es}} versions, but you cannot downgrade. To learn more about upgrading versions of {{es}} and best practices for major version upgrades, check [Version Upgrades](../../upgrade/deployment-or-cluster.md). - - -## Snapshot source [ec_snapshot_source] - -To create a deployment from a snapshot, select the snapshot source. You need to [configure snapshots](../../tools/snapshot-and-restore.md) and establish a snapshot lifecycle management policy and repository before you can restore from a snapshot. The snapshot options depend on the stack version the deployment is running. - - -## Name [ec_name] - -This setting allows you to assign a more human-friendly name to your cluster which will be used for future reference in the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body). Common choices are dev, prod, test, or something more domain specific. - diff --git a/deploy-manage/deploy/elastic-cloud/ec-customize-deployment-components.md b/deploy-manage/deploy/elastic-cloud/ec-customize-deployment-components.md index 2293a464c..2ca6888d5 100644 --- a/deploy-manage/deploy/elastic-cloud/ec-customize-deployment-components.md +++ b/deploy-manage/deploy/elastic-cloud/ec-customize-deployment-components.md @@ -1,9 +1,13 @@ --- +applies_to: + deployment: + ess: ga mapped_pages: - https://www.elastic.co/guide/en/cloud/current/ec-customize-deployment-components.html + - https://www.elastic.co/guide/en/cloud-heroku/current/ech-customize-deployment-components.html --- -# How can I customize the components of my deployment? [ec-customize-deployment-components] +# Customize deployment components [ec-customize-deployment-components] When you create or edit an existing deployment, you can fine-tune the capacity, add extensions, and select additional features. @@ -15,7 +19,7 @@ Autoscaling reduces some of the manual effort required to manage a deployment by ## {{es}} [ec-cluster-size] -Depending upon how much data you have and what queries you plan to run, you need to select a cluster size that fits your needs. There is no silver bullet for deciding how much memory you need other than simply testing it. The [cluster performance metrics](../../monitor/stack-monitoring.md) in the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body) can tell you if your cluster is sized appropriately. You can also [enable deployment monitoring](../../monitor/stack-monitoring/elastic-cloud-stack-monitoring.md) for more detailed performance metrics. Fortunately, you can change the amount of memory allocated to the cluster later without any downtime for HA deployments. +Depending upon how much data you have and what queries you plan to run, you need to select a cluster size that fits your needs. There is no silver bullet for deciding how much memory you need other than simply testing it. 
The [cluster performance metrics](../../monitor/stack-monitoring.md) in the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body) can tell you if your cluster is sized appropriately. You can also [enable deployment monitoring](../../monitor/stack-monitoring/elastic-cloud-stack-monitoring.md) for more detailed performance metrics. Fortunately, you can change the amount of memory allocated to the cluster later without any downtime for HA deployments.
 
 To change a cluster’s topology, from deployment management, select **Edit deployment** from the **Actions** dropdown. Next, select a storage and RAM setting from the **Size per zone** drop-down list, and save your changes. When downsizing the cluster, make sure to have enough resources to handle the current load, otherwise your cluster will be under stress.
 
@@ -53,7 +57,7 @@ High availability is achieved by running a cluster with replicas in multiple dat
 Running in two data centers or availability zones is our default high availability configuration. It provides reasonably high protection against infrastructure failures and intermittent network problems. You might want three data centers if you need even higher fault tolerance. Just one zone might be sufficient, if the cluster is mainly used for testing or development.
 
 ::::{important}
-Some [regions](asciidocalypse://docs/cloud/docs/reference/cloud/cloud-hosted/regions.md) might have only two availability zones.
+Some [regions](asciidocalypse://docs/cloud/docs/reference/cloud-hosted/regions.md) might have only two availability zones.
 ::::
 
@@ -66,7 +70,7 @@ The node capacity you choose is per data center. The reason for this is that the
 
 ## Sharding [ec_sharding]
 
-You can review your {{es}} shard activity from Elasticsearch Service. At the bottom of the {{es}} page, you can hover over each part of the shard visualization for specific numbers.
+You can review your {{es}} shard activity from the {{ecloud}} Console. When viewing a hosted deployment’s details, at the bottom of the {{es}} page, you can hover over each part of the shard visualization for specific numbers.
 
 :::{image} ../../../images/cloud-ec-shard-activity.gif
 :alt: Shard activity
@@ -82,7 +86,7 @@ Here, you can configure user settings, extensions, and system settings (older v
 
 ### User settings [ec-user-settings]
 
-Set specific configuration parameters to change how {{es}} and other Elastic products run. User settings are appended to the appropriate YAML configuration file, but not all settings are supported in Elasticsearch Service.
+Set specific configuration parameters to change how {{es}} and other Elastic products run. User settings are appended to the appropriate YAML configuration file, but not all settings are supported in {{ech}} deployments.
 
 For more information, refer to [Edit your user settings](edit-stack-settings.md).
 
diff --git a/deploy-manage/deploy/elastic-cloud/ec-customize-deployment.md b/deploy-manage/deploy/elastic-cloud/ec-customize-deployment.md
deleted file mode 100644
index 09b813ff5..000000000
--- a/deploy-manage/deploy/elastic-cloud/ec-customize-deployment.md
+++ /dev/null
@@ -1,63 +0,0 @@
----
-mapped_pages:
-  - https://www.elastic.co/guide/en/cloud/current/ec-customize-deployment.html
----
-
-# Change your configuration [ec-customize-deployment]
-
-You might want to change the configuration of your deployment to:
-
-* Add features, such as machine learning or APM (application performance monitoring). 
-* Increase or decrease capacity by changing the amount of reserved memory and storage for different parts of your deployment. - - ::::{note} - During the free trial, Elasticsearch Service deployments are restricted to a limited size. You can increase the size of your deployments when your trial is converted to a paid subscription. - :::: - -* Enable [autoscaling](../../autoscaling.md) so that the available resources for deployment components, such as data tiers and machine learning nodes, adjust automatically as the demands on them change over time. -* Enable high availability, also known as fault tolerance, by adjusting the number of data center availability zones that parts of your deployment run on. -* Upgrade to new versions of {{es}}. You can upgrade from one major version to another, such as from 6.8.23 to 7.17.27, or from one minor version to another, such as 6.1 to 6.2. You can’t downgrade versions. -* Change what plugins are available on your {{es}} cluster. - -With the exception of major version upgrades for Elastic Stack products, Elasticsearch Service can perform configuration changes without having to interrupt your deployment. You can continue searching and indexing. The changes can also be done in bulk. For example: in one action, you can add more memory, upgrade, adjust the number of {{es}} plugins and adjust the number of availability zones. - -We perform all of these changes by creating instances with the new configurations that join your existing deployment before removing the old ones. For example: if you are changing your {{es}} cluster configuration, we create new {{es}} nodes, recover your indexes, and start routing requests to the new nodes. Only when all new {{es}} nodes are ready, do we bring down the old ones. - -By doing it this way, we reduce the risk of making configuration changes. If any of the new instances have a problems, the old ones are still there, processing requests. - -::::{note} -If you use a Platform-as-a-Service provider like Heroku, the administration console is slightly different and does not allow you to make changes that will affect the price. That must be done in the platform provider’s add-on system. You can still do things like change {{es}} version or plugins. -:::: - - -To change your deployment: - -1. Log in to the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body). -2. Find your deployment on the home page in the Elasticsearch Service card and select **Manage** to access it directly. Or, select **Hosted deployments** to go to the deployments page to view all of your deployments. - - On the deployments page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. - -3. From the deployment menu, select **Edit**. -4. Let the user interface guide you through the cluster configuration for your cluster. For a full list of the supported settings, check [What Deployment Settings Are Available?](ec-configure-deployment-settings.md) - - If you are changing an existing deployment, you can make multiple changes to your {{es}} cluster with a single configuration update, such as changing the capacity and upgrading to a new {{es}} version in one step. - -5. Save your changes. The new configuration takes a few moments to create. - -Review the changes to your configuration on the **Activity** page, with a tab for {{es}} and one for {{kib}}. 
- -::::{tip} -If you are creating a new deployment, select **Edit settings** to change the cloud provider, region, hardware profile, and stack version; or select **Advanced settings** for more complex configuration settings. -:::: - - -That’s it! If you haven’t already, [start exploring with {{kib}}](access-kibana.md), our visualization tool. If you’re not familiar with adding data yet, {{kib}} can show you how to index your data into {{es}}, or try our basic steps for working with [{{es}}](../../../manage-data/data-store/manage-data-from-the-command-line.md). - -::::{tip} -Some features are not available during the 14-day free trial. If a feature is greyed out, [add a credit card](../../cloud-organization/billing/add-billing-details.md) to unlock the feature. -:::: - - - - - diff --git a/deploy-manage/deploy/elastic-cloud/ech-api-console.md b/deploy-manage/deploy/elastic-cloud/ech-api-console.md index 55cffbc51..c0aae3183 100644 --- a/deploy-manage/deploy/elastic-cloud/ech-api-console.md +++ b/deploy-manage/deploy/elastic-cloud/ech-api-console.md @@ -15,7 +15,7 @@ API console is intended for admin purposes. Avoid running normal workload like i You are unable to make Elasticsearch Add-On for Heroku platform changes from the Elasticsearch API. 1. Log in to the [Elasticsearch Add-On for Heroku console](https://cloud.elastic.co?page=docs&placement=docs-body). -2. On the deployments page, select your deployment. +2. On the **Deployments** page, select your deployment. Narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. diff --git a/deploy-manage/deploy/elastic-cloud/ech-aws-instance-configuration.md b/deploy-manage/deploy/elastic-cloud/ech-aws-instance-configuration.md index cf00dca29..42df8709f 100644 --- a/deploy-manage/deploy/elastic-cloud/ech-aws-instance-configuration.md +++ b/deploy-manage/deploy/elastic-cloud/ech-aws-instance-configuration.md @@ -7,7 +7,7 @@ mapped_pages: Amazon EC2 (AWS) C6gd, M6gd & R6gd instances, powered by AWS Graviton2, are now available for Elastic Cloud deployments. C6gd, M6gd & R6gd VMs use the [Graviton2, ARM neoverse N1 cores](https://aws.amazon.com/about-aws/whats-new/2020/07/announcing-new-amazon-ec2-instances-powered-aws-graviton2-processors/) and provide high compute coupled with fast NVMe storage, which makes them a good fit to power Elastic workloads. In addition, Graviton2 VMs also offer more than a 20% improvement in price-performance over comparable Intel chipsets. -In addition to AWS Graviton2 instances, Amazon EC2 (AWS) C5d, M5d, I3, I3en, and D2/D3 instances are now available for Elastic Cloud deployments in all supported [AWS Cloud Regions](asciidocalypse://docs/cloud/docs/reference/cloud/cloud-hosted/ec-regions-templates-instances.md#ec-aws_regions). +In addition to AWS Graviton2 instances, Amazon EC2 (AWS) C5d, M5d, I3, I3en, and D2/D3 instances are now available for Elastic Cloud deployments in all supported [AWS Cloud Regions](asciidocalypse://docs/cloud/docs/reference/cloud-hosted/ec-regions-templates-instances.md#ec-aws_regions). For specific AWS hardware and availability details, check the [Regional availability of instances per AWS region](ech-default-aws-configurations.md#aws-list-region) and the [AWS default provider instance configurations](ech-default-aws-configurations.md). 
@@ -41,7 +41,7 @@ The new configuration naming convention aligns with the [data tiers](/manage-dat | aws.es.datawarm.i3en, aws.es.datacold.i3en | These configurations maintain the same type of VM configuration as used in the previous config (“aws.data.highstorage.i3en”) but will have a new name (and billing SKU) that is consistent with the new naming. | | aws.es.datafrozen.i3en | This configuration maintains the same type of VM configuration as defined for (“aws.es.datacold.i3en”) config. | -For a detailed price list, check the [Elastic Cloud price list](https://cloud.elastic.co/deployment-pricing-table?provider=aws). For a detailed specification of the new configurations, check [Elasticsearch Service default provider instance configurations](ech-default-aws-configurations.md). +For a detailed price list, check the [Elastic Cloud price list](https://cloud.elastic.co/deployment-pricing-table?provider=aws). For a detailed specification of the new configurations, check [{{ecloud}} default provider instance configurations](ech-default-aws-configurations.md). The benefits of the new configurations are multifold: diff --git a/deploy-manage/deploy/elastic-cloud/ech-azure-instance-configuration.md b/deploy-manage/deploy/elastic-cloud/ech-azure-instance-configuration.md index e39341a96..4fb3d5d18 100644 --- a/deploy-manage/deploy/elastic-cloud/ech-azure-instance-configuration.md +++ b/deploy-manage/deploy/elastic-cloud/ech-azure-instance-configuration.md @@ -5,7 +5,7 @@ mapped_pages: # Elasticsearch Add-On for Heroku Azure instance configurations [ech-azure-instance-configuration] -Azure [Ddv4](https://docs.microsoft.com/en-us/azure/virtual-machines/ddv4-ddsv4-series/), [Edsv4](https://docs.microsoft.com/en-us/azure/virtual-machines/edv4-edsv4-series/), [Fsv2](https://docs.microsoft.com/en-us/azure/virtual-machines/fsv2-series/), and [Lsv3](https://docs.microsoft.com/en-us/azure/virtual-machines/lsv3-series/) virtual machines (VM) are now available for Elastic Cloud deployments in all supported [Azure Cloud regions](asciidocalypse://docs/cloud/docs/reference/cloud/cloud-hosted/ec-regions-templates-instances.md#ec-azure_regions). These VMs provide additional combinations of compute, memory, and disk configurations to better fit your use-cases to optimize performance and cost. +Azure [Ddv4](https://docs.microsoft.com/en-us/azure/virtual-machines/ddv4-ddsv4-series/), [Edsv4](https://docs.microsoft.com/en-us/azure/virtual-machines/edv4-edsv4-series/), [Fsv2](https://docs.microsoft.com/en-us/azure/virtual-machines/fsv2-series/), and [Lsv3](https://docs.microsoft.com/en-us/azure/virtual-machines/lsv3-series/) virtual machines (VM) are now available for Elastic Cloud deployments in all supported [Azure Cloud regions](asciidocalypse://docs/cloud/docs/reference/cloud-hosted/ec-regions-templates-instances.md#ec-azure_regions). These VMs provide additional combinations of compute, memory, and disk configurations to better fit your use-cases to optimize performance and cost. To learn about the Azure specific configurations, check: @@ -37,7 +37,7 @@ The new configuration naming convention aligns with the [data tiers](/manage-dat | azure.es.datawarm.edsv4, azure.es.datacold.edsv4 | This is a new configuration that replaces “azure.data.highstorage.e16sv3” config but provides more disk space. | | azure.es.datafrozen.edsv4 | This is a new configuration that replaces “azure.es.datafrozen.lsv2” or “azure.es.datafrozen.esv3” config but provides more disk space. 
| -For a detailed price list, check the [Elastic Cloud price list](https://cloud.elastic.co/deployment-pricing-table?provider=azure). For a detailed specification of the new configurations, check [Elasticsearch Service default Azure instance configurations](ech-default-azure-configurations.md). +For a detailed price list, check the [Elastic Cloud price list](https://cloud.elastic.co/deployment-pricing-table?provider=azure). For a detailed specification of the new configurations, check [{{ecloud}} default Azure instance configurations](ech-default-azure-configurations.md). The benefits of the new configurations are multifold: diff --git a/deploy-manage/deploy/elastic-cloud/ech-configure-deployment-settings.md b/deploy-manage/deploy/elastic-cloud/ech-configure-deployment-settings.md deleted file mode 100644 index 7ea48596a..000000000 --- a/deploy-manage/deploy/elastic-cloud/ech-configure-deployment-settings.md +++ /dev/null @@ -1,45 +0,0 @@ ---- -mapped_pages: - - https://www.elastic.co/guide/en/cloud-heroku/current/ech-configure-deployment-settings.html ---- - -# What deployment settings are available? [ech-configure-deployment-settings] - -The following deployment settings are available: - - -## Cloud provider [echcloud_provider] - -Selects a cloud platform where your {{es}} clusters, {{kib}} instance, and other {{stack}} components will be hosted. Elasticsearch Add-On for Heroku currently supports Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure. - - -## Region [echregion] - -Regions represent data centers in a geographic location, where your deployment will be located. When choosing a region, the general rule is to choose one as close to your application servers as possible in order to minimize network delays. - -::::{tip} -You can select your cloud platform and region only when you create a new deployment, so pick ones that works for you. They cannot be changed later. Different deployments can use different platforms and regions. -:::: - - - -## Version [echversion] - -Elastic Stack uses a versions code that is constructed of three numbers separated by dots: the leftmost number is the number of the major release, the middle number is the number of the minor release and the rightmost number is the number of the maintenance release (e.g., 8.3.2 means major release 8, minor release 3 and maintenance release 2). - -You might sometimes notice additional versions listed in the user interface beyond the versions we currently support and maintain, such as [release candidate builds](ech-version-policy.md#ech-release-builds) and older versions. If a version is listed in the [Elasticsearch Add-On for Heroku console](https://cloud.elastic.co?page=docs&placement=docs-body), it can be deployed. - -To learn about how we support {{es}} versions in Elasticsearch Add-On for Heroku, check [Version Policy](ech-version-policy.md). - -You can always upgrade {{es}} versions, but you cannot downgrade. To learn more about upgrading versions of {{es}} and best practices for major version upgrades, check [Version Upgrades](../../upgrade/deployment-or-cluster.md). - - -## Snapshot source [echsnapshot_source] - -To create a deployment from a snapshot, select the snapshot source. You need to [configure snapshots](../../tools/snapshot-and-restore.md) and establish a snapshot lifecycle management policy and repository before you can restore from a snapshot. The snapshot options depend on the stack version the deployment is running. 
- - -## Name [echname] - -This setting allows you to assign a more human-friendly name to your cluster which will be used for future reference in the [Elasticsearch Add-On for Heroku console](https://cloud.elastic.co?page=docs&placement=docs-body). Common choices are dev, prod, test, or something more domain specific. - diff --git a/deploy-manage/deploy/elastic-cloud/ech-configure-settings.md b/deploy-manage/deploy/elastic-cloud/ech-configure-settings.md deleted file mode 100644 index 6f932bfe3..000000000 --- a/deploy-manage/deploy/elastic-cloud/ech-configure-settings.md +++ /dev/null @@ -1,52 +0,0 @@ ---- -mapped_pages: - - https://www.elastic.co/guide/en/cloud-heroku/current/ech-configure-settings.html ---- - -# Configure your deployment [ech-configure-settings] - -You might want to change the configuration of your deployment to: - -* Add features, such as machine learning or APM (application performance monitoring). -* Increase or decrease capacity by changing the amount of reserved memory and storage for different parts of your deployment. -* Enable [autoscaling](../../autoscaling.md) so that the available resources for deployment components, such as data tiers and machine learning nodes, adjust automatically as the demands on them change over time. -* Enable high availability by adjusting the number of availability zones that parts of your deployment run on. -* Upgrade to new versions of {{es}}. You can upgrade from one major version to another, such as from 7.17.27 to 8.17.1, or from one minor version to another, such as 8.6 to 8.7. You can’t downgrade versions. -* Change what plugins are available on your {{es}} cluster. - -::::{note} -During the free trial, {{ess}} deployments are restricted to a fixed size. You can resize your deployments when your trial is converted into a paid subscription. -:::: - - -You can change the configuration of a running deployment from the **Configuration** pane in the [Elasticsearch Add-On for Heroku console](https://cloud.elastic.co?page=docs&placement=docs-body). - -With the exception of major version upgrades for Elastic Stack products, Elasticsearch Add-On for Heroku can perform configuration changes without having to interrupt your deployment. You can continue searching and indexing. The changes can also be done in bulk. For example: in one action you can add more memory, upgrade, adjust the number of {{es}} plugins and adjust the number of availability zones. - -We perform all of these changes by creating instances with the new configurations that join your existing deployment before removing the old ones. For example: if you are changing your {{es}} cluster configuration, we create new {{es}} nodes, recover your indexes, and start routing requests to the new nodes. Only when all new {{es}} nodes are ready, do we bring down the old ones. - -By doing it this way, we reduce the risk of making configuration changes. If any of the new instances have a problems, the old ones are still there, processing requests. - -::::{note} -If you use a Platform-as-a-Service provider like Heroku, the administration console is slightly different and does not allow you to make changes that will affect the price. That must be done in the platform provider’s add-on system. You can still do things like change {{es}} version or plugins. -:::: - - -To change the {{es}} cluster in your deployment: - -1. Log in to the [Elasticsearch Add-On for Heroku console](https://cloud.elastic.co?page=docs&placement=docs-body). -2. On the deployments page, select your deployment. 
- - Narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. - -3. From your deployment menu, select **{{es}}** and then **Edit**. -4. Let the user interface guide you through the cluster configuration for your cluster. For a full list of the supported settings, check [What Deployment Settings Are Available?](ech-configure-deployment-settings.md) - - If you are changing an existing deployment, you can make multiple changes to your {{es}} cluster with a single configuration update, such as changing the capacity and upgrading to a new {{es}} version in one step. - -5. Save your changes. The new configuration takes a few moments to create. - -Review the changes to your configuration on the **Activity** page, with a tab for {{es}} and one for {{kib}}. - - - diff --git a/deploy-manage/deploy/elastic-cloud/ech-customize-deployment-components.md b/deploy-manage/deploy/elastic-cloud/ech-customize-deployment-components.md deleted file mode 100644 index b734038a9..000000000 --- a/deploy-manage/deploy/elastic-cloud/ech-customize-deployment-components.md +++ /dev/null @@ -1,135 +0,0 @@ ---- -mapped_pages: - - https://www.elastic.co/guide/en/cloud-heroku/current/ech-customize-deployment-components.html ---- - -# How can I customize the components of my deployment? [ech-customize-deployment-components] - -When you create or edit an existing deployment, you can fine-tune the capacity, add extensions, and select additional features. - - -### Autoscaling [ech-customize-autoscaling] - -Autoscaling reduces some of the manual effort required to manage a deployment by adjusting the capacity as demands on the deployment change. Currently, autoscaling is supported to scale {{es}} data tiers upwards, and to scale machine learning nodes both upwards and downwards. Check [Deployment autoscaling](../../autoscaling.md) to learn more. - - -### {{es}} [ech-cluster-size] - -Depending upon how much data you have and what queries you plan to run, you need to select a cluster size that fits your needs. There is no silver bullet for deciding how much memory you need other than simply testing it. The [cluster performance metrics](../../monitor/stack-monitoring/elastic-cloud-stack-monitoring.md) in the [Elasticsearch Add-On for Heroku console](https://cloud.elastic.co?page=docs&placement=docs-body) can tell you if your cluster is sized appropriately. You can also [enable deployment monitoring](../../monitor/stack-monitoring/elastic-cloud-stack-monitoring.md) for more detailed performance metrics. Fortunately, you can change the amount of memory allocated to the cluster later without any downtime for HA deployments. - -To change a cluster’s topology, from deployment management, select **Edit deployment** from the **Actions** dropdown. Next, select a storage and RAM setting from the **Size per zone** drop-down list, and save your changes. When downsizing the cluster, make sure to have enough resources to handle the current load, otherwise your cluster will be under stress. - -:::{image} ../../../images/cloud-heroku-ec-capacity.png -:alt: Capacity slider to adjust {{es}} cluster size -::: - -Currently, half the memory is assigned to the JVM heap (a bit less when monitoring is activated). For example, on a 32 GB cluster, 16 GB are allotted to heap. The disk-to-RAM ratio currently is 1:24, meaning that you get 24 GB of storage space for each 1 GB of RAM. All clusters are backed by SSD drives. 
- -::::{tip} -For production systems, we recommend not using less than 4 GB of RAM for your cluster, which assigns 2 GB to the JVM heap. -:::: - - -The CPU resources assigned to a cluster are relative to the size of your cluster, meaning that a 32 GB cluster gets twice as much CPU resources as a 16 GB cluster. All clusters are guaranteed their share of CPU resources, as we do not overcommit resources. Smaller clusters up to and including 8 GB of RAM benefit from temporary CPU boosting to improve performance when needed most. - - -### Fault tolerance [ech-high-availability] - -High availability is achieved by running a cluster with replicas in multiple data centers (availability zones), to prevent against downtime when infrastructure problems occur or when resizing or upgrading deployments. We offer the options of running in one, two, or three data centers. - -:::{image} ../../../images/cloud-heroku-ec-fault-tolerance.png -:alt: High availability features -::: - -Running in two data centers or availability zones is our default high availability configuration. It provides reasonably high protection against infrastructure failures and intermittent network problems. You might want three data centers if you need even higher fault tolerance. Just one zone might be sufficient, if the cluster is mainly used for testing or development. - -::::{important} -Some [regions](ech-reference-regions.md) might have only two availability zones. -:::: - - -Like many other changes, you change the level of fault tolerance while the cluster is running. For example, when you prepare a new cluster for production use, you can first run it in a single data center and then add another data center right before deploying to production. - -While multiple data centers or availability zones increase a cluster’s fault tolerance, they do not protect against problematic searches that cause nodes to run out of memory. For a cluster to be highly reliable and available, it is also important to [have enough memory](../../../troubleshoot/monitoring/high-memory-pressure.md). - -The node capacity you choose is per data center. The reason for this is that there is no point in having two data centers if the failure of one will result in a cascading error because the remaining data center cannot handle the total load. Through the allocation awareness in {{es}}, we configure the nodes so that your {{es}} cluster will automatically allocate replicas between each availability zone. - - -### Sharding [echsharding] - -You can get an at-a-glance status of all the shards in the deployment on the **{{es}}** page. - -:::{image} ../../../images/cloud-heroku-ec-shard-activity.gif -:alt: Shard activity -::: - -We recommend that you read [Size your shards](../../production-guidance/optimize-performance/size-shards.md) before you change the number of shards. - -## Manage user settings and extensions [echmanage_user_settings_and_extensions] - -Here, you can configure user settings, extensions, and system settings (older versions only). - -### User settings [ech-user-settings] - -Set specific configuration parameters to change how {{es}} and other Elastic products run. User settings are appended to the appropriate YAML configuration file, but not all settings are supported in Elasticsearch Add-On for Heroku. - -For more information, refer to [Edit your user settings](edit-stack-settings.md). 
- - -#### Extensions [echextensions] - -Lists the official plugins available for your selected {{es}} version, as well as any custom plugins and user bundles with dictionaries or scripts. - -The reason we do not list the version chosen on this page is because we reserve the option to change it when necessary. That said, we will not force a cluster restart for a simple plugin upgrade unless there are severe issues with the current version. In most cases, plugin upgrades are applied lazily, in other words when something else forces a restart like you changing the plan or {{es}} runs out of memory. - -::::{tip} -Only Gold and Platinum subscriptions have access to uploading custom plugins. All subscription levels, including Standard, can upload scripts and dictionaries. -:::: - - - -### {{kib}} [echkib] - -A {{kib}} instance is created automatically as part of every deployment. - -::::{tip} -If you use a version before 5.0 or if your deployment didn’t include a {{kib}} instance initially, there might not be a {{kib}} endpoint URL shown, yet. To enable {{kib}}, select **Enable**. Enabling {{kib}} provides you with an endpoint URL, where you can access {{kib}}. It can take a short while to provision {{kib}} right after you select **Enable**, so if you get an error message when you first access the endpoint URL, try again. -:::: - - -Selecting **Open** will log you in to {{kib}} using single sign-on (SSO). For versions older than 7.9.2, you need to log in to {{kib}} with the `elastic` superuser. The password was provided when you created your deployment or [can be reset](../../users-roles/cluster-or-deployment-auth/built-in-users.md). - -In production systems, you might need to control what {{es}} data users can access through {{kib}}. Refer to [Securing your deployment](../../security.md) to learn more. - - -### {{integrations-server}} [echintegrations_server] - -{{integrations-server}} connects observability and security data from Elastic Agents and APM to Elasticsearch. An {{integrations-server}} instance is created automatically as part of every deployment. - - -### Security [echsecurity] - -Here, you can configure features that keep your deployment secure: reset the password for the `elastic` user, set up traffic filters, and add settings to the {{es}} keystore. You can also set up remote connections to other deployments. - - -### Maintenance mode [ech-maintenance-mode] - -In maintenance mode, requests to your cluster are blocked during configuration changes. You use maintenance mode to perform corrective actions that might otherwise be difficult to complete. Maintenance mode lasts for the duration of a configuration change and is turned off after the change completes. - -We strongly recommend that you use maintenance mode when your cluster is overwhelmed by requests and you need to increase capacity. If your cluster is being overwhelmed because it is undersized for its workload, nodes might not respond to efforts to resize. Putting the cluster into maintenance mode as part of the configuration change can stop the cluster from becoming completely unresponsive during the configuration change, so that you can resolve the capacity issue. Without this option, configuration changes for clusters that are overwhelmed can take longer and are more likely to fail. 
- - -### Actions [echactions] - -There are a few actions you can perform from the **Actions** dropdown: - -* Restart {{es}} - Needed only rarely, but full cluster restarts can help with a suspected operational issue before reaching out to Elastic for help. -* Delete your deployment - For deployment that you no longer need and don’t want to be charged for any longer. Deleting a deployment removes the {{es}} cluster and all your data permanently. - -::::{important} -Use these actions with care. Deployments are not available while they restart and deleting a deployment does really remove the {{es}} cluster and all your data permanently. -:::: - - - - diff --git a/deploy-manage/deploy/elastic-cloud/ech-default-aws-configurations.md b/deploy-manage/deploy/elastic-cloud/ech-default-aws-configurations.md index ade66cfc9..7d010e452 100644 --- a/deploy-manage/deploy/elastic-cloud/ech-default-aws-configurations.md +++ b/deploy-manage/deploy/elastic-cloud/ech-default-aws-configurations.md @@ -5,7 +5,7 @@ mapped_pages: # Elasticsearch Add-On for Heroku AWS default provider instance configurations [ech-default-aws-configurations] -Following are the preferred instance types / machine configurations, storage types, disk to memory ratios, and virtual CPU to RAM ratios for all instance configurations available on {{ess}} and provided by AWS. +Following are the preferred instance types / machine configurations, storage types, disk to memory ratios, and virtual CPU to RAM ratios for all instance configurations available on {{ech}} and provided by AWS. | Instance configuration | Preferred Instance Type or Machine Configuration1 | Storage Type1 | Disk:Memory Ratio2 | vCPU/RAM Ratio | | --- | --- | --- | --- | --- | @@ -27,7 +27,7 @@ Following are the preferred instance types / machine configurations, storage typ ## Additional instances [ech-aws-additional-instances] -Following are the preferred instance type / configuration and virtual CPU to RAM ratios for additional instance configurations available on {{ess}} and provided by AWS. +Following are the preferred instance type / configuration and virtual CPU to RAM ratios for additional instance configurations available on {{ech}} and provided by AWS. | Instance configuration | Preferred Instance Type or Machine Configuration1 | vCPU/RAM Ratio | | --- | --- | --- | diff --git a/deploy-manage/deploy/elastic-cloud/ech-default-azure-configurations.md b/deploy-manage/deploy/elastic-cloud/ech-default-azure-configurations.md index 0c841b2a9..fc9c5755f 100644 --- a/deploy-manage/deploy/elastic-cloud/ech-default-azure-configurations.md +++ b/deploy-manage/deploy/elastic-cloud/ech-default-azure-configurations.md @@ -5,7 +5,7 @@ mapped_pages: # Elasticsearch Add-On for Heroku Azure default provider instance configurations [ech-default-azure-configurations] -Following are the preferred instance types / machine configurations, storage types, disk to memory ratios, and virtual CPU to RAM ratios for all instance configurations available on {{ess}} and provided by Azure. +Following are the preferred instance types / machine configurations, storage types, disk to memory ratios, and virtual CPU to RAM ratios for all instance configurations available on {{ech}} and provided by Azure. 
| Instance configuration | Preferred Instance Type or Machine Configuration1 | Storage Type1 | Disk:Memory Ratio2 | vCPU/RAM Ratio | | --- | --- | --- | --- | --- | @@ -20,7 +20,7 @@ Following are the preferred instance types / machine configurations, storage typ ## Additional instances [ech-additional-instances] -Following are the preferred instance type / configuration and virtual CPU to RAM ratios for additional instance configurations available on {{ess}} and provided by Azure. +Following are the preferred instance type / configuration and virtual CPU to RAM ratios for additional instance configurations available on {{ech}} and provided by Azure. | Instance configuration | Preferred Instance Type or Machine Configuration1 | vCPU/RAM Ratio | | --- | --- | --- | diff --git a/deploy-manage/deploy/elastic-cloud/ech-default-gcp-configurations.md b/deploy-manage/deploy/elastic-cloud/ech-default-gcp-configurations.md index c6433a3db..44eb14bd4 100644 --- a/deploy-manage/deploy/elastic-cloud/ech-default-gcp-configurations.md +++ b/deploy-manage/deploy/elastic-cloud/ech-default-gcp-configurations.md @@ -5,7 +5,7 @@ mapped_pages: # Elasticsearch Add-On for Heroku GCP default provider instance configurations [ech-default-gcp-configurations] -Following are the preferred instance types / machine configurations, storage types, disk to memory ratios, and virtual CPU to RAM ratios for all instance configurations available on {{ess}} and provided by GCP. +Following are the preferred instance types / machine configurations, storage types, disk to memory ratios, and virtual CPU to RAM ratios for all instance configurations available on {{ech}} and provided by GCP. | Instance configuration | Preferred Instance Type or Machine Configuration1 | Storage Type1 | Disk:Memory Ratio2 | vCPU/RAM Ratio | | --- | --- | --- | --- | --- | @@ -21,7 +21,7 @@ Following are the preferred instance types / machine configurations, storage typ ## Additional instances [ech-gcp-additional-instances] -Following are the preferred instance configuration and virtual CPU to RAM ratios for additional instance configurations available on {{ess}} and provided by GCP. +Following are the preferred instance configuration and virtual CPU to RAM ratios for additional instance configurations available on {{ech}} and provided by GCP. | Instance configuration | Preferred Instance Type or Machine Configuration1 | vCPU/RAM Ratio | | --- | --- | --- | diff --git a/deploy-manage/deploy/elastic-cloud/ech-gcp-instance-configuration.md b/deploy-manage/deploy/elastic-cloud/ech-gcp-instance-configuration.md index 71d37d5ab..356cccd17 100644 --- a/deploy-manage/deploy/elastic-cloud/ech-gcp-instance-configuration.md +++ b/deploy-manage/deploy/elastic-cloud/ech-gcp-instance-configuration.md @@ -5,7 +5,7 @@ mapped_pages: # Elasticsearch Add-On for Heroku GCP instance configurations [ech-gcp-instance-configuration] -Google Compute Engine (GCE) N2 general purpose VM types are now available for Elastic Cloud deployments in all supported [Google Cloud regions](asciidocalypse://docs/cloud/docs/reference/cloud/cloud-hosted/ec-regions-templates-instances.md#ec-gcp_regions). [N2](https://cloud.google.com/compute/docs/machine-types) VMs have a better mix of vCPU, RAM, and internal disk, and are up to 50% more cost effective when compared to N1 VM types. In addition to N2, we also provide N2D VMs across the Google Cloud regions. 
+Google Compute Engine (GCE) N2 general purpose VM types are now available for Elastic Cloud deployments in all supported [Google Cloud regions](asciidocalypse://docs/cloud/docs/reference/cloud-hosted/ec-regions-templates-instances.md#ec-gcp_regions). [N2](https://cloud.google.com/compute/docs/machine-types) VMs have a better mix of vCPU, RAM, and internal disk, and are up to 50% more cost effective when compared to N1 VM types. In addition to N2, we also provide N2D VMs across the Google Cloud regions. To learn about the GCE specific configurations, check: @@ -39,7 +39,7 @@ The new configuration naming convention aligns with the [data tiers](/manage-dat | gcp.es.datawarm.n2.68x10x190, gcp.es.datacold.n2.68x10x190 | These configurations replace “highstorage”, which is based on N1 with 1:160 RAM:disk and similar RAM:CPU ratios. | | gcp.es.datafrozen.n2.68x10x95 | This configuration replaces the (short lived) gcp.es.datafrozen.n2d.64x8x95 configuration we used for the frozen cache tier. n2d was based on the AMC epyc processor but we found that the Intel-based configuration provides a slightly better cost/performance ratio. We also tweaked the RAM/CPU ratios to align to other configurations and benchmarks. | -For a detailed price list, check the [Elastic Cloud deployment pricing table](https://cloud.elastic.co/deployment-pricing-table?provider=gcp). For a detailed specification of the new configurations, check [Elasticsearch Service default GCP instance configurations](ech-default-gcp-configurations.md). +For a detailed price list, check the [Elastic Cloud deployment pricing table](https://cloud.elastic.co/deployment-pricing-table?provider=gcp). For a detailed specification of the new configurations, check [{{ecloud}} default GCP instance configurations](ech-default-gcp-configurations.md). The benefits of the new configurations are multifold: diff --git a/deploy-manage/deploy/elastic-cloud/ech-get-help.md b/deploy-manage/deploy/elastic-cloud/ech-get-help.md index 2b52a94fb..3c359d906 100644 --- a/deploy-manage/deploy/elastic-cloud/ech-get-help.md +++ b/deploy-manage/deploy/elastic-cloud/ech-get-help.md @@ -46,7 +46,7 @@ Elasticsearch Add-On for Heroku Gold and Platinum subscriptions : Support is handled by email or through the Elastic Support Portal. Provides guaranteed response times for support issues, better support coverage hours, and support contacts at Elastic. Also includes support for how-to and development questions. The exact support coverage depends on whether you are a Gold or Platinum customer. To learn more, check [Elasticsearch Add-On for Heroku Premium Support Services Policy](https://www.elastic.co/legal/support_policy/cloud_premium). ::::{note} -If you are in free trial, you are also eligible to get the Elasticsearch Service Standard level support for as long as the trial is active. +If you are in free trial, you are also eligible to get the {{ecloud}} Standard level support for as long as the trial is active. 
:::: diff --git a/deploy-manage/deploy/elastic-cloud/ech-getting-started.md b/deploy-manage/deploy/elastic-cloud/ech-getting-started.md deleted file mode 100644 index 32a33ed12..000000000 --- a/deploy-manage/deploy/elastic-cloud/ech-getting-started.md +++ /dev/null @@ -1,16 +0,0 @@ ---- -mapped_pages: - - https://www.elastic.co/guide/en/cloud-heroku/current/ech-getting-started.html - - https://www.elastic.co/guide/en/cloud-heroku/current/index.html ---- - -# Introducing Elasticsearch Add-On for Heroku [ech-getting-started] - -This documentation applies to Heroku users who want to make use of the Elasticsearch Add-On for Heroku that is available from the [Heroku Dashboard](https://dashboard.heroku.com/) or that can be installed from the CLI. - -The add-on runs on the Elasticsearch Service and provides access to [Elasticsearch](https://www.elastic.co/products/elasticsearch), the open source, distributed, RESTful search engine. Many other features of the Elastic Stack are also readily available to Heroku users through the [Elasticsearch Add-On for Heroku console](https://cloud.elastic.co?page=docs&placement=docs-body) after you install the add-on. For example, you can use Kibana to visualize your Elasticsearch data. - -[Elasticsearch Machine Learning](/explore-analyze/machine-learning.md), [Elastic APM](/solutions/observability/apps/application-performance-monitoring-apm.md) and [Elastic Fleet Server](asciidocalypse://docs/docs-content/docs/reference/ingestion-tools/fleet/index.md) are not supported by the Elasticsearch Add-On for Heroku. - -To learn more about what plans are available for Heroku users and their cost, check the [Elasticsearch add-on](https://elements.heroku.com/addons/foundelasticsearch) in the Elements Marketplace. - diff --git a/deploy-manage/deploy/elastic-cloud/ech-restrictions.md b/deploy-manage/deploy/elastic-cloud/ech-restrictions.md index c2b2be609..9dc8ec9e4 100644 --- a/deploy-manage/deploy/elastic-cloud/ech-restrictions.md +++ b/deploy-manage/deploy/elastic-cloud/ech-restrictions.md @@ -28,7 +28,7 @@ To learn more about the features that are supported by Elasticsearch Add-On for ## Elasticsearch Add-On for Heroku [ech-restrictions-heroku] -Not all features of our Elasticsearch Service are available to Heroku users. Specifically, you cannot create additional deployments or use different deployment templates. +Not all features of {{ecloud}} are available to Heroku users. Specifically, you cannot create additional deployments or use different deployment templates. Generally, if a feature is shown as available in the [Elasticsearch Add-On for Heroku console](https://cloud.elastic.co?page=docs&placement=docs-body), you can use it. diff --git a/deploy-manage/deploy/elastic-cloud/ech-version-policy.md b/deploy-manage/deploy/elastic-cloud/ech-version-policy.md index b32e57de2..ff660ca6d 100644 --- a/deploy-manage/deploy/elastic-cloud/ech-version-policy.md +++ b/deploy-manage/deploy/elastic-cloud/ech-version-policy.md @@ -25,7 +25,7 @@ You might sometimes notice additional versions listed in the user interface beyo Whenever a new Elastic Stack version is released, we do our best to provide the new version on our hosted service at the same time. We send you an email and add a notice to the console, recommending an upgrade. You’ll need to decide whether to upgrade to the new version with new features and bug fixes or to stay with a version you know works for you a while longer. 
-There can be [breaking changes](asciidocalypse://docs/elasticsearch/docs/release-notes/breaking-changes/elasticsearch.md) in some new versions of Elasticsearch that break what used to work in older versions. Before upgrading, you’ll want to check if the new version introduces any changes that might affect your applications. A breaking change might be a function that was previously deprecated and that has been removed in the latest version, for example. If you have an application that depends on the removed function, the application will need to be updated to continue working with the new version of Elasticsearch. +There can be [breaking changes](asciidocalypse://docs/elasticsearch/docs/release-notes/breaking-changes.md) in some new versions of Elasticsearch that break what used to work in older versions. Before upgrading, you’ll want to check if the new version introduces any changes that might affect your applications. A breaking change might be a function that was previously deprecated and that has been removed in the latest version, for example. If you have an application that depends on the removed function, the application will need to be updated to continue working with the new version of Elasticsearch. To learn more about upgrading to newer versions of the Elastic Stack on our hosted service, check [Upgrade Versions](../../upgrade/deployment-or-cluster.md). @@ -58,4 +58,4 @@ Cutting-edge releases do not remain available forever. Once the GA version of El ## Version Policy and Product End of Life [ech-version-policy-eol] -For Elasticsearch Service, we follow the [Elastic Version Maintenance and Support Policy](https://www.elastic.co/support/eol), which defines the support and maintenance policy of the Elastic Stack. +For {{ecloud}}, we follow the [Elastic Version Maintenance and Support Policy](https://www.elastic.co/support/eol), which defines the support and maintenance policy of the Elastic Stack. diff --git a/deploy-manage/deploy/elastic-cloud/ech-whats-new.md b/deploy-manage/deploy/elastic-cloud/ech-whats-new.md index 4a578c21b..eaa7c4049 100644 --- a/deploy-manage/deploy/elastic-cloud/ech-whats-new.md +++ b/deploy-manage/deploy/elastic-cloud/ech-whats-new.md @@ -15,7 +15,7 @@ Check the Release Notes to get the recent updates for each product. 
Elasticsearch -* [Elasticsearch 8.x Release Notes](asciidocalypse://docs/elasticsearch/docs/release-notes/elasticsearch.md) +* [Elasticsearch 8.x Release Notes](asciidocalypse://docs/elasticsearch/docs/release-notes/index.md) * [Elasticsearch 7.x Release Notes](https://www.elastic.co/guide/en/elasticsearch/reference/7.17/es-release-notes.html) * [Elasticsearch 6.x Release Notes](https://www.elastic.co/guide/en/elasticsearch/reference/6.8/es-release-notes.html) * [Elasticsearch 5.x Release Notes](https://www.elastic.co/guide/en/elasticsearch/reference/5.6/es-release-notes.html) diff --git a/deploy-manage/deploy/elastic-cloud/edit-stack-settings.md b/deploy-manage/deploy/elastic-cloud/edit-stack-settings.md index 31734e24e..408788433 100644 --- a/deploy-manage/deploy/elastic-cloud/edit-stack-settings.md +++ b/deploy-manage/deploy/elastic-cloud/edit-stack-settings.md @@ -1,4 +1,7 @@ --- +applies_to: + deployment: + ess: ga mapped_urls: - https://www.elastic.co/guide/en/cloud/current/ec-add-user-settings.html - https://www.elastic.co/guide/en/cloud/current/ec-editing-user-settings.html @@ -12,7 +15,7 @@ mapped_urls: - https://www.elastic.co/guide/en/cloud/current/ec-manage-enterprise-search-settings.html --- -# Edit stack settings +# Edit {{stack}} settings % What needs to be done: Refine @@ -47,15 +50,65 @@ $$$ec-appsearch-settings$$$ $$$ec-es-elasticsearch-settings$$$ -**This page is a work in progress.** The documentation team is working to combine content pulled from the following pages: - -* [/raw-migrated-files/cloud/cloud/ec-add-user-settings.md](/raw-migrated-files/cloud/cloud/ec-add-user-settings.md) -* [/raw-migrated-files/cloud/cloud/ec-editing-user-settings.md](/raw-migrated-files/cloud/cloud/ec-editing-user-settings.md) -* [/raw-migrated-files/cloud/cloud-heroku/ech-add-user-settings.md](/raw-migrated-files/cloud/cloud-heroku/ech-add-user-settings.md) -* [/raw-migrated-files/cloud/cloud/ec-manage-kibana-settings.md](/raw-migrated-files/cloud/cloud/ec-manage-kibana-settings.md) -* [/raw-migrated-files/cloud/cloud-heroku/ech-manage-kibana-settings.md](/raw-migrated-files/cloud/cloud-heroku/ech-manage-kibana-settings.md) -* [/raw-migrated-files/cloud/cloud-heroku/ech-editing-user-settings.md](/raw-migrated-files/cloud/cloud-heroku/ech-editing-user-settings.md) -* [/raw-migrated-files/cloud/cloud/ec-manage-apm-settings.md](/raw-migrated-files/cloud/cloud/ec-manage-apm-settings.md) -* [/raw-migrated-files/cloud/cloud-heroku/ech-manage-apm-settings.md](/raw-migrated-files/cloud/cloud-heroku/ech-manage-apm-settings.md) -* [/raw-migrated-files/cloud/cloud/ec-manage-appsearch-settings.md](/raw-migrated-files/cloud/cloud/ec-manage-appsearch-settings.md) -* [/raw-migrated-files/cloud/cloud/ec-manage-enterprise-search-settings.md](/raw-migrated-files/cloud/cloud/ec-manage-enterprise-search-settings.md) \ No newline at end of file +From the {{ecloud}} Console you can customize {{es}}, {{kib}}, and related products to suit your needs. These editors append your changes to the appropriate YAML configuration file and they affect all users of that cluster. In each editor, you can add, edit, or remove the user settings for that product. + + +## Edit {{es}} user settings [ec-add-user-settings] + +Change how {{es}} runs by providing your own user settings. {{ech}} appends these settings to each node’s `elasticsearch.yml` configuration file. + +{{ech}} automatically rejects `elasticsearch.yml` settings that could break your cluster. 
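The editor described above takes plain YAML key-value pairs. For settings that are dynamic, you can alternatively go through the {{es}} update cluster settings API noted in the warning below. The following is a minimal sketch of that route only; the endpoint, credentials, and the example setting are placeholders to replace with your own values, and {{ech}} does not validate changes made this way.

```sh
# Minimal sketch: apply a single dynamic cluster setting through the
# Elasticsearch API instead of the console editor. The endpoint, password,
# and setting shown here are placeholders, not recommendations.
curl -X PUT "https://YOUR_DEPLOYMENT_ENDPOINT:9243/_cluster/settings" \
  -u elastic:YOUR_PASSWORD \
  -H "Content-Type: application/json" \
  -d '
{
  "persistent": {
    "indices.recovery.max_bytes_per_sec": "100mb"
  }
}'
```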
+ +For a list of supported settings, check [Supported {{es}} settings](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/configuration-reference/elastic-cloud-hosted-elasticsearch-settings.md). + +::::{warning} +You can also update [dynamic cluster settings](../../../deploy-manage/deploy/self-managed/configure-elasticsearch.md#dynamic-cluster-setting) using {{es}}'s [update cluster settings API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-put-settings). However, {{ech}} doesn’t reject unsafe setting changes made using this API. Use it with caution. +:::: + + +To add or edit user settings: + +1. Log in to the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). +2. Find your deployment on the home page in the **Hosted deployments** card and select **Manage** to access it directly. Or, select **Hosted deployments** to go to the **Deployments** page to view all of your deployments. + + On the **Deployments** page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. + +3. From your deployment menu, go to the **Edit** page. +4. In the **Elasticsearch** section, select **Manage user settings and extensions**. +5. Update the user settings. +6. Select **Save changes**. + +::::{note} +In some cases, you may get a warning saying "User settings are different across Elasticsearch instances". To fix this issue, ensure that your user settings (including comments and whitespace) are identical across all Elasticsearch nodes (not only the data tiers, but also the Master, Machine Learning, and Coordinating nodes). +:::: + +## Edit Kibana user settings [ec-manage-kibana-settings] + +{{ech}} supports most of the standard Kibana and X-Pack settings. Through a YAML editor in the console, you can append Kibana properties to the `kibana.yml` file. Your changes to the configuration file are read on startup. + +Be aware that some settings could break your cluster if set incorrectly, and that the syntax might change between major versions. + +For a list of supported settings, check [Kibana settings](asciidocalypse://docs/kibana/docs/reference/cloud/elastic-cloud-kibana-settings.md). + +To change Kibana settings: + +1. Log in to the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). +2. Find your deployment on the home page in the **Hosted deployments** card and select **Manage** to access it directly. Or, select **Hosted deployments** to go to the **Deployments** page to view all of your deployments. + + On the **Deployments** page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. + +3. From your deployment menu, go to the **Edit** page. +4. In the **Kibana** section, select **Edit user settings**. For deployments with existing user settings, you may have to expand the **Edit kibana.yml** caret instead. +5. Update the user settings. +6. Select **Save changes**. + +Saving your changes initiates a configuration plan change that restarts Kibana automatically for you. + +::::{note} +If a setting is not supported by {{ech}}, you will get an error message when you try to save. +:::: + +## Edit APM user settings [ec-manage-apm-settings] + +Change how Elastic APM runs by providing your own user settings. 
+Check [APM configuration reference](/solutions/observability/apps/configure-apm-server.md) for information on how to configure the {{fleet}}-managed APM integration. \ No newline at end of file diff --git a/deploy-manage/deploy/elastic-cloud/find-cloud-id.md b/deploy-manage/deploy/elastic-cloud/find-cloud-id.md index f3ecd0f93..822ad55f3 100644 --- a/deploy-manage/deploy/elastic-cloud/find-cloud-id.md +++ b/deploy-manage/deploy/elastic-cloud/find-cloud-id.md @@ -1,19 +1,22 @@ --- +applies_to: + deployment: + ess: ga mapped_pages: - https://www.elastic.co/guide/en/cloud/current/ec-cloud-id.html --- # Find your Cloud ID [ec-cloud-id] -The Cloud ID reduces the number of steps required to start sending data from Beats or Logstash to your hosted Elasticsearch cluster on Elasticsearch Service. Because we made it easier to send data, you can start exploring visualizations in Kibana on Elasticsearch Service that much more quickly. +The Cloud ID reduces the number of steps required to start sending data from Beats or Logstash to your hosted Elasticsearch cluster on {{ecloud}}. Because we made it easier to send data, you can start exploring visualizations in Kibana on {{ecloud}} that much more quickly. :::{image} ../../../images/cloud-ec-ce-cloud-id-beats-logstash.png :alt: Exploring data from Beats or Logstash in Kibana after sending it to a hosted Elasticsearch cluster ::: -The Cloud ID works by assigning a unique ID to your hosted Elasticsearch cluster on Elasticsearch Service. All deployments automatically get a Cloud ID. +The Cloud ID works by assigning a unique ID to your hosted Elasticsearch cluster on {{ecloud}}. All deployments automatically get a Cloud ID. -You include your Cloud ID along with your Elasticsearch Service user credentials (defined in `cloud.auth`) when you run Beats or Logstash locally, and then let Elasticsearch Service handle all of the remaining connection details to send the data to your hosted cluster on Elasticsearch Service safely and securely. +You include your Cloud ID along with your {{ecloud}} user credentials (defined in `cloud.auth`) when you run Beats or Logstash locally, and then let {{ecloud}} handle all of the remaining connection details to send the data to your hosted cluster on {{ecloud}} safely and securely. :::{image} ../../../images/cloud-ec-ce-cloud-id.png :alt: The Cloud ID and `elastic` user information shown when you create a deployment @@ -24,8 +27,8 @@ You include your Cloud ID along with your Elasticsearch Service user credentials Not sure why you need Beats or Logstash? Here’s what they do: -* [Beats](https://www.elastic.co/products/beats) is our open source platform for single-purpose data shippers. The purpose of Beats is to help you gather data from different sources and to centralize the data by shipping it to Elasticsearch. Beats install as lightweight agents and ship data from hundreds or thousands of machines to your hosted Elasticsearch cluster on Elasticsearch Service. If you want more processing muscle, Beats can also ship to Logstash for transformation and parsing before the data gets stored in Elasticsearch. -* [Logstash](https://www.elastic.co/products/logstash) is an open source, server-side data processing pipeline that ingests data from a multitude of sources simultaneously, transforms it, and then sends it to your favorite place where you stash things, here your hosted Elasticsearch cluster on Elasticsearch Service. 
Logstash supports a variety of inputs that pull in events from a multitude of common sources — logs, metrics, web applications, data stores, and various AWS services — all in continuous, streaming fashion. +* [Beats](https://www.elastic.co/products/beats) is our open source platform for single-purpose data shippers. The purpose of Beats is to help you gather data from different sources and to centralize the data by shipping it to Elasticsearch. Beats install as lightweight agents and ship data from hundreds or thousands of machines to your hosted Elasticsearch cluster on {{ecloud}}. If you want more processing muscle, Beats can also ship to Logstash for transformation and parsing before the data gets stored in Elasticsearch. +* [Logstash](https://www.elastic.co/products/logstash) is an open source, server-side data processing pipeline that ingests data from a multitude of sources simultaneously, transforms it, and then sends it to your favorite place where you stash things, here your hosted Elasticsearch cluster on {{ecloud}}. Logstash supports a variety of inputs that pull in events from a multitude of common sources — logs, metrics, web applications, data stores, and various AWS services — all in continuous, streaming fashion. ## Before you begin [ec_before_you_begin_3] @@ -39,26 +42,26 @@ To use the Cloud ID, you need: * The unique Cloud ID for your deployment, available from the deployment overview page. * A user ID and password that has permission to send data to your cluster. - In our examples, we use the `elastic` superuser that every Elasticsearch cluster comes with. The password for the `elastic` user is provided when you create a deployment (and can also be [reset](../../users-roles/cluster-or-deployment-auth/built-in-users.md) if you forget it). On a production system, you should adapt these examples by creating a user that can write to and access only the minimally required indices. For each Beat, review the specific feature and role table, similar to the one in [Metricbeat](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-metricbeat/feature-roles.md) documentation. + In our examples, we use the `elastic` superuser that every Elasticsearch cluster comes with. The password for the `elastic` user is provided when you create a deployment (and can also be [reset](../../users-roles/cluster-or-deployment-auth/built-in-users.md) if you forget it). On a production system, you should adapt these examples by creating a user that can write to and access only the minimally required indices. For each Beat, review the specific feature and role table, similar to the one in [Metricbeat](asciidocalypse://docs/beats/docs/reference/metricbeat/feature-roles.md) documentation. ## Configure Beats with your Cloud ID [ec-cloud-id-beats] -The following example shows how you can send operational data from Metricbeat to Elasticsearch Service by using the Cloud ID. Any of the available Beats will work, but we had to pick one for this example. +The following example shows how you can send operational data from Metricbeat to {{ecloud}} by using the Cloud ID. Any of the available Beats will work, but we had to pick one for this example. ::::{tip} -For others, you can learn more about [getting started](asciidocalypse://docs/beats/docs/reference/ingestion-tools/index.md) with each Beat. +For others, you can learn more about [getting started](asciidocalypse://docs/beats/docs/reference/index.md) with each Beat. 
:::: -To get started with Metricbeat and Elasticsearch Service: +To get started with Metricbeat and {{ecloud}}: -1. Log in to the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body). +1. Log in to the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). 2. [Create a new deployment](create-an-elastic-cloud-hosted-deployment.md) and copy down the password for the `elastic` user. 3. On the deployment overview page, copy down the Cloud ID. -4. Set up the Beat of your choice, such as [Metricbeat version 7.17](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-metricbeat/metricbeat-installation-configuration.md). -5. [Configure the Beat output to send to Elastic Cloud](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-metricbeat/configure-cloud-id.md). +4. Set up the Beat of your choice, such as [Metricbeat version 7.17](asciidocalypse://docs/beats/docs/reference/metricbeat/metricbeat-installation-configuration.md). +5. [Configure the Beat output to send to Elastic Cloud](asciidocalypse://docs/beats/docs/reference/metricbeat/configure-cloud-id.md). ::::{note} Make sure you replace the values for `cloud.id` and `cloud.auth` with your own information. diff --git a/deploy-manage/deploy/elastic-cloud/google-cloud-platform-marketplace.md b/deploy-manage/deploy/elastic-cloud/google-cloud-platform-marketplace.md index 33c5f5b46..3401b3fd4 100644 --- a/deploy-manage/deploy/elastic-cloud/google-cloud-platform-marketplace.md +++ b/deploy-manage/deploy/elastic-cloud/google-cloud-platform-marketplace.md @@ -1,34 +1,38 @@ --- +applies_to: + deployment: + ess: ga + serverless: unavailable mapped_pages: - https://www.elastic.co/guide/en/cloud/current/ec-billing-gcp.html --- # Google Cloud Platform Marketplace [ec-billing-gcp] -Subscribe to Elasticsearch Service directly from the Google Cloud Platform (GCP). You then have the convenience of viewing your Elasticsearch Service subscription as part of your GCP bill, and you do not have to supply any additional credit card information to Elastic. +Subscribe to {{ecloud}} directly from the Google Cloud Platform (GCP). You then have the convenience of viewing your {{ecloud}} subscription as part of your GCP bill, and you do not have to supply any additional credit card information to Elastic. -Some differences exist when you subscribe to Elasticsearch Service through the GCP Marketplace: +Some differences exist when you subscribe to {{ecloud}} through the GCP Marketplace: -* There is no trial period. Billing starts when you subscribe to Elasticsearch Service. -* Existing Elasticsearch Service organizations cannot be converted to use the GCP Marketplace. -* Pricing for an Elasticsearch Service subscription through the GCP Marketplace follows the pricing outlined on the [Elasticsearch Service on Elastic Cloud](https://console.cloud.google.com/marketplace/product/endpoints/elasticsearch-service.gcpmarketplace.elastic.co) page in the GCP Marketplace. Pricing is based the Elastic Cloud [Billing Dimensions](../../cloud-organization/billing/cloud-hosted-deployment-billing-dimensions.md). +* There is no trial period. Billing starts when you subscribe to {{ecloud}}. +* Existing {{ecloud}} organizations cannot be converted to use the GCP Marketplace. 
+* Pricing for an {{ecloud}} subscription through the GCP Marketplace follows the pricing outlined on the [Elastic Cloud](https://console.cloud.google.com/marketplace/product/endpoints/elasticsearch-service.gcpmarketplace.elastic.co) page in the GCP Marketplace. Pricing is based on the Elastic Cloud [Billing Dimensions](../../cloud-organization/billing/cloud-hosted-deployment-billing-dimensions.md). * To access your billing information at any time go to **Account & Billing**. You can also go to **Account & Billing** and then **Usage** to view your usage hours and units per hour. ::::{important} -Only one Elasticsearch Service organization can be subscribed through GCP Marketplace per GCP billing account. +Only one {{ecloud}} organization can be subscribed through GCP Marketplace per GCP billing account. :::: -To subscribe to Elasticsearch Service through the GCP Marketplace: +To subscribe to {{ecloud}} through the GCP Marketplace: 1. Log in to your Google Cloud Platform account. -2. Go to the [Elastic Cloud (Elasticsearch Service)](https://console.cloud.google.com/marketplace/product/elastic-prod/elastic-cloud) page in the GCP Marketplace. +2. Go to the [Elastic Cloud](https://console.cloud.google.com/marketplace/product/elastic-prod/elastic-cloud) page in the GCP Marketplace. 3. On the Elastic Cloud page select **Subscribe**, where you will be directed to another page. There is only one plan—the Elastic plan—and it’s pre-selected. The billing account you are logged into will be pre-selected for this purchase, though you can change it at this time. 4. Accept the terms of service (TOS) and select **Subscribe**. 5. When you are presented with a pop-up that specifies that "Your order request has been sent to Elastic" choose **Sign up with Elastic** to continue. 6. After choosing to sign up, a new window will appear. Do one of the following: - * Create a new, unique user account for an Elasticsearch Service Elastic Cloud organization. + * Create a new, unique user account for an {{ecloud}} organization. * Log in with an existing user account that’s associated with an Elastic Cloud trial. This links the billing account used for the purchase on GCP Marketplace to the existing Elastic organization. 7. After signing up, check your inbox to verify the email address you signed up with. Upon verification, you will be asked to create a password, and once created your organization will be set up and you will be logged into it. @@ -56,12 +60,12 @@ To prevent downtime, do not remove the currently used billing account before the :::: -Elasticsearch Service subscriptions through GCP Marketplace are associated with a GCP billing account. In order to change the billing account associated with an Elasticsearch Service organization: +{{ecloud}} subscriptions through GCP Marketplace are associated with a GCP billing account. In order to change the billing account associated with an {{ecloud}} organization: * for customers under a Private Offer contract: please reach out to Elastic support and provide the GCP Billing Account, as well as the contact of any reseller information for approval. * for pay-as-you-go customers: you need to have purchased and subscribed to Elastic Cloud on the new billing account using the details above—but do not create a new Elastic user or organization (that is, you can skip Steps 5 and 6 in the subscription instructions, above). 
Once you successfully subscribed with the new billing account, you can contact Elastic support and provide the new billing account ID you wish to move to, which you can find from [GCP’s billing page](https://console.cloud.google.com/billing). The ID is in the format `000000-000000-000000`. -If you cancel your Elasticsearch Service order on GCP through the [marketplace orders page](https://console.cloud.google.com/marketplace/orders) before the switch to the new billing account has been done, any running deployments will immediately enter a degraded state known as maintenance mode and they will be scheduled for termination in five days. +If you cancel your {{ecloud}} order on GCP through the [marketplace orders page](https://console.cloud.google.com/marketplace/orders) before the switch to the new billing account has been done, any running deployments will immediately enter a degraded state known as maintenance mode and they will be scheduled for termination in five days. If you already unsubscribed before the new billing account has been set up, you can subscribe again from the previously used billing account, which will cancel the termination and restore the deployments to a functional state. diff --git a/deploy-manage/deploy/elastic-cloud/heroku.md b/deploy-manage/deploy/elastic-cloud/heroku.md index c5bf8bbe7..228d0dc7f 100644 --- a/deploy-manage/deploy/elastic-cloud/heroku.md +++ b/deploy-manage/deploy/elastic-cloud/heroku.md @@ -4,7 +4,7 @@ mapped_urls: - https://www.elastic.co/guide/en/cloud-heroku/current/ech-about.html --- -# Heroku +# Elasticsearch Add-On for Heroku [ech-getting-started] % What needs to be done: Refine @@ -17,9 +17,10 @@ mapped_urls: % - [ ] ./raw-migrated-files/cloud/cloud-heroku/ech-getting-started.md % - [ ] ./raw-migrated-files/cloud/cloud-heroku/ech-about.md -⚠️ **This page is a work in progress.** ⚠️ +This documentation applies to Heroku users who want to make use of the Elasticsearch Add-On for Heroku that is available from the [Heroku Dashboard](https://dashboard.heroku.com/) or that can be installed from the CLI. -The documentation team is working to combine content pulled from the following pages: +The add-on runs on {{ecloud}} and provides access to [Elasticsearch](https://www.elastic.co/products/elasticsearch), the open source, distributed, RESTful search engine. Many other features of the Elastic Stack are also readily available to Heroku users through the [Elasticsearch Add-On for Heroku console](https://cloud.elastic.co?page=docs&placement=docs-body) after you install the add-on. For example, you can use Kibana to visualize your Elasticsearch data. -* [/raw-migrated-files/cloud/cloud-heroku/ech-getting-started.md](/raw-migrated-files/cloud/cloud-heroku/ech-getting-started.md) -* [/raw-migrated-files/cloud/cloud-heroku/ech-about.md](/raw-migrated-files/cloud/cloud-heroku/ech-about.md) \ No newline at end of file +[Elasticsearch Machine Learning](https://www.elastic.co/guide/en/machine-learning/current/index.html), [Elastic APM](/solutions/observability/apps/application-performance-monitoring-apm.md) and [Elastic Fleet Server](https://www.elastic.co/guide/en/fleet/current/fleet-overview.html) are not supported by the Elasticsearch Add-On for Heroku. + +To learn more about what plans are available for Heroku users and their cost, check the [Elasticsearch add-on](https://elements.heroku.com/addons/foundelasticsearch) in the Elements Marketplace. 
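For readers who prefer the CLI route mentioned above, the following is a sketch of provisioning the add-on with the Heroku CLI. The app name is a placeholder, and the available plan names come from the Elements Marketplace listing linked above.

```sh
# Sketch: install the add-on for an existing Heroku app from the CLI.
# Replace YOUR_APP with your app name; append ":<plan>" to choose a
# specific plan from the Elements Marketplace listing.
heroku addons:create foundelasticsearch --app YOUR_APP

# List the app's config vars to find the connection details the add-on sets.
heroku config --app YOUR_APP
```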
\ No newline at end of file diff --git a/deploy-manage/deploy/elastic-cloud/keep-track-of-deployment-activity.md b/deploy-manage/deploy/elastic-cloud/keep-track-of-deployment-activity.md index dae450e0d..683739130 100644 --- a/deploy-manage/deploy/elastic-cloud/keep-track-of-deployment-activity.md +++ b/deploy-manage/deploy/elastic-cloud/keep-track-of-deployment-activity.md @@ -1,4 +1,7 @@ --- +applies_to: + deployment: + ess: ga mapped_urls: - https://www.elastic.co/guide/en/cloud/current/ec-activity-page.html - https://www.elastic.co/guide/en/cloud-heroku/current/ech-activity-page.html @@ -15,9 +18,30 @@ mapped_urls: % - [ ] ./raw-migrated-files/cloud/cloud/ec-activity-page.md % - [ ] ./raw-migrated-files/cloud/cloud-heroku/ech-activity-page.md -⚠️ **This page is a work in progress.** ⚠️ +The deployment **Activity** page gives you a convenient way to follow all configuration changes that have been applied to your deployment, including which resources were affected, when the changes were applied, who initiated the changes, and whether or not the changes were successful. You can also select **Details** for an expanded, step-by-step view of each change applied to each deployment resource. -The documentation team is working to combine content pulled from the following pages: +To view the activity for a deployment: -* [/raw-migrated-files/cloud/cloud/ec-activity-page.md](/raw-migrated-files/cloud/cloud/ec-activity-page.md) -* [/raw-migrated-files/cloud/cloud-heroku/ech-activity-page.md](/raw-migrated-files/cloud/cloud-heroku/ech-activity-page.md) \ No newline at end of file +1. Log in to the [{{ech}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). +2. On the **Deployments** page, select your deployment. + + On the **Deployments** page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. + +3. In your deployment menu, select **Activity**. +4. You can: + + 1. View the activity for all deployment resources (the default). + 2. Use one of the available filters to view configuration changes by status or type. You can use the query field to create a custom search. Select the filter buttons to get examples of the query format. + 3. Select one of the resource filters to view activity for only that resource type. + + +:::{image} ../../../images/cloud-ec-ce-activity-page.png +:alt: The Activity page +::: + +In the table columns you find the following information: + +- **Change**: Which deployment resource the configuration change was applied to. +- **Summary**: A summary of what change was applied, when the change was performed, and how long it took. +- **Applied by**: The user who submitted the configuration change. `System` indicates configuration changes initiated automatically by the {{ecloud}} platform. +- **Actions**: Select **Details** for an expanded view of each step in the configuration change, including the start time, end time, and duration. You can select **Reapply** to re-run the configuration change. 
\ No newline at end of file diff --git a/deploy-manage/deploy/elastic-cloud/manage-deployments-using-elastic-cloud-api.md b/deploy-manage/deploy/elastic-cloud/manage-deployments-using-elastic-cloud-api.md index 63081e06a..d91fa0c01 100644 --- a/deploy-manage/deploy/elastic-cloud/manage-deployments-using-elastic-cloud-api.md +++ b/deploy-manage/deploy/elastic-cloud/manage-deployments-using-elastic-cloud-api.md @@ -1,4 +1,7 @@ --- +applies_to: + deployment: + ess: ga mapped_pages: - https://www.elastic.co/guide/en/cloud/current/ec-api-deployment-crud.html --- @@ -10,7 +13,7 @@ The following examples demonstrate Create, Read, Update and Delete operations on ## Listing your deployments [ec_listing_your_deployments] -List the details about all of your Elasticsearch Service deployments. +List the details about all of your {{ech}} deployments. ```sh curl \ @@ -40,7 +43,7 @@ When you create a new deployment through the API, you have two options: ### Create a deployment using default values [ec-api-examples-deployment-simple] -This example requires minimal information in the API payload, and creates a deployment with default settings and a default name. You just need to specify one of the [available deployment templates](asciidocalypse://docs/cloud/docs/reference/cloud/cloud-hosted/ec-regions-templates-instances.md) in your API request header and the deployment is created using default settings from that template. +This example requires minimal information in the API payload, and creates a deployment with default settings and a default name. You just need to specify one of the [available deployment templates](asciidocalypse://docs/cloud/docs/reference/cloud-hosted/ec-regions-templates-instances.md) in your API request header and the deployment is created using default settings from that template. ```sh curl -XPOST \ @@ -56,7 +59,7 @@ curl -XPOST \ ``` 1. Optional: You can specify a version for the deployment. If this field is omitted a default version is used. -2. Required: One of the [available regions](asciidocalypse://docs/cloud/docs/reference/cloud/cloud-hosted/ec-regions-templates-instances.md) must be provided in the request. +2. Required: One of the [available regions](asciidocalypse://docs/cloud/docs/reference/cloud-hosted/ec-regions-templates-instances.md) must be provided in the request. A `resource` field can be included in this request (check the following, manual example for the field details). When a `resource` is present, the content of the request is used instead of any default values provided by the the deployment template. @@ -259,11 +262,11 @@ curl -XPOST \ ' ``` -1. [Available Regions](asciidocalypse://docs/cloud/docs/reference/cloud/cloud-hosted/ec-regions-templates-instances.md) +1. [Available Regions](asciidocalypse://docs/cloud/docs/reference/cloud-hosted/ec-regions-templates-instances.md) 2. Availability zones for the Elasticsearch cluster -3. [Available instance configurations](asciidocalypse://docs/cloud/docs/reference/cloud/cloud-hosted/ec-regions-templates-instances.md) +3. [Available instance configurations](asciidocalypse://docs/cloud/docs/reference/cloud-hosted/ec-regions-templates-instances.md) 4. Memory allocated for each Elasticsearch node -5. [Available templates](asciidocalypse://docs/cloud/docs/reference/cloud/cloud-hosted/ec-regions-templates-instances.md) +5. [Available templates](asciidocalypse://docs/cloud/docs/reference/cloud-hosted/ec-regions-templates-instances.md) 6. Availability zones for Kibana 7. Memory allocated for Kibana 8. 
Availability zones for Integrations Server @@ -271,14 +274,14 @@ ::::{tip} -You can get the payload easily from the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body) **Create Deployment** page, customize the regions, zones, memory allocated for each components, and then select **Equivalent API request**. +You can get the payload easily from the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body) **Create Deployment** page, customize the regions, zones, memory allocated for each component, and then select **Equivalent API request**. :::: ## Using the API to create deployment with non EOL versions [ec_using_the_api_to_create_deployment_with_non_eol_versions] -You are able to create deployments with *non* [End-of-life (EOL) versions](available-stack-versions.md#ec-version-policy-eol) via API, which are not selectable in the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body) UI. You can simply replace the version number in the above example. +You are able to create deployments with *non* [End-of-life (EOL) versions](available-stack-versions.md#ec-version-policy-eol) via API, which are not selectable in the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body) UI. You can simply replace the version number in the above example. ## Update a deployment [ec_update_a_deployment] @@ -343,7 +346,7 @@ curl -XPUT \ ::::{tip} -You can get the payload easily from the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body) deployment **Edit** page, customize the zone count, memory allocated for each components, and then select **Equivalent API request**. +You can get the payload easily from the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body) deployment **Edit** page, customize the zone count, memory allocated for each component, and then select **Equivalent API request**. :::: diff --git a/deploy-manage/deploy/elastic-cloud/manage-deployments.md b/deploy-manage/deploy/elastic-cloud/manage-deployments.md index 85e86038a..7eea73508 100644 --- a/deploy-manage/deploy/elastic-cloud/manage-deployments.md +++ b/deploy-manage/deploy/elastic-cloud/manage-deployments.md @@ -1,17 +1,29 @@ --- +applies_to: + deployment: + ess: ga mapped_pages: - https://www.elastic.co/guide/en/cloud/current/ec-manage-deployment.html --- # Manage deployments [ec-manage-deployment] -Sometimes you might need to make changes to the entire deployment, a specific component, or just a single data tier. +{{ech}} allows you to configure and maintain your deployments with a high level of control over every component of the {{stack}}. You can adjust the settings of any of your deployments at any time. -* Make adjustments to specific deployment components, such as an [Integrations Server](manage-integrations-server.md), [APM & Fleet Server](switch-from-apm-to-integrations-server-payload.md#ec-manage-apm-and-fleet), [Watcher](../../../explore-analyze/alerts-cases/watcher.md), or [Kibana](access-kibana.md#ec-enable-kibana2). -* [Enable logging and monitoring](../../monitor/stack-monitoring/elastic-cloud-stack-monitoring.md) of the deployment performance. -* [Disable a data tier](../../../manage-data/lifecycle/index-lifecycle-management.md). 
-* [Restart](../../maintenance/start-stop-services/restart-cloud-hosted-deployment.md), [stop routing](../../maintenance/ece/start-stop-routing-requests.md), or [delete your deployment](../../uninstall/delete-a-cloud-deployment.md). -* [Upgrade the Elastic Stack version](../../upgrade/deployment-or-cluster.md) for the deployment. +* Define the [core configuration](configure.md) of your deployment, including available features, hardware settings and capacity, autoscaling, and high availability. + * Select a [hardware profile](/deploy-manage/deploy/elastic-cloud/ec-change-hardware-profile.md) optimized for your use case. + * Make adjustments to specific [deployment components](/deploy-manage/deploy/elastic-cloud/ec-customize-deployment-components.md), such as the {{es}} cluster or an Integrations Server. + * [Manage data tiers](/manage-data/lifecycle/data-tiers.md). + +* Ensure the health of your deployment over time + + * [Keep track of your deployment's activity](keep-track-of-deployment-activity.md) or [Enable logging and monitoring](../../monitor/stack-monitoring/elastic-cloud-stack-monitoring.md) of the deployment performance. + * Perform maintenance operations to ensure the health of your deployment, such as [restarting your deployment](../../maintenance/start-stop-services/restart-cloud-hosted-deployment.md) or [stopping routing](../../maintenance/ece/start-stop-routing-requests.md). + +* Manage the lifecycle of your deployment: + + * [Upgrade your deployment](/deploy-manage/upgrade/deployment-or-cluster.md) and its components to a newer version of the {{stack}}. + * [Delete your deployment](../../uninstall/delete-a-cloud-deployment.md). diff --git a/deploy-manage/deploy/elastic-cloud/manage-integrations-server.md b/deploy-manage/deploy/elastic-cloud/manage-integrations-server.md index 065e01c4e..c526976be 100644 --- a/deploy-manage/deploy/elastic-cloud/manage-integrations-server.md +++ b/deploy-manage/deploy/elastic-cloud/manage-integrations-server.md @@ -1,4 +1,7 @@ --- +applies_to: + deployment: + ess: ga mapped_pages: - https://www.elastic.co/guide/en/cloud/current/ec-manage-integrations-server.html --- @@ -20,13 +23,13 @@ From the deployment **Integrations Server** page you can also: * Fully remove the Integrations Server, delete it from the disk, and stop the charges. ::::{important} -The APM secret token can no longer be reset from the Elasticsearch Service UI. Check [Secret token](/solutions/observability/apps/secret-token.md) for instructions on managing a secret token. Note that resetting the token disrupts your APM service and restarts the server. When the server restarts, you’ll need to update all of your agents with the new token. +The APM secret token can no longer be reset from the {{ecloud}} UI. Check [Secret token](/solutions/observability/apps/secret-token.md) for instructions on managing a secret token. Note that resetting the token disrupts your APM service and restarts the server. When the server restarts, you’ll need to update all of your agents with the new token. :::: ## Enable Integrations Server through the API [ec-integrations-server-api-example] -This example demonstrates how to use the Elasticsearch Service RESTful API to create a deployment with Integrations Server enabled. +This example demonstrates how to use the {{ecloud}} RESTful API to create a deployment with Integrations Server enabled. 
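Before working through the requirements and the full request below, it can help to know how to verify the result. Reading a deployment back through the same deployments API shows whether an Integrations Server resource is present in its `resources` object. This is only a sketch; the API key and deployment ID are placeholders.

```sh
# Sketch: read an existing deployment and inspect its resources for an
# Integrations Server entry. Replace the API key and deployment ID with
# your own values.
curl -X GET "https://api.elastic.co/api/v1/deployments/DEPLOYMENT_ID" \
  -H "Authorization: ApiKey YOUR_API_KEY" \
  -H "Content-Type: application/json"
```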
#### Requirements [ec_requirements_2] diff --git a/deploy-manage/deploy/elastic-cloud/manage-plugins-extensions-through-api.md b/deploy-manage/deploy/elastic-cloud/manage-plugins-extensions-through-api.md index 43eb0faa4..00165ca3c 100644 --- a/deploy-manage/deploy/elastic-cloud/manage-plugins-extensions-through-api.md +++ b/deploy-manage/deploy/elastic-cloud/manage-plugins-extensions-through-api.md @@ -1,11 +1,14 @@ --- +applies_to: + deployment: + ess: ga mapped_pages: - https://www.elastic.co/guide/en/cloud/current/ec-plugins-guide.html --- # Manage plugins and extensions through the API [ec-plugins-guide] -This guide provides a full list of tasks for managing [plugins and extensions](add-plugins-extensions.md) in Elasticsearch Service, using the API. +This guide provides a full list of tasks for managing [plugins and extensions](add-plugins-extensions.md) in {{ecloud}}, using the API. * [Create an extension](#ec-extension-guide-create) * [Add an extension to a deployment plan](#ec-extension-guide-add-plan) @@ -33,7 +36,7 @@ For plugins larger than 200MB the download URL option **must** be used. Plugins These two examples are for the `plugin` extension type. For bundles, change `extension_type` to `bundle`. -For plugins, `version` must match (exactly) the `elasticsearch.version` field defined in the plugin’s `plugin-descriptor.properties` file. Check [Help for plugin authors](asciidocalypse://docs/elasticsearch/docs/extend/create-elasticsearch-plugins/index.md#plugin-authors) for details. For plugins larger than 5GB, the `plugin-descriptor.properties` file needs to be at the top of the archive. This ensures that the our verification process is able to detect that it is an Elasticsearch plugin; otherwise the plugin will be rejected by the API. This order can be achieved by specifying at time of creating the ZIP file: `zip -r name-of-plugin.zip plugin-descriptor.properties *`. +For plugins, `version` must match (exactly) the `elasticsearch.version` field defined in the plugin’s `plugin-descriptor.properties` file. Check [Help for plugin authors](asciidocalypse://docs/elasticsearch/docs/extend/index.md#plugin-authors) for details. For plugins larger than 5GB, the `plugin-descriptor.properties` file needs to be at the top of the archive. This ensures that our verification process is able to detect that it is an Elasticsearch plugin; otherwise the plugin will be rejected by the API. This order can be achieved by specifying it when creating the ZIP file: `zip -r name-of-plugin.zip plugin-descriptor.properties *`. For bundles, we recommend setting `version` using wildcard notation that matches the major version of the Elasticsearch deployment. For example, if Elasticsearch is on version 8.4.3, simply set `8.*` as the version. The value `8.*` means that the bundle is compatible with all 8.x versions of Elasticsearch. @@ -303,7 +306,7 @@ Updating the name of an existing extension does not change its `EXTENSION_ID`. ## Update the version of an existing plugin [ec-extension-guide-update-version-plugin] -For plugins, `version` must match (exactly) the `elasticsearch.version` field defined in the plugin’s `plugin-descriptor.properties` file. Check [Help for plugin authors](asciidocalypse://docs/elasticsearch/docs/extend/create-elasticsearch-plugins/index.md#plugin-authors) for details. If you change the version, the associated plugin file *must* also be updated accordingly.
+For plugins, `version` must match (exactly) the `elasticsearch.version` field defined in the plugin’s `plugin-descriptor.properties` file. Check [Help for plugin authors](asciidocalypse://docs/elasticsearch/docs/extend/index.md#plugin-authors) for details. If you change the version, the associated plugin file *must* also be updated accordingly. ## Update the file associated to an existing extension [ec-extension-guide-update-file] diff --git a/deploy-manage/deploy/elastic-cloud/project-settings.md b/deploy-manage/deploy/elastic-cloud/project-settings.md index 9970d7f84..69351ed1a 100644 --- a/deploy-manage/deploy/elastic-cloud/project-settings.md +++ b/deploy-manage/deploy/elastic-cloud/project-settings.md @@ -2,29 +2,89 @@ mapped_urls: - https://www.elastic.co/guide/en/serverless/current/project-and-management-settings.html - https://www.elastic.co/guide/en/serverless/current/elasticsearch-manage-project.html + +applies_to: + serverless: --- # Project settings -% What needs to be done: Refine +$$$elasticsearch-manage-project-search-power-settings$$$ + +{{serverless-full}} projects are fully managed and automatically scaled by Elastic. You have the option of {{es-serverless}}, {{observability}}, or {{elastic-sec}} for your project. + +Your project’s performance and general data retention are controlled by the **Search AI Lake settings**. To manage these settings: + +1. Navigate to [cloud.elastic.co](https://cloud.elastic.co/). +2. Log in to your Elastic Cloud account. +3. Select your project from the **Serverless projects** panel and click **Manage**. + +Additionally, there are [features and add-ons](#project-features-add-ons) available for security that you can configure. + +## Search AI Lake settings [elasticsearch-manage-project-search-ai-lake-settings] + +Once ingested, your data is stored in cost-efficient, general storage. A cache layer is available on top of the general storage for recent and frequently queried data that provides faster search speed. Data in this cache layer is considered **search-ready**. -% GitHub issue: https://github.com/elastic/docs-projects/issues/337 +Together, these data storage layers form your project’s **Search AI Lake**. -% Use migrated content from existing pages that map to this page: +The total volume of search-ready data is the sum of the following: -% - [ ] ./raw-migrated-files/docs-content/serverless/project-and-management-settings.md -% Notes: anything that isn't deduplicated from -% - [ ] ./raw-migrated-files/docs-content/serverless/elasticsearch-manage-project.md +1. The volume of non-time series project data +2. The volume of time series project data included in the Search Boost Window -% Internal links rely on the following IDs being on this page (e.g. as a heading ID, paragraph ID, etc): +::::{note} +Time series data refers to any document in standard indices or data streams that includes the `@timestamp` field. This field must be present for data to be subject to the Search Boost Window setting. -$$$elasticsearch-manage-project-search-ai-lake-settings$$$ +:::: +Each project type offers different settings that let you adjust the performance and volume of search-ready data, as well as the features available in your projects. + +The documentation in this section describes shared capabilities that are available in multiple solutions. These settings let you tune your project, but they do not give you access to all of the functionality that you would have with a self-managed deployment. 
$$$elasticsearch-manage-project-search-power-settings$$$ -$$$project-features-add-ons$$$ +| Setting | Description | Project Type | +| :--- | :--- | :--- | +| **Search Power** | Search Power controls the speed of searches against your data. With Search Power, you can improve search performance by adding more resources for querying, or you can reduce provisioned resources to cut costs. Choose from three Search Power settings:

**On-demand:** Autoscales based on data and search load, with a lower minimum baseline for resource use. This flexibility results in more variable query latency and reduced maximum throughput.

**Performant:** Delivers consistently low latency and autoscales to accommodate moderately high query throughput.

**High-throughput:** Optimized for high-throughput scenarios, autoscaling to maintain query latency even at very high query volumes.
| Elasticsearch | +| **Search Boost Window** | Non-time series data is always considered search-ready. The **Search Boost Window** determines the volume of time series project data that will be considered search-ready.

Increasing the window results in a bigger portion of time series project data being included in the total search-ready data volume.<br>
| Elasticsearch | +| **Data Retention** | Data retention policies determine how long your project data is retained.
In {{serverless-full}}, data retention policies are configured through [data streams](../../../manage-data/lifecycle/data-stream.md), and you can specify different retention periods for specific data streams in your project.<br>

{{elastic-sec}} has two additional configuration settings that you can use to manage your data retention.<br>

**Maximum data retention period**

When enabled, this setting determines the maximum length of time that data can be retained in any data streams of this project.

Editing this setting overrides the retention period for all data streams of the project that have a longer retention period defined. Data older than the new maximum retention period is permanently deleted.<br>

**Default data retention period**

When enabled, this setting determines the default retention period that is automatically applied to all data streams in your project that do not have a custom retention period already set.
|Elasticsearch
Observability
Security | +| **Project features** | Controls [feature tiers and add-on options](../../../deploy-manage/deploy/elastic-cloud/project-settings.md#project-features-add-ons) for your {{elastic-sec}} project. | Security | + +## Project features and add-ons [project-features-add-ons] + +```yaml {applies_to} +serverless: + security: +``` + +For {{elastic-sec}} projects, edit the **Project features** to select a feature tier and enable add-on options for specific use cases. + +| Feature tier | Description and add-ons | +| :--- | :--- | +| **Security Analytics Essentials** | Standard security analytics, detections, investigations, and collaborations. Allows these add-ons:

* **Endpoint Protection Essentials**: Endpoint protections with {{elastic-defend}}.<br>
* **Cloud Protection Essentials**: Cloud native security features.
| +| **Security Analytics Complete** | Everything in **Security Analytics Essentials** plus advanced features such as entity analytics, threat intelligence, and more. Allows these add-ons:<br>

* **Endpoint Protection Complete**: Everything in **Endpoint Protection Essentials** plus advanced endpoint detection and response features.
* **Cloud Protection Complete**: Everything in **Cloud Protection Essentials** plus advanced cloud security features.
| + +### Downgrading the feature tier [elasticsearch-manage-project-downgrading-the-feature-tier] + +When you downgrade your Security project features selection from **Security Analytics Complete** to **Security Analytics Essentials**, the following features become unavailable: + +* All Entity Analytics features +* The ability to use certain entity analytics-related integration packages, such as: + * Data Exfiltration detection + * Lateral Movement detection + * Living off the Land Attack detection +* Intelligence Indicators page +* External rule action connectors +* Case connectors +* Endpoint response actions history +* Endpoint host isolation exceptions +* AI Assistant +* Attack discovery -**This page is a work in progress.** The documentation team is working to combine content pulled from the following pages: +And, the following data may be permanently deleted: -* [/raw-migrated-files/docs-content/serverless/project-and-management-settings.md](/raw-migrated-files/docs-content/serverless/project-and-management-settings.md) -* [/raw-migrated-files/docs-content/serverless/elasticsearch-manage-project.md](/raw-migrated-files/docs-content/serverless/elasticsearch-manage-project.md) \ No newline at end of file +* AI Assistant conversation history +* AI Assistant settings +* Entity Analytics user and host risk scores +* Entity Analytics asset criticality information +* Detection rule external connector settings +* Detection rule response action settings diff --git a/deploy-manage/deploy/elastic-cloud/regions.md b/deploy-manage/deploy/elastic-cloud/regions.md index 0f55d6fb3..fd72694a0 100644 --- a/deploy-manage/deploy/elastic-cloud/regions.md +++ b/deploy-manage/deploy/elastic-cloud/regions.md @@ -1,6 +1,8 @@ --- mapped_pages: - https://www.elastic.co/guide/en/serverless/current/regions.html +applies_to: + serverless: --- # Regions [regions] @@ -10,20 +12,31 @@ A region is the geographic area where the data center of the cloud provider that Elastic Cloud Serverless handles all hosting details for you. You are unable to change the region after you create a project. ::::{note} -Currently, a limited number of Amazon Web Services (AWS) regions are available. More regions for AWS, as well as Microsoft Azure and Google Cloud Platform (GCP), will be added in the future. +Currently, a limited number of Amazon Web Services (AWS) and Microsoft Azure regions are available. More regions for AWS and Azure, as well as Google Cloud Platform (GCP), will be added in the future. :::: -## Amazon Web Services (AWS) regions [regions-amazon-web-services-aws-regions] +## Amazon Web Services (AWS) regions [regions-amazon-web-services-aws-regions] The following AWS regions are currently available: | Region | Name | -| --- | --- | +| :--- | :--- | | ap-southeast-1 | Asia Pacific (Singapore) | | eu-west-1 | Europe (Ireland) | | us-east-1 | US East (N. 
Virginia) | | us-west-2 | US West (Oregon) | +## Microsoft Azure regions [regions-azure-regions] + +```yaml {applies_to} +serverless: preview +``` + +The following Azure regions are currently available: + +| Region | Name | +| :--- | :--- | +| eastus | East US | \ No newline at end of file diff --git a/deploy-manage/deploy/elastic-cloud/restrictions-known-problems.md b/deploy-manage/deploy/elastic-cloud/restrictions-known-problems.md index 1d8fa8e6a..c1c472980 100644 --- a/deploy-manage/deploy/elastic-cloud/restrictions-known-problems.md +++ b/deploy-manage/deploy/elastic-cloud/restrictions-known-problems.md @@ -1,30 +1,36 @@ --- +applies_to: + deployment: + ess: ga + serverless: ga mapped_pages: - https://www.elastic.co/guide/en/cloud/current/ec-restrictions.html --- # Restrictions and known problems [ec-restrictions] -When using Elasticsearch Service, there are some limitations you should be aware of: +When using {{ecloud}}, there are some limitations you should be aware of: * [Security](#ec-restrictions-security) * [APIs](#ec-restrictions-apis) * [Transport client](#ec-restrictions-transport-client) * [Elasticsearch and Kibana plugins](#ec-restrictions-plugins) * [Watcher](#ec-restrictions-watcher) +* [Private Link and SSO to Kibana URLs](#ec-restrictions-traffic-filters-kibana-sso) +* [PDF report generation using Alerts or Watcher webhooks](#ec-restrictions-traffic-filters-watcher) * [Kibana](#ec-restrictions-kibana) -* [APM Agent central configuration with Private Link or traffic filters](#ec-restrictions-apm-traffic-filters) +% * [APM Agent central configuration with Private Link or traffic filters](#ec-restrictions-apm-traffic-filters) * [Fleet with Private Link or traffic filters](#ec-restrictions-fleet-traffic-filters) * [Restoring a snapshot across deployments](#ec-snapshot-restore-enterprise-search-kibana-across-deployments) * [Migrate Fleet-managed {{agents}} across deployments by restoring a snapshot](#ec-migrate-elastic-agent) * [Regions and Availability Zones](#ec-regions-and-availability-zone) -* [Known problems](#ec-known-problems) +% * [Known problems](#ec-known-problems) For limitations related to logging and monitoring, check the [Restrictions and limitations](../../monitor/stack-monitoring/elastic-cloud-stack-monitoring.md#ec-restrictions-monitoring) section of the logging and monitoring page. -Occasionally, we also publish information about [Known problems](#ec-known-problems) with our Elasticsearch Service or the Elastic Stack. +% Occasionally, we also publish information about [Known problems](#ec-known-problems) with our {{ecloud}} or the Elastic Stack. -To learn more about the features that are supported by Elasticsearch Service, check [Elastic Cloud Subscriptions](https://www.elastic.co/cloud/elasticsearch-service/subscriptions?page=docs&placement=docs-body). +To learn more about the features that are supported by {{ecloud}}, check [Elastic Cloud Subscriptions](https://www.elastic.co/cloud/elasticsearch-service/subscriptions?page=docs&placement=docs-body). ## Security [ec-restrictions-security] @@ -36,36 +42,36 @@ To learn more about the features that are supported by Elasticsearch Service, ch ## APIs [ec-restrictions-apis] -The following restrictions apply when using APIs in Elasticsearch Service: +The following restrictions apply when using APIs in {{ecloud}}: -Elasticsearch Service API -: The Elasticsearch Service API is subject to a restriction on the volume of API requests that can be submitted per user, per second. 
Check [Rate limiting](asciidocalypse://docs/cloud/docs/reference/cloud/cloud-hosted/ec-api-rate-limiting.md) for details. +{{ecloud}} API +: The {{ecloud}} API is subject to a restriction on the volume of API requests that can be submitted per user, per second. Check [Rate limiting](asciidocalypse://docs/cloud/docs/reference/cloud-hosted/ec-api-rate-limiting.md) for details. $$$ec-restrictions-apis-elasticsearch$$$ Elasticsearch APIs -: The Elasticsearch APIs do not natively enforce rate limiting. However, all requests to the Elasticsearch cluster are subject to Elasticsearch configuration settings, such as the [network HTTP setting](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/configuration-reference/networking-settings.md#http-settings) `http:max_content_length` which restricts the maximum size of an HTTP request body. This setting has a default value of 100MB, hence restricting API request payloads to that size. This setting is not currently configurable in Elasticsearch Service. For a list of which Elasticsearch settings are supported on Cloud, check [Add Elasticsearch user settings](edit-stack-settings.md). To learn about using the Elasticsearch APIs in Elasticsearch Service, check [Access the Elasticsearch API console](asciidocalypse://docs/cloud/docs/reference/cloud/cloud-hosted/ec-api-console.md). And, for full details about the Elasticsearch APIs and their endpoints, check the [Elasticsearch API reference documentation](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/rest-apis/index.md). +: The Elasticsearch APIs do not natively enforce rate limiting. However, all requests to the Elasticsearch cluster are subject to Elasticsearch configuration settings, such as the [network HTTP setting](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/configuration-reference/networking-settings.md#http-settings) `http:max_content_length` which restricts the maximum size of an HTTP request body. This setting has a default value of 100MB, hence restricting API request payloads to that size. This setting is not currently configurable in {{ecloud}}. For a list of which Elasticsearch settings are supported on Cloud, check [Add Elasticsearch user settings](edit-stack-settings.md). To learn about using the Elasticsearch APIs in {{ecloud}}, check [Access the Elasticsearch API console](asciidocalypse://docs/cloud/docs/reference/cloud-hosted/ec-api-console.md). And, for full details about the Elasticsearch APIs and their endpoints, check the [Elasticsearch API reference documentation](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/rest-apis/index.md). $$$ec-restrictions-apis-kibana$$$ Kibana APIs -: There are no rate limits restricting your use of the Kibana APIs. However, Kibana features are affected by the [Kibana configuration settings](../self-managed/configure.md), not all of which are supported in Elasticsearch Service. For a list of what settings are currently supported, check [Add Kibana user settings](edit-stack-settings.md). For all details about using the Kibana APIs, check the [Kibana API reference documentation](https://www.elastic.co/guide/en/kibana/current/api.html). +: There are no rate limits restricting your use of the Kibana APIs. However, Kibana features are affected by the [Kibana configuration settings](../self-managed/configure.md), not all of which are supported in {{ecloud}}. For a list of what settings are currently supported, check [Add Kibana user settings](edit-stack-settings.md). 
For all details about using the Kibana APIs, check the [Kibana API reference documentation](https://www.elastic.co/guide/en/kibana/current/api.html). ## Transport client [ec-restrictions-transport-client] -* The transport client is not considered thread safe in a cloud environment. We recommend that you use the Java REST client instead. This restriction relates to the fact that your deployments hosted on Elasticsearch Service are behind proxies, which prevent the transport client from communicating directly with Elasticsearch clusters. +* The transport client is not considered thread safe in a cloud environment. We recommend that you use the Java REST client instead. This restriction relates to the fact that your deployments hosted on {{ecloud}} are behind proxies, which prevent the transport client from communicating directly with Elasticsearch clusters. * The transport client is not supported over [private link connections](../../security/aws-privatelink-traffic-filters.md). Use the Java REST client instead, or connect over the public internet. -* The transport client does not work with Elasticsearch clusters at version 7.6 and later that are hosted on Cloud. Transport client continues to work with Elasticsearch clusters at version 7.5 and earlier. Note that the transport client was deprecated with version 7.0 and will be removed with 8.0. +% * The transport client does not work with Elasticsearch clusters at version 7.6 and later that are hosted on Cloud. Transport client continues to work with Elasticsearch clusters at version 7.5 and earlier. Note that the transport client was deprecated with version 7.0 and will be removed with 8.0. ## Elasticsearch and Kibana plugins [ec-restrictions-plugins] * Kibana plugins are not supported. * Elasticsearch plugins, are not enabled by default for security purposes. Please reach out to support if you would like to enable Elasticsearch plugins support on your account. -* Some Elasticsearch plugins do not apply to Elasticsearch Service. For example, you won’t ever need to change discovery, as Elasticsearch Service handles how nodes discover one another. -* In Elasticsearch 5.0 and later, site plugins are no longer supported. This change does not affect the site plugins Elasticsearch Service might provide out of the box, such as Kopf or Head, since these site plugins are serviced by our proxies and not Elasticsearch itself. -* In Elasticsearch 5.0 and later, site plugins such as Kopf and Paramedic are no longer provided. We recommend that you use our [cluster performance metrics](../../monitor/stack-monitoring.md), [X-Pack monitoring features](../../monitor/stack-monitoring.md) and Kibana’s (6.3+) [Index Management UI](https://www.elastic.co/guide/en/elasticsearch/reference/current/index-mgmt.html) if you want more detailed information or perform index management actions. +* Some Elasticsearch plugins do not apply to {{ecloud}}. For example, you won’t ever need to change discovery, as {{ecloud}} handles how nodes discover one another. +% * In Elasticsearch 5.0 and later, site plugins are no longer supported. This change does not affect the site plugins {{ecloud}} might provide out of the box, such as Kopf or Head, since these site plugins are serviced by our proxies and not Elasticsearch itself. +% * In Elasticsearch 5.0 and later, site plugins such as Kopf and Paramedic are no longer provided. 
We recommend that you use our [cluster performance metrics](../../monitor/stack-monitoring.md), [X-Pack monitoring features](../../monitor/stack-monitoring.md) and Kibana’s (6.3+) [Index Management UI](https://www.elastic.co/guide/en/elasticsearch/reference/current/index-mgmt.html) if you want more detailed information or perform index management actions. ## Watcher [ec-restrictions-watcher] @@ -93,20 +99,20 @@ Currently you can’t use SSO to login directly from {{ecloud}} into Kibana endp ## Kibana [ec-restrictions-kibana] * The maximum size of a single {{kib}} instance is 8GB. This means, {{kib}} instances can be scaled up to 8GB before they are scaled out. For example, when creating a deployment with a {{kib}} instance of size 16GB, then 2x8GB instances are created. If you face performance issues with {{kib}} PNG or PDF reports, the recommendations are to create multiple, smaller dashboards to export the data, or to use a third party browser extension for exporting the dashboard in the format you need. -* Running an external Kibana in parallel to Elasticsearch Service’s Kibana instances may cause errors, for example [`Unable to decrypt attribute`](../../../explore-analyze/alerts-cases/alerts/alerting-common-issues.md#rule-cannot-decrypt-api-key), due to a mismatched [`xpack.encryptedSavedObjects.encryptionKey`](asciidocalypse://docs/kibana/docs/reference/configuration-reference/security-settings.md#security-encrypted-saved-objects-settings) as Elasticsearch Service does not [allow users to set](edit-stack-settings.md) nor expose this value. While workarounds are possible, this is not officially supported nor generally recommended. +* Running an external Kibana in parallel to {{ecloud}}’s Kibana instances may cause errors, for example [`Unable to decrypt attribute`](../../../explore-analyze/alerts-cases/alerts/alerting-common-issues.md#rule-cannot-decrypt-api-key), due to a mismatched [`xpack.encryptedSavedObjects.encryptionKey`](asciidocalypse://docs/kibana/docs/reference/configuration-reference/security-settings.md#security-encrypted-saved-objects-settings) as {{ecloud}} does not [allow users to set](edit-stack-settings.md) nor expose this value. While workarounds are possible, this is not officially supported nor generally recommended. -## APM Agent central configuration with PrivateLink or traffic filters [ec-restrictions-apm-traffic-filters] +% ## APM Agent central configuration with PrivateLink or traffic filters [ec-restrictions-apm-traffic-filters] -If you are using APM 7.9.0 or older: +% If you are using APM 7.9.0 or older: -* You cannot use [APM Agent central configuration](/solutions/observability/apps/apm-agent-central-configuration.md) if your deployment is secured by [traffic filters](../../security/traffic-filtering.md). -* If you access your APM deployment over [PrivateLink](../../security/aws-privatelink-traffic-filters.md), to use APM Agent central configuration you need to allow access to the APM deployment over public internet. +% * You cannot use [APM Agent central configuration](/solutions/observability/apps/apm-agent-central-configuration.md) if your deployment is secured by [traffic filters](../../security/traffic-filtering.md). +% * If you access your APM deployment over [PrivateLink](../../security/aws-privatelink-traffic-filters.md), to use APM Agent central configuration you need to allow access to the APM deployment over public internet. 
## Fleet with PrivateLink or traffic filters [ec-restrictions-fleet-traffic-filters] -* You cannot use Fleet 7.13.x if your deployment is secured by [traffic filters](../../security/traffic-filtering.md). Fleet 7.14.0 and later works with traffic filters (both Private Link and IP filters). +% * You cannot use Fleet 7.13.x if your deployment is secured by [traffic filters](../../security/traffic-filtering.md). Fleet 7.14.0 and later works with traffic filters (both Private Link and IP filters). * If you are using Fleet 8.12+, using a remote {{es}} output with a target cluster that has [traffic filters](../../security/traffic-filtering.md) enabled is not currently supported. ## Restoring a snapshot across deployments [ec-snapshot-restore-enterprise-search-kibana-across-deployments] @@ -137,10 +143,10 @@ To make a seamless migration, after restoring from a snapshot there are some add * The AWS `eu-central-2` region is limited to two availability zones for CPU Optimized (ARM) Hardware profile ES data node and warm/cold tier. Deployment creation with three availability zones for Elasticsearch data nodes for hot (for CPU Optimized (ARM) profile), warm and cold tiers is not possible. This includes scaling an existing deployment with one or two AZs to three availability zones. The workaround is to use a different AWS region that allows three availability zones, or to scale existing nodes up within the two availability zones. -## Known problems [ec-known-problems] +% ## Known problems [ec-known-problems] -* There is a known problem affecting clusters with versions 7.7.0 and 7.7.1 due to [a bug in Elasticsearch](https://github.com/elastic/elasticsearch/issues/56739). Although rare, this bug can prevent you from running plans. If this occurs we recommend that you retry the plan, and if that fails please contact support to get your plan through. Because of this bug we recommend you to upgrade to version 7.8 and higher, where the problem has already been addressed. -* A known issue can prevent direct rolling upgrades from Elasticsearch version 5.6.10 to version 6.3.0. As a workaround, we have removed version 6.3.0 from the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body) for new cluster deployments and for upgrading existing ones. If you are affected by this issue, check [Rolling upgrades from 5.6.x to 6.3.0 fails with "java.lang.IllegalStateException: commit doesn’t contain history uuid"](https://elastic.my.salesforce.com/articles/Support_Article/Rolling-upgrades-to-6-3-0-from-5-x-fails-with-java-lang-IllegalStateException-commit-doesn-t-contain-history-uuid?popup=false&id=kA0610000005JFG) in our Elastic Support Portal. If these steps do not work or you do not have access to the Support Portal, you can contact `support@elastic.co`. +% * There is a known problem affecting clusters with versions 7.7.0 and 7.7.1 due to [a bug in Elasticsearch](https://github.com/elastic/elasticsearch/issues/56739). Although rare, this bug can prevent you from running plans. If this occurs we recommend that you retry the plan, and if that fails please contact support to get your plan through. Because of this bug we recommend you to upgrade to version 7.8 and higher, where the problem has already been addressed. +% * A known issue can prevent direct rolling upgrades from Elasticsearch version 5.6.10 to version 6.3.0. 
As a workaround, we have removed version 6.3.0 from the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body) for new cluster deployments and for upgrading existing ones. If you are affected by this issue, check [Rolling upgrades from 5.6.x to 6.3.0 fails with "java.lang.IllegalStateException: commit doesn’t contain history uuid"](https://elastic.my.salesforce.com/articles/Support_Article/Rolling-upgrades-to-6-3-0-from-5-x-fails-with-java-lang-IllegalStateException-commit-doesn-t-contain-history-uuid?popup=false&id=kA0610000005JFG) in our Elastic Support Portal. If these steps do not work or you do not have access to the Support Portal, you can contact `support@elastic.co`. ## Repository Analysis API is unavailable in Elastic Cloud [ec-repository-analyis-unavailable] diff --git a/deploy-manage/deploy/elastic-cloud/serverless.md b/deploy-manage/deploy/elastic-cloud/serverless.md index 09cac7296..b049fd886 100644 --- a/deploy-manage/deploy/elastic-cloud/serverless.md +++ b/deploy-manage/deploy/elastic-cloud/serverless.md @@ -3,23 +3,94 @@ mapped_urls: - https://www.elastic.co/guide/en/serverless/current/index.html - https://www.elastic.co/guide/en/serverless/current/intro.html - https://www.elastic.co/guide/en/serverless/current/general-serverless-status.html +applies_to: + serverless: --- -# Serverless +# {{serverless-full}} -% What needs to be done: Refine +{{serverless-full}} is a fully managed solution that allows you to deploy and use Elastic for your use cases without managing the underlying infrastructure. It represents a shift in how you interact with {{es}}: instead of managing clusters, nodes, data tiers, and scaling, you create **serverless projects** that are fully managed and automatically scaled by Elastic. This abstraction of infrastructure decisions allows you to focus solely on gaining value and insight from your data. -% GitHub issue: https://github.com/elastic/docs-projects/issues/337 +## Serverless overview -% Use migrated content from existing pages that map to this page: +{{serverless-full}} automatically provisions, manages, and scales your {{es}} resources based on your actual usage. Unlike traditional deployments where you need to predict and provision resources in advance, serverless adapts to your workload in real-time, ensuring optimal performance while eliminating the need for manual capacity planning. -% - [ ] ./raw-migrated-files/docs-content/serverless/intro.md -% - [ ] ./raw-migrated-files/docs-content/serverless/general-serverless-status.md -% Notes: also in troubleshooting +Serverless projects use the core components of the {{stack}}, such as {{es}} and {{kib}}, and are based on an architecture that decouples compute and storage. Search and indexing operations are separated, which offers high flexibility for scaling your workloads while ensuring a high level of performance. -⚠️ **This page is a work in progress.** ⚠️ +:::{note} +There are differences between {{es-serverless}} and {{ech}}. For a list of these differences, see [differences between {{ech}} and {{es-serverless}}](../elastic-cloud.md#general-what-is-serverless-elastic-differences-between-serverless-projects-and-hosted-deployments-on-ecloud). 
+::: -The documentation team is working to combine content pulled from the following pages: ## Get started -* [/raw-migrated-files/docs-content/serverless/intro.md](/raw-migrated-files/docs-content/serverless/intro.md) -* [/raw-migrated-files/docs-content/serverless/general-serverless-status.md](/raw-migrated-files/docs-content/serverless/general-serverless-status.md) \ No newline at end of file +Elastic provides three serverless solutions available on {{ecloud}}. Follow these guides to get started with your serverless project: + +* **[{{es-serverless}}](../../../solutions/search/serverless-elasticsearch-get-started.md)**: Build powerful applications and search experiences using a rich ecosystem of vector search capabilities, APIs, and libraries. +* **[{{obs-serverless}}](../../../solutions/observability/get-started/create-an-observability-project.md)**: Monitor your own platforms and services using powerful machine learning and analytics tools with your logs, metrics, traces, and APM data. +* **[{{sec-serverless}}](../../../solutions/security/get-started/create-security-project.md)**: Detect, investigate, and respond to threats with SIEM, endpoint protection, and AI-powered analytics capabilities. + +Afterwards, you can: + +* Learn about the [cloud organization](../../cloud-organization.md) that is the umbrella for all of your Elastic Cloud resources, users, and account settings. +* Learn about how {{es-serverless}} is [billed](../../cloud-organization/billing/serverless-project-billing-dimensions.md). +* Learn how to [create an API key](../../api-keys/serverless-project-api-keys.md). This key provides access to the API that enables you to manage your deployments. +* Learn how to manage [users and roles](../../users-roles/cloud-organization.md) in your {{es-serverless}} deployment. +* Learn more about {{serverless-full}} in [our blog](https://www.elastic.co/blog/elastic-cloud-serverless). + +## Benefits of serverless projects [_benefits_of_serverless_projects] + +**Management free:** Elastic manages the underlying Elastic cluster, so you can focus on your data. With serverless projects, Elastic is responsible for automatic upgrades, data backups, and business continuity. + +**Autoscaled:** To meet your performance requirements, the system automatically adjusts to your workloads. For example, when you have a short time spike on the data you ingest, more resources are allocated for that period of time. When the spike is over, the system uses fewer resources, without any action on your end. + +**Optimized data storage:** Your data is stored in cost-efficient, general storage. A cache layer is available on top of the general storage for recent and frequently queried data that provides faster search speed. The size of the cache layer and the volume of data it holds depend on [settings](../../../deploy-manage/deploy/elastic-cloud/project-settings.md) that you can configure for each project. + +**Dedicated experiences:** All serverless solutions are built on the Elastic Search Platform and include the core capabilities of the Elastic Stack. They also each offer a distinct experience and specific capabilities that help you focus on your data, goals, and use cases. + +**Pay per usage:** Each serverless project type includes product-specific and usage-based pricing. + +**Data and performance control:** Control your project data and query performance against your project data. + * **Data:** Choose the data you want to ingest and the method to ingest it. 
By default, data is stored indefinitely in your project, and you define the retention settings for your data streams. + * **Performance:** For granular control over costs and query performance against your project data, serverless projects come with a set of predefined settings you can edit. + +## Monitor serverless status [general-serverless-status] + +Serverless projects run on cloud platforms, which may undergo changes in availability. When availability changes, Elastic makes sure to provide you with a current service status. + +To learn more about serverless status, see [Service status](../../cloud-organization/service-status.md). + +## Answers to common serverless questions [general-what-is-serverless-elastic-answers-to-common-serverless-questions] + +**Is there migration support between hosted deployments and serverless projects?** + +Migration paths between hosted deployments and serverless projects are currently unsupported. + +**How can I move data to or from serverless projects?** + +We are working on data migration tools! In the interim, [use Logstash](https://www.elastic.co/guide/en/serverless/current/elasticsearch-ingest-data-through-logstash.html) with Elasticsearch input and output plugins to move data to and from serverless projects. + +**How does serverless ensure compatibility between software versions?** + +Connections and configurations are unaffected by upgrades. To ensure compatibility between software versions, quality testing and API versioning are used. + +**Can I convert a serverless project into a hosted deployment, or a hosted deployment into a serverless project?** + +Projects and deployments are based on different architectures, and you are unable to convert. + +**Can I convert a serverless project into a project of a different type?** + +You are unable to convert projects into different project types, but you can create as many projects as you’d like. You will be charged only for your usage. + +**How can I create serverless service accounts?** + +Create API keys for service accounts in your serverless projects. Options to automate the creation of API keys with tools such as Terraform will be available in the future. + +To raise a Support case with Elastic, raise a case for your subscription the same way you do today. In the body of the case, make sure to mention you are working in serverless to ensure we can provide the appropriate support. + +**Where can I learn about pricing for serverless?** + +See serverless pricing information for [Search](https://www.elastic.co/pricing/serverless-search), [Observability](https://www.elastic.co/pricing/serverless-observability), and [Security](https://www.elastic.co/pricing/serverless-security). + +**Can I request backups or restores for my projects?** + +It is not currently possible to request backups or restores for projects, but we are working on data migration tools to better support this. diff --git a/deploy-manage/deploy/elastic-cloud/subscribe-from-marketplace.md b/deploy-manage/deploy/elastic-cloud/subscribe-from-marketplace.md index faca749d1..b77eab803 100644 --- a/deploy-manage/deploy/elastic-cloud/subscribe-from-marketplace.md +++ b/deploy-manage/deploy/elastic-cloud/subscribe-from-marketplace.md @@ -1,18 +1,28 @@ --- +applies_to: + deployment: + ess: ga + serverless: ga mapped_pages: - https://www.elastic.co/guide/en/cloud/current/ec-marketplaces.html --- # Subscribe from a marketplace [ec-marketplaces] -Subscribe to Elasticsearch Service from a marketplace. 
Your subscription gets billed together with other services that you’re already using, and can contribute towards your spend commitment with cloud providers. You can subcribe to Elasticsearch Service from any of the following: +You can subscribe to {{ecloud}} from a marketplace. Your subscription gets billed together with other services that you’re already using, and can contribute towards your spend commitment with cloud providers. + +Trial availability and duration can vary depending on the marketplace. + +When subscribing from a marketplace, your marketplace email is used for your [Elastic account](../../../cloud-account/update-your-email-address.md). * [AWS Marketplace](aws-marketplace.md) * [Azure Marketplace](azure-native-isv-service.md) * [GCP Marketplace](google-cloud-platform-marketplace.md) +* [Heroku](heroku.md) - - +::::{note} +[Serverless projects](https://docs.elastic.co/serverless) are only available for the AWS Marketplace, and are in technical preview on the Azure Marketplace. Support for GCP Marketplace will be added in the near future. +:::: diff --git a/deploy-manage/deploy/elastic-cloud/switch-from-apm-to-integrations-server-payload.md b/deploy-manage/deploy/elastic-cloud/switch-from-apm-to-integrations-server-payload.md index 698fccd79..60a0c7d62 100644 --- a/deploy-manage/deploy/elastic-cloud/switch-from-apm-to-integrations-server-payload.md +++ b/deploy-manage/deploy/elastic-cloud/switch-from-apm-to-integrations-server-payload.md @@ -1,11 +1,14 @@ --- +applies_to: + deployment: + ess: ga mapped_pages: - https://www.elastic.co/guide/en/cloud/current/ec-integrations-server-apm-switch.html --- # Switch from APM to Integrations Server payload [ec-integrations-server-apm-switch] -This example shows how to use the Elasticsearch Service RESTful API to switch from using [APM & Fleet Server](#ec-manage-apm-and-fleet) to [Integrations Server](manage-integrations-server.md). +This example shows how to use the {{ecloud}} RESTful API to switch from using [APM & Fleet Server](#ec-manage-apm-and-fleet) to [Integrations Server](manage-integrations-server.md). ### Requirements [ec_requirements_3] diff --git a/deploy-manage/deploy/elastic-cloud/tools-apis.md b/deploy-manage/deploy/elastic-cloud/tools-apis.md index 1d32278f7..705c19c46 100644 --- a/deploy-manage/deploy/elastic-cloud/tools-apis.md +++ b/deploy-manage/deploy/elastic-cloud/tools-apis.md @@ -1,4 +1,8 @@ --- +applies_to: + deployment: + ess: ga + serverless: ga mapped_urls: - https://www.elastic.co/guide/en/serverless/current/elasticsearch-http-apis.html - https://www.elastic.co/guide/en/tpec/current/index.html @@ -18,8 +22,110 @@ mapped_urls: % - [ ] https://www.elastic.co/guide/en/tpec/current/index.html % Notes: reference only, this page wasn't migrated, but you can pull from the live URL if needed. -⚠️ **This page is a work in progress.** ⚠️ +## REST APIs to orchestrate {{ecloud}} -The documentation team is working to combine content pulled from the following pages: +The following APIs allow you to manage your {{ecloud}} organization, users, security, billing and resources. 
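As a quick illustration, the following sketch lists the hosted deployments that an API key can access. It assumes the key is exported as `EC_API_KEY` and follows the same endpoint and header conventions as the other API examples in this documentation.

```sh
# Minimal sketch: list the hosted deployments visible to this API key.
curl -XGET \
  -H "Authorization: ApiKey $EC_API_KEY" \
  "https://api.elastic-cloud.com/api/v1/deployments"
```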
-* [/raw-migrated-files/docs-content/serverless/elasticsearch-http-apis.md](/raw-migrated-files/docs-content/serverless/elasticsearch-http-apis.md) \ No newline at end of file +:::::{tab-set} +:group: serverless-hosted + +::::{tab-item} {{serverless-short}} +:sync: serverless + +You can use the [{{serverless-full}} APIs](https://www.elastic.co/docs/api/doc/elastic-cloud-serverless) to manage your {{serverless-full}} projects, your organization, and its users. + +:::: +::::{tab-item} {{ech}} +:sync: hosted + +You can use the [{{ecloud}} API](https://www.elastic.co/docs/api/doc/cloud/) to manage your hosted deployments and all of the resources associated with them. This includes performing deployment CRUD operations, scaling or autoscaling resources, and managing traffic filters, deployment extensions, remote clusters, and Elastic Stack versions. You can also access cost data by deployment and by organization. + +:::: + +::::: + +## REST APIs to interact with data and solution features + +The following APIs allow you to interact with your {{es}} cluster, its data, and the features available to you in your hosted deployment or serverless project. + +Note that some [restrictions](/deploy-manage/deploy/elastic-cloud/restrictions-known-problems.md#ec-restrictions-apis-elasticsearch) apply when using these APIs on {{ecloud}}. + +:::::{tab-set} +:group: serverless-hosted +::::{tab-item} {{serverless-short}} +:sync: serverless + +**API references** + +The following APIs are available for {{es-serverless}} users. These links will take you to the autogenerated API reference documentation. + +- [Elasticsearch Serverless APIs](https://www.elastic.co/docs/api/doc/elasticsearch-serverless): Use these APIs to index, manage, search, and analyze your data in {{es-serverless}}. + + ::::{tip} + Learn how to [connect to your {{es-serverless}} endpoint](/solutions/search/get-started.md). + :::: + +- [Kibana Serverless APIs](https://www.elastic.co/docs/api/doc/serverless): Use these APIs to manage resources such as connectors, data views, and saved objects for your {{serverless-full}} project. + + +**Additional API information** + +- [{{es}} API conventions](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/rest-apis/api-conventions.md): Reference information about headers and request body conventions for {{es-serverless}} REST APIs. +:::: + +::::{tab-item} {{ech}} +:sync: hosted + +- [Elasticsearch APIs](https://www.elastic.co/docs/api/doc/elasticsearch/): This set of APIs allows you to interact directly with the Elasticsearch nodes in your deployment. You can ingest data, run search queries, check the health of your clusters, manage snapshots, and more. +- [Kibana APIs](https://www.elastic.co/docs/api/doc/kibana/): Many Kibana features can be accessed through these APIs, including Kibana objects, patterns, and dashboards, as well as user roles and user sessions. You can use these APIs to configure alerts and actions, and to access health details for the Kibana Task Manager. +:::: + +::::: + +## {{ecloud}} API Console +```{applies_to} deployment: ess: ga serverless: unavailable ``` + +For each deployment, an **API Console** page is available from the {{ecloud}} Console for you to execute queries using the available APIs. You can find this console when selecting a specific deployment to manage. From there, the API Console is available under the **{{es}}** page. 
+ +:::{note} +This API Console is different from the [Dev Tools Console](/explore-analyze/query-filter/tools/console.md) available in each deployment, from which you can call {{es}} and {{kib}} APIs. On the {{ecloud}} API Console, you cannot run Kibana APIs. +::: + +## ECCTL - Command-line interface for {{ecloud}} + +ecctl is the command-line interface for {{ecloud}} APIs. It wraps typical operations commonly needed by operators within a single command line tool. + +Benefits of ecctl: + +- Easier to use than the Cloud UI or using the RESTful API directly +- Helps you automate the deployment lifecycle +- Provides a foundation for integration with other tools + +Find more details in the [ecctl documentation](https://www.elastic.co/guide/en/ecctl/current/index.html). + +## Monitor your deployments with AutoOps +```{applies_to} +deployment: + ess: ga +serverless: unavailable +``` + +AutoOps significantly simplifies cluster management for your {{ech}} deployments with performance recommendations, resource utilization visibility, real-time issue detection and resolution paths. Find more details in [](/deploy-manage/monitor/autoops.md) + + +## Provision hosted deployments with Terraform +```{applies_to} +deployment: + ess: ga +serverless: unavailable +``` + +The Elastic Cloud Terraform provider allows you to provision {{ech}} deployments on any Elastic Cloud platform, whether it’s {{ecloud}} or Elastic Cloud Enterprise. + +The provider lets you manage Elastic Cloud deployments as code, and introduce DevOps-driven methodologies to manage and deploy the Elastic Stack and solutions. + +To get started, see the [Elastic Cloud Terraform provider documentation](https://registry.terraform.io/providers/elastic/ec/latest/docs). diff --git a/deploy-manage/deploy/elastic-cloud/upload-custom-plugins-bundles.md b/deploy-manage/deploy/elastic-cloud/upload-custom-plugins-bundles.md index de4ef30bf..0b38fef93 100644 --- a/deploy-manage/deploy/elastic-cloud/upload-custom-plugins-bundles.md +++ b/deploy-manage/deploy/elastic-cloud/upload-custom-plugins-bundles.md @@ -1,4 +1,7 @@ --- +applies_to: + deployment: + ess: ga mapped_urls: - https://www.elastic.co/guide/en/cloud/current/ec-custom-bundles.html - https://www.elastic.co/guide/en/cloud-heroku/current/ech-custom-bundles.html @@ -6,28 +9,243 @@ mapped_urls: # Upload custom plugins and bundles -% What needs to be done: Lift-and-shift +There are several cases where you might need your own files to be made available to your {{es}} cluster’s nodes: -% Use migrated content from existing pages that map to this page: +* Your own custom plugins, or third-party plugins that are not amongst the [officially available plugins](../../../deploy-manage/deploy/elastic-cloud/add-plugins-extensions.md). +* Custom dictionaries, such as synonyms, stop words, compound words, and so on. +* Cluster configuration files, such as an Identity Provider metadata file used when you [secure your clusters with SAML](../../../deploy-manage/users-roles/cluster-or-deployment-auth/saml.md). -% - [ ] ./raw-migrated-files/cloud/cloud/ec-custom-bundles.md -% - [ ] ./raw-migrated-files/cloud/cloud-heroku/ech-custom-bundles.md +To facilitate this, we make it possible to upload a ZIP file that contains the files you want to make available. Uploaded files are stored using Amazon’s highly-available S3 service. This is necessary so we do not have to rely on the availability of third-party services, such as the official plugin repository, when provisioning nodes. 
-% Internal links rely on the following IDs being on this page (e.g. as a heading ID, paragraph ID, etc): +Custom plugins and bundles are collectively referred to as extensions. -$$$ec-add-your-plugin$$$ +## Before you begin [ec_before_you_begin_7] -$$$ec-update-bundles-and-plugins$$$ +The selected plugins/bundles are downloaded and provided when a node starts. Changing a plugin does not change it for nodes already running it. Refer to [Updating Plugins and Bundles](../../../deploy-manage/deploy/elastic-cloud/upload-custom-plugins-bundles.md#ec-update-bundles-and-plugins). -$$$ec-update-bundles$$$ +With great power comes great responsibility: your plugins can extend your deployment with new functionality, but also break it. Be careful. We obviously cannot guarantee that your custom code works. -$$$ech-add-your-plugin$$$ +::::{important} +You cannot edit or delete a custom extension after it has been used in a deployment. To remove it from your deployment, you can disable the extension and update your deployment configuration. +:::: -$$$ech-update-bundles-and-plugins$$$ -$$$ech-update-bundles$$$ +Uploaded files cannot be bigger than 20MB for most subscription levels, for Platinum and Enterprise the limit is 8GB. -**This page is a work in progress.** The documentation team is working to combine content pulled from the following pages: +It is important that plugins and dictionaries that you reference in mappings and configurations are available at all times. For example, if you try to upgrade {{es}} and de-select a dictionary that is referenced in your mapping, the new nodes will be unable to recover the cluster state and function. This is true even if the dictionary is referenced by an empty index you do not actually use. -* [/raw-migrated-files/cloud/cloud/ec-custom-bundles.md](/raw-migrated-files/cloud/cloud/ec-custom-bundles.md) -* [/raw-migrated-files/cloud/cloud-heroku/ech-custom-bundles.md](/raw-migrated-files/cloud/cloud-heroku/ech-custom-bundles.md) \ No newline at end of file + +## Prepare your files for upload [ec-prepare-custom-bundles] + +Plugins are uploaded as ZIP files. You need to choose whether your uploaded file should be treated as a *plugin* or as a *bundle*. Bundles are not installed as plugins. If you need to upload both a custom plugin and custom dictionaries, upload them separately. + +To prepare your files, create one of the following: + +Plugins +: A plugin is a ZIP file that contains a plugin descriptor file and binaries. + + The plugin descriptor file is called either `stable-plugin-descriptor.properties` for plugins built against the stable plugin API, or `plugin-descriptor.properties` for plugins built against the classic plugin API. A plugin ZIP file should only contain one plugin descriptor file. + + {{es}} assumes that the uploaded ZIP file contains binaries. If it finds any source code, it fails with an error message, causing provisioning to fail. Make sure you upload binaries, and not source code. + + ::::{note} + Plugins larger than 5GB should have the plugin descriptor file at the top of the archive. This order can be achieved by specifying at time of creating the ZIP file: + + ```sh + zip -r name-of-plugin.zip name-of-descriptor-file.properties * + ``` + + :::: + + +Bundles +: The entire content of a bundle is made available to the node by extracting to the {{es}} container’s `/app/config` directory. This is useful to make custom dictionaries available. Dictionaries should be placed in a `/dictionaries` folder in the root path of your ZIP file. 
+ + Here are some examples of bundles: + + **Script** + + ```text + $ tree . + . + └── scripts + └── test.js + ``` + + The script `test.js` can be referred in queries as `"script": "test"`. + + **Dictionary of synonyms** + + ```text + $ tree . + . + └── dictionaries + └── synonyms.txt + ``` + + The dictionary `synonyms.txt` can be used as `synonyms.txt` or using the full path `/app/config/synonyms.txt` in the `synonyms_path` of the `synonym-filter`. + + To learn more about analyzing with synonyms, check [Synonym token filter](https://www.elastic.co/guide/en/elasticsearch/reference/current/analysis-synonym-tokenfilter.html) and [Formatting Synonyms](https://www.elastic.co/guide/en/elasticsearch/guide/2.x/synonym-formats.html). + + **GeoIP database bundle** + + ```text + $ tree . + . + └── ingest-geoip + └── MyGeoLite2-City.mmdb + ``` + + Note that the extension must be `-(City|Country|ASN).mmdb`, and it must be a different name than the original file name `GeoLite2-City.mmdb` which already exists in {{ech}}. To use this bundle, you can refer it in the GeoIP ingest pipeline as `MyGeoLite2-City.mmdb` under `database_file`. + + + +## Add your extension [ec-add-your-plugin] + +You must upload your files before you can apply them to your cluster configuration: + +1. Log in to the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). +2. Find your deployment on the home page in the **Hosted deployments** card and select **Manage** to access it directly. Or, select **Hosted deployments** to go to the **Deployments** page to view all of your deployments. +3. Under **Features**, select **Extensions**. +4. Select **Upload extension**. +5. Complete the extension fields, including the {{es}} version. + + * Plugins must use full version notation down to the patch level, such as `7.10.1`. You cannot use wildcards. This version notation should match the version in your plugin’s plugin descriptor file. For classic plugins, it should also match the target deployment version. + * Bundles should specify major or minor versions with wildcards, such as `7.*` or `*`. Wildcards are recommended to ensure the bundle is compatible across all versions of these releases. + +6. Select the extension **Type**. +7. Under **Plugin file**, choose the file to upload. +8. Select **Create extension**. + +After creating your extension, you can [enable them for existing {{es}} deployments](../../../deploy-manage/deploy/elastic-cloud/upload-custom-plugins-bundles.md#ec-update-bundles) or enable them when creating new deployments. + +::::{note} +Creating extensions larger than 200MB should be done through the extensions API. + +Refer to [Managing plugins and extensions through the API](../../../deploy-manage/deploy/elastic-cloud/manage-plugins-extensions-through-api.md) for more details. + +:::: + + + +## Update your deployment configuration [ec-update-bundles] + +After uploading your files, you can select to enable them when creating a new {{es}} deployment. For existing deployments, you must update your deployment configuration to use the new files: + +1. Log in to the [{{ech}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). +2. Find your deployment on the home page in the **Hosted deployments** card and select **Manage** to access it directly. Or, select **Hosted deployments** to go to the **Deployments** page to view all of your deployments. + + On the **Deployments** page you can narrow your deployments by name, ID, or choose from several other filters. 
To customize your view, use a combination of filters, or change the format from a grid to a list. +3. From the **Actions** dropdown, select **Edit deployment**. +4. Select **Manage user settings and extensions**. +5. Select the **Extensions** tab. +6. Select the custom extension. +7. Select **Back**. +8. Select **Save**. The {{es}} cluster is then updated with new nodes that have the plugin installed. + + +## Update your extension [ec-update-bundles-and-plugins] + +While you can update the ZIP file for any plugin or bundle, these are downloaded and made available only when a node is started. + +You should be careful when updating an extension. If you update an existing extension with a new file, and if the file is broken for some reason, all the nodes could be in trouble, as a node restart or move could make even HA clusters unavailable. + +If the extension is not in use by any deployments, then you are free to update the files or extension details as much as you like. However, if the extension is in use, and if you need to update it with a new file, it is recommended to [create a new extension](../../../deploy-manage/deploy/elastic-cloud/upload-custom-plugins-bundles.md#ec-add-your-plugin) rather than updating the existing one that is in use. + +By following this method, only one node would be down even if the extension file is faulty. This would ensure that HA clusters remain available. + +This method also supports having a test/staging deployment to test out the extension changes before applying them on a production deployment. + +You may delete the old extension after updating the deployment successfully. + +To update an extension with a new file version: + +1. Prepare a new plugin or bundle. +2. On the **Extensions** page, [upload a new extension](../../../deploy-manage/deploy/elastic-cloud/upload-custom-plugins-bundles.md#ec-add-your-plugin). +3. Make your new files available by uploading them. +4. Find your deployment on the home page in the **Hosted deployments** card and select **Manage** to access it directly. Or, select **Hosted deployments** to go to the **Deployments** page to view all of your deployments. + + On the **Deployments** page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. + +5. From the **Actions** dropdown, select **Edit deployment**. +6. Select **Manage user settings and extensions**. +7. Select the **Extensions** tab. +8. Select the new extension and de-select the old one. +9. Select **Back**. +10. Select **Save**. + + +## How to use the extensions API [ec-extension-api-usage-guide] + +::::{note} +For a full set of examples, check [Managing plugins and extensions through the API](../../../deploy-manage/deploy/elastic-cloud/manage-plugins-extensions-through-api.md). +:::: + + +If you don’t already have one, create an [API key](../../../deploy-manage/api-keys/elastic-cloud-api-keys.md). + +There are two ways that you can use the extensions API to upload a file. 
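The request examples in the next sections read the key from an `EC_API_KEY` environment variable, so you can export it once in your shell before running them. The value shown is a placeholder.

```sh
# Placeholder value: replace with the API key generated in the Elastic Cloud Console.
export EC_API_KEY="your-api-key-here"
```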
+ +### Method 1: Use HTTP `POST` to create metadata and then upload the file using HTTP `PUT` [ec_method_1_use_http_post_to_create_metadata_and_then_upload_the_file_using_http_put] + +Step 1: Create metadata + +```text +curl -XPOST \ +-H "Authorization: ApiKey $EC_API_KEY" \ +-H 'content-type:application/json' \ +https://api.elastic-cloud.com/api/v1/deployments/extensions \ +-d'{ + "name" : "synonyms-v1", + "description" : "The best synonyms ever", + "extension_type" : "bundle", + "version" : "7.*" +}' +``` + +Step 2: Upload the file + +```text +curl -XPUT \ +-H "Authorization: ApiKey $EC_API_KEY" \ +"https://api.elastic-cloud.com/api/v1/deployments/extensions/$extension_id" \ +-T /tmp/synonyms.zip +``` + +If you are using a client that, unlike `curl`, does not handle `application/zip` natively, be sure to use the equivalent of the following request with `content-type: multipart/form-data`: + +```text +curl -XPUT \ +-H 'Expect:' \ +-H 'content-type: multipart/form-data' \ +-H "Authorization: ApiKey $EC_API_KEY" \ +"https://api.elastic-cloud.com/api/v1/deployments/extensions/$extension_id" -F "file=@/tmp/synonyms.zip" +``` + +For example, using the Python `requests` module, the `PUT` request would be as follows: + +```text +import requests +files = {'file': open('/tmp/synonyms.zip','rb')} +r = requests.put('https://api.elastic-cloud.com/api/v1/deployments/extensions/{}'.format(extension_id), files=files, headers= {'Authorization': 'ApiKey {}'.format(EC_API_KEY)}) +``` + + +### Method 2: Single step. Use a `download_url` so that the API server downloads the object at the specified URL [ec_method_2_single_step_use_a_download_url_so_that_the_api_server_downloads_the_object_at_the_specified_url] + +```text +curl -XPOST \ +-H "Authorization: ApiKey $EC_API_KEY" \ +-H 'content-type:application/json' \ +https://api.elastic-cloud.com/api/v1/deployments/extensions \ +-d'{ + "name" : "analysis_icu", + "description" : "Helpful description", + "extension_type" : "plugin", + "version" : "7.13.2", + "download_url": "https://artifacts.elastic.co/downloads/elasticsearch-plugins/analysis-icu/analysis-icu-7.13.2.zip" +}' +``` + +Please refer to the [Extensions API reference](https://www.elastic.co/docs/api/doc/cloud/group/endpoint-extensions) for the complete set of HTTP methods and payloads. diff --git a/deploy-manage/deploy/self-managed/access.md b/deploy-manage/deploy/self-managed/access.md index adceacdd0..efaed030d 100644 --- a/deploy-manage/deploy/self-managed/access.md +++ b/deploy-manage/deploy/self-managed/access.md @@ -10,7 +10,7 @@ The fastest way to access {{kib}} is to use our hosted {{es}} Service. If you [i ## Set up on cloud [_set_up_on_cloud] -There’s no faster way to get started than with our hosted {{ess}} on Elastic Cloud: +There’s no faster way to get started than with {{ecloud}}: 1. [Get a free trial](https://cloud.elastic.co/registration?page=docs&placement=docs-body). 2. Log into [Elastic Cloud](https://cloud.elastic.co?page=docs&placement=docs-body). diff --git a/deploy-manage/deploy/self-managed/configure-elasticsearch.md b/deploy-manage/deploy/self-managed/configure-elasticsearch.md index b3b9c3a6a..c6b267cb5 100644 --- a/deploy-manage/deploy/self-managed/configure-elasticsearch.md +++ b/deploy-manage/deploy/self-managed/configure-elasticsearch.md @@ -101,7 +101,7 @@ If you configure the same setting using multiple methods, {{es}} applies the set For example, you can apply a transient setting to override a persistent setting or `elasticsearch.yml` setting.
However, a change to an `elasticsearch.yml` setting will not override a defined transient or persistent setting. ::::{tip} -If you use {{ess}}, use the [user settings](../elastic-cloud/edit-stack-settings.md) feature to configure all cluster settings. This method lets {{ess}} automatically reject unsafe settings that could break your cluster. +If you use {{ech}}, use the [user settings](../elastic-cloud/edit-stack-settings.md) feature to configure all cluster settings. This method lets {{ech}} automatically reject unsafe settings that could break your cluster. If you run {{es}} on your own hardware, use the [cluster update settings API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-put-settings) to configure dynamic cluster settings. Only use `elasticsearch.yml` for static cluster settings and node settings. The API doesn’t require a restart and ensures a setting’s value is the same on all nodes. diff --git a/deploy-manage/deploy/self-managed/configure.md b/deploy-manage/deploy/self-managed/configure.md index 18b639d83..4f4a5b4cb 100644 --- a/deploy-manage/deploy/self-managed/configure.md +++ b/deploy-manage/deploy/self-managed/configure.md @@ -102,7 +102,7 @@ $$$elasticsearch-pingTimeout$$$ `elasticsearch.pingTimeout` : Time in milliseconds to wait for {{es}} to respond to pings. **Default: the value of the [`elasticsearch.requestTimeout`](#elasticsearch-requestTimeout) setting** $$$elasticsearch-requestHeadersWhitelist$$$ `elasticsearch.requestHeadersWhitelist` -: List of {{kib}} client-side headers to send to {{es}}. To send **no** client-side headers, set this value to [] (an empty list). Removing the `authorization` header from being whitelisted means that you cannot use [basic authentication](../../users-roles/cluster-or-deployment-auth/user-authentication.md#basic-authentication) in {{kib}}. **Default: `[ 'authorization', 'es-client-authentication' ]`** +: List of {{kib}} client-side headers to send to {{es}}. To send **no** client-side headers, set this value to [] (an empty list). Removing the `authorization` header from being whitelisted means that you cannot use [basic authentication](/deploy-manage/users-roles/cluster-or-deployment-auth/kibana-authentication.md) in {{kib}}. **Default: `[ 'authorization', 'es-client-authentication' ]`** $$$elasticsearch-requestTimeout$$$ `elasticsearch.requestTimeout` : Time in milliseconds to wait for responses from the back end or {{es}}. This value must be a positive integer. **Default: `30000`** @@ -178,10 +178,10 @@ $$$elasticsearch-user-passwd$$$ `elasticsearch.username` and `elasticsearch.pass $$$elasticsearch-service-account-token$$$ `elasticsearch.serviceAccountToken` : If your {{es}} is protected with basic authentication, this token provides the credentials that the {{kib}} server uses to perform maintenance on the {{kib}} index at startup. This setting is an alternative to `elasticsearch.username` and `elasticsearch.password`. -`unifiedSearch.autocomplete.valueSuggestions.timeout` ![logo cloud](https://doc-icons.s3.us-east-2.amazonaws.com/logo_cloud.svg "Supported on {{ess}}") +`unifiedSearch.autocomplete.valueSuggestions.timeout` ![logo cloud](https://doc-icons.s3.us-east-2.amazonaws.com/logo_cloud.svg "Supported on {{ech}}") : Time in milliseconds to wait for autocomplete suggestions from {{es}}. This value must be a whole number greater than zero. 
**Default: `"1000"`** -`unifiedSearch.autocomplete.valueSuggestions.terminateAfter` ![logo cloud](https://doc-icons.s3.us-east-2.amazonaws.com/logo_cloud.svg "Supported on {{ess}}") +`unifiedSearch.autocomplete.valueSuggestions.terminateAfter` ![logo cloud](https://doc-icons.s3.us-east-2.amazonaws.com/logo_cloud.svg "Supported on {{ech}}") : Maximum number of documents loaded by each shard to generate autocomplete suggestions. This value must be a whole number greater than zero. **Default: `"100000"`** ::::{note} @@ -195,7 +195,7 @@ $$$logging-root$$$ `logging.root` $$$logging-root-appenders$$$ `logging.root.appenders` : A list of logging appenders to forward the root level logger instance to. By default `root` is configured with the `default` appender that logs to stdout with a `pattern` layout. This is the configuration that all custom loggers will use unless they’re re-configured explicitly. You can override the default behavior by configuring a different [appender](../../monitor/logging-configuration/kibana-logging.md#logging-appenders) to apply to `root`. -$$$logging-root-level$$$ `logging.root.level` ![logo cloud](https://doc-icons.s3.us-east-2.amazonaws.com/logo_cloud.svg "Supported on {{ess}}") +$$$logging-root-level$$$ `logging.root.level` ![logo cloud](https://doc-icons.s3.us-east-2.amazonaws.com/logo_cloud.svg "Supported on {{ech}}") : Level at which a log record should be logged. Supported levels are: *all*, *fatal*, *error*, *warn*, *info*, *debug*, *trace*, *off*. Levels are ordered from *all* (highest) to *off* and a log record will be logged it its level is higher than or equal to the level of its logger, otherwise the log record is ignored. Use this value to [change the overall log level](../../monitor/logging-configuration/kibana-log-settings-examples.md#change-overall-log-level). **Default: `info`**. ::::{tip} @@ -225,25 +225,25 @@ $$$logging-loggers$$$ `logging.loggers[]` `logging.appenders[]` : [Appenders](../../monitor/logging-configuration/kibana-logging.md#logging-appenders) define how and where log messages are displayed (eg. **stdout** or console) and stored (eg. file on the disk). -`map.includeElasticMapsService` ![logo cloud](https://doc-icons.s3.us-east-2.amazonaws.com/logo_cloud.svg "Supported on {{ess}}") +`map.includeElasticMapsService` ![logo cloud](https://doc-icons.s3.us-east-2.amazonaws.com/logo_cloud.svg "Supported on {{ech}}") : Set to `false` to disable connections to Elastic Maps Service. When `includeElasticMapsService` is turned off, only tile layer configured by [`map.tilemap.url`](#tilemap-url) is available in [Maps](../../../explore-analyze/visualize/maps.md). **Default: `true`** `map.emsUrl` : Specifies the URL of a self hosted [{{hosted-ems}}](../../../explore-analyze/visualize/maps/maps-connect-to-ems.md#elastic-maps-server) -$$$tilemap-settings$$$ `map.tilemap.options.attribution` ![logo cloud](https://doc-icons.s3.us-east-2.amazonaws.com/logo_cloud.svg "Supported on {{ess}}") +$$$tilemap-settings$$$ `map.tilemap.options.attribution` ![logo cloud](https://doc-icons.s3.us-east-2.amazonaws.com/logo_cloud.svg "Supported on {{ech}}") : The map attribution string. Provide attributions in markdown and use `\|` to delimit attributions, for example: `"[attribution 1](https://www.attribution1)\|[attribution 2](https://www.attribution2)"`. 
**Default: `"© [Elastic Maps Service](https://www.elastic.co/elastic-maps-service)"`** -$$$tilemap-max-zoom$$$ `map.tilemap.options.maxZoom` ![logo cloud](https://doc-icons.s3.us-east-2.amazonaws.com/logo_cloud.svg "Supported on {{ess}}") +$$$tilemap-max-zoom$$$ `map.tilemap.options.maxZoom` ![logo cloud](https://doc-icons.s3.us-east-2.amazonaws.com/logo_cloud.svg "Supported on {{ech}}") : The maximum zoom level. **Default: `10`** -$$$tilemap-min-zoom$$$ `map.tilemap.options.minZoom` ![logo cloud](https://doc-icons.s3.us-east-2.amazonaws.com/logo_cloud.svg "Supported on {{ess}}") +$$$tilemap-min-zoom$$$ `map.tilemap.options.minZoom` ![logo cloud](https://doc-icons.s3.us-east-2.amazonaws.com/logo_cloud.svg "Supported on {{ech}}") : The minimum zoom level. **Default: `1`** -$$$tilemap-subdomains$$$ `map.tilemap.options.subdomains` ![logo cloud](https://doc-icons.s3.us-east-2.amazonaws.com/logo_cloud.svg "Supported on {{ess}}") +$$$tilemap-subdomains$$$ `map.tilemap.options.subdomains` ![logo cloud](https://doc-icons.s3.us-east-2.amazonaws.com/logo_cloud.svg "Supported on {{ech}}") : An array of subdomains used by the tile service. Specify the position of the subdomain the URL with the token `{s}`. -$$$tilemap-url$$$ `map.tilemap.url` ![logo cloud](https://doc-icons.s3.us-east-2.amazonaws.com/logo_cloud.svg "Supported on {{ess}}") +$$$tilemap-url$$$ `map.tilemap.url` ![logo cloud](https://doc-icons.s3.us-east-2.amazonaws.com/logo_cloud.svg "Supported on {{ech}}") : The URL to the service that {{kib}} uses as the default basemap in [maps](../../../explore-analyze/visualize/maps.md) and [vega maps](../../../explore-analyze/visualize/custom-visualizations-with-vega.md#vega-with-a-map). By default, {{kib}} sets a basemap from the [Elastic Maps Service](../../../explore-analyze/visualize/maps/maps-connect-to-ems.md), but users can point to their own Tile Map Service. For example: `"https://tiles.elastic.co/v2/default/{{z}}/{x}/{{y}}.png?elastic_tile_service_tos=agree&my_app_name=kibana"` `migrations.batchSize` @@ -330,7 +330,7 @@ $$$server-securityResponseHeaders-disableEmbedding$$$`server.securityResponseHea $$$server-securityResponseHeaders-crossOriginOpenerPolicy$$$ `server.securityResponseHeaders.crossOriginOpenerPolicy` : Controls whether the [`Cross-Origin-Opener-Policy`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Cross-Origin-Opener-Policy) header is used in all responses to the client from the {{kib}} server, and specifies what value is used. Allowed values are `unsafe-none`, `same-origin-allow-popups`, `same-origin`, or `null`. To disable, set to `null`. **Default:** `"same-origin"` -`server.customResponseHeaders` ![logo cloud](https://doc-icons.s3.us-east-2.amazonaws.com/logo_cloud.svg "Supported on {{ess}}") +`server.customResponseHeaders` ![logo cloud](https://doc-icons.s3.us-east-2.amazonaws.com/logo_cloud.svg "Supported on {{ech}}") : Header names and values to send on all responses to the client from the {{kib}} server. **Default: `{}`** $$$server-shutdownTimeout$$$ `server.shutdownTimeout` @@ -480,7 +480,7 @@ $$$settings-telemetry-optIn$$$ `telemetry.optIn` This setting can be changed at any time in [Advanced Settings](asciidocalypse://docs/kibana/docs/reference/advanced-settings.md). To prevent users from changing it, set [`telemetry.allowChangingOptInStatus`](#telemetry-allowChangingOptInStatus) to `false`. 
**Default: `true`** -`vis_type_vega.enableExternalUrls` ![logo cloud](https://doc-icons.s3.us-east-2.amazonaws.com/logo_cloud.svg "Supported on {{ess}}") +`vis_type_vega.enableExternalUrls` ![logo cloud](https://doc-icons.s3.us-east-2.amazonaws.com/logo_cloud.svg "Supported on {{ech}}") : Set this value to true to allow Vega to use any URL to access external data sources and images. When false, Vega can only get data from {{es}}. **Default: `false`** `xpack.ccr.ui.enabled` @@ -523,5 +523,5 @@ $$$settings-explore-data-in-chart$$$ `xpack.discoverEnhanced.actions.exploreData `xpack.upgrade_assistant.ui.enabled` : Set this value to false to disable the Upgrade Assistant UI. **Default: true** -`i18n.locale` ![logo cloud](https://doc-icons.s3.us-east-2.amazonaws.com/logo_cloud.svg "Supported on {{ess}}") -: Set this value to change the {{kib}} interface language. Valid locales are: `en`, `zh-CN`, `ja-JP`, `fr-FR`. **Default: `en`** +`i18n.locale` ![logo cloud](https://doc-icons.s3.us-east-2.amazonaws.com/logo_cloud.svg "Supported on {{ech}}") +: Set this value to change the {{kib}} interface language. Valid locales are: `en`, `zh-CN`, `ja-JP`, `fr-FR`. **Default: `en`** \ No newline at end of file diff --git a/deploy-manage/deploy/self-managed/plugins.md b/deploy-manage/deploy/self-managed/plugins.md index c77262fd2..1bb9edc9b 100644 --- a/deploy-manage/deploy/self-managed/plugins.md +++ b/deploy-manage/deploy/self-managed/plugins.md @@ -9,5 +9,5 @@ Plugins are a way to enhance the basic Elasticsearch functionality in a custom m For information about selecting and installing plugins, see [{{es}} Plugins and Integrations](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch-plugins/index.md). -For information about developing your own plugin, see [Help for plugin authors](asciidocalypse://docs/elasticsearch/docs/extend/create-elasticsearch-plugins/index.md). +For information about developing your own plugin, see [Help for plugin authors](asciidocalypse://docs/elasticsearch/docs/extend/index.md). diff --git a/deploy-manage/license.md b/deploy-manage/license.md index bf0bb782d..5a31b963b 100644 --- a/deploy-manage/license.md +++ b/deploy-manage/license.md @@ -1,11 +1,35 @@ --- -mapped_pages: - - https://www.elastic.co/guide/en/cloud/current/ec-licensing.html +applies_to: + deployment: + ece: + ech: + ess: + self: + serverless: --- -# Manage your license [ec-licensing] +# Manage your license -For more information on what is available with different subscription levels, check [Elasticsearch Service Subscriptions](https://www.elastic.co/elasticsearch/service/pricing). You are entitled to use all of the features in Elasticsearch Service that match your subscription level. Please use them to your heart’s content. +Your Elastic license determines which features are available and what level of support you receive. -Your subscription determines [which features are available](https://www.elastic.co/subscriptions/cloud). For example, machine learning requires a Platinum or Private subscription and is not available if you upgrade to a Gold subscription. Similarly, SAML Single Sign-On requires an Enterprise subscription. +Depending on your deployment type, licenses and subscriptions are applied at different levels: +* **{{ecloud}}, {{ece}}, and {{eck}}:** Licenses and subscriptions are controlled at the orchestrator or organization level, and apply to all related deployments. +* **Self-managed {{es}}:** Licenses are controlled at the cluster level, and apply only to a single cluster. 
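+For example, on a self-managed cluster you can confirm which license is currently active with the {{es}} license API (`GET /_license`). The request below is a minimal sketch; adjust the host and authentication to your environment:
+
+```sh
+# Show the license currently applied to this cluster (prompts for the elastic user's password)
+curl -X GET "http://localhost:9200/_license?pretty" -u elastic
+```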
+ +For a comprehensive comparison of the available subscription levels, see [Elastic subscriptions](https://www.elastic.co/subscriptions). + +Use the topics in this section to manage your license or start a trial: + +- [{{ecloud}}](/deploy-manage/cloud-organization/billing/manage-subscription.md): Applies to both {{ech}} deployments and {{serverless-full}} projects in your Cloud organization +- [{{ece}}](/deploy-manage/license/manage-your-license-in-ece.md) +- [{{eck}}](/deploy-manage/license/manage-your-license-in-eck.md) +- [Self-managed cluster](/deploy-manage/license/manage-your-license-in-self-managed-cluster.md) + +## Additional resources + +Explore these resources for details on subscriptions and features: + +- [{{stack}} subscriptions](https://www.elastic.co/subscriptions) +- [{{ecloud}} features](https://www.elastic.co/subscriptions/cloud) +- [{{ecloud}} pricing](https://www.elastic.co/pricing) diff --git a/deploy-manage/license/manage-your-license-in-ece.md b/deploy-manage/license/manage-your-license-in-ece.md index 4cdbdf43d..e222bc733 100644 --- a/deploy-manage/license/manage-your-license-in-ece.md +++ b/deploy-manage/license/manage-your-license-in-ece.md @@ -1,9 +1,12 @@ --- +navigation_title: "{{ece}}" +applies_to: + ece: mapped_pages: - https://www.elastic.co/guide/en/cloud-enterprise/current/ece-add-license.html --- -# Manage your license in ECE [ece-add-license] +# Manage your license in {{ece}} [ece-add-license] The use of Elastic Cloud Enterprise requires a valid license, which you can obtain from Elastic and add to your installation following the steps described in this document. When you first install ECE we automatically activate ECE with a trial license that is valid for 30 days. @@ -24,7 +27,7 @@ If you have a license from 2018 or earlier, you might receive a warning that you Elastic Cloud Enterprise Licenses contains two types of licenses - the actual license for Elastic Cloud Enterprise that is validated to enable Elastic Cloud Enterprise features and the *cluster licenses*, which Elastic Cloud Enterprise installs into the individual clusters. -Elastic Cloud Enterprise installs those cluster licenses with an approximately 3 month window, and updates the cluster licenses automatically as they get within a month of expiration. This is the same system that we use for our Elasticsearch Service on Cloud. +Elastic Cloud Enterprise installs those cluster licenses with an approximately 3 month window, and updates the cluster licenses automatically as they get within a month of expiration. When the Elastic Cloud Enterprise license expires, and consequently the cluster license that’s currently installed for all managed clusters since it has the same expiration date, the following takes place: diff --git a/deploy-manage/license/manage-your-license-in-eck.md b/deploy-manage/license/manage-your-license-in-eck.md index a1fd8687e..86858d1e0 100644 --- a/deploy-manage/license/manage-your-license-in-eck.md +++ b/deploy-manage/license/manage-your-license-in-eck.md @@ -1,11 +1,13 @@ --- -applies: - eck: all +navigation_title: "{{eck}}" +applies_to: + deployment: + eck: all mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-licensing.html --- -# Manage your license in ECK [k8s-licensing] +# Manage your license in {{eck}} [k8s-licensing] When you install the default distribution of ECK, you receive a Basic license. Any Elastic stack application you manage through ECK will also be Basic licensed. 
Go to [https://www.elastic.co/subscriptions](https://www.elastic.co/subscriptions) to check which features are included in the Basic license for free. diff --git a/deploy-manage/license/manage-your-license-in-self-managed-cluster.md b/deploy-manage/license/manage-your-license-in-self-managed-cluster.md index 9b8c27787..f765df533 100644 --- a/deploy-manage/license/manage-your-license-in-self-managed-cluster.md +++ b/deploy-manage/license/manage-your-license-in-self-managed-cluster.md @@ -1,13 +1,16 @@ --- +navigation_title: "Self-managed cluster" +applies_to: + self: mapped_pages: - https://www.elastic.co/guide/en/kibana/current/managing-licenses.html --- -# Manage your license in self-managed cluster [managing-licenses] +# Manage your license in a self-managed cluster [managing-licenses] -By default, new installations have a Basic license that never expires. For the full list of features available at the Free and Open Basic subscription level, refer to {{subscriptions}}. +By default, new installations have a Basic license that never expires. For the full list of features available at the Free and Open Basic subscription level, refer to [Elastic subscriptions](https://www.elastic.co/subscriptions). -To explore all of the available solutions and features, start a 30-day free trial. You can activate a trial subscription once per major product version. If you need more than 30 days to complete your evaluation, request an extended trial at {{extendtrial}}. +To explore all of the available solutions and features, start a 30-day free trial. You can activate a trial subscription once per major product version. If you need more than 30 days to complete your evaluation, [request an extended trial](https://www.elastic.co/trialextension). To view the status of your license, start a trial, or install a new license, go to the **License Management** page using the navigation menu or the [global search field](../../explore-analyze/find-and-organize/find-apps-and-objects.md). diff --git a/deploy-manage/maintenance/start-stop-services/restart-cloud-hosted-deployment.md b/deploy-manage/maintenance/start-stop-services/restart-cloud-hosted-deployment.md index 33ded47d5..a7e11e6cb 100644 --- a/deploy-manage/maintenance/start-stop-services/restart-cloud-hosted-deployment.md +++ b/deploy-manage/maintenance/start-stop-services/restart-cloud-hosted-deployment.md @@ -7,9 +7,9 @@ applies_to: ess: --- -# Restart a Cloud Hosted deployment +# Restart an {{ech}} deployment -You can restart your {{es}} deployment through the deployment overview UI or by using an API. +You can restart your deployment through the deployment overview UI or by using an API. ## Restart your deployment through the deployment overview [ec-restart-deployment] @@ -19,7 +19,7 @@ On the deployment overview, from the **Action** drop-down menu select **Restart You can choose to restart without downtime or you can restart all nodes simultaneously. -Note that if you are looking to restart {{es}} to clear out [deployment activity](../../../deploy-manage/deploy/elastic-cloud/keep-track-of-deployment-activity.md) plan failures, you may instead run a [no-op plan](../../../troubleshoot/monitoring/deployment-health-warnings.md) to re-synchronize the last successful configuration settings between Elasticsearch Service and {{es}}. 
+Note that if you are looking to restart {{es}} to clear out [deployment activity](../../../deploy-manage/deploy/elastic-cloud/keep-track-of-deployment-activity.md) plan failures, you may instead run a [no-op plan](../../../troubleshoot/monitoring/deployment-health-warnings.md) to re-synchronize the last successful configuration settings between {{ech}} and {{es}}. ## Restart an {{es}} resource by using an API [ec_restart_an_elasticsearch_resource] @@ -37,9 +37,9 @@ curl -XPOST \ `REF_ID` Name given to each resource type in the attribute `ref_id`. `main-elasticsearch` in the preceding example -## Shut down a Elasticsearch Service deployment [ec_shut_down_a_elasticsearch_service_deployment] +## Shut down an {{ech}} deployment [ec_shut_down_a_elasticsearch_service_deployment] -Shut down a Elasticsearch Service deployment by calling the following API request: +Shut down an {{ech}} deployment by calling the following API request: ```sh curl -XPOST \ diff --git a/deploy-manage/monitor.md b/deploy-manage/monitor.md index 93fee1b46..2c426b91b 100644 --- a/deploy-manage/monitor.md +++ b/deploy-manage/monitor.md @@ -2,12 +2,13 @@ mapped_urls: - https://www.elastic.co/guide/en/elasticsearch/reference/current/monitor-elasticsearch-cluster.html - https://www.elastic.co/guide/en/elasticsearch/reference/current/secure-monitoring.html -applies: +applies_to: + deployment: + ess: all + ece: all + eck: all + self: all serverless: all - hosted: all - ece: all - eck: all - stack: all --- # Monitoring diff --git a/deploy-manage/monitor/autoops.md b/deploy-manage/monitor/autoops.md index 0c8f17f71..6d72f6b10 100644 --- a/deploy-manage/monitor/autoops.md +++ b/deploy-manage/monitor/autoops.md @@ -1,8 +1,9 @@ --- mapped_pages: - https://www.elastic.co/guide/en/cloud/current/ec-autoops.html -applies: - hosted: all +applies_to: + deployment: + ess: all --- # AutoOps [ec-autoops] diff --git a/deploy-manage/monitor/autoops/ec-autoops-deployment-view.md b/deploy-manage/monitor/autoops/ec-autoops-deployment-view.md index 26ac5ec0d..29717bf64 100644 --- a/deploy-manage/monitor/autoops/ec-autoops-deployment-view.md +++ b/deploy-manage/monitor/autoops/ec-autoops-deployment-view.md @@ -1,8 +1,9 @@ --- mapped_pages: - https://www.elastic.co/guide/en/cloud/current/ec-autoops-deployment-view.html -applies: - hosted: all +applies_to: + deployment: + ess: all --- # Deployment [ec-autoops-deployment-view] diff --git a/deploy-manage/monitor/autoops/ec-autoops-dismiss-event.md b/deploy-manage/monitor/autoops/ec-autoops-dismiss-event.md index ddff08d80..6f279aaee 100644 --- a/deploy-manage/monitor/autoops/ec-autoops-dismiss-event.md +++ b/deploy-manage/monitor/autoops/ec-autoops-dismiss-event.md @@ -1,8 +1,9 @@ --- mapped_pages: - https://www.elastic.co/guide/en/cloud/current/ec-autoops-dismiss-event.html -applies: - hosted: all +applies_to: + deployment: + ess: all --- # Dismiss Events [ec-autoops-dismiss-event] diff --git a/deploy-manage/monitor/autoops/ec-autoops-event-settings.md b/deploy-manage/monitor/autoops/ec-autoops-event-settings.md index baa32c7a6..5ebe7d8e9 100644 --- a/deploy-manage/monitor/autoops/ec-autoops-event-settings.md +++ b/deploy-manage/monitor/autoops/ec-autoops-event-settings.md @@ -1,8 +1,9 @@ --- mapped_pages: - https://www.elastic.co/guide/en/cloud/current/ec-autoops-event-settings.html -applies: - hosted: all +applies_to: + deployment: + ess: all --- # Events Settings [ec-autoops-event-settings] diff --git a/deploy-manage/monitor/autoops/ec-autoops-events.md 
b/deploy-manage/monitor/autoops/ec-autoops-events.md index 66f390d73..498a7fb23 100644 --- a/deploy-manage/monitor/autoops/ec-autoops-events.md +++ b/deploy-manage/monitor/autoops/ec-autoops-events.md @@ -1,8 +1,9 @@ --- mapped_pages: - https://www.elastic.co/guide/en/cloud/current/ec-autoops-events.html -applies: - hosted: all +applies_to: + deployment: + ess: all --- # AutoOps events [ec-autoops-events] diff --git a/deploy-manage/monitor/autoops/ec-autoops-faq.md b/deploy-manage/monitor/autoops/ec-autoops-faq.md index b568084df..2bc229dc9 100644 --- a/deploy-manage/monitor/autoops/ec-autoops-faq.md +++ b/deploy-manage/monitor/autoops/ec-autoops-faq.md @@ -1,8 +1,9 @@ --- mapped_pages: - https://www.elastic.co/guide/en/cloud/current/ec-autoops-faq.html -applies: - hosted: all +applies_to: + deployment: + ess: all --- # AutoOps FAQ [ec-autoops-faq] diff --git a/deploy-manage/monitor/autoops/ec-autoops-how-to-access.md b/deploy-manage/monitor/autoops/ec-autoops-how-to-access.md index ffa3d70a6..e3f44f066 100644 --- a/deploy-manage/monitor/autoops/ec-autoops-how-to-access.md +++ b/deploy-manage/monitor/autoops/ec-autoops-how-to-access.md @@ -1,8 +1,9 @@ --- mapped_pages: - https://www.elastic.co/guide/en/cloud/current/ec-autoops-how-to-access.html -applies: - hosted: all +applies_to: + deployment: + ess: all --- # How to access AutoOps [ec-autoops-how-to-access] diff --git a/deploy-manage/monitor/autoops/ec-autoops-index-view.md b/deploy-manage/monitor/autoops/ec-autoops-index-view.md index d6d477d43..2a93858b1 100644 --- a/deploy-manage/monitor/autoops/ec-autoops-index-view.md +++ b/deploy-manage/monitor/autoops/ec-autoops-index-view.md @@ -1,8 +1,9 @@ --- mapped_pages: - https://www.elastic.co/guide/en/cloud/current/ec-autoops-index-view.html -applies: - hosted: all +applies_to: + deployment: + ess: all --- # Indices [ec-autoops-index-view] diff --git a/deploy-manage/monitor/autoops/ec-autoops-nodes-view.md b/deploy-manage/monitor/autoops/ec-autoops-nodes-view.md index ebd0d0f02..bce59205a 100644 --- a/deploy-manage/monitor/autoops/ec-autoops-nodes-view.md +++ b/deploy-manage/monitor/autoops/ec-autoops-nodes-view.md @@ -1,8 +1,9 @@ --- mapped_pages: - https://www.elastic.co/guide/en/cloud/current/ec-autoops-nodes-view.html -applies: - hosted: all +applies_to: + deployment: + ess: all --- # Nodes [ec-autoops-nodes-view] diff --git a/deploy-manage/monitor/autoops/ec-autoops-notifications-settings.md b/deploy-manage/monitor/autoops/ec-autoops-notifications-settings.md index a98231d86..294882aef 100644 --- a/deploy-manage/monitor/autoops/ec-autoops-notifications-settings.md +++ b/deploy-manage/monitor/autoops/ec-autoops-notifications-settings.md @@ -1,8 +1,9 @@ --- mapped_pages: - https://www.elastic.co/guide/en/cloud/current/ec-autoops-notifications-settings.html -applies: - hosted: all +applies_to: + deployment: + ess: all --- # Notifications settings [ec-autoops-notifications-settings] @@ -51,15 +52,26 @@ To set up a filter, follow these steps: The following connectors are available with AutoOps: -* [PagerDuty integration](#ec-autoops-pagerduty-integration) -* [Slack integration](#ec-autoops-slack-integration) -* [VictorOps integration](#ec-autoops-victorops-integration) -* [Opsgenie integration](#ec-autoops-opsgenie-integration) -* [Microsoft Teams Configuration integration](#ec-autoops-ms-configuration-integration) -* [Webhook integration](#ec-autoops-webhook-integration) +* [Email](#email) +* [PagerDuty](#ec-autoops-pagerduty) +* [Slack](#ec-autoops-slack) +* 
[VictorOps](#ec-autoops-victorops) +* [Opsgenie](#ec-autoops-opsgenie) +* [Microsoft Teams Configuration](#ec-autoops-ms-configuration) +* [Webhook](#ec-autoops-webhook) +### Email [email] -### PagerDuty integration [ec-autoops-pagerduty-integration] +To set up notifications via email, follow these steps: + +1. Add a new **Email** connector. +2. Add a list of emails. You can add up to 40 emails for a single email connector, and opt in to also get alerts when events close. +3. To receive notifications, scroll down the **Notification** page and click **Add**. +4. Fill in the filter details. +5. Select the events that you want to send to this connector. + +### PagerDuty [ec-autoops-pagerduty] The PagerDuty integration consists of the following parts: @@ -76,7 +88,7 @@ The PagerDuty integration consists of the following parts: 4. Select the events that should be sent to this output. -### Slack integration [ec-autoops-slack-integration] +### Slack [ec-autoops-slack] To set up a webhook to send AutoOps notifications to a Slack channel, go through the following steps. @@ -92,7 +104,7 @@ To set up a webhook to send AutoOps notifications to a Slack channel, go through 10. Add the webhook URL when creating the endpoint. -### VictorOps integration [ec-autoops-victorops-integration] +### VictorOps [ec-autoops-victorops] The VictorOps integration consists of the following parts: @@ -109,7 +121,7 @@ The VictorOps integration consists of the following parts: 4. Select the events that should be sent to this output. -### Opsgenie integration [ec-autoops-opsgenie-integration] +### Opsgenie [ec-autoops-opsgenie] The Opsgenie integration consists of the following parts: @@ -131,7 +143,7 @@ The Opsgenie integration consists of the following parts: 6. Select events that should be sent to this output. -### Microsoft Teams Configuration integration [ec-autoops-ms-configuration-integration] +### Microsoft Teams Configuration [ec-autoops-ms-configuration] To create an incoming webhook on your Microsoft Teams, follow [these instructions](https://docs.microsoft.com/en-us/microsoftteams/platform/webhooks-and-connectors/how-to/add-incoming-webhook). @@ -145,7 +157,7 @@ Save the URL displayed during the creation of the incoming webhook, as you will 4. Select events that should be sent to this output. -### Webhook integration [ec-autoops-webhook-integration] +### Webhook [ec-autoops-webhook] A webhook enables an application to provide other applications with real-time information. A webhook is a user-defined HTTP callback (HTTP POST), which is triggered by specific events. @@ -182,8 +194,6 @@ A webhook enables an application to provide other applications with real-time in When the Endpoint settings have been completed, continue to set up the notification filter to define which events you’d like to be notified about. :::: - - ## Notifications report [ec-notification-report] From the **Notifications** report, you can check all the notifications sent. The report lists all the events that were set up in the notification filters and provide their status.
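+If you use the webhook connector described above, you can verify that your endpoint accepts the kind of HTTP `POST` callback AutoOps sends before wiring it into a notification filter. The following request is only an illustrative sketch: the URL is a placeholder for your own endpoint, and the body is a dummy payload rather than the notification format you configure in the connector.
+
+```sh
+# Send a test POST to your webhook endpoint (placeholder URL and dummy JSON body)
+curl -XPOST "https://example.com/autoops-webhook" \
+-H 'content-type: application/json' \
+-d '{"message": "AutoOps webhook connectivity test"}'
+```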
diff --git a/deploy-manage/monitor/autoops/ec-autoops-overview-view.md b/deploy-manage/monitor/autoops/ec-autoops-overview-view.md index 99e9037b9..15c3719b0 100644 --- a/deploy-manage/monitor/autoops/ec-autoops-overview-view.md +++ b/deploy-manage/monitor/autoops/ec-autoops-overview-view.md @@ -1,8 +1,9 @@ --- mapped_pages: - https://www.elastic.co/guide/en/cloud/current/ec-autoops-overview-view.html -applies: - hosted: all +applies_to: + deployment: + ess: all --- # Overview [ec-autoops-overview-view] diff --git a/deploy-manage/monitor/autoops/ec-autoops-regions.md b/deploy-manage/monitor/autoops/ec-autoops-regions.md index eae21ab9a..047d0445a 100644 --- a/deploy-manage/monitor/autoops/ec-autoops-regions.md +++ b/deploy-manage/monitor/autoops/ec-autoops-regions.md @@ -1,8 +1,9 @@ --- mapped_pages: - https://www.elastic.co/guide/en/cloud/current/ec-autoops-regions.html -applies: - hosted: all +applies_to: + deployment: + ess: all --- # AutoOps regions [ec-autoops-regions] @@ -16,6 +17,7 @@ AutoOps is currently available in the following regions: | AWS | us-east-1 | US East (N. Virginia) | | | AWS | us-west-2 | Oregon | | | AWS | eu-west-1 | Ireland | | +| AWS | ap-southeast-1 | Singapore | | ::::{note} Currently, a limited number of AWS regions are available. More regions for AWS, Azure and GCP will be added in the future. Also, AutoOps is currently not available for GovCloud customers. diff --git a/deploy-manage/monitor/autoops/ec-autoops-shards-view.md b/deploy-manage/monitor/autoops/ec-autoops-shards-view.md index 1b5421a90..d492807a8 100644 --- a/deploy-manage/monitor/autoops/ec-autoops-shards-view.md +++ b/deploy-manage/monitor/autoops/ec-autoops-shards-view.md @@ -1,8 +1,9 @@ --- mapped_pages: - https://www.elastic.co/guide/en/cloud/current/ec-autoops-shards-view.html -applies: - hosted: all +applies_to: + deployment: + ess: all --- # Shards [ec-autoops-shards-view] diff --git a/deploy-manage/monitor/autoops/ec-autoops-template-optimizer.md b/deploy-manage/monitor/autoops/ec-autoops-template-optimizer.md index 036541b2a..6810c9792 100644 --- a/deploy-manage/monitor/autoops/ec-autoops-template-optimizer.md +++ b/deploy-manage/monitor/autoops/ec-autoops-template-optimizer.md @@ -1,8 +1,9 @@ --- mapped_pages: - https://www.elastic.co/guide/en/cloud/current/ec-autoops-template-optimizer.html -applies: - hosted: all +applies_to: + deployment: + ess: all --- # Template Optimizer [ec-autoops-template-optimizer] diff --git a/deploy-manage/monitor/kibana-task-manager-health-monitoring.md b/deploy-manage/monitor/kibana-task-manager-health-monitoring.md index 90aaf301a..4124727a0 100644 --- a/deploy-manage/monitor/kibana-task-manager-health-monitoring.md +++ b/deploy-manage/monitor/kibana-task-manager-health-monitoring.md @@ -2,12 +2,14 @@ navigation_title: "Kibana task manager monitoring" mapped_pages: - https://www.elastic.co/guide/en/kibana/current/task-manager-health-monitoring.html -applies: - stack: preview +applies_to: + deployment: + self: preview --- + # Kibana task manager health monitoring [task-manager-health-monitoring] diff --git a/deploy-manage/monitor/logging-configuration.md b/deploy-manage/monitor/logging-configuration.md index 9f2e5c81b..9002f7e1b 100644 --- a/deploy-manage/monitor/logging-configuration.md +++ b/deploy-manage/monitor/logging-configuration.md @@ -1,9 +1,10 @@ --- -applies: - hosted: all - ece: all - eck: all - stack: all +applies_to: + deployment: + ess: all + ece: all + eck: all + self: all --- # Logging configuration diff --git 
a/deploy-manage/monitor/logging-configuration/auditing-search-queries.md b/deploy-manage/monitor/logging-configuration/auditing-search-queries.md index 6de9bfe91..6afdf75fc 100644 --- a/deploy-manage/monitor/logging-configuration/auditing-search-queries.md +++ b/deploy-manage/monitor/logging-configuration/auditing-search-queries.md @@ -1,11 +1,12 @@ --- mapped_pages: - https://www.elastic.co/guide/en/elasticsearch/reference/current/auditing-search-queries.html -applies: - hosted: all - ece: all - eck: all - stack: all +applies_to: + deployment: + ess: all + ece: all + eck: all + self: all serverless: unavailable --- diff --git a/deploy-manage/monitor/logging-configuration/configuring-audit-logs.md b/deploy-manage/monitor/logging-configuration/configuring-audit-logs.md index e1f74d7f0..4d6a21874 100644 --- a/deploy-manage/monitor/logging-configuration/configuring-audit-logs.md +++ b/deploy-manage/monitor/logging-configuration/configuring-audit-logs.md @@ -1,9 +1,10 @@ --- -applies: - hosted: all - ece: all - eck: all - stack: all +applies_to: + deployment: + ess: all + ece: all + eck: all + self: all serverless: unavailable --- diff --git a/deploy-manage/monitor/logging-configuration/correlating-kibana-elasticsearch-audit-logs.md b/deploy-manage/monitor/logging-configuration/correlating-kibana-elasticsearch-audit-logs.md index 76fe86418..a7fc41546 100644 --- a/deploy-manage/monitor/logging-configuration/correlating-kibana-elasticsearch-audit-logs.md +++ b/deploy-manage/monitor/logging-configuration/correlating-kibana-elasticsearch-audit-logs.md @@ -2,11 +2,12 @@ navigation_title: Correlate audit events mapped_pages: - https://www.elastic.co/guide/en/kibana/current/xpack-security-audit-logging.html -applies: - hosted: all - ece: all - eck: all - stack: all +applies_to: + deployment: + ess: all + ece: all + eck: all + self: all serverless: unavailable --- diff --git a/deploy-manage/monitor/logging-configuration/elasticsearch-deprecation-logs.md b/deploy-manage/monitor/logging-configuration/elasticsearch-deprecation-logs.md index b27edbb6f..934d23e21 100644 --- a/deploy-manage/monitor/logging-configuration/elasticsearch-deprecation-logs.md +++ b/deploy-manage/monitor/logging-configuration/elasticsearch-deprecation-logs.md @@ -1,11 +1,12 @@ --- mapped_pages: - https://www.elastic.co/guide/en/elasticsearch/reference/current/logging.html -applies: - hosted: all - ece: all - eck: all - stack: all +applies_to: + deployment: + ess: all + ece: all + eck: all + self: all --- # Elasticsearch deprecation logs [logging] diff --git a/deploy-manage/monitor/logging-configuration/elasticsearch-log4j-configuration-self-managed.md b/deploy-manage/monitor/logging-configuration/elasticsearch-log4j-configuration-self-managed.md index c17491b1b..65817eec2 100644 --- a/deploy-manage/monitor/logging-configuration/elasticsearch-log4j-configuration-self-managed.md +++ b/deploy-manage/monitor/logging-configuration/elasticsearch-log4j-configuration-self-managed.md @@ -1,8 +1,9 @@ --- mapped_pages: - https://www.elastic.co/guide/en/elasticsearch/reference/current/logging.html -applies: - stack: all +applies_to: + deployment: + self: all --- # Elasticsearch log4j configuration [logging] diff --git a/deploy-manage/monitor/logging-configuration/enabling-audit-logs.md b/deploy-manage/monitor/logging-configuration/enabling-audit-logs.md index 3460c73a4..31e889c6e 100644 --- a/deploy-manage/monitor/logging-configuration/enabling-audit-logs.md +++ b/deploy-manage/monitor/logging-configuration/enabling-audit-logs.md @@ -5,11 
+5,12 @@ mapped_pages: - https://www.elastic.co/guide/en/cloud-enterprise/current/ece-enable-auditing.html - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s_audit_logging.html - https://www.elastic.co/guide/en/cloud/current/ec-enable-logging-and-monitoring.html#ec-enable-audit-logs -applies: - hosted: all - ece: all - eck: all - stack: all +applies_to: + deployment: + ess: all + ece: all + eck: all + self: all serverless: unavailable --- @@ -61,7 +62,7 @@ To enable audit logging in an {{ech}} deployment: 1. Log in to the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). -2. Find your deployment on the home page in the **Hosted deployments** card and select **Manage** to access it directly. Or, select **Hosted deployments** to go to the deployments page to view all of your deployments. +2. Find your deployment on the home page in the **Hosted deployments** card and select **Manage** to access it directly. Or, select **Hosted deployments** to go to the **Deployments** page to view all of your deployments. 3. From your deployment menu, go to the **Edit** page. diff --git a/deploy-manage/monitor/logging-configuration/kibana-log-settings-examples.md b/deploy-manage/monitor/logging-configuration/kibana-log-settings-examples.md index e00568f38..0414952ed 100644 --- a/deploy-manage/monitor/logging-configuration/kibana-log-settings-examples.md +++ b/deploy-manage/monitor/logging-configuration/kibana-log-settings-examples.md @@ -1,8 +1,9 @@ --- mapped_pages: - https://www.elastic.co/guide/en/kibana/current/log-settings-examples.html -applies: - stack: all +applies_to: + deployment: + self: all --- # Examples [log-settings-examples] @@ -29,7 +30,7 @@ logging: ## Log in JSON format [log-in-json-ECS-example] -Log the default log format to JSON layout instead of pattern (the default). With `json` layout, log messages will be formatted as JSON strings in [ECS format](asciidocalypse://docs/ecs/docs/reference/ecs/index.md) that includes a timestamp, log level, logger, message text and any other metadata that may be associated with the log message itself. +Log the default log format to JSON layout instead of pattern (the default). With `json` layout, log messages will be formatted as JSON strings in [ECS format](asciidocalypse://docs/ecs/docs/reference/index.md) that includes a timestamp, log level, logger, message text and any other metadata that may be associated with the log message itself. 
```yaml logging: diff --git a/deploy-manage/monitor/logging-configuration/kibana-logging-cli-configuration.md b/deploy-manage/monitor/logging-configuration/kibana-logging-cli-configuration.md index b207bd7eb..266223947 100644 --- a/deploy-manage/monitor/logging-configuration/kibana-logging-cli-configuration.md +++ b/deploy-manage/monitor/logging-configuration/kibana-logging-cli-configuration.md @@ -1,8 +1,9 @@ --- mapped_pages: - https://www.elastic.co/guide/en/kibana/current/_cli_configuration.html -applies: - stack: all +applies_to: + deployment: + self: all --- # Cli configuration [_cli_configuration] diff --git a/deploy-manage/monitor/logging-configuration/kibana-logging.md b/deploy-manage/monitor/logging-configuration/kibana-logging.md index 76fcddc6a..c7708a20e 100644 --- a/deploy-manage/monitor/logging-configuration/kibana-logging.md +++ b/deploy-manage/monitor/logging-configuration/kibana-logging.md @@ -1,8 +1,9 @@ --- mapped_pages: - https://www.elastic.co/guide/en/kibana/current/logging-configuration.html -applies: - stack: all +applies_to: + deployment: + self: all --- % this might not be valid for all deployment types. needs review. @@ -98,7 +99,7 @@ The pattern layout also offers a `highlight` option that allows you to highlight ### JSON layout [json-layout] -With `json` layout log messages will be formatted as JSON strings in [ECS format](asciidocalypse://docs/ecs/docs/reference/ecs/index.md) that includes a timestamp, log level, logger, message text and any other metadata that may be associated with the log message itself. +With `json` layout log messages will be formatted as JSON strings in [ECS format](asciidocalypse://docs/ecs/docs/reference/index.md) that includes a timestamp, log level, logger, message text and any other metadata that may be associated with the log message itself. 
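+For example, a minimal configuration that writes JSON-formatted logs to the console might look like the following sketch (the appender name `json-console` is arbitrary; adjust appenders and levels to your own setup):
+
+```yaml
+logging:
+  appenders:
+    json-console:
+      type: console   # write to stdout
+      layout:
+        type: json    # emit ECS-formatted JSON log records
+  root:
+    appenders: [json-console]
+    level: info
+```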
## Logger hierarchy [logger-hierarchy] diff --git a/deploy-manage/monitor/logging-configuration/logfile-audit-events-ignore-policies.md b/deploy-manage/monitor/logging-configuration/logfile-audit-events-ignore-policies.md index 7383838c4..50777cef3 100644 --- a/deploy-manage/monitor/logging-configuration/logfile-audit-events-ignore-policies.md +++ b/deploy-manage/monitor/logging-configuration/logfile-audit-events-ignore-policies.md @@ -2,11 +2,12 @@ navigation_title: Elasticsearch audit events ignore policies mapped_pages: - https://www.elastic.co/guide/en/elasticsearch/reference/current/audit-log-ignore-policy.html -applies: - hosted: all - ece: all - eck: all - stack: all +applies_to: + deployment: + ess: all + ece: all + eck: all + self: all serverless: unavailable --- diff --git a/deploy-manage/monitor/logging-configuration/logfile-audit-output.md b/deploy-manage/monitor/logging-configuration/logfile-audit-output.md index 57101a222..718242d0a 100644 --- a/deploy-manage/monitor/logging-configuration/logfile-audit-output.md +++ b/deploy-manage/monitor/logging-configuration/logfile-audit-output.md @@ -2,11 +2,12 @@ navigation_title: Elasticsearch logfile output mapped_pages: - https://www.elastic.co/guide/en/elasticsearch/reference/current/audit-log-output.html -applies: - hosted: all - ece: all - eck: all - stack: all +applies_to: + deployment: + ess: all + ece: all + eck: all + self: all serverless: unavailable --- diff --git a/deploy-manage/monitor/logging-configuration/security-event-audit-logging.md b/deploy-manage/monitor/logging-configuration/security-event-audit-logging.md index 6027e23b9..4c5e7c60a 100644 --- a/deploy-manage/monitor/logging-configuration/security-event-audit-logging.md +++ b/deploy-manage/monitor/logging-configuration/security-event-audit-logging.md @@ -1,9 +1,10 @@ --- -applies: - hosted: all - ece: all - eck: all - stack: all +applies_to: + deployment: + ess: all + ece: all + eck: all + self: all serverless: unavailable --- # Security event audit logging diff --git a/deploy-manage/monitor/logging-configuration/update-elasticsearch-logging-levels.md b/deploy-manage/monitor/logging-configuration/update-elasticsearch-logging-levels.md index 14d6220ca..f99f769ed 100644 --- a/deploy-manage/monitor/logging-configuration/update-elasticsearch-logging-levels.md +++ b/deploy-manage/monitor/logging-configuration/update-elasticsearch-logging-levels.md @@ -1,11 +1,12 @@ --- mapped_pages: - https://www.elastic.co/guide/en/elasticsearch/reference/current/logging.html -applies: - hosted: all - ece: all - eck: all - stack: all +applies_to: + deployment: + ess: all + ece: all + eck: all + self: all --- # Update Elasticsearch logging levels [logging] diff --git a/deploy-manage/monitor/monitoring-data.md b/deploy-manage/monitor/monitoring-data.md index 26e53e9ef..36b0da280 100644 --- a/deploy-manage/monitor/monitoring-data.md +++ b/deploy-manage/monitor/monitoring-data.md @@ -1,9 +1,10 @@ --- -applies: - hosted: all - ece: all - eck: all - stack: all +applies_to: + deployment: + ess: all + ece: all + eck: all + self: all --- # Managing monitoring data diff --git a/deploy-manage/monitor/monitoring-data/access-performance-metrics-on-elastic-cloud.md b/deploy-manage/monitor/monitoring-data/access-performance-metrics-on-elastic-cloud.md index a95c7e22b..d01d3ce25 100644 --- a/deploy-manage/monitor/monitoring-data/access-performance-metrics-on-elastic-cloud.md +++ b/deploy-manage/monitor/monitoring-data/access-performance-metrics-on-elastic-cloud.md @@ -2,8 +2,9 @@ mapped_urls: - 
https://www.elastic.co/guide/en/cloud/current/ec-saas-metrics-accessing.html - https://www.elastic.co/guide/en/cloud-heroku/current/ech-saas-metrics-accessing.html -applies: - hosted: all +applies_to: + deployment: + ess: all --- # Access performance metrics on Elastic Cloud diff --git a/deploy-manage/monitor/monitoring-data/beats-page.md b/deploy-manage/monitor/monitoring-data/beats-page.md index c65f2bd2d..21e270079 100644 --- a/deploy-manage/monitor/monitoring-data/beats-page.md +++ b/deploy-manage/monitor/monitoring-data/beats-page.md @@ -2,11 +2,12 @@ navigation_title: "Beats Metrics" mapped_pages: - https://www.elastic.co/guide/en/kibana/current/beats-page.html -applies: - hosted: all - ece: all - eck: all - stack: all +applies_to: + deployment: + ess: all + ece: all + eck: all + self: all --- diff --git a/deploy-manage/monitor/monitoring-data/config-monitoring-data-streams-elastic-agent.md b/deploy-manage/monitor/monitoring-data/config-monitoring-data-streams-elastic-agent.md index e5c4eb87f..07d6813db 100644 --- a/deploy-manage/monitor/monitoring-data/config-monitoring-data-streams-elastic-agent.md +++ b/deploy-manage/monitor/monitoring-data/config-monitoring-data-streams-elastic-agent.md @@ -1,11 +1,12 @@ --- mapped_pages: - https://www.elastic.co/guide/en/elasticsearch/reference/current/config-monitoring-data-streams-elastic-agent.html -applies: - hosted: all - ece: all - eck: all - stack: all +applies_to: + deployment: + ess: all + ece: all + eck: all + self: all --- # Configuring data streams created by Elastic Agent [config-monitoring-data-streams-elastic-agent] diff --git a/deploy-manage/monitor/monitoring-data/config-monitoring-data-streams-metricbeat-8.md b/deploy-manage/monitor/monitoring-data/config-monitoring-data-streams-metricbeat-8.md index 9449c8618..84672b566 100644 --- a/deploy-manage/monitor/monitoring-data/config-monitoring-data-streams-metricbeat-8.md +++ b/deploy-manage/monitor/monitoring-data/config-monitoring-data-streams-metricbeat-8.md @@ -1,11 +1,12 @@ --- mapped_pages: - https://www.elastic.co/guide/en/elasticsearch/reference/current/config-monitoring-data-streams-metricbeat-8.html -applies: - hosted: all - ece: all - eck: all - stack: all +applies_to: + deployment: + ess: all + ece: all + eck: all + self: all --- # Configuring data streams created by Metricbeat 8 [config-monitoring-data-streams-metricbeat-8] diff --git a/deploy-manage/monitor/monitoring-data/config-monitoring-indices-metricbeat-7-internal-collection.md b/deploy-manage/monitor/monitoring-data/config-monitoring-indices-metricbeat-7-internal-collection.md index 1f090f1f9..1b82334ee 100644 --- a/deploy-manage/monitor/monitoring-data/config-monitoring-indices-metricbeat-7-internal-collection.md +++ b/deploy-manage/monitor/monitoring-data/config-monitoring-indices-metricbeat-7-internal-collection.md @@ -1,16 +1,17 @@ --- mapped_pages: - https://www.elastic.co/guide/en/elasticsearch/reference/current/config-monitoring-indices-metricbeat-7-internal-collection.html -applies: - hosted: all - ece: all - eck: all - stack: all +applies_to: + deployment: + ess: all + ece: all + eck: all + self: all --- # Configuring indices created by Metricbeat 7 or internal collection [config-monitoring-indices-metricbeat-7-internal-collection] -When monitoring [using {{metricbeat}} 7](../stack-monitoring/collecting-monitoring-data-with-metricbeat.md) or [internal collection](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-filebeat/monitoring-internal-collection.md), data is stored in a set of 
indices called either: +When monitoring [using {{metricbeat}} 7](../stack-monitoring/collecting-monitoring-data-with-metricbeat.md) or [internal collection](asciidocalypse://docs/beats/docs/reference/filebeat/monitoring-internal-collection.md), data is stored in a set of indices called either: * `.monitoring-{{product}}-7-mb-{{date}}`, when using {{metricbeat}} 7. * `.monitoring-{{product}}-7-{{date}}`, when using internal collection. diff --git a/deploy-manage/monitor/monitoring-data/configure-stack-monitoring-alerts.md b/deploy-manage/monitor/monitoring-data/configure-stack-monitoring-alerts.md index 28872c2a0..cc50b4df2 100644 --- a/deploy-manage/monitor/monitoring-data/configure-stack-monitoring-alerts.md +++ b/deploy-manage/monitor/monitoring-data/configure-stack-monitoring-alerts.md @@ -1,8 +1,9 @@ --- mapped_pages: - https://www.elastic.co/guide/en/cloud/current/ec-cluster-health-notifications.html -applies: - hosted: all +applies_to: + deployment: + ess: all --- % NEEDS MERGING WITH kibana-alerts.md diff --git a/deploy-manage/monitor/monitoring-data/configuring-data-streamsindices-for-monitoring.md b/deploy-manage/monitor/monitoring-data/configuring-data-streamsindices-for-monitoring.md index e43b9c0a3..a1aa885e4 100644 --- a/deploy-manage/monitor/monitoring-data/configuring-data-streamsindices-for-monitoring.md +++ b/deploy-manage/monitor/monitoring-data/configuring-data-streamsindices-for-monitoring.md @@ -1,11 +1,12 @@ --- mapped_pages: - https://www.elastic.co/guide/en/elasticsearch/reference/current/config-monitoring-indices.html -applies: - hosted: all - ece: all - eck: all - stack: all +applies_to: + deployment: + ess: all + ece: all + eck: all + self: all --- # Configuring data streams/indices for monitoring [config-monitoring-indices] @@ -13,7 +14,7 @@ applies: Monitoring data is stored in data streams or indices in {{es}}. The default data stream or index settings may not work for your situation. For example, you might want to change index lifecycle management (ILM) settings, add custom mappings, or change the number of shards and replicas. 
The steps to change these settings depend on the monitoring method: * [Configuring data streams created by {{agent}}](config-monitoring-data-streams-elastic-agent.md) -* [Configuring data streams created by {{metricbeat}} 8](config-monitoring-data-streams-metricbeat-8.md) (the default for version 8 {{ess}} deployments on {{ecloud}}) +* [Configuring data streams created by {{metricbeat}} 8](config-monitoring-data-streams-metricbeat-8.md) (the default for version 8 {{ech}} deployments on {{ecloud}}) * [Configuring indices created by {{metricbeat}} 7 or internal collection](config-monitoring-indices-metricbeat-7-internal-collection.md) ::::{important} diff --git a/deploy-manage/monitor/monitoring-data/ec-memory-pressure.md b/deploy-manage/monitor/monitoring-data/ec-memory-pressure.md index 21b6a69d7..41fd8ae84 100644 --- a/deploy-manage/monitor/monitoring-data/ec-memory-pressure.md +++ b/deploy-manage/monitor/monitoring-data/ec-memory-pressure.md @@ -2,14 +2,15 @@ mapped_pages: - https://www.elastic.co/guide/en/cloud/current/ec-memory-pressure.html - https://www.elastic.co/guide/en/cloud-heroku/current/ech-memory-pressure.html -applies: - hosted: all - ece: all +applies_to: + deployment: + ess: all + ece: all --- # JVM memory pressure indicator [ec-memory-pressure] -In addition to the more detailed [cluster performance metrics](../stack-monitoring.md), the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body) also includes a JVM memory pressure indicator for each node in your cluster. This indicator can help you to determine when you need to upgrade to a larger cluster. +In addition to the more detailed [cluster performance metrics](../stack-monitoring.md), the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body) also includes a JVM memory pressure indicator for each node in your cluster. This indicator can help you to determine when you need to upgrade to a larger cluster. The percentage number used in the JVM memory pressure indicator is actually the fill rate of the old generation pool. For a detailed explanation of why this metric is used, check [Understanding Memory Pressure](https://www.elastic.co/blog/found-understanding-memory-pressure-indicator/). diff --git a/deploy-manage/monitor/monitoring-data/ec-saas-metrics-accessing.md b/deploy-manage/monitor/monitoring-data/ec-saas-metrics-accessing.md index e5bfbdcda..373f45c53 100644 --- a/deploy-manage/monitor/monitoring-data/ec-saas-metrics-accessing.md +++ b/deploy-manage/monitor/monitoring-data/ec-saas-metrics-accessing.md @@ -2,22 +2,23 @@ mapped_pages: - https://www.elastic.co/guide/en/cloud/current/ec-saas-metrics-accessing.html - https://www.elastic.co/guide/en/cloud-heroku/current/ech-saas-metrics-accessing.html -applies: - hosted: all +applies_to: + deployment: + ess: all --- # Access performance metrics [ec-saas-metrics-accessing] -Cluster performance metrics are available directly in the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body). The graphs on this page include a subset of Elasticsearch Service-specific performance metrics. +Cluster performance metrics are available directly in the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). The graphs on this page include a subset of {{ech}}-specific performance metrics. For advanced views or production monitoring, [enable logging and monitoring](../stack-monitoring/elastic-cloud-stack-monitoring.md). 
The monitoring application provides more advanced views for Elasticsearch and JVM metrics, and includes a configurable retention period. To access cluster performance metrics: -1. Log in to the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body). -2. Find your deployment on the home page in the Elasticsearch Service card and select **Manage** to access it directly. Or, select **Hosted deployments** to go to the deployments page to view all of your deployments. +1. Log in to the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). +2. Find your deployment on the home page in the **Hosted deployments** card and select **Manage** to access it directly. Or, select **Hosted deployments** to go to the **Deployments** page to view all of your deployments. - On the deployments page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. For example, you might want to select **Is unhealthy** and **Has master problems** to get a short list of deployments that need attention. + On the **Deployments** page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. For example, you might want to select **Is unhealthy** and **Has master problems** to get a short list of deployments that need attention. 3. From your deployment menu, go to the **Performance** page. @@ -30,7 +31,7 @@ The following metrics are available: :alt: Graph showing CPU usage ::: -Shows the maximum usage of the CPU resources assigned to your Elasticsearch cluster, as a percentage. CPU resources are relative to the size of your cluster, so that a cluster with 32GB of RAM gets assigned twice as many CPU resources as a cluster with 16GB of RAM. All clusters are guaranteed their share of CPU resources, as Elasticsearch Service infrastructure does not overcommit any resources. CPU credits permit boosting the performance of smaller clusters temporarily, so that CPU usage can exceed 100%. +Shows the maximum usage of the CPU resources assigned to your Elasticsearch cluster, as a percentage. CPU resources are relative to the size of your cluster, so that a cluster with 32GB of RAM gets assigned twice as many CPU resources as a cluster with 16GB of RAM. All clusters are guaranteed their share of CPU resources, as {{ech}} infrastructure does not overcommit any resources. CPU credits permit boosting the performance of smaller clusters temporarily, so that CPU usage can exceed 100%. ::::{tip} This chart reports the maximum CPU values over the sampling period. [Logs and Metrics](../stack-monitoring/elastic-cloud-stack-monitoring.md) ingested into [Stack Monitoring](visualizing-monitoring-data.md)'s "CPU Usage" instead reflects the average CPU over the sampling period. Therefore, you should not expect the two graphs to look exactly the same. When investigating [CPU-related performance issues](../../../troubleshoot/monitoring/performance.md), you should default to [Stack Monitoring](visualizing-monitoring-data.md). @@ -96,7 +97,7 @@ Indicates the overhead involved in JVM garbage collection to reclaim memory. Performance correlates directly with resources assigned to your cluster, and many of these metrics will show some sort of correlation with each other when you are trying to determine the cause of a performance issue. 
Take a look at some of the scenarios included in this section to learn how you can determine the cause of performance issues. -It is not uncommon for performance issues on Elasticsearch Service to be caused by an undersized cluster that cannot cope with the workload it is being asked to handle. If your cluster performance metrics often shows high CPU usage or excessive memory pressure, consider increasing the size of your cluster soon to improve performance. This is especially true for clusters that regularly reach 100% of CPU usage or that suffer out-of-memory failures; it is better to resize your cluster early when it is not yet maxed out than to have to resize a cluster that is already overwhelmed. [Changing the configuration of your cluster](../../deploy/elastic-cloud/configure.md) may add some overhead if data needs to be migrated to the new nodes, which can increase the load on a cluster further and delay configuration changes. +It is not uncommon for performance issues on {{ech}} to be caused by an undersized cluster that cannot cope with the workload it is being asked to handle. If your cluster performance metrics often show high CPU usage or excessive memory pressure, consider increasing the size of your cluster soon to improve performance. This is especially true for clusters that regularly reach 100% of CPU usage or that suffer out-of-memory failures; it is better to resize your cluster early when it is not yet maxed out than to have to resize a cluster that is already overwhelmed. [Changing the configuration of your cluster](../../deploy/elastic-cloud/configure.md) may add some overhead if data needs to be migrated to the new nodes, which can increase the load on a cluster further and delay configuration changes. To help diagnose high CPU usage you can also use the Elasticsearch [nodes hot threads API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-nodes-hot-threads), which identifies the threads on each node that have the highest CPU usage or that have been executing for a longer than normal period of time. @@ -127,7 +128,7 @@ Cluster performance metrics are shown per node and are color-coded to indicate w ## Cluster restarts after out-of-memory failures [ec_cluster_restarts_after_out_of_memory_failures] -For clusters that suffer out-of-memory failures, it can be difficult to determine whether the clusters are in a completely healthy state afterwards. For this reason, Elasticsearch Service automatically reboots clusters that suffer out-of-memory failures. +For clusters that suffer out-of-memory failures, it can be difficult to determine whether the clusters are in a completely healthy state afterwards. For this reason, {{ech}} automatically reboots clusters that suffer out-of-memory failures. You will receive an email notification to let you know that a restart occurred. For repeated alerts, the emails are aggregated so that you do not receive an excessive number of notifications. Either [resizing your cluster to reduce memory pressure](../../deploy/elastic-cloud/ec-customize-deployment-components.md#ec-cluster-size) or reducing the workload that a cluster is being asked to handle can help avoid these cluster restarts.
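For quick triage of the high-CPU scenarios described above, you can call the [nodes hot threads API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-nodes-hot-threads) mentioned earlier directly against your cluster. A minimal sketch, assuming a placeholder deployment endpoint and an API key with monitoring privileges:

```sh
# Sketch only: report the three hottest threads on every node.
# Replace the endpoint and API key with values from your own deployment.
curl -s -H "Authorization: ApiKey $ES_API_KEY" \
  "https://<your-deployment-endpoint>:9243/_nodes/hot_threads?threads=3"
```

The plain-text response lists the busiest threads per node, which helps distinguish heavy search or indexing load from, for example, expensive garbage collection.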
diff --git a/deploy-manage/monitor/monitoring-data/ec-vcpu-boost-instance.md b/deploy-manage/monitor/monitoring-data/ec-vcpu-boost-instance.md index 4b408f5c2..aa41afaed 100644 --- a/deploy-manage/monitor/monitoring-data/ec-vcpu-boost-instance.md +++ b/deploy-manage/monitor/monitoring-data/ec-vcpu-boost-instance.md @@ -2,8 +2,9 @@ mapped_pages: - https://www.elastic.co/guide/en/cloud/current/ec-vcpu-boost-instance.html - https://www.elastic.co/guide/en/cloud-heroku/current/ech-vcpu-boost-instance.html -applies: - hosted: all +applies_to: + deployment: + ess: all --- # vCPU boosting and credits [ec-vcpu-boost-instance] @@ -15,7 +16,7 @@ Elastic Cloud allows smaller instance sizes to get temporarily boosted vCPU when Based on the instance size, the vCPU resources assigned to your instance can be boosted to improve performance temporarily, by using vCPU credits. If credits are available, Elastic Cloud will automatically boost your instance when under heavy load. Boosting is available depending on the instance size: -* Instance sizes up to and including 12 GB of RAM get boosted. The boosted vCPU value is `16 * vCPU ratio`, the vCPU ratios are dependent on the [hardware profile](asciidocalypse://docs/cloud/docs/reference/cloud/cloud-hosted/hardware.md#ec-getting-started-configurations) selected. If an instance is eligible for boosting, the Elastic Cloud console will display **Up to 2.5 vCPU**, depending on the hardware profile selected. The baseline, or unboosted, vCPU value is calculated as: `RAM size * vCPU ratio`. +* Instance sizes up to and including 12 GB of RAM get boosted. The boosted vCPU value is `16 * vCPU ratio`, the vCPU ratios are dependent on the [hardware profile](asciidocalypse://docs/cloud/docs/reference/cloud-hosted/hardware.md#ec-getting-started-configurations) selected. If an instance is eligible for boosting, the Elastic Cloud console will display **Up to 2.5 vCPU**, depending on the hardware profile selected. The baseline, or unboosted, vCPU value is calculated as: `RAM size * vCPU ratio`. * Instance sizes bigger than 12 GB of RAM do not get boosted. The vCPU value is displayed in the Elastic Cloud console and calculated as follows: `RAM size * vCPU ratio`. @@ -34,12 +35,12 @@ For example: An instance with 4 GB of RAM, can at most accumulate four hours wor If you observe declining performance on a smaller instance over time, you might have depleted your vCPU credits. In this case, increase the size of your cluster to handle the workload with consistent performance. -For more information, check [Elasticsearch Service default provider instance configurations](asciidocalypse://docs/cloud/docs/reference/cloud/cloud-hosted/hardware.md#ec-getting-started-configurations). +For more information, check [{{ech}} default provider instance configurations](asciidocalypse://docs/cloud/docs/reference/cloud-hosted/hardware.md#ec-getting-started-configurations). ## Where to check vCPU credits status? 
[ec_where_to_check_vcpu_credits_status] -You can check the **Monitoring > Performance > CPU Credits** section of the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body), and find the related metrics: +You can check the **Monitoring > Performance > CPU Credits** section of the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body), and find the related metrics: :::{image} ../../../images/cloud-metrics-credits.png :alt: CPU usage versus CPU credits over time diff --git a/deploy-manage/monitor/monitoring-data/elasticsearch-metrics.md b/deploy-manage/monitor/monitoring-data/elasticsearch-metrics.md index c0a8741e2..b8777c394 100644 --- a/deploy-manage/monitor/monitoring-data/elasticsearch-metrics.md +++ b/deploy-manage/monitor/monitoring-data/elasticsearch-metrics.md @@ -2,11 +2,12 @@ navigation_title: "{{es}} Metrics" mapped_pages: - https://www.elastic.co/guide/en/kibana/current/elasticsearch-metrics.html -applies: - hosted: all - ece: all - eck: all - stack: all +applies_to: + deployment: + ess: all + ece: all + eck: all + self: all --- diff --git a/deploy-manage/monitor/monitoring-data/kibana-alerts.md b/deploy-manage/monitor/monitoring-data/kibana-alerts.md index c40a090d2..8be37e17c 100644 --- a/deploy-manage/monitor/monitoring-data/kibana-alerts.md +++ b/deploy-manage/monitor/monitoring-data/kibana-alerts.md @@ -1,11 +1,12 @@ --- mapped_pages: - https://www.elastic.co/guide/en/kibana/current/kibana-alerts.html -applies: - hosted: all - ece: all - eck: all - stack: all +applies_to: + deployment: + ess: all + ece: all + eck: all + self: all --- % NEEDS TO BE MERGED WITH configure-stack-monitoring-alerts.md diff --git a/deploy-manage/monitor/monitoring-data/kibana-page.md b/deploy-manage/monitor/monitoring-data/kibana-page.md index 79b0a8bb9..5ec2076f3 100644 --- a/deploy-manage/monitor/monitoring-data/kibana-page.md +++ b/deploy-manage/monitor/monitoring-data/kibana-page.md @@ -2,11 +2,12 @@ navigation_title: "{{kib}} Metrics" mapped_pages: - https://www.elastic.co/guide/en/kibana/current/kibana-page.html -applies: - hosted: all - ece: all - eck: all - stack: all +applies_to: + deployment: + ess: all + ece: all + eck: all + self: all --- diff --git a/deploy-manage/monitor/monitoring-data/logstash-page.md b/deploy-manage/monitor/monitoring-data/logstash-page.md index c188ac0fe..646851b97 100644 --- a/deploy-manage/monitor/monitoring-data/logstash-page.md +++ b/deploy-manage/monitor/monitoring-data/logstash-page.md @@ -2,11 +2,12 @@ navigation_title: "Logstash Metrics" mapped_pages: - https://www.elastic.co/guide/en/kibana/current/logstash-page.html -applies: - hosted: all - ece: all - eck: all - stack: all +applies_to: + deployment: + ess: all + ece: all + eck: all + self: all --- @@ -24,4 +25,4 @@ If you are monitoring Logstash nodes, click **Overview** in the Logstash section 1. To view Logstash node metrics, click **Nodes**. The Nodes section shows the status of each Logstash node. 2. Click the name of a node to view its statistics over time. -For more information, refer to [Monitoring Logstash](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/monitoring-logstash-legacy.md). +For more information, refer to [Monitoring Logstash](asciidocalypse://docs/logstash/docs/reference/monitoring-logstash-legacy.md). 
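The Logstash metrics described on this page only appear once your Logstash nodes ship monitoring data. As an illustration for legacy collection, the relevant `logstash.yml` settings look roughly like the sketch below; the host and credentials are placeholders for your monitoring cluster:

```yaml
# Sketch only: legacy (deprecated) self-monitoring settings in logstash.yml.
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.hosts: ["https://monitoring-cluster.example.com:9200"]
xpack.monitoring.elasticsearch.username: "logstash_system"
xpack.monitoring.elasticsearch.password: "YOUR_PASSWORD"
```

Refer to the Monitoring Logstash link above for the current, supported collection methods.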
diff --git a/deploy-manage/monitor/monitoring-data/monitor-troubleshooting.md b/deploy-manage/monitor/monitoring-data/monitor-troubleshooting.md index fece15767..9a2676395 100644 --- a/deploy-manage/monitor/monitoring-data/monitor-troubleshooting.md +++ b/deploy-manage/monitor/monitoring-data/monitor-troubleshooting.md @@ -2,8 +2,9 @@ navigation_title: "Troubleshooting" mapped_pages: - https://www.elastic.co/guide/en/kibana/current/monitor-troubleshooting.html -applies: - stack: all +applies_to: + deployment: + self: all --- % this page probably needs to be moved diff --git a/deploy-manage/monitor/monitoring-data/visualizing-monitoring-data.md b/deploy-manage/monitor/monitoring-data/visualizing-monitoring-data.md index 69311a09e..801ccb81e 100644 --- a/deploy-manage/monitor/monitoring-data/visualizing-monitoring-data.md +++ b/deploy-manage/monitor/monitoring-data/visualizing-monitoring-data.md @@ -1,11 +1,12 @@ --- mapped_pages: - https://www.elastic.co/guide/en/kibana/current/xpack-monitoring.html -applies: - hosted: all - ece: all - eck: all - stack: all +applies_to: + deployment: + ess: all + ece: all + eck: all + self: all --- # Visualizing monitoring data [xpack-monitoring] diff --git a/deploy-manage/monitor/orchestrators.md b/deploy-manage/monitor/orchestrators.md index b0fdaf278..72fa3074b 100644 --- a/deploy-manage/monitor/orchestrators.md +++ b/deploy-manage/monitor/orchestrators.md @@ -1,7 +1,8 @@ --- -applies: - ece: all - eck: all +applies_to: + deployment: + ece: all + eck: all --- # Monitoring Orchestrators diff --git a/deploy-manage/monitor/orchestrators/ece-monitoring-ece-access.md b/deploy-manage/monitor/orchestrators/ece-monitoring-ece-access.md index b369d1369..75cc639fe 100644 --- a/deploy-manage/monitor/orchestrators/ece-monitoring-ece-access.md +++ b/deploy-manage/monitor/orchestrators/ece-monitoring-ece-access.md @@ -1,8 +1,9 @@ --- mapped_pages: - https://www.elastic.co/guide/en/cloud-enterprise/current/ece-monitoring-ece-access.html -applies: - ece: all +applies_to: + deployment: + ece: all --- # Access logs and metrics [ece-monitoring-ece-access] diff --git a/deploy-manage/monitor/orchestrators/ece-monitoring-ece-set-retention.md b/deploy-manage/monitor/orchestrators/ece-monitoring-ece-set-retention.md index af3a7f194..ba73a0b6f 100644 --- a/deploy-manage/monitor/orchestrators/ece-monitoring-ece-set-retention.md +++ b/deploy-manage/monitor/orchestrators/ece-monitoring-ece-set-retention.md @@ -1,8 +1,9 @@ --- mapped_pages: - https://www.elastic.co/guide/en/cloud-enterprise/current/ece-monitoring-ece-set-retention.html -applies: - ece: all +applies_to: + deployment: + ece: all --- # Set the retention period for logging and metrics indices [ece-monitoring-ece-set-retention] diff --git a/deploy-manage/monitor/orchestrators/ece-platform-monitoring.md b/deploy-manage/monitor/orchestrators/ece-platform-monitoring.md index c2a69e58f..7fa3662dc 100644 --- a/deploy-manage/monitor/orchestrators/ece-platform-monitoring.md +++ b/deploy-manage/monitor/orchestrators/ece-platform-monitoring.md @@ -1,8 +1,9 @@ --- mapped_pages: - https://www.elastic.co/guide/en/cloud-enterprise/current/ece-monitoring-ece.html -applies: - ece: all +applies_to: + deployment: + ece: all --- # ECE platform monitoring [ece-monitoring-ece] diff --git a/deploy-manage/monitor/orchestrators/ece-proxy-log-fields.md b/deploy-manage/monitor/orchestrators/ece-proxy-log-fields.md index d80f84bbe..4c9619274 100644 --- a/deploy-manage/monitor/orchestrators/ece-proxy-log-fields.md +++ 
b/deploy-manage/monitor/orchestrators/ece-proxy-log-fields.md @@ -1,8 +1,9 @@ --- mapped_pages: - https://www.elastic.co/guide/en/cloud-enterprise/current/ece-proxy-log-fields.html -applies: - ece: all +applies_to: + deployment: + ece: all --- # Proxy Log Fields [ece-proxy-log-fields] diff --git a/deploy-manage/monitor/orchestrators/eck-metrics-configuration.md b/deploy-manage/monitor/orchestrators/eck-metrics-configuration.md index ee351b7ad..907c743bd 100644 --- a/deploy-manage/monitor/orchestrators/eck-metrics-configuration.md +++ b/deploy-manage/monitor/orchestrators/eck-metrics-configuration.md @@ -1,8 +1,9 @@ --- mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-configure-operator-metrics.html -applies: - eck: all +applies_to: + deployment: + eck: all --- # ECK metrics configuration [k8s-configure-operator-metrics] diff --git a/deploy-manage/monitor/orchestrators/k8s-enabling-metrics-endpoint.md b/deploy-manage/monitor/orchestrators/k8s-enabling-metrics-endpoint.md index 2c8ac4285..d72ff49bb 100644 --- a/deploy-manage/monitor/orchestrators/k8s-enabling-metrics-endpoint.md +++ b/deploy-manage/monitor/orchestrators/k8s-enabling-metrics-endpoint.md @@ -1,8 +1,9 @@ --- mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-enabling-the-metrics-endpoint.html -applies: - eck: all +applies_to: + deployment: + eck: all --- # Enabling the metrics endpoint [k8s-enabling-the-metrics-endpoint] diff --git a/deploy-manage/monitor/orchestrators/k8s-prometheus-requirements.md b/deploy-manage/monitor/orchestrators/k8s-prometheus-requirements.md index 3e9bb2d39..8609633f9 100644 --- a/deploy-manage/monitor/orchestrators/k8s-prometheus-requirements.md +++ b/deploy-manage/monitor/orchestrators/k8s-prometheus-requirements.md @@ -1,8 +1,9 @@ --- mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-prometheus-requirements.html -applies: - eck: all +applies_to: + deployment: + eck: all --- # Prometheus requirements [k8s-prometheus-requirements] diff --git a/deploy-manage/monitor/orchestrators/k8s-securing-metrics-endpoint.md b/deploy-manage/monitor/orchestrators/k8s-securing-metrics-endpoint.md index 8177a7697..2938284c4 100644 --- a/deploy-manage/monitor/orchestrators/k8s-securing-metrics-endpoint.md +++ b/deploy-manage/monitor/orchestrators/k8s-securing-metrics-endpoint.md @@ -1,8 +1,9 @@ --- mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-securing-the-metrics-endpoint.html -applies: - eck: all +applies_to: + deployment: + eck: all --- # Securing the metrics endpoint [k8s-securing-the-metrics-endpoint] diff --git a/deploy-manage/monitor/stack-monitoring.md b/deploy-manage/monitor/stack-monitoring.md index f6394d275..9e02fc315 100644 --- a/deploy-manage/monitor/stack-monitoring.md +++ b/deploy-manage/monitor/stack-monitoring.md @@ -3,11 +3,12 @@ mapped_urls: - https://www.elastic.co/guide/en/elasticsearch/reference/current/monitoring-overview.html - https://www.elastic.co/guide/en/elasticsearch/reference/current/how-monitoring-works.html - https://www.elastic.co/guide/en/cloud/current/ec-monitoring.html -applies: - hosted: all - ece: all - eck: all - stack: all +applies_to: + deployment: + ess: all + ece: all + eck: all + self: all --- # Stack Monitoring diff --git a/deploy-manage/monitor/stack-monitoring/collecting-log-data-with-filebeat.md b/deploy-manage/monitor/stack-monitoring/collecting-log-data-with-filebeat.md index 75f475722..b51415693 100644 --- 
a/deploy-manage/monitor/stack-monitoring/collecting-log-data-with-filebeat.md +++ b/deploy-manage/monitor/stack-monitoring/collecting-log-data-with-filebeat.md @@ -2,8 +2,9 @@ navigation_title: "Collecting log data with {{filebeat}}" mapped_pages: - https://www.elastic.co/guide/en/elasticsearch/reference/current/configuring-filebeat.html -applies: - stack: all +applies_to: + deployment: + self: all --- @@ -32,7 +33,7 @@ If you’re using {{agent}}, do not deploy {{filebeat}} for log collection. Inst If there are both structured (`*.json`) and unstructured (plain text) versions of the logs, you must use the structured logs. Otherwise, they might not appear in the appropriate context in {{kib}}. :::: -3. [Install {{filebeat}}](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-filebeat/filebeat-installation-configuration.md) on the {{es}} nodes that contain logs that you want to monitor. +3. [Install {{filebeat}}](asciidocalypse://docs/beats/docs/reference/filebeat/filebeat-installation-configuration.md) on the {{es}} nodes that contain logs that you want to monitor. 4. Identify where to send the log data. For example, specify {{es}} output information for your monitoring cluster in the {{filebeat}} configuration file (`filebeat.yml`): @@ -60,7 +61,7 @@ If you’re using {{agent}}, do not deploy {{filebeat}} for log collection. Inst If {{es}} {{security-features}} are enabled on the monitoring cluster, you must provide a valid user ID and password so that {{filebeat}} can send metrics successfully. - For more information about these configuration options, see [Configure the {{es}} output](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-filebeat/elasticsearch-output.md). + For more information about these configuration options, see [Configure the {{es}} output](asciidocalypse://docs/beats/docs/reference/filebeat/elasticsearch-output.md). 5. Optional: Identify where to visualize the data. @@ -81,9 +82,9 @@ If you’re using {{agent}}, do not deploy {{filebeat}} for log collection. Inst If {{security-features}} are enabled, you must provide a valid user ID and password so that {{filebeat}} can connect to {{kib}}: 1. Create a user on the monitoring cluster that has the [`kibana_admin` built-in role](../../users-roles/cluster-or-deployment-auth/built-in-roles.md) or equivalent privileges. - 2. Add the `username` and `password` settings to the {{es}} output information in the {{filebeat}} configuration file. The example shows a hard-coded password, but you should store sensitive values in the [secrets keystore](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-filebeat/keystore.md). + 2. Add the `username` and `password` settings to the {{es}} output information in the {{filebeat}} configuration file. The example shows a hard-coded password, but you should store sensitive values in the [secrets keystore](asciidocalypse://docs/beats/docs/reference/filebeat/keystore.md). - See [Configure the {{kib}} endpoint](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-filebeat/setup-kibana-endpoint.md). + See [Configure the {{kib}} endpoint](asciidocalypse://docs/beats/docs/reference/filebeat/setup-kibana-endpoint.md). 6. Enable the {{es}} module and set up the initial {{filebeat}} environment on each node. @@ -94,20 +95,20 @@ If you’re using {{agent}}, do not deploy {{filebeat}} for log collection. 
Inst filebeat setup -e ``` - For more information, see [{{es}} module](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-filebeat/filebeat-module-elasticsearch.md). + For more information, see [{{es}} module](asciidocalypse://docs/beats/docs/reference/filebeat/filebeat-module-elasticsearch.md). 7. Configure the {{es}} module in {{filebeat}} on each node. - If the logs that you want to monitor aren’t in the default location, set the appropriate path variables in the `modules.d/elasticsearch.yml` file. See [Configure the {{es}} module](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-filebeat/filebeat-module-elasticsearch.md#configuring-elasticsearch-module). + If the logs that you want to monitor aren’t in the default location, set the appropriate path variables in the `modules.d/elasticsearch.yml` file. See [Configure the {{es}} module](asciidocalypse://docs/beats/docs/reference/filebeat/filebeat-module-elasticsearch.md#configuring-elasticsearch-module). ::::{important} If there are JSON logs, configure the `var.paths` settings to point to them instead of the plain text logs. :::: -8. [Start {{filebeat}}](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-filebeat/filebeat-starting.md) on each node. +8. [Start {{filebeat}}](asciidocalypse://docs/beats/docs/reference/filebeat/filebeat-starting.md) on each node. ::::{note} - Depending on how you’ve installed {{filebeat}}, you might see errors related to file ownership or permissions when you try to run {{filebeat}} modules. See [Config file ownership and permissions](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-libbeat/config-file-permissions.md). + Depending on how you’ve installed {{filebeat}}, you might see errors related to file ownership or permissions when you try to run {{filebeat}} modules. See [Config file ownership and permissions](asciidocalypse://docs/beats/docs/reference/libbeat/config-file-permissions.md). :::: 9. Check whether the appropriate indices exist on the monitoring cluster. diff --git a/deploy-manage/monitor/stack-monitoring/collecting-monitoring-data-with-elastic-agent.md b/deploy-manage/monitor/stack-monitoring/collecting-monitoring-data-with-elastic-agent.md index d84d877df..f3f7568f0 100644 --- a/deploy-manage/monitor/stack-monitoring/collecting-monitoring-data-with-elastic-agent.md +++ b/deploy-manage/monitor/stack-monitoring/collecting-monitoring-data-with-elastic-agent.md @@ -2,8 +2,9 @@ navigation_title: "Collecting monitoring data with {{agent}}" mapped_pages: - https://www.elastic.co/guide/en/elasticsearch/reference/current/configuring-elastic-agent.html -applies: - stack: all +applies_to: + deployment: + self: all --- diff --git a/deploy-manage/monitor/stack-monitoring/collecting-monitoring-data-with-metricbeat.md b/deploy-manage/monitor/stack-monitoring/collecting-monitoring-data-with-metricbeat.md index 7b1348462..f21852c60 100644 --- a/deploy-manage/monitor/stack-monitoring/collecting-monitoring-data-with-metricbeat.md +++ b/deploy-manage/monitor/stack-monitoring/collecting-monitoring-data-with-metricbeat.md @@ -2,8 +2,9 @@ navigation_title: "Collecting monitoring data with {{metricbeat}}" mapped_pages: - https://www.elastic.co/guide/en/elasticsearch/reference/current/configuring-metricbeat.html -applies: - stack: all +applies_to: + deployment: + self: all --- @@ -19,7 +20,7 @@ Want to use {{agent}} instead? Refer to [Collecting monitoring data with {{agent :alt: Example monitoring architecture ::: -1. 
[Install {{metricbeat}}](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-metricbeat/metricbeat-installation-configuration.md). Ideally install a single {{metricbeat}} instance configured with `scope: cluster` and configure `hosts` to point to an endpoint (e.g. a load-balancing proxy) which directs requests to the master-ineligible nodes in the cluster. If this is not possible then install one {{metricbeat}} instance for each {{es}} node in the production cluster and use the default `scope: node`. When {{metricbeat}} is monitoring {{es}} with `scope: node` then you must install a {{metricbeat}} instance for each {{es}} node. If you don’t, some metrics will not be collected. {{metricbeat}} with `scope: node` collects most of the metrics from the elected master of the cluster, so you must scale up all your master-eligible nodes to account for this extra load and you should not use this mode if you have dedicated master nodes. +1. [Install {{metricbeat}}](asciidocalypse://docs/beats/docs/reference/metricbeat/metricbeat-installation-configuration.md). Ideally install a single {{metricbeat}} instance configured with `scope: cluster` and configure `hosts` to point to an endpoint (e.g. a load-balancing proxy) which directs requests to the master-ineligible nodes in the cluster. If this is not possible then install one {{metricbeat}} instance for each {{es}} node in the production cluster and use the default `scope: node`. When {{metricbeat}} is monitoring {{es}} with `scope: node` then you must install a {{metricbeat}} instance for each {{es}} node. If you don’t, some metrics will not be collected. {{metricbeat}} with `scope: node` collects most of the metrics from the elected master of the cluster, so you must scale up all your master-eligible nodes to account for this extra load and you should not use this mode if you have dedicated master nodes. 2. Enable the {{es}} module in {{metricbeat}} on each {{es}} node. For example, to enable the default configuration for the {{stack-monitor-features}} in the `modules.d` directory, run the following command: @@ -28,7 +29,7 @@ Want to use {{agent}} instead? Refer to [Collecting monitoring data with {{agent metricbeat modules enable elasticsearch-xpack ``` - For more information, refer to [{{es}} module](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-metricbeat/metricbeat-module-elasticsearch.md). + For more information, refer to [{{es}} module](asciidocalypse://docs/beats/docs/reference/metricbeat/metricbeat-module-elasticsearch.md). 3. Configure the {{es}} module in {{metricbeat}} on each {{es}} node. @@ -57,11 +58,11 @@ Want to use {{agent}} instead? Refer to [Collecting monitoring data with {{agent 1. Create a user on the production cluster that has the [`remote_monitoring_collector` built-in role](../../users-roles/cluster-or-deployment-auth/built-in-roles.md). Alternatively, use the [`remote_monitoring_user` built-in user](../../users-roles/cluster-or-deployment-auth/built-in-users.md). 2. Add the `username` and `password` settings to the {{es}} module configuration file. - 3. If TLS is enabled on the HTTP layer of your {{es}} cluster, you must either use https as the URL scheme in the `hosts` setting or add the `ssl.enabled: true` setting. Depending on the TLS configuration of your {{es}} cluster, you might also need to specify [additional ssl.*](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-metricbeat/configuration-ssl.md) settings. + 3. 
If TLS is enabled on the HTTP layer of your {{es}} cluster, you must either use https as the URL scheme in the `hosts` setting or add the `ssl.enabled: true` setting. Depending on the TLS configuration of your {{es}} cluster, you might also need to specify [additional ssl.*](asciidocalypse://docs/beats/docs/reference/metricbeat/configuration-ssl.md) settings. 4. Optional: Disable the system module in {{metricbeat}}. - By default, the [system module](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-metricbeat/metricbeat-module-system.md) is enabled. The information it collects, however, is not shown on the **Monitoring** page in {{kib}}. Unless you want to use that information for other purposes, run the following command: + By default, the [system module](asciidocalypse://docs/beats/docs/reference/metricbeat/metricbeat-module-system.md) is enabled. The information it collects, however, is not shown on the **Monitoring** page in {{kib}}. Unless you want to use that information for other purposes, run the following command: ```sh metricbeat modules disable system @@ -102,8 +103,8 @@ Want to use {{agent}} instead? Refer to [Collecting monitoring data with {{agent 1. Create a user on the monitoring cluster that has the [`remote_monitoring_agent` built-in role](../../users-roles/cluster-or-deployment-auth/built-in-roles.md). Alternatively, use the [`remote_monitoring_user` built-in user](../../users-roles/cluster-or-deployment-auth/built-in-users.md). 2. Add the `username` and `password` settings to the {{es}} output information in the {{metricbeat}} configuration file. - For more information about these configuration options, see [Configure the {{es}} output](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-metricbeat/elasticsearch-output.md). + For more information about these configuration options, see [Configure the {{es}} output](asciidocalypse://docs/beats/docs/reference/metricbeat/elasticsearch-output.md). -6. [Start {{metricbeat}}](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-metricbeat/metricbeat-starting.md) on each node. +6. [Start {{metricbeat}}](asciidocalypse://docs/beats/docs/reference/metricbeat/metricbeat-starting.md) on each node. 7. [View the monitoring data in {{kib}}](kibana-monitoring-data.md). 
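Pulling the module steps above together, a minimal sketch of what `modules.d/elasticsearch-xpack.yml` might look like for a TLS-enabled production cluster; the endpoint, username, and password are placeholders to adapt to your environment:

```yaml
# Sketch only: Metricbeat Elasticsearch module configured for stack monitoring.
- module: elasticsearch
  xpack.enabled: true
  period: 10s
  hosts: ["https://es-proxy.example.com:9200"]  # ideally an endpoint routing to master-ineligible nodes
  scope: cluster                                # or omit to use the default "node" scope
  username: "remote_monitoring_user"
  password: "YOUR_PASSWORD"
  ssl.enabled: true
```

With the default `scope: node` instead, repeat the same configuration in the Metricbeat instance installed on every {{es}} node, as described in step 1.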
diff --git a/deploy-manage/monitor/stack-monitoring/ece-restrictions-monitoring.md b/deploy-manage/monitor/stack-monitoring/ece-restrictions-monitoring.md index 06391bd4e..30a1025df 100644 --- a/deploy-manage/monitor/stack-monitoring/ece-restrictions-monitoring.md +++ b/deploy-manage/monitor/stack-monitoring/ece-restrictions-monitoring.md @@ -1,8 +1,9 @@ --- mapped_pages: - https://www.elastic.co/guide/en/cloud-enterprise/current/ece-restrictions-monitoring.html -applies: - ece: all +applies_to: + deployment: + ece: all --- # Restrictions and limitations [ece-restrictions-monitoring] diff --git a/deploy-manage/monitor/stack-monitoring/ece-stack-monitoring.md b/deploy-manage/monitor/stack-monitoring/ece-stack-monitoring.md index d58578ffe..964a0bf76 100644 --- a/deploy-manage/monitor/stack-monitoring/ece-stack-monitoring.md +++ b/deploy-manage/monitor/stack-monitoring/ece-stack-monitoring.md @@ -2,8 +2,9 @@ navigation_title: "Elastic Cloud Enterprise (ECE)" mapped_pages: - https://www.elastic.co/guide/en/cloud-enterprise/current/ece-enable-logging-and-monitoring.html -applies: - ece: all +applies_to: + deployment: + ece: all --- # Enable stack monitoring on ECE deployments [ece-enable-logging-and-monitoring] @@ -123,7 +124,7 @@ Elastic Cloud Enterprise manages the installation and configuration of the monit To enable monitoring on your deployment: 1. [Log into the Cloud UI](../../deploy/cloud-enterprise/log-into-cloud-ui.md). -2. On the deployments page, select your deployment. +2. On the **Deployments** page, select your deployment. Narrow the list by name, ID, or choose from several other filters. To further define the list, use a combination of filters. @@ -154,7 +155,7 @@ Enabling logs and monitoring requires some extra resource on a deployment. For p With monitoring enabled for your deployment, you can access the [logs](https://www.elastic.co/guide/en/kibana/current/observability.html) and [stack monitoring](../monitoring-data/visualizing-monitoring-data.md) through Kibana. 1. [Log into the Cloud UI](../../deploy/cloud-enterprise/log-into-cloud-ui.md). -2. On the deployments page, select your deployment. +2. On the **Deployments** page, select your deployment. Narrow the list by name, ID, or choose from several other filters. To further define the list, use a combination of filters. @@ -222,7 +223,7 @@ With logging and monitoring enabled for a deployment, metrics are collected for Audit logs are useful for tracking security events on your {{es}} and/or {{kib}} clusters. To enable {{es}} audit logs on your deployment: 1. [Log into the Cloud UI](../../deploy/cloud-enterprise/log-into-cloud-ui.md). -2. On the deployments page, select your deployment. +2. On the **Deployments** page, select your deployment. Narrow the list by name, ID, or choose from several other filters. To further define the list, use a combination of filters. 
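For context on the audit log steps above, audit logging in {{es}} is ultimately controlled by a single setting; a sketch of the equivalent user setting, assuming a version and license level where audit logging is available:

```yaml
# Sketch only: Elasticsearch user setting that turns on audit logging.
xpack.security.audit.enabled: true
```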
diff --git a/deploy-manage/monitor/stack-monitoring/eck-stack-monitoring.md b/deploy-manage/monitor/stack-monitoring/eck-stack-monitoring.md index 1032c2c7e..169846eca 100644 --- a/deploy-manage/monitor/stack-monitoring/eck-stack-monitoring.md +++ b/deploy-manage/monitor/stack-monitoring/eck-stack-monitoring.md @@ -2,8 +2,9 @@ navigation_title: "Elastic Cloud on Kubernetes (ECK)" mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-stack-monitoring.html -applies: - eck: all +applies_to: + deployment: + eck: all --- # Enable stack monitoring on ECK deployments [k8s-stack-monitoring] diff --git a/deploy-manage/monitor/stack-monitoring/elastic-cloud-stack-monitoring.md b/deploy-manage/monitor/stack-monitoring/elastic-cloud-stack-monitoring.md index 1e7c4dfc7..6ad9cfe39 100644 --- a/deploy-manage/monitor/stack-monitoring/elastic-cloud-stack-monitoring.md +++ b/deploy-manage/monitor/stack-monitoring/elastic-cloud-stack-monitoring.md @@ -7,8 +7,9 @@ mapped_urls: - https://www.elastic.co/guide/en/cloud-heroku/current/ech-enable-logging-and-monitoring.html - https://www.elastic.co/guide/en/cloud-heroku/current/ech-monitoring-setup.html - https://www.elastic.co/guide/en/cloud-heroku/current/ech-restrictions-monitoring.html -applies: - hosted: all +applies_to: + deployment: + ess: all --- # Stack Monitoring on Elastic Cloud deployments diff --git a/deploy-manage/monitor/stack-monitoring/elasticsearch-monitoring-self-managed.md b/deploy-manage/monitor/stack-monitoring/elasticsearch-monitoring-self-managed.md index 37788d76b..04639b89e 100644 --- a/deploy-manage/monitor/stack-monitoring/elasticsearch-monitoring-self-managed.md +++ b/deploy-manage/monitor/stack-monitoring/elasticsearch-monitoring-self-managed.md @@ -3,8 +3,9 @@ navigation_title: "Elasticsearch self-managed" mapped_urls: - https://www.elastic.co/guide/en/elasticsearch/reference/current/monitoring-production.html - https://www.elastic.co/guide/en/elasticsearch/reference/current/secure-monitoring.html -applies: - stack: all +applies_to: + deployment: + self: all --- # Elasticsearch monitoring self-managed diff --git a/deploy-manage/monitor/stack-monitoring/es-http-exporter.md b/deploy-manage/monitor/stack-monitoring/es-http-exporter.md index 74f9dd900..4d3b617e8 100644 --- a/deploy-manage/monitor/stack-monitoring/es-http-exporter.md +++ b/deploy-manage/monitor/stack-monitoring/es-http-exporter.md @@ -1,10 +1,12 @@ --- mapped_pages: - https://www.elastic.co/guide/en/elasticsearch/reference/current/http-exporter.html -applies: - stack: deprecated 7.16.0 +applies_to: + deployment: + self: deprecated 7.16.0 --- + # HTTP exporters [http-exporter] ::::{important} diff --git a/deploy-manage/monitor/stack-monitoring/es-legacy-collection-methods.md b/deploy-manage/monitor/stack-monitoring/es-legacy-collection-methods.md index d46605a97..1d8afb703 100644 --- a/deploy-manage/monitor/stack-monitoring/es-legacy-collection-methods.md +++ b/deploy-manage/monitor/stack-monitoring/es-legacy-collection-methods.md @@ -2,12 +2,14 @@ navigation_title: "Legacy collection methods" mapped_pages: - https://www.elastic.co/guide/en/elasticsearch/reference/current/collecting-monitoring-data.html -applies: - stack: deprecated 7.16.0 +applies_to: + deployment: + self: deprecated 7.16.0 --- + # Legacy collection methods [collecting-monitoring-data] diff --git a/deploy-manage/monitor/stack-monitoring/es-local-exporter.md b/deploy-manage/monitor/stack-monitoring/es-local-exporter.md index abaf5b96f..a4e40e0cb 100644 --- 
a/deploy-manage/monitor/stack-monitoring/es-local-exporter.md +++ b/deploy-manage/monitor/stack-monitoring/es-local-exporter.md @@ -1,8 +1,9 @@ --- mapped_pages: - https://www.elastic.co/guide/en/elasticsearch/reference/current/local-exporter.html -applies: - stack: deprecated 7.16.0 +applies_to: + deployment: + self: deprecated 7.16.0 --- # Local exporters [local-exporter] diff --git a/deploy-manage/monitor/stack-monitoring/es-monitoring-collectors.md b/deploy-manage/monitor/stack-monitoring/es-monitoring-collectors.md index 72504e238..fbc39ac14 100644 --- a/deploy-manage/monitor/stack-monitoring/es-monitoring-collectors.md +++ b/deploy-manage/monitor/stack-monitoring/es-monitoring-collectors.md @@ -1,10 +1,12 @@ --- mapped_pages: - https://www.elastic.co/guide/en/elasticsearch/reference/current/es-monitoring-collectors.html -applies: - stack: deprecated 7.16.0 +applies_to: + deployment: + self: deprecated 7.16.0 --- + # Collectors [es-monitoring-collectors] ::::{important} diff --git a/deploy-manage/monitor/stack-monitoring/es-monitoring-exporters.md b/deploy-manage/monitor/stack-monitoring/es-monitoring-exporters.md index f22994cba..25975f00f 100644 --- a/deploy-manage/monitor/stack-monitoring/es-monitoring-exporters.md +++ b/deploy-manage/monitor/stack-monitoring/es-monitoring-exporters.md @@ -1,10 +1,12 @@ --- mapped_pages: - https://www.elastic.co/guide/en/elasticsearch/reference/current/es-monitoring-exporters.html -applies: - stack: deprecated 7.16.0 +applies_to: + deployment: + self: deprecated 7.16.0 --- + # Exporters [es-monitoring-exporters] ::::{important} @@ -39,7 +41,7 @@ When the exporters route monitoring data into the monitoring cluster, they use ` Routing monitoring data involves indexing it into the appropriate monitoring indices. Once the data is indexed, it exists in a monitoring index that, by default, is named with a daily index pattern. For {{es}} monitoring data, this is an index that matches `.monitoring-es-6-*`. From there, the data lives inside the monitoring cluster and must be curated or cleaned up as necessary. If you do not curate the monitoring data, it eventually fills up the nodes and the cluster might fail due to lack of disk space. ::::{tip} -You are strongly recommended to manage the curation of indices and particularly the monitoring indices. To do so, you can take advantage of the [cleaner service](es-local-exporter.md#local-exporter-cleaner) or [Elastic Curator](asciidocalypse://docs/curator/docs/reference/elasticsearch/elasticsearch-client-curator/index.md). +You are strongly recommended to manage the curation of indices and particularly the monitoring indices. To do so, you can take advantage of the [cleaner service](es-local-exporter.md#local-exporter-cleaner) or [Elastic Curator](asciidocalypse://docs/curator/docs/reference/index.md). 
:::: diff --git a/deploy-manage/monitor/stack-monitoring/es-pause-export.md b/deploy-manage/monitor/stack-monitoring/es-pause-export.md index 65bf092d8..a9f66531a 100644 --- a/deploy-manage/monitor/stack-monitoring/es-pause-export.md +++ b/deploy-manage/monitor/stack-monitoring/es-pause-export.md @@ -1,10 +1,12 @@ --- mapped_pages: - https://www.elastic.co/guide/en/elasticsearch/reference/current/pause-export.html -applies: - stack: deprecated 7.16.0 +applies_to: + deployment: + self: deprecated 7.16.0 --- + # Pausing data collection [pause-export] To stop generating {{monitoring}} data in {{es}}, disable data collection: diff --git a/deploy-manage/monitor/stack-monitoring/k8s_audit_logging.md b/deploy-manage/monitor/stack-monitoring/k8s_audit_logging.md index cf4d89394..638cc8517 100644 --- a/deploy-manage/monitor/stack-monitoring/k8s_audit_logging.md +++ b/deploy-manage/monitor/stack-monitoring/k8s_audit_logging.md @@ -1,8 +1,9 @@ --- mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s_audit_logging.html -applies: - eck: all +applies_to: + deployment: + eck: all --- # Audit logging [k8s_audit_logging] diff --git a/deploy-manage/monitor/stack-monitoring/k8s_connect_to_an_external_monitoring_elasticsearch_cluster.md b/deploy-manage/monitor/stack-monitoring/k8s_connect_to_an_external_monitoring_elasticsearch_cluster.md index a730bb4a7..71497acb7 100644 --- a/deploy-manage/monitor/stack-monitoring/k8s_connect_to_an_external_monitoring_elasticsearch_cluster.md +++ b/deploy-manage/monitor/stack-monitoring/k8s_connect_to_an_external_monitoring_elasticsearch_cluster.md @@ -2,8 +2,9 @@ navigation_title: "Connect to an external cluster" mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s_connect_to_an_external_monitoring_elasticsearch_cluster.html -applies: - eck: all +applies_to: + deployment: + eck: all --- # Connect to an external monitoring Elasticsearch cluster [k8s_connect_to_an_external_monitoring_elasticsearch_cluster] diff --git a/deploy-manage/monitor/stack-monitoring/k8s_how_it_works.md b/deploy-manage/monitor/stack-monitoring/k8s_how_it_works.md index 38ea5faaf..e96b71380 100644 --- a/deploy-manage/monitor/stack-monitoring/k8s_how_it_works.md +++ b/deploy-manage/monitor/stack-monitoring/k8s_how_it_works.md @@ -1,8 +1,9 @@ --- mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s_how_it_works.html -applies: - eck: all +applies_to: + deployment: + eck: all --- # How it works [k8s_how_it_works] diff --git a/deploy-manage/monitor/stack-monitoring/k8s_override_the_beats_pod_template.md b/deploy-manage/monitor/stack-monitoring/k8s_override_the_beats_pod_template.md index 0a43e3f70..118d4b2de 100644 --- a/deploy-manage/monitor/stack-monitoring/k8s_override_the_beats_pod_template.md +++ b/deploy-manage/monitor/stack-monitoring/k8s_override_the_beats_pod_template.md @@ -1,8 +1,9 @@ --- mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s_override_the_beats_pod_template.html -applies: - eck: all +applies_to: + deployment: + eck: all --- # Override the Beats Pod Template [k8s_override_the_beats_pod_template] diff --git a/deploy-manage/monitor/stack-monitoring/k8s_when_to_use_it.md b/deploy-manage/monitor/stack-monitoring/k8s_when_to_use_it.md index 01d44a46b..b774e0dd2 100644 --- a/deploy-manage/monitor/stack-monitoring/k8s_when_to_use_it.md +++ b/deploy-manage/monitor/stack-monitoring/k8s_when_to_use_it.md @@ -1,8 +1,9 @@ --- mapped_pages: - 
https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s_when_to_use_it.html -applies: - eck: all +applies_to: + deployment: + eck: all --- # When to use it [k8s_when_to_use_it] diff --git a/deploy-manage/monitor/stack-monitoring/kibana-monitoring-data.md b/deploy-manage/monitor/stack-monitoring/kibana-monitoring-data.md index ab03af0df..2bc817565 100644 --- a/deploy-manage/monitor/stack-monitoring/kibana-monitoring-data.md +++ b/deploy-manage/monitor/stack-monitoring/kibana-monitoring-data.md @@ -2,11 +2,12 @@ navigation_title: "View monitoring data" mapped_pages: - https://www.elastic.co/guide/en/kibana/current/monitoring-data.html -applies: - hosted: all - ece: all - eck: all - stack: all +applies_to: + deployment: + ess: all + ece: all + eck: all + self: all --- diff --git a/deploy-manage/monitor/stack-monitoring/kibana-monitoring-elastic-agent.md b/deploy-manage/monitor/stack-monitoring/kibana-monitoring-elastic-agent.md index 83264edff..842c3fd9c 100644 --- a/deploy-manage/monitor/stack-monitoring/kibana-monitoring-elastic-agent.md +++ b/deploy-manage/monitor/stack-monitoring/kibana-monitoring-elastic-agent.md @@ -2,8 +2,9 @@ navigation_title: "Collect monitoring data with {{agent}}" mapped_pages: - https://www.elastic.co/guide/en/kibana/current/monitoring-elastic-agent.html -applies: - stack: all +applies_to: + deployment: + self: all --- diff --git a/deploy-manage/monitor/stack-monitoring/kibana-monitoring-legacy.md b/deploy-manage/monitor/stack-monitoring/kibana-monitoring-legacy.md index 833f55811..d54ffdfb2 100644 --- a/deploy-manage/monitor/stack-monitoring/kibana-monitoring-legacy.md +++ b/deploy-manage/monitor/stack-monitoring/kibana-monitoring-legacy.md @@ -2,12 +2,14 @@ navigation_title: "Legacy collection methods" mapped_pages: - https://www.elastic.co/guide/en/kibana/current/monitoring-kibana.html -applies: - stack: deprecated 7.16.0 +applies_to: + deployment: + self: deprecated 7.16.0 --- + # Legacy collection methods [monitoring-kibana] diff --git a/deploy-manage/monitor/stack-monitoring/kibana-monitoring-metricbeat.md b/deploy-manage/monitor/stack-monitoring/kibana-monitoring-metricbeat.md index 37b3e6645..0cc7fef06 100644 --- a/deploy-manage/monitor/stack-monitoring/kibana-monitoring-metricbeat.md +++ b/deploy-manage/monitor/stack-monitoring/kibana-monitoring-metricbeat.md @@ -2,8 +2,9 @@ navigation_title: "Collect monitoring data with {{metricbeat}}" mapped_pages: - https://www.elastic.co/guide/en/kibana/current/monitoring-metricbeat.html -applies: - stack: all +applies_to: + deployment: + self: all --- @@ -64,7 +65,7 @@ To learn about monitoring in general, see [Monitor a cluster](../../monitor.md). For more information, see [Monitoring settings in {{es}}](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/configuration-reference/monitoring-settings.md) and [Cluster update settings](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-put-settings). -4. [Install {{metricbeat}}](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-metricbeat/metricbeat-installation-configuration.md) on the same server as {{kib}}. +4. [Install {{metricbeat}}](asciidocalypse://docs/beats/docs/reference/metricbeat/metricbeat-installation-configuration.md) on the same server as {{kib}}. 5. Enable the {{kib}} {{xpack}} module in {{metricbeat}}.
For example, to enable the default configuration in the `modules.d` directory, run the following command: @@ -73,7 +74,7 @@ To learn about monitoring in general, see [Monitor a cluster](../../monitor.md). metricbeat modules enable kibana-xpack ``` - For more information, see [Specify which modules to run](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-metricbeat/configuration-metricbeat.md) and [{{kib}} module](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-metricbeat/metricbeat-module-kibana.md). + For more information, see [Specify which modules to run](asciidocalypse://docs/beats/docs/reference/metricbeat/configuration-metricbeat.md) and [{{kib}} module](asciidocalypse://docs/beats/docs/reference/metricbeat/metricbeat-module-kibana.md). 6. Configure the {{kib}} {{xpack}} module in {{metricbeat}}.
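As an illustration of step 6, the enabled `modules.d/kibana-xpack.yml` file typically contains a block along these lines; the host and credentials are placeholders, and defaults can differ between versions:

```yaml
# Sketch only: Metricbeat Kibana module configured for stack monitoring.
- module: kibana
  xpack.enabled: true
  period: 10s
  hosts: ["http://localhost:5601"]
  username: "remote_monitoring_user"
  password: "YOUR_PASSWORD"
  #ssl.enabled: true  # enable if Kibana is served over HTTPS
```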
@@ -100,7 +101,7 @@ To learn about monitoring in general, see [Monitor a cluster](../../monitor.md). 7. Optional: Disable the system module in {{metricbeat}}. - By default, the [system module](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-metricbeat/metricbeat-module-system.md) is enabled. The information it collects, however, is not shown on the **Monitoring** page in {{kib}}. Unless you want to use that information for other purposes, run the following command: + By default, the [system module](asciidocalypse://docs/beats/docs/reference/metricbeat/metricbeat-module-system.md) is enabled. The information it collects, however, is not shown on the **Monitoring** page in {{kib}}. Unless you want to use that information for other purposes, run the following command: ```sh metricbeat modules disable system @@ -141,8 +142,8 @@ To learn about monitoring in general, see [Monitor a cluster](../../monitor.md). 1. Create a user on the monitoring cluster that has the `remote_monitoring_agent` [built-in role](../../users-roles/cluster-or-deployment-auth/built-in-roles.md). Alternatively, use the `remote_monitoring_user` [built-in user](../../users-roles/cluster-or-deployment-auth/built-in-users.md). 2. Add the `username` and `password` settings to the {{es}} output information in the {{metricbeat}} configuration file. - For more information about these configuration options, see [Configure the {{es}} output](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-metricbeat/elasticsearch-output.md). + For more information about these configuration options, see [Configure the {{es}} output](asciidocalypse://docs/beats/docs/reference/metricbeat/elasticsearch-output.md). -9. [Start {{metricbeat}}](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-metricbeat/metricbeat-starting.md). +9. [Start {{metricbeat}}](asciidocalypse://docs/beats/docs/reference/metricbeat/metricbeat-starting.md). 10. [View the monitoring data in {{kib}}](/deploy-manage/monitor/monitoring-data.md). diff --git a/deploy-manage/monitor/stack-monitoring/kibana-monitoring-self-managed.md b/deploy-manage/monitor/stack-monitoring/kibana-monitoring-self-managed.md index 62b1e8e93..4f975d6a5 100644 --- a/deploy-manage/monitor/stack-monitoring/kibana-monitoring-self-managed.md +++ b/deploy-manage/monitor/stack-monitoring/kibana-monitoring-self-managed.md @@ -2,8 +2,8 @@ navigation_title: "Kibana self-managed" mapped_pages: - https://www.elastic.co/guide/en/kibana/current/configuring-monitoring.html -applies: - stack: all +applies_to: + self: all --- diff --git a/deploy-manage/production-guidance.md b/deploy-manage/production-guidance.md index 27e37ca0d..322e3f7d9 100644 --- a/deploy-manage/production-guidance.md +++ b/deploy-manage/production-guidance.md @@ -10,9 +10,9 @@ This section provides some best practices for managing your data to help you set ## Plan your data structure, availability, and formatting [ec_plan_your_data_structure_availability_and_formatting] -* Build a [data architecture](/manage-data/lifecycle/data-tiers.md) that best fits your needs. Your Elasticsearch Service deployment comes with default hot tier {{es}} nodes that store your most frequently accessed data. Based on your own access and retention policies, you can add warm, cold, frozen data tiers, and automated deletion of old data. +* Build a [data architecture](/manage-data/lifecycle/data-tiers.md) that best fits your needs. 
Your {{ech}} deployment comes with default hot tier {{es}} nodes that store your most frequently accessed data. Based on your own access and retention policies, you can add warm, cold, frozen data tiers, and automated deletion of old data. * Make your data [highly available](/deploy-manage/tools.md) for production environments or otherwise critical data stores, and take regular [backup snapshots](tools/snapshot-and-restore.md). -* Normalize event data to better analyze, visualize, and correlate your events by adopting the [Elastic Common Schema](asciidocalypse://docs/ecs/docs/reference/ecs/ecs-getting-started.md) (ECS). Elastic integrations use ECS out-of-the-box. If you are writing your own integrations, ECS is recommended. +* Normalize event data to better analyze, visualize, and correlate your events by adopting the [Elastic Common Schema](asciidocalypse://docs/ecs/docs/reference/ecs-getting-started.md) (ECS). Elastic integrations use ECS out-of-the-box. If you are writing your own integrations, ECS is recommended. ## Optimize data storage and retention [ec_optimize_data_storage_and_retention] diff --git a/deploy-manage/reference-architectures.md b/deploy-manage/reference-architectures.md index df943cdc3..c20c65ae3 100644 --- a/deploy-manage/reference-architectures.md +++ b/deploy-manage/reference-architectures.md @@ -2,11 +2,12 @@ mapped_pages: - https://www.elastic.co/guide/en/reference-architectures/current/reference-architectures-overview.html - https://www.elastic.co/guide/en/reference-architectures/current/index.html -applies: - stack: all - hosted: all - ece: all - eck: all +applies_to: + deployment: + self: all + ess: all + ece: all + eck: all --- # Reference architectures [reference-architectures-overview] diff --git a/deploy-manage/reference-architectures/hotfrozen-high-availability.md b/deploy-manage/reference-architectures/hotfrozen-high-availability.md index ce3f7a864..e4883a2cd 100644 --- a/deploy-manage/reference-architectures/hotfrozen-high-availability.md +++ b/deploy-manage/reference-architectures/hotfrozen-high-availability.md @@ -1,11 +1,12 @@ --- mapped_pages: - https://www.elastic.co/guide/en/reference-architectures/current/hot-frozen-architecture.html -applies: - stack: all - hosted: all - ece: all - eck: all +applies_to: + deployment: + self: all + ess: all + ece: all + eck: all --- # Hot/Frozen - High Availability [hot-frozen-architecture] @@ -48,14 +49,14 @@ We use an Availability Zone (AZ) concept in the architecture above. When running The diagram illustrates an {{es}} cluster deployed across 3 availability zones (AZ). For production we recommend a minimum of 2 availability zones and 3 availability zones for mission critical applications. See [Plan for production](/deploy-manage/production-guidance/plan-for-production-elastic-cloud.md) for more details. A cluster that is running in {{ecloud}} that has data nodes in only two AZs will create a third master-eligible node in a third AZ. High availability cannot be achieved without three zones for any distributed computing technology. -The number of data nodes shown for each tier (hot and frozen) is illustrative and would be scaled up depending on ingest volume and retention period. Hot nodes contain both primary and replica shards. 
By default, primary and replica shards are always guaranteed to be in different availability zones in {{ess}}, but when self-deploying [shard allocation awareness](../distributed-architecture/shard-allocation-relocation-recovery/shard-allocation-awareness.md) would need to be configured. Frozen nodes act as a large high-speed cache and retrieve data from the snapshot store as needed. +The number of data nodes shown for each tier (hot and frozen) is illustrative and would be scaled up depending on ingest volume and retention period. Hot nodes contain both primary and replica shards. By default, primary and replica shards are always guaranteed to be in different availability zones in {{ech}}, but when self-deploying [shard allocation awareness](../distributed-architecture/shard-allocation-relocation-recovery/shard-allocation-awareness.md) would need to be configured. Frozen nodes act as a large high-speed cache and retrieve data from the snapshot store as needed. Machine learning nodes are optional but highly recommended for large scale time series use cases since the amount of data quickly becomes too difficult to analyze. Applying techniques such as machine learning based anomaly detection or Search AI with large language models helps to dramatically speed up problem identification and resolution. ## Recommended hardware specifications [hot-frozen-hardware] -With {{ech}}, you can deploy clusters in AWS, Azure, and Google Cloud. Available hardware types and configurations vary across all three cloud providers but each provides instance types that meet our recommendations for the node types used in this architecture. For more details on these instance types, see our documentation on {{ech}} hardware for [AWS](asciidocalypse://docs/cloud/docs/reference/cloud/cloud-hosted/aws-default.md), [Azure](asciidocalypse://docs/cloud/docs/reference/cloud/cloud-hosted/azure-default.md), and [GCP](asciidocalypse://docs/cloud/docs/reference/cloud/cloud-hosted/gcp-default-provider.md). The **Physical** column below is guidance, based on the cloud node types, when self-deploying {{es}} in your own data center. +With {{ech}}, you can deploy clusters in AWS, Azure, and Google Cloud. Available hardware types and configurations vary across all three cloud providers but each provides instance types that meet our recommendations for the node types used in this architecture. For more details on these instance types, see our documentation on {{ech}} hardware for [AWS](asciidocalypse://docs/cloud/docs/reference/cloud-hosted/aws-default.md), [Azure](asciidocalypse://docs/cloud/docs/reference/cloud-hosted/azure-default.md), and [GCP](asciidocalypse://docs/cloud/docs/reference/cloud-hosted/gcp-default-provider.md). The **Physical** column below is guidance, based on the cloud node types, when self-deploying {{es}} in your own data center. In the links provided above, Elastic has performance tested hardware for each of the cloud providers to find the optimal hardware for each node type. We use ratios to represent the best mix of CPU, RAM, and disk for each type. In some cases the CPU to RAM ratio is key, in others the disk to memory ratio and type of disk is critical. Significantly deviating from these ratios may seem like a way to save on hardware costs, but may result in an {{es}} cluster that does not scale and perform well. 
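For the self-deployed case mentioned above, shard allocation awareness is a small `elasticsearch.yml` change; a sketch, assuming a custom node attribute named `zone` that is set to a different value on the nodes in each availability zone:

```yaml
# Sketch only: spread primary and replica shards across availability zones.
node.attr.zone: zone-a                                 # use zone-b, zone-c, ... on nodes in the other zones
cluster.routing.allocation.awareness.attributes: zone
```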
@@ -92,7 +93,7 @@ This table shows our specific recommendations for nodes in a Hot/Frozen architec **Kibana:** -* If self-deploying outside of {{ess}}, ensure that {{kib}} is configured for [high availability](/deploy-manage/production-guidance/kibana-in-production-environments.md#high-availability). +* If self-deploying outside of {{ech}}, ensure that {{kib}} is configured for [high availability](/deploy-manage/production-guidance/kibana-in-production-environments.md#high-availability). ## How many nodes of each do you need? [hot-frozen-estimate] @@ -110,5 +111,5 @@ You can [contact us](https://www.elastic.co/contact) for an estimate and recomme ## Resources and references [hot-frozen-resources] * [{{es}} - Get ready for production](/deploy-manage/production-guidance/getting-ready-for-production-elasticsearch.md) -* [{{ess}} - Preparing a deployment for production](/deploy-manage/deploy/elastic-cloud/cloud-hosted.md) +* [{{ech}} - Preparing a deployment for production](/deploy-manage/deploy/elastic-cloud/cloud-hosted.md) * [Size your shards](/deploy-manage/production-guidance/optimize-performance/size-shards.md) diff --git a/deploy-manage/remote-clusters.md b/deploy-manage/remote-clusters.md index d4ca57e75..a8fc77f59 100644 --- a/deploy-manage/remote-clusters.md +++ b/deploy-manage/remote-clusters.md @@ -1,9 +1,40 @@ -# Remote clusters +--- +applies_to: + deployment: + ece: ga + eck: ga + ess: ga + self: ga + serverless: unavailable +--- -% What needs to be done: Write from scratch +# Remote clusters [remote-clusters] -% GitHub issue: https://github.com/elastic/docs-projects/issues/345 +By setting up **remote clusters**, you can connect an {{es}} cluster to other {{es}} clusters. Remote clusters can be located in different data centers, geographic regions, and run on a different type of environment: {{ech}}, {{ece}}, {{eck}}, or self-managed. -% Scope notes: "Landing page for cross cluster comms, used by CCS and CCR. -We will cover here the raw configuration at Elasticsearch level and the docs to enable remote clusters in ESS / ECE / ECK. -We can include links to the use cases of remote clusters, such as CCR and CCS." +Remote clusters are especially useful in two cases: + +- **Cross-cluster replication** + With [cross-cluster replication](/deploy-manage/tools/cross-cluster-replication.md), or CCR, you ingest data to an index on a remote cluster. This leader index is replicated to one or more read-only follower indices on your local cluster. Creating a multi-cluster architecture with cross-cluster replication enables you to configure disaster recovery, bring data closer to your users, or establish a centralized reporting cluster to process reports locally. + +- **Cross-cluster search** + [Cross-cluster search](/solutions/search/cross-cluster-search.md), or CCS, enables you to run a search request against one or more remote clusters. This capability provides each region with a global view of all clusters, allowing you to send a search request from a local cluster and return results from all connected remote clusters. For full {{ccs}} capabilities, the local and remote cluster must be on the same [subscription level](https://www.elastic.co/subscriptions). + +::::{note} about terminology +In the case of remote clusters, the {{es}} cluster or deployment initiating the connection and requests is often referred to as the **local cluster**, while the {{es}} cluster or deployment receiving the requests is referred to as the **remote cluster**. 
+:::: + +## Setup + +Depending on the environment the local and remote clusters are deployed on and the security model you wish to use, the exact details needed to add a remote cluster vary but generally follow the same path: + +1. **Configure trust between clusters.** In the settings of the local deployment or cluster, configure the trust security model that your remote connections will use to access the remote cluster. This step involves specifying API keys or certificates retrieved from the remote clusters. + +2. **Establish the connection.** In {{kib}} on the local cluster, finalize the connection by specifying each remote cluster's details. + +Find the instructions with details on the supported security models and available connection modes for your specific scenario: + +- [Remote clusters with {{ech}}](remote-clusters/ec-enable-ccs.md) +- [Remote clusters with {{ece}}](remote-clusters/ece-enable-ccs.md) +- [Remote clusters with {{eck}}](remote-clusters/eck-remote-clusters.md) +- [Remote clusters with self-managed installations](remote-clusters/remote-clusters-self-managed.md) \ No newline at end of file diff --git a/deploy-manage/remote-clusters/_snippets/remote-cluster-certificate-compatibility.md b/deploy-manage/remote-clusters/_snippets/remote-cluster-certificate-compatibility.md new file mode 100644 index 000000000..50cf4b529 --- /dev/null +++ b/deploy-manage/remote-clusters/_snippets/remote-cluster-certificate-compatibility.md @@ -0,0 +1,27 @@ +:::::{dropdown} Version compatibility table + +* Any node can communicate with another node on the same major version. For example, 9.0 can talk to any 9.x node. +* Version compatibility is symmetric, meaning that if 7.16 can communicate with 8.0, 8.0 can also communicate with 7.16. The following table depicts version compatibility between local and remote nodes. 
+ +| | | +| --- | --- | +| | Local cluster | +| Remote cluster | 5.0–5.5 | 5.6 | 6.0–6.6 | 6.7 | 6.8 | 7.0 | 7.1–7.16 | 7.17 | 8.0–9.0 | +| 5.0–5.5 | ![Yes](https://doc-icons.s3.us-east-2.amazonaws.com/icon-yes.png "") | ![Yes](https://doc-icons.s3.us-east-2.amazonaws.com/icon-yes.png "") | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | +| 5.6 | ![Yes](https://doc-icons.s3.us-east-2.amazonaws.com/icon-yes.png "") | ![Yes](https://doc-icons.s3.us-east-2.amazonaws.com/icon-yes.png "") | ![Yes](https://doc-icons.s3.us-east-2.amazonaws.com/icon-yes.png "") | ![Yes](https://doc-icons.s3.us-east-2.amazonaws.com/icon-yes.png "") | ![Yes](https://doc-icons.s3.us-east-2.amazonaws.com/icon-yes.png "") | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | +| 6.0–6.6 | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | ![Yes](https://doc-icons.s3.us-east-2.amazonaws.com/icon-yes.png "") | ![Yes](https://doc-icons.s3.us-east-2.amazonaws.com/icon-yes.png "") | ![Yes](https://doc-icons.s3.us-east-2.amazonaws.com/icon-yes.png "") | ![Yes](https://doc-icons.s3.us-east-2.amazonaws.com/icon-yes.png "") | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | +| 6.7 | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | ![Yes](https://doc-icons.s3.us-east-2.amazonaws.com/icon-yes.png "") | ![Yes](https://doc-icons.s3.us-east-2.amazonaws.com/icon-yes.png "") | ![Yes](https://doc-icons.s3.us-east-2.amazonaws.com/icon-yes.png "") | ![Yes](https://doc-icons.s3.us-east-2.amazonaws.com/icon-yes.png "") | ![Yes](https://doc-icons.s3.us-east-2.amazonaws.com/icon-yes.png "") | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | +| 6.8 | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | ![Yes](https://doc-icons.s3.us-east-2.amazonaws.com/icon-yes.png "") | ![Yes](https://doc-icons.s3.us-east-2.amazonaws.com/icon-yes.png "") | ![Yes](https://doc-icons.s3.us-east-2.amazonaws.com/icon-yes.png "") | ![Yes](https://doc-icons.s3.us-east-2.amazonaws.com/icon-yes.png "") | ![Yes](https://doc-icons.s3.us-east-2.amazonaws.com/icon-yes.png "") | ![Yes](https://doc-icons.s3.us-east-2.amazonaws.com/icon-yes.png "") | ![Yes](https://doc-icons.s3.us-east-2.amazonaws.com/icon-yes.png "") | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | +| 7.0 | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") 
| ![Yes](https://doc-icons.s3.us-east-2.amazonaws.com/icon-yes.png "") | ![Yes](https://doc-icons.s3.us-east-2.amazonaws.com/icon-yes.png "") | ![Yes](https://doc-icons.s3.us-east-2.amazonaws.com/icon-yes.png "") | ![Yes](https://doc-icons.s3.us-east-2.amazonaws.com/icon-yes.png "") | ![Yes](https://doc-icons.s3.us-east-2.amazonaws.com/icon-yes.png "") | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | +| 7.1–7.16 | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | ![Yes](https://doc-icons.s3.us-east-2.amazonaws.com/icon-yes.png "") | ![Yes](https://doc-icons.s3.us-east-2.amazonaws.com/icon-yes.png "") | ![Yes](https://doc-icons.s3.us-east-2.amazonaws.com/icon-yes.png "") | ![Yes](https://doc-icons.s3.us-east-2.amazonaws.com/icon-yes.png "") | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | +| 7.17 | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | ![Yes](https://doc-icons.s3.us-east-2.amazonaws.com/icon-yes.png "") | ![Yes](https://doc-icons.s3.us-east-2.amazonaws.com/icon-yes.png "") | ![Yes](https://doc-icons.s3.us-east-2.amazonaws.com/icon-yes.png "") | ![Yes](https://doc-icons.s3.us-east-2.amazonaws.com/icon-yes.png "") | ![Yes](https://doc-icons.s3.us-east-2.amazonaws.com/icon-yes.png "") | +| 8.0–9.0 | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | ![Yes](https://doc-icons.s3.us-east-2.amazonaws.com/icon-yes.png "") | ![Yes](https://doc-icons.s3.us-east-2.amazonaws.com/icon-yes.png "") | + + +::::{important} +Elastic only supports {{ccs}} on a subset of these configurations. See [Supported {{ccs}} configurations](../../../solutions/search/cross-cluster-search.md#ccs-supported-configurations). +:::: + +::::: + + diff --git a/deploy-manage/remote-clusters/ec-edit-remove-trusted-environment.md b/deploy-manage/remote-clusters/ec-edit-remove-trusted-environment.md index 88b085094..a91d17c71 100644 --- a/deploy-manage/remote-clusters/ec-edit-remove-trusted-environment.md +++ b/deploy-manage/remote-clusters/ec-edit-remove-trusted-environment.md @@ -1,4 +1,7 @@ --- +applies_to: + deployment: + ess: ga mapped_pages: - https://www.elastic.co/guide/en/cloud/current/ec-edit-remove-trusted-environment.html --- @@ -12,7 +15,7 @@ From a deployment’s **Security** page, you can manage trusted environments tha * You want to remove or update the access level granted by a cross-cluster API key. 
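Before removing or editing a trusted environment, it can help to check which remote connections the local deployment currently has and whether they are active. A minimal sketch using the remote info API; the endpoint and credentials (`$ES_URL`, `$ES_API_KEY`) are placeholders for your own deployment's {{es}} URL and an API key with sufficient privileges:

```sh
# List the remote clusters configured on the local deployment, including the
# connection mode and whether each one is currently connected.
curl -X GET "$ES_URL/_remote/info" \
  -H "Authorization: ApiKey $ES_API_KEY"
```

Each entry in the response is keyed by the cluster alias and reports, among other fields, `mode` and `connected`, which makes it easy to confirm the effect of the changes described below.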
-## Remove a trusted environment [ec_remove_a_trusted_environment] +## Remove a certificate-based trusted environment [ec_remove_a_trusted_environment] By removing a trusted environment, this deployment will no longer be able to establish remote connections using certificate trust to clusters of that environment. The remote environment will also no longer be able to connect to this deployment using certificate trust. @@ -25,11 +28,11 @@ With this method, you can only remove trusted environments relying exclusively o 2. In the list of trusted environments, locate the one you want to remove. 3. Remove it using the corresponding `delete` icon. - :::{image} ../../images/cloud-delete-trust-environment.png - :alt: button for deleting a trusted environment - ::: + :::{image} ../../images/cloud-delete-trust-environment.png + :alt: button for deleting a trusted environment + ::: -4. In Kibana, go to **Stack Management** > **Remote Clusters**. +4. In {{kib}}, go to **Stack Management** > **Remote Clusters**. 5. In the list of existing remote clusters, delete the ones corresponding to the trusted environment you removed earlier. @@ -39,14 +42,14 @@ With this method, you can only remove trusted environments relying exclusively o 2. In the list of trusted environments, locate the one you want to edit. 3. Open its details by selecting the `Edit` icon. - :::{image} ../../images/cloud-edit-trust-environment.png - :alt: button for editing a trusted environment - ::: + :::{image} ../../images/cloud-edit-trust-environment.png + :alt: button for editing a trusted environment + ::: 4. Edit the trust configuration for that environment: - * From the **Trust level** tab, you can add or remove trusted deployments. - * From the **Environment settings** tab, you can manage the certificates and the label of the environment. + * From the **Trust level** tab, you can add or remove trusted deployments. + * From the **Environment settings** tab, you can manage the certificates and the label of the environment. 5. Save your changes. @@ -56,11 +59,11 @@ With this method, you can only remove trusted environments relying exclusively o This section describes the steps to change the API key used for an existing remote connection. For example, if the previous key expired and you need to rotate it with a new one. ::::{note} -If you need to update the permissions granted by a cross-cluster API key for a remote connection, you only need to update the privileges granted by the API key directly in Kibana. +If you need to update the permissions granted by a cross-cluster API key for a remote connection, you only need to update the privileges granted by the API key directly in {{kib}}. :::: -1. On the deployment you will use as remote, use the [{{es}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-create-cross-cluster-api-key) or [Kibana](../api-keys/elasticsearch-api-keys.md) to create a cross-cluster API key with the appropriate permissions. Configure it with access to the indices you want to use for {{ccs}} or {{ccr}}. +1. On the deployment you will use as remote, use the [{{es}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-create-cross-cluster-api-key) or [{{kib}}](../api-keys/elasticsearch-api-keys.md) to create a cross-cluster API key with the appropriate permissions. Configure it with access to the indices you want to use for {{ccs}} or {{ccr}}. 2. Copy the encoded key (`encoded` in the response) to a safe location. You will need it in the next steps. 
3. Go to the **Security** page of the local deployment and locate the **Remote connections** section. 4. Locate the API key currently used for connecting to the remote cluster, copy its current alias, and delete it. @@ -68,16 +71,14 @@ If you need to update the permissions granted by a cross-cluster API key for a r * For the **Setting name**, enter the same alias that was used for the previous key. - ::::{note} - If you use a different alias, you also need to re-create the remote cluster in Kibana with a **Name** that matches the new alias. - :::: + ::::{note} + If you use a different alias, you also need to re-create the remote cluster in {{kib}} with a **Name** that matches the new alias. + :::: - * For the **Secret**, paste the encoded cross-cluster API key. + * For the **Secret**, paste the encoded cross-cluster API key, then click **Add** to save the API key to the keystore. - 1. Click **Add** to save the API key to the keystore. +6. Restart the local deployment to reload the keystore with its new setting. To do that, go to the deployment’s main page (named after your deployment’s name), locate the **Actions** menu, and select **Restart {{es}}**.

-6. Restart the local deployment to reload the keystore with its new setting. To do that, go to the deployment’s main page (named after your deployment’s name), locate the **Actions** menu, and select **Restart Elasticsearch**.
- - ::::{note} - If the local deployment runs on version 8.13 or greater, you no longer need to perform this step because the keystore is reloaded automatically with the new API keys. - :::: + ::::{note} + If the local deployment runs on version 8.14 or greater, you no longer need to perform this step because the keystore is reloaded automatically with the new API keys. + :::: diff --git a/deploy-manage/remote-clusters/ec-enable-ccs-for-eck.md b/deploy-manage/remote-clusters/ec-enable-ccs-for-eck.md index f3207d452..25ffa5a6e 100644 --- a/deploy-manage/remote-clusters/ec-enable-ccs-for-eck.md +++ b/deploy-manage/remote-clusters/ec-enable-ccs-for-eck.md @@ -1,11 +1,16 @@ --- +applies_to: + deployment: + ess: ga + eck: ga +navigation_title: With {{eck}} mapped_pages: - https://www.elastic.co/guide/en/cloud/current/ec-enable-ccs-for-eck.html --- -# Enabling CCS/R between Elasticsearch Service and ECK [ec-enable-ccs-for-eck] +# Remote clusters between {{ech}} and ECK [ec-enable-ccs-for-eck] -These steps describe how to configure remote clusters between an {{es}} cluster in Elasticsearch Service and an {{es}} cluster running within [Elastic Cloud on Kubernetes (ECK)](/deploy-manage/deploy/cloud-on-k8s.md). Once that’s done, you’ll be able to [run CCS queries from {{es}}](/solutions/search/cross-cluster-search.md) or [set up CCR](/deploy-manage/tools/cross-cluster-replication/set-up-cross-cluster-replication.md). +These steps describe how to configure remote clusters between an {{es}} cluster in {{ech}} and an {{es}} cluster running within [{{eck}} (ECK)](/deploy-manage/deploy/cloud-on-k8s.md). Once that’s done, you’ll be able to [run CCS queries from {{es}}](/solutions/search/cross-cluster-search.md) or [set up CCR](/deploy-manage/tools/cross-cluster-replication/set-up-cross-cluster-replication.md). ## Establish trust between two clusters [ec_establish_trust_between_two_clusters] @@ -13,7 +18,7 @@ These steps describe how to configure remote clusters between an {{es}} cluster The first step is to establish trust between the two clusters. -### Establish trust in the Elasticsearch Service cluster [ec_establish_trust_in_the_elasticsearch_service_cluster] +### Establish trust in the {{ech}} cluster [ec_establish_trust_in_the_elasticsearch_service_cluster] 1. Save the ECK CA certificate to a file. For a cluster named `quickstart`, run: @@ -22,7 +27,7 @@ The first step is to establish trust between the two clusters. ``` -1. Update the trust settings for the Elasticsearch Service deployment. Follow the steps provided in [Access clusters of a self-managed environment](ec-remote-cluster-self-managed.md), and specifically the first three steps in **Specify the deployments trusted to be used as remote clusters** using TLS certificate as security model. +1. Update the trust settings for the {{ech}} deployment. Follow the steps provided in [Access clusters of a self-managed environment](ec-remote-cluster-self-managed.md), and specifically the first three steps in **Specify the deployments trusted to be used as remote clusters** using TLS certificate as security model. * Use the certificate file saved in the first step. * Select the {{ecloud}} pattern and enter `default.es.local` for the `Scope ID`. @@ -32,7 +37,7 @@ The first step is to establish trust between the two clusters. ### Establish trust in the ECK cluster [ec_establish_trust_in_the_eck_cluster] -1. Upload the Elasticsearch Service certificate (that you downloaded in the last step of the previous section) as a Kubernetes secret. +1. 
Upload the {{ech}} certificate (that you downloaded in the last step of the previous section) as a Kubernetes secret. ```sh kubectl create secret generic ce-aws-cert --from-file= @@ -73,16 +78,16 @@ The first step is to establish trust between the two clusters. -## Setup CCS/R [ec_setup_ccsr] +## Set up CCS/R [ec_setup_ccsr] -Now that trust has been established, you can set up CCS/R from the ECK cluster to the Elasticsearch Service cluster or from the Elasticsearch Service cluster to the ECK cluster. +Now that trust has been established, you can set up CCS/R from the ECK cluster to the {{ech}} cluster or from the {{ech}} cluster to the ECK cluster. -### ECK Cluster to Elasticsearch Service cluster [ec_eck_cluster_to_elasticsearch_service_cluster] +### ECK Cluster to {{ech}} cluster [ec_eck_cluster_to_elasticsearch_service_cluster] Configure the ECK cluster [using certificate based authentication](ec-remote-cluster-self-managed.md). -### Elasticsearch Service cluster to ECK Cluster [ec_elasticsearch_service_cluster_to_eck_cluster] +### {{ech}} cluster to ECK Cluster [ec_elasticsearch_service_cluster_to_eck_cluster] Follow the steps outlined in the [ECK documentation](/deploy-manage/remote-clusters/eck-remote-clusters.md#k8s_configure_the_remote_cluster_connection_through_the_elasticsearch_rest_api). diff --git a/deploy-manage/remote-clusters/ec-enable-ccs.md b/deploy-manage/remote-clusters/ec-enable-ccs.md index ceb1506f9..94c1335cd 100644 --- a/deploy-manage/remote-clusters/ec-enable-ccs.md +++ b/deploy-manage/remote-clusters/ec-enable-ccs.md @@ -1,57 +1,59 @@ --- +applies_to: + deployment: + ess: ga +navigation_title: Elastic Cloud Hosted mapped_pages: - https://www.elastic.co/guide/en/cloud/current/ec-enable-ccs.html --- -# Enable cross-cluster search and cross-cluster replication [ec-enable-ccs] +# Remote clusters with {{ech}} [ec-enable-ccs] -[Cross-cluster search (CCS)](/solutions/search/cross-cluster-search.md) allows you to configure multiple remote clusters across different locations and to enable federated search queries across all of the configured remote clusters. +You can configure an {{ech}} deployment to remotely access or (be accessed by) a cluster from: -[Cross-cluster replication (CCR)](/deploy-manage/tools/cross-cluster-replication.md) allows you to replicate indices across multiple remote clusters regardless of where they’re located. This provides tremendous benefit in scenarios of disaster recovery or data locality. - -These remote clusters could be: - -* Another {{es}} cluster of your {{ecloud}} organization across any region or cloud provider (AWS, GCP, Azure…​) -* An {{es}} cluster of another {{ecloud}} organization -* An {{es}} cluster in an {{ece}} installation -* Any other self-managed {{es}} cluster +* Another {{ech}} deployment of your {{ecloud}} organization, across any region or cloud provider (AWS, GCP, Azure…​) +* An {{ech}} deployment of another {{ecloud}} organization +* A deployment in an {{ece}} installation +* A deployment in an {{eck}} installation +* A self-managed installation. ## Prerequisites [ec-ccs-ccr-prerequisites] To use CCS or CCR, your deployments must meet the following criteria: -* Local and remote clusters must be in compatible versions. Review the [{{es}} version compatibility](/deploy-manage/remote-clusters/remote-clusters-cert.md#remote-clusters-prerequisites-cert) table. +* The local and remote clusters must run on compatible versions of {{es}}. Review the version compatibility table. 
+ + :::{include} _snippets/remote-cluster-certificate-compatibility.md + ::: + +* If your deployment was created before February 2021, the **Remote clusters** page in {{kib}} must be enabled manually from the **Security** page of your deployment, by selecting **Enable CCR** under **Trust management**. + +## Set up remote clusters with {{ech}} The steps, information, and authentication method required to configure CCS and CCR can vary depending on where the clusters you want to use as remote are hosted. -* Connect remotely to other clusters from your Elasticsearch Service deployments +* Connect remotely to other clusters from your {{ech}} deployments - * [Access other deployments of the same Elasticsearch Service organization](ec-remote-cluster-same-ess.md) - * [Access deployments of a different Elasticsearch Service organization](ec-remote-cluster-other-ess.md) + * [Access other deployments of the same {{ecloud}} organization](ec-remote-cluster-same-ess.md) + * [Access deployments of a different {{ecloud}} organization](ec-remote-cluster-other-ess.md) * [Access deployments of an {{ECE}} environment](ec-remote-cluster-ece.md) * [Access clusters of a self-managed environment](ec-remote-cluster-self-managed.md) * [Access deployments of an ECK environment](ec-enable-ccs-for-eck.md) -* Use clusters from your Elasticsearch Service deployments as remote - - * [From another deployment of your Elasticsearch Service organization](ec-remote-cluster-same-ess.md) - * [From a deployment of another Elasticsearch Service organization](ec-remote-cluster-other-ess.md) - * [From an ECE deployment](/deploy-manage/remote-clusters/ece-enable-ccs.md) - * [From a self-managed cluster](/deploy-manage/remote-clusters/remote-clusters-self-managed.md) - - - -## Enable CCR and the Remote Clusters UI in Kibana [ec-enable-ccr] +* Use clusters from your {{ech}} deployments as remote -If your deployment was created before February 2021, CCR won’t be enabled by default and you won’t find the Remote Clusters UI in Kibana even though your deployment meets all the [criteria](#ec-ccs-ccr-prerequisites). + * [From another deployment of your {{ecloud}} organization](ec-remote-cluster-same-ess.md) + * [From a deployment of another {{ecloud}} organization](ec-remote-cluster-other-ess.md) + * [From an ECE deployment](ece-remote-cluster-ece-ess.md) + * [From a self-managed cluster](remote-clusters-self-managed.md) + * [From an ECK environment](ec-enable-ccs-for-eck.md) -To enable these features, go to the **Security** page of your deployment and under **Trust management** select **Enable CCR**. ## Remote clusters and traffic filtering [ec-ccs-ccr-traffic-filtering] ::::{note} -Traffic filtering isn’t supported for cross-cluster operations initiated from an {{ece}} environment to a remote {{ess}} deployment. +Traffic filtering isn’t supported for cross-cluster operations initiated from an {{ece}} environment to a remote {{ech}} deployment. :::: @@ -60,8 +62,8 @@ For remote clusters configured using TLS certificate authentication, [traffic fi Traffic filtering for remote clusters supports 2 methods: * [Filtering by IP addresses and Classless Inter-Domain Routing (CIDR) masks](../security/ip-traffic-filtering.md) -* Filtering by Organization or Elasticsearch cluster ID with a Remote cluster type filter. 
You can configure this type of filter from the **Features** > **Traffic filters** page of your organization or using the [Elasticsearch Service API](https://www.elastic.co/docs/api/doc/cloud) and apply it from each deployment’s **Security** page. +* Filtering by Organization or {{es}} cluster ID with a Remote cluster type filter. You can configure this type of filter from the **Features** > **Traffic filters** page of your organization or using the [{{ecloud}} RESTful API](https://www.elastic.co/docs/api/doc/cloud) and apply it from each deployment’s **Security** page. ::::{note} -When setting up traffic filters for a remote connection to an {{ece}} environment, you also need to upload the region’s TLS certificate of the local cluster to the {{ece}} environment’s proxy. You can find that region’s TLS certificate in the Security page of any deployment of the environment initiating the remote connection. +When setting up traffic filters for a remote connection to an {{ece}} environment, you also need to upload the region’s TLS certificate of the local cluster to the {{ece}} environment’s proxy. You can find that region’s TLS certificate in the **Security** page of any deployment of the environment initiating the remote connection. :::: diff --git a/deploy-manage/remote-clusters/ec-migrate-ccs.md b/deploy-manage/remote-clusters/ec-migrate-ccs.md index a4dbf6183..1a27120a6 100644 --- a/deploy-manage/remote-clusters/ec-migrate-ccs.md +++ b/deploy-manage/remote-clusters/ec-migrate-ccs.md @@ -1,11 +1,15 @@ --- +applies_to: + deployment: + ess: ga mapped_pages: - https://www.elastic.co/guide/en/cloud/current/ec-migrate-ccs.html +navigation_title: "Migrate the CCS deployment template" --- # Migrate the cross-cluster search deployment template [ec-migrate-ccs] -The cross-cluster search deployment template is now deprecated and has been removed from the Elasticsearch Service user console. You no longer need to use the dedicated cross-cluster template to search across deployments. Instead, you can now use any template to [configure remote clusters](ec-enable-ccs.md) and search across them. Existing deployments created using this template are not affected, but they are required to migrate to another template before upgrading to version 8.x. +The cross-cluster search deployment template is now deprecated and has been removed from the {{ecloud}} Console. You no longer need to use the dedicated cross-cluster template to search across deployments. Instead, you can now use any template to [configure remote clusters](ec-enable-ccs.md) and search across them. Existing deployments created using this template are not affected, but they are required to migrate to another template before upgrading to {{stack}} 8.x. There are two different approaches to do this migration: @@ -17,14 +21,14 @@ There are two different approaches to do this migration: You can use a PUT request to update your deployment, changing both the deployment template ID and the instances required by the new template. -1. First, choose the new template you want to use and obtain its ID. This template ID can be obtained from the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body) **Create Deployment** page by selecting **Equivalent API request** and inspecting the result for the field `deployment_template`. For example, we are going to use the "Storage optimized" deployment template, and in our GCP region the id is `gcp-storage-optimized-v5`. +1. First, choose the new template you want to use and obtain its ID. 
This template ID can be obtained from the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body) **Create Deployment** page by selecting **Equivalent API request** and inspecting the result for the field `deployment_template`. For example, we are going to use the "Storage optimized" deployment template, and in our GCP region the id is `gcp-storage-optimized-v5`. - You can also find the template in the [list of templates available for each region](asciidocalypse://docs/cloud/docs/reference/cloud/cloud-hosted/ec-regions-templates-instances.md). + You can also find the template in the [list of templates available for each region](asciidocalypse://docs/cloud/docs/reference/cloud-hosted/ec-regions-templates-instances.md). - :::{image} ../../images/cloud-ec-migrate-deployment-template(2).png - :alt: Deployment Template ID - :class: screenshot - ::: + :::{image} ../../images/cloud-ec-migrate-deployment-template(2).png + :alt: Deployment Template ID + :class: screenshot + ::: 2. Make a request to update your deployment with two changes: @@ -32,224 +36,224 @@ You can use a PUT request to update your deployment, changing both the deploymen 2. Change the cluster topology to match the new template that your deployment will migrate to. -```sh -curl -H 'Content-Type: application/json' -X PUT -H "Authorization: ApiKey $EC_API_KEY" https://api.elastic-cloud.com/api/v1/deployments/$DEPLOYMENT_ID -d "{ - "resources": { - "integrations_server": [ - { - "elasticsearch_cluster_ref_id": "main-elasticsearch", - "region": "gcp-us-central1", - "plan": { - "cluster_topology": [ - { - "instance_configuration_id": "gcp.integrationsserver.n2.68x32x45.2", - "zone_count": 1, - "size": { - "resource": "memory", - "value": 1024 - } - } - ], - "integrations_server": { - "version": "8.7.1" - } - }, - "ref_id": "main-integrations_server" - } - ], - "elasticsearch": [ - { - "region": "gcp-us-central1", - "settings": { - "dedicated_masters_threshold": 6 - }, - "plan": { - "cluster_topology": [ - { - "zone_count": 2, - "elasticsearch": { - "node_attributes": { - "data": "hot" + ```sh + curl -H 'Content-Type: application/json' -X PUT -H "Authorization: ApiKey $EC_API_KEY" https://api.elastic-cloud.com/api/v1/deployments/$DEPLOYMENT_ID -d "{ + "resources": { + "integrations_server": [ + { + "elasticsearch_cluster_ref_id": "main-elasticsearch", + "region": "gcp-us-central1", + "plan": { + "cluster_topology": [ + { + "instance_configuration_id": "gcp.integrationsserver.n2.68x32x45.2", + "zone_count": 1, + "size": { + "resource": "memory", + "value": 1024 + } } - }, - "instance_configuration_id": "gcp.es.datahot.n2.68x10x45", - "node_roles": [ - "master", - "ingest", - "transform", - "data_hot", - "remote_cluster_client", - "data_content" ], - "id": "hot_content", - "size": { - "resource": "memory", - "value": 8192 + "integrations_server": { + "version": "8.7.1" } }, - { - "zone_count": 2, - "elasticsearch": { - "node_attributes": { - "data": "warm" - } - }, - "instance_configuration_id": "gcp.es.datawarm.n2.68x10x190", - "node_roles": [ - "data_warm", - "remote_cluster_client" - ], - "id": "warm", - "size": { - "resource": "memory", - "value": 0 - } + "ref_id": "main-integrations_server" + } + ], + "elasticsearch": [ + { + "region": "gcp-us-central1", + "settings": { + "dedicated_masters_threshold": 6 }, - { - "zone_count": 1, - "elasticsearch": { - "node_attributes": { - "data": "cold" + "plan": { + "cluster_topology": [ + { + "zone_count": 2, + "elasticsearch": { + "node_attributes": { + "data": "hot" + } + 
}, + "instance_configuration_id": "gcp.es.datahot.n2.68x10x45", + "node_roles": [ + "master", + "ingest", + "transform", + "data_hot", + "remote_cluster_client", + "data_content" + ], + "id": "hot_content", + "size": { + "resource": "memory", + "value": 8192 + } + }, + { + "zone_count": 2, + "elasticsearch": { + "node_attributes": { + "data": "warm" + } + }, + "instance_configuration_id": "gcp.es.datawarm.n2.68x10x190", + "node_roles": [ + "data_warm", + "remote_cluster_client" + ], + "id": "warm", + "size": { + "resource": "memory", + "value": 0 + } + }, + { + "zone_count": 1, + "elasticsearch": { + "node_attributes": { + "data": "cold" + } + }, + "instance_configuration_id": "gcp.es.datacold.n2.68x10x190", + "node_roles": [ + "data_cold", + "remote_cluster_client" + ], + "id": "cold", + "size": { + "resource": "memory", + "value": 0 + } + }, + { + "zone_count": 1, + "elasticsearch": { + "node_attributes": { + "data": "frozen" + } + }, + "instance_configuration_id": "gcp.es.datafrozen.n2.68x10x95", + "node_roles": [ + "data_frozen" + ], + "id": "frozen", + "size": { + "resource": "memory", + "value": 0 + } + }, + { + "zone_count": 3, + "instance_configuration_id": "gcp.es.master.n2.68x32x45.2", + "node_roles": [ + "master", + "remote_cluster_client" + ], + "id": "master", + "size": { + "resource": "memory", + "value": 0 + } + }, + { + "zone_count": 2, + "instance_configuration_id": "gcp.es.coordinating.n2.68x16x45.2", + "node_roles": [ + "ingest", + "remote_cluster_client" + ], + "id": "coordinating", + "size": { + "resource": "memory", + "value": 0 + } + }, + { + "zone_count": 1, + "instance_configuration_id": "gcp.es.ml.n2.68x32x45", + "node_roles": [ + "ml", + "remote_cluster_client" + ], + "id": "ml", + "size": { + "resource": "memory", + "value": 0 + } } - }, - "instance_configuration_id": "gcp.es.datacold.n2.68x10x190", - "node_roles": [ - "data_cold", - "remote_cluster_client" ], - "id": "cold", - "size": { - "resource": "memory", - "value": 0 - } - }, - { - "zone_count": 1, "elasticsearch": { - "node_attributes": { - "data": "frozen" - } + "version": "8.7.1", + "enabled_built_in_plugins": [] }, - "instance_configuration_id": "gcp.es.datafrozen.n2.68x10x95", - "node_roles": [ - "data_frozen" - ], - "id": "frozen", - "size": { - "resource": "memory", - "value": 0 + "deployment_template": { + "id": "gcp-storage-optimized-v5" } }, - { - "zone_count": 3, - "instance_configuration_id": "gcp.es.master.n2.68x32x45.2", - "node_roles": [ - "master", - "remote_cluster_client" + "ref_id": "main-elasticsearch" + } + ], + "enterprise_search": [ + { + "elasticsearch_cluster_ref_id": "main-elasticsearch", + "region": "gcp-us-central1", + "plan": { + "cluster_topology": [ + { + "node_type": { + "connector": true, + "appserver": true, + "worker": true + }, + "instance_configuration_id": "gcp.enterprisesearch.n2.68x32x45", + "zone_count": 1, + "size": { + "resource": "memory", + "value": 2048 + } + } ], - "id": "master", - "size": { - "resource": "memory", - "value": 0 + "enterprise_search": { + "version": "8.7.1" } }, - { - "zone_count": 2, - "instance_configuration_id": "gcp.es.coordinating.n2.68x16x45.2", - "node_roles": [ - "ingest", - "remote_cluster_client" + "ref_id": "main-enterprise_search" + } + ], + "kibana": [ + { + "elasticsearch_cluster_ref_id": "main-elasticsearch", + "region": "gcp-us-central1", + "plan": { + "cluster_topology": [ + { + "instance_configuration_id": "gcp.kibana.n2.68x32x45", + "zone_count": 1, + "size": { + "resource": "memory", + "value": 1024 + } + } ], - "id": 
"coordinating", - "size": { - "resource": "memory", - "value": 0 + "kibana": { + "version": "8.7.1" } }, - { - "zone_count": 1, - "instance_configuration_id": "gcp.es.ml.n2.68x32x45", - "node_roles": [ - "ml", - "remote_cluster_client" - ], - "id": "ml", - "size": { - "resource": "memory", - "value": 0 - } - } - ], - "elasticsearch": { - "version": "8.7.1", - "enabled_built_in_plugins": [] - }, - "deployment_template": { - "id": "gcp-storage-optimized-v5" - } - }, - "ref_id": "main-elasticsearch" - } - ], - "enterprise_search": [ - { - "elasticsearch_cluster_ref_id": "main-elasticsearch", - "region": "gcp-us-central1", - "plan": { - "cluster_topology": [ - { - "node_type": { - "connector": true, - "appserver": true, - "worker": true - }, - "instance_configuration_id": "gcp.enterprisesearch.n2.68x32x45", - "zone_count": 1, - "size": { - "resource": "memory", - "value": 2048 - } - } - ], - "enterprise_search": { - "version": "8.7.1" - } - }, - "ref_id": "main-enterprise_search" - } - ], - "kibana": [ - { - "elasticsearch_cluster_ref_id": "main-elasticsearch", - "region": "gcp-us-central1", - "plan": { - "cluster_topology": [ - { - "instance_configuration_id": "gcp.kibana.n2.68x32x45", - "zone_count": 1, - "size": { - "resource": "memory", - "value": 1024 - } - } - ], - "kibana": { - "version": "8.7.1" + "ref_id": "main-kibana" } - }, - "ref_id": "main-kibana" + ] + }, + "settings": { + "autoscaling_enabled": false + }, + "name": "My deployment", + "metadata": { + "system_owned": false } - ] - }, - "settings": { - "autoscaling_enabled": false - }, - "name": "My deployment", - "metadata": { - "system_owned": false - } -}" -``` + }" + ``` `DEPLOYMENT_ID` : The ID of your deployment, as shown in the Cloud UI or obtained through the API. @@ -262,7 +266,7 @@ Note that the `ref_id` and version numbers for your resources may not be the sam ## Use a snapshot to migrate deployments that use the cross-cluster search deployment template [ec-migrate-ccs-deployment-using-snapshot] -You can make this change in the user [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body). The only drawback of this method is that it changes the URL used to access the {{es}} cluster and Kibana. +You can make this change in the user [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). The only drawback of this method is that it changes the URL used to access the {{es}} cluster and {{kib}}. 1. From the deployment menu, open the **Snapshots** page and click **Take Snapshot now**. Wait for the snapshot to finish. 2. From the main **Deployments** page, click **Create deployment**. Next to **Settings** toggle on **Restore snapshot data**, and then select your deployment and the snapshot that you created. diff --git a/deploy-manage/remote-clusters/ec-remote-cluster-ece.md b/deploy-manage/remote-clusters/ec-remote-cluster-ece.md index 31851cdbf..05def9ed7 100644 --- a/deploy-manage/remote-clusters/ec-remote-cluster-ece.md +++ b/deploy-manage/remote-clusters/ec-remote-cluster-ece.md @@ -1,9 +1,14 @@ --- +applies_to: + deployment: + ess: ga + ece: ga +navigation_title: With {{ece}} mapped_pages: - https://www.elastic.co/guide/en/cloud/current/ec-remote-cluster-ece.html --- -# Access deployments of an Elastic Cloud Enterprise environment [ec-remote-cluster-ece] +# Access deployments of an {{ece}} environment [ec-remote-cluster-ece] This section explains how to configure a deployment to connect remotely to clusters belonging to an {{ECE}} (ECE) environment. 
@@ -12,71 +17,13 @@ This section explains how to configure a deployment to connect remotely to clust Before you start, consider the security model that you would prefer to use for authenticating remote connections between clusters, and follow the corresponding steps. API key -: For deployments based on {{stack}} version 8.10 or later, you can use an API key to authenticate and authorize cross-cluster operations to a remote cluster. This model offers administrators of both the local and the remote deployment fine-grained access controls. +: For deployments based on {{stack}} 8.14 or later, you can use an API key to authenticate and authorize cross-cluster operations to a remote cluster. This model offers administrators of both the local and the remote deployment fine-grained access controls. -TLS certificate +TLS certificate (deprecated in {{stack}} 9.0.0) : This model uses mutual TLS authentication for cross-cluster operations. User authentication is performed on the local cluster and a user’s role names are passed to the remote cluster. A superuser on the local deployment gains total read access to the remote deployment, so it is only suitable for deployments that are in the same security domain. :::::::{tab-set} -::::::{tab-item} TLS certificate -#### Configuring trust with clusters of an {{ece}} environment [ec-trust-ece] - -A deployment can be configured to trust all or specific deployments in a remote ECE environment: - -1. Access the **Security** page of the deployment you want to use for cross-cluster operations. -2. Select **Remote Connections > Add trusted environment** and choose **{{ece}}**. Then click **Next**. -3. Select **Certificates** as authentication mechanism and click **Next**. -4. Enter the environment ID of the ECE environment. You can find it under Platform > Trust Management in your ECE administration UI. -5. Upload the Certificate Authority of the ECE environment. You can download it from Platform > Trust Management in your ECE administration UI. -6. Choose one of following options to configure the level of trust with the ECE environment: - - * All deployments - This deployment trusts all deployments in the ECE environment, including new deployments when they are created. - * Specific deployments - Specify which of the existing deployments you want to trust in the ECE environment. The full Elasticsearch cluster ID must be entered for each remote cluster. The Elasticsearch `Cluster ID` can be found in the deployment overview page under **Applications**. - -7. Provide a name for the trusted environment. That name will appear in the trust summary of your deployment’s Security page. -8. Select **Create trust** to complete the configuration. -9. Configure the corresponding deployments of the ECE environment to [trust this deployment](/deploy-manage/remote-clusters/ece-enable-ccs.md). You will only be able to connect 2 deployments successfully when both of them trust each other. - -Note that the environment ID and cluster IDs must be entered fully and correctly. For security reasons, no verification of the IDs is possible. If cross-environment trust does not appear to be working, double-checking the IDs is a good place to start. - -::::{dropdown} **Using the API** -You can update a deployment using the appropriate trust settings for the {{es}} payload. 
- -In order to trust a deployment with cluster id `cf659f7fe6164d9691b284ae36811be1` (NOTE: use the {{es}} cluster ID, not the deployment ID) in an ECE environment with environment ID `1053523734`, you need to update the trust settings with an additional direct trust relationship like this: - -```json -{ - "trust":{ - "accounts":[ - { - "account_id":"ec38dd0aa45f4a69909ca5c81c27138a", - "trust_all":true - } - ], - "direct": [ - { - "type" : "ECE", - "name" : "My ECE environment", - "scope_id" : "1053523734", - "certificates" : [ - { - "pem" : "-----BEGIN CERTIFICATE-----\nMIIDTzCCA...H0=\n-----END CERTIFICATE-----" - } - ], - "trust_all":false, - "trust_allowlist":[ - "cf659f7fe6164d9691b284ae36811be1" - ] - } - ] - } -} -``` - -:::: -:::::: - ::::::{tab-item} API key API key authentication enables a local cluster to authenticate itself with a remote cluster via a [cross-cluster API key](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-create-cross-cluster-api-key). The API key needs to be created by an administrator of the remote cluster. The local cluster is configured to provide this API key on each request to the remote cluster. The remote cluster verifies the API key and grants access, based on the API key’s privileges. @@ -84,35 +31,33 @@ All cross-cluster requests from the local cluster are bound by the API key’s p On the local cluster side, not every local user needs to access every piece of data allowed by the API key. An administrator of the local cluster can further configure additional permission constraints on local users so each user only gets access to the necessary remote data. Note it is only possible to further reduce the permissions allowed by the API key for individual local users. It is impossible to increase the permissions to go beyond what is allowed by the API key. -If you run into any issues, refer to [Troubleshooting](remote-clusters-troubleshooting.md). +If you run into any issues, refer to [Troubleshooting](/troubleshoot/elasticsearch/remote-clusters.md). -#### Prerequisites and limitations [ec_prerequisites_and_limitations_3] +### Prerequisites and limitations [ec_prerequisites_and_limitations_3] -* The local and remote deployments must be on version 8.12 or later. +* The local and remote deployments must be on {{stack}} 8.14 or later. * API key authentication can’t be used in combination with traffic filters. * Contrary to the certificate security model, the API key security model does not require that both local and remote clusters trust each other. -#### Create a cross-cluster API key on the remote deployment [ec_create_a_cross_cluster_api_key_on_the_remote_deployment_3] +### Create a cross-cluster API key on the remote deployment [ec_create_a_cross_cluster_api_key_on_the_remote_deployment_3] -* On the deployment you will use as remote, use the [{{es}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-create-cross-cluster-api-key) or [Kibana](../api-keys/elasticsearch-api-keys.md) to create a cross-cluster API key. Configure it with access to the indices you want to use for {{ccs}} or {{ccr}}. +* On the deployment you will use as remote, use the [{{es}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-create-cross-cluster-api-key) or [{{kib}}](../api-keys/elasticsearch-api-keys.md) to create a cross-cluster API key. Configure it with access to the indices you want to use for {{ccs}} or {{ccr}}. 
* Copy the encoded key (`encoded` in the response) to a safe location. You will need it in the next step. -#### Configure the local deployment [ec_configure_the_local_deployment] +### Configure the local deployment [ec_configure_the_local_deployment] The API key created previously will be used by the local deployment to authenticate with the corresponding set of permissions to the remote deployment. For that, you need to add the API key to the local deployment’s keystore. The steps to follow depend on whether the Certificate Authority (CA) of the remote ECE environment’s proxy or load balancing infrastructure is public or private. -**The CA is public** +::::{dropdown} The CA is public +1. Log in to the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). +2. On the home page, find your hosted deployment and select **Manage** to access it directly. Or, select **Hosted deployments** to go to the **Deployments** page to view all of your deployments. -::::{dropdown} -1. Log in to the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body). -2. Find your deployment on the home page in the Elasticsearch Service card and select **Manage** to access it directly. Or, select **Hosted deployments** to go to the deployments page to view all of your deployments. - - On the deployments page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. + On the **Deployments** page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. 3. From the deployment menu, select **Security**. 4. Locate **Remote connections** and select **Add an API key**. @@ -124,10 +69,10 @@ The steps to follow depend on whether the Certificate Authority (CA) of the remo 2. Click **Add** to save the API key to the keystore. -5. Restart the local deployment to reload the keystore with its new setting. To do that, go to the deployment’s main page (named after your deployment’s name), locate the **Actions** menu, and select **Restart Elasticsearch**.
+5. Restart the local deployment to reload the keystore with its new setting. To do that, go to the deployment’s main page (named after your deployment), locate the **Actions** menu, and select **Restart {{es}}**.
::::{note} - If the local deployment runs on version 8.13 or greater, you no longer need to perform this step because the keystore is reloaded automatically with the new API keys. + If the local deployment runs on version 8.14 or greater, you no longer need to perform this step because the keystore is reloaded automatically with the new API keys. :::: @@ -136,13 +81,11 @@ If you later need to update the remote connection with different permissions, yo :::: -**The CA is private** - -::::{dropdown} -1. Log in to the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body). -2. Find your deployment on the home page in the Elasticsearch Service card and select **Manage** to access it directly. Or, select **Hosted deployments** to go to the deployments page to view all of your deployments. +::::{dropdown} The CA is private +1. Log in to the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). +2. On the home page, find your hosted deployment and select **Manage** to access it directly. Or, select **Hosted deployments** to go to the **Deployments** page to view all of your deployments. - On the deployments page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. + On the **Deployments** page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. 3. Access the **Security** page of the deployment. 4. Select **Remote Connections > Add trusted environment** and choose **{{ece}}**. Then click **Next**. @@ -168,12 +111,12 @@ If you later need to update the remote connection with different permissions, yo :alt: Certificate to copy from the chain ::: -8. Provide a name for the trusted environment. That name will appear in the trust summary of your deployment’s Security page. +8. Provide a name for the trusted environment. That name will appear in the trust summary of your deployment’s **Security** page. 9. Select **Create trust** to complete the configuration. -10. Restart the local deployment to reload the keystore with its new setting. To do that, go to the deployment’s main page (named after your deployment’s name), locate the **Actions** menu, and select **Restart Elasticsearch**.
+10. Restart the local deployment to reload the keystore with its new setting. To do that, go to the deployment’s main page (named after your deployment), locate the **Actions** menu, and select **Restart {{es}}**.
::::{note} - If the local deployment runs on version 8.13 or greater, you no longer need to perform this step because the keystore is reloaded automatically with the new API keys. + If the local deployment runs on version 8.14 or greater, you no longer need to perform this step because the keystore is reloaded automatically with the new API keys. :::: @@ -182,56 +125,114 @@ If you later need to update the remote connection with different permissions, yo :::: :::::: +::::::{tab-item} TLS certificate (deprecated) +### Configuring trust with clusters of an {{ece}} environment [ec-trust-ece] + +A deployment can be configured to trust all or specific deployments in a remote ECE environment: + +1. Access the **Security** page of the deployment you want to use for cross-cluster operations. +2. Select **Remote Connections > Add trusted environment** and choose **{{ece}}**. Then click **Next**. +3. Select **Certificates** as authentication mechanism and click **Next**. +4. Enter the environment ID of the ECE environment. You can find it under Platform > Trust Management in your ECE administration UI. +5. Upload the Certificate Authority of the ECE environment. You can download it from Platform > Trust Management in your ECE administration UI. +6. Choose one of following options to configure the level of trust with the ECE environment: + + * All deployments - This deployment trusts all deployments in the ECE environment, including new deployments when they are created. + * Specific deployments - Specify which of the existing deployments you want to trust in the ECE environment. The full {{es}} cluster ID must be entered for each remote cluster. The {{es}} `Cluster ID` can be found in the deployment overview page under **Applications**. + +7. Provide a name for the trusted environment. That name will appear in the trust summary of your deployment’s **Security** page. +8. Select **Create trust** to complete the configuration. +9. Configure the corresponding deployments of the ECE environment to [trust this deployment](/deploy-manage/remote-clusters/ece-enable-ccs.md). You will only be able to connect 2 deployments successfully when both of them trust each other. + +Note that the environment ID and cluster IDs must be entered fully and correctly. For security reasons, no verification of the IDs is possible. If cross-environment trust does not appear to be working, double-checking the IDs is a good place to start. + +::::{dropdown} Using the API +You can update a deployment using the appropriate trust settings for the {{es}} payload. + +In order to trust a deployment with cluster id `cf659f7fe6164d9691b284ae36811be1` (NOTE: use the {{es}} cluster ID, not the deployment ID) in an ECE environment with environment ID `1053523734`, you need to update the trust settings with an additional direct trust relationship like this: + +```json +{ + "trust":{ + "accounts":[ + { + "account_id":"ec38dd0aa45f4a69909ca5c81c27138a", + "trust_all":true + } + ], + "direct": [ + { + "type" : "ECE", + "name" : "My ECE environment", + "scope_id" : "1053523734", + "certificates" : [ + { + "pem" : "-----BEGIN CERTIFICATE-----\nMIIDTzCCA...H0=\n-----END CERTIFICATE-----" + } + ], + "trust_all":false, + "trust_allowlist":[ + "cf659f7fe6164d9691b284ae36811be1" + ] + } + ] + } +} +``` + +:::: +:::::: + ::::::: You can now connect remotely to the trusted clusters. ## Connect to the remote cluster [ec_connect_to_the_remote_cluster_3] -On the local cluster, add the remote cluster using Kibana or the {{es}} API. 
+On the local cluster, add the remote cluster using {{kib}} or the {{es}} API. -### Using Kibana [ec_using_kibana_3] +### Using {{kib}} [ec_using_kibana_3] 1. Open the {{kib}} main menu, and select **Stack Management > Data > Remote Clusters > Add a remote cluster**. 2. Enable **Manually enter proxy address and server name**. 3. Fill in the following fields: * **Name**: This *cluster alias* is a unique identifier that represents the connection to the remote cluster and is used to distinguish between local and remote indices. - * **Proxy address**: This value can be found on the **Security** page of the Elasticsearch Service deployment you want to use as a remote.
+ * **Proxy address**: This value can be found on the **Security** page of the {{ech}} deployment you want to use as a remote.
-    ::::{tip}
-    If you’re using API keys as security model, change the port into `9443`.
-    ::::
+      ::::{tip}
+      If you’re using API keys as the security model, change the port to `9443`.
+      ::::

-    * **Server name**: This value can be found on the **Security** page of the Elasticsearch Service deployment you want to use as a remote.
+    * **Server name**: This value can be found on the **Security** page of the {{ech}} deployment you want to use as a remote.

-    :::{image} ../../images/cloud-ce-copy-remote-cluster-parameters.png
-    :alt: Remote Cluster Parameters in Deployment
-    :class: screenshot
-    :::
+      :::{image} ../../images/cloud-ce-copy-remote-cluster-parameters.png
+      :alt: Remote Cluster Parameters in Deployment
+      :class: screenshot
+      :::

-    ::::{note}
-    If you’re having issues establishing the connection and the remote cluster is part of an {{ece}} environment with a private certificate, make sure that the proxy address and server name match with the the certificate information. For more information, refer to [Administering endpoints in {{ece}}](/deploy-manage/deploy/cloud-enterprise/change-endpoint-urls.md).
-    ::::
+      ::::{note}
+      If you’re having issues establishing the connection and the remote cluster is part of an {{ece}} environment with a private certificate, make sure that the proxy address and server name match the certificate information. For more information, refer to [Administering endpoints in {{ece}}](/deploy-manage/deploy/cloud-enterprise/change-endpoint-urls.md).
+      ::::

4. Click **Next**.
5. Click **Add remote cluster** (you have already established trust in a previous step).


-### Using the Elasticsearch API [ec_using_the_elasticsearch_api_3]
+### Using the {{es}} API [ec_using_the_elasticsearch_api_3]

To configure a deployment as a remote cluster, use the [cluster update settings API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-put-settings). Configure the following fields:

* `mode`: `proxy`
-* `proxy_address`: This value can be found on the **Security** page of the Elasticsearch Service deployment you want to use as a remote. Also, using the API, this value can be obtained from the {{es}} resource info, concatenating the field `metadata.endpoint` and port `9400` using a semicolon.
+* `proxy_address`: This value can be found on the **Security** page of the {{ech}} deployment you want to use as a remote. Also, using the API, this value can be obtained from the {{es}} resource info, concatenating the field `metadata.endpoint` and port `9400` using a colon.

-::::{tip}
-If you’re using API keys as security model, change the port into `9443`.
-::::
+    ::::{tip}
+    If you’re using API keys as the security model, change the port to `9443`.
+    ::::

-* `server_name`: This value can be found on the **Security** page of the Elasticsearch Service deployment you want to use as a remote. Also, using the API, this can be obtained from the {{es}} resource info field `metadata.endpoint`.
+* `server_name`: This value can be found on the **Security** page of the {{ech}} deployment you want to use as a remote. Also, using the API, this can be obtained from the {{es}} resource info field `metadata.endpoint`.

This is an example of the API call to `_cluster/settings`:

@@ -252,45 +253,11 @@ PUT /_cluster/settings
}
```

-:::::{dropdown} **Stack Version above 6.7.0 and below 7.6.0**
-::::{note}
-This section only applies if you’re using TLS certificates as cross-cluster security model.
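+
+Optionally, you can check that the connection works by calling the remote cluster info API on the local cluster. No request body is needed, and the response lists each configured cluster alias together with details such as whether it is currently `connected`:
+
+```json
+GET /_remote/info
+```
+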
-:::: - - -When the cluster to be configured as a remote is above 6.7.0 and below 7.6.0, the remote cluster must be configured using the [sniff mode](/deploy-manage/remote-clusters/remote-clusters-self-managed.md#sniff-mode) with the proxy field. For each remote cluster you need to pass the following fields: -* **Proxy**: This value can be found on the **Security** page of the deployment you want to use as a remote under the name `Proxy Address`. Also, using the API, this can be obtained from the elasticsearch resource info, concatenating the fields `metadata.endpoint` and `metadata.ports.transport_passthrough` using a semicolon. -* **Seeds**: This field is an array that must contain only one value, which is the `server name` that can be found on the **Security** page of the {{es}} deployment you want to use as a remote concatenated with `:1`. Also, using the API, this can be obtained from the {{es}} resource info, concatenating the fields `metadata.endpoint` and `1` with a semicolon. -* **Mode**: sniff (or empty, since sniff is the default value) - -This is an example of the API call to `_cluster/settings`: - -```json -{ - "persistent": { - "cluster": { - "remote": { - "my-remote-cluster-1": { - "seeds": [ - "a542184a7a7d45b88b83f95392f450ab.192.168.44.10.ip.es.io:1" - ], - "proxy": "a542184a7a7d45b88b83f95392f450ab.192.168.44.10.ip.es.io:9400" - } - } - } - } -} -``` - -::::: - - - -### Using the Elasticsearch Service RESTful API [ec_using_the_elasticsearch_service_restful_api_3] +### Using the {{ecloud}} RESTful API [ec_using_the_elasticsearch_service_restful_api_3] ::::{note} -This section only applies if you’re using TLS certificates as cross-cluster security model and when both clusters belong to the same organization (for other scenarios,the Elasticsearch API should be used instead): +This section only applies if you’re using TLS certificates as cross-cluster security model and when both clusters belong to the same organization. For other scenarios, the [{{es}} API](#ec_using_the_elasticsearch_api_3) should be used instead. :::: @@ -314,7 +281,7 @@ curl -H 'Content-Type: application/json' -X PUT -H "Authorization: ApiKey $EC_AP `REF_ID_REMOTE` : The unique ID of the {{es}} resources inside your remote deployment (you can obtain these values through the API). -Note the following when using the Elasticsearch Service RESTful API: +Note the following when using the {{ecloud}} RESTful API: 1. A cluster alias must contain only letters, numbers, dashes (-), or underscores (_). 2. To learn about skipping disconnected clusters, refer to the [{{es}} documentation](/solutions/search/cross-cluster-search.md#skip-unavailable-clusters). @@ -327,11 +294,9 @@ curl -X GET -H "Authorization: ApiKey $EC_API_KEY" https://api.elastic-cloud.com ``` ::::{note} -The response will include just the remote clusters from the same organization in Elasticsearch Service. In order to obtain the whole list of remote clusters, use Kibana or the Elasticsearch API [Elasticsearch API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-remote-info) directly. +The response will include just the remote clusters from the same {{ecloud}} organization. In order to obtain the whole list of remote clusters, use {{kib}} or the [{{es}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-remote-info) directly. 
:::: - - ## Configure roles and users [ec_configure_roles_and_users_3] To use a remote cluster for {{ccr}} or {{ccs}}, you need to create user roles with [remote indices privileges](../users-roles/cluster-or-deployment-auth/defining-roles.md#roles-remote-indices-priv) on the local cluster. Refer to [Configure roles and users](remote-clusters-api-key.md#remote-clusters-privileges-api-key). diff --git a/deploy-manage/remote-clusters/ec-remote-cluster-other-ess.md b/deploy-manage/remote-clusters/ec-remote-cluster-other-ess.md index 4892d367a..de596ba48 100644 --- a/deploy-manage/remote-clusters/ec-remote-cluster-other-ess.md +++ b/deploy-manage/remote-clusters/ec-remote-cluster-other-ess.md @@ -1,71 +1,28 @@ --- +applies_to: + deployment: + ess: ga +navigation_title: With a different {{ecloud}} organization mapped_pages: - https://www.elastic.co/guide/en/cloud/current/ec-remote-cluster-other-ess.html --- -# Access deployments of another Elasticsearch Service organization [ec-remote-cluster-other-ess] +# Access deployments of another {{ecloud}} organization [ec-remote-cluster-other-ess] -This section explains how to configure a deployment to connect remotely to clusters belonging to a different Elasticsearch Service organization. +This section explains how to configure a deployment to connect remotely to clusters belonging to a different {{ecloud}} organization. ## Allow the remote connection [ec_allow_the_remote_connection_2] Before you start, consider the security model that you would prefer to use for authenticating remote connections between clusters, and follow the corresponding steps. API key -: For deployments based on {{stack}} version 8.10 or later, you can use an API key to authenticate and authorize cross-cluster operations to a remote cluster. This model offers administrators of both the local and the remote deployment fine-grained access controls. +: For deployments based on {{stack}} 8.14 or later, you can use an API key to authenticate and authorize cross-cluster operations to a remote cluster. This model offers administrators of both the local and the remote deployment fine-grained access controls. -TLS certificate +TLS certificate (deprecated in {{stack}} 9.0.0) : This model uses mutual TLS authentication for cross-cluster operations. User authentication is performed on the local cluster and a user’s role names are passed to the remote cluster. A superuser on the local deployment gains total read access to the remote deployment, so it is only suitable for deployments that are in the same security domain. :::::::{tab-set} -::::::{tab-item} TLS certificate -#### Specify the deployments trusted to be used as remote clusters [ec-trust-other-organization] - -A deployment can be configured to trust all or specific deployments in another Elasticsearch Service [organizations](../users-roles/cloud-organization.md). To add cross-organization trust: - -1. From the **Security** menu, select **Remote Connections > Add trusted environment** and select **{{ecloud}}**. Then click **Next**. -2. Select **Certificates** as authentication mechanism and click **Next**. -3. Enter the ID of the deployment’s organization which you want to establish trust with. You can find that ID on the Organization page. It is usually made of 10 digits. -4. Choose one of following options to configure the level of trust with the other organization: - - * All deployments - This deployment trusts all deployments in the other organization, including new deployments when they are created. 
- * Specific deployments - Specify which of the existing deployments you want to trust in the other organization. The full Elasticsearch cluster ID must be entered for each remote cluster. The Elasticsearch `Cluster ID` can be found in the deployment overview page under **Applications**. - -5. Provide a name for the trusted environment. That name will appear in the trust summary of your deployment’s Security page. -6. Select **Create trust** to complete the configuration. -7. Repeat these steps from each of the deployments you want to use for CCS or CCR in both organizations. You will only be able to connect 2 deployments successfully when both of them trust each other. - -Note that the organization ID and cluster IDs must be entered fully and correctly. For security reasons, no verification of the IDs is possible. If cross-organization trust does not appear to be working, double-checking the IDs is a good place to start. - -::::{dropdown} **Using the API** -You can update a deployment using the appropriate trust settings for the {{es}} payload. - -In order to trust a deployment with cluster id `cf659f7fe6164d9691b284ae36811be1` (NOTE: use the {{es}} cluster ID, not the deployment ID) in another organization with Organization ID `1053523734`, you need to update the trust settings with an additional account like this: - -```json -{ - "trust":{ - "accounts":[ - { - "account_id":"ec38dd0aa45f4a69909ca5c81c27138a", - "trust_all":true - }, - { - "account_id":"1053523734", - "trust_all":false, - "trust_allowlist":[ - "cf659f7fe6164d9691b284ae36811be1" - ] - } - ] - } -} -``` - -:::: -:::::: - ::::::{tab-item} API key API key authentication enables a local cluster to authenticate itself with a remote cluster via a [cross-cluster API key](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-create-cross-cluster-api-key). The API key needs to be created by an administrator of the remote cluster. The local cluster is configured to provide this API key on each request to the remote cluster. The remote cluster verifies the API key and grants access, based on the API key’s privileges. @@ -73,30 +30,30 @@ All cross-cluster requests from the local cluster are bound by the API key’s p On the local cluster side, not every local user needs to access every piece of data allowed by the API key. An administrator of the local cluster can further configure additional permission constraints on local users so each user only gets access to the necessary remote data. Note it is only possible to further reduce the permissions allowed by the API key for individual local users. It is impossible to increase the permissions to go beyond what is allowed by the API key. -If you run into any issues, refer to [Troubleshooting](remote-clusters-troubleshooting.md). +If you run into any issues, refer to [Troubleshooting](/troubleshoot/elasticsearch/remote-clusters.md). -#### Prerequisites and limitations [ec_prerequisites_and_limitations_2] +### Prerequisites and limitations [ec_prerequisites_and_limitations_2] -* The local and remote deployments must be on version 8.12 or later. +* The local and remote deployments must be on {{stack}} 8.14 or later. * API key authentication can’t be used in combination with traffic filters. * Contrary to the certificate security model, the API key security model does not require that both local and remote clusters trust each other. 
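+
+For illustration, the following sketch shows a cross-cluster API key (created on the remote deployment, as described in the next section) that limits every cross-cluster request from the local cluster to read-only search on a hypothetical `logs-*` index pattern. The key name and index pattern are placeholders, and a `replication` section can be added alongside `search` if you also want to use the key for {{ccr}}:
+
+```json
+POST /_security/cross_cluster/api_key
+{
+  "name": "ccs-key-for-local-deployment",
+  "access": {
+    "search": [
+      {
+        "names": [ "logs-*" ]
+      }
+    ]
+  }
+}
+```
+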
-#### Create a cross-cluster API key on the remote deployment [ec_create_a_cross_cluster_api_key_on_the_remote_deployment_2] +### Create a cross-cluster API key on the remote deployment [ec_create_a_cross_cluster_api_key_on_the_remote_deployment_2] -* On the deployment you will use as remote, use the [{{es}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-create-cross-cluster-api-key) or [Kibana](../api-keys/elasticsearch-api-keys.md) to create a cross-cluster API key. Configure it with access to the indices you want to use for {{ccs}} or {{ccr}}. +* On the deployment you will use as remote, use the [{{es}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-create-cross-cluster-api-key) or [{{kib}}](../api-keys/elasticsearch-api-keys.md) to create a cross-cluster API key. Configure it with access to the indices you want to use for {{ccs}} or {{ccr}}. * Copy the encoded key (`encoded` in the response) to a safe location. You will need it in the next step. -#### Add the cross-cluster API key to the keystore of the local deployment [ec_add_the_cross_cluster_api_key_to_the_keystore_of_the_local_deployment_2] +### Add the cross-cluster API key to the keystore of the local deployment [ec_add_the_cross_cluster_api_key_to_the_keystore_of_the_local_deployment_2] The API key created previously will be used by the local deployment to authenticate with the corresponding set of permissions to the remote deployment. For that, you need to add the API key to the local deployment’s keystore. -1. Log in to the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body). -2. Find your deployment on the home page in the Elasticsearch Service card and select **Manage** to access it directly. Or, select **Hosted deployments** to go to the deployments page to view all of your deployments. +1. Log in to the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). +2. On the home page, find your hosted deployment and select **Manage** to access it directly. Or, select **Hosted deployments** to go to the **Deployments** page to view all of your deployments. - On the deployments page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. + On the **Deployments** page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. 3. From the deployment menu, select **Security**. 4. Locate **Remote connections** and select **Add an API key**. @@ -108,66 +65,112 @@ The API key created previously will be used by the local deployment to authentic 2. Click **Add** to save the API key to the keystore. -5. Restart the local deployment to reload the keystore with its new setting. To do that, go to the deployment’s main page (named after your deployment’s name), locate the **Actions** menu, and select **Restart Elasticsearch**.
+5. Restart the local deployment to reload the keystore with its new setting. To do that, go to the deployment’s main page (named after your deployment’s name), locate the **Actions** menu, and select **Restart {{es}}**.
::::{note}
-    If the local deployment runs on version 8.13 or greater, you no longer need to perform this step because the keystore is reloaded automatically with the new API keys.
+    If the local deployment runs on version 8.14 or greater, you no longer need to perform this step because the keystore is reloaded automatically with the new API keys.
    ::::


If you later need to update the remote connection with different permissions, you can replace the API key as detailed in [Update the access level of a remote cluster connection relying on a cross-cluster API key](ec-edit-remove-trusted-environment.md#ec-edit-remove-trusted-environment-api-key).
::::::

+::::::{tab-item} TLS certificate (deprecated)
+### Specify the deployments trusted to be used as remote clusters [ec-trust-other-organization]
+
+A deployment can be configured to trust all or specific deployments in another {{ecloud}} [organization](../users-roles/cloud-organization.md). To add cross-organization trust:
+
+1. From the **Security** menu, select **Remote Connections > Add trusted environment** and select **{{ecloud}}**. Then click **Next**.
+2. Select **Certificates** as the authentication mechanism and click **Next**.
+3. Enter the ID of the organization that you want to establish trust with. You can find that ID on the Organization page. It is usually made of 10 digits.
+4. Choose one of the following options to configure the level of trust with the other organization:
+
+    * All deployments - This deployment trusts all deployments in the other organization, including new deployments when they are created.
+    * Specific deployments - Specify which of the existing deployments you want to trust in the other organization. The full {{es}} cluster ID must be entered for each remote cluster. The {{es}} `Cluster ID` can be found in the deployment overview page under **Applications**.
+
+5. Provide a name for the trusted environment. That name will appear in the trust summary of your deployment’s **Security** page.
+6. Select **Create trust** to complete the configuration.
+7. Repeat these steps from each of the deployments you want to use for CCS or CCR in both organizations. You will only be able to connect two deployments successfully when both of them trust each other.
+
+Note that the organization ID and cluster IDs must be entered fully and correctly. For security reasons, no verification of the IDs is possible. If cross-organization trust does not appear to be working, double-checking the IDs is a good place to start.
+
+::::{dropdown} Using the API
+You can update a deployment using the appropriate trust settings for the {{es}} payload.
+
+To trust a deployment with cluster ID `cf659f7fe6164d9691b284ae36811be1` (NOTE: use the {{es}} cluster ID, not the deployment ID) in another organization with Organization ID `1053523734`, you need to update the trust settings with an additional account like this:
+
+```json
+{
+   "trust":{
+      "accounts":[
+         {
+            "account_id":"ec38dd0aa45f4a69909ca5c81c27138a",
+            "trust_all":true
+         },
+         {
+            "account_id":"1053523734",
+            "trust_all":false,
+            "trust_allowlist":[
+               "cf659f7fe6164d9691b284ae36811be1"
+            ]
+         }
+      ]
+   }
+}
+```
+
+::::
+::::::
:::::::

You can now connect remotely to the trusted clusters.


## Connect to the remote cluster [ec_connect_to_the_remote_cluster_2]

-On the local cluster, add the remote cluster using Kibana or the {{es}} API.
+On the local cluster, add the remote cluster using {{kib}} or the {{es}} API.
-### Using Kibana [ec_using_kibana_2] +### Using {{kib}} [ec_using_kibana_2] 1. Open the {{kib}} main menu, and select **Stack Management > Data > Remote Clusters > Add a remote cluster**. 2. Enable **Manually enter proxy address and server name**. 3. Fill in the following fields: * **Name**: This *cluster alias* is a unique identifier that represents the connection to the remote cluster and is used to distinguish between local and remote indices. - * **Proxy address**: This value can be found on the **Security** page of the Elasticsearch Service deployment you want to use as a remote.
+ * **Proxy address**: This value can be found on the **Security** page of the {{ech}} deployment you want to use as a remote.
-    ::::{tip}
-    If you’re using API keys as security model, change the port into `9443`.
-    ::::
+      ::::{tip}
+      If you’re using API keys as the security model, change the port to `9443`.
+      ::::

-    * **Server name**: This value can be found on the **Security** page of the Elasticsearch Service deployment you want to use as a remote.
+    * **Server name**: This value can be found on the **Security** page of the {{ech}} deployment you want to use as a remote.

-    :::{image} ../../images/cloud-ce-copy-remote-cluster-parameters.png
-    :alt: Remote Cluster Parameters in Deployment
-    :class: screenshot
-    :::
+      :::{image} ../../images/cloud-ce-copy-remote-cluster-parameters.png
+      :alt: Remote Cluster Parameters in Deployment
+      :class: screenshot
+      :::

-    ::::{note}
-    If you’re having issues establishing the connection and the remote cluster is part of an {{ece}} environment with a private certificate, make sure that the proxy address and server name match with the the certificate information. For more information, refer to [Administering endpoints in {{ece}}](/deploy-manage/deploy/cloud-enterprise/change-endpoint-urls.md).
-    ::::
+      ::::{note}
+      If you’re having issues establishing the connection and the remote cluster is part of an {{ece}} environment with a private certificate, make sure that the proxy address and server name match the certificate information. For more information, refer to [Administering endpoints in {{ece}}](/deploy-manage/deploy/cloud-enterprise/change-endpoint-urls.md).
+      ::::

4. Click **Next**.
5. Click **Add remote cluster** (you have already established trust in a previous step).


-### Using the Elasticsearch API [ec_using_the_elasticsearch_api_2]
+### Using the {{es}} API [ec_using_the_elasticsearch_api_2]

To configure a deployment as a remote cluster, use the [cluster update settings API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-put-settings). Configure the following fields:

* `mode`: `proxy`
-* `proxy_address`: This value can be found on the **Security** page of the Elasticsearch Service deployment you want to use as a remote. Also, using the API, this value can be obtained from the {{es}} resource info, concatenating the field `metadata.endpoint` and port `9400` using a semicolon.
+* `proxy_address`: This value can be found on the **Security** page of the {{ech}} deployment you want to use as a remote. Also, using the API, this value can be obtained from the {{es}} resource info, concatenating the field `metadata.endpoint` and port `9400` using a colon.

-::::{tip}
-If you’re using API keys as security model, change the port into `9443`.
-::::
+    ::::{tip}
+    If you’re using API keys as the security model, change the port to `9443`.
+    ::::

-* `server_name`: This value can be found on the **Security** page of the Elasticsearch Service deployment you want to use as a remote. Also, using the API, this can be obtained from the {{es}} resource info field `metadata.endpoint`.
+* `server_name`: This value can be found on the **Security** page of the {{ech}} deployment you want to use as a remote. Also, using the API, this can be obtained from the {{es}} resource info field `metadata.endpoint`.

This is an example of the API call to `_cluster/settings`:

@@ -188,45 +191,10 @@ PUT /_cluster/settings
}
```

-:::::{dropdown} **Stack Version above 6.7.0 and below 7.6.0**
-::::{note}
-This section only applies if you’re using TLS certificates as cross-cluster security model.
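+
+For example, when you use the API key security model, the resulting settings might look like the following sketch. The alias `my-remote-cluster-1` and the addresses are placeholders; copy the actual values from the remote deployment’s **Security** page and note the `9443` port:
+
+```json
+PUT /_cluster/settings
+{
+  "persistent": {
+    "cluster": {
+      "remote": {
+        "my-remote-cluster-1": {
+          "mode": "proxy",
+          "proxy_address": "a542184a7a7d45b88b83f95392f450ab.192.168.44.10.ip.es.io:9443",
+          "server_name": "a542184a7a7d45b88b83f95392f450ab.192.168.44.10.ip.es.io"
+        }
+      }
+    }
+  }
+}
+```
+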
-:::: - - -When the cluster to be configured as a remote is above 6.7.0 and below 7.6.0, the remote cluster must be configured using the [sniff mode](/deploy-manage/remote-clusters/remote-clusters-self-managed.md#sniff-mode) with the proxy field. For each remote cluster you need to pass the following fields: - -* **Proxy**: This value can be found on the **Security** page of the deployment you want to use as a remote under the name `Proxy Address`. Also, using the API, this can be obtained from the elasticsearch resource info, concatenating the fields `metadata.endpoint` and `metadata.ports.transport_passthrough` using a semicolon. -* **Seeds**: This field is an array that must contain only one value, which is the `server name` that can be found on the **Security** page of the {{es}} deployment you want to use as a remote concatenated with `:1`. Also, using the API, this can be obtained from the {{es}} resource info, concatenating the fields `metadata.endpoint` and `1` with a semicolon. -* **Mode**: sniff (or empty, since sniff is the default value) - -This is an example of the API call to `_cluster/settings`: - -```json -{ - "persistent": { - "cluster": { - "remote": { - "my-remote-cluster-1": { - "seeds": [ - "a542184a7a7d45b88b83f95392f450ab.192.168.44.10.ip.es.io:1" - ], - "proxy": "a542184a7a7d45b88b83f95392f450ab.192.168.44.10.ip.es.io:9400" - } - } - } - } -} -``` - -::::: - - - -### Using the Elasticsearch Service RESTful API [ec_using_the_elasticsearch_service_restful_api_2] +### Using the {{ecloud}} RESTful API [ec_using_the_elasticsearch_service_restful_api_2] ::::{note} -This section only applies if you’re using TLS certificates as cross-cluster security model and when both clusters belong to the same organization (for other scenarios,the Elasticsearch API should be used instead): +This section only applies if you’re using TLS certificates as cross-cluster security model and when both clusters belong to the same organization. For other scenarios, the [{{es}} API](#ec_using_the_elasticsearch_api_2) should be used instead. :::: @@ -250,7 +218,7 @@ curl -H 'Content-Type: application/json' -X PUT -H "Authorization: ApiKey $EC_AP `REF_ID_REMOTE` : The unique ID of the {{es}} resources inside your remote deployment (you can obtain these values through the API). -Note the following when using the Elasticsearch Service RESTful API: +Note the following when using the {{ecloud}} RESTful API: 1. A cluster alias must contain only letters, numbers, dashes (-), or underscores (_). 2. To learn about skipping disconnected clusters, refer to the [{{es}} documentation](/solutions/search/cross-cluster-search.md#skip-unavailable-clusters). @@ -263,11 +231,10 @@ curl -X GET -H "Authorization: ApiKey $EC_API_KEY" https://api.elastic-cloud.com ``` ::::{note} -The response will include just the remote clusters from the same organization in Elasticsearch Service. In order to obtain the whole list of remote clusters, use Kibana or the Elasticsearch API [Elasticsearch API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-remote-info) directly. +The response will include just the remote clusters from the same {{ecloud}} organization. In order to obtain the whole list of remote clusters, use {{kib}} or the [{{es}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-remote-info) directly. 
:::: - ## Configure roles and users [ec_configure_roles_and_users_2] -To use a remote cluster for {{ccr}} or {{ccs}}, you need to create user roles with [remote indices privileges](../users-roles/cluster-or-deployment-auth/defining-roles.md#roles-remote-indices-priv) on the local cluster. Refer to [Configure roles and users](remote-clusters-api-key.md#remote-clusters-privileges-api-key). +To use a remote cluster for {{ccr}} or {{ccs}}, you need to create user roles with [remote indices privileges](../users-roles/cluster-or-deployment-auth/defining-roles.md#roles-remote-indices-priv) on the local cluster. Refer to [Configure roles and users](remote-clusters-api-key.md#remote-clusters-privileges-api-key). \ No newline at end of file diff --git a/deploy-manage/remote-clusters/ec-remote-cluster-same-ess.md b/deploy-manage/remote-clusters/ec-remote-cluster-same-ess.md index 838ae0f5c..cf08bc919 100644 --- a/deploy-manage/remote-clusters/ec-remote-cluster-same-ess.md +++ b/deploy-manage/remote-clusters/ec-remote-cluster-same-ess.md @@ -1,28 +1,84 @@ --- +applies_to: + deployment: + ess: ga +navigation_title: Within the same {{ecloud}} organization mapped_pages: - https://www.elastic.co/guide/en/cloud/current/ec-remote-cluster-same-ess.html --- -# Access other deployments of the same Elasticsearch Service organization [ec-remote-cluster-same-ess] +# Access other deployments of the same {{ecloud}} organization [ec-remote-cluster-same-ess] -This section explains how to configure a deployment to connect remotely to clusters belonging to the same Elasticsearch Service organization. +This section explains how to configure a deployment to connect remotely to clusters belonging to the same {{ecloud}} organization. ## Allow the remote connection [ec_allow_the_remote_connection] Before you start, consider the security model that you would prefer to use for authenticating remote connections between clusters, and follow the corresponding steps. API key -: For deployments based on {{stack}} version 8.10 or later, you can use an API key to authenticate and authorize cross-cluster operations to a remote cluster. This model offers administrators of both the local and the remote deployment fine-grained access controls. +: For deployments based on {{stack}} 8.14 or later, you can use an API key to authenticate and authorize cross-cluster operations to a remote cluster. This model offers administrators of both the local and the remote deployment fine-grained access controls. -TLS certificate +TLS certificate (deprecated in {{stack}} 9.0.0) : This model uses mutual TLS authentication for cross-cluster operations. User authentication is performed on the local cluster and a user’s role names are passed to the remote cluster. A superuser on the local deployment gains total read access to the remote deployment, so it is only suitable for deployments that are in the same security domain. :::::::{tab-set} -::::::{tab-item} TLS certificate -#### Set the default trust with other clusters in the same Elasticsearch Service organization [ec_set_the_default_trust_with_other_clusters_in_the_same_elasticsearch_service_organization] +::::::{tab-item} API key +API key authentication enables a local cluster to authenticate itself with a remote cluster via a [cross-cluster API key](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-create-cross-cluster-api-key). The API key needs to be created by an administrator of the remote cluster. 
The local cluster is configured to provide this API key on each request to the remote cluster. The remote cluster verifies the API key and grants access, based on the API key’s privileges.
+
+All cross-cluster requests from the local cluster are bound by the API key’s privileges, regardless of local users associated with the requests. For example, if the API key only allows read access to `my-index` on the remote cluster, even a superuser from the local cluster is limited by this constraint. This mechanism enables the remote cluster’s administrator to have full control over who can access what data with cross-cluster search and/or cross-cluster replication. The remote cluster’s administrator can be confident that no access is possible beyond what is explicitly assigned to the API key.
+
+On the local cluster side, not every local user needs to access every piece of data allowed by the API key. An administrator of the local cluster can further configure additional permission constraints on local users so each user only gets access to the necessary remote data. Note it is only possible to further reduce the permissions allowed by the API key for individual local users. It is impossible to increase the permissions to go beyond what is allowed by the API key.
+
+If you run into any issues, refer to [Troubleshooting](/troubleshoot/elasticsearch/remote-clusters.md).
+
+
+### Prerequisites and limitations [ec_prerequisites_and_limitations]
+
+* The local and remote deployments must be on {{stack}} 8.14 or later.
+* API key authentication can’t be used in combination with traffic filters.
+* Contrary to the certificate security model, the API key security model does not require that both local and remote clusters trust each other.
+
+
+### Create a cross-cluster API key on the remote deployment [ec_create_a_cross_cluster_api_key_on_the_remote_deployment]
+
+* On the deployment you will use as remote, use the [{{es}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-create-cross-cluster-api-key) or [{{kib}}](../api-keys/elasticsearch-api-keys.md) to create a cross-cluster API key. Configure it with access to the indices you want to use for {{ccs}} or {{ccr}}.
+* Copy the encoded key (`encoded` in the response) to a safe location. You will need it in the next step.
+
+
+### Add the cross-cluster API key to the keystore of the local deployment [ec_add_the_cross_cluster_api_key_to_the_keystore_of_the_local_deployment]
+
+The API key created previously will be used by the local deployment to authenticate with the corresponding set of permissions to the remote deployment. For that, you need to add the API key to the local deployment’s keystore.
+
+1. Log in to the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body).
+2. On the home page, find your hosted deployment and select **Manage** to access it directly. Or, select **Hosted deployments** to go to the **Deployments** page to view all of your deployments.
+
+    On the **Deployments** page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list.
+
+3. From the deployment menu, select **Security**.
+4. Locate **Remote connections** and select **Add an API key**.
+
+    1. Fill both fields.
+
+        * For the **Setting name**, enter the alias of your choice. You will use this alias to connect to the remote cluster later. It must be lowercase and only contain letters, numbers, dashes and underscores.
+        * For the **Secret**, paste the encoded cross-cluster API key.
+
+    2. Click **Add** to save the API key to the keystore.
+
+5. Restart the local deployment to reload the keystore with its new setting. To do that, go to the deployment’s main page (named after your deployment’s name), locate the **Actions** menu, and select **Restart {{es}}**.
+ + ::::{note} + If the local deployment runs on version 8.14 or greater, you no longer need to perform this step because the keystore is reloaded automatically with the new API keys. + :::: + -By default, any deployment that you create trusts all other deployments in the same organization. You can change this behavior in the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body) under **Features** > **Trust**, so that when a new deployment is created it does not automatically trust any other deployment. You can choose one of the following options: +If you later need to update the remote connection with different permissions, you can replace the API key as detailed in [Update the access level of a remote cluster connection relying on a cross-cluster API key](ec-edit-remove-trusted-environment.md#ec-edit-remove-trusted-environment-api-key). +:::::: + +::::::{tab-item} TLS certificate (deprecated) +### Set the default trust with other clusters in the same {{ecloud}} organization [ec_set_the_default_trust_with_other_clusters_in_the_same_elasticsearch_service_organization] + +By default, any deployment that you create trusts all other deployments in the same organization. You can change this behavior in the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body) under **Features** > **Trust**, so that when a new deployment is created it does not automatically trust any other deployment. You can choose one of the following options: * Trust all my deployments - All of your organization’s deployments created while this option is selected already trust each other. If you keep this option, that includes any deployments you’ll create in the future. You can directly jump to [Connect to the remote cluster](/deploy-manage/remote-clusters/ec-remote-cluster-same-ess.md#ec_connect_to_the_remote_cluster) to finalize the CCS or CCR configuration. * Trust no deployment - New deployments won’t trust any other deployment when they are created. You can instead configure trust individually for each of them in their security settings, as described in the next section. @@ -34,13 +90,13 @@ By default, any deployment that you create trusts all other deployments in the s ::::{note} * The level of trust of existing deployments is not modified when you change this setting. You must instead update the trust settings individually for each deployment you wish to change. -* Deployments created before the Elasticsearch Service February 2021 release trust only themselves. You have to update the trust setting for each deployment that you want to either use as a remote cluster or configure to work with a remote cluster. +* Deployments created before the {{ecloud}} February 2021 release trust only themselves. You have to update the trust setting for each deployment that you want to either use as a remote cluster or configure to work with a remote cluster. :::: -#### Specify the deployments trusted to be used as remote clusters [ec_specify_the_deployments_trusted_to_be_used_as_remote_clusters] +### Specify the deployments trusted to be used as remote clusters [ec_specify_the_deployments_trusted_to_be_used_as_remote_clusters] If your organization’s deployments already trust each other by default, you can skip this section. If that’s not the case, follow these steps to configure which are the specific deployments that should be trusted. 
@@ -50,17 +106,16 @@ If your organization’s deployments already trust each other by default, you ca * Trust all deployments - This deployment trusts all other deployments in this environment, including new deployments when they are created. * Trust specific deployments - Choose which of the existing deployments from your environment you want to trust. - * Trust no deployment - No deployment in this Elasticsearch Service environment is trusted. - + * Trust no deployment - No deployment in this {{ech}} environment is trusted. -::::{note} -When trusting specific deployments, the more restrictive [CCS](/deploy-manage/remote-clusters/remote-clusters-self-managed.md#sniff-mode) version policy is used (even if you only want to use [CCR](/deploy-manage/tools/cross-cluster-replication.md)). To work around this restriction for CCR-only trust, it is necessary to use the API as described below. -:::: + ::::{note} + When trusting specific deployments, the more restrictive [CCS](/deploy-manage/remote-clusters/remote-clusters-self-managed.md#sniff-mode) version policy is used (even if you only want to use [CCR](/deploy-manage/tools/cross-cluster-replication.md)). To work around this restriction for CCR-only trust, it is necessary to use the API as described below. + :::: 1. Repeat these steps from each of the deployments you want to use for CCS or CCR. You will only be able to connect 2 deployments successfully when both of them trust each other. -::::{dropdown} **Using the API** +::::{dropdown} Using the API You can update a deployment using the appropriate trust settings for the {{es}} payload. The current trust settings can be found in the path `.resources.elasticsearch[0].info.settings.trust` when calling: @@ -102,109 +157,56 @@ The `account_id` above represents the only account in an {{es}} environment, and :::: :::::: - -::::::{tab-item} API key -API key authentication enables a local cluster to authenticate itself with a remote cluster via a [cross-cluster API key](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-create-cross-cluster-api-key). The API key needs to be created by an administrator of the remote cluster. The local cluster is configured to provide this API key on each request to the remote cluster. The remote cluster verifies the API key and grants access, based on the API key’s privileges. - -All cross-cluster requests from the local cluster are bound by the API key’s privileges, regardless of local users associated with the requests. For example, if the API key only allows read access to `my-index` on the remote cluster, even a superuser from the local cluster is limited by this constraint. This mechanism enables the remote cluster’s administrator to have full control over who can access what data with cross-cluster search and/or cross-cluster replication. The remote cluster’s administrator can be confident that no access is possible beyond what is explicitly assigned to the API key. - -On the local cluster side, not every local user needs to access every piece of data allowed by the API key. An administrator of the local cluster can further configure additional permission constraints on local users so each user only gets access to the necessary remote data. Note it is only possible to further reduce the permissions allowed by the API key for individual local users. It is impossible to increase the permissions to go beyond what is allowed by the API key. - -If you run into any issues, refer to [Troubleshooting](remote-clusters-troubleshooting.md). 
- - -#### Prerequisites and limitations [ec_prerequisites_and_limitations] - -* The local and remote deployments must be on version 8.12 or later. -* API key authentication can’t be used in combination with traffic filters. -* Contrary to the certificate security model, the API key security model does not require that both local and remote clusters trust each other. - - -#### Create a cross-cluster API key on the remote deployment [ec_create_a_cross_cluster_api_key_on_the_remote_deployment] - -* On the deployment you will use as remote, use the [{{es}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-create-cross-cluster-api-key) or [Kibana](../api-keys/elasticsearch-api-keys.md) to create a cross-cluster API key. Configure it with access to the indices you want to use for {{ccs}} or {{ccr}}. -* Copy the encoded key (`encoded` in the response) to a safe location. You will need it in the next step. - - -#### Add the cross-cluster API key to the keystore of the local deployment [ec_add_the_cross_cluster_api_key_to_the_keystore_of_the_local_deployment] - -The API key created previously will be used by the local deployment to authenticate with the corresponding set of permissions to the remote deployment. For that, you need to add the API key to the local deployment’s keystore. - -1. Log in to the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body). -2. Find your deployment on the home page in the Elasticsearch Service card and select **Manage** to access it directly. Or, select **Hosted deployments** to go to the deployments page to view all of your deployments. - - On the deployments page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. - -3. From the deployment menu, select **Security**. -4. Locate **Remote connections** and select **Add an API key**. - - 1. Fill both fields. - - * For the **Setting name**, enter the the alias of your choice. You will use this alias to connect to the remote cluster later. It must be lowercase and only contain letters, numbers, dashes and underscores. - * For the **Secret**, paste the encoded cross-cluster API key. - - 2. Click **Add** to save the API key to the keystore. - -5. Restart the local deployment to reload the keystore with its new setting. To do that, go to the deployment’s main page (named after your deployment’s name), locate the **Actions** menu, and select **Restart Elasticsearch**.
- - ::::{note} - If the local deployment runs on version 8.13 or greater, you no longer need to perform this step because the keystore is reloaded automatically with the new API keys. - :::: - - -If you later need to update the remote connection with different permissions, you can replace the API key as detailed in [Update the access level of a remote cluster connection relying on a cross-cluster API key](ec-edit-remove-trusted-environment.md#ec-edit-remove-trusted-environment-api-key). -:::::: - ::::::: You can now connect remotely to the trusted clusters. ## Connect to the remote cluster [ec_connect_to_the_remote_cluster] -On the local cluster, add the remote cluster using Kibana or the {{es}} API. +On the local cluster, add the remote cluster using {{kib}} or the {{es}} API. -### Using Kibana [ec_using_kibana] +### Using {{kib}} [ec_using_kibana] 1. Open the {{kib}} main menu, and select **Stack Management > Data > Remote Clusters > Add a remote cluster**. 2. Enable **Manually enter proxy address and server name**. 3. Fill in the following fields: * **Name**: This *cluster alias* is a unique identifier that represents the connection to the remote cluster and is used to distinguish between local and remote indices. - * **Proxy address**: This value can be found on the **Security** page of the Elasticsearch Service deployment you want to use as a remote.
+ * **Proxy address**: This value can be found on the **Security** page of the {{ech}} deployment you want to use as a remote.
- ::::{tip} - If you’re using API keys as security model, change the port into `9443`. - :::: + ::::{tip} + If you’re using API keys as security model, change the port into `9443`. + :::: - * **Server name**: This value can be found on the **Security** page of the Elasticsearch Service deployment you want to use as a remote. + * **Server name**: This value can be found on the **Security** page of the {{ech}} deployment you want to use as a remote. - :::{image} ../../images/cloud-ce-copy-remote-cluster-parameters.png - :alt: Remote Cluster Parameters in Deployment - :class: screenshot - ::: + :::{image} ../../images/cloud-ce-copy-remote-cluster-parameters.png + :alt: Remote Cluster Parameters in Deployment + :class: screenshot + ::: - ::::{note} - If you’re having issues establishing the connection and the remote cluster is part of an {{ece}} environment with a private certificate, make sure that the proxy address and server name match with the the certificate information. For more information, refer to [Administering endpoints in {{ece}}](/deploy-manage/deploy/cloud-enterprise/change-endpoint-urls.md). - :::: + ::::{note} + If you’re having issues establishing the connection and the remote cluster is part of an {{ece}} environment with a private certificate, make sure that the proxy address and server name match with the the certificate information. For more information, refer to [Administering endpoints in {{ece}}](/deploy-manage/deploy/cloud-enterprise/change-endpoint-urls.md). + :::: 4. Click **Next**. 5. Click **Add remote cluster** (you have already established trust in a previous step). -### Using the Elasticsearch API [ec_using_the_elasticsearch_api] +### Using the {{es}} API [ec_using_the_elasticsearch_api] To configure a deployment as a remote cluster, use the [cluster update settings API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-put-settings). Configure the following fields: * `mode`: `proxy` -* `proxy_address`: This value can be found on the **Security** page of the Elasticsearch Service deployment you want to use as a remote. Also, using the API, this value can be obtained from the {{es}} resource info, concatenating the field `metadata.endpoint` and port `9400` using a semicolon. +* `proxy_address`: This value can be found on the **Security** page of the {{ech}} deployment you want to use as a remote. Also, using the API, this value can be obtained from the {{es}} resource info, concatenating the field `metadata.endpoint` and port `9400` using a semicolon. -::::{tip} -If you’re using API keys as security model, change the port into `9443`. -:::: + ::::{tip} + If you’re using API keys as security model, change the port into `9443`. + :::: -* `server_name`: This value can be found on the **Security** page of the Elasticsearch Service deployment you want to use as a remote. Also, using the API, this can be obtained from the {{es}} resource info field `metadata.endpoint`. +* `server_name`: This value can be found on the **Security** page of the {{ech}} deployment you want to use as a remote. Also, using the API, this can be obtained from the {{es}} resource info field `metadata.endpoint`. This is an example of the API call to `_cluster/settings`: @@ -225,45 +227,11 @@ PUT /_cluster/settings } ``` -:::::{dropdown} **Stack Version above 6.7.0 and below 7.6.0** -::::{note} -This section only applies if you’re using TLS certificates as cross-cluster security model. 
-:::: - - -When the cluster to be configured as a remote is above 6.7.0 and below 7.6.0, the remote cluster must be configured using the [sniff mode](/deploy-manage/remote-clusters/remote-clusters-self-managed.md#sniff-mode) with the proxy field. For each remote cluster you need to pass the following fields: - -* **Proxy**: This value can be found on the **Security** page of the deployment you want to use as a remote under the name `Proxy Address`. Also, using the API, this can be obtained from the elasticsearch resource info, concatenating the fields `metadata.endpoint` and `metadata.ports.transport_passthrough` using a semicolon. -* **Seeds**: This field is an array that must contain only one value, which is the `server name` that can be found on the **Security** page of the {{es}} deployment you want to use as a remote concatenated with `:1`. Also, using the API, this can be obtained from the {{es}} resource info, concatenating the fields `metadata.endpoint` and `1` with a semicolon. -* **Mode**: sniff (or empty, since sniff is the default value) - -This is an example of the API call to `_cluster/settings`: - -```json -{ - "persistent": { - "cluster": { - "remote": { - "my-remote-cluster-1": { - "seeds": [ - "a542184a7a7d45b88b83f95392f450ab.192.168.44.10.ip.es.io:1" - ], - "proxy": "a542184a7a7d45b88b83f95392f450ab.192.168.44.10.ip.es.io:9400" - } - } - } - } -} -``` - -::::: - - -### Using the Elasticsearch Service RESTful API [ec_using_the_elasticsearch_service_restful_api] +### Using the {{ecloud}} RESTful API [ec_using_the_elasticsearch_service_restful_api] ::::{note} -This section only applies if you’re using TLS certificates as cross-cluster security model and when both clusters belong to the same organization (for other scenarios,the Elasticsearch API should be used instead): +This section only applies if you’re using TLS certificates as cross-cluster security model and when both clusters belong to the same organization. For other scenarios, the [{{es}} API](#ec_using_the_elasticsearch_api) should be used instead. :::: @@ -287,7 +255,7 @@ curl -H 'Content-Type: application/json' -X PUT -H "Authorization: ApiKey $EC_AP `REF_ID_REMOTE` : The unique ID of the {{es}} resources inside your remote deployment (you can obtain these values through the API). -Note the following when using the Elasticsearch Service RESTful API: +Note the following when using the {{ecloud}} RESTful API: 1. A cluster alias must contain only letters, numbers, dashes (-), or underscores (_). 2. To learn about skipping disconnected clusters, refer to the [{{es}} documentation](/solutions/search/cross-cluster-search.md#skip-unavailable-clusters). @@ -300,11 +268,10 @@ curl -X GET -H "Authorization: ApiKey $EC_API_KEY" https://api.elastic-cloud.com ``` ::::{note} -The response will include just the remote clusters from the same organization in Elasticsearch Service. In order to obtain the whole list of remote clusters, use Kibana or the Elasticsearch API [Elasticsearch API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-remote-info) directly. +The response will include just the remote clusters from the same {{ecloud}} organization. In order to obtain the whole list of remote clusters, use {{kib}} or the [{{es}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-remote-info) directly. 
:::: - ## Configure roles and users [ec_configure_roles_and_users] To use a remote cluster for {{ccr}} or {{ccs}}, you need to create user roles with [remote indices privileges](../users-roles/cluster-or-deployment-auth/defining-roles.md#roles-remote-indices-priv) on the local cluster. Refer to [Configure roles and users](remote-clusters-api-key.md#remote-clusters-privileges-api-key). diff --git a/deploy-manage/remote-clusters/ec-remote-cluster-self-managed.md b/deploy-manage/remote-clusters/ec-remote-cluster-self-managed.md index 9079c44b7..0e1f93873 100644 --- a/deploy-manage/remote-clusters/ec-remote-cluster-self-managed.md +++ b/deploy-manage/remote-clusters/ec-remote-cluster-self-managed.md @@ -1,4 +1,9 @@ --- +applies_to: + deployment: + ess: ga + self: ga +navigation_title: With a self-managed cluster mapped_pages: - https://www.elastic.co/guide/en/cloud/current/ec-remote-cluster-self-managed.html --- @@ -12,15 +17,106 @@ This section explains how to configure a deployment to connect remotely to self- Before you start, consider the security model that you would prefer to use for authenticating remote connections between clusters, and follow the corresponding steps. API key -: For deployments based on {{stack}} version 8.10 or later, you can use an API key to authenticate and authorize cross-cluster operations to a remote cluster. This model offers administrators of both the local and the remote deployment fine-grained access controls. +: For deployments based on {{stack}} 8.14 or later, you can use an API key to authenticate and authorize cross-cluster operations to a remote cluster. This model offers administrators of both the local and the remote deployment fine-grained access controls. -TLS certificate +TLS certificate (deprecated in {{stack}} 9.0.0) : This model uses mutual TLS authentication for cross-cluster operations. User authentication is performed on the local cluster and a user’s role names are passed to the remote cluster. A superuser on the local deployment gains total read access to the remote deployment, so it is only suitable for deployments that are in the same security domain. :::::::{tab-set} -::::::{tab-item} TLS certificate -#### Specify the deployments trusted to be used as remote clusters [ec-trust-self-managed] +::::::{tab-item} API key +API key authentication enables a local cluster to authenticate itself with a remote cluster via a [cross-cluster API key](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-create-cross-cluster-api-key). The API key needs to be created by an administrator of the remote cluster. The local cluster is configured to provide this API key on each request to the remote cluster. The remote cluster verifies the API key and grants access, based on the API key’s privileges. + +All cross-cluster requests from the local cluster are bound by the API key’s privileges, regardless of local users associated with the requests. For example, if the API key only allows read access to `my-index` on the remote cluster, even a superuser from the local cluster is limited by this constraint. This mechanism enables the remote cluster’s administrator to have full control over who can access what data with cross-cluster search and/or cross-cluster replication. The remote cluster’s administrator can be confident that no access is possible beyond what is explicitly assigned to the API key. + +On the local cluster side, not every local user needs to access every piece of data allowed by the API key. 
An administrator of the local cluster can further configure additional permission constraints on local users so each user only gets access to the necessary remote data. Note that it is only possible to further reduce the permissions allowed by the API key for individual local users. It is impossible to increase the permissions to go beyond what is allowed by the API key.
+
+If you run into any issues, refer to [Troubleshooting](/troubleshoot/elasticsearch/remote-clusters.md).
+
+
+### Prerequisites and limitations [ec_prerequisites_and_limitations_4]
+
+* The local and remote deployments must be on {{stack}} 8.14 or later.
+* API key authentication can’t be used in combination with traffic filters.
+* Contrary to the certificate security model, the API key security model does not require that both local and remote clusters trust each other.
+
+
+### Create a cross-cluster API key on the remote deployment [ec_create_a_cross_cluster_api_key_on_the_remote_deployment_4]
+
+* On the deployment you will use as remote, use the [{{es}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-create-cross-cluster-api-key) or [{{kib}}](../api-keys/elasticsearch-api-keys.md) to create a cross-cluster API key. Configure it with access to the indices you want to use for {{ccs}} or {{ccr}}.
+* Copy the encoded key (`encoded` in the response) to a safe location. You will need it in the next step.
+
+
+### Configure the local deployment [ec_configure_the_local_deployment_2]
+
+The API key created previously will be used by the local deployment to authenticate with the corresponding set of permissions to the remote deployment. For that, you need to add the API key to the local deployment’s keystore.
+
+The steps to follow depend on whether the Certificate Authority (CA) of the remote environment’s {{es}} HTTPS server, proxy, or load balancing infrastructure is public or private.
+
+::::{dropdown} The CA is public
+1. Log in to the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body).
+2. On the home page, find your hosted deployment and select **Manage** to access it directly. Or, select **Hosted deployments** to go to the **Deployments** page to view all of your deployments.
+
+    On the **Deployments** page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list.
+
+3. From the deployment menu, select **Security**.
+4. Locate **Remote connections** and select **Add an API key**.
+
+    1. Add a setting:
+
+        * For the **Setting name**, enter the alias of your choice. You will use this alias to connect to the remote cluster later. It must be lowercase and only contain letters, numbers, dashes and underscores.
+        * For the **Secret**, paste the encoded cross-cluster API key.
+
+    2. Click **Add** to save the API key to the keystore.
+
+5. Restart the local deployment to reload the keystore with its new setting. To do that, go to the deployment’s main page (named after your deployment’s name), locate the **Actions** menu, and select **Restart {{es}}**.
+
+    ::::{note}
+    If the local deployment runs on version 8.14 or greater, you no longer need to perform this step because the keystore is reloaded automatically with the new API keys.
+    ::::
+
+
+If you later need to update the remote connection with different permissions, you can replace the API key as detailed in [Update the access level of a remote cluster connection relying on a cross-cluster API key](ec-edit-remove-trusted-environment.md#ec-edit-remove-trusted-environment-api-key).
+
+::::
+
+
+::::{dropdown} The CA is private
+1. Log in to the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body).
+2. On the home page, find your hosted deployment and select **Manage** to access it directly. Or, select **Hosted deployments** to go to the **Deployments** page to view all of your deployments.
+
+    On the **Deployments** page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list.
+
+3. Access the **Security** page of the deployment.
+4. Select **Remote Connections > Add trusted environment** and choose **Self-managed**. Then click **Next**.
+5. Select **API keys** as the authentication mechanism and click **Next**.
+6. Add the API key:
+
+    1. Fill both fields.
+
+        * For the **Setting name**, enter the alias of your choice. You will use this alias to connect to the remote cluster later. It must be lowercase and only contain letters, numbers, dashes and underscores.
+        * For the **Secret**, paste the encoded cross-cluster API key.
+
+    2. Click **Add** to save the API key to the keystore.
+    3. Repeat these steps for each API key you want to add. For example, if you want to use several clusters of the remote environment for CCR or CCS.
+
+7. Add the CA certificate of the remote self-managed environment.
+8. Provide a name for the trusted environment. That name will appear in the trust summary of your deployment’s **Security** page.
+9. Select **Create trust** to complete the configuration.
+10. Restart the local deployment to reload the keystore with its new setting. To do that, go to the deployment’s main page (named after your deployment’s name), locate the **Actions** menu, and select **Restart {{es}}**.
+ + ::::{note} + If the local deployment runs on version 8.14 or greater, you no longer need to perform this step because the keystore is reloaded automatically with the new API keys. + :::: + + +If you later need to update the remote connection with different permissions, you can replace the API key as detailed in [Update the access level of a remote cluster connection relying on a cross-cluster API key](ec-edit-remove-trusted-environment.md#ec-edit-remove-trusted-environment-api-key). + +:::: +:::::: + +::::::{tab-item} TLS certificate (deprecated) +### Specify the deployments trusted to be used as remote clusters [ec-trust-self-managed] A deployment can be configured to trust all or specific deployments in any environment: @@ -29,21 +125,21 @@ A deployment can be configured to trust all or specific deployments in any envir 3. Upload the public certificate for the Certificate Authority of the self-managed environment (the one used to sign all the cluster certificates). The certificate needs to be in PEM format and should not contain the private key. If you only have the key in p12 format, then you can create the necessary file like this: `openssl pkcs12 -in elastic-stack-ca.p12 -out newfile.crt.pem -clcerts -nokeys` 4. Select the clusters to trust. There are two options here depending on the subject name of the certificates presented by the nodes in your self managed cluster: - * Following the {{ecloud}} pattern. In {{ecloud}}, the certificates of all Elasticsearch nodes follow this convention: `CN = {{node_id}}.node.{{cluster_id}}.cluster.{{scope_id}}`. If you follow the same convention in your self-managed environment, then choose this option and you will be able to select all or specific clusters to trust. + * Following the {{ecloud}} pattern. In {{ecloud}}, the certificates of all {{es}} nodes follow this convention: `CN = {{node_id}}.node.{{cluster_id}}.cluster.{{scope_id}}`. If you follow the same convention in your self-managed environment, then choose this option and you will be able to select all or specific clusters to trust. * If your clusters don’t follow the previous convention for the certificates subject name of your nodes, you can still specify the node name of each of the nodes that should be trusted by this deployment. (Keep in mind that following this convention will simplify the management of this cluster since otherwise this configuration will need to be updated every time the topology of your self-managed cluster changes along with the trust restriction file. For this reason, it is recommended migrating your cluster certificates to follow the previous convention). ::::{note} - Trust management will not work properly in clusters without an `otherName` value specified, as is the case by default in an out-of-the-box [Elasticsearch installation](../deploy/self-managed/installing-elasticsearch.md). To have the Elasticsearch certutil generate new certificates with the `otherName` attribute, use the file input with the `cn` attribute as in the example below. + Trust management will not work properly in clusters without an `otherName` value specified, as is the case by default in an out-of-the-box [{{es}} installation](../deploy/self-managed/installing-elasticsearch.md). To have the {{es}} certutil generate new certificates with the `otherName` attribute, use the file input with the `cn` attribute as in the example below. :::: -5. . Provide a name for the trusted environment. That name will appear in the trust summary of your deployment’s Security page. +5. . 
Provide a name for the trusted environment. That name will appear in the trust summary of your deployment’s **Security** page. 6. Select **Create trust** to complete the configuration. 7. Configure the self-managed cluster to trust this deployment, so that both deployments are configured to trust each other: - * Download the Certificate Authority used to sign the certificates of your deployment nodes (it can be found in the Security page of your deployment) - * Trust this CA either using the [setting](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/configuration-reference/security-settings.md) `xpack.security.transport.ssl.certificate_authorities` in `elasticsearch.yml` or by [adding it to the trust store](../security/different-ca.md). + * Download the Certificate Authority used to sign the certificates of your deployment nodes (it can be found in the Security page of your deployment) + * Trust this CA either using the [setting](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/configuration-reference/security-settings.md) `xpack.security.transport.ssl.certificate_authorities` in `elasticsearch.yml` or by [adding it to the trust store](../security/different-ca.md). -8. Generate certificates with an `otherName` attribute using the Elasticsearch certutil. Create a file called `instances.yaml` with all the details of the nodes in your on-premise cluster like below. The `dns` and `ip` settings are optional, but `cn` is mandatory for use with the `trust_restrictions` path setting in the next step. Next, run `./bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12 -in instances.yaml` to create new certificates for all the nodes at once. You can then copy the resulting files into each node. +8. Generate certificates with an `otherName` attribute using the {{es}} certutil. Create a file called `instances.yaml` with all the details of the nodes in your on-premise cluster like below. The `dns` and `ip` settings are optional, but `cn` is mandatory for use with the `trust_restrictions` path setting in the next step. Next, run `./bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12 -in instances.yaml` to create new certificates for all the nodes at once. You can then copy the resulting files into each node. ```yaml instances: @@ -80,7 +176,7 @@ Generate new node certificates for an entire cluster using the file input mode o :::: -::::{dropdown} **Using the API** +::::{dropdown} Using the API You can update a deployment using the appropriate trust settings for the {{es}} payload. In order to trust a cluster whose nodes present certificates with the subject names: "CN = node1.example.com", "CN = node2.example.com" and "CN = node3.example.com" in a self-managed environment, you could update the trust settings with an additional direct trust relationship like this: @@ -113,152 +209,56 @@ In order to trust a cluster whose nodes present certificates with the subject na :::: :::::: - -::::::{tab-item} API key -API key authentication enables a local cluster to authenticate itself with a remote cluster via a [cross-cluster API key](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-create-cross-cluster-api-key). The API key needs to be created by an administrator of the remote cluster. The local cluster is configured to provide this API key on each request to the remote cluster. The remote cluster verifies the API key and grants access, based on the API key’s privileges. 
- -All cross-cluster requests from the local cluster are bound by the API key’s privileges, regardless of local users associated with the requests. For example, if the API key only allows read access to `my-index` on the remote cluster, even a superuser from the local cluster is limited by this constraint. This mechanism enables the remote cluster’s administrator to have full control over who can access what data with cross-cluster search and/or cross-cluster replication. The remote cluster’s administrator can be confident that no access is possible beyond what is explicitly assigned to the API key. - -On the local cluster side, not every local user needs to access every piece of data allowed by the API key. An administrator of the local cluster can further configure additional permission constraints on local users so each user only gets access to the necessary remote data. Note it is only possible to further reduce the permissions allowed by the API key for individual local users. It is impossible to increase the permissions to go beyond what is allowed by the API key. - -If you run into any issues, refer to [Troubleshooting](remote-clusters-troubleshooting.md). - - -#### Prerequisites and limitations [ec_prerequisites_and_limitations_4] - -* The local and remote deployments must be on version 8.12 or later. -* API key authentication can’t be used in combination with traffic filters. -* Contrary to the certificate security model, the API key security model does not require that both local and remote clusters trust each other. - - -#### Create a cross-cluster API key on the remote deployment [ec_create_a_cross_cluster_api_key_on_the_remote_deployment_4] - -* On the deployment you will use as remote, use the [{{es}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-create-cross-cluster-api-key) or [Kibana](../api-keys/elasticsearch-api-keys.md) to create a cross-cluster API key. Configure it with access to the indices you want to use for {{ccs}} or {{ccr}}. -* Copy the encoded key (`encoded` in the response) to a safe location. You will need it in the next step. - - -#### Configure the local deployment [ec_configure_the_local_deployment_2] - -The API key created previously will be used by the local deployment to authenticate with the corresponding set of permissions to the remote deployment. For that, you need to add the API key to the local deployment’s keystore. - -The steps to follow depend on whether the Certificate Authority (CA) of the remote environment’s Elasticsearch HTTPS server, proxy or, load balancing infrastructure is public or private. - -**The CA is public** - -::::{dropdown} -1. Log in to the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body). -2. Find your deployment on the home page in the Elasticsearch Service card and select **Manage** to access it directly. Or, select **Hosted deployments** to go to the deployments page to view all of your deployments. - - On the deployments page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. - -3. From the deployment menu, select **Security**. -4. Locate **Remote connections** and select **Add an API key**. - - 1. Add a setting: - - * For the **Setting name**, enter the the alias of your choice. You will use this alias to connect to the remote cluster later. 
It must be lowercase and only contain letters, numbers, dashes and underscores. - * For the **Secret**, paste the encoded cross-cluster API key. - - 2. Click **Add** to save the API key to the keystore. - -5. Restart the local deployment to reload the keystore with its new setting. To do that, go to the deployment’s main page (named after your deployment’s name), locate the **Actions** menu, and select **Restart Elasticsearch**.
- - ::::{note} - If the local deployment runs on version 8.13 or greater, you no longer need to perform this step because the keystore is reloaded automatically with the new API keys. - :::: - - -If you later need to update the remote connection with different permissions, you can replace the API key as detailed in [Update the access level of a remote cluster connection relying on a cross-cluster API key](ec-edit-remove-trusted-environment.md#ec-edit-remove-trusted-environment-api-key). - -:::: - - -**The CA is private** - -::::{dropdown} -1. Log in to the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body). -2. Find your deployment on the home page in the Elasticsearch Service card and select **Manage** to access it directly. Or, select **Hosted deployments** to go to the deployments page to view all of your deployments. - - On the deployments page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. - -3. Access the **Security** page of the deployment. -4. Select **Remote Connections > Add trusted environment** and choose **Self-managed**. Then click **Next**. -5. Select **API keys** as authentication mechanism and click **Next**. -6. Add a the API key: - - 1. Fill both fields. - - * For the **Setting name**, enter the the alias of your choice. You will use this alias to connect to the remote cluster later. It must be lowercase and only contain letters, numbers, dashes and underscores. - * For the **Secret**, paste the encoded cross-cluster API key. - - 2. Click **Add** to save the API key to the keystore. - 3. Repeat these steps for each API key you want to add. For example, if you want to use several clusters of the remote environment for CCR or CCS. - -7. Add the CA certificate of the remote self-managed environment. -8. Provide a name for the trusted environment. That name will appear in the trust summary of your deployment’s Security page. -9. Select **Create trust** to complete the configuration. -10. Restart the local deployment to reload the keystore with its new setting. To do that, go to the deployment’s main page (named after your deployment’s name), locate the **Actions** menu, and select **Restart Elasticsearch**.
- - ::::{note} - If the local deployment runs on version 8.13 or greater, you no longer need to perform this step because the keystore is reloaded automatically with the new API keys. - :::: - - -If you later need to update the remote connection with different permissions, you can replace the API key as detailed in [Update the access level of a remote cluster connection relying on a cross-cluster API key](ec-edit-remove-trusted-environment.md#ec-edit-remove-trusted-environment-api-key). - -:::: -:::::: - ::::::: You can now connect remotely to the trusted clusters. ## Connect to the remote cluster [ec_connect_to_the_remote_cluster_4] -On the local cluster, add the remote cluster using Kibana or the {{es}} API. +On the local cluster, add the remote cluster using {{kib}} or the {{es}} API. -### Using Kibana [ec_using_kibana_4] +### Using {{kib}} [ec_using_kibana_4] 1. Open the {{kib}} main menu, and select **Stack Management > Data > Remote Clusters > Add a remote cluster**. 2. Enable **Manually enter proxy address and server name**. 3. Fill in the following fields: * **Name**: This *cluster alias* is a unique identifier that represents the connection to the remote cluster and is used to distinguish between local and remote indices. - * **Proxy address**: This value can be found on the **Security** page of the Elasticsearch Service deployment you want to use as a remote.
+ * **Proxy address**: This value can be found on the **Security** page of the {{ech}} deployment you want to use as a remote.
- ::::{tip} - If you’re using API keys as security model, change the port into `9443`. - :::: + ::::{tip} + If you’re using API keys as security model, change the port into `9443`. + :::: - * **Server name**: This value can be found on the **Security** page of the Elasticsearch Service deployment you want to use as a remote. + * **Server name**: This value can be found on the **Security** page of the {{ech}} deployment you want to use as a remote. - :::{image} ../../images/cloud-ce-copy-remote-cluster-parameters.png - :alt: Remote Cluster Parameters in Deployment - :class: screenshot - ::: + :::{image} ../../images/cloud-ce-copy-remote-cluster-parameters.png + :alt: Remote Cluster Parameters in Deployment + :class: screenshot + ::: - ::::{note} - If you’re having issues establishing the connection and the remote cluster is part of an {{ece}} environment with a private certificate, make sure that the proxy address and server name match with the the certificate information. For more information, refer to [Administering endpoints in {{ece}}](/deploy-manage/deploy/cloud-enterprise/change-endpoint-urls.md). - :::: + ::::{note} + If you’re having issues establishing the connection and the remote cluster is part of an {{ece}} environment with a private certificate, make sure that the proxy address and server name match with the the certificate information. For more information, refer to [Administering endpoints in {{ece}}](/deploy-manage/deploy/cloud-enterprise/change-endpoint-urls.md). + :::: 4. Click **Next**. 5. Click **Add remote cluster** (you have already established trust in a previous step). -### Using the Elasticsearch API [ec_using_the_elasticsearch_api_4] +### Using the {{es}} API [ec_using_the_elasticsearch_api_4] To configure a deployment as a remote cluster, use the [cluster update settings API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-put-settings). Configure the following fields: * `mode`: `proxy` -* `proxy_address`: This value can be found on the **Security** page of the Elasticsearch Service deployment you want to use as a remote. Also, using the API, this value can be obtained from the {{es}} resource info, concatenating the field `metadata.endpoint` and port `9400` using a semicolon. +* `proxy_address`: This value can be found on the **Security** page of the {{ech}} deployment you want to use as a remote. Also, using the API, this value can be obtained from the {{es}} resource info, concatenating the field `metadata.endpoint` and port `9400` using a semicolon. -::::{tip} -If you’re using API keys as security model, change the port into `9443`. -:::: + ::::{tip} + If you’re using API keys as security model, change the port into `9443`. + :::: -* `server_name`: This value can be found on the **Security** page of the Elasticsearch Service deployment you want to use as a remote. Also, using the API, this can be obtained from the {{es}} resource info field `metadata.endpoint`. +* `server_name`: This value can be found on the **Security** page of the {{ech}} deployment you want to use as a remote. Also, using the API, this can be obtained from the {{es}} resource info field `metadata.endpoint`. This is an example of the API call to `_cluster/settings`: @@ -279,45 +279,10 @@ PUT /_cluster/settings } ``` -:::::{dropdown} **Stack Version above 6.7.0 and below 7.6.0** -::::{note} -This section only applies if you’re using TLS certificates as cross-cluster security model. 
-:::: - - -When the cluster to be configured as a remote is above 6.7.0 and below 7.6.0, the remote cluster must be configured using the [sniff mode](/deploy-manage/remote-clusters/remote-clusters-self-managed.md#sniff-mode) with the proxy field. For each remote cluster you need to pass the following fields: - -* **Proxy**: This value can be found on the **Security** page of the deployment you want to use as a remote under the name `Proxy Address`. Also, using the API, this can be obtained from the elasticsearch resource info, concatenating the fields `metadata.endpoint` and `metadata.ports.transport_passthrough` using a semicolon. -* **Seeds**: This field is an array that must contain only one value, which is the `server name` that can be found on the **Security** page of the {{es}} deployment you want to use as a remote concatenated with `:1`. Also, using the API, this can be obtained from the {{es}} resource info, concatenating the fields `metadata.endpoint` and `1` with a semicolon. -* **Mode**: sniff (or empty, since sniff is the default value) - -This is an example of the API call to `_cluster/settings`: - -```json -{ - "persistent": { - "cluster": { - "remote": { - "my-remote-cluster-1": { - "seeds": [ - "a542184a7a7d45b88b83f95392f450ab.192.168.44.10.ip.es.io:1" - ], - "proxy": "a542184a7a7d45b88b83f95392f450ab.192.168.44.10.ip.es.io:9400" - } - } - } - } -} -``` - -::::: - - - -### Using the Elasticsearch Service RESTful API [ec_using_the_elasticsearch_service_restful_api_4] +### Using the {{ecloud}} RESTful API [ec_using_the_elasticsearch_service_restful_api_4] ::::{note} -This section only applies if you’re using TLS certificates as cross-cluster security model and when both clusters belong to the same organization (for other scenarios,the Elasticsearch API should be used instead): +This section only applies if you’re using TLS certificates as cross-cluster security model and when both clusters belong to the same organization. For other scenarios, the [{{es}} API](#ec_using_the_elasticsearch_api_4) should be used instead. :::: @@ -341,7 +306,7 @@ curl -H 'Content-Type: application/json' -X PUT -H "Authorization: ApiKey $EC_AP `REF_ID_REMOTE` : The unique ID of the {{es}} resources inside your remote deployment (you can obtain these values through the API). -Note the following when using the Elasticsearch Service RESTful API: +Note the following when using the {{ecloud}} RESTful API: 1. A cluster alias must contain only letters, numbers, dashes (-), or underscores (_). 2. To learn about skipping disconnected clusters, refer to the [{{es}} documentation](/solutions/search/cross-cluster-search.md#skip-unavailable-clusters). @@ -354,11 +319,9 @@ curl -X GET -H "Authorization: ApiKey $EC_API_KEY" https://api.elastic-cloud.com ``` ::::{note} -The response will include just the remote clusters from the same organization in Elasticsearch Service. In order to obtain the whole list of remote clusters, use Kibana or the Elasticsearch API [Elasticsearch API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-remote-info) directly. +The response will include just the remote clusters from the same {{ecloud}} organization. In order to obtain the whole list of remote clusters, use {{kib}} or the [{{es}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-remote-info) directly. 
:::: - - ## Configure roles and users [ec_configure_roles_and_users_4] -To use a remote cluster for {{ccr}} or {{ccs}}, you need to create user roles with [remote indices privileges](../users-roles/cluster-or-deployment-auth/defining-roles.md#roles-remote-indices-priv) on the local cluster. Refer to [Configure roles and users](remote-clusters-api-key.md#remote-clusters-privileges-api-key). +To use a remote cluster for {{ccr}} or {{ccs}}, you need to create user roles with [remote indices privileges](../users-roles/cluster-or-deployment-auth/defining-roles.md#roles-remote-indices-priv) on the local cluster. Refer to [Configure roles and users](remote-clusters-api-key.md#remote-clusters-privileges-api-key). \ No newline at end of file diff --git a/deploy-manage/remote-clusters/ece-edit-remove-trusted-environment.md b/deploy-manage/remote-clusters/ece-edit-remove-trusted-environment.md index b49a529f3..5fb727793 100644 --- a/deploy-manage/remote-clusters/ece-edit-remove-trusted-environment.md +++ b/deploy-manage/remote-clusters/ece-edit-remove-trusted-environment.md @@ -1,4 +1,7 @@ --- +applies_to: + deployment: + ece: ga mapped_pages: - https://www.elastic.co/guide/en/cloud-enterprise/current/ece-edit-remove-trusted-environment.html --- @@ -12,7 +15,7 @@ From a deployment’s **Security** page, you can manage trusted environments tha * You want to remove or update the access level granted by a cross-cluster API key. -## Remove a trusted environment [ece_remove_a_trusted_environment] +## Remove a certificate-based trusted environment [ece_remove_a_trusted_environment] By removing a trusted environment, this deployment will no longer be able to establish remote connections using certificate trust to clusters of that environment. The remote environment will also no longer be able to connect to this deployment using certificate trust. @@ -25,11 +28,11 @@ With this method, you can only remove trusted environments relying exclusively o 2. In the list of trusted environments, locate the one you want to remove. 3. Remove it using the corresponding `delete` icon. - :::{image} ../../images/cloud-enterprise-delete-trust-environment.png - :alt: button for deleting a trusted environment - ::: + :::{image} ../../images/cloud-enterprise-delete-trust-environment.png + :alt: button for deleting a trusted environment + ::: -4. In Kibana, go to **Stack Management** > **Remote Clusters**. +4. In {{kib}}, go to **Stack Management** > **Remote Clusters**. 5. In the list of existing remote clusters, delete the ones corresponding to the trusted environment you removed earlier. @@ -39,14 +42,14 @@ With this method, you can only remove trusted environments relying exclusively o 2. In the list of trusted environments, locate the one you want to edit. 3. Open its details by selecting the `Edit` icon. - :::{image} ../../images/cloud-enterprise-edit-trust-environment.png - :alt: button for editing a trusted environment - ::: + :::{image} ../../images/cloud-enterprise-edit-trust-environment.png + :alt: button for editing a trusted environment + ::: 4. Edit the trust configuration for that environment: - * From the **Trust level** tab, you can add or remove trusted deployments. - * From the **Environment settings** tab, you can manage the certificates and the label of the environment. + * From the **Trust level** tab, you can add or remove trusted deployments. + * From the **Environment settings** tab, you can manage the certificates and the label of the environment. 5. Save your changes. 
@@ -56,28 +59,26 @@ With this method, you can only remove trusted environments relying exclusively o This section describes the steps to change the API key used for an existing remote connection. For example, if the previous key expired and you need to rotate it with a new one. ::::{note} -If you need to update the permissions granted by a cross-cluster API key for a remote connection, you only need to update the privileges granted by the API key directly in Kibana. +If you need to update the permissions granted by a cross-cluster API key for a remote connection, you only need to update the privileges granted by the API key directly in {{kib}}. :::: -1. On the deployment you will use as remote, use the [{{es}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-create-cross-cluster-api-key) or [Kibana](../api-keys/elasticsearch-api-keys.md) to create a cross-cluster API key with the appropriate permissions. Configure it with access to the indices you want to use for {{ccs}} or {{ccr}}. +1. On the deployment you will use as remote, use the [{{es}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-create-cross-cluster-api-key) or [{{kib}}](../api-keys/elasticsearch-api-keys.md) to create a cross-cluster API key with the appropriate permissions. Configure it with access to the indices you want to use for {{ccs}} or {{ccr}}. 2. Copy the encoded key (`encoded` in the response) to a safe location. You will need it in the next steps. 3. Go to the **Security** page of the local deployment and locate the **Remote connections** section. 4. Locate the API key currently used for connecting to the remote cluster, copy its current alias, and delete it. 5. Add the new API key by selecting **Add an API key**. - * For the **Setting name**, enter the same alias that was used for the previous key. + * For the **Setting name**, enter the same alias that was used for the previous key. - ::::{note} - If you use a different alias, you also need to re-create the remote cluster in Kibana with a **Name** that matches the new alias. - :::: + ::::{note} + If you use a different alias, you also need to re-create the remote cluster in {{kib}} with a **Name** that matches the new alias. + :::: - * For the **Secret**, paste the encoded cross-cluster API key. + * For the **Secret**, paste the encoded cross-cluster API key, then click **Add** to save the API key to the keystore. - 1. Click **Add** to save the API key to the keystore. +6. Restart the local deployment to reload the keystore with its new setting. To do that, go to the deployment’s main page (named after your deployment’s name), locate the **Actions** menu, and select **Restart {{es}}**.
-6. Restart the local deployment to reload the keystore with its new setting. To do that, go to the deployment’s main page (named after your deployment’s name), locate the **Actions** menu, and select **Restart Elasticsearch**.
- - ::::{note} - If the local deployment runs on version 8.13 or greater, you no longer need to perform this step because the keystore is reloaded automatically with the new API keys. - :::: + ::::{note} + If the local deployment runs on version 8.14 or greater, you no longer need to perform this step because the keystore is reloaded automatically with the new API keys. + :::: diff --git a/deploy-manage/remote-clusters/ece-enable-ccs-for-eck.md b/deploy-manage/remote-clusters/ece-enable-ccs-for-eck.md index 70cfae6ec..5fbc6611c 100644 --- a/deploy-manage/remote-clusters/ece-enable-ccs-for-eck.md +++ b/deploy-manage/remote-clusters/ece-enable-ccs-for-eck.md @@ -1,11 +1,16 @@ --- +applies_to: + deployment: + ece: ga + eck: ga +navigation_title: With {{eck}} mapped_pages: - https://www.elastic.co/guide/en/cloud-enterprise/current/ece-enable-ccs-for-eck.html --- -# Enabling CCS/R between Elastic Cloud Enterprise and ECK [ece-enable-ccs-for-eck] +# Remote clusters between {{ece}} and ECK [ece-enable-ccs-for-eck] -These steps describe how to configure remote clusters between an {{es}} cluster in Elastic Cloud Enterprise and an {{es}} cluster running within [Elastic Cloud on Kubernetes (ECK)](/deploy-manage/deploy/cloud-on-k8s.md). Once that’s done, you’ll be able to [run CCS queries from {{es}}](/solutions/search/cross-cluster-search.md) or [set up CCR](/deploy-manage/tools/cross-cluster-replication/set-up-cross-cluster-replication.md). +These steps describe how to configure remote clusters between an {{es}} cluster in {{ece}} and an {{es}} cluster running within [{{eck}} (ECK)](/deploy-manage/deploy/cloud-on-k8s.md). Once that’s done, you’ll be able to [run CCS queries from {{es}}](/solutions/search/cross-cluster-search.md) or [set up CCR](/deploy-manage/tools/cross-cluster-replication/set-up-cross-cluster-replication.md). ## Establish trust between two clusters [ece_establish_trust_between_two_clusters] @@ -13,7 +18,7 @@ These steps describe how to configure remote clusters between an {{es}} cluster The first step is to establish trust between the two clusters. -### Establish trust in the Elastic Cloud Enterprise cluster [ece_establish_trust_in_the_elastic_cloud_enterprise_cluster] +### Establish trust in the {{ece}} cluster [ece_establish_trust_in_the_elastic_cloud_enterprise_cluster] 1. Save the ECK CA certificate to a file. For a cluster named `quickstart`, run: @@ -22,7 +27,7 @@ The first step is to establish trust between the two clusters. ``` -1. Update the trust settings for the Elastic Cloud Enterprise deployment. Follow the steps provided in [Access clusters of a self-managed environment](ece-remote-cluster-self-managed.md), and specifically the first three steps in **Specify the deployments trusted to be used as remote clusters** using TLS certificate as security model. +1. Update the trust settings for the {{ece}} deployment. Follow the steps provided in [Access clusters of a self-managed environment](ece-remote-cluster-self-managed.md), and specifically the first three steps in **Specify the deployments trusted to be used as remote clusters** using TLS certificate as security model. * Use the certificate file saved in the first step. * Select the {{ecloud}} pattern and enter `default.es.local` for the `Scope ID`. @@ -32,7 +37,7 @@ The first step is to establish trust between the two clusters. ### Establish trust in the ECK cluster [ece_establish_trust_in_the_eck_cluster] -1. 
Upload the Elastic Cloud Enterprise certificate (that you downloaded in the last step of the previous section) as a Kubernetes secret. +1. Upload the {{ece}} certificate (that you downloaded in the last step of the previous section) as a Kubernetes secret. ```sh kubectl create secret generic ce-aws-cert --from-file= @@ -73,16 +78,16 @@ The first step is to establish trust between the two clusters. -## Setup CCS/R [ece_setup_ccsr] +## Set up CCS/R [ece_setup_ccsr] -Now that trust has been established, you can set up CCS/R from the ECK cluster to the Elastic Cloud Enterprise cluster or from the Elastic Cloud Enterprise cluster to the ECK cluster. +Now that trust has been established, you can set up CCS/R from the ECK cluster to the {{ece}} cluster or from the {{ece}} cluster to the ECK cluster. -### ECK Cluster to Elastic Cloud Enterprise cluster [ece_eck_cluster_to_elastic_cloud_enterprise_cluster] +### ECK Cluster to {{ece}} cluster [ece_eck_cluster_to_elastic_cloud_enterprise_cluster] Configure the ECK cluster [using certificate based authentication](ece-remote-cluster-self-managed.md). -### Elastic Cloud Enterprise cluster to ECK Cluster [ece_elastic_cloud_enterprise_cluster_to_eck_cluster] +### {{ece}} cluster to ECK Cluster [ece_elastic_cloud_enterprise_cluster_to_eck_cluster] Follow the steps outlined in the [ECK documentation](/deploy-manage/remote-clusters/eck-remote-clusters.md#k8s_configure_the_remote_cluster_connection_through_the_elasticsearch_rest_api). diff --git a/deploy-manage/remote-clusters/ece-enable-ccs.md b/deploy-manage/remote-clusters/ece-enable-ccs.md index a11864d98..eab268c98 100644 --- a/deploy-manage/remote-clusters/ece-enable-ccs.md +++ b/deploy-manage/remote-clusters/ece-enable-ccs.md @@ -1,63 +1,65 @@ --- +applies_to: + deployment: + ece: ga +navigation_title: Elastic Cloud Enterprise mapped_pages: - https://www.elastic.co/guide/en/cloud-enterprise/current/ece-enable-ccs.html --- -# Enable cross-cluster search and cross-cluster replication [ece-enable-ccs] +# Remote clusters with {{ece}} [ece-enable-ccs] -[Cross-cluster search (CCS)](/solutions/search/cross-cluster-search.md) allows you to configure multiple remote clusters across different locations and to enable federated search queries across all of the configured remote clusters. +You can configure an {{ece}} deployment to remotely access or (be accessed by) a cluster from: -[Cross-cluster replication (CCR)](/deploy-manage/tools/cross-cluster-replication.md) allows you to replicate indices across multiple remote clusters regardless of where they’re located. This provides tremendous benefit in scenarios of disaster recovery or data locality. - -These remote clusters could be: - -* Another {{es}} cluster of your ECE installation -* An {{es}} cluster in a remote ECE installation -* An {{es}} cluster hosted on {{ecloud}} -* Any other self-managed {{es}} cluster +* Another deployment of your ECE installation +* A deployment running on a different ECE installation +* An {{ech}} deployment +* A deployment running on an {{eck}} installation +* A self-managed installation ## Prerequisites [ece-ccs-ccr-prerequisites] To use CCS or CCR, your environment must meet the following criteria: -* Local and remote clusters must be in compatible versions. Review the [{{es}} version compatibility](/deploy-manage/remote-clusters/remote-clusters-cert.md#remote-clusters-prerequisites-cert) table. - - * System deployments cannot be used as remote clusters or have remote clusters. 
- +* The local and remote clusters must run on compatible versions of {{es}}. Review the version compatibility table. + + :::{include} _snippets/remote-cluster-certificate-compatibility.md + ::: + * Proxies must answer TCP requests on the port 9400. Check the [prerequisites for the ports that must permit outbound or inbound traffic](../deploy/cloud-enterprise/ece-networking-prereq.md). * Load balancers must pass-through TCP requests on port 9400. Check the [configuration details](../deploy/cloud-enterprise/ece-load-balancers.md). +* If your deployment was created before ECE version `2.9.0`, the Remote clusters page in {{kib}} must be enabled manually from the **Security** page of your deployment, by selecting **Enable CCR** under **Trust management**. + +::::{note} +System deployments cannot be used as remote clusters or have remote clusters. +:::: + +## Set up remote clusters with {{ece}} The steps, information, and authentication method required to configure CCS and CCR can vary depending on where the clusters you want to use as remote are hosted. -* Connect remotely to other clusters from your Elastic Cloud Enterprise deployments +* Connect remotely to other clusters from your {{ece}} deployments - * [Access other deployments of the same Elastic Cloud Enterprise environment](ece-remote-cluster-same-ece.md) - * [Access deployments of a different Elastic Cloud Enterprise environment](ece-remote-cluster-other-ece.md) - * [Access deployments of an {{ess}} environment](ece-remote-cluster-ece-ess.md) + * [Access other deployments of the same {{ece}} environment](ece-remote-cluster-same-ece.md) + * [Access deployments of a different {{ece}} environment](ece-remote-cluster-other-ece.md) + * [Access deployments of an {{ecloud}} environment](ece-remote-cluster-ece-ess.md) * [Access clusters of a self-managed environment](ece-remote-cluster-self-managed.md) * [Access deployments of an ECK environment](ece-enable-ccs-for-eck.md) -* Use clusters from your Elastic Cloud Enterprise deployments as remote +* Use clusters from your {{ece}} deployments as remote - * [From another deployment of the same Elastic Cloud Enterprise environment](ece-remote-cluster-same-ece.md) - * [From a deployment of another Elastic Cloud Enterprise environment](ece-remote-cluster-other-ece.md) - * [From an {{ess}} deployment](/deploy-manage/remote-clusters/ec-remote-cluster-ece.md) + * [From another deployment of the same {{ece}} environment](ece-remote-cluster-same-ece.md) + * [From a deployment of another {{ece}} environment](ece-remote-cluster-other-ece.md) + * [From an {{ech}} deployment](/deploy-manage/remote-clusters/ec-remote-cluster-ece.md) * [From a self-managed cluster](/deploy-manage/remote-clusters/remote-clusters-self-managed.md) - - - -## Enable CCR and the Remote Clusters UI in Kibana [ece-enable-ccr] - -If your deployment was created before ECE version `2.9.0`, CCR won’t be enabled by default and you won’t find the Remote Clusters UI in Kibana even though your deployment meets all the [criteria](#ece-ccs-ccr-prerequisites). - -To enable these features, go to the **Security** page of your deployment and under **Trust management** select **Enable CCR**. + * [From an ECK environment](ece-enable-ccs-for-eck.md) ## Remote clusters and traffic filtering [ece-ccs-ccr-traffic-filtering] ::::{note} -Traffic filtering isn’t supported for cross-cluster operations initiated from an {{ece}} environment to a remote {{ess}} deployment. 
+Traffic filtering isn’t supported for cross-cluster operations initiated from an {{ece}} environment to a remote {{ech}} deployment.
 ::::
 
 
@@ -66,8 +68,8 @@ For remote clusters configured using TLS certificate authentication, [traffic fi
 Traffic filtering for remote clusters supports 2 methods:
 
 * [Filtering by IP addresses and Classless Inter-Domain Routing (CIDR) masks](../security/ip-traffic-filtering.md)
-* Filtering by Organization or Elasticsearch cluster ID with a Remote cluster type filter. You can configure this type of filter from the **Platform** > **Security** page of your environment or using the [Elastic Cloud Enterprise API](https://www.elastic.co/docs/api/doc/cloud-enterprise) and apply it from each deployment’s **Security** page.
+* Filtering by Organization or {{es}} cluster ID with a Remote cluster type filter. You can configure this type of filter from the **Platform** > **Security** page of your environment or using the [{{ece}} API](https://www.elastic.co/docs/api/doc/cloud-enterprise) and apply it from each deployment’s **Security** page.
 
 ::::{note}
-When setting up traffic filters for a remote connection to an {{ece}} environment, you also need to upload the region’s TLS certificate of the local cluster to the {{ece}} environment’s proxy. You can find that region’s TLS certificate in the Security page of any deployment of the environment initiating the remote connection.
+When setting up traffic filters for a remote connection to an {{ece}} environment, you also need to upload the region’s TLS certificate of the local cluster to the {{ece}} environment’s proxy. You can find that region’s TLS certificate in the **Security** page of any deployment of the environment initiating the remote connection.
 ::::
diff --git a/deploy-manage/remote-clusters/ece-migrate-ccs.md b/deploy-manage/remote-clusters/ece-migrate-ccs.md
index 1b788c0a7..96a95775e 100644
--- a/deploy-manage/remote-clusters/ece-migrate-ccs.md
+++ b/deploy-manage/remote-clusters/ece-migrate-ccs.md
@@ -1,27 +1,31 @@
 ---
+applies_to:
+  deployment:
+    ece: ga
 mapped_pages:
   - https://www.elastic.co/guide/en/cloud-enterprise/current/ece-migrate-ccs.html
+navigation_title: "Migrate the CCS deployment template"
 ---
 
 # Migrate the cross-cluster search deployment template [ece-migrate-ccs]
 
-The cross-cluster search deployment template is now deprecated was removed in 3.0. You no longer need to use the dedicated cross-cluster template to search across deployments. Instead, you can now use any template to [configure remote clusters](ece-enable-ccs.md) and search across them. Existing deployments created using this template are not affected, but they are required to migrate to another template before upgrading to version 8.x.
+The cross-cluster search deployment template is deprecated and was removed in {{ece}} 3.0. You no longer need to use the dedicated cross-cluster template to search across deployments. Instead, you can now use any template to [configure remote clusters](ece-enable-ccs.md) and search across them. Existing deployments created using this template are not affected, but they must be migrated to another template before upgrading to {{stack}} 8.x.
 
 In order to migrate your existing CCS deployment using the CCS Deployment template to the new mechanism which supports CCR and cross-environment remote clusters you will need to migrate your data a new deployment [following these steps](#ece-migrate-ccs-deployment-using-snapshot). 
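+
+For reference, once a remote cluster has been configured, a {{ccs}} request targets it by prefixing the index name with the cluster alias. The following sketch is illustrative only: the alias `my-remote-cluster`, the `logs-*` pattern, and the endpoint placeholder are examples, not values taken from this guide.
+
+```sh
+# Search the "logs-*" indices of the remote cluster registered under the
+# alias "my-remote-cluster", from the local deployment (all values are placeholders).
+curl -u elastic \
+  "https://<local-deployment-es-endpoint>/my-remote-cluster:logs-*/_search?size=5"
+```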
## Use a snapshot to migrate deployments that use the cross-cluster search deployment template [ece-migrate-ccs-deployment-using-snapshot] -You can make this change in the user Cloud UI. The only drawback of this method is that it changes the URL used to access the {{es}} cluster and Kibana. +You can make this change in the user Cloud UI. The only drawback of this method is that it changes the URL used to access the {{es}} cluster and {{kib}}. 1. The first step for any approach is to remove the remote clusters from your deployment. You will need to add them back later. 2. From the deployment menu, open the **Snapshots** page and click **Take Snapshot now**. Wait for the snapshot to finish. 3. From the main **Deployments** page, click **Create deployment**. Next to **Settings** toggle on **Restore snapshot data**, and then select your deployment and the snapshot that you created. - :::{image} ../../images/cloud-enterprise-ce-create-from-snapshot-updated.png - :alt: Create a Deployment using a snapshot - :class: screenshot - ::: + :::{image} ../../images/cloud-enterprise-ce-create-from-snapshot-updated.png + :alt: Create a Deployment using a snapshot + :class: screenshot + ::: 4. Finally, [configure the remote clusters](/deploy-manage/remote-clusters/ece-remote-cluster-other-ece.md). diff --git a/deploy-manage/remote-clusters/ece-remote-cluster-ece-ess.md b/deploy-manage/remote-clusters/ece-remote-cluster-ece-ess.md index ad9d667bd..c400226c0 100644 --- a/deploy-manage/remote-clusters/ece-remote-cluster-ece-ess.md +++ b/deploy-manage/remote-clusters/ece-remote-cluster-ece-ess.md @@ -1,11 +1,16 @@ --- +applies_to: + deployment: + ece: ga + ess: ga +navigation_title: With {{ecloud}} mapped_pages: - https://www.elastic.co/guide/en/cloud-enterprise/current/ece-remote-cluster-ece-ess.html --- -# Access deployments of an Elasticsearch Service organization [ece-remote-cluster-ece-ess] +# Access deployments of an {{ecloud}} organization [ece-remote-cluster-ece-ess] -This section explains how to configure a deployment to connect remotely to clusters belonging to an {{ess}} organization. +This section explains how to configure a deployment to connect remotely to clusters belonging to an {{ecloud}} organization. ## Allow the remote connection [ece_allow_the_remote_connection_3] @@ -13,31 +18,81 @@ This section explains how to configure a deployment to connect remotely to clust Before you start, consider the security model that you would prefer to use for authenticating remote connections between clusters, and follow the corresponding steps. API key -: For deployments based on {{stack}} version 8.10 or later, you can use an API key to authenticate and authorize cross-cluster operations to a remote cluster. This model offers administrators of both the local and the remote deployment fine-grained access controls. +: For deployments based on {{stack}} 8.14 or later, you can use an API key to authenticate and authorize cross-cluster operations to a remote cluster. This model offers administrators of both the local and the remote deployment fine-grained access controls. -TLS certificate +TLS certificate (deprecated in {{stack}} 9.0.0) : This model uses mutual TLS authentication for cross-cluster operations. User authentication is performed on the local cluster and a user’s role names are passed to the remote cluster. A superuser on the local deployment gains total read access to the remote deployment, so it is only suitable for deployments that are in the same security domain. 
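+
+For reference, if you choose the API key model, creating the cross-cluster API key on the remote deployment looks roughly like the following sketch. The key name, the `logs-*` index pattern, and the endpoint placeholder are illustrative only; grant access to the indices you actually need.
+
+```sh
+# Run against the remote deployment. Keep the "encoded" value from the response:
+# it is what you will add to the local deployment's keystore.
+curl -X POST "https://<remote-deployment-es-endpoint>/_security/cross_cluster/api_key" \
+  -u elastic \
+  -H "Content-Type: application/json" \
+  -d '{
+    "name": "ccs-key-for-local-deployment",
+    "access": {
+      "search": [ { "names": [ "logs-*" ] } ]
+    }
+  }'
+```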
:::::::{tab-set} -::::::{tab-item} TLS certificate +::::::{tab-item} API key +API key authentication enables a local cluster to authenticate itself with a remote cluster via a [cross-cluster API key](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-create-cross-cluster-api-key). The API key needs to be created by an administrator of the remote cluster. The local cluster is configured to provide this API key on each request to the remote cluster. The remote cluster verifies the API key and grants access, based on the API key’s privileges. + +All cross-cluster requests from the local cluster are bound by the API key’s privileges, regardless of local users associated with the requests. For example, if the API key only allows read access to `my-index` on the remote cluster, even a superuser from the local cluster is limited by this constraint. This mechanism enables the remote cluster’s administrator to have full control over who can access what data with cross-cluster search and/or cross-cluster replication. The remote cluster’s administrator can be confident that no access is possible beyond what is explicitly assigned to the API key. + +On the local cluster side, not every local user needs to access every piece of data allowed by the API key. An administrator of the local cluster can further configure additional permission constraints on local users so each user only gets access to the necessary remote data. Note it is only possible to further reduce the permissions allowed by the API key for individual local users. It is impossible to increase the permissions to go beyond what is allowed by the API key. + +If you run into any issues, refer to [Troubleshooting](/troubleshoot/elasticsearch/remote-clusters.md). + + +### Prerequisites and limitations [ece_prerequisites_and_limitations_3] + +* The local and remote deployments must be on {{stack}} 8.14 or later. + + +### Create a cross-cluster API key on the remote deployment [ece_create_a_cross_cluster_api_key_on_the_remote_deployment_3] + +* On the deployment you will use as remote, use the [{{es}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-create-cross-cluster-api-key) or [{{kib}}](../api-keys/elasticsearch-api-keys.md) to create a cross-cluster API key. Configure it with access to the indices you want to use for {{ccs}} or {{ccr}}. +* Copy the encoded key (`encoded` in the response) to a safe location. You will need it in the next step. + + +### Add the cross-cluster API key to the keystore of the local deployment [ece_add_the_cross_cluster_api_key_to_the_keystore_of_the_local_deployment_2] + +The API key created previously will be used by the local deployment to authenticate with the corresponding set of permissions to the remote deployment. For that, you need to add the API key to the local deployment’s keystore. + +1. [Log into the Cloud UI](../deploy/cloud-enterprise/log-into-cloud-ui.md). +2. On the **Deployments** page, select your deployment. + + Narrow the list by name, ID, or choose from several other filters. To further define the list, use a combination of filters. + +3. From the deployment menu, select **Security**. +4. Locate **Remote connections** and select **Add an API key**. + + 1. Fill both fields. + + * For the **Setting name**, enter the the alias of your choice. You will use this alias to connect to the remote cluster later. It must be lowercase and only contain letters, numbers, dashes and underscores. 
+ * For the **Secret**, paste the encoded cross-cluster API key. + + 2. Click **Add** to save the API key to the keystore. + +5. Restart the local deployment to reload the keystore with its new setting. To do that, go to the deployment’s main page (named after your deployment’s name), locate the **Actions** menu, and select **Restart {{es}}**.
+ + ::::{note} + If the local deployment runs on version 8.14 or greater, you no longer need to perform this step because the keystore is reloaded automatically with the new API keys. + :::: + + +If you later need to update the remote connection with different permissions, you can replace the API key as detailed in [Update the access level of a remote cluster connection relying on a cross-cluster API key](ece-edit-remove-trusted-environment.md#ece-edit-remove-trusted-environment-api-key). +:::::: + +::::::{tab-item} TLS certificate (deprecated) ### Configuring trust with clusters in {{ecloud}} [ece-trust-ec] A deployment can be configured to trust all or specific deployments from an organization in [{{ecloud}}](https://www.elastic.co/guide/en/cloud/current): 1. From the **Security** menu, select **Remote Connections > Add trusted environment** and select **{{ecloud}} Organization**. 2. Enter the organization ID (which can be found near the organization name). -3. Upload the Certificate Authorities of the deployments you want to trust. These can be downloaded from the Security page of each deployment (not only the current CA, but also future certificates in case they are expiring soon since they are periodically rotated). Deployments from the same region are signed by the same CA, so you will only need to upload one for each region. +3. Upload the Certificate Authorities of the deployments you want to trust. These can be downloaded from the **Security** page of each deployment (not only the current CA, but also future certificates in case they are expiring soon since they are periodically rotated). Deployments from the same region are signed by the same CA, so you will only need to upload one for each region. 4. Choose one of following options to configure the level of trust with the Organization: * All deployments - This deployment trusts all deployments in the organization in the regions whose certificate authorities have been uploaded, including new deployments when they are created. - * Specific deployments - Specify which of the existing deployments you want to trust from this organization. The full Elasticsearch cluster ID must be entered for each remote cluster. The Elasticsearch `Cluster ID` can be found in the deployment overview page under **Applications**. + * Specific deployments - Specify which of the existing deployments you want to trust from this organization. The full {{es}} cluster ID must be entered for each remote cluster. The {{es}} `Cluster ID` can be found in the deployment overview page under **Applications**. 5. Configure the deployment in {{ecloud}} to [trust this deployment](/deploy-manage/remote-clusters/ec-remote-cluster-ece.md#ec-trust-ece), so that both deployments are configured to trust each other. Note that the organization ID and cluster IDs must be entered fully and correctly. For security reasons, no verification of the IDs is possible. If cross-environment trust does not appear to be working, double-checking the IDs is a good place to start. -::::{dropdown} **Using the API** +::::{dropdown} Using the API You can update a deployment using the appropriate trust settings for the {{es}} payload. 
In order to trust a deployment with cluster id `cf659f7fe6164d9691b284ae36811be1` (NOTE: use the {{es}} cluster ID, not the deployment ID) in an organization with organization ID `803289842`, you need to update the trust settings with an additional direct trust relationship like this: @@ -73,89 +128,38 @@ In order to trust a deployment with cluster id `cf659f7fe6164d9691b284ae36811be1 :::: :::::: - -::::::{tab-item} API key -API key authentication enables a local cluster to authenticate itself with a remote cluster via a [cross-cluster API key](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-create-cross-cluster-api-key). The API key needs to be created by an administrator of the remote cluster. The local cluster is configured to provide this API key on each request to the remote cluster. The remote cluster verifies the API key and grants access, based on the API key’s privileges. - -All cross-cluster requests from the local cluster are bound by the API key’s privileges, regardless of local users associated with the requests. For example, if the API key only allows read access to `my-index` on the remote cluster, even a superuser from the local cluster is limited by this constraint. This mechanism enables the remote cluster’s administrator to have full control over who can access what data with cross-cluster search and/or cross-cluster replication. The remote cluster’s administrator can be confident that no access is possible beyond what is explicitly assigned to the API key. - -On the local cluster side, not every local user needs to access every piece of data allowed by the API key. An administrator of the local cluster can further configure additional permission constraints on local users so each user only gets access to the necessary remote data. Note it is only possible to further reduce the permissions allowed by the API key for individual local users. It is impossible to increase the permissions to go beyond what is allowed by the API key. - -If you run into any issues, refer to [Troubleshooting](remote-clusters-troubleshooting.md). - - -### Prerequisites and limitations [ece_prerequisites_and_limitations_3] - -* The local and remote deployments must be on version 8.12 or later. - - -### Create a cross-cluster API key on the remote deployment [ece_create_a_cross_cluster_api_key_on_the_remote_deployment_3] - -* On the deployment you will use as remote, use the [{{es}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-create-cross-cluster-api-key) or [Kibana](../api-keys/elasticsearch-api-keys.md) to create a cross-cluster API key. Configure it with access to the indices you want to use for {{ccs}} or {{ccr}}. -* Copy the encoded key (`encoded` in the response) to a safe location. You will need it in the next step. - - -### Add the cross-cluster API key to the keystore of the local deployment [ece_add_the_cross_cluster_api_key_to_the_keystore_of_the_local_deployment_2] - -The API key created previously will be used by the local deployment to authenticate with the corresponding set of permissions to the remote deployment. For that, you need to add the API key to the local deployment’s keystore. - -1. [Log into the Cloud UI](../deploy/cloud-enterprise/log-into-cloud-ui.md). -2. On the deployments page, select your deployment. - - Narrow the list by name, ID, or choose from several other filters. To further define the list, use a combination of filters. - -3. From the deployment menu, select **Security**. -4. 
Locate **Remote connections** and select **Add an API key**. - - 1. Fill both fields. - - * For the **Setting name**, enter the the alias of your choice. You will use this alias to connect to the remote cluster later. It must be lowercase and only contain letters, numbers, dashes and underscores. - * For the **Secret**, paste the encoded cross-cluster API key. - - 2. Click **Add** to save the API key to the keystore. - -5. Restart the local deployment to reload the keystore with its new setting. To do that, go to the deployment’s main page (named after your deployment’s name), locate the **Actions** menu, and select **Restart Elasticsearch**.
- - ::::{note} - If the local deployment runs on version 8.13 or greater, you no longer need to perform this step because the keystore is reloaded automatically with the new API keys. - :::: - - -If you later need to update the remote connection with different permissions, you can replace the API key as detailed in [Update the access level of a remote cluster connection relying on a cross-cluster API key](ece-edit-remove-trusted-environment.md#ece-edit-remove-trusted-environment-api-key). -:::::: - ::::::: You can now connect remotely to the trusted clusters. ## Connect to the remote cluster [ece_connect_to_the_remote_cluster_3] -On the local cluster, add the remote cluster using Kibana or the {{es}} API. +On the local cluster, add the remote cluster using {{kib}} or the {{es}} API. -### Using Kibana [ece_using_kibana_3] +### Using {{kib}} [ece_using_kibana_3] 1. Open the {{kib}} main menu, and select **Stack Management > Data > Remote Clusters > Add a remote cluster**. 2. Enable **Manually enter proxy address and server name**. 3. Fill in the following fields: * **Name**: This *cluster alias* is a unique identifier that represents the connection to the remote cluster and is used to distinguish between local and remote indices. - * **Proxy address**: This value can be found on the **Security** page of the Elastic Cloud Enterprise deployment you want to use as a remote.
+ * **Proxy address**: This value can be found on the **Security** page of the {{ece}} deployment you want to use as a remote.

-    ::::{tip}
-    If you’re using API keys as security model, change the port into `9443`.
-    ::::
+    ::::{tip}
+    If you’re using API keys as the security model, change the port to `9443`.
+    ::::

-    * **Server name**: This value can be found on the **Security** page of the Elastic Cloud Enterprise deployment you want to use as a remote.
+    * **Server name**: This value can be found on the **Security** page of the {{ece}} deployment you want to use as a remote.

-    :::{image} ../../images/cloud-enterprise-ce-copy-remote-cluster-parameters.png
-    :alt: Remote Cluster Parameters in Deployment
-    :class: screenshot
-    :::
+    :::{image} ../../images/cloud-enterprise-ce-copy-remote-cluster-parameters.png
+    :alt: Remote Cluster Parameters in Deployment
+    :class: screenshot
+    :::

-    ::::{note}
-    If you’re having issues establishing the connection and the remote cluster is part of an {{ece}} environment with a private certificate, make sure that the proxy address and server name match with the the certificate information. For more information, refer to [Administering endpoints in {{ece}}](/deploy-manage/deploy/cloud-enterprise/change-endpoint-urls.md).
-    ::::
+    ::::{note}
+    If you’re having issues establishing the connection and the remote cluster is part of an {{ece}} environment with a private certificate, make sure that the proxy address and server name match the certificate information. For more information, refer to [Administering endpoints in {{ece}}](/deploy-manage/deploy/cloud-enterprise/change-endpoint-urls.md).
+    ::::

4. Click **Next**.
5. Click **Add remote cluster** (you have already established trust in a previous step).
@@ -166,19 +170,19 @@ This configuration of remote clusters uses the [Proxy mode](/deploy-manage/remot


-### Using the Elasticsearch API [ece_using_the_elasticsearch_api_3]
+### Using the {{es}} API [ece_using_the_elasticsearch_api_3]

To configure a deployment as a remote cluster, use the [cluster update settings API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-put-settings). Configure the following fields:

* `mode`: `proxy`
-* `proxy_address`: This value can be found on the **Security** page of the Elastic Cloud Enterprise deployment you want to use as a remote. Also, using the API, this value can be obtained from the {{es}} resource info, concatenating the field `metadata.endpoint` and port `9300` using a semicolon.
+* `proxy_address`: This value can be found on the **Security** page of the {{ece}} deployment you want to use as a remote. Also, using the API, this value can be obtained from the {{es}} resource info, concatenating the field `metadata.endpoint` and port `9300` using a colon.

-::::{tip}
-If you’re using API keys as security model, change the port into `9443`.
-::::
+    ::::{tip}
+    If you’re using API keys as the security model, change the port to `9443`.
+    ::::

-* `server_name`: This value can be found on the **Security** page of the Elastic Cloud Enterprise deployment you want to use as a remote. Also, using the API, this can be obtained from the {{es}} resource info field `metadata.endpoint`.
+* `server_name`: This value can be found on the **Security** page of the {{ece}} deployment you want to use as a remote. Also, using the API, this can be obtained from the {{es}} resource info field `metadata.endpoint`.
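+
+For illustration, here is a minimal sketch of how these two values are assembled from `metadata.endpoint`; the endpoint and variable names below are made-up placeholders, not values from your deployment:
+
+```sh
+# Sketch: derive the remote connection values from `metadata.endpoint`.
+ES_ENDPOINT="a542184a7a7d45b88b83f95392f450ab.192.168.44.10.ip.es.io"  # placeholder endpoint
+PROXY_ADDRESS="${ES_ENDPOINT}:9300"  # with the API key model, use port 9443 instead
+SERVER_NAME="${ES_ENDPOINT}"
+```
+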
This is an example of the API call to `_cluster/settings`: @@ -199,45 +203,10 @@ PUT /_cluster/settings } ``` -:::::{dropdown} **Stack Version above 6.7.0 and below 7.6.0** -::::{note} -This section only applies if you’re using TLS certificates as cross-cluster security model. -:::: - - -When the cluster to be configured as a remote is above 6.7.0 and below 7.6.0, the remote cluster must be configured using the [sniff mode](/deploy-manage/remote-clusters/remote-clusters-self-managed.md#sniff-mode) with the proxy field. For each remote cluster you need to pass the following fields: - -* **Proxy**: This value can be found on the **Security** page of the deployment you want to use as a remote under the name `Proxy Address`. Also, using the API, this can be obtained from the elasticsearch resource info, concatenating the fields `metadata.endpoint` and `metadata.ports.transport_passthrough` using a semicolon. -* **Seeds**: This field is an array that must contain only one value, which is the `server name` that can be found on the **Security** page of the ECE deployment you want to use as a remote concatenated with `:1`. Also, using the API, this can be obtained from the {{es}} resource info, concatenating the fields `metadata.endpoint` and `1` with a semicolon. -* **Mode**: sniff (or empty, since sniff is the default value) - -This is an example of the API call to `_cluster/settings`: - -```json -{ - "persistent": { - "cluster": { - "remote": { - "my-remote-cluster-1": { - "seeds": [ - "a542184a7a7d45b88b83f95392f450ab.192.168.44.10.ip.es.io:1" - ], - "proxy": "a542184a7a7d45b88b83f95392f450ab.192.168.44.10.ip.es.io:9400" - } - } - } - } -} -``` - -::::: - - - -### Using the Elastic Cloud Enterprise RESTful API [ece_using_the_elastic_cloud_enterprise_restful_api_3] +### Using the {{ece}} RESTful API [ece_using_the_elastic_cloud_enterprise_restful_api_3] ::::{note} -This section only applies if you’re using TLS certificates as cross-cluster security model and when both clusters belong to the same ECE environment (for other scenarios, the {{es}} API should be used instead): +This section only applies if you’re using TLS certificates as cross-cluster security model and when both clusters belong to the same ECE environment. For other scenarios, the [{{es}} API](#ece_using_the_elasticsearch_api_3) should be used instead. :::: @@ -261,7 +230,7 @@ curl -k -H 'Content-Type: application/json' -X PUT -H "Authorization: ApiKey $EC `REF_ID_REMOTE` : The unique ID of the {{es}} resources inside your remote deployment (you can obtain these values through the API). -Note the following when using the Elastic Cloud Enterprise RESTful API: +Note the following when using the {{ece}} RESTful API: 1. A cluster alias must contain only letters, numbers, dashes (-), or underscores (_). 2. To learn about skipping disconnected clusters, refer to the [{{es}} documentation](/solutions/search/cross-cluster-search.md#skip-unavailable-clusters). @@ -274,11 +243,9 @@ curl -k -X GET -H "Authorization: ApiKey $ECE_API_KEY" https://COORDINATOR_HOST: ``` ::::{note} -The response includes just the remote clusters from the same ECE environment. In order to obtain the whole list of remote clusters, use Kibana or the {{es}} API [{{es}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-remote-info) directly. +The response includes just the remote clusters from the same ECE environment. 
In order to obtain the whole list of remote clusters, use {{kib}} or the [{{es}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-remote-info) directly. :::: - - ## Configure roles and users [ece_configure_roles_and_users_3] -To use a remote cluster for {{ccr}} or {{ccs}}, you need to create user roles with [remote indices privileges](../users-roles/cluster-or-deployment-auth/defining-roles.md#roles-remote-indices-priv) on the local cluster. Refer to [Configure roles and users](remote-clusters-api-key.md#remote-clusters-privileges-api-key). +To use a remote cluster for {{ccr}} or {{ccs}}, you need to create user roles with [remote indices privileges](../users-roles/cluster-or-deployment-auth/defining-roles.md#roles-remote-indices-priv) on the local cluster. Refer to [Configure roles and users](remote-clusters-api-key.md#remote-clusters-privileges-api-key). \ No newline at end of file diff --git a/deploy-manage/remote-clusters/ece-remote-cluster-other-ece.md b/deploy-manage/remote-clusters/ece-remote-cluster-other-ece.md index 9dcd76e4b..9cfdc15a7 100644 --- a/deploy-manage/remote-clusters/ece-remote-cluster-other-ece.md +++ b/deploy-manage/remote-clusters/ece-remote-cluster-other-ece.md @@ -1,11 +1,15 @@ --- +applies_to: + deployment: + ece: ga +navigation_title: With a different ECE environment mapped_pages: - https://www.elastic.co/guide/en/cloud-enterprise/current/ece-remote-cluster-other-ece.html --- -# Access deployments of another Elastic Cloud Enterprise environment [ece-remote-cluster-other-ece] +# Access deployments of another {{ece}} environment [ece-remote-cluster-other-ece] -This section explains how to configure a deployment to connect remotely to clusters belonging to a different Elastic Cloud Enterprise environment. +This section explains how to configure a deployment to connect remotely to clusters belonging to a different {{ece}} environment. ## Allow the remote connection [ece_allow_the_remote_connection_2] @@ -13,92 +17,13 @@ This section explains how to configure a deployment to connect remotely to clust Before you start, consider the security model that you would prefer to use for authenticating remote connections between clusters, and follow the corresponding steps. API key -: For deployments based on {{stack}} version 8.10 or later, you can use an API key to authenticate and authorize cross-cluster operations to a remote cluster. This model offers administrators of both the local and the remote deployment fine-grained access controls. +: For deployments based on {{stack}} 8.14 or later, you can use an API key to authenticate and authorize cross-cluster operations to a remote cluster. This model offers administrators of both the local and the remote deployment fine-grained access controls. -TLS certificate +TLS certificate (deprecated in {{stack}} 9.0.0) : This model uses mutual TLS authentication for cross-cluster operations. User authentication is performed on the local cluster and a user’s role names are passed to the remote cluster. A superuser on the local deployment gains total read access to the remote deployment, so it is only suitable for deployments that are in the same security domain. :::::::{tab-set} -::::::{tab-item} TLS certificate -### Configuring platform level trust [ece-trust-remote-environments] - -In order to configure remote clusters in other ECE environments, you first need to establish a bi-directional trust relationship between both ECE environment’s platform: - -1. 
Download the certificate and copy the environment ID from your first ECE environment under **Platform** > **Trust Management** > **Trust parameters**. -2. Create a new trust relationship in the other ECE environment under **Platform** > **Trust Management** > **Trusted environments** using the certificate and environment ID from the previous step. -3. Download the certificate and copy the environment ID from your second ECE environment and create a new trust relationship with those in the first ECE environment. - -Now, deployments in those environments will be able to configure trust with deployments in the other environment. Trust must always be bi-directional (local cluster must trust remote cluster and vice versa) and it can be configured in each deployment’s security settings. - - -### Configuring trust with clusters of an {{ece}} environment [ece-trust-ece] - -1. Access the **Security** page of the deployment you want to use for cross-cluster operations. -2. Select **Remote Connections > Add trusted environment** and choose **{{ece}}**. Then click **Next**. -3. Select **Certificates** as authentication mechanism and click **Next**. -4. From the dropdown, select one of the environments configured in [Configuring platform level trust](#ece-trust-remote-environments). -5. Choose one of following options to configure the level of trust with the ECE environment: - - * All deployments - This deployment trusts all deployments in the ECE environment, including new deployments when they are created. - * Specific deployments - Specify which of the existing deployments you want to trust in the ECE environment. The full Elasticsearch cluster ID must be entered for each remote cluster. The Elasticsearch `Cluster ID` can be found in the deployment overview page under **Applications**. - -6. Select **Create trust** to complete the configuration. -7. Configure the corresponding deployments of the ECE environment to [trust this deployment](/deploy-manage/remote-clusters/ece-enable-ccs.md). You will only be able to connect 2 deployments successfully when both of them trust each other. - -Note that the environment ID and cluster IDs must be entered fully and correctly. For security reasons, no verification of the IDs is possible. If cross-environment trust does not appear to be working, double-checking the IDs is a good place to start. - -::::{dropdown} **Using the API** -You can update a deployment using the appropriate trust settings for the {{es}} payload. - -Establishing the trust between the two {{ece}} environments can be done using the [trust relationships API](https://www.elastic.co/docs/api/doc/cloud-enterprise/group/endpoint-platformconfigurationtrustrelationships). 
For example, the list of trusted environments can be obtained calling the [list trust relationships endpoint](https://www.elastic.co/docs/api/doc/cloud-enterprise/group/endpoint-platformconfigurationtrustrelationships): - -```sh -curl -k -X GET -H "Authorization: ApiKey $ECE_API_KEY" https://COORDINATOR_HOST:12443//api/v1/regions/ece-region/platform/configuration/trust-relationships?include_certificate=false -``` - -For each remote ECE environment, it will return something like this: - -```json -{ - "id":"83a7b03f2a4343fe99f09bd27ca3d9ec", - "name":"ECE2", - "trust_by_default":false, - "account_ids":[ - "651598b101e54ccab1bfdcd8b6e3b8be" - ], - "local":false, - "last_modified":"2022-01-9T14:33:20.465Z" -} -``` - -In order to trust a deployment with cluster id `cf659f7fe6164d9691b284ae36811be1` (NOTE: use the {{es}} cluster ID, not the deployment ID) in this environment named `ECE2`, you need to update the trust settings with an external trust relationship like this: - -```json -{ - "trust":{ - "accounts":[ - { - "account_id":"ec38dd0aa45f4a69909ca5c81c27138a", - "trust_all":true - } - ], - "external":[ - { - "trust_relationship_id":"83a7b03f2a4343fe99f09bd27ca3d9ec", - "trust_all":false, - "trust_allowlist":[ - "cf659f7fe6164d9691b284ae36811be1" - ] - } - ] - } -} -``` - -:::: -:::::: - ::::::{tab-item} API key API key authentication enables a local cluster to authenticate itself with a remote cluster via a [cross-cluster API key](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-create-cross-cluster-api-key). The API key needs to be created by an administrator of the remote cluster. The local cluster is configured to provide this API key on each request to the remote cluster. The remote cluster verifies the API key and grants access, based on the API key’s privileges. @@ -106,17 +31,16 @@ All cross-cluster requests from the local cluster are bound by the API key’s p On the local cluster side, not every local user needs to access every piece of data allowed by the API key. An administrator of the local cluster can further configure additional permission constraints on local users so each user only gets access to the necessary remote data. Note it is only possible to further reduce the permissions allowed by the API key for individual local users. It is impossible to increase the permissions to go beyond what is allowed by the API key. -If you run into any issues, refer to [Troubleshooting](remote-clusters-troubleshooting.md). +If you run into any issues, refer to [Troubleshooting](/troubleshoot/elasticsearch/remote-clusters.md). ### Prerequisites and limitations [ece_prerequisites_and_limitations_2] -* The local and remote deployments must be on version 8.12 or later. - +* The local and remote deployments must be on {{stack}} 8.14 or later. ### Create a cross-cluster API key on the remote deployment [ece_create_a_cross_cluster_api_key_on_the_remote_deployment_2] -* On the deployment you will use as remote, use the [{{es}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-create-cross-cluster-api-key) or [Kibana](../api-keys/elasticsearch-api-keys.md) to create a cross-cluster API key. Configure it with access to the indices you want to use for {{ccs}} or {{ccr}}. +* On the deployment you will use as remote, use the [{{es}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-create-cross-cluster-api-key) or [{{kib}}](../api-keys/elasticsearch-api-keys.md) to create a cross-cluster API key. 
Configure it with access to the indices you want to use for {{ccs}} or {{ccr}}. * Copy the encoded key (`encoded` in the response) to a safe location. You will need it in the next step. @@ -126,11 +50,9 @@ The API key created previously will be used by the local deployment to authentic The steps to follow depend on whether the Certificate Authority (CA) of the remote ECE environment’s proxy or load balancing infrastructure is public or private. -**The CA is public** - -::::{dropdown} +::::{dropdown} The CA is public 1. [Log into the Cloud UI](../deploy/cloud-enterprise/log-into-cloud-ui.md). -2. On the deployments page, select your deployment. +2. On the **Deployments** page, select your deployment. Narrow the list by name, ID, or choose from several other filters. To further define the list, use a combination of filters. @@ -144,10 +66,10 @@ The steps to follow depend on whether the Certificate Authority (CA) of the remo 2. Click **Add** to save the API key to the keystore. -5. Restart the local deployment to reload the keystore with its new setting. To do that, go to the deployment’s main page (named after your deployment’s name), locate the **Actions** menu, and select **Restart Elasticsearch**.
+5. Restart the local deployment to reload the keystore with its new setting. To do that, go to the deployment’s main page (named after your deployment’s name), locate the **Actions** menu, and select **Restart {{es}}**.
::::{note} - If the local deployment runs on version 8.13 or greater, you no longer need to perform this step because the keystore is reloaded automatically with the new API keys. + If the local deployment runs on version 8.14 or greater, you no longer need to perform this step because the keystore is reloaded automatically with the new API keys. :::: @@ -156,11 +78,9 @@ If you later need to update the remote connection with different permissions, yo :::: -**The CA is private** - -::::{dropdown} +::::{dropdown} The CA is private 1. [Log into the Cloud UI](../deploy/cloud-enterprise/log-into-cloud-ui.md). -2. On the deployments page, select your deployment. +2. On the **Deployments** page, select your deployment. Narrow the list by name, ID, or choose from several other filters. To further define the list, use a combination of filters. @@ -188,12 +108,12 @@ If you later need to update the remote connection with different permissions, yo :alt: Certificate to copy from the chain ::: -8. Provide a name for the trusted environment. That name will appear in the trust summary of your deployment’s Security page. +8. Provide a name for the trusted environment. That name will appear in the trust summary of your deployment’s **Security** page. 9. Select **Create trust** to complete the configuration. -10. Restart the local deployment to reload the keystore with its new setting. To do that, go to the deployment’s main page (named after your deployment’s name), locate the **Actions** menu, and select **Restart Elasticsearch**.
+10. Restart the local deployment to reload the keystore with its new setting. To do that, go to the deployment’s main page (named after your deployment’s name), locate the **Actions** menu, and select **Restart {{es}}**.
::::{note} - If the local deployment runs on version 8.13 or greater, you no longer need to perform this step because the keystore is reloaded automatically with the new API keys. + If the local deployment runs on version 8.14 or greater, you no longer need to perform this step because the keystore is reloaded automatically with the new API keys. :::: @@ -202,38 +122,116 @@ If you later need to update the remote connection with different permissions, yo :::: :::::: +::::::{tab-item} TLS certificate (deprecated) +### Configuring platform level trust [ece-trust-remote-environments] + +In order to configure remote clusters in other ECE environments, you first need to establish a bi-directional trust relationship between both ECE environment’s platform: + +1. Download the certificate and copy the environment ID from your first ECE environment under **Platform** > **Trust Management** > **Trust parameters**. +2. Create a new trust relationship in the other ECE environment under **Platform** > **Trust Management** > **Trusted environments** using the certificate and environment ID from the previous step. +3. Download the certificate and copy the environment ID from your second ECE environment and create a new trust relationship with those in the first ECE environment. + +Now, deployments in those environments will be able to configure trust with deployments in the other environment. Trust must always be bi-directional (local cluster must trust remote cluster and vice versa) and it can be configured in each deployment’s security settings. + + +### Configuring trust with clusters of an {{ece}} environment [ece-trust-ece] + +1. Access the **Security** page of the deployment you want to use for cross-cluster operations. +2. Select **Remote Connections > Add trusted environment** and choose **{{ece}}**. Then click **Next**. +3. Select **Certificates** as authentication mechanism and click **Next**. +4. From the dropdown, select one of the environments configured in [Configuring platform level trust](#ece-trust-remote-environments). +5. Choose one of following options to configure the level of trust with the ECE environment: + + * All deployments - This deployment trusts all deployments in the ECE environment, including new deployments when they are created. + * Specific deployments - Specify which of the existing deployments you want to trust in the ECE environment. The full {{es}} cluster ID must be entered for each remote cluster. The {{es}} `Cluster ID` can be found in the deployment overview page under **Applications**. + +6. Select **Create trust** to complete the configuration. +7. Configure the corresponding deployments of the ECE environment to [trust this deployment](/deploy-manage/remote-clusters/ece-enable-ccs.md). You will only be able to connect 2 deployments successfully when both of them trust each other. + +Note that the environment ID and cluster IDs must be entered fully and correctly. For security reasons, no verification of the IDs is possible. If cross-environment trust does not appear to be working, double-checking the IDs is a good place to start. + +::::{dropdown} Using the API +You can update a deployment using the appropriate trust settings for the {{es}} payload. + +Establishing the trust between the two {{ece}} environments can be done using the [trust relationships API](https://www.elastic.co/docs/api/doc/cloud-enterprise/group/endpoint-platformconfigurationtrustrelationships). 
For example, the list of trusted environments can be obtained calling the [list trust relationships endpoint](https://www.elastic.co/docs/api/doc/cloud-enterprise/group/endpoint-platformconfigurationtrustrelationships): + +```sh +curl -k -X GET -H "Authorization: ApiKey $ECE_API_KEY" https://COORDINATOR_HOST:12443//api/v1/regions/ece-region/platform/configuration/trust-relationships?include_certificate=false +``` + +For each remote ECE environment, it will return something like this: + +```json +{ + "id":"83a7b03f2a4343fe99f09bd27ca3d9ec", + "name":"ECE2", + "trust_by_default":false, + "account_ids":[ + "651598b101e54ccab1bfdcd8b6e3b8be" + ], + "local":false, + "last_modified":"2022-01-9T14:33:20.465Z" +} +``` + +In order to trust a deployment with cluster id `cf659f7fe6164d9691b284ae36811be1` (NOTE: use the {{es}} cluster ID, not the deployment ID) in this environment named `ECE2`, you need to update the trust settings with an external trust relationship like this: + +```json +{ + "trust":{ + "accounts":[ + { + "account_id":"ec38dd0aa45f4a69909ca5c81c27138a", + "trust_all":true + } + ], + "external":[ + { + "trust_relationship_id":"83a7b03f2a4343fe99f09bd27ca3d9ec", + "trust_all":false, + "trust_allowlist":[ + "cf659f7fe6164d9691b284ae36811be1" + ] + } + ] + } +} +``` + +:::: +:::::: ::::::: You can now connect remotely to the trusted clusters. ## Connect to the remote cluster [ece_connect_to_the_remote_cluster_2] -On the local cluster, add the remote cluster using Kibana or the {{es}} API. +On the local cluster, add the remote cluster using {{kib}} or the {{es}} API. -### Using Kibana [ece_using_kibana_2] +### Using {{kib}} [ece_using_kibana_2] 1. Open the {{kib}} main menu, and select **Stack Management > Data > Remote Clusters > Add a remote cluster**. 2. Enable **Manually enter proxy address and server name**. 3. Fill in the following fields: * **Name**: This *cluster alias* is a unique identifier that represents the connection to the remote cluster and is used to distinguish between local and remote indices. - * **Proxy address**: This value can be found on the **Security** page of the Elastic Cloud Enterprise deployment you want to use as a remote.
+ * **Proxy address**: This value can be found on the **Security** page of the {{ece}} deployment you want to use as a remote.

-    ::::{tip}
-    If you’re using API keys as security model, change the port into `9443`.
-    ::::
+    ::::{tip}
+    If you’re using API keys as the security model, change the port to `9443`.
+    ::::

-    * **Server name**: This value can be found on the **Security** page of the Elastic Cloud Enterprise deployment you want to use as a remote.
+    * **Server name**: This value can be found on the **Security** page of the {{ece}} deployment you want to use as a remote.

-    :::{image} ../../images/cloud-enterprise-ce-copy-remote-cluster-parameters.png
-    :alt: Remote Cluster Parameters in Deployment
-    :class: screenshot
-    :::
+    :::{image} ../../images/cloud-enterprise-ce-copy-remote-cluster-parameters.png
+    :alt: Remote Cluster Parameters in Deployment
+    :class: screenshot
+    :::

-    ::::{note}
-    If you’re having issues establishing the connection and the remote cluster is part of an {{ece}} environment with a private certificate, make sure that the proxy address and server name match with the the certificate information. For more information, refer to [Administering endpoints in {{ece}}](/deploy-manage/deploy/cloud-enterprise/change-endpoint-urls.md).
-    ::::
+    ::::{note}
+    If you’re having issues establishing the connection and the remote cluster is part of an {{ece}} environment with a private certificate, make sure that the proxy address and server name match the certificate information. For more information, refer to [Administering endpoints in {{ece}}](/deploy-manage/deploy/cloud-enterprise/change-endpoint-urls.md).
+    ::::

4. Click **Next**.
5. Click **Add remote cluster** (you have already established trust in a previous step).
@@ -244,19 +242,19 @@ This configuration of remote clusters uses the [Proxy mode](/deploy-manage/remot


-### Using the Elasticsearch API [ece_using_the_elasticsearch_api_2]
+### Using the {{es}} API [ece_using_the_elasticsearch_api_2]

To configure a deployment as a remote cluster, use the [cluster update settings API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-put-settings). Configure the following fields:

* `mode`: `proxy`
-* `proxy_address`: This value can be found on the **Security** page of the Elastic Cloud Enterprise deployment you want to use as a remote. Also, using the API, this value can be obtained from the {{es}} resource info, concatenating the field `metadata.endpoint` and port `9300` using a semicolon.
+* `proxy_address`: This value can be found on the **Security** page of the {{ece}} deployment you want to use as a remote. Also, using the API, this value can be obtained from the {{es}} resource info, concatenating the field `metadata.endpoint` and port `9300` using a colon.

-::::{tip}
-If you’re using API keys as security model, change the port into `9443`.
-::::
+    ::::{tip}
+    If you’re using API keys as the security model, change the port to `9443`.
+    ::::

-* `server_name`: This value can be found on the **Security** page of the Elastic Cloud Enterprise deployment you want to use as a remote. Also, using the API, this can be obtained from the {{es}} resource info field `metadata.endpoint`.
+* `server_name`: This value can be found on the **Security** page of the {{ece}} deployment you want to use as a remote. Also, using the API, this can be obtained from the {{es}} resource info field `metadata.endpoint`.
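+
+If you want to script this lookup, the endpoint can also be read through the ECE API. This is only a sketch: the JSON path of the {{es}} resource info is an assumption and may differ in your version, and `COORDINATOR_HOST`, `DEPLOYMENT_ID`, and `$ECE_API_KEY` are placeholders:
+
+```sh
+# Sketch: look up the endpoint used to build `proxy_address` and `server_name`.
+# The jq path below is assumed; verify it against your own API response.
+curl -k -X GET -H "Authorization: ApiKey $ECE_API_KEY" \
+  "https://COORDINATOR_HOST:12443/api/v1/deployments/DEPLOYMENT_ID" |
+  jq -r '.resources.elasticsearch[0].info.metadata.endpoint'
+```
+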
This is an example of the API call to `_cluster/settings`: @@ -277,45 +275,11 @@ PUT /_cluster/settings } ``` -:::::{dropdown} **Stack Version above 6.7.0 and below 7.6.0** -::::{note} -This section only applies if you’re using TLS certificates as cross-cluster security model. -:::: - -When the cluster to be configured as a remote is above 6.7.0 and below 7.6.0, the remote cluster must be configured using the [sniff mode](/deploy-manage/remote-clusters/remote-clusters-self-managed.md#sniff-mode) with the proxy field. For each remote cluster you need to pass the following fields: - -* **Proxy**: This value can be found on the **Security** page of the deployment you want to use as a remote under the name `Proxy Address`. Also, using the API, this can be obtained from the elasticsearch resource info, concatenating the fields `metadata.endpoint` and `metadata.ports.transport_passthrough` using a semicolon. -* **Seeds**: This field is an array that must contain only one value, which is the `server name` that can be found on the **Security** page of the ECE deployment you want to use as a remote concatenated with `:1`. Also, using the API, this can be obtained from the {{es}} resource info, concatenating the fields `metadata.endpoint` and `1` with a semicolon. -* **Mode**: sniff (or empty, since sniff is the default value) - -This is an example of the API call to `_cluster/settings`: - -```json -{ - "persistent": { - "cluster": { - "remote": { - "my-remote-cluster-1": { - "seeds": [ - "a542184a7a7d45b88b83f95392f450ab.192.168.44.10.ip.es.io:1" - ], - "proxy": "a542184a7a7d45b88b83f95392f450ab.192.168.44.10.ip.es.io:9400" - } - } - } - } -} -``` - -::::: - - - -### Using the Elastic Cloud Enterprise RESTful API [ece_using_the_elastic_cloud_enterprise_restful_api_2] +### Using the {{ece}} RESTful API [ece_using_the_elastic_cloud_enterprise_restful_api_2] ::::{note} -This section only applies if you’re using TLS certificates as cross-cluster security model and when both clusters belong to the same ECE environment (for other scenarios, the {{es}} API should be used instead): +This section only applies if you’re using TLS certificates as cross-cluster security model and when both clusters belong to the same ECE environment. For other scenarios, the [{{es}} API](#ece_using_the_elasticsearch_api_2) should be used instead. :::: @@ -339,7 +303,7 @@ curl -k -H 'Content-Type: application/json' -X PUT -H "Authorization: ApiKey $EC `REF_ID_REMOTE` : The unique ID of the {{es}} resources inside your remote deployment (you can obtain these values through the API). -Note the following when using the Elastic Cloud Enterprise RESTful API: +Note the following when using the {{ece}} RESTful API: 1. A cluster alias must contain only letters, numbers, dashes (-), or underscores (_). 2. To learn about skipping disconnected clusters, refer to the [{{es}} documentation](/solutions/search/cross-cluster-search.md#skip-unavailable-clusters). @@ -352,11 +316,9 @@ curl -k -X GET -H "Authorization: ApiKey $ECE_API_KEY" https://COORDINATOR_HOST: ``` ::::{note} -The response includes just the remote clusters from the same ECE environment. In order to obtain the whole list of remote clusters, use Kibana or the {{es}} API [{{es}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-remote-info) directly. +The response includes just the remote clusters from the same ECE environment. 
In order to obtain the whole list of remote clusters, use {{kib}} or the [{{es}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-remote-info) directly. :::: - - ## Configure roles and users [ece_configure_roles_and_users_2] -To use a remote cluster for {{ccr}} or {{ccs}}, you need to create user roles with [remote indices privileges](../users-roles/cluster-or-deployment-auth/defining-roles.md#roles-remote-indices-priv) on the local cluster. Refer to [Configure roles and users](remote-clusters-api-key.md#remote-clusters-privileges-api-key). +To use a remote cluster for {{ccr}} or {{ccs}}, you need to create user roles with [remote indices privileges](../users-roles/cluster-or-deployment-auth/defining-roles.md#roles-remote-indices-priv) on the local cluster. Refer to [Configure roles and users](remote-clusters-api-key.md#remote-clusters-privileges-api-key). \ No newline at end of file diff --git a/deploy-manage/remote-clusters/ece-remote-cluster-same-ece.md b/deploy-manage/remote-clusters/ece-remote-cluster-same-ece.md index 5f5f4f411..fab812ee3 100644 --- a/deploy-manage/remote-clusters/ece-remote-cluster-same-ece.md +++ b/deploy-manage/remote-clusters/ece-remote-cluster-same-ece.md @@ -1,11 +1,15 @@ --- +applies_to: + deployment: + ece: ga +navigation_title: Within the same ECE environment mapped_pages: - https://www.elastic.co/guide/en/cloud-enterprise/current/ece-remote-cluster-same-ece.html --- -# Access other deployments of the same Elastic Cloud Enterprise environment [ece-remote-cluster-same-ece] +# Access other deployments of the same {{ece}} environment [ece-remote-cluster-same-ece] -This section explains how to configure a deployment to connect remotely to clusters belonging to the same Elastic Cloud Enterprise environment. +This section explains how to configure a deployment to connect remotely to clusters belonging to the same {{ece}} environment. ## Allow the remote connection [ece_allow_the_remote_connection] @@ -13,17 +17,67 @@ This section explains how to configure a deployment to connect remotely to clust Before you start, consider the security model that you would prefer to use for authenticating remote connections between clusters, and follow the corresponding steps. API key -: For deployments based on {{stack}} version 8.10 or later, you can use an API key to authenticate and authorize cross-cluster operations to a remote cluster. This model offers administrators of both the local and the remote deployment fine-grained access controls. +: For deployments based on {{stack}} 8.14 or later, you can use an API key to authenticate and authorize cross-cluster operations to a remote cluster. This model offers administrators of both the local and the remote deployment fine-grained access controls. -TLS certificate +TLS certificate (deprecated in {{stack}} 9.0.0) : This model uses mutual TLS authentication for cross-cluster operations. User authentication is performed on the local cluster and a user’s role names are passed to the remote cluster. A superuser on the local deployment gains total read access to the remote deployment, so it is only suitable for deployments that are in the same security domain. :::::::{tab-set} -::::::{tab-item} TLS certificate +::::::{tab-item} API key +API key authentication enables a local cluster to authenticate itself with a remote cluster via a [cross-cluster API key](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-create-cross-cluster-api-key). 
The API key needs to be created by an administrator of the remote cluster. The local cluster is configured to provide this API key on each request to the remote cluster. The remote cluster verifies the API key and grants access, based on the API key’s privileges.
+
+All cross-cluster requests from the local cluster are bound by the API key’s privileges, regardless of local users associated with the requests. For example, if the API key only allows read access to `my-index` on the remote cluster, even a superuser from the local cluster is limited by this constraint. This mechanism enables the remote cluster’s administrator to have full control over who can access what data with cross-cluster search and/or cross-cluster replication. The remote cluster’s administrator can be confident that no access is possible beyond what is explicitly assigned to the API key.
+
+On the local cluster side, not every local user needs to access every piece of data allowed by the API key. An administrator of the local cluster can further configure additional permission constraints on local users so each user only gets access to the necessary remote data. Note it is only possible to further reduce the permissions allowed by the API key for individual local users. It is impossible to increase the permissions to go beyond what is allowed by the API key.
+
+If you run into any issues, refer to [Troubleshooting](/troubleshoot/elasticsearch/remote-clusters.md).
+
+
+### Prerequisites and limitations [ece_prerequisites_and_limitations]
+
+* The local and remote deployments must be on {{stack}} 8.14 or later.
+
+
+### Create a cross-cluster API key on the remote deployment [ece_create_a_cross_cluster_api_key_on_the_remote_deployment]
+
+* On the deployment you will use as remote, use the [{{es}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-create-cross-cluster-api-key) or [{{kib}}](../api-keys/elasticsearch-api-keys.md) to create a cross-cluster API key. Configure it with access to the indices you want to use for {{ccs}} or {{ccr}}.
+* Copy the encoded key (`encoded` in the response) to a safe location. You will need it in the next step.
+
+
+### Add the cross-cluster API key to the keystore of the local deployment [ece_add_the_cross_cluster_api_key_to_the_keystore_of_the_local_deployment]
+
+The API key created previously will be used by the local deployment to authenticate with the corresponding set of permissions to the remote deployment. For that, you need to add the API key to the local deployment’s keystore.
+
+1. [Log into the Cloud UI](../deploy/cloud-enterprise/log-into-cloud-ui.md).
+2. On the **Deployments** page, select your deployment.
+
+    Narrow the list by name, ID, or choose from several other filters. To further define the list, use a combination of filters.
+
+3. From the deployment menu, select **Security**.
+4. Locate **Remote connections** and select **Add an API key**.
+
+    1. Fill both fields.
+
+    * For the **Setting name**, enter the alias of your choice. You will use this alias to connect to the remote cluster later. It must be lowercase and only contain letters, numbers, dashes and underscores.
+    * For the **Secret**, paste the encoded cross-cluster API key.
+
+    2. Click **Add** to save the API key to the keystore.
+
+5. Restart the local deployment to reload the keystore with its new setting. To do that, go to the deployment’s main page (named after your deployment’s name), locate the **Actions** menu, and select **Restart {{es}}**.
+ + ::::{note} + If the local deployment runs on version 8.14 or greater, you no longer need to perform this step because the keystore is reloaded automatically with the new API keys. + :::: + + +If you later need to update the remote connection with different permissions, you can replace the API key as detailed in [Update the access level of a remote cluster connection relying on a cross-cluster API key](ece-edit-remove-trusted-environment.md#ece-edit-remove-trusted-environment-api-key). +:::::: + +::::::{tab-item} TLS certificate (deprecated) ### Default trust with other clusters in the same ECE environment [ece_default_trust_with_other_clusters_in_the_same_ece_environment] -By default, any deployment that you or your users create trusts all other deployments in the same Elastic Cloud Enterprise environment. You can change this behavior in the Cloud UI under **Platform** > **Trust Management**, so that when a new deployment is created it does not automatically trust any other deployment. You can choose one of the following options: +By default, any deployment that you or your users create trusts all other deployments in the same {{ece}} environment. You can change this behavior in the Cloud UI under **Platform** > **Trust Management**, so that when a new deployment is created it does not automatically trust any other deployment. You can choose one of the following options: * Trust all my deployments - All of your organization’s deployments created while this option is selected already trust each other. If you keep this option, that includes any deployments you’ll create in the future. You can directly jump to [Connect to the remote cluster](/deploy-manage/remote-clusters/ece-remote-cluster-same-ece.md#ece_connect_to_the_remote_cluster) to finalize the CCS or CCR configuration. * Trust no deployment - New deployments won’t trust any other deployment when they are created. You can instead configure trust individually for each of them in their security settings, as described in the next section. @@ -35,7 +89,7 @@ By default, any deployment that you or your users create trusts all other deploy ::::{note} * The level of trust of existing deployments is not modified when you change this setting. You must instead update the trust settings individually for each deployment you wish to change. -* Deployments created before Elastic Cloud Enterprise version `2.9.0` trust only themselves. You have to update the trust setting for each deployment that you want to either use as a remote cluster or configure to work with a remote cluster. +* Deployments created before {{ece}} version `2.9.0` trust only themselves. You have to update the trust setting for each deployment that you want to either use as a remote cluster or configure to work with a remote cluster. :::: @@ -51,17 +105,16 @@ If your organization’s deployments already trust each other by default, you ca * Trust all deployments - This deployment trusts all other deployments in this environment, including new deployments when they are created. * Trust specific deployments - Choose which of the existing deployments from your environment you want to trust. - * Trust no deployment - No deployment in this Elastic Cloud Enterprise environment is trusted. - + * Trust no deployment - No deployment in this {{ece}} environment is trusted. 
-::::{note} -When trusting specific deployments, the more restrictive [CCS](/deploy-manage/remote-clusters/remote-clusters-self-managed.md#sniff-mode) version policy is used (even if you only want to use [CCR](/deploy-manage/tools/cross-cluster-replication.md)). To work around this restriction for CCR-only trust, it is necessary to use the API as described below. -:::: + ::::{note} + When trusting specific deployments, the more restrictive [CCS](/deploy-manage/remote-clusters/remote-clusters-self-managed.md#sniff-mode) version policy is used (even if you only want to use [CCR](/deploy-manage/tools/cross-cluster-replication.md)). To work around this restriction for CCR-only trust, it is necessary to use the API as described below. + :::: 1. Repeat these steps from each of the deployments you want to use for CCS or CCR. You will only be able to connect 2 deployments successfully when both of them trust each other. -::::{dropdown} **Using the API** +::::{dropdown} Using the API You can update a deployment using the appropriate trust settings for the {{es}} payload. The current trust settings can be found in the path `.resources.elasticsearch[0].info.settings.trust` when calling: @@ -103,89 +156,38 @@ The `account_id` above represents the only account in an ECE environment, and th :::: :::::: - -::::::{tab-item} API key -API key authentication enables a local cluster to authenticate itself with a remote cluster via a [cross-cluster API key](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-create-cross-cluster-api-key). The API key needs to be created by an administrator of the remote cluster. The local cluster is configured to provide this API key on each request to the remote cluster. The remote cluster verifies the API key and grants access, based on the API key’s privileges. - -All cross-cluster requests from the local cluster are bound by the API key’s privileges, regardless of local users associated with the requests. For example, if the API key only allows read access to `my-index` on the remote cluster, even a superuser from the local cluster is limited by this constraint. This mechanism enables the remote cluster’s administrator to have full control over who can access what data with cross-cluster search and/or cross-cluster replication. The remote cluster’s administrator can be confident that no access is possible beyond what is explicitly assigned to the API key. - -On the local cluster side, not every local user needs to access every piece of data allowed by the API key. An administrator of the local cluster can further configure additional permission constraints on local users so each user only gets access to the necessary remote data. Note it is only possible to further reduce the permissions allowed by the API key for individual local users. It is impossible to increase the permissions to go beyond what is allowed by the API key. - -If you run into any issues, refer to [Troubleshooting](remote-clusters-troubleshooting.md). - - -### Prerequisites and limitations [ece_prerequisites_and_limitations] - -* The local and remote deployments must be on version 8.12 or later. - - -### Create a cross-cluster API key on the remote deployment [ece_create_a_cross_cluster_api_key_on_the_remote_deployment] - -* On the deployment you will use as remote, use the [{{es}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-create-cross-cluster-api-key) or [Kibana](../api-keys/elasticsearch-api-keys.md) to create a cross-cluster API key. 
Configure it with access to the indices you want to use for {{ccs}} or {{ccr}}. -* Copy the encoded key (`encoded` in the response) to a safe location. You will need it in the next step. - - -### Add the cross-cluster API key to the keystore of the local deployment [ece_add_the_cross_cluster_api_key_to_the_keystore_of_the_local_deployment] - -The API key created previously will be used by the local deployment to authenticate with the corresponding set of permissions to the remote deployment. For that, you need to add the API key to the local deployment’s keystore. - -1. [Log into the Cloud UI](../deploy/cloud-enterprise/log-into-cloud-ui.md). -2. On the deployments page, select your deployment. - - Narrow the list by name, ID, or choose from several other filters. To further define the list, use a combination of filters. - -3. From the deployment menu, select **Security**. -4. Locate **Remote connections** and select **Add an API key**. - - 1. Fill both fields. - - * For the **Setting name**, enter the the alias of your choice. You will use this alias to connect to the remote cluster later. It must be lowercase and only contain letters, numbers, dashes and underscores. - * For the **Secret**, paste the encoded cross-cluster API key. - - 2. Click **Add** to save the API key to the keystore. - -5. Restart the local deployment to reload the keystore with its new setting. To do that, go to the deployment’s main page (named after your deployment’s name), locate the **Actions** menu, and select **Restart Elasticsearch**.
- - ::::{note} - If the local deployment runs on version 8.13 or greater, you no longer need to perform this step because the keystore is reloaded automatically with the new API keys. - :::: - - -If you later need to update the remote connection with different permissions, you can replace the API key as detailed in [Update the access level of a remote cluster connection relying on a cross-cluster API key](ece-edit-remove-trusted-environment.md#ece-edit-remove-trusted-environment-api-key). -:::::: - ::::::: You can now connect remotely to the trusted clusters. ## Connect to the remote cluster [ece_connect_to_the_remote_cluster] -On the local cluster, add the remote cluster using Kibana or the {{es}} API. +On the local cluster, add the remote cluster using {{kib}} or the {{es}} API. -### Using Kibana [ece_using_kibana] +### Using {{kib}} [ece_using_kibana] 1. Open the {{kib}} main menu, and select **Stack Management > Data > Remote Clusters > Add a remote cluster**. 2. Enable **Manually enter proxy address and server name**. 3. Fill in the following fields: * **Name**: This *cluster alias* is a unique identifier that represents the connection to the remote cluster and is used to distinguish between local and remote indices. - * **Proxy address**: This value can be found on the **Security** page of the Elastic Cloud Enterprise deployment you want to use as a remote.
+ * **Proxy address**: This value can be found on the **Security** page of the {{ece}} deployment you want to use as a remote.

-    ::::{tip}
-    If you’re using API keys as security model, change the port into `9443`.
-    ::::
+    ::::{tip}
+    If you’re using API keys as the security model, change the port to `9443`.
+    ::::

-    * **Server name**: This value can be found on the **Security** page of the Elastic Cloud Enterprise deployment you want to use as a remote.
+    * **Server name**: This value can be found on the **Security** page of the {{ece}} deployment you want to use as a remote.

-    :::{image} ../../images/cloud-enterprise-ce-copy-remote-cluster-parameters.png
-    :alt: Remote Cluster Parameters in Deployment
-    :class: screenshot
-    :::
+    :::{image} ../../images/cloud-enterprise-ce-copy-remote-cluster-parameters.png
+    :alt: Remote Cluster Parameters in Deployment
+    :class: screenshot
+    :::

-    ::::{note}
-    If you’re having issues establishing the connection and the remote cluster is part of an {{ece}} environment with a private certificate, make sure that the proxy address and server name match with the the certificate information. For more information, refer to [Administering endpoints in {{ece}}](/deploy-manage/deploy/cloud-enterprise/change-endpoint-urls.md).
-    ::::
+    ::::{note}
+    If you’re having issues establishing the connection and the remote cluster is part of an {{ece}} environment with a private certificate, make sure that the proxy address and server name match the certificate information. For more information, refer to [Administering endpoints in {{ece}}](/deploy-manage/deploy/cloud-enterprise/change-endpoint-urls.md).
+    ::::

4. Click **Next**.
5. Click **Add remote cluster** (you have already established trust in a previous step).
@@ -196,19 +198,19 @@ This configuration of remote clusters uses the [Proxy mode](/deploy-manage/remot


-### Using the Elasticsearch API [ece_using_the_elasticsearch_api]
+### Using the {{es}} API [ece_using_the_elasticsearch_api]

To configure a deployment as a remote cluster, use the [cluster update settings API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-put-settings). Configure the following fields:

* `mode`: `proxy`
-* `proxy_address`: This value can be found on the **Security** page of the Elastic Cloud Enterprise deployment you want to use as a remote. Also, using the API, this value can be obtained from the {{es}} resource info, concatenating the field `metadata.endpoint` and port `9300` using a semicolon.
+* `proxy_address`: This value can be found on the **Security** page of the {{ece}} deployment you want to use as a remote. Also, using the API, this value can be obtained from the {{es}} resource info, concatenating the field `metadata.endpoint` and port `9300` using a colon.

-::::{tip}
-If you’re using API keys as security model, change the port into `9443`.
-::::
+    ::::{tip}
+    If you’re using API keys as the security model, change the port to `9443`.
+    ::::

-* `server_name`: This value can be found on the **Security** page of the Elastic Cloud Enterprise deployment you want to use as a remote. Also, using the API, this can be obtained from the {{es}} resource info field `metadata.endpoint`.
+* `server_name`: This value can be found on the **Security** page of the {{ece}} deployment you want to use as a remote. Also, using the API, this can be obtained from the {{es}} resource info field `metadata.endpoint`.
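+
+After the settings call shown next has been applied, one way to confirm that the local deployment can reach the remote is the remote info API. This is a sketch only; `$LOCAL_ES_URL` and `$LOCAL_API_KEY` stand for the local deployment’s {{es}} endpoint URL and an API key with sufficient privileges:
+
+```sh
+# Sketch: verify the remote cluster connection from the local deployment.
+curl -H "Authorization: ApiKey $LOCAL_API_KEY" "$LOCAL_ES_URL/_remote/info"
+# A healthy proxy-mode connection reports "connected": true for the cluster alias.
+```
+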
This is an example of the API call to `_cluster/settings`: @@ -229,45 +231,11 @@ PUT /_cluster/settings } ``` -:::::{dropdown} **Stack Version above 6.7.0 and below 7.6.0** -::::{note} -This section only applies if you’re using TLS certificates as cross-cluster security model. -:::: - -When the cluster to be configured as a remote is above 6.7.0 and below 7.6.0, the remote cluster must be configured using the [sniff mode](/deploy-manage/remote-clusters/remote-clusters-self-managed.md#sniff-mode) with the proxy field. For each remote cluster you need to pass the following fields: - -* **Proxy**: This value can be found on the **Security** page of the deployment you want to use as a remote under the name `Proxy Address`. Also, using the API, this can be obtained from the elasticsearch resource info, concatenating the fields `metadata.endpoint` and `metadata.ports.transport_passthrough` using a semicolon. -* **Seeds**: This field is an array that must contain only one value, which is the `server name` that can be found on the **Security** page of the ECE deployment you want to use as a remote concatenated with `:1`. Also, using the API, this can be obtained from the {{es}} resource info, concatenating the fields `metadata.endpoint` and `1` with a semicolon. -* **Mode**: sniff (or empty, since sniff is the default value) - -This is an example of the API call to `_cluster/settings`: - -```json -{ - "persistent": { - "cluster": { - "remote": { - "my-remote-cluster-1": { - "seeds": [ - "a542184a7a7d45b88b83f95392f450ab.192.168.44.10.ip.es.io:1" - ], - "proxy": "a542184a7a7d45b88b83f95392f450ab.192.168.44.10.ip.es.io:9400" - } - } - } - } -} -``` - -::::: - - - -### Using the Elastic Cloud Enterprise RESTful API [ece_using_the_elastic_cloud_enterprise_restful_api] +### Using the {{ece}} RESTful API [ece_using_the_elastic_cloud_enterprise_restful_api] ::::{note} -This section only applies if you’re using TLS certificates as cross-cluster security model and when both clusters belong to the same ECE environment (for other scenarios, the {{es}} API should be used instead): +This section only applies if you’re using TLS certificates as cross-cluster security model and when both clusters belong to the same ECE environment. For other scenarios, the [{{es}} API](#ece_using_the_elasticsearch_api) should be used instead. :::: @@ -291,7 +259,7 @@ curl -k -H 'Content-Type: application/json' -X PUT -H "Authorization: ApiKey $EC `REF_ID_REMOTE` : The unique ID of the {{es}} resources inside your remote deployment (you can obtain these values through the API). -Note the following when using the Elastic Cloud Enterprise RESTful API: +Note the following when using the {{ece}} RESTful API: 1. A cluster alias must contain only letters, numbers, dashes (-), or underscores (_). 2. To learn about skipping disconnected clusters, refer to the [{{es}} documentation](/solutions/search/cross-cluster-search.md#skip-unavailable-clusters). @@ -304,11 +272,9 @@ curl -k -X GET -H "Authorization: ApiKey $ECE_API_KEY" https://COORDINATOR_HOST: ``` ::::{note} -The response includes just the remote clusters from the same ECE environment. In order to obtain the whole list of remote clusters, use Kibana or the {{es}} API [{{es}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-remote-info) directly. +The response includes just the remote clusters from the same ECE environment. 
In order to obtain the whole list of remote clusters, use {{kib}} or the [{{es}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-remote-info) directly. :::: - - ## Configure roles and users [ece_configure_roles_and_users] To use a remote cluster for {{ccr}} or {{ccs}}, you need to create user roles with [remote indices privileges](../users-roles/cluster-or-deployment-auth/defining-roles.md#roles-remote-indices-priv) on the local cluster. Refer to [Configure roles and users](remote-clusters-api-key.md#remote-clusters-privileges-api-key). diff --git a/deploy-manage/remote-clusters/ece-remote-cluster-self-managed.md b/deploy-manage/remote-clusters/ece-remote-cluster-self-managed.md index 276dbed8e..88b107c9e 100644 --- a/deploy-manage/remote-clusters/ece-remote-cluster-self-managed.md +++ b/deploy-manage/remote-clusters/ece-remote-cluster-self-managed.md @@ -1,4 +1,9 @@ --- +applies_to: + deployment: + ece: ga + self: ga +navigation_title: With a self-managed cluster mapped_pages: - https://www.elastic.co/guide/en/cloud-enterprise/current/ece-remote-cluster-self-managed.html --- @@ -13,14 +18,103 @@ This section explains how to configure a deployment to connect remotely to self- Before you start, consider the security model that you would prefer to use for authenticating remote connections between clusters, and follow the corresponding steps. API key -: For deployments based on {{stack}} version 8.10 or later, you can use an API key to authenticate and authorize cross-cluster operations to a remote cluster. This model offers administrators of both the local and the remote deployment fine-grained access controls. +: For deployments based on {{stack}} 8.14 or later, you can use an API key to authenticate and authorize cross-cluster operations to a remote cluster. This model offers administrators of both the local and the remote deployment fine-grained access controls. -TLS certificate +TLS certificate (deprecated in {{stack}} 9.0.0) : This model uses mutual TLS authentication for cross-cluster operations. User authentication is performed on the local cluster and a user’s role names are passed to the remote cluster. A superuser on the local deployment gains total read access to the remote deployment, so it is only suitable for deployments that are in the same security domain. :::::::{tab-set} -::::::{tab-item} TLS certificate +::::::{tab-item} API key +API key authentication enables a local cluster to authenticate itself with a remote cluster via a [cross-cluster API key](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-create-cross-cluster-api-key). The API key needs to be created by an administrator of the remote cluster. The local cluster is configured to provide this API key on each request to the remote cluster. The remote cluster verifies the API key and grants access, based on the API key’s privileges. + +All cross-cluster requests from the local cluster are bound by the API key’s privileges, regardless of local users associated with the requests. For example, if the API key only allows read access to `my-index` on the remote cluster, even a superuser from the local cluster is limited by this constraint. This mechanism enables the remote cluster’s administrator to have full control over who can access what data with cross-cluster search and/or cross-cluster replication. The remote cluster’s administrator can be confident that no access is possible beyond what is explicitly assigned to the API key. 
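+
+For example, a request like the following, issued against the remote cluster, creates a cross-cluster API key that only allows searching indices matching `logs-*` (the key name and index pattern shown here are placeholders to adapt to your own setup):
+
+```json
+POST /_security/cross_cluster_api_key
+{
+  "name": "ccs-key-for-local-deployment",
+  "access": {
+    "search": [
+      {
+        "names": ["logs-*"]
+      }
+    ]
+  }
+}
+```
+
+The `encoded` value returned in the response is the secret that the local deployment stores in its keystore, as described in the steps later in this section.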
+
+On the local cluster side, not every local user needs to access every piece of data allowed by the API key. An administrator of the local cluster can further configure additional permission constraints on local users so each user only gets access to the necessary remote data. Note that it is only possible to further reduce the permissions allowed by the API key for individual local users. It is impossible to increase the permissions to go beyond what is allowed by the API key.
+
+If you run into any issues, refer to [Troubleshooting](/troubleshoot/elasticsearch/remote-clusters.md).
+
+
+### Prerequisites and limitations [ece_prerequisites_and_limitations_4]
+
+* The local and remote deployments must be on {{stack}} 8.14 or later.
+
+
+### Create a cross-cluster API key on the remote deployment [ece_create_a_cross_cluster_api_key_on_the_remote_deployment_4]
+
+* On the deployment you will use as a remote, use the [{{es}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-create-cross-cluster-api-key) or [{{kib}}](../api-keys/elasticsearch-api-keys.md) to create a cross-cluster API key. Configure it with access to the indices you want to use for {{ccs}} or {{ccr}}.
+* Copy the encoded key (`encoded` in the response) to a safe location. You will need it in the next step.
+
+
+### Configure the local deployment [ece_configure_the_local_deployment_2]
+
+The API key created previously will be used by the local deployment to authenticate with the corresponding set of permissions to the remote deployment. For that, you need to add the API key to the local deployment’s keystore.
+
+The steps to follow depend on whether the Certificate Authority (CA) of the remote environment’s {{es}} HTTPS server, proxy, or load balancing infrastructure is public or private.
+
+::::{dropdown} The CA is public
+1. [Log into the Cloud UI](../deploy/cloud-enterprise/log-into-cloud-ui.md).
+2. On the **Deployments** page, select your deployment.
+
+    Narrow the list by name, ID, or choose from several other filters. To further define the list, use a combination of filters.
+
+3. From the deployment menu, select **Security**.
+4. Locate **Remote connections** and select **Add an API key**.
+
+    1. Add a setting:
+
+        * For the **Setting name**, enter the alias of your choice. You will use this alias to connect to the remote cluster later. It must be lowercase and only contain letters, numbers, dashes and underscores.
+        * For the **Secret**, paste the encoded cross-cluster API key.
+
+    2. Click **Add** to save the API key to the keystore.
+
+5. Restart the local deployment to reload the keystore with its new setting. To do that, go to the deployment’s main page (named after your deployment’s name), locate the **Actions** menu, and select **Restart {{es}}**.
+
+    ::::{note}
+    If the local deployment runs on version 8.14 or greater, you no longer need to perform this step because the keystore is reloaded automatically with the new API keys.
+    ::::
+
+
+If you later need to update the remote connection with different permissions, you can replace the API key as detailed in [Update the access level of a remote cluster connection relying on a cross-cluster API key](ece-edit-remove-trusted-environment.md#ece-edit-remove-trusted-environment-api-key).
+
+::::
+
+
+::::{dropdown} The CA is private
+1. [Log into the Cloud UI](../deploy/cloud-enterprise/log-into-cloud-ui.md).
+2. On the **Deployments** page, select your deployment.
+
+    Narrow the list by name, ID, or choose from several other filters. To further define the list, use a combination of filters.
+
+3. Access the **Security** page of the deployment.
+4. Select **Remote Connections > Add trusted environment** and choose **Self-managed**. Then click **Next**.
+5. Select **API keys** as the authentication mechanism and click **Next**.
+6. Add the API key:
+
+    1. Fill both fields.
+
+        * For the **Setting name**, enter the alias of your choice. You will use this alias to connect to the remote cluster later. It must be lowercase and only contain letters, numbers, dashes and underscores.
+        * For the **Secret**, paste the encoded cross-cluster API key.
+
+    2. Click **Add** to save the API key to the keystore.
+    3. Repeat these steps for each API key you want to add. For example, if you want to use several clusters of the remote environment for CCR or CCS.
+
+7. Add the CA certificate of the remote self-managed environment.
+8. Provide a name for the trusted environment. That name will appear in the trust summary of your deployment’s **Security** page.
+9. Select **Create trust** to complete the configuration.
+10. Restart the local deployment to reload the keystore with its new setting. To do that, go to the deployment’s main page (named after your deployment’s name), locate the **Actions** menu, and select **Restart {{es}}**.
+ + ::::{note} + If the local deployment runs on version 8.14 or greater, you no longer need to perform this step because the keystore is reloaded automatically with the new API keys. + :::: + + +If you later need to update the remote connection with different permissions, you can replace the API key as detailed in [Update the access level of a remote cluster connection relying on a cross-cluster API key](ece-edit-remove-trusted-environment.md#ece-edit-remove-trusted-environment-api-key). + +:::: +:::::: + +::::::{tab-item} TLS certificate (deprecated) ### Specify the deployments trusted to be used as remote clusters [ece-trust-self-managed] A deployment can be configured to trust all or specific deployments in any environment: @@ -30,21 +124,21 @@ A deployment can be configured to trust all or specific deployments in any envir 3. Upload the public certificate for the Certificate Authority of the self-managed environment (the one used to sign all the cluster certificates). The certificate needs to be in PEM format and should not contain the private key. If you only have the key in p12 format, then you can create the necessary file like this: `openssl pkcs12 -in elastic-stack-ca.p12 -out newfile.crt.pem -clcerts -nokeys` 4. Select the clusters to trust. There are two options here depending on the subject name of the certificates presented by the nodes in your self managed cluster: - * Following the {{ecloud}} pattern. In {{ecloud}}, the certificates of all Elasticsearch nodes follow this convention: `CN = {{node_id}}.node.{{cluster_id}}.cluster.{{scope_id}}`. If you follow the same convention in your self-managed environment, then choose this option and you will be able to select all or specific clusters to trust. + * Following the {{ecloud}} pattern. In {{ecloud}}, the certificates of all {{es}} nodes follow this convention: `CN = {{node_id}}.node.{{cluster_id}}.cluster.{{scope_id}}`. If you follow the same convention in your self-managed environment, then choose this option and you will be able to select all or specific clusters to trust. * If your clusters don’t follow the previous convention for the certificates subject name of your nodes, you can still specify the node name of each of the nodes that should be trusted by this deployment. (Keep in mind that following this convention will simplify the management of this cluster since otherwise this configuration will need to be updated every time the topology of your self-managed cluster changes along with the trust restriction file. For this reason, it is recommended migrating your cluster certificates to follow the previous convention). ::::{note} - Trust management will not work properly in clusters without an `otherName` value specified, as is the case by default in an out-of-the-box [Elasticsearch installation](../deploy/self-managed/installing-elasticsearch.md). To have the Elasticsearch certutil generate new certificates with the `otherName` attribute, use the file input with the `cn` attribute as in the example below. + Trust management will not work properly in clusters without an `otherName` value specified, as is the case by default in an out-of-the-box [{{es}} installation](../deploy/self-managed/installing-elasticsearch.md). To have the {{es}} certutil generate new certificates with the `otherName` attribute, use the file input with the `cn` attribute as in the example below. :::: -5. . Provide a name for the trusted environment. That name will appear in the trust summary of your deployment’s Security page. +5. 
Provide a name for the trusted environment. That name will appear in the trust summary of your deployment’s **Security** page. 6. Select **Create trust** to complete the configuration. 7. Configure the self-managed cluster to trust this deployment, so that both deployments are configured to trust each other: - * Download the Certificate Authority used to sign the certificates of your deployment nodes (it can be found in the Security page of your deployment) - * Trust this CA either using the [setting](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/configuration-reference/security-settings.md) `xpack.security.transport.ssl.certificate_authorities` in `elasticsearch.yml` or by [adding it to the trust store](../security/different-ca.md). + * Download the Certificate Authority used to sign the certificates of your deployment nodes (it can be found in the Security page of your deployment) + * Trust this CA either using the [setting](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/configuration-reference/security-settings.md) `xpack.security.transport.ssl.certificate_authorities` in `elasticsearch.yml` or by [adding it to the trust store](../security/different-ca.md). -8. Generate certificates with an `otherName` attribute using the Elasticsearch certutil. Create a file called `instances.yaml` with all the details of the nodes in your on-premise cluster like below. The `dns` and `ip` settings are optional, but `cn` is mandatory for use with the `trust_restrictions` path setting in the next step. Next, run `./bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12 -in instances.yaml` to create new certificates for all the nodes at once. You can then copy the resulting files into each node. +8. Generate certificates with an `otherName` attribute using the {{es}} certutil. Create a file called `instances.yaml` with all the details of the nodes in your on-premise cluster like below. The `dns` and `ip` settings are optional, but `cn` is mandatory for use with the `trust_restrictions` path setting in the next step. Next, run `./bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12 -in instances.yaml` to create new certificates for all the nodes at once. You can then copy the resulting files into each node. ```yaml instances: @@ -60,7 +154,7 @@ A deployment can be configured to trust all or specific deployments in any envir 9. Restrict the trusted clusters to allow only the ones which your self-managed cluster should trust. - * All the clusters in your Elastic Cloud Enterprise environment are signed by the same certificate authority. Therefore, adding this CA would make the self-managed cluster trust all your clusters in your ECE environment. This should be limited using the setting `xpack.security.transport.ssl.trust_restrictions.path` in `elasticsearch.yml`, which points to a file that limits the certificates to trust based on their `otherName`-attribute. + * All the clusters in your {{ece}} environment are signed by the same certificate authority. Therefore, adding this CA would make the self-managed cluster trust all your clusters in your ECE environment. This should be limited using the setting `xpack.security.transport.ssl.trust_restrictions.path` in `elasticsearch.yml`, which points to a file that limits the certificates to trust based on their `otherName`-attribute. 
* For example, the following file would trust: ``` @@ -80,7 +174,7 @@ Generate new node certificates for an entire cluster using the file input mode o :::: -::::{dropdown} **Using the API** +::::{dropdown} Using the API You can update a deployment using the appropriate trust settings for the {{es}} payload. In order to trust a cluster whose nodes present certificates with the subject names: "CN = node1.example.com", "CN = node2.example.com" and "CN = node3.example.com" in a self-managed environment, you could update the trust settings with an additional direct trust relationship like this: @@ -113,132 +207,38 @@ In order to trust a cluster whose nodes present certificates with the subject na :::: :::::: - -::::::{tab-item} API key -API key authentication enables a local cluster to authenticate itself with a remote cluster via a [cross-cluster API key](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-create-cross-cluster-api-key). The API key needs to be created by an administrator of the remote cluster. The local cluster is configured to provide this API key on each request to the remote cluster. The remote cluster verifies the API key and grants access, based on the API key’s privileges. - -All cross-cluster requests from the local cluster are bound by the API key’s privileges, regardless of local users associated with the requests. For example, if the API key only allows read access to `my-index` on the remote cluster, even a superuser from the local cluster is limited by this constraint. This mechanism enables the remote cluster’s administrator to have full control over who can access what data with cross-cluster search and/or cross-cluster replication. The remote cluster’s administrator can be confident that no access is possible beyond what is explicitly assigned to the API key. - -On the local cluster side, not every local user needs to access every piece of data allowed by the API key. An administrator of the local cluster can further configure additional permission constraints on local users so each user only gets access to the necessary remote data. Note it is only possible to further reduce the permissions allowed by the API key for individual local users. It is impossible to increase the permissions to go beyond what is allowed by the API key. - -If you run into any issues, refer to [Troubleshooting](remote-clusters-troubleshooting.md). - - -### Prerequisites and limitations [ece_prerequisites_and_limitations_4] - -* The local and remote deployments must be on version 8.12 or later. - - -### Create a cross-cluster API key on the remote deployment [ece_create_a_cross_cluster_api_key_on_the_remote_deployment_4] - -* On the deployment you will use as remote, use the [{{es}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-create-cross-cluster-api-key) or [Kibana](../api-keys/elasticsearch-api-keys.md) to create a cross-cluster API key. Configure it with access to the indices you want to use for {{ccs}} or {{ccr}}. -* Copy the encoded key (`encoded` in the response) to a safe location. You will need it in the next step. - - -### Configure the local deployment [ece_configure_the_local_deployment_2] - -The API key created previously will be used by the local deployment to authenticate with the corresponding set of permissions to the remote deployment. For that, you need to add the API key to the local deployment’s keystore. 
- -The steps to follow depend on whether the Certificate Authority (CA) of the remote environment’s Elasticsearch HTTPS server, proxy or, load balancing infrastructure is public or private. - -**The CA is public** - -::::{dropdown} -1. [Log into the Cloud UI](../deploy/cloud-enterprise/log-into-cloud-ui.md). -2. On the deployments page, select your deployment. - - Narrow the list by name, ID, or choose from several other filters. To further define the list, use a combination of filters. - -3. From the deployment menu, select **Security**. -4. Locate **Remote connections** and select **Add an API key**. - - 1. Add a setting: - - * For the **Setting name**, enter the the alias of your choice. You will use this alias to connect to the remote cluster later. It must be lowercase and only contain letters, numbers, dashes and underscores. - * For the **Secret**, paste the encoded cross-cluster API key. - - 2. Click **Add** to save the API key to the keystore. - -5. Restart the local deployment to reload the keystore with its new setting. To do that, go to the deployment’s main page (named after your deployment’s name), locate the **Actions** menu, and select **Restart Elasticsearch**.
- - ::::{note} - If the local deployment runs on version 8.13 or greater, you no longer need to perform this step because the keystore is reloaded automatically with the new API keys. - :::: - - -If you later need to update the remote connection with different permissions, you can replace the API key as detailed in [Update the access level of a remote cluster connection relying on a cross-cluster API key](ece-edit-remove-trusted-environment.md#ece-edit-remove-trusted-environment-api-key). - -:::: - - -**The CA is private** - -::::{dropdown} -1. [Log into the Cloud UI](../deploy/cloud-enterprise/log-into-cloud-ui.md). -2. On the deployments page, select your deployment. - - Narrow the list by name, ID, or choose from several other filters. To further define the list, use a combination of filters. - -3. Access the **Security** page of the deployment. -4. Select **Remote Connections > Add trusted environment** and choose **Self-managed**. Then click **Next**. -5. Select **API keys** as authentication mechanism and click **Next**. -6. Add a the API key: - - 1. Fill both fields. - - * For the **Setting name**, enter the the alias of your choice. You will use this alias to connect to the remote cluster later. It must be lowercase and only contain letters, numbers, dashes and underscores. - * For the **Secret**, paste the encoded cross-cluster API key. - - 2. Click **Add** to save the API key to the keystore. - 3. Repeat these steps for each API key you want to add. For example, if you want to use several clusters of the remote environment for CCR or CCS. - -7. Add the CA certificate of the remote self-managed environment. -8. Provide a name for the trusted environment. That name will appear in the trust summary of your deployment’s Security page. -9. Select **Create trust** to complete the configuration. -10. Restart the local deployment to reload the keystore with its new setting. To do that, go to the deployment’s main page (named after your deployment’s name), locate the **Actions** menu, and select **Restart Elasticsearch**.
- - ::::{note} - If the local deployment runs on version 8.13 or greater, you no longer need to perform this step because the keystore is reloaded automatically with the new API keys. - :::: - - -If you later need to update the remote connection with different permissions, you can replace the API key as detailed in [Update the access level of a remote cluster connection relying on a cross-cluster API key](ece-edit-remove-trusted-environment.md#ece-edit-remove-trusted-environment-api-key). - -:::: -:::::: - ::::::: You can now connect remotely to the trusted clusters. ## Connect to the remote cluster [ece_connect_to_the_remote_cluster_4] -On the local cluster, add the remote cluster using Kibana or the {{es}} API. +On the local cluster, add the remote cluster using {{kib}} or the {{es}} API. -### Using Kibana [ece_using_kibana_4] +### Using {{kib}} [ece_using_kibana_4] 1. Open the {{kib}} main menu, and select **Stack Management > Data > Remote Clusters > Add a remote cluster**. 2. Enable **Manually enter proxy address and server name**. 3. Fill in the following fields: * **Name**: This *cluster alias* is a unique identifier that represents the connection to the remote cluster and is used to distinguish between local and remote indices. - * **Proxy address**: This value can be found on the **Security** page of the Elastic Cloud Enterprise deployment you want to use as a remote.
+ * **Proxy address**: This value can be found on the **Security** page of the {{ece}} deployment you want to use as a remote.
-        ::::{tip}
-        If you’re using API keys as security model, change the port into `9443`.
-        ::::
+      ::::{tip}
+      If you’re using API keys as the security model, change the port to `9443`.
+      ::::

-    * **Server name**: This value can be found on the **Security** page of the Elastic Cloud Enterprise deployment you want to use as a remote.
+    * **Server name**: This value can be found on the **Security** page of the {{ece}} deployment you want to use as a remote.

-    :::{image} ../../images/cloud-enterprise-ce-copy-remote-cluster-parameters.png
-    :alt: Remote Cluster Parameters in Deployment
-    :class: screenshot
-    :::
+    :::{image} ../../images/cloud-enterprise-ce-copy-remote-cluster-parameters.png
+    :alt: Remote Cluster Parameters in Deployment
+    :class: screenshot
+    :::

-    ::::{note}
-    If you’re having issues establishing the connection and the remote cluster is part of an {{ece}} environment with a private certificate, make sure that the proxy address and server name match with the the certificate information. For more information, refer to [Administering endpoints in {{ece}}](/deploy-manage/deploy/cloud-enterprise/change-endpoint-urls.md).
-    ::::
+    ::::{note}
+    If you’re having issues establishing the connection and the remote cluster is part of an {{ece}} environment with a private certificate, make sure that the proxy address and server name match the certificate information. For more information, refer to [Administering endpoints in {{ece}}](/deploy-manage/deploy/cloud-enterprise/change-endpoint-urls.md).
+    ::::

4. Click **Next**.
5. Click **Add remote cluster** (you have already established trust in a previous step).

@@ -249,19 +249,19 @@ This configuration of remote clusters uses the [Proxy mode](/deploy-manage/remot



-### Using the Elasticsearch API [ece_using_the_elasticsearch_api_4]
+### Using the {{es}} API [ece_using_the_elasticsearch_api_4]

To configure a deployment as a remote cluster, use the [cluster update settings API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-put-settings). Configure the following fields:

* `mode`: `proxy`
-* `proxy_address`: This value can be found on the **Security** page of the Elastic Cloud Enterprise deployment you want to use as a remote. Also, using the API, this value can be obtained from the {{es}} resource info, concatenating the field `metadata.endpoint` and port `9300` using a semicolon.
+* `proxy_address`: This value can be found on the **Security** page of the {{ece}} deployment you want to use as a remote. Also, using the API, this value can be obtained from the {{es}} resource info, concatenating the field `metadata.endpoint` and port `9300` using a colon.

-::::{tip}
-If you’re using API keys as security model, change the port into `9443`.
-::::
+    ::::{tip}
+    If you’re using API keys as the security model, change the port to `9443`.
+    ::::

-* `server_name`: This value can be found on the **Security** page of the Elastic Cloud Enterprise deployment you want to use as a remote. Also, using the API, this can be obtained from the {{es}} resource info field `metadata.endpoint`.
+* `server_name`: This value can be found on the **Security** page of the {{ece}} deployment you want to use as a remote. Also, using the API, this can be obtained from the {{es}} resource info field `metadata.endpoint`.
This is an example of the API call to `_cluster/settings`: @@ -282,45 +282,11 @@ PUT /_cluster/settings } ``` -:::::{dropdown} **Stack Version above 6.7.0 and below 7.6.0** -::::{note} -This section only applies if you’re using TLS certificates as cross-cluster security model. -:::: - - -When the cluster to be configured as a remote is above 6.7.0 and below 7.6.0, the remote cluster must be configured using the [sniff mode](/deploy-manage/remote-clusters/remote-clusters-self-managed.md#sniff-mode) with the proxy field. For each remote cluster you need to pass the following fields: - -* **Proxy**: This value can be found on the **Security** page of the deployment you want to use as a remote under the name `Proxy Address`. Also, using the API, this can be obtained from the elasticsearch resource info, concatenating the fields `metadata.endpoint` and `metadata.ports.transport_passthrough` using a semicolon. -* **Seeds**: This field is an array that must contain only one value, which is the `server name` that can be found on the **Security** page of the ECE deployment you want to use as a remote concatenated with `:1`. Also, using the API, this can be obtained from the {{es}} resource info, concatenating the fields `metadata.endpoint` and `1` with a semicolon. -* **Mode**: sniff (or empty, since sniff is the default value) - -This is an example of the API call to `_cluster/settings`: - -```json -{ - "persistent": { - "cluster": { - "remote": { - "my-remote-cluster-1": { - "seeds": [ - "a542184a7a7d45b88b83f95392f450ab.192.168.44.10.ip.es.io:1" - ], - "proxy": "a542184a7a7d45b88b83f95392f450ab.192.168.44.10.ip.es.io:9400" - } - } - } - } -} -``` - -::::: - - -### Using the Elastic Cloud Enterprise RESTful API [ece_using_the_elastic_cloud_enterprise_restful_api_4] +### Using the {{ece}} RESTful API [ece_using_the_elastic_cloud_enterprise_restful_api_4] ::::{note} -This section only applies if you’re using TLS certificates as cross-cluster security model and when both clusters belong to the same ECE environment (for other scenarios, the {{es}} API should be used instead): +This section only applies if you’re using TLS certificates as cross-cluster security model and when both clusters belong to the same ECE environment. For other scenarios, the [{{es}} API](#ece_using_the_elasticsearch_api_4) should be used instead. :::: @@ -344,7 +310,7 @@ curl -k -H 'Content-Type: application/json' -X PUT -H "Authorization: ApiKey $EC `REF_ID_REMOTE` : The unique ID of the {{es}} resources inside your remote deployment (you can obtain these values through the API). -Note the following when using the Elastic Cloud Enterprise RESTful API: +Note the following when using the {{ece}} RESTful API: 1. A cluster alias must contain only letters, numbers, dashes (-), or underscores (_). 2. To learn about skipping disconnected clusters, refer to the [{{es}} documentation](/solutions/search/cross-cluster-search.md#skip-unavailable-clusters). @@ -357,11 +323,9 @@ curl -k -X GET -H "Authorization: ApiKey $ECE_API_KEY" https://COORDINATOR_HOST: ``` ::::{note} -The response includes just the remote clusters from the same ECE environment. In order to obtain the whole list of remote clusters, use Kibana or the {{es}} API [{{es}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-remote-info) directly. +The response includes just the remote clusters from the same ECE environment. 
In order to obtain the whole list of remote clusters, use {{kib}} or the [{{es}} API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-remote-info) directly. :::: - - ## Configure roles and users [ece_configure_roles_and_users_4] -To use a remote cluster for {{ccr}} or {{ccs}}, you need to create user roles with [remote indices privileges](../users-roles/cluster-or-deployment-auth/defining-roles.md#roles-remote-indices-priv) on the local cluster. Refer to [Configure roles and users](remote-clusters-api-key.md#remote-clusters-privileges-api-key). +To use a remote cluster for {{ccr}} or {{ccs}}, you need to create user roles with [remote indices privileges](../users-roles/cluster-or-deployment-auth/defining-roles.md#roles-remote-indices-priv) on the local cluster. Refer to [Configure roles and users](remote-clusters-api-key.md#remote-clusters-privileges-api-key). \ No newline at end of file diff --git a/deploy-manage/remote-clusters/eck-remote-clusters.md b/deploy-manage/remote-clusters/eck-remote-clusters.md index 33d951e7c..10c4e0f98 100644 --- a/deploy-manage/remote-clusters/eck-remote-clusters.md +++ b/deploy-manage/remote-clusters/eck-remote-clusters.md @@ -1,26 +1,36 @@ --- +applies_to: + deployment: + eck: ga +navigation_title: Elastic Cloud on Kubernetes mapped_pages: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-remote-clusters.html --- -# ECK remote clusters [k8s-remote-clusters] +# Remote clusters with {{eck}} [k8s-remote-clusters] -The [remote clusters module](/deploy-manage/remote-clusters/remote-clusters-self-managed.md) in Elasticsearch enables you to establish uni-directional connections to a remote cluster. This functionality is used in cross-cluster replication and cross-cluster search. +The [remote clusters module](/deploy-manage/remote-clusters.md) in Elasticsearch enables you to establish uni-directional connections to a remote cluster. This functionality is used in cross-cluster replication and cross-cluster search. When using remote cluster connections with ECK, the setup process depends on where the remote cluster is deployed. -## Connect from an Elasticsearch cluster running in the same Kubernetes cluster [k8s-remote-clusters-connect-internal] +## Connect from an {{es}} cluster running in the same Kubernetes cluster [k8s-remote-clusters-connect-internal] ::::{note} The remote clusters feature requires a valid Enterprise license or Enterprise trial license. Check [the license documentation](../license/manage-your-license-in-eck.md) for more details about managing licenses. :::: -To create a remote cluster connection to another Elasticsearch cluster deployed within the same Kubernetes cluster, specify the `remoteClusters` attribute in your Elasticsearch spec. +To create a remote cluster connection to another {{es}} cluster deployed within the same Kubernetes cluster, specify the `remoteClusters` attribute in your {{es}} spec. -### Security Models [k8s_security_models] +### Security models [k8s_security_models] -ECK supports two different security models: the API key based security model, and the certificate security model. These two security models are described in the [Remote clusters](/deploy-manage/remote-clusters/remote-clusters-self-managed.md#remote-clusters-security-models) section of the {{es}} documentation. +Before you start, consider the security model that you would prefer to use for authenticating remote connections between clusters, and follow the corresponding steps. 
+
+API key
+: For deployments based on {{stack}} 8.14 or later, you can use an API key to authenticate and authorize cross-cluster operations to a remote cluster. This model offers administrators of both the local and the remote deployment fine-grained access controls.
+
+TLS certificate (deprecated in {{stack}} 9.0.0)
+: This model uses mutual TLS authentication for cross-cluster operations. User authentication is performed on the local cluster and a user’s role names are passed to the remote cluster. A superuser on the local deployment gains total read access to the remote deployment, so it is only suitable for deployments that are in the same security domain.


### Using the API key security model [k8s_using_the_api_key_security_model]

@@ -29,7 +39,7 @@ To enable the API key security model you must first enable the remote cluster se

```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
-kind: Elasticsearch
+kind: Elasticsearch
metadata:
  name: cluster-two
  namespace: ns-two
@@ -47,13 +57,13 @@ Enabling the remote cluster server triggers a restart of the {{es}} cluster.
::::


-Once the remote cluster server is enabled and started on the remote cluster you can configure the Elasticsearch reference on the local cluster to include the desired permissions for cross-cluster search, and cross-cluster replication.
+Once the remote cluster server is enabled and started on the remote cluster, you can configure the {{es}} reference on the local cluster to include the desired permissions for cross-cluster search and cross-cluster replication.

-Permissions have to be included under the `apiKey` field. The API model of the Elasticsearch resource is compatible with the [{{es}} Cross-Cluster API key API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-create-cross-cluster-api-key) model. Fine-grained permissions can therefore be configured in both the `search` and `replication` fields:
+Permissions have to be included under the `apiKey` field. The API model of the {{es}} resource is compatible with the [{{es}} Cross-Cluster API key API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-create-cross-cluster-api-key) model. Fine-grained permissions can therefore be configured in both the `search` and `replication` fields:

```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
-kind: Elasticsearch
+kind: Elasticsearch
metadata:
  name: cluster-one
  namespace: ns-one
@@ -89,7 +99,7 @@ The following example describes how to configure `cluster-two` as a remote clust

```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
-kind: Elasticsearch
+kind: Elasticsearch
metadata:
  name: cluster-one
  namespace: ns-one
@@ -110,17 +120,17 @@ spec:



-## Connect from an Elasticsearch cluster running outside the Kubernetes cluster [k8s-remote-clusters-connect-external]
+## Connect from an {{es}} cluster running outside the Kubernetes cluster [k8s-remote-clusters-connect-external]

::::{note}
-While it is technically possible to configure remote cluster connections using older versions of Elasticsearch, this guide only covers the setup for Elasticsearch 7.6 and later. The setup process is significantly simplified in Elasticsearch 7.6 due to improved support for the indirection of Kubernetes services.
+While it is technically possible to configure remote cluster connections using older versions of {{es}}, this guide only covers the setup for {{es}} 7.6 and later. The setup process is significantly simplified in {{es}} 7.6 due to improved support for the indirection of Kubernetes services.
::::


-You can configure a remote cluster connection to an ECK-managed Elasticsearch cluster from another cluster running outside the Kubernetes cluster as follows:
+You can configure a remote cluster connection to an ECK-managed {{es}} cluster from another cluster running outside the Kubernetes cluster as follows:

1. Make sure that both clusters trust each other’s certificate authority.
-2. Configure the remote cluster connection through the Elasticsearch REST API.
+2. Configure the remote cluster connection through the {{es}} REST API.

Consider the following example:

@@ -131,7 +141,7 @@ To configure `cluster-one` as a remote cluster in `cluster-two`:


### Make sure both clusters trust each other’s certificate authority [k8s_make_sure_both_clusters_trust_each_others_certificate_authority]

-The certificate authority (CA) used by ECK to issue certificates for the Elasticsearch transport layer is stored in a secret named `-es-transport-certs-public`. Extract the certificate for `cluster-one` as follows:
+The certificate authority (CA) used by ECK to issue certificates for the {{es}} transport layer is stored in a secret named `<cluster_name>-es-transport-certs-public`. Extract the certificate for `cluster-one` as follows:

```sh
kubectl get secret cluster-one-es-transport-certs-public \
@@ -162,7 +172,7 @@ If `cluster-two` is also managed by an ECK instance, proceed as follows:

    ```yaml
    apiVersion: elasticsearch.k8s.elastic.co/v1
-    kind: Elasticsearch
+    kind: Elasticsearch
    metadata:
      name: cluster-two
    spec:
@@ -179,13 +189,13 @@ If `cluster-two` is also managed by an ECK instance, proceed as follows:

3. Repeat steps 1 and 2 to add the CA of `cluster-two` to `cluster-one` as well.


-### Configure the remote cluster connection through the Elasticsearch REST API [k8s_configure_the_remote_cluster_connection_through_the_elasticsearch_rest_api]
+### Configure the remote cluster connection through the {{es}} REST API [k8s_configure_the_remote_cluster_connection_through_the_elasticsearch_rest_api]

Expose the transport layer of `cluster-one`.

```yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
-kind: Elasticsearch
+kind: Elasticsearch
metadata:
  name: cluster-one
spec:
@@ -198,7 +208,7 @@ spec:

1. On cloud providers which support external load balancers, setting the type field to LoadBalancer provisions a load balancer for your Service. Alternatively, expose the service through one of the Kubernetes Ingress controllers that support TCP services.

-Finally, configure `cluster-one` as a remote cluster in `cluster-two` using the Elasticsearch REST API:
+Finally, configure `cluster-one` as a remote cluster in `cluster-two` using the {{es}} REST API:

```sh
PUT _cluster/settings
diff --git a/deploy-manage/remote-clusters/remote-clusters-api-key.md b/deploy-manage/remote-clusters/remote-clusters-api-key.md
index b56ec50ea..63e243673 100644
--- a/deploy-manage/remote-clusters/remote-clusters-api-key.md
+++ b/deploy-manage/remote-clusters/remote-clusters-api-key.md
@@ -1,4 +1,7 @@
---
+applies_to:
+  deployment:
+    self: ga
mapped_pages:
  - https://www.elastic.co/guide/en/elasticsearch/reference/current/remote-clusters-api-key.html
---

@@ -22,19 +25,19 @@ To add a remote cluster using API key authentication:

3. [Connect to a remote cluster](#remote-clusters-connect-api-key)
4. [Configure roles and users](#remote-clusters-privileges-api-key)

-If you run into any issues, refer to [Troubleshooting](remote-clusters-troubleshooting.md).
+If you run into any issues, refer to [Troubleshooting](/troubleshoot/elasticsearch/remote-clusters.md).
## Prerequisites [remote-clusters-prerequisites-api-key] * The {{es}} security features need to be enabled on both clusters, on every node. Security is enabled by default. If it’s disabled, set `xpack.security.enabled` to `true` in `elasticsearch.yml`. Refer to [General security settings](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/configuration-reference/security-settings.md#general-security-settings). -* The nodes of the local and remote clusters must be on version 8.10 or later. +* The nodes of the local and remote clusters must be on {{stack}} 8.14 or later. * The local and remote clusters must have an appropriate license. For more information, refer to [https://www.elastic.co/subscriptions](https://www.elastic.co/subscriptions). ## Establish trust with a remote cluster [remote-clusters-security-api-key] ::::{note} -If a remote cluster is part of an {{ess}} deployment, it has a valid certificate by default. You can therefore skip steps related to certificates in these instructions. +If a remote cluster is part of an {{ech}} deployment, it has a valid certificate by default. You can therefore skip steps related to certificates in these instructions. :::: @@ -99,7 +102,7 @@ If a remote cluster is part of an {{ess}} deployment, it has a valid certificate When prompted, enter the `CERT_PASSWORD` from the earlier step. 4. Restart the remote cluster. -5. On the remote cluster, generate a cross-cluster API key that provides access to the indices you want to use for {{ccs}} or {{ccr}}. You can use the [Create Cross-Cluster API key](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-create-cross-cluster-api-key) API or [Kibana](../api-keys/elasticsearch-api-keys.md). +5. On the remote cluster, generate a cross-cluster API key that provides access to the indices you want to use for {{ccs}} or {{ccr}}. You can use the [Create Cross-Cluster API key](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-create-cross-cluster-api-key) API or [{{kib}}](../api-keys/elasticsearch-api-keys.md). 6. Copy the encoded key (`encoded` in the response) to a safe location. You will need it to connect to the remote cluster later. diff --git a/deploy-manage/remote-clusters/remote-clusters-cert.md b/deploy-manage/remote-clusters/remote-clusters-cert.md index 14e428d06..54514c3cf 100644 --- a/deploy-manage/remote-clusters/remote-clusters-cert.md +++ b/deploy-manage/remote-clusters/remote-clusters-cert.md @@ -1,4 +1,7 @@ --- +applies_to: + deployment: + self: ga mapped_pages: - https://www.elastic.co/guide/en/elasticsearch/reference/current/remote-clusters-cert.html --- @@ -19,45 +22,15 @@ To add a remote cluster using TLS certificate authentication: 3. [Connect to a remote cluster](#remote-clusters-connect-cert) 4. [Configure roles and users for remote clusters](#remote-clusters-privileges-cert) -If you run into any issues, refer to [Troubleshooting](remote-clusters-troubleshooting.md). +If you run into any issues, refer to [Troubleshooting](/troubleshoot/elasticsearch/remote-clusters.md). ## Prerequisites [remote-clusters-prerequisites-cert] 1. The {{es}} security features need to be enabled on both clusters, on every node. Security is enabled by default. If it’s disabled, set `xpack.security.enabled` to `true` in `elasticsearch.yml`. Refer to [General security settings](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/configuration-reference/security-settings.md#general-security-settings). 2. 
The local and remote clusters versions must be compatible. - * Any node can communicate with another node on the same major version. For example, 7.0 can talk to any 7.x node. - * Only nodes on the last minor version of a certain major version can communicate with nodes on the following major version. In the 6.x series, 6.8 can communicate with any 7.x node, while 6.7 can only communicate with 7.0. - * Version compatibility is symmetric, meaning that if 6.7 can communicate with 7.0, 7.0 can also communicate with 6.7. The following table depicts version compatibility between local and remote nodes. - - :::::{dropdown} Version compatibility table - | | | - | --- | --- | - | | Local cluster | - | Remote cluster | 5.0–5.5 | 5.6 | 6.0–6.6 | 6.7 | 6.8 | 7.0 | 7.1–7.16 | 7.17 | 8.0–9.0 | - | 5.0–5.5 | ![Yes](https://doc-icons.s3.us-east-2.amazonaws.com/icon-yes.png "") | ![Yes](https://doc-icons.s3.us-east-2.amazonaws.com/icon-yes.png "") | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | - | 5.6 | ![Yes](https://doc-icons.s3.us-east-2.amazonaws.com/icon-yes.png "") | ![Yes](https://doc-icons.s3.us-east-2.amazonaws.com/icon-yes.png "") | ![Yes](https://doc-icons.s3.us-east-2.amazonaws.com/icon-yes.png "") | ![Yes](https://doc-icons.s3.us-east-2.amazonaws.com/icon-yes.png "") | ![Yes](https://doc-icons.s3.us-east-2.amazonaws.com/icon-yes.png "") | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | - | 6.0–6.6 | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | ![Yes](https://doc-icons.s3.us-east-2.amazonaws.com/icon-yes.png "") | ![Yes](https://doc-icons.s3.us-east-2.amazonaws.com/icon-yes.png "") | ![Yes](https://doc-icons.s3.us-east-2.amazonaws.com/icon-yes.png "") | ![Yes](https://doc-icons.s3.us-east-2.amazonaws.com/icon-yes.png "") | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | - | 6.7 | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | ![Yes](https://doc-icons.s3.us-east-2.amazonaws.com/icon-yes.png "") | ![Yes](https://doc-icons.s3.us-east-2.amazonaws.com/icon-yes.png "") | ![Yes](https://doc-icons.s3.us-east-2.amazonaws.com/icon-yes.png "") | ![Yes](https://doc-icons.s3.us-east-2.amazonaws.com/icon-yes.png "") | ![Yes](https://doc-icons.s3.us-east-2.amazonaws.com/icon-yes.png "") | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | - | 6.8 | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | ![Yes](https://doc-icons.s3.us-east-2.amazonaws.com/icon-yes.png "") | 
![Yes](https://doc-icons.s3.us-east-2.amazonaws.com/icon-yes.png "") | ![Yes](https://doc-icons.s3.us-east-2.amazonaws.com/icon-yes.png "") | ![Yes](https://doc-icons.s3.us-east-2.amazonaws.com/icon-yes.png "") | ![Yes](https://doc-icons.s3.us-east-2.amazonaws.com/icon-yes.png "") | ![Yes](https://doc-icons.s3.us-east-2.amazonaws.com/icon-yes.png "") | ![Yes](https://doc-icons.s3.us-east-2.amazonaws.com/icon-yes.png "") | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | - | 7.0 | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | ![Yes](https://doc-icons.s3.us-east-2.amazonaws.com/icon-yes.png "") | ![Yes](https://doc-icons.s3.us-east-2.amazonaws.com/icon-yes.png "") | ![Yes](https://doc-icons.s3.us-east-2.amazonaws.com/icon-yes.png "") | ![Yes](https://doc-icons.s3.us-east-2.amazonaws.com/icon-yes.png "") | ![Yes](https://doc-icons.s3.us-east-2.amazonaws.com/icon-yes.png "") | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | - | 7.1–7.16 | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | ![Yes](https://doc-icons.s3.us-east-2.amazonaws.com/icon-yes.png "") | ![Yes](https://doc-icons.s3.us-east-2.amazonaws.com/icon-yes.png "") | ![Yes](https://doc-icons.s3.us-east-2.amazonaws.com/icon-yes.png "") | ![Yes](https://doc-icons.s3.us-east-2.amazonaws.com/icon-yes.png "") | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | - | 7.17 | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | ![Yes](https://doc-icons.s3.us-east-2.amazonaws.com/icon-yes.png "") | ![Yes](https://doc-icons.s3.us-east-2.amazonaws.com/icon-yes.png "") | ![Yes](https://doc-icons.s3.us-east-2.amazonaws.com/icon-yes.png "") | ![Yes](https://doc-icons.s3.us-east-2.amazonaws.com/icon-yes.png "") | ![Yes](https://doc-icons.s3.us-east-2.amazonaws.com/icon-yes.png "") | - | 8.0–9.0 | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | ![Yes](https://doc-icons.s3.us-east-2.amazonaws.com/icon-yes.png "") | ![Yes](https://doc-icons.s3.us-east-2.amazonaws.com/icon-yes.png "") | - - ::::{note} - This documentation is for {{es}} version 9.0.0-beta1, which is not yet released. The above compatibility table applies if both clusters are running a released version of {{es}}, or if one of the clusters is running a released version and the other is running a pre-release build with a later build date. A cluster running a pre-release build of {{es}} can also communicate with remote clusters running the same pre-release build. 
Running a mix of pre-release builds is unsupported and typically will not work, even if the builds have the same version number. - :::: - - - ::::: - - - ::::{important} - Elastic only supports {{ccs}} on a subset of these configurations. See [Supported {{ccs}} configurations](../../solutions/search/cross-cluster-search.md#ccs-supported-configurations). - :::: - - + :::{include} _snippets/remote-cluster-certificate-compatibility.md + ::: ## Establish trust with a remote cluster [remote-clusters-security-cert] @@ -280,7 +253,7 @@ The following requests use the [create or update roles API](https://www.elastic. The {{ccr}} user requires different cluster and index privileges on the remote cluster and local cluster. Use the following requests to create separate roles on the local and remote clusters, and then create a user with the required roles. -##### Remote cluster [_remote_cluster] +#### Remote cluster [_remote_cluster] On the remote cluster that contains the leader index, the {{ccr}} role requires the `read_ccr` cluster privilege, and `monitor` and `read` privileges on the leader index. @@ -317,7 +290,7 @@ POST /_security/role/remote-replication ``` -##### Local cluster [_local_cluster] +#### Local cluster [_local_cluster] On the local cluster that contains the follower index, the `remote-replication` role requires the `manage_ccr` cluster privilege, and `monitor`, `read`, `write`, and `manage_follow_index` privileges on the follower index. @@ -368,7 +341,7 @@ You can then [configure {{ccr}}](../tools/cross-cluster-replication/set-up-cross The {{ccs}} user requires different cluster and index privileges on the remote cluster and local cluster. The following requests create separate roles on the local and remote clusters, and then create a user with the required roles. -##### Remote cluster [_remote_cluster_2] +#### Remote cluster [_remote_cluster_2] On the remote cluster, the {{ccs}} role requires the `read` and `read_cross_cluster` privileges for the target indices. @@ -402,7 +375,7 @@ POST /_security/role/remote-search ``` -##### Local cluster [_local_cluster_2] +#### Local cluster [_local_cluster_2] On the local cluster, which is the cluster used to initiate cross cluster search, a user only needs the `remote-search` role. The role privileges can be empty. @@ -445,7 +418,7 @@ To grant users read access on the remote data streams and indices, you must crea For example, you might be actively indexing {{ls}} data on a local cluster and and periodically offload older time-based indices to an archive on your remote cluster. You want to search across both clusters, so you must enable {{kib}} users on both clusters. -##### Local cluster [_local_cluster_3] +#### Local cluster [_local_cluster_3] On the local cluster, create a `logstash-reader` role that grants `read` and `view_index_metadata` privileges on the local `logstash-*` indices. @@ -485,7 +458,7 @@ PUT /_security/user/cross-cluster-kibana ``` -##### Remote cluster [_remote_cluster_3] +#### Remote cluster [_remote_cluster_3] On the remote cluster, create a `logstash-reader` role that grants the `read_cross_cluster` privilege and `read` and `view_index_metadata` privileges for the `logstash-*` indices. 
diff --git a/deploy-manage/remote-clusters/remote-clusters-migrate.md b/deploy-manage/remote-clusters/remote-clusters-migrate.md index a666ca6a5..fc36820df 100644 --- a/deploy-manage/remote-clusters/remote-clusters-migrate.md +++ b/deploy-manage/remote-clusters/remote-clusters-migrate.md @@ -1,4 +1,7 @@ --- +applies_to: + deployment: + self: ga navigation_title: "Migrate from certificate to API key authentication" mapped_pages: - https://www.elastic.co/guide/en/elasticsearch/reference/current/remote-clusters-migrate.html @@ -25,11 +28,11 @@ For these reasons, you may prefer to migrate a remote cluster in-place, by follo 5. [Resume cross-cluster operations](#remote-clusters-migration-resume) 6. [Disable certificate based authentication and authorization](#remote-clusters-migration-disable-cert) -If you run into any issues, refer to [Troubleshooting](remote-clusters-troubleshooting.md). +If you run into any issues, refer to [Troubleshooting](/troubleshoot/elasticsearch/remote-clusters.md). ## Prerequisites [remote-clusters-migration-prerequisites] -* The nodes of the local and remote clusters must be on version 8.10 or later. +* The nodes of the local and remote clusters must be on {{stack}} 8.14 or later. * The local and remote clusters must have an appropriate license. For more information, refer to [https://www.elastic.co/subscriptions](https://www.elastic.co/subscriptions). @@ -96,7 +99,7 @@ On the remote cluster: When prompted, enter the `CERT_PASSWORD` from the earlier step. 4. Restart the remote cluster. -5. On the remote cluster, generate a cross-cluster API key that provides access to the indices you want to use for {{ccs}} or {{ccr}}. You can use the [Create Cross-Cluster API key](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-create-cross-cluster-api-key) API or [Kibana](../api-keys/elasticsearch-api-keys.md). +5. On the remote cluster, generate a cross-cluster API key that provides access to the indices you want to use for {{ccs}} or {{ccr}}. You can use the [Create Cross-Cluster API key](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-create-cross-cluster-api-key) API or [{{kib}}](../api-keys/elasticsearch-api-keys.md). 6. Copy the encoded key (`encoded` in the response) to a safe location. You will need it to connect to the remote cluster later. @@ -222,7 +225,7 @@ Resume any persistent tasks that you stopped earlier. Tasks should be restarted ## Disable certificate based authentication and authorization [remote-clusters-migration-disable-cert] ::::{note} -Only proceed with this step if the migration has been proved successful on the local cluster. If the migration is unsuccessful, either [find out what the problem is and attempt to fix it](remote-clusters-troubleshooting.md) or [roll back](#remote-clusters-migration-rollback). +Only proceed with this step if the migration has been proved successful on the local cluster. If the migration is unsuccessful, either [find out what the problem is and attempt to fix it](/troubleshoot/elasticsearch/remote-clusters.md) or [roll back](#remote-clusters-migration-rollback). 
:::: diff --git a/deploy-manage/remote-clusters/remote-clusters-self-managed.md b/deploy-manage/remote-clusters/remote-clusters-self-managed.md index 0e952b2d0..624fb4e0c 100644 --- a/deploy-manage/remote-clusters/remote-clusters-self-managed.md +++ b/deploy-manage/remote-clusters/remote-clusters-self-managed.md @@ -1,48 +1,30 @@ --- +applies_to: + deployment: + self: ga +navigation_title: Self-managed {{stack}} mapped_pages: - https://www.elastic.co/guide/en/elasticsearch/reference/current/remote-clusters.html --- -# Remote clusters (self-managed) [remote-clusters] +# Remote clusters with self-managed installations [remote-clusters] -You can connect a local cluster to other {{es}} clusters, known as *remote clusters*. Remote clusters can be located in different datacenters or geographic regions, and contain indices or data streams that can be replicated with {{ccr}} or searched by a local cluster using {{ccs}}. - - -## {{ccr-cap}} [remote-clusters-ccr] - -With [{{ccr}}](/deploy-manage/tools/cross-cluster-replication.md), you ingest data to an index on a remote cluster. This *leader* index is replicated to one or more read-only *follower* indices on your local cluster. Creating a multi-cluster architecture with {{ccr}} enables you to configure disaster recovery, bring data closer to your users, or establish a centralized reporting cluster to process reports locally. - - -## {{ccs-cap}} [remote-clusters-ccs] - -[{{ccs-cap}}](/solutions/search/cross-cluster-search.md) enables you to run a search request against one or more remote clusters. This capability provides each region with a global view of all clusters, allowing you to send a search request from a local cluster and return results from all connected remote clusters. For full {{ccs}} capabilities, the local and remote cluster must be on the same [subscription level](https://www.elastic.co/subscriptions). +The instructions that follow describe how to create a remote connection from a self-managed cluster. You can also set up {{ccs}} and {{ccr}} from an [{{ech}} deployment](/deploy-manage/remote-clusters/ec-enable-ccs.md) or from an [{{ece}} deployment](/deploy-manage/remote-clusters/ece-enable-ccs.md). ## Add remote clusters [add-remote-clusters] -::::{note} -The instructions that follow describe how to create a remote connection from a self-managed cluster. You can also set up {{ccs}} and {{ccr}} from an [{{ess}} deployment](/deploy-manage/remote-clusters/ec-enable-ccs.md) or from an [{{ece}} deployment](/deploy-manage/remote-clusters/ece-enable-ccs.md). -:::: - - To add remote clusters, you can choose between [two security models](#remote-clusters-security-models) and [two connection modes](#sniff-proxy-modes). Both security models are compatible with either of the connection modes. ### Security models [remote-clusters-security-models] -API key based security model -: For clusters on version 8.14 or later, you can use an API key to authenticate and authorize cross-cluster operations to a remote cluster. This model offers administrators of both the local and the remote cluster fine-grained access controls. [Add remote clusters using API key authentication](remote-clusters-api-key.md). +API key +: For clusters on {{stack}} 8.14 or later, you can use an API key to authenticate and authorize cross-cluster operations to a remote cluster. This model offers administrators of both the local and the remote cluster fine-grained access controls. [Add remote clusters using API key authentication](remote-clusters-api-key.md). 
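To make the two models easier to compare, here is a minimal sketch of the API key flow; the key name and index patterns below are placeholders rather than values taken from this page. An administrator of the remote cluster creates a cross-cluster API key that scopes exactly what a local cluster may search or replicate, and the encoded key is then added to the {{es}} keystore on every node of the local cluster:

```console
POST /_security/cross_cluster/api_key
{
  "name": "my-local-cluster-key",
  "access": {
    // indices the local cluster may target with cross-cluster search
    "search": [
      { "names": ["logs-*"] }
    ],
    // indices the local cluster may follow with cross-cluster replication
    "replication": [
      { "names": ["archive-*"] }
    ]
  }
}
```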
-Certificate based security model +TLS certificate (deprecated in {{stack}} 9.0.0) : Uses mutual TLS authentication for cross-cluster operations. User authentication is performed on the local cluster and a user’s role names are passed to the remote cluster. In this model, a superuser on the local cluster gains total read access to the remote cluster, so it is only suitable for clusters that are in the same security domain. [Add remote clusters using TLS certificate authentication](remote-clusters-cert.md). - ::::{admonition} Deprecated in 9.0.0. - :class: warning - - Use [API key based security model](remote-clusters-api-key.md) instead. - :::: - - ### Connection modes [sniff-proxy-modes] diff --git a/deploy-manage/remote-clusters/remote-clusters-settings.md b/deploy-manage/remote-clusters/remote-clusters-settings.md index d733f90f8..acf9c3610 100644 --- a/deploy-manage/remote-clusters/remote-clusters-settings.md +++ b/deploy-manage/remote-clusters/remote-clusters-settings.md @@ -1,4 +1,7 @@ --- +applies_to: + deployment: + self: ga mapped_pages: - https://www.elastic.co/guide/en/elasticsearch/reference/current/remote-clusters-settings.html --- @@ -20,7 +23,7 @@ The following settings apply to both [sniff mode](/deploy-manage/remote-clusters : Per cluster boolean setting that allows to skip specific clusters when no nodes belonging to them are available and they are the target of a remote cluster request. ::::{important} -In Elasticsearch 8.15, the default value for `skip_unavailable` was changed from `false` to `true`. Before Elasticsearch 8.15, if you want a cluster to be treated as optional for a {{ccs}}, then you need to set that configuration. From Elasticsearch 8.15 forward, you need to set the configuration in order to make a cluster required for the {{ccs}}. Once you upgrade the local ("querying") cluster search coordinator node (the node you send CCS requests to) to 8.15 or later, any remote clusters that do not have an explicit setting for `skip_unavailable` will immediately change over to using the new default of true. This is true regardless of whether you have upgraded the remote clusters to 8.15, as the `skip_unavailable` search behavior is entirely determined by the setting on the local cluster where you configure the remotes. +In {{es}} 8.15, the default value for `skip_unavailable` was changed from `false` to `true`. Before {{es}} 8.15, if you want a cluster to be treated as optional for a {{ccs}}, then you need to set that configuration. From {{es}} 8.15 forward, you need to set the configuration in order to make a cluster required for the {{ccs}}. Once you upgrade the local ("querying") cluster search coordinator node (the node you send CCS requests to) to 8.15 or later, any remote clusters that do not have an explicit setting for `skip_unavailable` will immediately change over to using the new default of true. This is true regardless of whether you have upgraded the remote clusters to 8.15, as the `skip_unavailable` search behavior is entirely determined by the setting on the local cluster where you configure the remotes. 
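For example, once the local coordinating cluster is on 8.15 or later, a remote that should remain required for a search has to have the setting pinned explicitly on the local cluster. A sketch, with `cluster_one` as a placeholder alias:

```console
PUT /_cluster/settings
{
  "persistent": {
    "cluster.remote.cluster_one.skip_unavailable": false
  }
}
```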
:::: diff --git a/deploy-manage/remote-clusters/remote-clusters-troubleshooting.md b/deploy-manage/remote-clusters/remote-clusters-troubleshooting.md deleted file mode 100644 index d5f068b7f..000000000 --- a/deploy-manage/remote-clusters/remote-clusters-troubleshooting.md +++ /dev/null @@ -1,406 +0,0 @@ ---- -navigation_title: "Troubleshooting" -mapped_pages: - - https://www.elastic.co/guide/en/elasticsearch/reference/current/remote-clusters-troubleshooting.html ---- - - - -# Troubleshooting [remote-clusters-troubleshooting] - - -You may encounter several issues when setting up a remote cluster for {{ccr}} or {{ccs}}. - -## General troubleshooting [remote-clusters-troubleshooting-general] - -### Checking whether a remote cluster has connected successfully [remote-clusters-troubleshooting-check-connection] - -A successful call to the cluster settings update API for adding or updating remote clusters does not necessarily mean the configuration is successful. Use the [remote cluster info API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-remote-info) to verify that a local cluster is successfully connected to a remote cluster. - -```console -GET /_remote/info -``` - -The API should return `"connected" : true`. When using [API key authentication](remote-clusters-api-key.md), it should also return `"cluster_credentials": "::es_redacted::"`. - -```console-result -{ - "cluster_one" : { - "seeds" : [ - "127.0.0.1:9443" - ], - "connected" : true, <1> - "num_nodes_connected" : 1, - "max_connections_per_cluster" : 3, - "initial_connect_timeout" : "30s", - "skip_unavailable" : false, - "cluster_credentials": "::es_redacted::", <2> - "mode" : "sniff" - } -} -``` - -1. The remote cluster has connected successfully. -2. If present, indicates the remote cluster has connected using [API key authentication](remote-clusters-api-key.md) instead of [certificate based authentication](remote-clusters-cert.md). - - - -### Enabling the remote cluster server [remote-clusters-troubleshooting-enable-server] - -When using API key authentication, cross-cluster traffic happens on the remote cluster interface, instead of the transport interface. The remote cluster interface is not enabled by default. This means a node is not ready to accept incoming cross-cluster requests by default, while it is ready to send outgoing cross-cluster requests. Ensure you’ve enabled the remote cluster server on every node of the remote cluster. In `elasticsearch.yml`: - -* Set [`remote_cluster_server.enabled`](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/configuration-reference/networking-settings.md#remote-cluster-network-settings) to `true`. -* Configure the bind and publish address for remote cluster server traffic, for example using [`remote_cluster.host`](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/configuration-reference/networking-settings.md#remote-cluster-network-settings). Without configuring the address, remote cluster traffic may be bound to the local interface, and remote clusters running on other machines can’t connect. -* Optionally, configure the remote server port using [`remote_cluster.port`](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/configuration-reference/networking-settings.md#remote_cluster.port) (defaults to `9443`). - - - -## Common issues [remote-clusters-troubleshooting-common-issues] - -The following issues are listed in the order they may occur while setting up a remote cluster. 
- -### Remote cluster not reachable [remote-clusters-not-reachable] - -#### Symptom [_symptom] - -A local cluster may not be able to reach a remote cluster for many reasons. For example, the remote cluster server may not be enabled, an incorrect host or port may be configured, or a firewall may be blocking traffic. When a remote cluster is not reachable, check the logs of the local cluster for a `connect_exception`. - -When the remote cluster is configured using proxy mode: - -```txt -[2023-06-28T16:36:47,264][WARN ][o.e.t.ProxyConnectionStrategy] [local-node] failed to open any proxy connections to cluster [my] -org.elasticsearch.transport.ConnectTransportException: [][192.168.0.42:9443] **connect_exception** -``` - -When the remote cluster is configured using sniff mode: - -```txt -[2023-06-28T16:38:37,731][WARN ][o.e.t.SniffConnectionStrategy] [local-node] fetching nodes from external cluster [my] failed -org.elasticsearch.transport.ConnectTransportException: [][192.168.0.42:9443] **connect_exception** -``` - - -#### Resolution [_resolution] - -* Check the host and port for the remote cluster are correct. -* Ensure the [remote cluster server is enabled](#remote-clusters-troubleshooting-enable-server) on the remote cluster. -* Ensure no firewall is blocking the communication. - - - -### Remote cluster connection is unreliable [remote-clusters-unreliable-network] - -#### Symptom [_symptom_2] - -The local cluster can connect to the remote cluster, but the connection does not work reliably. For example, some cross-cluster requests may succeed while others report connection errors, time out, or appear to be stuck waiting for the remote cluster to respond. - -When {{es}} detects that the remote cluster connection is not working, it will report the following message in its logs: - -```txt -[2023-06-28T16:36:47,264][INFO ][o.e.t.ClusterConnectionManager] [local-node] transport connection to [{my-remote#192.168.0.42:9443}{...}] closed by remote -``` - -This message will also be logged if the node of the remote cluster to which {{es}} is connected is shut down or restarted. - -Note that with some network configurations it could take minutes or hours for the operating system to detect that a connection has stopped working. Until the failure is detected and reported to {{es}}, requests involving the remote cluster may time out or may appear to be stuck. - - -#### Resolution [_resolution_2] - -* Ensure that the network between the clusters is as reliable as possible. -* Ensure that the network is configured to permit [Long-lived idle connections](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/configuration-reference/networking-settings.md#long-lived-connections). -* Ensure that the network is configured to detect faulty connections quickly. In particular, you must enable and fully support TCP keepalives, and set a short [retransmission timeout](../deploy/self-managed/system-config-tcpretries.md). -* On Linux systems, execute `ss -tonie` to verify the details of the configuration of each network connection between the clusters. -* If the problems persist, capture network packets at both ends of the connection and analyse the traffic to look for delays and lost messages. - - - -### TLS trust not established [remote-clusters-troubleshooting-tls-trust] - -TLS can be misconfigured on the local or the remote cluster. The result is that the local cluster does not trust the certificate presented by the remote cluster. 
- -#### Symptom [_symptom_3] - -The local cluster logs `failed to establish trust with server`: - -```txt -[2023-06-29T09:40:55,465][WARN ][o.e.c.s.DiagnosticTrustManager] [local-node] **failed to establish trust with server** at [192.168.0.42]; the server provided a certificate with subject name [CN=remote_cluster], fingerprint [529de35e15666ffaa26afa50876a2a48119db03a], no keyUsage and no extendedKeyUsage; the certificate is valid between [2023-01-29T12:08:37Z] and [2032-08-29T12:08:37Z] (current time is [2023-08-16T23:40:55.464275Z], certificate dates are valid); the session uses cipher suite [TLS_AES_256_GCM_SHA384] and protocol [TLSv1.3]; the certificate has subject alternative names [DNS:localhost,DNS:localhost6.localdomain6,IP:127.0.0.1,IP:0:0:0:0:0:0:0:1,DNS:localhost4,DNS:localhost6,DNS:localhost.localdomain,DNS:localhost4.localdomain4,IP:192.168.0.42]; the certificate is issued by [CN=Elastic Auto RemoteCluster CA] but the server did not provide a copy of the issuing certificate in the certificate chain; this ssl context ([(shared) (with trust configuration: JDK-trusted-certs)]) is not configured to trust that issuer but trusts [97] other issuers -sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target -``` - -The remote cluster logs `client did not trust this server's certificate`: - -```txt -[2023-06-29T09:40:55,478][WARN ][o.e.x.c.s.t.n.SecurityNetty4Transport] [remote-node] **client did not trust this server's certificate**, closing connection Netty4TcpChannel{localAddress=/192.168.0.42:9443, remoteAddress=/192.168.0.84:57305, profile=_remote_cluster} -``` - - -#### Resolution [_resolution_3] - -Read the warn log message on the local cluster carefully to determine the exact cause of the failure. For example: - -* Is the remote cluster certificate not signed by a trusted CA? This is the most likely cause. -* Is hostname verification failing? -* Is the certificate expired? - -Once you know the cause, you should be able to fix it by adjusting the remote cluster related SSL settings on either the local cluster or the remote cluster. - -Often, the issue is on the local cluster. For example, fix it by configuring necessary trusted CAs (`xpack.security.remote_cluster_client.ssl.certificate_authorities`). - -If you change the `elasticsearch.yml` file, the associated cluster needs to be restarted for the changes to take effect. - - - - -## API key authentication issues [remote-clusters-troubleshooting-api-key] - -### Connecting to transport port when using API key authentication [remote-clusters-troubleshooting-transport-port-api-key] - -When using API key authentication, a local cluster should connect to a remote cluster’s remote cluster server port (defaults to `9443`) instead of the transport port (defaults to `9300`). A misconfiguration can lead to a number of symptoms: - -#### Symptom 1 [_symptom_1] - -It’s recommended to use different CAs and certificates for the transport interface and the remote cluster server interface. If this recommendation is followed, a remote cluster client node does not trust the server certificate presented by a remote cluster on the transport interface. 
- -The local cluster logs `failed to establish trust with server`: - -```txt -[2023-06-28T12:48:46,575][WARN ][o.e.c.s.DiagnosticTrustManager] [local-node] **failed to establish trust with server** at [1192.168.0.42]; the server provided a certificate with subject name [CN=transport], fingerprint [c43e628be2a8aaaa4092b82d78f2bc206c492322], no keyUsage and no extendedKeyUsage; the certificate is valid between [2023-01-29T12:05:53Z] and [2032-08-29T12:05:53Z] (current time is [2023-06-28T02:48:46.574738Z], certificate dates are valid); the session uses cipher suite [TLS_AES_256_GCM_SHA384] and protocol [TLSv1.3]; the certificate has subject alternative names [DNS:localhost,DNS:localhost6.localdomain6,IP:127.0.0.1,IP:0:0:0:0:0:0:0:1,DNS:localhost4,DNS:localhost6,DNS:localhost.localdomain,DNS:localhost4.localdomain4,IP:192.168.0.42]; the certificate is issued by [CN=Elastic Auto Transport CA] but the server did not provide a copy of the issuing certificate in the certificate chain; this ssl context ([xpack.security.remote_cluster_client.ssl (with trust configuration: PEM-trust{/rcs2/ssl/remote-cluster-ca.crt})]) is not configured to trust that issuer, it only trusts the issuer [CN=Elastic Auto RemoteCluster CA] with fingerprint [ba2350661f66e46c746c1629f0c4b645a2587ff4] -sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target -``` - -The remote cluster logs `client did not trust this server's certificate`: - -```txt -[2023-06-28T12:48:46,584][WARN ][o.e.x.c.s.t.n.SecurityNetty4Transport] [remote-node] **client did not trust this server's certificate**, closing connection Netty4TcpChannel{localAddress=/192.168.0.42:9309, remoteAddress=/192.168.0.84:60810, profile=default} -``` - - -#### Symptom 2 [_symptom_2_2] - -The CA and certificate can be shared between the transport and remote cluster server interface. Since a remote cluster client does not have a client certificate by default, the server will fail to verify the client certificate. - -The local cluster logs `Received fatal alert: bad_certificate`: - -```txt -[2023-06-28T12:43:30,705][WARN ][o.e.t.TcpTransport ] [local-node] exception caught on transport layer [Netty4TcpChannel{localAddress=/192.168.0.84:60738, remoteAddress=/192.168.0.42:9309, profile=_remote_cluster}], closing connection -io.netty.handler.codec.DecoderException: javax.net.ssl.SSLHandshakeException: **Received fatal alert: bad_certificate** -``` - -The remote cluster logs `Empty client certificate chain`: - -```txt -[2023-06-28T12:43:30,772][WARN ][o.e.t.TcpTransport ] [remote-node] exception caught on transport layer [Netty4TcpChannel{localAddress=/192.168.0.42:9309, remoteAddress=/192.168.0.84:60783, profile=default}], closing connection -io.netty.handler.codec.DecoderException: javax.net.ssl.SSLHandshakeException: **Empty client certificate chain** -``` - - -#### Symptom 3 [_symptom_3_2] - -If the remote cluster client is configured for mTLS and provides a valid client certificate, the connection fails because the client does not send the expected authentication header. 
- -The local cluster logs `missing authentication`: - -```txt -[2023-06-28T13:04:52,710][WARN ][o.e.t.ProxyConnectionStrategy] [local-node] failed to open any proxy connections to cluster [my] -org.elasticsearch.transport.RemoteTransportException: [remote-node][192.168.0.42:9309][cluster:internal/remote_cluster/handshake] -Caused by: org.elasticsearch.ElasticsearchSecurityException: **missing authentication** credentials for action [cluster:internal/remote_cluster/handshake] -``` - -This does not show up in the logs of the remote cluster. - - -#### Symptom 4 [_symptom_4] - -If anonymous access is enabled on the remote cluster and it does not require authentication, depending on the privileges of the anonymous user, the local cluster may log the following. - -If the anonymous user does not the have necessary privileges to make a connection, the local cluster logs `unauthorized`: - -```txt -org.elasticsearch.transport.RemoteTransportException: [remote-node][192.168.0.42:9309][cluster:internal/remote_cluster/handshake] -Caused by: org.elasticsearch.ElasticsearchSecurityException: action [cluster:internal/remote_cluster/handshake] is **unauthorized** for user [anonymous_foo] with effective roles [reporting_user], this action is granted by the cluster privileges [cross_cluster_search,cross_cluster_replication,manage,all] -``` - -If the anonymous user has necessary privileges, for example it is a superuser, the local cluster logs `requires channel profile to be [_remote_cluster], but got [default]`: - -```txt -[2023-06-28T13:09:52,031][WARN ][o.e.t.ProxyConnectionStrategy] [local-node] failed to open any proxy connections to cluster [my] -org.elasticsearch.transport.RemoteTransportException: [remote-node][192.168.0.42:9309][cluster:internal/remote_cluster/handshake] -Caused by: java.lang.IllegalArgumentException: remote cluster handshake action **requires channel profile to be [_remote_cluster], but got [default]** -``` - - -#### Resolution [_resolution_4] - -Check the port number and ensure you are indeed connecting to the remote cluster server instead of the transport interface. - - - -### Connecting without a cross-cluster API key [remote-clusters-troubleshooting-no-api-key] - -A local cluster uses the presence of a cross-cluster API key to determine the model with which it connects to a remote cluster. If a cross-cluster API key is present, it uses API key based authentication. Otherwise, it uses certificate based authentication. You can check what model is being used with the [remote cluster info API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-remote-info) on the local cluster: - -```console -GET /_remote/info -``` - -The API should return `"connected" : true`. When using [API key authentication](remote-clusters-api-key.md), it should also return `"cluster_credentials": "::es_redacted::"`. - -```console-result -{ - "cluster_one" : { - "seeds" : [ - "127.0.0.1:9443" - ], - "connected" : true, <1> - "num_nodes_connected" : 1, - "max_connections_per_cluster" : 3, - "initial_connect_timeout" : "30s", - "skip_unavailable" : false, - "cluster_credentials": "::es_redacted::", <2> - "mode" : "sniff" - } -} -``` - -1. The remote cluster has connected successfully. -2. If present, indicates the remote cluster has connected using [API key authentication](remote-clusters-api-key.md) instead of [certificate based authentication](remote-clusters-cert.md). - - -Besides checking the response of the remote cluster info API, you can also check the logs. 
- -#### Symptom 1 [_symptom_1_2] - -If no cross-cluster API key is used, the local cluster uses the certificate based authentication method, and connects to the remote cluster using the TLS configuration of the transport interface. If the remote cluster has different TLS CA and certificate for transport and remote cluster server interfaces (which is the recommendation), TLS verification will fail. - -The local cluster logs `failed to establish trust with server`: - -```txt -[2023-06-28T12:51:06,452][WARN ][o.e.c.s.DiagnosticTrustManager] [local-node] **failed to establish trust with server** at []; the server provided a certificate with subject name [CN=remote_cluster], fingerprint [529de35e15666ffaa26afa50876a2a48119db03a], no keyUsage and no extendedKeyUsage; the certificate is valid between [2023-01-29T12:08:37Z] and [2032-08-29T12:08:37Z] (current time is [2023-06-28T02:51:06.451581Z], certificate dates are valid); the session uses cipher suite [TLS_AES_256_GCM_SHA384] and protocol [TLSv1.3]; the certificate has subject alternative names [DNS:localhost,DNS:localhost6.localdomain6,IP:127.0.0.1,IP:0:0:0:0:0:0:0:1,DNS:localhost4,DNS:localhost6,DNS:localhost.localdomain,DNS:localhost4.localdomain4,IP:192.168.0.42]; the certificate is issued by [CN=Elastic Auto RemoteCluster CA] but the server did not provide a copy of the issuing certificate in the certificate chain; this ssl context ([xpack.security.transport.ssl (with trust configuration: PEM-trust{/rcs2/ssl/transport-ca.crt})]) is not configured to trust that issuer, it only trusts the issuer [CN=Elastic Auto Transport CA] with fingerprint [bbe49e3f986506008a70ab651b188c70df104812] -sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target -``` - -The remote cluster logs `client did not trust this server's certificate`: - -```txt -[2023-06-28T12:52:16,914][WARN ][o.e.x.c.s.t.n.SecurityNetty4Transport] [remote-node] **client did not trust this server's certificate**, closing connection Netty4TcpChannel{localAddress=/192.168.0.42:9443, remoteAddress=/192.168.0.84:60981, profile=_remote_cluster} -``` - - -#### Symptom 2 [_symptom_2_3] - -Even if TLS verification is not an issue, the connection fails due to missing credentials. - -The local cluster logs `Please ensure you have configured remote cluster credentials`: - -```txt -Caused by: java.lang.IllegalArgumentException: Cross cluster requests through the dedicated remote cluster server port require transport header [_cross_cluster_access_credentials] but none found. **Please ensure you have configured remote cluster credentials** on the cluster originating the request. -``` - -This does not show up in the logs of the remote cluster. - - -#### Resolution [_resolution_5] - -Add the cross-cluster API key to {{es}} keystore on every node of the local cluster. Use the [Nodes reload secure settings](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-nodes-reload-secure-settings) API to reload the keystore. - - - -### Using the wrong API key type [remote-clusters-troubleshooting-wrong-api-key-type] - -API key based authentication requires [cross-cluster API keys](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-create-cross-cluster-api-key). It does not work with [REST API keys](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-create-api-key). 
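One way to sanity-check which kind of key you were given (a sketch, with a placeholder key name) is to look the key up on the remote cluster and check the `type` reported for it, which is `cross_cluster` for keys created through the dedicated endpoint and `rest` otherwise:

```console
GET /_security/api_key?name=my-cross-cluster-key
```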
- -#### Symptom [_symptom_5] - -The local cluster logs `authentication expected API key type of [cross_cluster]`: - -```txt -[2023-06-28T13:26:53,962][WARN ][o.e.t.ProxyConnectionStrategy] [local-node] failed to open any proxy connections to cluster [my] -org.elasticsearch.transport.RemoteTransportException: [remote-node][192.168.0.42:9443][cluster:internal/remote_cluster/handshake] -Caused by: org.elasticsearch.ElasticsearchSecurityException: **authentication expected API key type of [cross_cluster]**, but API key [agZXJocBmA2beJfq2yKu] has type [rest] -``` - -This does not show up in the logs of the remote cluster. - - -#### Resolution [_resolution_6] - -Ask the remote cluster administrator to create and distribute a [cross-cluster API key](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-create-cross-cluster-api-key). Replace the existing API key in the {{es}} keystore with this cross-cluster API key on every node of the local cluster. Use the [Nodes reload secure settings](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-nodes-reload-secure-settings) API to reload the keystore. - - - -### Invalid API key [remote-clusters-troubleshooting-non-valid-api-key] - -A cross-cluster API can fail to authenticate. For example, when its credentials are incorrect, or if it’s invalidated or expired. - -#### Symptom [_symptom_6] - -The local cluster logs `unable to authenticate`: - -```txt -[2023-06-28T13:22:58,264][WARN ][o.e.t.ProxyConnectionStrategy] [local-node] failed to open any proxy connections to cluster [my] -org.elasticsearch.transport.RemoteTransportException: [remote-node][192.168.0.42:9443][cluster:internal/remote_cluster/handshake] -Caused by: org.elasticsearch.ElasticsearchSecurityException: **unable to authenticate** user [agZXJocBmA2beJfq2yKu] for action [cluster:internal/remote_cluster/handshake] -``` - -The remote cluster logs `Authentication using apikey failed`: - -```txt -[2023-06-28T13:24:38,744][WARN ][o.e.x.s.a.ApiKeyAuthenticator] [remote-node] **Authentication using apikey failed** - invalid credentials for API key [agZXJocBmA2beJfq2yKu] -``` - - -#### Resolution [_resolution_7] - -Ask the remote cluster administrator to create and distribute a [cross-cluster API key](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-create-cross-cluster-api-key). Replace the existing API key in the {{es}} keystore with this cross-cluster API key on every node of the local cluster. Use the [Nodes reload secure settings](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-nodes-reload-secure-settings) API to reload the keystore. - - - -### API key or local user has insufficient privileges [remote-clusters-troubleshooting-insufficient-privileges] - -The effective permission for a local user running requests on a remote cluster is determined by the intersection of the cross-cluster API key’s privileges and the local user’s `remote_indices` privileges. - -#### Symptom [_symptom_7] - -Request failures due to insufficient privileges result in API responses like: - -```js -{ - "type": "security_exception", - "reason": "action [indices:data/read/search] towards remote cluster is **unauthorized for user** [foo] with assigned roles [foo-role] authenticated by API key id [agZXJocBmA2beJfq2yKu] of user [elastic-admin] on indices [cd], this action is granted by the index privileges [read,all]" -} -``` - -This does not show up in any logs. - - -#### Resolution [_resolution_8] - -1. 
Check that the local user has the necessary `remote_indices` or `remote_cluster` privileges. Grant sufficient `remote_indices` or `remote_cluster` privileges if necessary. -2. If permission is not an issue locally, ask the remote cluster administrator to create and distribute a [cross-cluster API key](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-create-cross-cluster-api-key). Replace the existing API key in the {{es}} keystore with this cross-cluster API key on every node of the local cluster. Use the [Nodes reload secure settings](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-nodes-reload-secure-settings) API to reload the keystore. - - - -### Local user has no `remote_indices` privileges [remote-clusters-troubleshooting-no-remote_indices-privileges] - -This is a special case of insufficient privileges. In this case, the local user has no `remote_indices` privileges at all for the target remote cluster. {{es}} can detect that and issue a more explicit error response. - -#### Symptom [_symptom_8] - -This results in API responses like: - -```js -{ - "type": "security_exception", - "reason": "action [indices:data/read/search] towards remote cluster [my] is unauthorized for user [foo] with effective roles [] (assigned roles [foo-role] were not found) because **no remote indices privileges apply for the target cluster**" -} -``` - - -#### Resolution [_resolution_9] - -Grant sufficient `remote_indices` privileges to the local user. - - - - diff --git a/deploy-manage/security/claim-traffic-filter-link-id-ownership-through-api.md b/deploy-manage/security/claim-traffic-filter-link-id-ownership-through-api.md index 5f3618a82..d34543c75 100644 --- a/deploy-manage/security/claim-traffic-filter-link-id-ownership-through-api.md +++ b/deploy-manage/security/claim-traffic-filter-link-id-ownership-through-api.md @@ -5,7 +5,7 @@ mapped_pages: # Claim traffic filter link ID ownership through the API [ec-claim-traffic-filter-link-id-through-the-api] -This example demonstrates how to use the Elasticsearch Service RESTful API to claim different types of private link ID (AWS PrivateLink, Azure Private Link, and GCP Private Service Connect). We cover the following examples: +This example demonstrates how to use the {{ecloud}} RESTful API to claim different types of private link ID (AWS PrivateLink, Azure Private Link, and GCP Private Service Connect). We cover the following examples: * [Claim a traffic filter link id](#ec-claim-a-traffic-filter-link-id) diff --git a/deploy-manage/security/encrypt-deployment-with-customer-managed-encryption-key.md b/deploy-manage/security/encrypt-deployment-with-customer-managed-encryption-key.md index 2ec337546..b7030b9be 100644 --- a/deploy-manage/security/encrypt-deployment-with-customer-managed-encryption-key.md +++ b/deploy-manage/security/encrypt-deployment-with-customer-managed-encryption-key.md @@ -35,7 +35,7 @@ When a deployment encrypted with a customer-managed key is deleted or terminated ::::::{tab-item} AWS * Have permissions on AWS KMS to [create a symmetric AWS KMS key](https://docs.aws.amazon.com/kms/latest/developerguide/concepts.md#symmetric-cmks) and to configure AWS IAM roles. -* Consider the cloud regions where you need your deployment to live. Refer to the [list of available regions, deployment templates, and instance configurations](asciidocalypse://docs/cloud/docs/reference/cloud/cloud-hosted/ec-regions-templates-instances.md) supported by Elastic Cloud. 
+* Consider the cloud regions where you need your deployment to live. Refer to the [list of available regions, deployment templates, and instance configurations](asciidocalypse://docs/cloud/docs/reference/cloud-hosted/ec-regions-templates-instances.md) supported by Elastic Cloud. :::::: ::::::{tab-item} Azure @@ -46,11 +46,11 @@ When a deployment encrypted with a customer-managed key is deleted or terminated * Permissions to [assign roles in your Key Vault using Access control (IAM)](https://learn.microsoft.com/en-us/azure/key-vault/general/rbac-guide?tabs=azure-cli#prerequisites). This is required to grant the service principal access to your key. * The Azure Key Vault where the RSA key will be stored must have [purge protection](https://learn.microsoft.com/en-us/azure/key-vault/general/soft-delete-overview#purge-protection) enabled to support the encryption of snapshots. -* Consider the cloud regions where you need your deployment to live. Refer to the [list of available regions, deployment templates, and instance configurations](asciidocalypse://docs/cloud/docs/reference/cloud/cloud-hosted/ec-regions-templates-instances.md) supported by Elastic Cloud. +* Consider the cloud regions where you need your deployment to live. Refer to the [list of available regions, deployment templates, and instance configurations](asciidocalypse://docs/cloud/docs/reference/cloud-hosted/ec-regions-templates-instances.md) supported by Elastic Cloud. :::::: ::::::{tab-item} Google Cloud -* Consider the cloud regions where you need your deployment to live. Refer to the [list of available regions, deployment templates, and instance configurations](asciidocalypse://docs/cloud/docs/reference/cloud/cloud-hosted/ec-regions-templates-instances.md) supported by Elastic Cloud. +* Consider the cloud regions where you need your deployment to live. Refer to the [list of available regions, deployment templates, and instance configurations](asciidocalypse://docs/cloud/docs/reference/cloud-hosted/ec-regions-templates-instances.md) supported by Elastic Cloud. * Have the following permissions in Google Cloud KMS: * Permissions to [create a KMS key](https://cloud.google.com/kms/docs/create-key) on a key ring in the same region as your deployment. If you don’t have a key ring in the same region, or want to store the key in its own key ring, then you also need permissions to [create a key ring](https://cloud.google.com/kms/docs/create-key-ring). @@ -158,18 +158,18 @@ Provide your key identifier without the key version identifier so Elastic Cloud :::::::{tab-set} ::::::{tab-item} AWS -1. Create a new deployment. You can do it from the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body), or from the API: +1. Create a new deployment. You can do it from the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body), or from the API: - * from the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body): + * from the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body): - * Select **Create deployment** from the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body) home page. + * Select **Create deployment** from the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body) home page. * In the **Settings**, set the **Cloud provider** to **Amazon Web Services** and select a region. * Expand the **Advanced settings** and turn on **Use a customer-managed encryption key**. 
An additional field appears to let you specify the ARN of the AWS KMS key or key alias you will use to encrypt your new deployment. * Configure the rest of your deployment to your convenience, and select **Create deployment**. * using the API: - * Choose a **cloud region** and a **deployment template** (also called hardware profile) for your deployment from the [list of available regions, deployment templates, and instance configurations](asciidocalypse://docs/cloud/docs/reference/cloud/cloud-hosted/ec-regions-templates-instances.md). + * Choose a **cloud region** and a **deployment template** (also called hardware profile) for your deployment from the [list of available regions, deployment templates, and instance configurations](asciidocalypse://docs/cloud/docs/reference/cloud-hosted/ec-regions-templates-instances.md). * [Get a valid Elastic Cloud API key](/deploy-manage/api-keys/elastic-cloud-api-keys.md) with the **Organization owner** role or the **Admin** role on deployments. These roles allow you to create new deployments. * Get the ARN of the symmetric AWS KMS key or of its alias. Use an alias if you are planning to do manual key rotations as specified in the [AWS documentation](https://docs.aws.amazon.com/kms/latest/developerguide/rotate-keys.md). * Use these parameters to create a new deployment with the [Elastic Cloud API](https://www.elastic.co/docs/api/doc/cloud/group/endpoint-deployments). For example: @@ -207,7 +207,7 @@ To create a new deployment with a customer-managed key in Azure, you need to per 1. In Elastic Cloud, retrieve the Azure application ID: - * Select **Create deployment** from the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body) home page. + * Select **Create deployment** from the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body) home page. * In the **Settings**, set the **Cloud provider** to **Azure** and select a region. * Expand the **Advanced settings** and turn on **Use a customer-managed encryption key**. * Copy the **Azure application ID**. @@ -231,11 +231,11 @@ To create a new deployment with a customer-managed key in Azure, you need to per **Step 2: Create your deployment**
-After you have created the service principal and granted it the necessary permissions, you can finish creating your deployment. You can do so from the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body), or from the API. +After you have created the service principal and granted it the necessary permissions, you can finish creating your deployment. You can do so from the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body), or from the API. -* Using the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body): +* Using the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body): - * Select **Create deployment** from the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body) home page. + * Select **Create deployment** from the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body) home page. * In the **Settings**, set the **Cloud provider** to **Azure** and select a region. * Expand the **Advanced settings** and turn on **Use a customer-managed encryption key**. * Enter the Azure key identifier for the RSA key that you created. @@ -243,7 +243,7 @@ After you have created the service principal and granted it the necessary permis * Using the API: - * Choose a **cloud region** and a **deployment template** (also called hardware profile) for your deployment from the [list of available regions, deployment templates, and instance configurations](asciidocalypse://docs/cloud/docs/reference/cloud/cloud-hosted/ec-regions-templates-instances.md). + * Choose a **cloud region** and a **deployment template** (also called hardware profile) for your deployment from the [list of available regions, deployment templates, and instance configurations](asciidocalypse://docs/cloud/docs/reference/cloud-hosted/ec-regions-templates-instances.md). * [Get a valid Elastic Cloud API key](/deploy-manage/api-keys/elastic-cloud-api-keys.md) with the **Organization owner** role or the **Admin** role on deployments. These roles allow you to create new deployments. * Use these parameters to create a new deployment with the [Elastic Cloud API](https://www.elastic.co/docs/api/doc/cloud/group/endpoint-deployments). For example: @@ -284,7 +284,7 @@ Elastic Cloud uses two service principals to encrypt and decrypt data using your 1. In Elastic Cloud, retrieve the email addresses for the service principals that will be used by Elastic: - * Select **Create deployment** from the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body) home page. + * Select **Create deployment** from the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body) home page. * In the **Settings**, set the **Cloud provider** to **Google Cloud** and select a region. * Expand the **Advanced settings** and turn on **Use a customer-managed encryption key**. * Note the **Elastic service account** and **Google Cloud Platform storage service agent** email addresses. @@ -310,11 +310,11 @@ The user performing this action needs to belong to the **Owner** or **Cloud KMS **Step 2: Create your deployment** -After you have granted the Elastic principals the necessary roles, you can finish creating your deployment. You can do so from the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body), or from the API. +After you have granted the Elastic principals the necessary roles, you can finish creating your deployment. 
You can do so from the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body), or from the API. -* Using the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body): +* Using the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body): - * Select **Create deployment** from the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body) home page. + * Select **Create deployment** from the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body) home page. * In the **Settings**, set the **Cloud provider** to **Google Cloud** and select a region. * Expand the **Advanced settings** and turn on **Use a customer-managed encryption key**. * Enter the resource ID for the key that you created. @@ -322,7 +322,7 @@ After you have granted the Elastic principals the necessary roles, you can finis * Using the API: - * Choose a **cloud region** and a **deployment template** (also called hardware profile) for your deployment from the [list of available regions, deployment templates, and instance configurations](asciidocalypse://docs/cloud/docs/reference/cloud/cloud-hosted/ec-regions-templates-instances.md). + * Choose a **cloud region** and a **deployment template** (also called hardware profile) for your deployment from the [list of available regions, deployment templates, and instance configurations](asciidocalypse://docs/cloud/docs/reference/cloud-hosted/ec-regions-templates-instances.md). * [Get a valid Elastic Cloud API key](/deploy-manage/api-keys/elastic-cloud-api-keys.md) with the **Organization owner** role or the **Admin** role on deployments. These roles allow you to create new deployments. * Use these parameters to create a new deployment with the [Elastic Cloud API](https://www.elastic.co/docs/api/doc/cloud/group/endpoint-deployments). 
For example: diff --git a/deploy-manage/security/httprest-clients-security.md b/deploy-manage/security/httprest-clients-security.md index 76011038f..3e42f9962 100644 --- a/deploy-manage/security/httprest-clients-security.md +++ b/deploy-manage/security/httprest-clients-security.md @@ -70,11 +70,11 @@ es-secondary-authorization: ApiKey <1> For more information about using {{security-features}} with the language specific clients, refer to: -* [Java](asciidocalypse://docs/elasticsearch-java/docs/reference/elasticsearch/elasticsearch-client-java-api-client/_basic_authentication.md) -* [JavaScript](asciidocalypse://docs/elasticsearch-js/docs/reference/elasticsearch/elasticsearch-client-javascript-api/connecting.md) -* [.NET](asciidocalypse://docs/elasticsearch-net/docs/reference/elasticsearch/elasticsearch-client-net-api/configuration.md) +* [Java](asciidocalypse://docs/elasticsearch-java/docs/reference/_basic_authentication.md) +* [JavaScript](asciidocalypse://docs/elasticsearch-js/docs/reference/connecting.md) +* [.NET](asciidocalypse://docs/elasticsearch-net/docs/reference/configuration.md) * [Perl](https://metacpan.org/pod/Search::Elasticsearch::Cxn::HTTPTiny#CONFIGURATION) -* [PHP](asciidocalypse://docs/elasticsearch-php/docs/reference/elasticsearch/elasticsearch-client-php-api/connecting.md) +* [PHP](asciidocalypse://docs/elasticsearch-php/docs/reference/connecting.md) * [Python](https://elasticsearch-py.readthedocs.io/en/master/#ssl-and-authentication) * [Ruby](https://github.com/elasticsearch/elasticsearch-ruby/tree/master/elasticsearch-transport#authentication) diff --git a/deploy-manage/security/secure-clients-integrations.md b/deploy-manage/security/secure-clients-integrations.md index a7cb47df2..7423fff72 100644 --- a/deploy-manage/security/secure-clients-integrations.md +++ b/deploy-manage/security/secure-clients-integrations.md @@ -9,18 +9,18 @@ You will need to update the configuration for several [clients](httprest-clients The {{es}} {{security-features}} enable you to secure your {{es}} cluster. But {{es}} itself is only one product within the {{stack}}. 
It is often the case that other products in the {{stack}} are connected to the cluster and therefore need to be secured as well, or at least communicate with the cluster in a secured way: -* [Apache Hadoop](asciidocalypse://docs/elasticsearch-hadoop/docs/reference/ingestion-tools/elasticsearch-hadoop/security.md) -* [Auditbeat](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-auditbeat/securing-auditbeat.md) -* [Filebeat](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-filebeat/securing-filebeat.md) +* [Apache Hadoop](asciidocalypse://docs/elasticsearch-hadoop/docs/reference/security.md) +* [Auditbeat](asciidocalypse://docs/beats/docs/reference/auditbeat/securing-auditbeat.md) +* [Filebeat](asciidocalypse://docs/beats/docs/reference/filebeat/securing-filebeat.md) * [{{fleet}} & {{agent}}](asciidocalypse://docs/docs-content/docs/reference/ingestion-tools/fleet/secure.md) -* [Heartbeat](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-heartbeat/securing-heartbeat.md) +* [Heartbeat](asciidocalypse://docs/beats/docs/reference/heartbeat/securing-heartbeat.md) * [{{kib}}](../security.md) -* [Logstash](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/secure-connection.md) -* [Metricbeat](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-metricbeat/securing-metricbeat.md) +* [Logstash](asciidocalypse://docs/logstash/docs/reference/secure-connection.md) +* [Metricbeat](asciidocalypse://docs/beats/docs/reference/metricbeat/securing-metricbeat.md) * [Monitoring and security](../monitor.md) -* [Packetbeat](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-packetbeat/securing-packetbeat.md) +* [Packetbeat](asciidocalypse://docs/beats/docs/reference/packetbeat/securing-packetbeat.md) * [Reporting](../../explore-analyze/report-and-share.md) -* [Winlogbeat](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-winlogbeat/securing-winlogbeat.md) +* [Winlogbeat](asciidocalypse://docs/beats/docs/reference/winlogbeat/securing-winlogbeat.md) diff --git a/deploy-manage/security/set-up-basic-security-plus-https.md b/deploy-manage/security/set-up-basic-security-plus-https.md index 111e17a05..3a11acfd3 100644 --- a/deploy-manage/security/set-up-basic-security-plus-https.md +++ b/deploy-manage/security/set-up-basic-security-plus-https.md @@ -201,11 +201,11 @@ After making these changes, you must always access {{kib}} via HTTPS. For exampl ## Configure {{beats}} security [configure-beats-security] -{{beats}} are open source data shippers that you install as agents on your servers to send operational data to {{es}}. Each Beat is a separately installable product. The following steps cover configuring security for {{metricbeat}}. Follow these steps for each [additional Beat](asciidocalypse://docs/beats/docs/reference/ingestion-tools/index.md) you want to configure security for. +{{beats}} are open source data shippers that you install as agents on your servers to send operational data to {{es}}. Each Beat is a separately installable product. The following steps cover configuring security for {{metricbeat}}. Follow these steps for each [additional Beat](asciidocalypse://docs/beats/docs/reference/index.md) you want to configure security for. ### Prerequisites [_prerequisites_13] -[Install {{metricbeat}}](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-metricbeat/metricbeat-installation-configuration.md) using your preferred method. 
+[Install {{metricbeat}}](asciidocalypse://docs/beats/docs/reference/metricbeat/metricbeat-installation-configuration.md) using your preferred method. ::::{important} You cannot connect to the {{stack}} or configure assets for {{metricbeat}} before completing the following steps. @@ -441,7 +441,7 @@ In production environments, we strongly recommend using a separate cluster (refe verification_mode: "certificate" ``` - 1. Configuring SSL is required when monitoring a node with encrypted traffic. See [Configure SSL for {{metricbeat}}](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-metricbeat/configuration-ssl.md).`hosts` + 1. Configuring SSL is required when monitoring a node with encrypted traffic. See [Configure SSL for {{metricbeat}}](asciidocalypse://docs/beats/docs/reference/metricbeat/configuration-ssl.md).`hosts` : Specifies the host where your {{es}} cluster is running. Ensure that you include `https` in the URL. `username` diff --git a/deploy-manage/toc.yml b/deploy-manage/toc.yml index 81f16c287..c5118c731 100644 --- a/deploy-manage/toc.yml +++ b/deploy-manage/toc.yml @@ -31,33 +31,28 @@ toc: children: - file: deploy/elastic-cloud/subscribe-from-marketplace.md children: + - hidden: deploy/elastic-cloud/create-monthly-pay-as-you-go-subscription-on-aws-marketplace.md + - hidden: deploy/elastic-cloud/azure-marketplace-pricing.md + - hidden: deploy/elastic-cloud/create-monthly-pay-as-you-go-subscription-on-gcp-marketplace.md - file: deploy/elastic-cloud/aws-marketplace.md - children: - - file: deploy/elastic-cloud/create-monthly-pay-as-you-go-subscription-on-aws-marketplace.md - - file: deploy/elastic-cloud/azure-native-isv-service.md - children: - - file: deploy/elastic-cloud/azure-marketplace-pricing.md - - file: deploy/elastic-cloud/google-cloud-platform-marketplace.md - children: - - file: deploy/elastic-cloud/create-monthly-pay-as-you-go-subscription-on-gcp-marketplace.md + - file: deploy/elastic-cloud/azure-native-isv-service.md + - file: deploy/elastic-cloud/google-cloud-platform-marketplace.md - file: deploy/elastic-cloud/heroku.md children: - - file: deploy/elastic-cloud/ech-getting-started.md + - file: deploy/elastic-cloud/ech-getting-started-installing.md children: - - file: deploy/elastic-cloud/ech-getting-started-installing.md - children: - - file: deploy/elastic-cloud/ech-getting-started-installing-version.md - - file: deploy/elastic-cloud/ech-getting-started-removing.md - - file: deploy/elastic-cloud/ech-migrating.md - - file: deploy/elastic-cloud/ech-getting-started-accessing.md - children: - - file: deploy/elastic-cloud/ech-access-kibana.md - - file: deploy/elastic-cloud/ech-working-with-elasticsearch.md - - file: deploy/elastic-cloud/ech-api-console.md - - file: deploy/elastic-cloud/ech-getting-started-next-steps.md - - file: deploy/elastic-cloud/ech-migrate-data2.md - children: - - file: deploy/elastic-cloud/ech-migrate-data-internal.md + - file: deploy/elastic-cloud/ech-getting-started-installing-version.md + - file: deploy/elastic-cloud/ech-getting-started-removing.md + - file: deploy/elastic-cloud/ech-migrating.md + - file: deploy/elastic-cloud/ech-getting-started-accessing.md + children: + - file: deploy/elastic-cloud/ech-access-kibana.md + - file: deploy/elastic-cloud/ech-working-with-elasticsearch.md + - file: deploy/elastic-cloud/ech-api-console.md + - file: deploy/elastic-cloud/ech-getting-started-next-steps.md + - file: deploy/elastic-cloud/ech-migrate-data2.md + children: + - file: deploy/elastic-cloud/ech-migrate-data-internal.md - file: 
deploy/elastic-cloud/ech-about.md children: - file: deploy/elastic-cloud/ech-licensing.md @@ -94,19 +89,11 @@ toc: children: - file: deploy/elastic-cloud/configure.md children: - - file: deploy/elastic-cloud/ec-customize-deployment.md - children: - - file: deploy/elastic-cloud/ec-configure-deployment-settings.md - - file: deploy/elastic-cloud/ec-change-hardware-profile.md - - file: deploy/elastic-cloud/ec-customize-deployment-components.md - - file: deploy/elastic-cloud/ech-configure-settings.md - children: - - file: deploy/elastic-cloud/ech-configure-deployment-settings.md - - file: deploy/elastic-cloud/ech-customize-deployment-components.md + - file: deploy/elastic-cloud/ec-change-hardware-profile.md + - file: deploy/elastic-cloud/ec-customize-deployment-components.md - file: deploy/elastic-cloud/edit-stack-settings.md - file: deploy/elastic-cloud/add-plugins-extensions.md children: - - file: deploy/elastic-cloud/add-plugins-provided-with-elastic-cloud-hosted.md - file: deploy/elastic-cloud/upload-custom-plugins-bundles.md - file: deploy/elastic-cloud/manage-plugins-extensions-through-api.md - file: deploy/elastic-cloud/custom-endpoint-aliases.md @@ -122,22 +109,21 @@ toc: - file: deploy/cloud-enterprise.md children: - file: deploy/cloud-enterprise/ece-architecture.md - - file: deploy/cloud-enterprise/ece-containerization.md - - file: deploy/cloud-enterprise/prepare-environment.md - children: - - file: deploy/cloud-enterprise/ece-hardware-prereq.md - - file: deploy/cloud-enterprise/ece-software-prereq.md - - file: deploy/cloud-enterprise/ece-sysconfig.md - - file: deploy/cloud-enterprise/ece-networking-prereq.md - - file: deploy/cloud-enterprise/ece-ha.md - - file: deploy/cloud-enterprise/ece-roles.md - - file: deploy/cloud-enterprise/ece-load-balancers.md - - file: deploy/cloud-enterprise/ece-users-permissions.md - - file: deploy/cloud-enterprise/ece-jvm.md - - file: deploy/cloud-enterprise/ece-wildcard-dns.md - - file: deploy/cloud-enterprise/ece-manage-capacity.md - file: deploy/cloud-enterprise/deploy-an-orchestrator.md children: + - file: deploy/cloud-enterprise/prepare-environment.md + children: + - file: deploy/cloud-enterprise/ece-hardware-prereq.md + - file: deploy/cloud-enterprise/ece-software-prereq.md + - file: deploy/cloud-enterprise/ece-sysconfig.md + - file: deploy/cloud-enterprise/ece-networking-prereq.md + - file: deploy/cloud-enterprise/ece-ha.md + - file: deploy/cloud-enterprise/ece-roles.md + - file: deploy/cloud-enterprise/ece-load-balancers.md + - file: deploy/cloud-enterprise/ece-users-permissions.md + - file: deploy/cloud-enterprise/ece-jvm.md + - file: deploy/cloud-enterprise/ece-wildcard-dns.md + - file: deploy/cloud-enterprise/ece-manage-capacity.md - file: deploy/cloud-enterprise/install.md children: - file: deploy/cloud-enterprise/identify-deployment-scenario.md @@ -216,6 +202,8 @@ toc: - file: deploy/cloud-enterprise/resize-deployment.md children: - file: deploy/cloud-enterprise/resource-overrides.md + - file: deploy/cloud-enterprise/add-plugins.md + - file: deploy/cloud-enterprise/add-custom-bundles-plugins.md - file: deploy/cloud-enterprise/manage-integrations-server.md children: - file: deploy/cloud-enterprise/switch-from-apm-to-integrations-server-payload.md @@ -537,7 +525,6 @@ toc: - file: remote-clusters/remote-clusters-cert.md - file: remote-clusters/remote-clusters-migrate.md - file: remote-clusters/remote-clusters-settings.md - - file: remote-clusters/remote-clusters-troubleshooting.md - file: remote-clusters/eck-remote-clusters.md - file: 
security.md children: @@ -628,14 +615,23 @@ toc: - file: users-roles/cluster-or-deployment-auth/kerberos.md - file: users-roles/cluster-or-deployment-auth/ldap.md - file: users-roles/cluster-or-deployment-auth/openid-connect.md + children: + - file: users-roles/cluster-or-deployment-auth/oidc-examples.md - file: users-roles/cluster-or-deployment-auth/saml.md + children: + - file: users-roles/cluster-or-deployment-auth/saml-entra.md - file: users-roles/cluster-or-deployment-auth/pki.md - file: users-roles/cluster-or-deployment-auth/custom.md - file: users-roles/cluster-or-deployment-auth/built-in-users.md - - file: users-roles/cluster-or-deployment-auth/user-profiles.md + children: + - file: users-roles/cluster-or-deployment-auth/built-in-sm.md + - file: users-roles/cluster-or-deployment-auth/orchestrator-managed-users-overview.md + children: + - file: users-roles/cluster-or-deployment-auth/manage-elastic-user-cloud.md + - file: users-roles/cluster-or-deployment-auth/managed-credentials-eck.md + - file: users-roles/cluster-or-deployment-auth/kibana-authentication.md - file: users-roles/cluster-or-deployment-auth/access-agreement.md - file: users-roles/cluster-or-deployment-auth/anonymous-access.md - - file: users-roles/cluster-or-deployment-auth/manage-authentication-for-multiple-clusters.md - file: users-roles/cluster-or-deployment-auth/token-based-authentication-services.md - file: users-roles/cluster-or-deployment-auth/service-accounts.md - file: users-roles/cluster-or-deployment-auth/internal-users.md @@ -644,8 +640,10 @@ toc: - file: users-roles/cluster-or-deployment-auth/configure-operator-privileges.md - file: users-roles/cluster-or-deployment-auth/operator-only-functionality.md - file: users-roles/cluster-or-deployment-auth/operator-privileges-for-snapshot-restore.md + - file: users-roles/cluster-or-deployment-auth/user-profiles.md - file: users-roles/cluster-or-deployment-auth/looking-up-users-without-authentication.md - file: users-roles/cluster-or-deployment-auth/controlling-user-cache.md + - file: users-roles/cluster-or-deployment-auth/manage-authentication-for-multiple-clusters.md - file: users-roles/cluster-or-deployment-auth/user-roles.md children: - file: users-roles/cluster-or-deployment-auth/defining-roles.md @@ -792,8 +790,8 @@ toc: - file: cloud-organization/tools-and-apis.md - file: license.md children: - - file: license/manage-your-license-in-eck.md - file: license/manage-your-license-in-ece.md + - file: license/manage-your-license-in-eck.md - file: license/manage-your-license-in-self-managed-cluster.md - file: maintenance.md children: diff --git a/deploy-manage/tools/cross-cluster-replication.md b/deploy-manage/tools/cross-cluster-replication.md index ae1b0b3c6..694a65755 100644 --- a/deploy-manage/tools/cross-cluster-replication.md +++ b/deploy-manage/tools/cross-cluster-replication.md @@ -43,13 +43,15 @@ In a uni-directional configuration, the cluster containing follower indices must | 7.17 | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | ![Yes](https://doc-icons.s3.us-east-2.amazonaws.com/icon-yes.png "") | ![Yes](https://doc-icons.s3.us-east-2.amazonaws.com/icon-yes.png "") | ![Yes](https://doc-icons.s3.us-east-2.amazonaws.com/icon-yes.png "") | ![Yes](https://doc-icons.s3.us-east-2.amazonaws.com/icon-yes.png "") | 
![Yes](https://doc-icons.s3.us-east-2.amazonaws.com/icon-yes.png "") | | 8.0–9.0 | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | ![No](https://doc-icons.s3.us-east-2.amazonaws.com/icon-no.png "") | ![Yes](https://doc-icons.s3.us-east-2.amazonaws.com/icon-yes.png "") | ![Yes](https://doc-icons.s3.us-east-2.amazonaws.com/icon-yes.png "") | -::::{note} -This documentation is for {{es}} version 9.0.0-beta1, which is not yet released. The above compatibility table applies if both clusters are running a released version of {{es}}, or if one of the clusters is running a released version and the other is running a pre-release build with a later build date. A cluster running a pre-release build of {{es}} can also communicate with remote clusters running the same pre-release build. Running a mix of pre-release builds is unsupported and typically will not work, even if the builds have the same version number. -:::: ::::: +% Moved from another file - hiding this note for now +% ::::{note} +% CCR is not supported for indices used by Enterprise Search. +% :::: + ## Multi-cluster architectures [ccr-multi-cluster-architectures] diff --git a/deploy-manage/tools/cross-cluster-replication/bi-directional-disaster-recovery.md b/deploy-manage/tools/cross-cluster-replication/bi-directional-disaster-recovery.md index da8a39a29..8897af6df 100644 --- a/deploy-manage/tools/cross-cluster-replication/bi-directional-disaster-recovery.md +++ b/deploy-manage/tools/cross-cluster-replication/bi-directional-disaster-recovery.md @@ -18,7 +18,7 @@ applies_to: Learn how to set up disaster recovery between two clusters based on bi-directional {{ccr}}. The following tutorial is designed for data streams which support [update by query](../../../manage-data/data-store/data-streams/use-data-stream.md#update-docs-in-a-data-stream-by-query) and [delete by query](../../../manage-data/data-store/data-streams/use-data-stream.md#delete-docs-in-a-data-stream-by-query). You can only perform these actions on the leader index. -This tutorial works with {{ls}} as the source of ingestion. It takes advantage of a {{ls}} feature where [the {{ls}} output to {{es}}](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/plugins-outputs-elasticsearch.md) can be load balanced across an array of hosts specified. {{beats}} and {{agents}} currently do not support multiple outputs. It should also be possible to set up a proxy (load balancer) to redirect traffic without {{ls}} in this tutorial. +This tutorial works with {{ls}} as the source of ingestion. It takes advantage of a {{ls}} feature where [the {{ls}} output to {{es}}](asciidocalypse://docs/logstash/docs/reference/plugins-outputs-elasticsearch.md) can be load balanced across an array of hosts specified. {{beats}} and {{agents}} currently do not support multiple outputs. It should also be possible to set up a proxy (load balancer) to redirect traffic without {{ls}} in this tutorial. * Setting up a remote cluster on `clusterA` and `clusterB`. * Setting up bi-directional cross-cluster replication with exclusion patterns. 
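As an illustrative aside to the bi-directional {{ccr}} tutorial referenced above: the setup it describes comes down to registering each cluster as a remote of the other and creating auto-follow patterns with exclusions so the clusters do not re-replicate each other's copies. The sketch below shows the general shape of those two API calls on one side only; the endpoints, cluster aliases, and index patterns are placeholder assumptions rather than values from the tutorial, and authentication options are omitted.

```sh
# Sketch only: on clusterA, register clusterB as a remote cluster.
# Host names and the transport port are placeholders.
curl -X PUT "https://clustera.example.com:9200/_cluster/settings" \
  -H "Content-Type: application/json" \
  -d '{
    "persistent": {
      "cluster.remote.clusterB.seeds": ["clusterb.example.com:9300"]
    }
  }'

# Create an auto-follow pattern that replicates matching indices from clusterB
# while excluding already-replicated copies, so the two clusters do not end up
# following each other's follower indices.
curl -X PUT "https://clustera.example.com:9200/_ccr/auto_follow/logs-from-clusterb" \
  -H "Content-Type: application/json" \
  -d '{
    "remote_cluster": "clusterB",
    "leader_index_patterns": ["logs-*"],
    "leader_index_exclusion_patterns": ["*-replicated_from_clustera"],
    "follow_index_pattern": "{{leader_index}}-replicated_from_clusterb"
  }'
```

The same pair of calls is then repeated in the opposite direction on the other cluster to make the replication bi-directional.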
diff --git a/deploy-manage/tools/cross-cluster-replication/set-up-cross-cluster-replication.md b/deploy-manage/tools/cross-cluster-replication/set-up-cross-cluster-replication.md index ff6a08544..90df231cf 100644 --- a/deploy-manage/tools/cross-cluster-replication/set-up-cross-cluster-replication.md +++ b/deploy-manage/tools/cross-cluster-replication/set-up-cross-cluster-replication.md @@ -30,7 +30,7 @@ In this guide, you’ll learn how to: You can manually create follower indices to replicate specific indices on a remote cluster, or configure auto-follow patterns to replicate rolling time series indices. ::::{tip} -If you want to replicate data across clusters in the cloud, you can [configure remote clusters on {{ess}}](/deploy-manage/remote-clusters/ec-enable-ccs.md). Then, you can [search across clusters](../../../solutions/search/cross-cluster-search.md) and set up {{ccr}}. +If you want to replicate data across clusters in the cloud, you can [configure remote clusters on {{ecloud}}](/deploy-manage/remote-clusters/ec-enable-ccs.md). Then, you can [search across clusters](../../../solutions/search/cross-cluster-search.md) and set up {{ccr}}. :::: diff --git a/deploy-manage/tools/snapshot-and-restore.md b/deploy-manage/tools/snapshot-and-restore.md index 7cf4a6335..962c75515 100644 --- a/deploy-manage/tools/snapshot-and-restore.md +++ b/deploy-manage/tools/snapshot-and-restore.md @@ -106,7 +106,7 @@ In Elasticsearch 8.0 and later versions, feature states are the only way to back ## How snapshots work -Snapshots are **automatically deduplicated** to save storage space and reduce network transfer costs. To back up an index, a snapshot makes a copy of the index’s [segments](/solutions/search/search-approaches/near-real-time-search.md) and stores them in the snapshot repository. Since segments are immutable, the snapshot only needs to copy any new segments created since the repository’s last snapshot. +Snapshots are **automatically deduplicated** to save storage space and reduce network transfer costs. To back up an index, a snapshot makes a copy of the index’s [segments](/manage-data/data-store/near-real-time-search.md) and stores them in the snapshot repository. Since segments are immutable, the snapshot only needs to copy any new segments created since the repository’s last snapshot. Each snapshot is **logically independent**. When you delete a snapshot, Elasticsearch only deletes the segments used exclusively by that snapshot. Elasticsearch doesn’t delete segments used by other snapshots in the repository. diff --git a/deploy-manage/tools/snapshot-and-restore/ec-aws-custom-repository.md b/deploy-manage/tools/snapshot-and-restore/ec-aws-custom-repository.md index cd55d8eee..aba169d80 100644 --- a/deploy-manage/tools/snapshot-and-restore/ec-aws-custom-repository.md +++ b/deploy-manage/tools/snapshot-and-restore/ec-aws-custom-repository.md @@ -46,9 +46,9 @@ For a full list of settings that are supported for your S3 bucket, refer to [S3 ## Store your secrets in the keystore [ec-snapshot-secrets-keystore] -You can use the Elasticsearch Service Keystore to store the credentials to access your AWS account. +You can use the {{es}} keystore to store the credentials to access your AWS account. -1. Log in to the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body). +1. Log in to the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). 2. Navigate to the **Security** page of the deployment you wish to configure. 3. 
Locate **Elasticsearch keystore** and select **Add settings**. 4. With **Type** set to **Single string**, add the following keys and their values: diff --git a/deploy-manage/tools/snapshot-and-restore/ec-azure-snapshotting.md b/deploy-manage/tools/snapshot-and-restore/ec-azure-snapshotting.md index d8ceb630e..628151004 100644 --- a/deploy-manage/tools/snapshot-and-restore/ec-azure-snapshotting.md +++ b/deploy-manage/tools/snapshot-and-restore/ec-azure-snapshotting.md @@ -21,7 +21,7 @@ For deployments with **Elastic Stack version 7.17 and earlier**, you’ll need t 1. Refer to [Azure Repository Plugin](https://www.elastic.co/guide/en/elasticsearch/plugins/7.17/repository-azure.html) to download the version of the plugin that matches your Elastic Stack version. 2. Upload the plugin to your deployment: - 1. Log in to the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body). + 1. Log in to the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). 2. Open the **Features > Extensions** page and select **Upload extension**. 3. Specify the plugin name (`repository-azure`) and the version. 4. Select **An installable plugin (Compiled, no source code)**. @@ -36,7 +36,7 @@ For deployments with **Elastic Stack version 7.17 and earlier**, you’ll need t Create an entry for the Azure client in the Elasticsearch keystore: -1. Log in to the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body). +1. Log in to the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). 2. Navigate to the **Security** page of the deployment you wish to configure. 3. Locate **Elasticsearch keystore** and select **Add settings**. 4. With **Type** set to **Single string**, add the following keys and their values: diff --git a/deploy-manage/tools/snapshot-and-restore/ec-gcs-snapshotting.md b/deploy-manage/tools/snapshot-and-restore/ec-gcs-snapshotting.md index 6bb8dc372..d459631d0 100644 --- a/deploy-manage/tools/snapshot-and-restore/ec-gcs-snapshotting.md +++ b/deploy-manage/tools/snapshot-and-restore/ec-gcs-snapshotting.md @@ -34,7 +34,7 @@ For deployments with **Elastic Stack version 7.17 and earlier**, you’ll need t 1. Refer to [Google Cloud Storage Repository Plugin](https://www.elastic.co/guide/en/elasticsearch/plugins/7.17/repository-gcs.html) to download the version of the plugin that matches your Elastic Stack version. 2. Upload the plugin to your deployment: - 1. Log in to the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body). + 1. Log in to the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). 2. Open the **Features > Extensions** page and select **Upload extension**. 3. Specify the plugin name (`repository-gcs`) and the version. 4. Select **An installable plugin (Compiled, no source code)**. @@ -49,7 +49,7 @@ For deployments with **Elastic Stack version 7.17 and earlier**, you’ll need t Create an entry for the GCS client in the Elasticsearch keystore: -1. Log in to the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body). +1. Log in to the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). 2. Navigate to the **Security** page of the deployment you wish to configure. 3. Locate **Elasticsearch keystore** and select **Add settings**. 4. Enter the **Setting name** `gcs.client.secondary.credentials_file`. 
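For context on the keystore steps above: once a client's credentials (for example, the `gcs.client.secondary.credentials_file` secure setting) are stored in the {{es}} keystore, the bucket is registered as a snapshot repository through the {{es}} API. The following sketch is illustrative only; the deployment endpoint, repository name, and bucket are placeholders, and authentication flags are omitted.

```sh
# Sketch only: register a GCS snapshot repository that uses the "secondary"
# client whose credentials were added to the Elasticsearch keystore.
curl -X PUT "https://YOUR_DEPLOYMENT_ENDPOINT:9243/_snapshot/my-gcs-repo" \
  -H "Content-Type: application/json" \
  -d '{
    "type": "gcs",
    "settings": {
      "bucket": "my-snapshot-bucket",
      "client": "secondary"
    }
  }'
```

The `client` value must match the client name embedded in the keystore setting (`secondary` in this case); the same pattern applies to the S3 and Azure repository types with their respective client settings.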
diff --git a/deploy-manage/upgrade/prepare-to-upgrade/index-compatibility.md b/deploy-manage/upgrade/prepare-to-upgrade/index-compatibility.md index 610dd15a6..cdeba0875 100644 --- a/deploy-manage/upgrade/prepare-to-upgrade/index-compatibility.md +++ b/deploy-manage/upgrade/prepare-to-upgrade/index-compatibility.md @@ -36,4 +36,4 @@ To upgrade to 9.0.0-beta1 from 7.16 or an earlier version, **you must first upgr {{es}} 8.0+ requires Java 17 or later. {{es}} 8.13+ has been tested with [Bouncy Castle](https://www.bouncycastle.org/java.md)'s Java 17 [certified](https://csrc.nist.gov/projects/cryptographic-module-validation-program/certificate/4616) FIPS implementation and is the recommended Java security provider when running {{es}} in FIPS 140-2 mode. Note - {{es}} does not ship with a FIPS certified security provider and requires explicit installation and configuration. -Alternatively, consider using {{ess}} in the [FedRAMP-certified GovCloud region](https://www.elastic.co/industries/public-sector/fedramp). +Alternatively, consider using {{ech}} in the [FedRAMP-certified GovCloud region](https://www.elastic.co/industries/public-sector/fedramp). diff --git a/deploy-manage/users-roles.md b/deploy-manage/users-roles.md index be5ac7c27..65c4c8846 100644 --- a/deploy-manage/users-roles.md +++ b/deploy-manage/users-roles.md @@ -2,12 +2,13 @@ navigation_title: "Users and roles" mapped_pages: - https://www.elastic.co/guide/en/serverless/current/project-settings-access.html -applies: +applies_to: + deployment: + ess: all + ece: all + eck: all + self: all serverless: all - hosted: all - ece: all - eck: all - stack: all --- # Manage users and roles diff --git a/deploy-manage/users-roles/_snippets/external-realms.md b/deploy-manage/users-roles/_snippets/external-realms.md new file mode 100644 index 000000000..ab1876a1e --- /dev/null +++ b/deploy-manage/users-roles/_snippets/external-realms.md @@ -0,0 +1,20 @@ +ldap +: Uses an external LDAP server to authenticate the users. This realm supports an authentication token in the form of username and password, and requires explicit configuration in order to be used. See [LDAP user authentication](/deploy-manage/users-roles/cluster-or-deployment-auth/ldap.md). + +active_directory +: Uses an external Active Directory Server to authenticate the users. With this realm, users are authenticated by usernames and passwords. See [Active Directory user authentication](/deploy-manage/users-roles/cluster-or-deployment-auth/active-directory.md). + +pki +: Authenticates users using Public Key Infrastructure (PKI). This realm works in conjunction with SSL/TLS and identifies the users through the Distinguished Name (DN) of the client’s X.509 certificates. See [PKI user authentication](/deploy-manage/users-roles/cluster-or-deployment-auth/pki.md). + +saml +: Facilitates authentication using the SAML 2.0 Web SSO protocol. This realm is designed to support authentication through {{kib}} and is not intended for use in the REST API. See [SAML authentication](/deploy-manage/users-roles/cluster-or-deployment-auth/saml.md). + +kerberos +: Authenticates a user using Kerberos authentication. Users are authenticated on the basis of Kerberos tickets. See [Kerberos authentication](/deploy-manage/users-roles/cluster-or-deployment-auth/kerberos.md). + +oidc +: Facilitates authentication using OpenID Connect. It enables {{es}} to serve as an OpenID Connect Relying Party (RP) and provide single sign-on (SSO) support in {{kib}}. 
See [Configuring single sign-on to the {{stack}} using OpenID Connect](/deploy-manage/users-roles/cluster-or-deployment-auth/openid-connect.md). + +jwt +: Facilitates using JWT identity tokens as authentication bearer tokens. Compatible tokens are OpenID Connect ID Tokens, or custom JWTs containing the same claims. See [JWT authentication](/deploy-manage/users-roles/cluster-or-deployment-auth/jwt.md). \ No newline at end of file diff --git a/deploy-manage/users-roles/_snippets/internal-realms.md b/deploy-manage/users-roles/_snippets/internal-realms.md new file mode 100644 index 000000000..438477b25 --- /dev/null +++ b/deploy-manage/users-roles/_snippets/internal-realms.md @@ -0,0 +1,5 @@ +native +: Users are stored in a dedicated {{es}} index. This realm supports an authentication token in the form of username and password, and is available by default when no realms are explicitly configured. Users are managed through {{kib}}, or using [user management APIs](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-security). See [Native user authentication](/deploy-manage/users-roles/cluster-or-deployment-auth/native.md). + +file +: Users are defined in files stored on each node in the {{es}} cluster. This realm supports an authentication token in the form of username and password and is always available. See [File-based user authentication](/deploy-manage/users-roles/cluster-or-deployment-auth/file-based.md). Available for {{eck}} and self-managed deployments only. \ No newline at end of file diff --git a/deploy-manage/users-roles/cloud-enterprise-orchestrator.md b/deploy-manage/users-roles/cloud-enterprise-orchestrator.md index 996a6c711..00a762fd9 100644 --- a/deploy-manage/users-roles/cloud-enterprise-orchestrator.md +++ b/deploy-manage/users-roles/cloud-enterprise-orchestrator.md @@ -1,7 +1,8 @@ --- navigation_title: "ECE orchestrator" -applies: - ece: all +applies_to: + deployment: + ece: all --- # Elastic Cloud Enterprise orchestrator users diff --git a/deploy-manage/users-roles/cloud-enterprise-orchestrator/active-directory.md b/deploy-manage/users-roles/cloud-enterprise-orchestrator/active-directory.md index d688c3c2e..ceefb557b 100644 --- a/deploy-manage/users-roles/cloud-enterprise-orchestrator/active-directory.md +++ b/deploy-manage/users-roles/cloud-enterprise-orchestrator/active-directory.md @@ -1,22 +1,27 @@ --- mapped_pages: - https://www.elastic.co/guide/en/cloud-enterprise/current/ece-create-ad-profiles.html +applies_to: + deployment: + ece: all --- # Active Directory [ece-create-ad-profiles] -If you use an Active Directory (AD) server to authenticate users, you can specify the servers, parameters, and the search modes that Elastic Cloud Enterprise uses to locate user credentials. There are several sections to the profile: +If you use an Active Directory (AD) server to authenticate users, you can specify the servers, parameters, and the search modes that {{ece}} uses to locate user credentials. To set up Active Directory authentication, perform the following steps: -* Specify the [general AD settings](#ece-ad-general-settings). -* Optional: Prepare the [trusted CA certificates](#ece-prepare-ad-certificates). -* Supply the [bind credentials](#ece-supply-ad-bind-credentials). -* Select the [search mode and group search](#ece-ad-search-mode) settings. -* Create [role mappings](#ece-ad-role-mapping), either to all users that match the profile or assign roles to specific groups. 
-* Add any [custom configuration](#ece-ad-custom-configuration) advanced settings to the YAML file. +1. Specify the [general AD settings](#ece-ad-general-settings). +2. Optional: Prepare the [trusted CA certificates](#ece-prepare-ad-certificates). +3. Supply the [bind credentials](#ece-supply-ad-bind-credentials). +4. Select the [search mode and group search](#ece-ad-search-mode) settings. +5. Create [role mappings](#ece-ad-role-mapping), either to all users that match the profile or assign roles to specific groups. +6. Add any [custom configuration](#ece-ad-custom-configuration) advanced settings to the YAML file. -$$$ece-ad-general-settings$$$Begin the provider profile by adding the general settings: +## Add the general settings [ece-ad-general-settings] -1. [Log into the Cloud UI](../../deploy/cloud-enterprise/log-into-cloud-ui.md). +Begin the provider profile by adding the general settings: + +1. [Log into the Cloud UI](/deploy-manage/deploy/cloud-enterprise/log-into-cloud-ui.md). 2. Go to **Users** and then **Authentication providers**. 3. From the **Add provider** drop-down menu, select **Active Directory**. 4. Provide a unique profile name. This name becomes the realm ID, with any spaces replaced by hyphens. @@ -44,16 +49,16 @@ $$$ece-ad-general-settings$$$Begin the provider profile by adding the general se 7. Provide the top-level domain name. -## Prepare certificates [ece-prepare-ad-certificates] +## Prepare certificates (optional) [ece-prepare-ad-certificates] -Though optional, you can add one or more certificate authorities (CAs) to validate the server certificate that the Domain Controller uses for SSL/TLS. Connecting through SSL/TLS ensures that the identity of the AD server is authenticated before Elastic Cloud Enterprise transmits the user credentials and that the contents of the connection are encrypted. +You can add one or more certificate authorities (CAs) to validate the server certificate that the domain controller uses for SSL/TLS. Connecting through SSL/TLS ensures that the identity of the Active Directory server is authenticated before {{ece}} transmits the user credentials and that the contents of the connection are encrypted. 1. Provide the URL to the ZIP file that contains a keystore with the CA certificate(s). The bundle should be a ZIP file containing a single `keystore.ks` file in the directory `/active_directory/:id/truststore`, where `:id` is the value of the **Realm ID** field created in the [General settings](#ece-ad-general-settings). The keystore file can either be a JKS or a PKCS#12 keystore, but the name of the file should be `keystore.ks`. ::::{important} - Don’t use the same URL to serve a new version of the ZIP file as otherwise the new version may not be picked up. + Don’t use the same URL to serve a new version of the ZIP file. If you do, the new version might not be picked up. :::: 2. Select a keystore type. @@ -62,12 +67,16 @@ Though optional, you can add one or more certificate authorities (CAs) to valida ## Supply the bind credentials [ece-supply-ad-bind-credentials] -You can either select **Bind anonymously** for user searches or you must specify the distinguished name (DN) of the user to bind and the bind password. When **Bind anonymously** is selected, all requests to Active Directory will be performed with the credentials of the authenticating user. In the case that `Bind DN` and `Bind Password` are provided, requests are performed on behalf of this bind user. 
This can be useful in cases where the regular users can’t access all of the necessary items within Active Directory. +You can either select **Bind anonymously** for user searches, or you must specify the distinguished name (DN) of the user to bind and the bind password. + +When **Bind anonymously** is selected, all requests to Active Directory will be performed with the credentials of the authenticating user. + +In the case that `Bind DN` and `Bind Password` are provided, requests are performed on behalf of this bind user. This can be useful in cases where the regular users can’t access all of the necessary items within Active Directory. ## Configure the user search settings [ece-ad-search-mode] -You can configure how Elastic Cloud Enterprise will search for users in the Active Directory +You can configure how {{ece}} will search for users in the Active Directory To configure the user search: @@ -75,7 +84,7 @@ To configure the user search: 2. Set the **Search scope**: Sub-tree - : Searches all entries at all levels *under* the base DN, including the base DN itself. + : Searches all entries at all levels under the base DN, including the base DN itself. One level : Searches for objects one level under the `Base DN` but not the `Base DN` or entries in lower levels. @@ -88,7 +97,7 @@ To configure the user search: ## Configure the group search settings [ece-ad-search-groups] -You can configure how Elastic Cloud Enterprise will search for groups in the Active Directory +You can configure how {{ece}} will search for groups in Active Directory. To configure the group search: @@ -96,7 +105,7 @@ To configure the group search: 2. Set the **Search scope**: Sub-tree - : Searches all entries at all levels *under* the base DN, including the base DN itself. + : Searches all entries at all levels under the base DN, including the base DN itself. One level : Searches for objects one level under the `Base DN` but not the `Base DN` or entries in lower levels. @@ -108,19 +117,25 @@ To configure the group search: ## Create role mappings [ece-ad-role-mapping] -When a user is authenticated, the role mapping assigns them roles in Elastic Cloud Enterprise. +When a user is authenticated, the role mapping assigns them roles in {{ece}}. To assign all authenticated users a single role, select one of the **Default roles**. To assign roles according to the **User DN** of the user or **Group DN** of the group they belong to, use the **Add role mapping rule** fields. +For a list of roles, refer to [Available roles and permissions](/deploy-manage/users-roles/cloud-enterprise-orchestrator/manage-users-roles.md#ece-user-role-permissions). + ## Custom configuration [ece-ad-custom-configuration] -You can add any additional settings to the **Advanced configuration** YAML file. For example, if you need to ignore the SSL check for the SSL certificate of the Domain Controller in a testing environment, you might add `ssl.verification_mode: none`. Note that all keys should omit the `xpack.security.authc.realms.active_directory.$realm_id` prefix that is required in `elasticsearch.yml`, as ECE will insert this itself and automatically account for any differences in format across Elasticsearch versions. +You can add any additional settings to the **Advanced configuration** YAML file. For example, if you need to ignore the SSL check for the SSL certificate of the domain controller in a testing environment, you might add `ssl.verification_mode: none`. 
+ +:::{note} +All entries added should omit the `xpack.security.authc.realms.active_directory.$realm_id` prefix, as ECE will insert this itself and automatically account for any differences in format across {{es}} versions. +::: ::::{important} -API keys created by Active Directory users are not automatically deleted or disabled when the user is deleted or disabled in Active Directory. When you delete a user in Active Directory, make sure to also remove the user’s API key or delete the user in ECE. +API keys created by Active Directory users are not automatically deleted or disabled when the user is deleted or disabled in Active Directory. When you delete a user in Active Directory, make sure to also remove the user’s API key or delete the user in {{ece}}. :::: diff --git a/deploy-manage/users-roles/cloud-enterprise-orchestrator/configure-sso-for-deployments.md b/deploy-manage/users-roles/cloud-enterprise-orchestrator/configure-sso-for-deployments.md index d96cb32a8..6c4017747 100644 --- a/deploy-manage/users-roles/cloud-enterprise-orchestrator/configure-sso-for-deployments.md +++ b/deploy-manage/users-roles/cloud-enterprise-orchestrator/configure-sso-for-deployments.md @@ -1,16 +1,19 @@ --- mapped_pages: - https://www.elastic.co/guide/en/cloud-enterprise/current/ece-deployment-sso.html +applies_to: + deployment: + ece: all --- # Configure SSO for deployments [ece-deployment-sso] -The single sign-on (SSO) feature in ECE allows `platform admins` and `deployment managers` to log in to their Kibana instances automatically once they are logged in to ECE. +The single sign-on (SSO) feature in ECE allows `platform admins` and `deployment managers` to log in to their {{kib}} instances automatically after they are logged in to ECE. ::::{note} Single sign-on is not available for system deployments; you need to use credentials to log in to them. :::: -To use single sign-on you first need to [configure the API base URL](../../deploy/cloud-enterprise/change-ece-api-url.md). Once this is set, all new deployments are SSO-enabled automatically, and existing deployments become SSO-enabled after any plan changes are applied. +To use single sign-on, you first need to [configure the API base URL](/deploy-manage/deploy/cloud-enterprise/change-ece-api-url.md). After this is set, all new deployments are SSO-enabled automatically, and existing deployments become SSO-enabled after any plan changes are applied. diff --git a/deploy-manage/users-roles/cloud-enterprise-orchestrator/ldap.md b/deploy-manage/users-roles/cloud-enterprise-orchestrator/ldap.md index 3524bed3c..ca8545036 100644 --- a/deploy-manage/users-roles/cloud-enterprise-orchestrator/ldap.md +++ b/deploy-manage/users-roles/cloud-enterprise-orchestrator/ldap.md @@ -1,22 +1,27 @@ --- mapped_pages: - https://www.elastic.co/guide/en/cloud-enterprise/current/ece-create-ldap-profiles.html +applies_to: + deployment: + ece: all --- # LDAP [ece-create-ldap-profiles] -If you use a Lightweight Directory Access Protocol (LDAP) server to authenticate users, you can specify the servers, parameters, and the search modes that Elastic Cloud Enterprise uses to locate user credentials. There are several sections to the profile: +If you use a Lightweight Directory Access Protocol (LDAP) server to authenticate users, you can specify the servers, parameters, and the search modes that {{ece}} uses to locate user credentials. To set up LDAP authentication, perform the following steps: -* Specify the [general LDAP settings](#ece-ldap-general-settings). 
-* Optional: Prepare the [trusted CA certificates](#ece-prepare-ldap-certificates). -* Supply the [bind credentials](#ece-supply-ldap-bind-credentials). -* Select the [search mode and group search](#ece-ldap-search-mode) settings. -* Create [role mappings](#ece-ldap-role-mapping), either to all users that match the profile or assign roles to specific groups. -* Add any [custom configuration](#ece-ldap-custom-configuration) advanced settings to the YAML file. +1. Specify the [general LDAP settings](#ece-ldap-general-settings). +2. Optional: Prepare the [trusted CA certificates](#ece-prepare-ldap-certificates). +3. Supply the [bind credentials](#ece-supply-ldap-bind-credentials). +4. Select the [search mode and group search](#ece-ldap-search-mode) settings. +5. Create [role mappings](#ece-ldap-role-mapping), either to all users that match the profile, or assign roles to specific groups. +6. Add any [custom configuration](#ece-ldap-custom-configuration) advanced settings to the YAML file. -$$$ece-ldap-general-settings$$$Begin the provider profile by adding the general settings: +## Add the general settings [ece-ldap-general-settings] -1. [Log into the Cloud UI](../../deploy/cloud-enterprise/log-into-cloud-ui.md). +Begin the provider profile by adding the general settings: + +1. [Log into the Cloud UI](/deploy-manage/deploy/cloud-enterprise/log-into-cloud-ui.md). 2. Go to **Users** and then **Authentication providers**. 3. From the **Add provider** drop-down menu, select **LDAP**. 4. Provide a unique profile name. This name becomes the realm ID, with any spaces replaced by hyphens. @@ -43,9 +48,9 @@ $$$ece-ldap-general-settings$$$Begin the provider profile by adding the general -## Prepare certificates [ece-prepare-ldap-certificates] +## Prepare certificates (optional) [ece-prepare-ldap-certificates] -Though optional, you can add one or more certificate authorities (CAs) to validate the server certificate that the Domain Controller uses for SSL/TLS. Connecting through SSL/TLS ensures that the identity of the AD server is authenticated before Elastic Cloud Enterprise transmits the user credentials and that the contents of the connection are encrypted. +You can add one or more certificate authorities (CAs) to validate the server certificate that the LDAP server uses for SSL/TLS. Connecting through SSL/TLS ensures that the identity of the LDAP server is authenticated before {{ece}} transmits the user credentials and that the contents of the connection are encrypted. 1. Provide the URL to the ZIP file that contains a keystore with the CA certificate(s). @@ -91,7 +96,7 @@ To configure the template search: ## Configure the group search settings [ece-ldap-search-groups] -You can configure how Elastic Cloud Enterprise searches for groups in the LDAP Server +You can configure how {{ece}} searches for groups in the LDAP server. To configure the group search: @@ -111,16 +116,22 @@ To configure the group search: ## Create role mappings [ece-ldap-role-mapping] -When a user is authenticated, the role mapping assigns them roles in Elastic Cloud Enterprise. +When a user is authenticated, the role mapping assigns them roles in {{ece}}. To assign all authenticated users a single role, select one of the **Default roles**. To assign roles according to the **User DN** of the user or **Group DN** of the group they belong to, use the **Add role mapping rule** fields. 
+For a list of roles, refer to [Available roles and permissions](/deploy-manage/users-roles/cloud-enterprise-orchestrator/manage-users-roles.md#ece-user-role-permissions). + ## Custom configuration [ece-ldap-custom-configuration] -You can add any additional settings to the **Advanced configuration** YAML file. For example, if you need to ignore the SSL check in a testing environment, you might add `ssl.verification_mode: none`. Note that all entries added should omit `xpack.security.authc.realms.ldap.$realm_id` prefix, as ECE will insert this itself and automatically account for any differences in format across Elasticsearch versions. +You can add any additional settings to the **Advanced configuration** YAML file. For example, if you need to ignore the SSL check in a testing environment, you might add `ssl.verification_mode: none`. + +:::{note} +All entries added should omit the `xpack.security.authc.realms.ldap.$realm_id` prefix, as ECE will insert this itself and automatically account for any differences in format across {{es}} versions. +::: ::::{important} API keys created by LDAP users are not automatically deleted or disabled when the user is deleted or disabled in LDAP. When you delete a user in LDAP, make sure to also remove the user’s API key or delete the user in ECE. diff --git a/deploy-manage/users-roles/cloud-enterprise-orchestrator/manage-system-passwords.md b/deploy-manage/users-roles/cloud-enterprise-orchestrator/manage-system-passwords.md index 3bb9ef60b..2156fd18b 100644 --- a/deploy-manage/users-roles/cloud-enterprise-orchestrator/manage-system-passwords.md +++ b/deploy-manage/users-roles/cloud-enterprise-orchestrator/manage-system-passwords.md @@ -1,11 +1,14 @@ --- mapped_pages: - https://www.elastic.co/guide/en/cloud-enterprise/current/ece-manage-system-passwords.html +applies_to: + deployment: + ece: all --- -# Manage system passwords [ece-manage-system-passwords] +# Manage system passwords [ece-manage-system-passwords] -At the end of the Elastic Cloud Enterprise installation process on the first host, you are provided with the URL and user credentials for the administration console users `admin` and `readonly`. You use this information to log into the Cloud UI. Both users can access all parts of the Cloud UI, but only the `admin` user can make changes. We recommend that you keep this information secure. +At the end of the {{ece}} installation process on the first host, you are provided with the URL and user credentials for the administration console users `admin` and `readonly`. You use this information to log into the Cloud UI. Both users can access all parts of the Cloud UI, but only the `admin` user can make changes. We recommend that you keep this information secure. ## Retrieve user passwords [ece-retrieve-passwords] @@ -33,7 +36,7 @@ You access the Cloud UI on port 12400 or port 12443 at IP address of the first You might need to reset the Cloud UI passwords for one of the following reasons: -* To change the passwords for the `admin` and `readonly` users after installing Elastic Cloud Enterprise or periodically as part of your standard operating procedures. +* To change the passwords for the `admin` and `readonly` users after installing {{ece}} or periodically as part of your standard operating procedures. * To reset passwords if you think they might have become compromised. The passwords for these users are stored in `/mnt/data/elastic/bootstrap-state/bootstrap-secrets.json` along with other secrets (unless you specified a different host storage path). 
@@ -50,5 +53,5 @@ To reset the password for the `admin` user if no secrets file exists: bash elastic-cloud-enterprise.sh reset-adminconsole-password ``` -For additional usage examples, check [`elastic-cloud-enterprise.sh reset-adminconsole-password` Reference](asciidocalypse://docs/cloud/docs/reference/cloud/cloud-enterprise/ece-installation-script-reset.md). +For additional usage examples, check [`elastic-cloud-enterprise.sh reset-adminconsole-password` Reference](asciidocalypse://docs/cloud/docs/reference/cloud-enterprise/ece-installation-script-reset.md). diff --git a/deploy-manage/users-roles/cloud-enterprise-orchestrator/manage-users-roles.md b/deploy-manage/users-roles/cloud-enterprise-orchestrator/manage-users-roles.md index 2672650b2..b3936bec5 100644 --- a/deploy-manage/users-roles/cloud-enterprise-orchestrator/manage-users-roles.md +++ b/deploy-manage/users-roles/cloud-enterprise-orchestrator/manage-users-roles.md @@ -1,15 +1,14 @@ --- mapped_pages: - https://www.elastic.co/guide/en/cloud-enterprise/current/ece-configure-rbac.html +applies_to: + deployment: + ece: all --- # Manage users and roles [ece-configure-rbac] -:::{image} ../../../images/cloud-enterprise-ece-rbac-intro.png -:alt: User groups -::: - -Role-based access control (RBAC) provides a way to add multiple users and restrict their access to specific platform resources. In addition to the system `admin` and `readonly` users, you can utilize pre-built roles to control access to platform operations, deployment assets, or API calls. +Role-based access control (RBAC) provides a way to add multiple users and restrict their access to specific platform resources. In addition to the system `admin` and `readonly` users, you can create additional users and assign pre-built roles to control access to platform operations, deployment assets, or API calls. Implementing RBAC in your environment benefits you in several ways: @@ -19,18 +18,21 @@ Implementing RBAC in your environment benefits you in several ways: * Adds multiple users by: * Creating [native users](native-user-authentication.md) locally. - * Integrating with third-party authentication providers like [ActiveDirectory](active-directory.md), [LDAP](ldap.md) or [SAML](saml.md). + * Integrating with third-party authentication providers like [Active Directory](active-directory.md), [LDAP](ldap.md) or [SAML](saml.md). +::::{tip} +This topic describes implementing RBAC at the {{ece}} installation level, which can be used to access the Cloud UI, and which can be set up to provide SSO capabilities to access deployments orchestrated by your {{ece}} installation. -::::{important} -With RBAC, interacting with API endpoints now requires a [bearer token](asciidocalypse://docs/cloud/docs/reference/cloud/cloud-enterprise/ece-api-command-line.md) or [API key](../../api-keys/elastic-cloud-enterprise-api-keys.md#ece-api-keys). +If you want to manage access to each deployment individually, then refer to [](/deploy-manage/users-roles/cluster-or-deployment-auth.md). :::: - +::::{important} +With RBAC, interacting with API endpoints now requires a [bearer token](asciidocalypse://docs/cloud/docs/reference/cloud-enterprise/ece-api-command-line.md) or [API key](../../api-keys/elastic-cloud-enterprise-api-keys.md#ece-api-keys). +:::: ## Before you begin [ece_before_you_begin_8] -To prepare for RBAC, you should review the Elastic Cloud Enterprise [limitations and known issues](asciidocalypse://docs/cloud/docs/release-notes/known-issues/cloud-enterprise.md). 
+To prepare for RBAC, you should review the {{ece}} [limitations and known issues](asciidocalypse://docs/cloud/docs/release-notes/cloud-enterprise/known-issues.md). ## Available roles and permissions [ece-user-role-permissions] @@ -50,30 +52,42 @@ Deployment viewer : Can view non-system deployments, including their activity. Can prepare the diagnostic bundle, inspect the files, and download the bundle as a ZIP file. -## Configure security deployment [ece-configure-security-deployment] +## Step 1: Configure the security deployment [ece-configure-security-deployment] -The security deployment is a system deployment that manages all of the Elastic Cloud Enterprise authentication and permissions. It is created automatically during installation. +The security deployment is a system deployment that manages all of the {{ece}} authentication and permissions. It is created automatically during installation. ::::{important} -We strongly recommend using three availability zones with at least 1 GB Elasticsearch nodes. You can scale up if you expect a heavy authentication workload. +We strongly recommend using three availability zones with at least 1 GB {{es}} nodes. You can scale up if you expect a heavy authentication workload. :::: -1. [Log into the Cloud UI](../../deploy/cloud-enterprise/log-into-cloud-ui.md). +1. [Log into the Cloud UI](/deploy-manage/deploy/cloud-enterprise/log-into-cloud-ui.md). 2. Go to **Deployments** a select the **security-cluster**. -3. Configure regular snapshots of the security deployment. This is critical if you plan to create any native users. -4. Optional: [Enable monitoring](../../monitor/stack-monitoring/ece-stack-monitoring.md) on the security deployment to a dedicated monitoring deployment. +3. Configure regular [snapshots](/deploy-manage/tools/snapshot-and-restore/create-snapshots.md) of the security deployment. This is critical if you plan to create any native users. +4. Optional: [Enable monitoring](/deploy-manage/monitor/stack-monitoring/ece-stack-monitoring.md) on the security deployment to a dedicated monitoring deployment. + +If you have authentication issues, you can check out the security deployment {{es}} [logs](/deploy-manage/monitor/logging-configuration.md). + +## Step 2: Set up provider profiles + +Configure any third-party authentication providers that you want to use. + +If you want to use only [native user authentication](native-user-authentication.md), then no additional configuration is required. + +* [Active Directory](active-directory.md) +* [LDAP](ldap.md) +* [SAML](saml.md) -If you have authentication issues, you check out the security deployment Elasticsearch logs. +During setup, you can map users to {{ece}} roles according to their properties. -## Change the order of provider profiles [ece-provider-order] +## Step 3: Change the order of provider profiles [ece-provider-order] -Elastic Cloud Enterprise performs authentication checks against the configured providers, in order. When a match is found, the user search stops. The roles specified by that first profile match dictate which permissions the user is granted—​regardless of what permissions might be available in another, lower-order profile. +{{ece}} performs authentication checks against the configured providers, in order. When a match is found, the user search stops. The roles specified by that first profile match dictate which permissions the user is granted—​regardless of what permissions might be available in another, lower-order profile. To change the provider order: -1. 
[Log into the Cloud UI](../../deploy/cloud-enterprise/log-into-cloud-ui.md). 2. Go to **Users** and then **Authentication providers**. 3. Use the carets to update the provider order. diff --git a/deploy-manage/users-roles/cloud-enterprise-orchestrator/native-user-authentication.md b/deploy-manage/users-roles/cloud-enterprise-orchestrator/native-user-authentication.md index 557730133..9b9ffb702 100644 --- a/deploy-manage/users-roles/cloud-enterprise-orchestrator/native-user-authentication.md +++ b/deploy-manage/users-roles/cloud-enterprise-orchestrator/native-user-authentication.md @@ -1,16 +1,19 @@ --- mapped_pages: - https://www.elastic.co/guide/en/cloud-enterprise/current/ece-add-native-users.html +applies_to: + deployment: + ece: all --- -# Native user authentication [ece-add-native-users] +# Native users [ece-add-native-users] -If you are adding a small number of users and don’t mind managing them manually, using the local *native* authentication might be the best fit for your team. +If you are adding a small number of users and don’t mind managing them manually, using native authentication might be the best fit for your team. -1. [Log into the Cloud UI](../../deploy/cloud-enterprise/log-into-cloud-ui.md). +1. [Log into the Cloud UI](/deploy-manage/deploy/cloud-enterprise/log-into-cloud-ui.md). 2. Go to **Users** and then **Native users**. 3. Select **Create user**. -4. Provide the user details, select the role, and set their password. +4. Provide the user details, select the [role](/deploy-manage/users-roles/cloud-enterprise-orchestrator/manage-users-roles.md#ece-user-role-permissions), and set their password. The password must be a minimum of eight characters. diff --git a/deploy-manage/users-roles/cloud-enterprise-orchestrator/saml.md b/deploy-manage/users-roles/cloud-enterprise-orchestrator/saml.md index b2ce58d45..f698633ca 100644 --- a/deploy-manage/users-roles/cloud-enterprise-orchestrator/saml.md +++ b/deploy-manage/users-roles/cloud-enterprise-orchestrator/saml.md @@ -1,32 +1,41 @@ --- mapped_pages: - https://www.elastic.co/guide/en/cloud-enterprise/current/ece-create-saml-profiles.html +applies_to: + deployment: + ece: all --- # SAML [ece-create-saml-profiles] -You can configure Elastic Cloud Enterprise to delegate authentication of users to a Security Assertion Markup Language (SAML) authentication provider. Elastic Cloud Enterprise supports the SAML 2.0 Web Browser Single Sign On Profile only and this requires the use of a web browser. Due to this, SAML profiles should not be used for standard API clients. The security deployment acts as a SAML 2.0 compliant *service provider*. +You can configure {{ece}} to delegate authentication of users to a Security Assertion Markup Language (SAML) authentication provider. {{ece}} supports the SAML 2.0 Web Browser Single Sign On Profile only, and this requires the use of a web browser. Due to this, SAML profiles should not be used for standard API clients. The security deployment acts as a SAML 2.0 compliant *service provider*. -There are several sections to the profile: +:::{tip} +This topic describes implementing SAML SSO at the {{ece}} installation level. If you want to control access to a specific deployment, then refer to [SAML authentication](/deploy-manage/users-roles/cluster-or-deployment-auth/saml.md). +::: -* Specify the [general SAML settings](#ece-saml-general-settings). 
-* Specify the necessary [attribute mappings](#ece-saml-attributes) -* Create [role mappings](#ece-saml-role-mapping), either to all users that match the profile or assign roles to specific attribute values. -* Add any [custom configuration](#ece-saml-custom-configuration) advanced settings to the YAML file. For example, if you need to ignore the SSL check for the SSL certificate of the Domain Controller in a testing environment, you might add `ssl.verification_mode: none`. Note that all entries added should omit `xpack.security.authc.realms.saml.$realm_id` prefix, as ECE will insert this itself and automatically account for any differences in format across Elasticsearch versions. -* Optional: Prepare the [trusted SSL certificate bundle](#ece-saml-ssl-certificates). -* Sign the [outgoing SAML messages](#ece-configure-saml-signing-certificates). -* [Encrypt SAML messages](#ece-encrypt-saml). +To set up SAML authentication, perform the following steps: -$$$ece-saml-general-settings$$$Begin the provider profile by adding the general settings: +1. Specify the [general SAML settings](#ece-saml-general-settings). +2. Specify the necessary [attribute mappings](#ece-saml-attributes) +3. Create [role mappings](#ece-saml-role-mapping), either to all users that match the profile or assign roles to specific attribute values. +4. Add any [custom configuration](#ece-saml-custom-configuration) advanced settings to the YAML file. +5. Optional: Prepare the [trusted SSL certificate bundle](#ece-saml-ssl-certificates). +6. Sign the [outgoing SAML messages](#ece-configure-saml-signing-certificates). +7. [Encrypt SAML messages](#ece-encrypt-saml). -1. [Log into the Cloud UI](../../deploy/cloud-enterprise/log-into-cloud-ui.md). +## Add the general settings [ece-saml-general-settings] + +Begin the provider profile by adding the general settings: + +1. [Log into the Cloud UI](/deploy-manage/deploy/cloud-enterprise/log-into-cloud-ui.md). 2. Go to **Users** and then **Authentication providers**. 3. From the **Add provider** drop-down menu, select **SAML**. 4. Provide a unique profile name. This name becomes the realm ID, with any spaces replaced by hyphens. The name can be changed, but the realm ID cannot. The realm ID becomes part of the [certificate bundle](#ece-saml-ssl-certificates). -5. Enter the Assertion Consumer Service URL endpoint within Elastic Cloud Enterprise that receives the SAML assertion. +5. Enter the Assertion Consumer Service URL endpoint within {{ece}} that receives the SAML assertion. Example: `https://HOSTNAME_OR_IP_ADDRESS:12443/api/v1/users/auth/saml/_callback` @@ -38,7 +47,7 @@ $$$ece-saml-general-settings$$$Begin the provider profile by adding the general Example: `urn:example:idp` -8. Enter the URI for the SAML **service provider entity ID** that represents Elastic Cloud Enterprise. The only restriction is that this is a valid URI, but the common practice is to use the domain name where the Service Provider is available. +8. Enter the URI for the SAML **service provider entity ID** that represents {{ece}}. The only restriction is that this is a valid URI, but the common practice is to use the domain name where the Service Provider is available. 
Example: `http://SECURITY_DEPLOYMENT_IP:12443` @@ -48,21 +57,30 @@ $$$ece-saml-general-settings$$$Begin the provider profile by adding the general -## Map SAML attributes to User Properties [ece-saml-attributes] +## Map SAML attributes to user properties [ece-saml-attributes] + +The SAML assertion about a user usually includes attribute names and values that can be used for role mapping. The configuration in this section allows you to configure a mapping between these SAML attribute values and [{{es}} user properties](/deploy-manage/users-roles/cluster-or-deployment-auth/saml.md#saml-es-user-properties). + +When the attributes have been mapped to user properties such as `groups`, these can then be used to configure [role mappings](#ece-saml-role-mapping). Mapping the `principal` user property is required and the `groups` property is recommended for a minimum configuration. -The SAML assertion about a user usually includes attribute names and values that can be used for role mapping. The configuration in this section allows to configure a mapping between these SAML attribute values and [Elasticsearch user properties](/deploy-manage/users-roles/cluster-or-deployment-auth/saml.md#saml-elasticsearch-authentication). When the attributes have been mapped to user properties such as `groups`, these can then be used to configure [role mappings](#ece-saml-role-mapping). Mapping the `principal` user property is required and the `groups` property is recommended for a minimum configuration. +:::{note} +Although the SAML specification does not have many restrictions on the type of value that is mapped to the `principal` user property, ECE requires that the mapped value is also a valid {{es}} native realm identifier. -Note that some additional attention must be paid to the `principal` user property. Although the SAML specification does not have many restrictions on the type of value that is mapped, ECE requires that the mapped value is also a valid Elasticsearch native realm identifier. Specifically, this means the mapped identifier should not contain any commas or slashes, and should be otherwise URL friendly. +This means the mapped identifier should not contain any commas or slashes, and should be otherwise URL friendly. +::: ## Create role mappings [ece-saml-role-mapping] -When a user is authenticated, the role mapping assigns them roles in Elastic Cloud Enterprise. You can assign roles by: +When a user is authenticated, the role mapping assigns them roles in {{ece}}. -* Assigning all authenticated users a single role, select one of the **Default roles**. -* Assigning roles according to the user properties (such as `dn`, `groups`, `username`), use the **Add role mapping rule** fields. +To assign all authenticated users a single role, select one of the **Default roles**. -In the following example, you have configured the Elasticsearch user property `groups` to map to the SAML attribute with name `SAML_Roles` and you want only users whose SAML assertion contains the `SAML_Roles` attribute with value `p_viewer` to get the `Platform viewer` role in Elastic Cloud Enterprise. +To assign roles according to the user properties (such as `dn`, `groups`, `username`), use the **Add role mapping rule** fields. +For a list of roles, refer to [Available roles and permissions](/deploy-manage/users-roles/cloud-enterprise-orchestrator/manage-users-roles.md#ece-user-role-permissions). 
+ +In the following example, you have configured the {{es}} user property `groups` to map to the SAML attribute with name `SAML_Roles` and you want only users whose SAML assertion contains the `SAML_Roles` attribute with value `p_viewer` to get the `Platform viewer` role in {{ece}}. To complete the role mapping: @@ -75,17 +93,23 @@ To complete the role mapping: ## Custom configuration [ece-saml-custom-configuration] -You can add any additional settings to the **Advanced configuration** YAML file. +You can add any additional settings to the **Advanced configuration** YAML file. + +For example, if you need to ignore the SSL check for the SSL certificate of the domain controller in a testing environment, you might add `ssl.verification_mode: none`. + +:::{note} +All entries added should omit the `xpack.security.authc.realms.saml.$realm_id` prefix, as ECE will insert this itself and automatically account for any differences in format across {{es}} versions. +::: You can also enable some other options: -* **Use single logout (SLO)** makes sure that when a user logs out of Elastic Cloud Enterprise, they will also be redirected to the SAML Identity Provider so that they can logout there and subsequently logout from all of the other SAML sessions they might have with other SAML Service Providers. +* **Use single logout (SLO)** makes sure that when a user logs out of {{ece}}, they will also be redirected to the SAML Identity Provider so that they can log out there and subsequently log out from all of the other SAML sessions they might have with other SAML Service Providers. * **Enable force authentication** means that the Identity Provider must re-authenticate the user for each new session, even if the user has an existing, authenticated session with the Identity Provider. -## Prepare SAML SSL certificates [ece-saml-ssl-certificates] +## Prepare SAML SSL certificates (Optional) [ece-saml-ssl-certificates] -Though optional, you can add one or more certificate authorities (CAs) to validate the SSL/TLS certificate of the server that is hosting the metadata file. This might be useful when the Identity Provider uses a certificate for TLS that is signed by an organization specific Certification Authority, that is not trusted by default by Elastic Cloud Enterprise. +You can add one or more certificate authorities (CAs) to validate the SSL/TLS certificate of the server that is hosting the metadata file. This might be useful when the Identity Provider uses a certificate for TLS that is signed by an organization-specific Certification Authority that is not trusted by default by {{ece}}. 1. Expand the **Advanced settings**. 2. Provide the **SSL certificate URL** to the ZIP file. @@ -98,7 +122,7 @@ Though optional, you can add one or more certificate authorities (CAs) to valida ## Configure SAML signing certificates [ece-configure-saml-signing-certificates] -Elastic Cloud Enterprise can be configured to sign all outgoing SAML messages. Signing the outgoing messages provides assurance that the messages are coming from the expected service. +{{ece}} can be configured to sign all outgoing SAML messages. Signing the outgoing messages provides assurance that the messages are coming from the expected service. 1. Provide the **Signing certificate URL** to the ZIP file with the private key and certificate. @@ -110,7 +134,7 @@ Elastic Cloud Enterprise can be configured to sign all outgoing SAML messages.
S ## Configure for the encryption of SAML messages [ece-encrypt-saml] -If your environment requires SAML messages to be encrypted communications, Elastic Cloud Enterprise can be configured with an encryption certificate and key pair. When the Identity Provider uses the public key in this certificate to encrypt the SAML Response ( or parts of it ), Elastic Cloud Enterprise will use the corresponding key to decrypt the message. +If your environment requires SAML messages to be encrypted communications, {{ece}} can be configured with an encryption certificate and key pair. When the Identity Provider uses the public key in this certificate to encrypt the SAML Response ( or parts of it ), {{ece}} will use the corresponding key to decrypt the message. 1. Provide the **Encryption certificate URL** to the ZIP file with the private key and certificate. diff --git a/deploy-manage/users-roles/cloud-organization.md b/deploy-manage/users-roles/cloud-organization.md index 038eed216..469fa3e2f 100644 --- a/deploy-manage/users-roles/cloud-organization.md +++ b/deploy-manage/users-roles/cloud-organization.md @@ -2,9 +2,10 @@ navigation_title: "Cloud organization" mapped_pages: - https://www.elastic.co/guide/en/cloud/current/ec-organizations.html -applies: +applies_to: + deployment: + ess: all serverless: all - hosted: all --- # Cloud organization users [ec-organizations] @@ -25,9 +26,9 @@ If you're using {{ech}}, then you can also manage users and control access [at t ## Should I use organization-level or deployment-level SSO? [organization-deployment-sso] -:::{applies_to} -:hosted: all -::: +```{applies_to} +ess: all +``` :::{include} _snippets/org-vs-deploy-sso.md ::: \ No newline at end of file diff --git a/deploy-manage/users-roles/cloud-organization/configure-saml-authentication.md b/deploy-manage/users-roles/cloud-organization/configure-saml-authentication.md index 7bb77e7a8..19ce231cb 100644 --- a/deploy-manage/users-roles/cloud-organization/configure-saml-authentication.md +++ b/deploy-manage/users-roles/cloud-organization/configure-saml-authentication.md @@ -2,8 +2,9 @@ navigation_title: "Configure SAML SSO" mapped_pages: - https://www.elastic.co/guide/en/cloud/current/ec-saml-sso.html -applies: - hosted: all +applies_to: + deployment: + ess: all serverless: all --- diff --git a/deploy-manage/users-roles/cloud-organization/manage-users.md b/deploy-manage/users-roles/cloud-organization/manage-users.md index 884b28b65..fff3b8f91 100644 --- a/deploy-manage/users-roles/cloud-organization/manage-users.md +++ b/deploy-manage/users-roles/cloud-organization/manage-users.md @@ -3,9 +3,10 @@ mapped_urls: - https://www.elastic.co/guide/en/cloud/current/ec-invite-users.html - https://www.elastic.co/guide/en/serverless/current/general-manage-organization.html - https://www.elastic.co/guide/en/cloud/current/ec-api-organizations.html -applies: +applies_to: + deployment: + ess: all serverless: all - hosted: all --- # Manage users @@ -54,7 +55,7 @@ You can also manage members of your organization using the [{{ecloud}} API](http :::{dropdown} Get information about your organization -Get information about your Elasticsearch Service organization. +Get information about your {{ecloud}} organization. ```sh curl -XGET \ @@ -65,7 +66,7 @@ curl -XGET \ :::{dropdown} Invite members to your organization -Invite members to your Elasticsearch Service organization. +Invite members to your {{ecloud}} organization. 
```sh curl -XPOST \ @@ -85,7 +86,7 @@ curl -XPOST \ :::{dropdown} View pending invitations to your organization -View pending invitations to your Elasticsearch Service organization. +View pending invitations to your {{ecloud}} organization. ```sh curl -XGET \ @@ -96,7 +97,7 @@ curl -XGET \ :::{dropdown} View members in your organization -View members in your Elasticsearch Service organization. +View members in your {{ecloud}} organization. ```sh curl -XGET \ @@ -107,7 +108,7 @@ curl -XGET \ :::{dropdown} Remove members from your organization -Remove members from your Elasticsearch Service organization. +Remove members from your {{ecloud}} organization. ```sh curl -XDELETE \ diff --git a/deploy-manage/users-roles/cloud-organization/register-elastic-cloud-saml-in-microsoft-entra-id.md b/deploy-manage/users-roles/cloud-organization/register-elastic-cloud-saml-in-microsoft-entra-id.md index e9acef5ee..6cadd5088 100644 --- a/deploy-manage/users-roles/cloud-organization/register-elastic-cloud-saml-in-microsoft-entra-id.md +++ b/deploy-manage/users-roles/cloud-organization/register-elastic-cloud-saml-in-microsoft-entra-id.md @@ -2,8 +2,9 @@ mapped_pages: - https://www.elastic.co/guide/en/cloud/current/ec-saml-sso-entra.html navigation_title: "Microsoft Entra ID" -applies: - hosted: all +applies_to: + deployment: + ess: all serverless: all --- diff --git a/deploy-manage/users-roles/cloud-organization/register-elastic-cloud-saml-in-okta.md b/deploy-manage/users-roles/cloud-organization/register-elastic-cloud-saml-in-okta.md index 1d34d5ea2..48ccf57f5 100644 --- a/deploy-manage/users-roles/cloud-organization/register-elastic-cloud-saml-in-okta.md +++ b/deploy-manage/users-roles/cloud-organization/register-elastic-cloud-saml-in-okta.md @@ -2,8 +2,9 @@ mapped_pages: - https://www.elastic.co/guide/en/cloud/current/ec-saml-sso-okta.html navigation_title: Okta -applies: - hosted: all +applies_to: + deployment: + ess: all serverless: all --- diff --git a/deploy-manage/users-roles/cloud-organization/user-roles.md b/deploy-manage/users-roles/cloud-organization/user-roles.md index dace2b649..18bd14ba5 100644 --- a/deploy-manage/users-roles/cloud-organization/user-roles.md +++ b/deploy-manage/users-roles/cloud-organization/user-roles.md @@ -2,8 +2,9 @@ mapped_urls: - https://www.elastic.co/guide/en/cloud/current/ec-user-privileges.html - https://www.elastic.co/guide/en/serverless/current/general-manage-organization.html -applies: - hosted: all +applies_to: + deployment: + ess: all serverless: all --- diff --git a/deploy-manage/users-roles/cluster-or-deployment-auth.md b/deploy-manage/users-roles/cluster-or-deployment-auth.md index 7152066a1..16b2a83c4 100644 --- a/deploy-manage/users-roles/cluster-or-deployment-auth.md +++ b/deploy-manage/users-roles/cluster-or-deployment-auth.md @@ -3,11 +3,12 @@ navigation_title: "Cluster or deployment" mapped_urls: - https://www.elastic.co/guide/en/cloud-enterprise/current/ece-securing-clusters.html - https://www.elastic.co/guide/en/cloud/current/ec-security.html -applies: - hosted: all - ece: all - eck: all - stack: all +applies_to: + deployment: + ess: all + ece: all + eck: all + self: all --- # Cluster or deployment users @@ -16,11 +17,11 @@ To prevent unauthorized access to your Elastic resources, you need a way to iden In this section, you’ll learn how to set up authentication and authorization at the cluster or deployment level, and learn about the underlying security technologies that Elasticsearch uses to authenticate and authorize requests internally and across 
services. -This section only covers direct access to and communications with an Elasticsearch cluster - sometimes known as a deployment. To learn about managing access to your {{ecloud}} organization or {{ece}} orchestrator, or to learn how to use single sign-on to access a cluster using your {{ecloud}} credentials, refer to [Manage users and roles](/deploy-manage/users-roles.md). +This section only covers direct access to and communications with an {{es}} cluster - sometimes known as a deployment - as well as the related {{kib}} instance. To learn about managing access to your {{ecloud}} organization or {{ece}} orchestrator, or to learn how to use single sign-on to access a cluster using your {{ecloud}} credentials, refer to [Manage users and roles](/deploy-manage/users-roles.md). ## Quickstart -If you plan to use native Elasticsearch user and role management, then [follow our quickstart](/deploy-manage/users-roles/cluster-or-deployment-auth/quickstart.md) to learn how to set up basic authentication and authorization features. +If you plan to use native Elasticsearch user and role management, then [follow our quickstart](/deploy-manage/users-roles/cluster-or-deployment-auth/quickstart.md) to learn how to set up basic authentication and authorization features, including [spaces](/deploy-manage/manage-spaces.md), [roles](/deploy-manage/users-roles/cluster-or-deployment-auth/user-roles.md), and [native users](/deploy-manage/users-roles/cluster-or-deployment-auth/native.md). ### User authentication diff --git a/deploy-manage/users-roles/cluster-or-deployment-auth/access-agreement.md b/deploy-manage/users-roles/cluster-or-deployment-auth/access-agreement.md index 0de1c9205..9b438c621 100644 --- a/deploy-manage/users-roles/cluster-or-deployment-auth/access-agreement.md +++ b/deploy-manage/users-roles/cluster-or-deployment-auth/access-agreement.md @@ -1,9 +1,15 @@ --- mapped_pages: - https://www.elastic.co/guide/en/kibana/current/xpack-security-access-agreement.html +applies_to: + deployment: + ess: + ece: + eck: + self: --- -# Access agreement [xpack-security-access-agreement] +# {{kib}} access agreement [xpack-security-access-agreement] Access agreement is a [subscription feature](https://www.elastic.co/subscriptions) that requires users to acknowledge and accept an agreement before accessing {{kib}}. The agreement text supports Markdown format and can be specified using the `xpack.security.authc.providers...accessAgreement.message` setting. diff --git a/deploy-manage/users-roles/cluster-or-deployment-auth/active-directory.md b/deploy-manage/users-roles/cluster-or-deployment-auth/active-directory.md index 947690b4a..2da644214 100644 --- a/deploy-manage/users-roles/cluster-or-deployment-auth/active-directory.md +++ b/deploy-manage/users-roles/cluster-or-deployment-auth/active-directory.md @@ -2,34 +2,323 @@ mapped_urls: - https://www.elastic.co/guide/en/elasticsearch/reference/current/active-directory-realm.html - https://www.elastic.co/guide/en/cloud-enterprise/current/ece-securing-clusters-ad.html +applies_to: + deployment: + self: + ess: + ece: + eck: +navigation_title: "Active Directory" --- -# Active directory +# Active Directory user authentication [active-directory-realm] -% What needs to be done: Refine +You can configure {{stack}} {{security-features}} to communicate with Active Directory to authenticate users. 
-% GitHub issue: https://github.com/elastic/docs-projects/issues/347 +:::{tip} +This topic describes implementing Active Directory at the cluster or deployment level, for the purposes of authenticating with {{es}} and {{kib}}. -% Use migrated content from existing pages that map to this page: +You can also configure an {{ece}} installation to use an Active Directory to authenticate users. [Learn more](/deploy-manage/users-roles/cloud-enterprise-orchestrator/active-directory.md). +::: -% - [ ] ./raw-migrated-files/elasticsearch/elasticsearch-reference/active-directory-realm.md -% - [ ] ./raw-migrated-files/cloud/cloud-enterprise/ece-securing-clusters-ad.md +## How it works -% Internal links rely on the following IDs being on this page (e.g. as a heading ID, paragraph ID, etc): +The {{security-features}} use LDAP to communicate with Active Directory, so `active_directory` realms are similar to [`ldap` realms](/deploy-manage/users-roles/cluster-or-deployment-auth/ldap.md). Like LDAP directories, Active Directory stores users and groups hierarchically. The directory’s hierarchy is built from containers such as the *organizational unit* (`ou`), *organization* (`o`), and *domain component* (`dc`). -$$$ece-ad-configuration-with-bind-user$$$ +The path to an entry is a *Distinguished Name* (DN) that uniquely identifies a user or group. User and group names typically have attributes such as a *common name* (`cn`) or *unique ID* (`uid`). A DN is specified as a string, for example `"cn=admin,dc=example,dc=com"` (white spaces are ignored). + +The {{security-features}} support only Active Directory security groups. You can't map distribution groups to roles. + +::::{note} +When you use Active Directory for authentication, the username entered by the user is expected to match the `sAMAccountName` or `userPrincipalName`, not the common name. +:::: + +The Active Directory realm authenticates users using an LDAP bind request. After authenticating the user, the realm then searches to find the user’s entry in Active Directory. After the user has been found, the Active Directory realm then retrieves the user’s group memberships from the `tokenGroups` attribute on the user’s entry in Active Directory. + +To integrate with Active Directory, you configure an `active_directory` realm and map Active Directory groups to user roles in {{es}}. + +:::{tip} +If your Active Directory domain supports authentication with user-provided credentials, then you don't need to configure a `bind_dn`. [Learn more](#ece-ad-configuration-with-bind-user). +::: + +## Step 1: Add a new realm configuration [ad-realm-configuration] + +1. Add a realm configuration of type `active_directory` to `elasticsearch.yml` under the `xpack.security.authc.realms.active_directory` namespace. At a minimum, you must specify the Active Directory `domain_name` and `order`. + +    See [Active Directory realm settings](https://www.elastic.co/guide/en/elasticsearch/reference/current/security-settings.html#ref-ad-settings) for all of the options you can set for an `active_directory` realm. + +    :::{note} +    Binding to Active Directory fails if the domain name is not mapped in DNS. + +    In a self-managed cluster, if DNS is not being provided by a Windows DNS server, you can add a mapping for the domain in the local `/etc/hosts` file.
+ ::: + + ::::{tab-set} + + :::{tab-item} Single domain + The following realm configuration configures {{es}} to connect to `ldaps://example.com:636` to authenticate users through Active Directory: + + ```yaml + xpack: + security: + authc: + realms: + active_directory: + my_ad: + order: 0 <1> + domain_name: ad.example.com <2> + url: ldaps://ad.example.com:636 <3> + ``` + + 1. The order in which the `active_directory` realm is consulted during an authentication attempt. + 2. The primary domain in Active Directory. Binding to Active Directory fails if the domain name is not mapped in DNS. + 3. The LDAP URL pointing to the Active Directory Domain Controller that should handle authentication. If you don’t specify the URL, it defaults to `ldap::389`. + + ::: + + :::{tab-item} Forest + Set the `domain_name` setting to the forest root domain name. + + You must also set the `url` setting, since you must authenticate against the Global Catalog, which uses a different port and might not be running on every Domain Controller. + + For example, the following realm configuration configures {{es}} to connect to specific Domain Controllers on the Global Catalog port with the domain name set to the forest root: + + ```yaml + xpack: + security: + authc: + realms: + active_directory: + my_ad: + order: 0 + domain_name: example.com <1> + url: ldaps://dc1.ad.example.com:3269, ldaps://dc2.ad.example.com:3269 <2> + load_balance: + type: "round_robin" <3> + ``` + + 1. The `domain_name` is set to the name of the root domain in the forest. + 2. The URLs for two different Domain Controllers, which are also Global Catalog servers. Port 3268 is the default port for unencrypted communication with the Global Catalog. Port 3269 is the default port for SSL connections. The servers that are being connected to can be in any domain of the forest as long as they are also Global Catalog servers. + 3. A load balancing setting is provided to indicate the desired behavior when choosing the server to connect to. + + + In this configuration, users will need to use either their full User Principal Name (UPN) or their down-level logon name: + * A UPN is typically a concatenation of the username with `@ + ``` + + 1. The user to run as for all Active Directory search requests. + +1. Configure the password for the `bind_dn` user by adding the appropriate `xpack.security.authc.realms.active_directory..secure_bind_password` setting [to the {{es}} keystore](/deploy-manage/security/secure-settings.md). + + In self-managed deployments, when a bind user is configured, connection pooling is enabled by default. Connection pooling can be disabled using the `user_search.pool.enabled` setting. + + :::{warning} + In {{ech}} and {{ece}}, after you configure `secure_bind_password`, any attempt to restart the deployment will fail until you complete the rest of the configuration steps. If you want to rollback the Active Directory realm configurations, you need to remove the `xpack.security.authc.realms.active_directory..secure_bind_password` that was just added. + ::: + +## Step 3: Map Active Directory users and groups to roles + +An integral part of a realm authentication process is to resolve the roles associated with the authenticated user. Roles define the privileges a user has in the cluster. + +Because users are managed externally in the Active Directory server, the expectation is that their roles are managed there as well. Active Directory groups often represent user roles for different systems in the organization. 
+ +The `active_directory` realm enables you to map Active Directory users to roles using their Active Directory groups or other metadata. + +You can map Active Directory groups to roles for your users in the following ways: + +* Using the role mappings page in {{kib}}. +* Using the [role mapping API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-put-role-mapping). +* Using a role mapping file. + +For more information, see [Mapping users and groups to roles](/deploy-manage/users-roles/cluster-or-deployment-auth/mapping-users-groups-to-roles.md). + +::::{important} +Only Active Directory security groups are supported. You can't map distribution groups to roles. +:::: + +### Example: using the role mapping API + +```console +POST /_security/role_mapping/ldap-superuser +{ +"enabled": true, +"roles": [ "superuser" ], <1> +"rules": { +"all" : [ +{ "field": { "realm.name": "my_ad" } }, <2> +{ "field": { "groups": "cn=administrators, dc=example, dc=com" } } <3> + ] +}, +"metadata": { "version": 1 } +} +``` + +1. The name of the role we want to assign, in this case `superuser`. +2. The name of our active_directory realm. +3. The Distinguished Name of the Active Directory group whose members should get the `superuser` role in the deployment. + +### Example: Using a role mapping file [ece_using_the_role_mapping_files_2] + +:::{tip} +If you're using {{ece}} or {{ech}}, then you must [upload this file as a custom bundle](/deploy-manage/deploy/elastic-cloud/upload-custom-plugins-bundles.md) before it can be referenced. + +If you're using {{eck}}, then install the file as a [custom configuration file](/deploy-manage/deploy/cloud-on-k8s/custom-configuration-files-plugins.md#use-a-volume-and-volume-mount-together-with-a-configmap-or-secret). + +If you're using a self-managed cluster, then the file must be present on each node. +::: + +```sh +superuser: +- cn=Senior Manager, cn=management, dc=example, dc=com +- cn=Senior Admin, cn=management, dc=example, dc=com +``` + +Referencing the file in `elasticsearch.yml`: + +```yaml +xpack: + security: + authc: + realms: + active_directory: + my_ad: + order: 2 + domain_name: ad.example.com + url: ldaps://ad.example.com:636 + bind_dn: es_svc_user@ad.example.com + ssl: + certificate_authorities: ["/app/config/cacerts/ca.crt"] + verification_mode: certificate + files: + role_mapping: "/app/config/mappings/role-mappings.yml" +``` + +## User metadata in Active Directory realms [ad-user-metadata] + +When a user is authenticated using an Active Directory realm, the following properties are populated in the user’s metadata: + +| Field | Description | +| --- | --- | +| `ldap_dn` | The distinguished name of the user. | +| `ldap_groups` | The distinguished name of each of the groups that were resolved for the user (regardless of whether those groups were mapped to a role). | + +This metadata is returned in the [authenticate API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-authenticate) and can be used with [templated queries](../../../deploy-manage/users-roles/cluster-or-deployment-auth/controlling-access-at-document-field-level.md#templating-role-query) in roles. + +Additional metadata can be extracted from the Active Directory server by configuring the `metadata` setting on the Active Directory realm. 
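+
+For example, a minimal sketch of a realm configuration that also loads extra attributes into the user metadata might look like the following (the realm name `my_ad` matches the earlier examples, and the attribute names are purely illustrative):
+
+```yaml
+xpack:
+  security:
+    authc:
+      realms:
+        active_directory:
+          my_ad:
+            order: 0
+            domain_name: ad.example.com
+            url: ldaps://ad.example.com:636
+            # Additional Active Directory attributes to copy into the user's metadata
+            metadata: ["mail", "department"]
+```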
+ + +## Load balancing and failover [ad-load-balancing] + +The `load_balance.type` setting can be used at the realm level to configure how the {{security-features}} should interact with multiple Active Directory servers. Two modes of operation are supported: failover and load balancing. + +See [Load balancing and failover](https://www.elastic.co/guide/en/elasticsearch/reference/current/security-settings.html#load-balancing). + + +## Encrypting communications between {{es}} and Active Directory [tls-active-directory] $$$ece-ad-configuration-encrypt-communications$$$ -$$$ad-realm-configuration$$$ +To protect the user credentials that are sent for authentication, you should encrypt communications between {{es}} and your Active Directory server. Connecting using SSL/TLS ensures that the identity of the Active Directory server is authenticated before {{es}} transmits the user credentials and the usernames and passwords are encrypted in transit. + +Clients and nodes that connect using SSL/TLS to the Active Directory server need to have the Active Directory server’s certificate or the server’s root CA certificate installed in their keystore or trust store. + +If you're using {{ech}} or {{ece}}, then you must [upload your certificate as a custom bundle](/deploy-manage/deploy/elastic-cloud/upload-custom-plugins-bundles.md) before it can be referenced. + +If you're using {{eck}}, then install the certificate as a [custom configuration file](/deploy-manage/deploy/cloud-on-k8s/custom-configuration-files-plugins.md#use-a-volume-and-volume-mount-together-with-a-configmap-or-secret). + +:::{tip} + +If you're using {{ece}} or {{ech}}, then these steps are required only if TLS is enabled and the Active Directory controller is using self-signed certificates. +::: + +::::{admonition} Certificate formats +The following example uses a PEM encoded certificate. If your CA certificate is available as a `JKS` or `PKCS#12` keystore, you can reference it in the user settings. For example: + +```yaml +xpack.security.authc.realms.active_directory.my_ad.ssl.truststore.path: +"/app/config/truststore/ca.p12" +``` + +If the keystore is also password protected (which isn’t typical for keystores that only contain CA certificates), you can also provide the password for the keystore by adding `xpack.security.authc.realms.active_directory.my_ad.ssl.truststore.password: password` in the user settings. + +:::: -$$$tls-active-directory$$$ +The following example demonstrates how to trust a CA certificate (`cacert.pem`), which is located within the configuration directory. -$$$ad-user-metadata$$$ +```shell +xpack: + security: + authc: + realms: + active_directory: + ad_realm: + order: 0 + domain_name: ad.example.com + url: ldaps://ad.example.com:636 + ssl: + certificate_authorities: [ "ES_PATH_CONF/cacert.pem" ] +``` +For more information about these settings, see [Active Directory realm settings](https://www.elastic.co/guide/en/elasticsearch/reference/current/security-settings.html#ref-ad-settings). +::::{note} +By default, when you configure {{es}} to connect to Active Directory using SSL/TLS, it attempts to verify the hostname or IP address specified with the `url` attribute in the realm configuration with the values in the certificate. If the values in the certificate and realm configuration do not match, {{es}} does not allow a connection to the Active Directory server. This is done to protect against man-in-the-middle attacks. 
If necessary, you can disable this behavior by setting the `ssl.verification_mode` property to `certificate`. +:::: -**This page is a work in progress.** The documentation team is working to combine content pulled from the following pages: +### Using {{kib}} with Active Directory [ad-realm-kibana] -* [/raw-migrated-files/elasticsearch/elasticsearch-reference/active-directory-realm.md](/raw-migrated-files/elasticsearch/elasticsearch-reference/active-directory-realm.md) -* [/raw-migrated-files/cloud/cloud-enterprise/ece-securing-clusters-ad.md](/raw-migrated-files/cloud/cloud-enterprise/ece-securing-clusters-ad.md) \ No newline at end of file +The Active Directory security realm uses the {{kib}}-provided [basic authentication](/deploy-manage/users-roles/cluster-or-deployment-auth/kibana-authentication.md#basic-authentication) login form. Basic authentication is enabled by default. \ No newline at end of file diff --git a/deploy-manage/users-roles/cluster-or-deployment-auth/anonymous-access.md b/deploy-manage/users-roles/cluster-or-deployment-auth/anonymous-access.md index 8845955b1..1e6310694 100644 --- a/deploy-manage/users-roles/cluster-or-deployment-auth/anonymous-access.md +++ b/deploy-manage/users-roles/cluster-or-deployment-auth/anonymous-access.md @@ -1,16 +1,22 @@ --- mapped_pages: - https://www.elastic.co/guide/en/elasticsearch/reference/current/anonymous-access.html +applies_to: + deployment: + ess: + ece: + eck: + self: --- # Anonymous access [anonymous-access] ::::{tip} -To embed {{kib}} dashboards or grant access to {{kib}} without requiring credentials, use {{kib}}'s [anonymous authentication](user-authentication.md#anonymous-authentication) feature instead. +To embed {{kib}} dashboards or grant access to {{kib}} without requiring credentials, use {{kib}}'s [anonymous authentication](/deploy-manage/users-roles/cluster-or-deployment-auth/kibana-authentication.md) feature instead. :::: -Incoming requests are considered to be *anonymous* if no authentication token can be extracted from the incoming request. By default, anonymous requests are rejected and an authentication error is returned (status code `401`). +Incoming requests to {{es}} are considered to be *anonymous* if no authentication token can be extracted from the incoming request. By default, anonymous requests are rejected and an authentication error is returned (status code `401`). To enable anonymous access, you assign one or more roles to anonymous users in the `elasticsearch.yml` configuration file. For example, the following configuration assigns anonymous users `role1` and `role2`: diff --git a/deploy-manage/users-roles/cluster-or-deployment-auth/authentication-realms.md b/deploy-manage/users-roles/cluster-or-deployment-auth/authentication-realms.md index d015d658c..ded314782 100644 --- a/deploy-manage/users-roles/cluster-or-deployment-auth/authentication-realms.md +++ b/deploy-manage/users-roles/cluster-or-deployment-auth/authentication-realms.md @@ -1,51 +1,49 @@ --- mapped_pages: - https://www.elastic.co/guide/en/elasticsearch/reference/current/realms.html +applies_to: + deployment: + ess: all + ece: all + eck: all + self: all --- # Authentication realms [realms] -The {{stack-security-features}} authenticate users by using realms and one or more [token-based authentication services](token-based-authentication-services.md). +Elastic authenticates users by using realms and one or more [token-based authentication services](token-based-authentication-services.md). 
-A *realm* is used to resolve and authenticate users based on authentication tokens. The {{security-features}} provide the following built-in realms: +A *realm* is used to resolve and authenticate users based on authentication tokens. There are two types of realms: -*native* -: An internal realm where users are stored in a dedicated {{es}} index. This realm supports an authentication token in the form of username and password, and is available by default when no realms are explicitly configured. The users are managed via the [user management APIs](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-security). See [Native user authentication](native.md). - -*ldap* -: A realm that uses an external LDAP server to authenticate the users. This realm supports an authentication token in the form of username and password, and requires explicit configuration in order to be used. See [LDAP user authentication](ldap.md). - -*active_directory* -: A realm that uses an external Active Directory Server to authenticate the users. With this realm, users are authenticated by usernames and passwords. See [Active Directory user authentication](active-directory.md). +Internal +: Realms that are internal to {{es}} and don’t require any communication with external parties. They are fully managed by {{es}}. There can only be a maximum of one configured realm per internal realm type. {{es}} provides two internal realm types: `native` and `file`. -*pki* -: A realm that authenticates users using Public Key Infrastructure (PKI). This realm works in conjunction with SSL/TLS and identifies the users through the Distinguished Name (DN) of the client’s X.509 certificates. See [PKI user authentication](pki.md). +External +: Realms that require interaction with parties and components external to {{es}}, typically with enterprise grade identity management systems. Unlike internal realms, you can have as many external realms as you would like, each with its own unique name and configuration. [View external realm types](#external-realms). -*file* -: An internal realm where users are defined in files stored on each node in the {{es}} cluster. This realm supports an authentication token in the form of username and password and is always available. See [File-based user authentication](file-based.md). +## Configuring realms -*saml* -: A realm that facilitates authentication using the SAML 2.0 Web SSO protocol. This realm is designed to support authentication through {{kib}} and is not intended for use in the REST API. See [SAML authentication](saml.md). +To learn how to configure and use a specific realm, follow the documentation for the realm that you want to use. You can also configure a custom realm by building a [custom realm plugin](/deploy-manage/users-roles/cluster-or-deployment-auth/custom.md). -*kerberos* -: A realm that authenticates a user using Kerberos authentication. Users are authenticated on the basis of Kerberos tickets. See [Kerberos authentication](kerberos.md). +You can also perform the following tasks to further configure your realms: -*oidc* -: A realm that facilitates authentication using OpenID Connect. It enables {{es}} to serve as an OpenID Connect Relying Party (RP) and provide single sign-on (SSO) support in {{kib}}. See [Configuring single sign-on to the {{stack}} using OpenID Connect](openid-connect.md). +* Prioritize your realms using [realm chains](/deploy-manage/users-roles/cluster-or-deployment-auth/realm-chains.md). 
+* Allow a single user to authenticate using multiple realms by grouping them together in a [security domain](/deploy-manage/users-roles/cluster-or-deployment-auth/security-domains.md). -*jwt* -: A realm that facilitates using JWT identity tokens as authentication bearer tokens. Compatible tokens are OpenID Connect ID Tokens, or custom JWTs containing the same claims. See [JWT authentication](jwt.md). +## Internal realms -The {{security-features}} also support custom realms. If you need to integrate with another authentication system, you can build a custom realm plugin. For more information, see [Integrating with other authentication systems](custom.md). +{{es}} provides the following built-in internal realms: -## Internal and external realms [_internal_and_external_realms] +:::{include} ../_snippets/internal-realms.md +::: -Realm types can roughly be classified in two categories: +## External realms -Internal -: Realms that are internal to Elasticsearch and don’t require any communication with external parties. They are fully managed by the {{stack}} {{security-features}}. There can only be a maximum of one configured realm per internal realm type. The {{security-features}} provide two internal realm types: `native` and `file`. +{{es}} provides the following built-in external realms: -External -: Realms that require interaction with parties/components external to {{es}}, typically, with enterprise grade identity management systems. Unlike internal realms, there can be as many external realms as one would like - each with its own unique name and configuration. The {{security-features}} provide the following external realm types: `ldap`, `active_directory`, `saml`, `kerberos`, and `pki`. +:::{include} ../_snippets/external-realms.md +::: +## Custom realms +If you need to integrate with another authentication system, you can build a custom realm plugin. For more information, see [Integrating with other authentication systems](custom.md). \ No newline at end of file diff --git a/deploy-manage/users-roles/cluster-or-deployment-auth/authorization-plugins.md b/deploy-manage/users-roles/cluster-or-deployment-auth/authorization-plugins.md index d65179ecd..4d1d7c2c6 100644 --- a/deploy-manage/users-roles/cluster-or-deployment-auth/authorization-plugins.md +++ b/deploy-manage/users-roles/cluster-or-deployment-auth/authorization-plugins.md @@ -60,7 +60,7 @@ In order to register the security extension for your custom roles provider or au 1. Implement a plugin class that extends `org.elasticsearch.plugins.Plugin` 2. Create a build configuration file for the plugin; Gradle is our recommendation. -3. Create a `plugin-descriptor.properties` file as described in [Help for plugin authors](asciidocalypse://docs/elasticsearch/docs/extend/create-elasticsearch-plugins/index.md). +3. Create a `plugin-descriptor.properties` file as described in [Help for plugin authors](asciidocalypse://docs/elasticsearch/docs/extend/index.md). 4. Create a `META-INF/services/org.elasticsearch.xpack.core.security.SecurityExtension` descriptor file for the extension that contains the fully qualified class name of your `org.elasticsearch.xpack.core.security.SecurityExtension` implementation 5. Bundle all in a single zip file. 
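+
+As a concrete sketch of the `META-INF/services` descriptor described in step 4 above, assuming a hypothetical extension class named `org.example.security.MySecurityExtension`, the file can be created like this:
+
+```shell
+# The descriptor file is named after the SecurityExtension interface and contains
+# only the fully qualified class name of your implementation (hypothetical name shown).
+# In a Gradle project, this directory typically lives under src/main/resources/.
+mkdir -p META-INF/services
+echo "org.example.security.MySecurityExtension" > META-INF/services/org.elasticsearch.xpack.core.security.SecurityExtension
+```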
diff --git a/deploy-manage/users-roles/cluster-or-deployment-auth/built-in-sm.md b/deploy-manage/users-roles/cluster-or-deployment-auth/built-in-sm.md new file mode 100644 index 000000000..9c1f1b72d --- /dev/null +++ b/deploy-manage/users-roles/cluster-or-deployment-auth/built-in-sm.md @@ -0,0 +1,62 @@ +--- +mapped_urls: + - https://www.elastic.co/guide/en/elasticsearch/reference/current/change-passwords-native-users.html +applies_to: + deployment: + self: +navigation_title: Change passwords +--- +# Set passwords for native and built-in users in self-managed clusters [change-passwords-native-users] + +After you implement security, you might need or want to change passwords for different users. If you want to reset a password for a [built-in user](/deploy-manage/users-roles/cluster-or-deployment-auth/built-in-users.md) such as the `elastic` or `kibana_system` users, or a user in the [native realm](/deploy-manage/users-roles/cluster-or-deployment-auth/native.md), you can use the following tools: + +* The **Manage users** UI in {{kib}} +* The [`elasticsearch-reset-password`](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/command-line-tools/reset-password.md) tool +* The [change passwords API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-change-password) + +:::{tip} +This topic describes resetting passwords after the initial bootstrap password is reset. To learn about the users that are used to communicate between {{stack}} components, and about managing bootstrap passwords for built-in users, refer to [](/deploy-manage/users-roles/cluster-or-deployment-auth/built-in-users.md). +::: + +## Using {{kib}} + +Elastic enables you to easily manage users in {{kib}} on the **Stack Management > Security > Users** page. From this page, you can create users, edit users, assign roles to users, and change user passwords. You can also deactivate or delete existing users. + +## Using `elasticsearch-reset-password` + +The `elasticsearch-reset-password` tool lets you reset passwords from the command line. For example, the following command changes the password for a user with the username `user1` to an auto-generated value, and prints the new password to the terminal: + +```shell +bin/elasticsearch-reset-password -u user1 +``` + +To explicitly set a password for a user, include the `-i` parameter with the intended password. + +```shell +bin/elasticsearch-reset-password -u user1 -i +``` + +If you’re working in {{kib}} or don’t have command-line access, you can use the change passwords API to change a user’s password: + +```console +POST /_security/user/user1/_password +{ +  "password" : "new-test-password" +} +``` + +## Using the `user` API [native-users-api] + +You can manage users through the {{es}} `user` API. + +For example, you can change a user's password: + +```console +POST /_security/user/user1/_password +{ +  "password" : "new-test-password" +} +``` + +For more information and examples, see [Users](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-security).
+ diff --git a/deploy-manage/users-roles/cluster-or-deployment-auth/built-in-users.md b/deploy-manage/users-roles/cluster-or-deployment-auth/built-in-users.md index 8cd6dedd0..b9d1ce34c 100644 --- a/deploy-manage/users-roles/cluster-or-deployment-auth/built-in-users.md +++ b/deploy-manage/users-roles/cluster-or-deployment-auth/built-in-users.md @@ -1,43 +1,190 @@ --- mapped_urls: - https://www.elastic.co/guide/en/elasticsearch/reference/current/built-in-users.html - - https://www.elastic.co/guide/en/cloud-enterprise/current/ece-password-reset-elastic.html - - https://www.elastic.co/guide/en/cloud/current/ec-password-reset.html - - https://www.elastic.co/guide/en/cloud-heroku/current/ech-password-reset.html - - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-users-and-roles.html - - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-rotate-credentials.html - - https://www.elastic.co/guide/en/elasticsearch/reference/current/change-passwords-native-users.html +applies_to: + deployment: + self: +navigation_title: "Built-in users" --- -# Built-in users +# Built-in users in self-managed clusters [built-in-users] -% What needs to be done: Refine +The {{stack-security-features}} provide built-in user credentials to help you get up and running. These users have a fixed set of privileges and cannot be authenticated until their passwords have been set. The `elastic` user can be used to [set all of the built-in user passwords](/deploy-manage/users-roles/cluster-or-deployment-auth/built-in-users.md#set-built-in-user-passwords). -% GitHub issue: https://github.com/elastic/docs-projects/issues/347 +::::{admonition} Create users with minimum privileges +The built-in users serve specific purposes and are not intended for general use. In particular, do not use the `elastic` superuser unless full access to the cluster is absolutely required. On self-managed deployments, use the `elastic` user to create users that have the minimum necessary roles or privileges for their activities. +:::: -% Use migrated content from existing pages that map to this page: -% - [ ] ./raw-migrated-files/elasticsearch/elasticsearch-reference/built-in-users.md -% - [ ] ./raw-migrated-files/cloud/cloud-enterprise/ece-password-reset-elastic.md -% - [ ] ./raw-migrated-files/cloud/cloud/ec-password-reset.md -% - [ ] ./raw-migrated-files/cloud/cloud-heroku/ech-password-reset.md -% - [ ] ./raw-migrated-files/cloud-on-k8s/cloud-on-k8s/k8s-users-and-roles.md -% Notes: just default elastic user -% - [ ] ./raw-migrated-files/cloud-on-k8s/cloud-on-k8s/k8s-rotate-credentials.md -% - [ ] ./raw-migrated-files/elasticsearch/elasticsearch-reference/change-passwords-native-users.md +::::{note} +On {{ecloud}}, [operator privileges](/deploy-manage/users-roles/cluster-or-deployment-auth/operator-privileges.md) are enabled. These privileges restrict some infrastructure functionality, even if a role would otherwise permit a user to complete an administrative task. +:::: -% Internal links rely on the following IDs being on this page (e.g. as a heading ID, paragraph ID, etc): +## Built-in users -$$$set-built-in-user-passwords$$$ +The following built-in users are available: -$$$bootstrap-elastic-passwords$$$ +`elastic` +: A built-in [superuser](/deploy-manage/users-roles/cluster-or-deployment-auth/built-in-roles.md). + + Anyone who can log in as the `elastic` user has direct read-only access to restricted indices, such as `.security`. This user also has the ability to manage security and create roles with unlimited privileges. 
+ +`kibana_system` +: The user Kibana uses to connect and communicate with {{es}}. + +`logstash_system` +: The user Logstash uses when storing monitoring information in {{es}}. + +`beats_system` +: The user the Beats use when storing monitoring information in {{es}}. + +`apm_system` +: The user the APM server uses when storing monitoring information in {{es}}. + +`remote_monitoring_user` +: The user {{metricbeat}} uses when collecting and storing monitoring information in {{es}}. It has the `remote_monitoring_agent` and `remote_monitoring_collector` built-in roles. + + +## How the built-in users work [built-in-user-explanation] + +These built-in users are stored in a special `.security` index, which is managed by {{es}}. If a built-in user is disabled or its password changes, the change is automatically reflected on each node in the cluster. If your `.security` index is deleted or restored from a snapshot, however, any changes you have applied are lost. + +Although they share the same API, the built-in users are separate and distinct from users managed by the [native realm](/deploy-manage/users-roles/cluster-or-deployment-auth/native.md). Disabling the native realm will not have any effect on the built-in users. The built-in users can be disabled individually, using the [disable users API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-disable-user). + + +## The Elastic bootstrap password [bootstrap-elastic-passwords] +```{{applies_to}} +deployment: + self: +``` + +```{{tip}} +{{ech}}, {{ece}}, and {{eck}} manage the `elastic` user differently. [Learn more](/deploy-manage/users-roles/cluster-or-deployment-auth/orchestrator-managed-users-overview.md). +``` + +When you install {{es}}, if the `elastic` user does not already have a password, it uses a default bootstrap password. The bootstrap password is a transient password that enables you to run the tools that set all the built-in user passwords. + +By default, the bootstrap password is derived from a randomized `keystore.seed` setting, which is added to the keystore during installation. You do not need to know or change this bootstrap password. If you have defined a `bootstrap.password` setting in the keystore, however, that value is used instead. For more information about interacting with the keystore, see [Secure settings](/deploy-manage/security/secure-settings.md). + +::::{note} +After you [set passwords for the built-in users](/deploy-manage/users-roles/cluster-or-deployment-auth/built-in-users.md#set-built-in-user-passwords), in particular for the `elastic` user, there is no further use for the bootstrap password. +:::: + +## Setting initial built-in user passwords [set-built-in-user-passwords] +```{{applies_to}} +deployment: + self: +``` + +You must set the passwords for all built-in users. You can set or reset passwords using several methods. + +* Using `elasticsearch-setup-passwords` +* Using {{kib}} user management +* Using the change password API + +If you want to reset built-in user passwords after initial setup, refer to [Set passwords for native and built-in users in self-managed clusters](/deploy-manage/users-roles/cluster-or-deployment-auth/built-in-sm.md). + +```{{tip}} +{{ech}}, {{ece}}, and {{eck}} manage the `elastic` user differently. [Learn more](/deploy-manage/users-roles/cluster-or-deployment-auth/orchestrator-managed-users-overview.md). 
+``` + +### Using `elasticsearch-setup-passwords` + +The `elasticsearch-setup-passwords` tool is the simplest method to set the built-in users' passwords for the first time. It uses the `elastic` user’s bootstrap password to run user management API requests. For example, you can run the command in an "interactive" mode, which prompts you to enter new passwords for the `elastic`, `kibana_system`, `logstash_system`, `beats_system`, `apm_system`, and `remote_monitoring_user` users: + +```shell +bin/elasticsearch-setup-passwords interactive +``` + +For more information about the command options, see [elasticsearch-setup-passwords](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/command-line-tools/setup-passwords.md). + +::::{important} +After you set a password for the `elastic` user, the bootstrap password is no longer valid; you cannot run the `elasticsearch-setup-passwords` command a second time. +:::: + +### Using {{kib}} user management or the change password API + +You can set the initial passwords for the built-in users by using the **Management > Users** page in {{kib}} or the [change password API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-change-password). + +To use these methods, you must supply the `elastic` user and its bootstrap password to log in to {{kib}} or run the API. This requirement means that you can't use the default bootstrap password that is derived from the `keystore.seed` setting. Instead, you must explicitly set a `bootstrap.password` setting in the keystore before you start {{es}}. For example, the following command prompts you to enter a new bootstrap password: + +```shell +bin/elasticsearch-keystore add "bootstrap.password" +``` + +You can then start {{es}} and {{kib}} and use the `elastic` user and bootstrap password to log in to {{kib}} and change the passwords. + +### Using the Change Password API + +Alternatively, you can submit Change Password API requests for each built-in user. These methods are better suited for changing your passwords after the initial setup is complete, since at that point the bootstrap password is no longer required. + +## Adding built-in user passwords to {{kib}} [add-built-in-user-kibana] + +After the `kibana_system` user password is set, you need to update the {{kib}} server with the new password by setting `elasticsearch.password` in the `kibana.yml` configuration file: + +```yaml +elasticsearch.password: kibanapassword +``` + +See [Configuring security in {{kib}}](/deploy-manage/security.md). + + +## Adding built-in user passwords to {{ls}} [add-built-in-user-logstash] + +The `logstash_system` user is used internally within Logstash when monitoring is enabled for Logstash. + +To enable this feature in Logstash, you need to update the Logstash configuration with the new password by setting `xpack.monitoring.elasticsearch.password` in the `logstash.yml` configuration file: + +```yaml +xpack.monitoring.elasticsearch.password: logstashpassword +``` + +If you have upgraded from an older version of {{es}}, the `logstash_system` user may have defaulted to *disabled* for security reasons. Once the password has been changed, you can enable the user via the following API call: + +```console +PUT _security/user/logstash_system/_enable +``` + +See [Configuring credentials for {{ls}} monitoring](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/secure-connection.md#ls-monitoring-user). 
+ + +## Adding built-in user passwords to Beats [add-built-in-user-beats] + +The `beats_system` user is used internally within Beats when monitoring is enabled for Beats. + +To enable this feature in Beats, you need to update the configuration for each of your beats to reference the correct username and password. For example: + +```yaml +xpack.monitoring.elasticsearch.username: beats_system +xpack.monitoring.elasticsearch.password: beatspassword +``` + +For example, see [Monitoring {{metricbeat}}](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-metricbeat/monitoring.md). + +The `remote_monitoring_user` is used when {{metricbeat}} collects and stores monitoring data for the {{stack}}. See [*Monitoring in a production environment*](/deploy-manage/monitor/stack-monitoring/elasticsearch-monitoring-self-managed.md). + +If you have upgraded from an older version of {{es}}, then you may not have set a password for the `beats_system` or `remote_monitoring_user` users. If this is the case, then you should use the **Management > Users** page in {{kib}} or the [change password API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-change-password) to set a password for these users. + + +## Adding built-in user passwords to APM [add-built-in-user-apm] + +The `apm_system` user is used internally within APM when monitoring is enabled. + +To enable this feature in APM, you need to update the `apm-server.yml` configuration file to reference the correct username and password. For example: + +```yaml +xpack.monitoring.elasticsearch.username: apm_system +xpack.monitoring.elasticsearch.password: apmserverpassword +``` + +If you have upgraded from an older version of {{es}}, then you may not have set a password for the `apm_system` user. If this is the case, then you should use the **Management > Users** page in {{kib}} or the [change password API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-change-password) to set a password for these users. + + +## Disabling default password functionality [disabling-default-password] + +::::{important} +This setting is deprecated. The elastic user no longer has a default password. The password must be set before the user can be used. See [The Elastic bootstrap password](/deploy-manage/users-roles/cluster-or-deployment-auth/built-in-users.md#bootstrap-elastic-passwords). 
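+
+For example, a sketch of such a request using the change password API (the password value is illustrative and matches the example configuration above):
+
+```console
+POST /_security/user/beats_system/_password
+{
+  "password" : "beatspassword"
+}
+```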
+ +:::: -**This page is a work in progress.** The documentation team is working to combine content pulled from the following pages: -* [/raw-migrated-files/elasticsearch/elasticsearch-reference/built-in-users.md](/raw-migrated-files/elasticsearch/elasticsearch-reference/built-in-users.md) -* [/raw-migrated-files/cloud/cloud-enterprise/ece-password-reset-elastic.md](/raw-migrated-files/cloud/cloud-enterprise/ece-password-reset-elastic.md) -* [/raw-migrated-files/cloud/cloud/ec-password-reset.md](/raw-migrated-files/cloud/cloud/ec-password-reset.md) -* [/raw-migrated-files/cloud/cloud-heroku/ech-password-reset.md](/raw-migrated-files/cloud/cloud-heroku/ech-password-reset.md) -* [/raw-migrated-files/cloud-on-k8s/cloud-on-k8s/k8s-users-and-roles.md](/raw-migrated-files/cloud-on-k8s/cloud-on-k8s/k8s-users-and-roles.md) -* [/raw-migrated-files/cloud-on-k8s/cloud-on-k8s/k8s-rotate-credentials.md](/raw-migrated-files/cloud-on-k8s/cloud-on-k8s/k8s-rotate-credentials.md) -* [/raw-migrated-files/elasticsearch/elasticsearch-reference/change-passwords-native-users.md](/raw-migrated-files/elasticsearch/elasticsearch-reference/change-passwords-native-users.md) \ No newline at end of file diff --git a/deploy-manage/users-roles/cluster-or-deployment-auth/configure-operator-privileges.md b/deploy-manage/users-roles/cluster-or-deployment-auth/configure-operator-privileges.md index 6e6549b16..a17dd28ae 100644 --- a/deploy-manage/users-roles/cluster-or-deployment-auth/configure-operator-privileges.md +++ b/deploy-manage/users-roles/cluster-or-deployment-auth/configure-operator-privileges.md @@ -1,15 +1,19 @@ --- mapped_pages: - https://www.elastic.co/guide/en/elasticsearch/reference/current/configure-operator-privileges.html +applies_to: + deployment: + ess: + ece: + eck: --- # Configure operator privileges [configure-operator-privileges] -::::{note} -{cloud-only} +::::{admonition} Indirect use only +This feature is designed for indirect use by {{ech}}, {{ece}}, and {{eck}}. Direct use is not supported. :::: - Before you can use operator privileges, you must [enable the feature](#enable-operator-privileges) on all nodes in the cluster and [designate operator users](#designate-operator-users). 
## Enable operator privileges [enable-operator-privileges] diff --git a/deploy-manage/users-roles/cluster-or-deployment-auth/controlling-user-cache.md b/deploy-manage/users-roles/cluster-or-deployment-auth/controlling-user-cache.md index 11af1c209..37955b46e 100644 --- a/deploy-manage/users-roles/cluster-or-deployment-auth/controlling-user-cache.md +++ b/deploy-manage/users-roles/cluster-or-deployment-auth/controlling-user-cache.md @@ -1,6 +1,12 @@ --- mapped_pages: - https://www.elastic.co/guide/en/elasticsearch/reference/current/controlling-user-cache.html +applies_to: + deployment: + ess: + ece: + eck: + self: --- # Controlling the user cache [controlling-user-cache] diff --git a/deploy-manage/users-roles/cluster-or-deployment-auth/custom.md b/deploy-manage/users-roles/cluster-or-deployment-auth/custom.md index 6b9387e12..71bcfefa5 100644 --- a/deploy-manage/users-roles/cluster-or-deployment-auth/custom.md +++ b/deploy-manage/users-roles/cluster-or-deployment-auth/custom.md @@ -1,6 +1,12 @@ --- mapped_pages: - https://www.elastic.co/guide/en/elasticsearch/reference/current/custom-realms.html +applies_to: + deployment: + ece: + ess: + eck: + self: --- # Custom realms @@ -9,9 +15,9 @@ If you are using an authentication system that is not supported out-of-the-box b ## Implementing a custom realm [implementing-custom-realm] -Sample code that illustrates the structure and implementation of a custom realm is provided in [https://github.com/elastic/elasticsearch/tree/master/x-pack/qa/security-example-spi-extension](https://github.com/elastic/elasticsearch/tree/master/x-pack/qa/security-example-spi-extension). You can use this code as a starting point for creating your own realm. +Sample code that illustrates the structure and implementation of a custom realm is provided [in the `elasticsearch` repository](https://github.com/elastic/elasticsearch/tree/master/x-pack/qa/security-example-spi-extension) on GitHub. You can use this code as a starting point for creating your own realm. -To create a custom realm, you need to: +To create a custom realm, you need to do the following: 1. Extend `org.elasticsearch.xpack.security.authc.Realm` to communicate with your authentication system to authenticate users. 2. Implement the `org.elasticsearch.xpack.security.authc.Realm.Factory` interface in a class that will be used to create the custom realm. @@ -57,11 +63,17 @@ To package your custom realm as a plugin: To use a custom realm: -1. Install the realm extension on each node in the cluster. You run `bin/elasticsearch-plugin` with the `install` sub-command and specify the URL pointing to the zip file that contains the extension. For example: +1. Install the realm extension on each node in the cluster. + + * If you're using a self-managed cluster, then run `bin/elasticsearch-plugin` with the `install` sub-command and specify the URL pointing to the zip file that contains the extension. For example: + + ```shell + bin/elasticsearch-plugin install file:////my-realm-1.0.zip + ``` + * If you're using {{ech}}, then refer to [](/deploy-manage/deploy/elastic-cloud/upload-custom-plugins-bundles.md). + * If you're using {{ece}}, then refer to [](/deploy-manage/deploy/cloud-enterprise/add-custom-bundles-plugins.md). + * If you're using {{eck}}, then refer to [](/deploy-manage/deploy/cloud-on-k8s/custom-configuration-files-plugins.md). - ```shell - bin/elasticsearch-plugin install file:////my-realm-1.0.zip - ``` 2. 
Add a realm configuration of the appropriate realm type to `elasticsearch.yml` under the `xpack.security.authc.realms` namespace. You must define your realm within the namespace that matches the type defined by the extension. The options you can set depend on the settings exposed by the custom realm. At a minimum, you must explicitly set the `order` attribute to control the order in which the realms are consulted during authentication. You must also make sure each configured realm has a distinct `order` setting. In the event that two or more realms have the same `order`, the node will fail to start.

@@ -69,6 +81,6 @@ To use a custom realm:
    When you configure realms in `elasticsearch.yml`, only the realms you specify are used for authentication. If you also want to use the `native` or `file` realms, you must include them in the realm chain.
    ::::

-3. Restart Elasticsearch.
+3. Restart {{es}}.

diff --git a/deploy-manage/users-roles/cluster-or-deployment-auth/external-authentication.md b/deploy-manage/users-roles/cluster-or-deployment-auth/external-authentication.md
index 9202a19c4..4962efa7b 100644
--- a/deploy-manage/users-roles/cluster-or-deployment-auth/external-authentication.md
+++ b/deploy-manage/users-roles/cluster-or-deployment-auth/external-authentication.md
@@ -1,5 +1,29 @@
+---
+applies_to:
+  deployment:
+    ess: all
+    ece: all
+    eck: all
+    self: all
+---
+
 # External authentication

-% What needs to be done: Write from scratch
+External authentication in Elastic is any form of authentication that requires interaction with parties and components external to {{es}}, typically with enterprise-grade identity management systems.
+
+Elastic offers several external [realm](authentication-realms.md) types, each of which represents a common authentication provider. You can have as many external realms as you would like, each with its own unique name and configuration.
+
+If the authentication provider that you want to use is not currently supported, then you can create your own [custom realm plugin](custom.md) to integrate with additional systems.
+
+In this section, you'll learn how to configure different types of external realms, and use them to grant access to Elastic resources.
+
+:::{tip}
+For many external realms, you need to perform extra steps to use the realm to log in to {{kib}}. [Learn more](/deploy-manage/users-roles/cluster-or-deployment-auth/kibana-authentication.md).
+:::
+
+## Available external realms
+
+{{es}} provides the following built-in external realms:
-⚠️ **This page is a work in progress.** ⚠️
\ No newline at end of file
+:::{include} ../_snippets/external-realms.md
+:::
\ No newline at end of file
diff --git a/deploy-manage/users-roles/cluster-or-deployment-auth/file-based.md b/deploy-manage/users-roles/cluster-or-deployment-auth/file-based.md
index b0940a193..63b7a747b 100644
--- a/deploy-manage/users-roles/cluster-or-deployment-auth/file-based.md
+++ b/deploy-manage/users-roles/cluster-or-deployment-auth/file-based.md
@@ -2,23 +2,231 @@ mapped_urls:
   - https://www.elastic.co/guide/en/elasticsearch/reference/current/file-realm.html
   - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-users-and-roles.html
+applies_to:
+  deployment:
+    self: all
+    eck: all
+navigation_title: "File-based"
 ---

-# File-based
+# File-based user authentication [file-realm]

-% What needs to be done: Refine
+You can manage and authenticate users with the built-in `file` realm. With the `file` realm, users are defined in local files on each node in the cluster.
-% GitHub issue: https://github.com/elastic/docs-projects/issues/347
+The `file` realm is useful as a fallback or recovery realm, for example in cases where the cluster is unresponsive or the security index is unavailable, or when you forget the password for your administrative users. In this type of scenario, the `file` realm is a convenient workaround: you can define a new `admin` user in the `file` realm and use it to log in and reset the credentials of all other users.
-% Use migrated content from existing pages that map to this page:
+You can configure only one file realm on {{es}} nodes.
-% - [ ] ./raw-migrated-files/elasticsearch/elasticsearch-reference/file-realm.md
-% - [ ] ./raw-migrated-files/cloud-on-k8s/cloud-on-k8s/k8s-users-and-roles.md
-% Notes: file realm content
+::::{important}
+* In self-managed deployments, as the administrator of the cluster, it is your responsibility to ensure the same users are defined on every node in the cluster. The {{stack}} {{security-features}} do not deliver any mechanism to guarantee this.
+* You can't add or manage users in the `file` realm using the [user APIs](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-security), or using the {{kib}} **Management > Security > Users** page.
+::::
-$$$file-realm-configuration$$$
+## Configure a file realm [file-realm-configuration]
-**This page is a work in progress.** The documentation team is working to combine content pulled from the following pages:
+You don’t need to explicitly configure a `file` realm. The `file` and `native` realms are added to the realm chain by default. Unless configured otherwise, the `file` realm is added first, followed by the `native` realm. You can define only one `file` realm per node.
-* [/raw-migrated-files/elasticsearch/elasticsearch-reference/file-realm.md](/raw-migrated-files/elasticsearch/elasticsearch-reference/file-realm.md)
-* [/raw-migrated-files/cloud-on-k8s/cloud-on-k8s/k8s-users-and-roles.md](/raw-migrated-files/cloud-on-k8s/cloud-on-k8s/k8s-users-and-roles.md)
\ No newline at end of file
+1. (Optional) Add a realm configuration to `elasticsearch.yml` under the `xpack.security.authc.realms.file` namespace. At a minimum, you must set the realm’s `order` attribute.
+
+    For example, the following snippet shows a `file` realm configuration that sets the `order` to zero so the realm is checked first:
+
+    ```yaml
+    xpack:
+      security:
+        authc:
+          realms:
+            file:
+              file1:
+                order: 0
+    ```
+
+2. If you're using a self-managed {{es}} cluster, optionally change how often the `users` and `users_roles` files are checked.
+
+    By default, {{es}} checks these files for changes every 5 seconds. You can change this default behavior by changing the `resource.reload.interval.high` setting in the `elasticsearch.yml` file.
+
+    :::{warning}
+    Because `resource.reload.interval.high` is a common setting in {{es}}, changing its value may affect other schedules in the system.
+    :::
+
+3. Restart {{es}}.
+
+    In {{eck}}, this change is propagated automatically.
+
+
+## Add users
+
+**In a self-managed {{es}} cluster**, all the data about the users for the `file` realm is stored in two files on each node in the cluster: [`users` and `users_roles`](#using-users-and-users_roles-files). Both files are located in `ES_PATH_CONF` and are read on startup.
+
+**In an {{eck}} deployment**, you can pass file realm user information in two ways:
+
+1. Using [`users` and `users_roles`](#using-users-and-users_roles-files) files, which are passed using file realm content secrets
+2. 
[Using Kubernetes basic authentication secrets](#k8s-basic) + +You can reference several secrets in the {{es}} specification. ECK aggregates their content into a single secret, mounted in every {{es}} Pod. + +::::{important} +In a self-managed cluster, the `users` and `users_roles` files are managed locally by the node and are **not** managed globally by the cluster. This means that with a typical multi-node cluster, the exact same changes need to be applied on each and every node in the cluster. + +A safer approach would be to apply the change on one of the nodes and have the files distributed or copied to all other nodes in the cluster (either manually or using a configuration management system such as Puppet or Chef). +:::: + +### Using `users` and `users_roles` files + +`users` and `users_roles` files contain all of the information about users in the file realm. + +#### `users` + +The `users` file stores all the users and their passwords. Each line in the file represents a single user entry consisting of the username and hashed and salted password. + +``` +rdeniro:$2a$10$BBJ/ILiyJ1eBTYoRKxkqbuDEdYECplvxnqQ47uiowE7yGqvCEgj9W +alpacino:$2a$10$cNwHnElYiMYZ/T3K4PvzGeJ1KbpXZp2PfoQD.gfaVdImnHOwIuBKS +jacknich:{PBKDF2}50000$z1CLJt0MEFjkIK5iEfgvfnA6xq7lF25uasspsTKSo5Q=$XxCVLbaKDimOdyWgLCLJiyoiWpA/XDMe/xtVgn1r5Sg= +``` + +:::{tip} +To limit exposure to credential theft and mitigate credential compromise, the file realm stores passwords and caches user credentials according to security best practices. By default, a hashed version of user credentials is stored in memory, using a salted sha-256 hash algorithm and a hashed version of passwords is stored on disk salted and hashed with the bcrypt hash algorithm. To use different hash algorithms, see [User cache and password hash algorithms](https://www.elastic.co/guide/en/elasticsearch/reference/current/security-settings.html#hashing-settings). +::: + +#### `users_roles` + +The `users_roles` file stores the roles associated with the users. For example: + +``` +admin:rdeniro +power_user:alpacino,jacknich +user:jacknich +``` + +Each row maps a role to a comma-separated list of all the users that are associated with that role. + +#### Editing `users` and `users_roles` files + +You can edit files and secrets that contain `users` and `users_roles` manually, or you can edit them using a tool. + +**Manually** + +::::{tab-set} + +:::{tab-item} Self-managed +In a self-managed cluster, you can edit the contents of `ES_PATH_CONF/users` and `ES_PATH_CONF/users_roles` directly. +::: + +:::{tab-item} {{eck}} +You can pass `users` and `user_roles` files to {{eck}} using a file realm secret: + +```yaml +apiVersion: elasticsearch.k8s.elastic.co/v1 +kind: Elasticsearch +metadata: + name: elasticsearch-sample +spec: + version: 8.16.1 + auth: + fileRealm: + - secretName: my-filerealm-secret-1 + - secretName: my-filerealm-secret-2 + nodeSets: + - name: default + count: 1 +``` + +A file realm secret is composed of two entries: a `users` entry and a `users_roles` entry. You can provide either one entry or both entries in each secret. + +If you specify multiple users with the same name in more than one secret, the last one takes precedence. If you specify multiple roles with the same name in more than one secret, a single entry per role is derived from the concatenation of its corresponding users from all secrets. 
+ +The following secret specifies three users and their respective roles: + +```yaml +kind: Secret +apiVersion: v1 +metadata: + name: my-filerealm-secret +stringData: + users: |- + rdeniro:$2a$10$BBJ/ILiyJ1eBTYoRKxkqbuDEdYECplvxnqQ47uiowE7yGqvCEgj9W + alpacino:$2a$10$cNwHnElYiMYZ/T3K4PvzGeJ1KbpXZp2PfoQD.gfaVdImnHOwIuBKS + jacknich:{PBKDF2}50000$z1CLJt0MEFjkIK5iEfgvfnA6xq7lF25uasspsTKSo5Q=$XxCVLbaKDimOdyWgLCLJiyoiWpA/XDMe/xtVgn1r5Sg= + users_roles: |- + admin:rdeniro + power_user:alpacino,jacknich + user:jacknich +``` +::: + +:::: + +**Using a tool** + +To avoid editing these files manually, you can use the [elasticsearch-users](https://www.elastic.co/guide/en/elasticsearch/reference/current/users-command.html) tool: + +::::{tab-set} + +:::{tab-item} Self-managed + +``` +bin/elasticsearch-users useradd myuser -p mypassword -r monitoring_user +``` +::: + +:::{tab-item} {{eck}} +The following is an example of invoking the `elasticsearch-users` tool in a Docker container: + +```sh +# create a folder with the 2 files +mkdir filerealm +touch filerealm/users filerealm/users_roles + +# create user 'myuser' with role 'monitoring_user' +docker run \ + -v $(pwd)/filerealm:/usr/share/elasticsearch/config \ + docker.elastic.co/elasticsearch/elasticsearch:8.16.1 \ + bin/elasticsearch-users useradd myuser -p mypassword -r monitoring_user + +# create a Kubernetes secret with the file realm content +kubectl create secret generic my-file-realm-secret --from-file filerealm +``` +::: + +:::: + +### Using {{k8s}} basic authentication secrets [k8s-basic] +```{applies_to} +eck: all +``` +You can also add file-based authentication users using [Kubernetes basic authentication secrets](https://kubernetes.io/docs/concepts/configuration/secret/#basic-authentication-secret). + +A basic authentication secret can optionally contain a [`roles`](#users_roles) entry. It must contain a comma separated list of roles to be associated with the user. The following example illustrates this combination: + +```yaml +apiVersion: v1 +kind: Secret +metadata: + name: secret-basic-auth +type: kubernetes.io/basic-auth +stringData: + username: rdeniro # required field for kubernetes.io/basic-auth + password: mypassword # required field for kubernetes.io/basic-auth + roles: kibana_admin,ingest_admin # optional, not part of kubernetes.io/basic-auth +``` + +You can make this file available to {{eck}} by adding it as a file realm secret: + +```yaml +apiVersion: elasticsearch.k8s.elastic.co/v1 +kind: Elasticsearch +metadata: + name: elasticsearch-sample +spec: + version: 8.16.1 + auth: + fileRealm: + - secretName: secret-basic-auth + nodeSets: + - name: default + count: 1 +``` + +::::{note} +If you specify the password for the `elastic` user through a basic authentication secret, then the secret holding the password described in [](/deploy-manage/users-roles/cluster-or-deployment-auth/built-in-users.md) will not be created by the operator. 
+:::: diff --git a/deploy-manage/users-roles/cluster-or-deployment-auth/internal-authentication.md b/deploy-manage/users-roles/cluster-or-deployment-auth/internal-authentication.md index 5e442a92f..16b487baf 100644 --- a/deploy-manage/users-roles/cluster-or-deployment-auth/internal-authentication.md +++ b/deploy-manage/users-roles/cluster-or-deployment-auth/internal-authentication.md @@ -1,5 +1,23 @@ +--- +applies_to: + deployment: + ess: all + ece: all + eck: all + self: all +--- + # Internal authentication -% What needs to be done: Write from scratch +Internal authentication methods are fully managed by {{es}}, and don't require any communication with external parties. + +{{es}} offers two internal authentication [realms](authentication-realms.md), both of which are enabled by default. There can only be a maximum of one configured realm per internal realm type. + +In this section, you'll learn how to configure internal realms, and manage users that authenticate using internal realms. + +## Available internal realms + +{{es}} provides two internal realm types: -⚠️ **This page is a work in progress.** ⚠️ \ No newline at end of file +:::{include} ../_snippets/internal-realms.md +::: diff --git a/deploy-manage/users-roles/cluster-or-deployment-auth/internal-users.md b/deploy-manage/users-roles/cluster-or-deployment-auth/internal-users.md index 42f2a3785..36f2d2ee7 100644 --- a/deploy-manage/users-roles/cluster-or-deployment-auth/internal-users.md +++ b/deploy-manage/users-roles/cluster-or-deployment-auth/internal-users.md @@ -1,6 +1,12 @@ --- mapped_pages: - https://www.elastic.co/guide/en/elasticsearch/reference/current/internal-users.html +applies_to: + deployment: + ess: + ece: + eck: + self: --- # Internal users [internal-users] diff --git a/deploy-manage/users-roles/cluster-or-deployment-auth/jwt.md b/deploy-manage/users-roles/cluster-or-deployment-auth/jwt.md index 4fe227cad..0c2ebd3e3 100644 --- a/deploy-manage/users-roles/cluster-or-deployment-auth/jwt.md +++ b/deploy-manage/users-roles/cluster-or-deployment-auth/jwt.md @@ -4,28 +4,544 @@ mapped_urls: - https://www.elastic.co/guide/en/cloud-enterprise/current/ece-securing-clusters-JWT.html - https://www.elastic.co/guide/en/cloud-heroku/current/ech-securing-clusters-JWT.html - https://www.elastic.co/guide/en/elasticsearch/reference/current/jwt-auth-realm.html +applies_to: + deployment: + self: + ess: + ece: + eck: +navigation_title: "JWT" --- -# JWT +# JWT authentication [jwt-auth-realm] -% What needs to be done: Refine +{{es}} can be configured to trust JSON Web Tokens (JWTs) issued from an external service as bearer tokens for authentication. -% GitHub issue: https://github.com/elastic/docs-projects/issues/347 +When a JWT realm is used to authenticate with {{es}}, a distinction is made between the client that is connecting to {{es}}, and the user on whose behalf the request should run. The JWT authenticates the user, and a separate credential authenticates the client. -% Use migrated content from existing pages that map to this page: +The JWT realm supports two token types, `id_token` (the default) and `access_token`: -% - [ ] ./raw-migrated-files/cloud/cloud/ec-securing-clusters-JWT.md -% - [ ] ./raw-migrated-files/cloud/cloud-enterprise/ece-securing-clusters-JWT.md -% - [ ] ./raw-migrated-files/cloud/cloud-heroku/ech-securing-clusters-JWT.md -% - [ ] ./raw-migrated-files/elasticsearch/elasticsearch-reference/jwt-auth-realm.md +1. `id_token`: An application authenticates and identifies a user with an authentication flow, e.g. 
OpenID Connect (OIDC), and then accesses {{es}} on behalf of the authenticated user using a JSON Web Token (JWT) conforming to OIDC ID Token specification. This option is available in deployments using {{stack}} 8.2+. +2. `access_token`: An application accesses {{es}} using its own identity, encoded as a JWT, e.g. The application authenticates itself to a central identity platform using an OAuth2 Client Credentials Flow and then uses the resulting JWT-based access token to connect to {{es}}. This option is available in deployments using {{stack}} 8.7+. -% Internal links rely on the following IDs being on this page (e.g. as a heading ID, paragraph ID, etc): +::::{note} +A single JWT realm can only work with a single token type. To handle both token types, you must configure at least two JWT realms. You should choose the token type carefully based on the use case because it impacts on how validations are performed. +:::: -$$$jwt-realm-runas$$$ +The JWT realm validates the incoming JWT based on its configured token type. JSON Web Tokens (JWT) of both types must contain the following 5 pieces of information. While ID Tokens, based on the OIDC specification, have strict rules for what claims should provide these information, access tokens allow some claims to be configurable. -**This page is a work in progress.** The documentation team is working to combine content pulled from the following pages: +**Claims** -* [/raw-migrated-files/cloud/cloud/ec-securing-clusters-JWT.md](/raw-migrated-files/cloud/cloud/ec-securing-clusters-JWT.md) -* [/raw-migrated-files/cloud/cloud-enterprise/ece-securing-clusters-JWT.md](/raw-migrated-files/cloud/cloud-enterprise/ece-securing-clusters-JWT.md) -* [/raw-migrated-files/cloud/cloud-heroku/ech-securing-clusters-JWT.md](/raw-migrated-files/cloud/cloud-heroku/ech-securing-clusters-JWT.md) -* [/raw-migrated-files/elasticsearch/elasticsearch-reference/jwt-auth-realm.md](/raw-migrated-files/elasticsearch/elasticsearch-reference/jwt-auth-realm.md) \ No newline at end of file +| Information | ID Token | Access Token | +| --- | --- | --- | +| Issuer | `iss` | `iss` | +| Subject | `sub` | Defaults to `sub`, but can fall back to another claim if `sub` does not exist | +| Audiences | `aud` | Defaults to `aud`, but can fall back to another claim if `aud` does not exist | +| Issue Time | `iat` | `iat` | +| Expiration Time | `exp` | `exp` | + +In addition, {{es}} also validates `nbf` and `auth_time` claims for ID Tokens if these claims are present. But these claims are ignored for access tokens. + +Overall, the access token type has more relaxed validation rules and is suitable for more generic JWTs, including self-signed ones. + +## ID Tokens from OIDC workflows [jwt-realm-oidc] + +JWT authentication in {{es}} is derived from OIDC user workflows, where different tokens can be issued by an OIDC Provider (OP), including ID Tokens. ID Tokens from an OIDC provider are well-defined JSON Web Tokens (JWT) and should be always compatible with a JWT realm of the `id_token` token type. The subject claim of an ID token represents the end-user. This means that ID tokens will generally have many allowed subjects. Therefore, a JWT realm of `id_token` token type does *not* mandate the `allowed_subjects` (or `allowed_subject_patterns`) validation. + +::::{note} +Because JWTs are obtained external to {{es}}, you can define a custom workflow instead of using the OIDC workflow. However, the JWT format must still be JSON Web Signature (JWS). 
The JWS header and JWS signature are validated using OIDC ID token validation rules. +:::: + + +{{es}} supports a separate [OpenID Connect realm](/deploy-manage/users-roles/cluster-or-deployment-auth/openid-connect.md). It is preferred for any use case where {{es}} can act as an OIDC RP. The OIDC realm is the only supported way to enable OIDC authentication in {{kib}}. + +::::{tip} +Users authenticating with a JWT realm can optionally impersonate another user with the [`run_as`](/deploy-manage/users-roles/cluster-or-deployment-auth/submitting-requests-on-behalf-of-other-users.md) feature. See [Applying the `run_as` privilege to JWT realm users](/deploy-manage/users-roles/cluster-or-deployment-auth/jwt.md#jwt-realm-runas). +:::: + + +## Access tokens [jwt-realm-oauth2] + +A common method to obtain access tokens is with the OAuth2 client credentials flow. A typical usage of this flow is for an application to get a credential for itself. This is the use case that the `access_token` token type is designed for. It is likely that this application also obtains ID Tokens for its end-users. To prevent end-user ID Tokens being used to authenticate with the JWT realm configured for the application, we mandate `allowed_subjects` or `allowed_subject_patterns` validation when a JWT realm has token type `access_token`. + +::::{note} +Not every access token is formatted as a JSON Web Token (JWT). For it to be compatible with the JWT realm, it must at least use the JWT format and satisfies relevant requirements in the above table. +:::: + +## Configure {{es}} to use a JWT realm [jwt-realm-configuration] + +To use JWT authentication, create the realm in the `elasticsearch.yml` file to configure it within the {{es}} authentication chain. + +The JWT realm has a few mandatory settings, plus optional settings that are described in [JWT realm settings](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/configuration-reference/security-settings.md#ref-jwt-settings). + +::::{note} +Client authentication is enabled by default for the JWT realms. Disabling client authentication is possible, but strongly discouraged. +:::: + +1. Configure the realm using your preferred token type: + + :::::{tab-set} + + ::::{tab-item} ID tokens + + The following example includes the most common settings, which are not intended for every use case: + + ```yaml + xpack.security.authc.realms.jwt.jwt1: + order: 3 + token_type: id_token + client_authentication.type: shared_secret + allowed_issuer: "https://issuer.example.com/jwt/" + allowed_audiences: [ "8fb85eba-979c-496c-8ae2-a57fde3f12d0" ] + allowed_signature_algorithms: [RS256,HS256] + pkc_jwkset_path: jwt/jwkset.json + claims.principal: sub + ``` + + `order` + : Specifies a realm `order` of `3`, which indicates the order in which the configured realm is checked when authenticating a user. Realms are consulted in ascending order, where the realm with the lowest order value is consulted first. + + `token_type` + : Instructs the realm to treat and validate incoming JWTs as ID Tokens (`id_token`). + + `client_authentication.type` + : Specifies the client authentication type as `shared_secret`, which means that the client is authenticated using an HTTP request header that must match a pre-configured secret value. The client must provide this shared secret with every request in the `ES-Client-Authentication` header and using the `SharedSecret` scheme. The header value must be a case-sensitive match to the realm’s `client_authentication.shared_secret`. 
+ + `allowed_issuer` + : Sets a verifiable identifier for your JWT issuer. This value is typically a URL, UUID, or some other case-sensitive string value. + + `allowed_audiences` + : Specifies a list of JWT audiences that the realm will allow. These values are typically URLs, UUIDs, or other case-sensitive string values. + + `allowed_signature_algorithms` + : Indicates that {{es}} should use the `RS256` or `HS256` signature algorithms to verify the signature of the JWT from the JWT issuer. + + `pkc_jwkset_path` + : The file name or URL to a JSON Web Key Set (JWKS) with the public key material that the JWT Realm uses for verifying token signatures. A value is considered a file name if it does not begin with `https`. The file name is resolved relative to the {{es}} configuration directory. If a URL is provided, then it must begin with `https://` (`http://` is not supported). {{es}} automatically caches the JWK set and will attempt to refresh the JWK set upon signature verification failure, as this might indicate that the JWT Provider has rotated the signing keys. + + `claims.principal` + : The name of the JWT claim that contains the user’s principal (username). + + :::: + + ::::{tab-item} Access tokens + The following is an example snippet for configuring a JWT realm for handling access tokens: + + ```yaml + xpack.security.authc.realms.jwt.jwt2: + order: 4 + token_type: access_token + client_authentication.type: shared_secret + allowed_issuer: "https://issuer.example.com/jwt/" + allowed_subjects: [ "123456-compute@admin.example.com" ] + allowed_subject_patterns: [ "wild*@developer?.example.com", "/[a-z]+<1-10>\\@dev\\.example\\.com/"] + allowed_audiences: [ "elasticsearch" ] + required_claims: + token_use: access + version: ["1.0", "2.0"] + allowed_signature_algorithms: [RS256,HS256] + pkc_jwkset_path: "https://idp-42.example.com/.well-known/configuration" + fallback_claims.sub: client_id + fallback_claims.aud: scope + claims.principal: sub + ``` + + `token_type` + : Instructs the realm to treat and validate incoming JWTs as access tokens (`access_token`). + + `allowed_subjects` + : Specifies a list of JWT subjects that the realm will allow. These values are typically URLs, UUIDs, or other case-sensitive string values. + + `allowed_subject_patterns` + : Analogous to `allowed_subjects` but it accepts a list of [Lucene regexp](asciidocalypse://docs/elasticsearch/docs/reference/query-languages/regexp-syntax.md) and wildcards for the allowed JWT subjects. Wildcards use the `*` and `?` special characters (which are escaped by `\`) to mean "any string" and "any single character" respectively, for example "a?\**", matches "a1*" and "ab*whatever", but not "a", "abc", or "abc*" (in Java strings `\` must itself be escaped by another `\`). [Lucene regexp](asciidocalypse://docs/elasticsearch/docs/reference/query-languages/regexp-syntax.md) must be enclosed between `/`, for example "/https?://[^/]+/?/" matches any http or https URL with no path component (matches "https://elastic.co/" but not "https://elastic.co/guide"). + + At least one of the `allowed_subjects` or `allowed_subject_patterns` settings must be specified (and be non-empty) when `token_type` is `access_token`. + + When both `allowed_subjects` and `allowed_subject_patterns` settings are specified an incoming JWT’s `sub` claim is accepted if it matches any of the two lists. + + `required_claims` + : Specifies a list of key/value pairs for additional verifications to be performed against a JWT. 
The values are either a string or an array of strings. + + `fallback_claims.sub` + : The name of the JWT claim to extract the subject information if the `sub` claim does not exist. This setting is only available when `token_type` is `access_token`. The fallback is applied everywhere the `sub` claim is used. In the above snippet, it means the `claims.principal` will also fallback to `client_id` if `sub` does not exist. + + `fallback_claims.aud` + : The name of the JWT claim to extract the audiences information if the `aud` claim does not exist. This setting is only available when `token_type` is `access_token`. The fallback is applied everywhere the `aud` claim is used. + :::: + ::::: + +2. Add secure settings [to the {{es}} keystore](/deploy-manage/security/secure-settings.md): + + * The `shared_secret` value for `client_authentication.type` + + (`xpack.security.authc.realms.jwt.jwt1.client_authentication.shared_secret1`) + * The HMAC keys for `allowed_signature_algorithms` + + (`xpack.security.authc.realms.jwt.jwt1.hmac_jwkset`) + + This setting can be a path to a JWKS, which is a resource for a set of JSON-encoded secret keys. The file can be removed after you load the contents into the {{es}} keystore. + + + :::{note} + Using the JWKS is preferred. However, you can add an HMAC key in string format using `xpack.security.authc.realms.jwt.jwt1.hmac_key`. This format is compatible with HMAC UTF-8 keys, but only supports a single key with no attributes. You can only use one HMAC format (either `hmac_jwkset` or `hmac_key`) at a time. + ::: + + +## JWT encoding and validation [jwt-validation] + +JWTs can be parsed into three pieces: + +Header +: Provides information about how to validate the token. + +Claims +: Contains data about the calling user or application. + +Signature +: The data that’s used to validate the token. + +```js +Header: {"typ":"JWT","alg":"HS256"} +Claims: {"aud":"aud8","sub":"security_test_user","iss":"iss8","exp":4070908800,"iat":946684800} +Signature: UnnFmsoFKfNmKMsVoDQmKI_3-j95PCaKdgqqau3jPMY +``` + +This example illustrates a partial decoding of a JWT. The validity period is from 2000 to 2099 (inclusive), as defined by the issue time (`iat`) and expiration time (`exp`). JWTs typically have a validity period shorter than 100 years, such as 1-2 hours or 1-7 days, not an entire human life. + +The signature in this example is deterministic because the header, claims, and HMAC key are fixed. JWTs typically have a `nonce` claim to make the signature non-deterministic. The supported JWT encoding is JSON Web Signature (JWS), and the JWS `Header` and `Signature` are validated using OpenID Connect ID Token validation rules. Some validation is customizable through [JWT realm settings](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/configuration-reference/security-settings.md#ref-jwt-settings). + +### Header claims [jwt-validation-header] + +The header claims indicate the token type and the algorithm used to sign the token. + +`alg` +: (Required, String) Indicates the algorithm that was used to sign the token, such as `HS256`. The algorithm must be in the realm’s allow list. + +`typ` +: (Optional, String) Indicates the token type, which must be `JWT`. + + +### Payload claims [jwt-validation-payload] + +Tokens contain several claims, which provide information about the user who is issuing the token, and the token itself. Depending on the token type, these information can optionally be identified by different claims. 
+ +#### JWT payload claims [_jwt_payload_claims] + +The following claims are validated by a subset of OIDC ID token rules. + +{{es}} doesn’t validate `nonce` claims, but a custom JWT issuer can add a random `nonce` claim to introduce entropy into the signature. + +::::{note} +You can relax validation of any of the time-based claims by setting `allowed_clock_skew`. This value sets the maximum allowed clock skew before validating JWTs with respect to their authentication time (`auth_time`), creation (`iat`), not before (`nbf`), and expiration times (`exp`). +:::: + + +`iss` +: (Required, String) Denotes the issuer that created the ID token. The value must be an exact, case-sensitive match to the value in the `allowed_issuer` setting. + +`sub` +: (Required*, String) Indicates the subject that the ID token is created for. If the JWT realm is of the `id_token` type, this claim is mandatory. A JWT realm of the `id_token` type by defaults accepts all subjects. A JWT realm of the access_token type must specify the `allowed_subjects` setting and the subject value must be an exact, case-sensitive match to any of the CSV values in the allowed_subjects setting. A JWT realm of the access_token type can specify a fallback claim that will be used in place where the `sub` claim does not exist. + +`aud` +: (Required*, String) Indicates the audiences that the ID token is for, expressed as a comma-separated value (CSV). One of the values must be an exact, case-sensitive match to any of the CSV values in the `allowed_audiences` setting. If the JWT realm is of the `id_token` type, this claim is mandatory. A JWT realm of the `access_token` type can specify a fallback claim that will be used in place where the `aud` claim does not exist. + +`exp` +: (Required, integer) Expiration time for the ID token, expressed in UTC seconds since epoch. + +`iat` +: (Required, integer) Time that the ID token was issued, expressed in UTC seconds since epoch. + +`nbf` +: (Optional, integer) Indicates the time before which the JWT must not be accepted, expressed as UTC seconds since epoch. This claim is optional. If it exists, a JWT realm of `id_token` type will verify it, while a JWT realm of `access_token` will just ignore it. + +`auth_time` +: (Optional, integer) Time when the user authenticated to the JWT issuer, expressed as UTC seconds since epoch. This claim is optional. If it exists, a JWT realm of `id_token` type will verify it, while a JWT realm of `access_token` will just ignore it. + + +#### {{es}} settings for consuming JWT claims [jwt-validation-payload-es] + +{{es}} uses JWT claims for the following settings. + +`principal` +: (Required, String) Contains the user’s principal (username). The value is configurable using the realm setting `claims.principal`. You can configure an optional regular expression using the `claim_patterns.principal` to extract a substring. + +`groups` +: (Optional, JSON array) Contains the user’s group membership. The value is configurable using the realm setting `claims.groups`. You can configure an optional regular expression using the realm setting `claim_patterns.groups` to extract a substring value. + +`name` +: (Optional, String) Contains a human-readable identifier that identifies the subject of the token. The value is configurable using the realm setting `claims.name`. You can configure an optional regular expression using the realm setting `claim_patterns.name` to extract a substring value. + +`mail` +: (Optional, String) Contains the e-mail address to associate with the user. 
The value is configurable using the realm setting `claims.mail`. You can configure an optional regular expression using the realm setting `claim_patterns.mail` to extract a substring value.

`dn`
: (Optional, String) Contains the user’s Distinguished Name (DN), which uniquely identifies a user or group. The value is configurable using the realm setting `claims.dn`. You can configure an optional regular expression using the realm setting `claim_patterns.dn` to extract a substring value.


## Role mapping [jwt-authorization]

You can map JWT users to roles in the following ways:

* Using the role mappings page in {{kib}}.
* Using the [role mapping API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-put-role-mapping).
* By delegating authorization [to another realm](#jwt-authorization-delegation).

For more information, see [Mapping users and groups to roles](/deploy-manage/users-roles/cluster-or-deployment-auth/mapping-users-groups-to-roles.md).

::::{important}
You can't map roles in the JWT realm using the `role_mapping.yml` file.
::::

### Authorizing with the role mapping API [jwt-authorization-role-mapping]

You can use the [create or update role mappings API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-put-role-mapping) to define role mappings that determine which roles should be assigned to each user based on their username, groups, or other metadata.

```console
PUT /_security/role_mapping/jwt1_users?refresh=true <1>
{
  "roles" : [ "user" ], <2>
  "rules" : { "all" : [ <3>
    { "field": { "realm.name": "jwt1" } }, <4>
    { "field": { "username": "principalname1" } },
    { "field": { "dn": "CN=Principal Name 1,DC=example.com" } },
    { "field": { "groups": "group1" } },
    { "field": { "metadata.jwt_claim_other": "other1" } }
  ] },
  "enabled": true
}
```

1. The mapping name.
2. The {{stack}} role to map to.
3. A rule specifying the JWT role to map from.
4. `realm.name` can be any string containing only alphanumeric characters, underscores, and hyphens.

If you use this API in the JWT realm, the following claims are available for role mapping:

`principal`
: (Required, String) Principal claim that is used as the {{es}} user’s username.

`dn`
: (Optional, String) Distinguished Name (DN) that is used as the {{es}} user’s DN.

`groups`
: (Optional, String) Comma-separated value (CSV) list that is used as the {{es}} user’s list of groups.

`metadata`
: (Optional, object) Additional metadata about the user, such as strings, integers, boolean values, and collections that are used as the {{es}} user’s metadata. These values are key-value pairs formatted as `metadata.jwt_claim_` = ``.


### Delegating JWT authorization to another realm [jwt-authorization-delegation]

If you [delegate authorization](../../../deploy-manage/users-roles/cluster-or-deployment-auth/realm-chains.md#authorization_realms) to other realms from the JWT realm, only the `principal` claim is available for role lookup. When delegating the assignment and lookup of roles to another realm from the JWT realm, claims for `dn`, `groups`, `mail`, `metadata`, and `name` are not used for the {{es}} user’s values. Only the JWT `principal` claim is passed to the delegated authorization realms. The realms that are delegated for authorization (not the JWT realm) become responsible for populating all of the {{es}} user’s values.
+ +The following example shows how you define delegation authorization in the `elasticsearch.yml` file to multiple other realms from the JWT realm. A JWT realm named `jwt2` is delegating authorization to multiple realms: + +```yaml +xpack.security.authc.realms.jwt.jwt2.authorization_realms: file1,native1,ldap1,ad1 +``` + +You can then use the [create or update role mappings API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-put-role-mapping) to map roles to the authorizing realm. The following example maps roles in the `native1` realm for the `principalname1` JWT principal. + +```console +PUT /_security/role_mapping/native1_users?refresh=true +{ + "roles" : [ "user" ], + "rules" : { "all" : [ + { "field": { "realm.name": "native1" } }, + { "field": { "username": "principalname1" } } + ] }, + "enabled": true +} +``` + +If realm `jwt2` successfully authenticates a client with a JWT for principal `principalname1`, and delegates authorization to one of the listed realms (such as `native1`), then that realm can look up the {{es}} user’s values. With this defined role mapping, the realm can also look up this role mapping rule linked to realm `native1`. + + +## Applying the `run_as` privilege to JWT realm users [jwt-realm-runas] + +{{es}} can retrieve roles for a JWT user through either role mapping or delegated authorization. Regardless of which option you choose, you can apply the [`run_as` privilege](../../../deploy-manage/users-roles/cluster-or-deployment-auth/submitting-requests-on-behalf-of-other-users.md#run-as-privilege-apply) to a role so that a user can submit authenticated requests to "run as" a different user. To submit requests as another user, include the `es-security-runas-user` header in your requests. Requests run as if they were issued from that user and {{es}} uses their roles. + +For example, let’s assume that there’s a user with the username `user123_runas`. The following request creates a user role named `jwt_role1`, which specifies a `run_as` user with the `user123_runas` username. Any user with the `jwt_role1` role can issue requests as the specified `run_as` user. + +```console +POST /_security/role/jwt_role1?refresh=true +{ + "cluster": ["manage"], + "indices": [ { "names": [ "*" ], "privileges": ["read"] } ], + "run_as": [ "user123_runas" ], + "metadata" : { "version" : 1 } +} +``` + +You can then map that role to a user in a specific realm. The following request maps the `jwt_role1` role to a user with the username `user2` in the `jwt2` JWT realm. This means that {{es}} will use the `jwt2` realm to authenticate the user named `user2`. Because `user2` has a role (the `jwt_role1` role) that includes the `run_as` privilege, {{es}} retrieves the role mappings for the `user123_runas` user and uses the roles for that user to submit requests. 
+ +```console +POST /_security/role_mapping/jwt_user1?refresh=true +{ + "roles": [ "jwt_role1"], + "rules" : { "all" : [ + { "field": { "realm.name": "jwt2" } }, + { "field": { "username": "user2" } } + ] }, + "enabled": true, + "metadata" : { "version" : 1 } +} +``` + +After mapping the roles, you can make an [authenticated call](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-authenticate) to {{es}} using a JWT and include the `ES-Client-Authentication` header: + +$$$jwt-auth-shared-secret-scheme-example$$$ + +```sh +curl -s -X GET -H "Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJhdWQiOlsiZXMwMSIsImVzMDIiLCJlczAzIl0sInN1YiI6InVzZXIyIiwiaXNzIjoibXktaXNzdWVyIiwiZXhwIjo0MDcwOTA4ODAwLCJpYXQiOjk0NjY4NDgwMCwiZW1haWwiOiJ1c2VyMkBzb21ldGhpbmcuZXhhbXBsZS5jb20ifQ.UgO_9w--EoRyUKcWM5xh9SimTfMzl1aVu6ZBsRWhxQA" -H "ES-Client-Authentication: sharedsecret test-secret" https://localhost:9200/_security/_authenticate +``` + +The response includes the user who submitted the request (`user2`), including the `jwt_role1` role that you mapped to this user in the JWT realm: + +```sh +{"username":"user2","roles":["jwt_role1"],"full_name":null,"email":"user2@something.example.com", +"metadata":{"jwt_claim_email":"user2@something.example.com","jwt_claim_aud":["es01","es02","es03"], +"jwt_claim_sub":"user2","jwt_claim_iss":"my-issuer"},"enabled":true,"authentication_realm": +{"name":"jwt2","type":"jwt"},"lookup_realm":{"name":"jwt2","type":"jwt"},"authentication_type":"realm"} +% +``` + +If you want to specify a request as the `run_as` user, include the `es-security-runas-user` header with the name of the user that you want to submit requests as. The following request uses the `user123_runas` user: + +```sh +curl -s -X GET -H "Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJhdWQiOlsiZXMwMSIsImVzMDIiLCJlczAzIl0sInN1YiI6InVzZXIyIiwiaXNzIjoibXktaXNzdWVyIiwiZXhwIjo0MDcwOTA4ODAwLCJpYXQiOjk0NjY4NDgwMCwiZW1haWwiOiJ1c2VyMkBzb21ldGhpbmcuZXhhbXBsZS5jb20ifQ.UgO_9w--EoRyUKcWM5xh9SimTfMzl1aVu6ZBsRWhxQA" -H "ES-Client-Authentication: sharedsecret test-secret" -H "es-security-runas-user: user123_runas" https://localhost:9200/_security/_authenticate +``` + +In the response, you’ll see that the `user123_runas` user submitted the request, and {{es}} used the `jwt_role1` role: + +```sh +{"username":"user123_runas","roles":["jwt_role1"],"full_name":null,"email":null,"metadata":{}, +"enabled":true,"authentication_realm":{"name":"jwt2","type":"jwt"},"lookup_realm":{"name":"native", +"type":"native"},"authentication_type":"realm"}% +``` + + +## PKC JWKS reloading [jwt-realm-jwkset-reloading] + +JWT authentication supports signature verification using PKC (Public Key Cryptography) or HMAC algorithms. + +PKC JSON Web Token Key Sets (JWKS) can contain public RSA and EC keys. HMAC JWKS or an HMAC UTF-8 JWK contain secret keys. JWT issuers typically rotate PKC JWKS more frequently (such as daily), because RSA and EC public keys are designed to be easier to distribute than secret keys like HMAC. + +JWT realms load a PKC JWKS and an HMAC JWKS or HMAC UTF-8 JWK at startup. JWT realms can also reload PKC JWKS contents at runtime; a reload is triggered by signature validation failures. + +::::{note} +HMAC JWKS or HMAC UTF-8 JWK reloading is not supported at this time. +:::: + + +Load failures, parse errors, and configuration errors prevent a node from starting (and restarting). However, runtime PKC reload errors and recoveries are handled gracefully. 
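For reference, the runtime reload described above applies to whatever resource the realm's `pkc_jwkset_path` points to. The sketch below simply restates that setting, reusing the `jwt2` realm name and the placeholder endpoint from the earlier access token example; it is not an additional configuration step.

```yaml
# Sketch only: the JWKS referenced here is loaded at startup and re-fetched when
# signature verification fails, which typically indicates key rotation at the issuer.
xpack.security.authc.realms.jwt.jwt2:
  pkc_jwkset_path: "https://idp-42.example.com/.well-known/configuration"
```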
+ +All other JWT realm validations are checked before a signature failure can trigger a PKC JWKS reload. If multiple JWT authentication signature failures occur simultaneously with a single {{es}} node, reloads are combined to reduce the reloads that are sent externally. + +Separate reload requests cannot be combined if JWT signature failures trigger: + +* PKC JWKS reloads in different {{es}} nodes +* PKC JWKS reloads in the same {{es}} node at different times + +::::{important} +Enabling client authentication (`client_authentication.type`) is strongly recommended. Only trusted client applications and realm-specific JWT users can trigger PKC reload attempts. Additionally, configuring the following [JWT security settings](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/configuration-reference/security-settings.md#ref-jwt-settings) is recommended: + +* `allowed_audiences` +* `allowed_clock_skew` +* `allowed_issuer` +* `allowed_signature_algorithms` + +:::: + + + + +## Authorizing to the JWT realm with an HMAC UTF-8 key [hmac-oidc-example] + +The following settings are for a JWT issuer, {{es}}, and a client of {{es}}. The example HMAC key is in an OIDC format that’s compatible with HMAC. The key bytes are the UTF-8 encoding of the UNICODE characters. + +::::{important} +HMAC UTF-8 keys need to be longer than HMAC random byte keys to achieve the same key strength. +:::: + + +### JWT issuer [hmac-oidc-example-jwt-issuer] + +The following values are for the bespoke JWT issuer. + +```js +Issuer: iss8 +Audiences: aud8 +Algorithms: HS256 +HMAC UTF-8: hmac-oidc-key-string-for-hs256-algorithm +``` + + +### JWT realm settings [hmac-oidc-example-jwt-realm] + +To define a JWT realm, add the following realm settings to `elasticsearch.yml`. + +```yaml +xpack.security.authc.realms.jwt.jwt8.order: 8 <1> +xpack.security.authc.realms.jwt.jwt8.allowed_issuer: iss8 +xpack.security.authc.realms.jwt.jwt8.allowed_audiences: [aud8] +xpack.security.authc.realms.jwt.jwt8.allowed_signature_algorithms: [HS256] +xpack.security.authc.realms.jwt.jwt8.claims.principal: sub +xpack.security.authc.realms.jwt.jwt8.client_authentication.type: shared_secret +``` + +1. In {{ecloud}}, the realm order starts at `2`. `0` and `1` are reserved in the realm chain on {{ecloud}}. + + + +### JWT realm secure settings [_jwt_realm_secure_settings] + +After defining the realm settings, use the [`elasticsearch-keystore`](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/command-line-tools/elasticsearch-keystore.md) tool to add the following secure settings to the {{es}} keystore. In {{ecloud}}, you define settings for the {{es}} keystore under **Security** in your deployment. + +```yaml +xpack.security.authc.realms.jwt.jwt8.hmac_key: hmac-oidc-key-string-for-hs256-algorithm +xpack.security.authc.realms.jwt.jwt8.client_authentication.shared_secret: client-shared-secret-string +``` + + +### JWT realm role mapping rule [_jwt_realm_role_mapping_rule] + +The following request creates role mappings for {{es}} in the `jwt8` realm for the user `principalname1`: + +```console +PUT /_security/role_mapping/jwt8_users?refresh=true +{ + "roles" : [ "user" ], + "rules" : { "all" : [ + { "field": { "realm.name": "jwt8" } }, + { "field": { "username": "principalname1" } } + ] }, + "enabled": true +} +``` + + +### Request headers [hmac-oidc-example-request-headers] + +The following header settings are for an {{es}} client. 
+ +```js +Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJpc3M4IiwiYXVkIjoiYXVkOCIsInN1YiI6InNlY3VyaXR5X3Rlc3RfdXNlciIsImV4cCI6NDA3MDkwODgwMCwiaWF0Ijo5NDY2ODQ4MDB9.UnnFmsoFKfNmKMsVoDQmKI_3-j95PCaKdgqqau3jPMY +ES-Client-Authentication: SharedSecret client-shared-secret-string +``` + +You can use this header in a `curl` request to make an authenticated call to {{es}}. Both the bearer token and the client authorization token must be specified as separate headers with the `-H` option: + +```sh +curl -s -X GET -H "Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJpc3MiOiJpc3M4IiwiYXVkIjoiYXVkOCIsInN1YiI6InNlY3VyaXR5X3Rlc3RfdXNlciIsImV4cCI6NDA3MDkwODgwMCwiaWF0Ijo5NDY2ODQ4MDB9.UnnFmsoFKfNmKMsVoDQmKI_3-j95PCaKdgqqau3jPMY" -H "ES-Client-Authentication: SharedSecret client-shared-secret-string" https://localhost:9200/_security/_authenticate +``` + +If you used role mapping in the JWT realm, the response includes the user’s `username`, their `roles`, metadata about the user, and the details about the JWT realm itself. + +```sh +{"username":"user2","roles":["jwt_role1"],"full_name":null,"email":"user2@something.example.com", +"metadata":{"jwt_claim_email":"user2@something.example.com","jwt_claim_aud":["es01","es02","es03"], +"jwt_claim_sub":"user2","jwt_claim_iss":"my-issuer"},"enabled":true,"authentication_realm": +{"name":"jwt2","type":"jwt"},"lookup_realm":{"name":"jwt2","type":"jwt"},"authentication_type":"realm"} +``` diff --git a/deploy-manage/users-roles/cluster-or-deployment-auth/kerberos.md b/deploy-manage/users-roles/cluster-or-deployment-auth/kerberos.md index 8b211fcf0..6e28afabf 100644 --- a/deploy-manage/users-roles/cluster-or-deployment-auth/kerberos.md +++ b/deploy-manage/users-roles/cluster-or-deployment-auth/kerberos.md @@ -4,26 +4,288 @@ mapped_urls: - https://www.elastic.co/guide/en/cloud/current/ec-secure-clusters-kerberos.html - https://www.elastic.co/guide/en/cloud-heroku/current/ech-secure-clusters-kerberos.html - https://www.elastic.co/guide/en/elasticsearch/reference/current/kerberos-realm.html +navigation_title: Kerberos +applies_to: + deployment: + self: + ess: + ece: + eck: --- -# Kerberos +# Kerberos authentication [kerberos-realm] -% What needs to be done: Refine +You can configure the {{stack}} {{security-features}} to support Kerberos V5 authentication, an industry standard protocol to authenticate users in {{es}} and {{kib}}. -% GitHub issue: https://github.com/elastic/docs-projects/issues/347 +::::{note} +You can't use the Kerberos realm to authenticate on the transport network layer. +:::: -% Use migrated content from existing pages that map to this page: -% - [ ] ./raw-migrated-files/cloud/cloud-enterprise/ece-secure-clusters-kerberos.md -% - [ ] ./raw-migrated-files/cloud/cloud/ec-secure-clusters-kerberos.md -% - [ ] ./raw-migrated-files/cloud/cloud-heroku/ech-secure-clusters-kerberos.md -% - [ ] ./raw-migrated-files/elasticsearch/elasticsearch-reference/kerberos-realm.md +To authenticate users with Kerberos, you need to configure a Kerberos realm and map users to roles. For more information on realm settings, see [Kerberos realm settings](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/configuration-reference/security-settings.md#ref-kerberos-settings). 
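Before working through the concepts and steps that follow, it can help to see the shape of the end result: a realm entry of type `kerberos` in the {{es}} configuration that points at a keytab. The preview below reuses the same placeholder realm name and keytab path as the full, deployment-specific instructions later on this page.

```yaml
# Preview only; see "Create a Kerberos realm" below for the complete procedure
xpack.security.authc.realms.kerberos.kerb1:
  order: 3
  keytab.path: es.keytab
  remove_realm_name: false
```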
-⚠️ **This page is a work in progress.** ⚠️ +## Key concepts [kerberos-terms] -The documentation team is working to combine content pulled from the following pages: +There are a few terms and concepts that you’ll encounter when you’re setting up Kerberos realms: -* [/raw-migrated-files/cloud/cloud-enterprise/ece-secure-clusters-kerberos.md](/raw-migrated-files/cloud/cloud-enterprise/ece-secure-clusters-kerberos.md) -* [/raw-migrated-files/cloud/cloud/ec-secure-clusters-kerberos.md](/raw-migrated-files/cloud/cloud/ec-secure-clusters-kerberos.md) -* [/raw-migrated-files/cloud/cloud-heroku/ech-secure-clusters-kerberos.md](/raw-migrated-files/cloud/cloud-heroku/ech-secure-clusters-kerberos.md) -* [/raw-migrated-files/elasticsearch/elasticsearch-reference/kerberos-realm.md](/raw-migrated-files/elasticsearch/elasticsearch-reference/kerberos-realm.md) \ No newline at end of file +kdc +: Key Distribution Center. A service that issues Kerberos tickets. + +principal +: A Kerberos principal is a unique identity to which Kerberos can assign tickets. It can be used to identify a user or a service provided by a server. + + Kerberos V5 principal names are of format `primary/instance@REALM`, where `primary` is a user name. + + `instance` is an optional string that qualifies the primary and is separated by a slash(`/`) from the primary. For a user, usually it is not used; for service hosts, it is the fully qualified domain name of the host. + + `REALM` is the Kerberos realm. Usually it is the domain name in upper case. An example of a typical user principal is `user@ES.DOMAIN.LOCAL`. An example of a typical service principal is `HTTP/es.domain.local@ES.DOMAIN.LOCAL`. + + +realm +: Realms define the administrative boundary within which the authentication server has authority to authenticate users and services. + +keytab +: A file that stores pairs of principals and encryption keys. + + ::::{important} + Anyone with read permissions to this file can use the credentials in the network to access other services so it is important to protect it with proper file permissions. + :::: + + +krb5.conf +: A file that contains Kerberos configuration information such as the default realm name, the location of Key distribution centers (KDC), realms information, mappings from domain names to Kerberos realms, and default configurations for realm session key encryption types. + +ticket granting ticket (TGT) +: A TGT is an authentication ticket generated by the Kerberos authentication server. It contains an encrypted authenticator. + + +## Configuring a Kerberos realm [kerberos-realm-configuration] + +Kerberos is used to protect services and uses a ticket-based authentication protocol to authenticate users. You can configure {{es}} to use the Kerberos V5 authentication protocol, which is an industry standard protocol, to authenticate users. In this scenario, clients must present Kerberos tickets for authentication. + +In Kerberos, users authenticate with an authentication service and later with a ticket granting service to generate a TGT (ticket-granting ticket). This ticket is then presented to the service for authentication. Refer to your Kerberos installation documentation for more information about obtaining TGT. {{es}} clients must first obtain a TGT then initiate the process of authenticating with {{es}}. + +### Prerequisites [kerberos-realm-prereq] + +Before you set up a Kerberos realm, you must have the Kerberos infrastructure set up in your environment. 
+ +::::{note} +Kerberos requires a lot of external services to function properly, such as time synchronization between all machines and working forward and reverse DNS mappings in your domain. Refer to your Kerberos documentation for more details. +:::: + +These instructions do not cover setting up and configuring your Kerberos deployment. Where examples are provided, they pertain to an MIT Kerberos V5 deployment. For more information, see [MIT Kerberos documentation](http://web.mit.edu/kerberos/www/index.md) + +If you're using a self-managed cluster, then perform the following additional steps: + +* Enable TLS for HTTP. + + If your {{es}} cluster is operating in production mode, you must configure the HTTP interface to use SSL/TLS before you can enable Kerberos authentication. For more information, see [Encrypt HTTP client communications for {{es}}](../../../deploy-manage/security/set-up-basic-security-plus-https.md#encrypt-http-communication). + + This step is necessary to support Kerberos authentication through {{kib}}. It is not required for Kerberos authentication directly against the {{es}} Rest API. + + If you started {{es}} [with security enabled](/deploy-manage/deploy/self-managed/installing-elasticsearch.md), then TLS is already enabled for HTTP. + + {{ech}}, {{ece}}, and {{eck}} have TLS enabled by default. + +* Enable the token service. + + The {{es}} Kerberos implementation makes use of the {{es}} token service. If you configure TLS on the HTTP interface, this service is automatically enabled. It can be explicitly configured by adding the following setting in your `elasticsearch.yml` file: + + ```yaml + xpack.security.authc.token.enabled: true + ``` + This step is necessary to support Kerberos authentication through {{kib}}. It is not required for Kerberos authentication directly against the {{es}} Rest API. + + {{ech}}, {{ece}}, and {{eck}} have TLS enabled by default. + + + +### Create a Kerberos realm [kerberos-realm-create] + +To configure a Kerberos realm in {{es}}: + +#### Prepare Kerberos config files + +{{es}} uses Java GSS framework support for Kerberos authentication. To support Kerberos authentication, {{es}} needs the following files: + + * `krb5.conf`: The Kerberos configuration file (`krb5.conf`) provides information such as the default realm, the Key Distribution Center (KDC), and other configuration details required for Kerberos authentication. For more information, see [krb5.conf](https://web.mit.edu/kerberos/krb5-latest/doc/admin/conf_files/krb5_conf.html). + * `keytab`: A keytab is a file that stores pairs of principals and encryption keys. {{es}} uses the keys from the keytab to decrypt the tickets presented by the user. You must create a keytab for {{es}} by using the tools provided by your Kerberos implementation. For example, some tools that create keytabs are `ktpass.exe` on Windows and `kadmin` for MIT Kerberos. + +The configuration requirements depend on your Kerberos setup. Refer to your Kerberos documentation to configure the `krb5.conf` file. + +For more information on Java GSS, see [Java GSS Kerberos requirements](https://docs.oracle.com/javase/10/security/kerberos-requirements1.htm). + +#### Configure {{es}} + +The way that you provide Kerberos config files to {{es}} depends on your deployment method. + +For detailed information of available realm settings, see [Kerberos realm settings](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/configuration-reference/security-settings.md#ref-kerberos-settings). 
+
+:::::{tab-set}
+
+::::{tab-item} Self-managed
+
+1. Configure the JVM to find the Kerberos configuration file.
+
+    {{es}} uses Java GSS and JAAS Krb5LoginModule to support Kerberos authentication using a Simple and Protected GSSAPI Negotiation (SPNEGO) mechanism. When the JVM needs some configuration properties, it tries to find those values by locating and loading the `krb5.conf` file. The JVM system property to configure the file path is `java.security.krb5.conf`. To configure JVM system properties, see [Set JVM options](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/jvm-settings.md#set-jvm-options). If this system property is not specified, Java tries to locate the file based on the conventions.
+
+    :::{tip}
+    It is recommended that this system property be configured for {{es}}. The method for setting this property depends on your Kerberos infrastructure. Refer to your Kerberos documentation for more details.
+    :::
+
+    For more information, see [krb5.conf](https://web.mit.edu/kerberos/krb5-latest/doc/admin/conf_files/krb5_conf.html).
+
+2. Put the keytab file in the {{es}} configuration directory.
+
+    Make sure that this keytab file has read permissions. This file contains credentials, therefore you must take appropriate measures to protect it.
+
+    :::{important}
+    {{es}} uses Kerberos on the HTTP network layer, therefore there must be a keytab file for the HTTP service principal on every {{es}} node. The service principal name must have the format `HTTP/es.domain.local@ES.DOMAIN.LOCAL`. The keytab files are unique for each node since they include the hostname. An {{es}} node can act as any principal a client requests as long as that principal and its credentials are found in the configured keytab.
+    :::
+
+3. Create a Kerberos realm.
+
+    To enable Kerberos authentication in {{es}}, you must add a Kerberos realm in the realm chain.
+
+    :::{note}
+    You can configure only one Kerberos realm on {{es}} nodes.
+    :::
+
+    To configure a Kerberos realm, there are a few mandatory realm settings and other optional settings that you need to configure in the `elasticsearch.yml` configuration file. Add a realm configuration under the `xpack.security.authc.realms.kerberos` namespace.
+
+    The most common configuration for a Kerberos realm is as follows:
+
+    ```yaml
+    xpack.security.authc.realms.kerberos.kerb1:
+      order: 3
+      keytab.path: es.keytab
+      remove_realm_name: false
+    ```
+4. Restart {{es}}.
+::::
+
+::::{tab-item} ECH and ECE
+
+1. Create a [custom bundle](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch-plugins/cloud-enterprise/ece-add-plugins.md) that contains your `krb5.conf` and `keytab` files, and add it to your cluster.
+
+    :::{tip}
+    You should use these exact filenames for {{ecloud}} to recognize the file in the bundle.
+    :::
+
+2. Edit your cluster configuration, sometimes also referred to as the deployment plan, to define your Kerberos settings:
+
+    ```yaml
+    xpack.security.authc.realms.kerberos.cloud-krb:
+      order: 2
+      keytab.path: es.keytab
+      remove_realm_name: false
+    ```
+
+    :::{important}
+    The name of the realm must be `cloud-krb`, and the order must be 2: `xpack.security.authc.realms.kerberos.cloud-krb.order: 2`
+    :::
+::::
+
+::::{tab-item} ECK
+
+1. Install your `krb5.conf` and `keytab` files as [custom configuration files](/deploy-manage/deploy/cloud-on-k8s/custom-configuration-files-plugins.md#use-a-volume-and-volume-mount-together-with-a-configmap-or-secret).
Mount them in a sub-directory of the main config directory, for example `/usr/share/elasticsearch/config/kerberos`, and use a `Secret` instead of a `ConfigMap` to store the information.
+
+2. Configure the JVM to find the Kerberos configuration file.
+
+    {{es}} uses Java GSS and JAAS Krb5LoginModule to support Kerberos authentication using a Simple and Protected GSSAPI Negotiation (SPNEGO) mechanism. When the JVM needs some configuration properties, it tries to find those values by locating and loading the `krb5.conf` file. The JVM system property to configure the file path is `java.security.krb5.conf`. If this system property is not specified, Java tries to locate the file based on the conventions.
+
+    To provide JVM setting overrides to your cluster:
+
+    1. Create a new ConfigMap with a valid JVM options file as the key. The source file should be a JVM `.options` file containing the JVM system property `-Djava.security.krb5.conf=/usr/share/elasticsearch/config/kerberos/krb5.conf`, assuming the `krb5.conf` file was mounted on `/usr/share/elasticsearch/config/kerberos` in the previous step.
+
+        ```sh
+        # create a configmap with a key named override.options and the content of your local file
+        kubectl create configmap jvm-options --from-file=override.options=
+        ```
+
+    2. Reference the ConfigMap in your [cluster specification](/deploy-manage/deploy/cloud-on-k8s/update-deployments.md):
+
+        ```yaml
+        apiVersion: elasticsearch.k8s.elastic.co/v1
+        kind: Elasticsearch
+        metadata:
+          name: test-cluster
+        spec:
+          version: 8.17.0
+          nodeSets:
+          - name: default
+            count: 3
+            config:
+              # this allows ES to run on nodes even if their vm.max_map_count has not been increased, at a performance cost
+              node.store.allow_mmap: false
+            podTemplate:
+              spec:
+                containers:
+                - name: elasticsearch
+                  volumeMounts:
+                  - name: jvm-opts
+                    mountPath: /usr/share/elasticsearch/config/jvm.options.d
+                  - name: krb5
+                    mountPath: /usr/share/elasticsearch/config/kerberos
+                volumes:
+                - name: jvm-opts
+                  configMap:
+                    name: jvm-options
+                - name: krb5
+                  secret:
+                    secretName: kerberos-secret
+        ```
+
+3. Edit your cluster configuration to define your Kerberos settings:
+
+    ```yaml
+    xpack.security.authc.realms.kerberos.cloud-krb:
+      order: 2
+      keytab.path: kerberos/keytab
+      remove_realm_name: false
+    ```
+::::
+
+:::::
+
+The `username` is extracted from the ticket presented by the user and usually has the format `username@REALM`. This `username` is used for mapping roles to the user. If the realm setting `remove_realm_name` is set to `true`, the realm part (`@REALM`) is removed.
+
+## Map Kerberos users to roles
+
+The `kerberos` realm enables you to map Kerberos users to roles.
+
+You can map these users to roles in multiple ways:
+
+* Using the role mappings page in {{kib}}.
+* Using the [role mapping API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-put-role-mapping).
+
+You identify users by their `username` field.
+
+The following example uses the role mapping API to map `user@REALM` to the `monitoring_user` role:
+
+```console
+POST /_security/role_mapping/kerbrolemapping
+{
+  "roles" : [ "monitoring_user" ],
+  "enabled": true,
+  "rules" : {
+    "field" : { "username" : "user@REALM" }
+  }
+}
+```
+
+If you want to support Kerberos cross-realm authentication, you might need to map roles based on the Kerberos realm name. For such scenarios, the following additional user metadata can be used for role mapping:
+
+- `kerberos_realm`: The Kerberos realm name.
+- `kerberos_user_principal_name`: The user principal name from the Kerberos ticket. + +For more information, see [Mapping users and groups to roles](/deploy-manage/users-roles/cluster-or-deployment-auth/mapping-users-groups-to-roles.md). + +::::{note} +The Kerberos realm supports [authorization realms](/deploy-manage/users-roles/cluster-or-deployment-auth/realm-chains.md#authorization_realms) as an alternative to role mapping. +:::: + +## Use Kerberos authentication for {{kib}} [kerberos-realm-kibana] + +If you want to use Kerberos to authenticate using your browser and {{kib}}, you need to enable the relevant authentication provider in {{kib}} configuration. See [Kerberos single sign-on](/deploy-manage/users-roles/cluster-or-deployment-auth/kibana-authentication.md#kerberos). diff --git a/raw-migrated-files/kibana/kibana/kibana-authentication.md b/deploy-manage/users-roles/cluster-or-deployment-auth/kibana-authentication.md similarity index 78% rename from raw-migrated-files/kibana/kibana/kibana-authentication.md rename to deploy-manage/users-roles/cluster-or-deployment-auth/kibana-authentication.md index 4fb1d7492..ad06aa4e5 100644 --- a/raw-migrated-files/kibana/kibana/kibana-authentication.md +++ b/deploy-manage/users-roles/cluster-or-deployment-auth/kibana-authentication.md @@ -1,24 +1,29 @@ --- -navigation_title: "Authentication" +navigation_title: "Kibana authentication" +applies_to: + deployment: + ess: + ece: + eck: + self: --- # Authentication in {{kib}} [kibana-authentication] +After you configure an authentication method in {{es}}, you can configure an authentication mechanism to log in to {{kib}}. {{kib}} supports the following authentication mechanisms: -* [Multiple authentication providers](../../../deploy-manage/users-roles/cluster-or-deployment-auth/user-authentication.md#multiple-authentication-providers) -* [Basic authentication](../../../deploy-manage/users-roles/cluster-or-deployment-auth/user-authentication.md#basic-authentication) -* [Token authentication](../../../deploy-manage/users-roles/cluster-or-deployment-auth/user-authentication.md#token-authentication) -* [Public key infrastructure (PKI) authentication](../../../deploy-manage/users-roles/cluster-or-deployment-auth/user-authentication.md#pki-authentication) -* [SAML single sign-on](../../../deploy-manage/users-roles/cluster-or-deployment-auth/user-authentication.md#saml) -* [OpenID Connect single sign-on](../../../deploy-manage/users-roles/cluster-or-deployment-auth/user-authentication.md#oidc) -* [Kerberos single sign-on](../../../deploy-manage/users-roles/cluster-or-deployment-auth/user-authentication.md#kerberos) -* [Anonymous authentication](../../../deploy-manage/users-roles/cluster-or-deployment-auth/user-authentication.md#anonymous-authentication) -* [HTTP authentication](../../../deploy-manage/users-roles/cluster-or-deployment-auth/user-authentication.md#http-authentication) -* [Embedded content authentication](../../../deploy-manage/users-roles/cluster-or-deployment-auth/user-authentication.md#embedded-content-authentication) - -For an introduction to {{kib}}'s security features, including the login process, refer to [*Securing access to {{kib}}*](../../../deploy-manage/users-roles/cluster-or-deployment-auth/quickstart.md). 
+* [Multiple authentication providers](#multiple-authentication-providers) +* [Basic authentication](#basic-authentication) +* [Token authentication](#token-authentication) +* [Public key infrastructure (PKI) authentication](#pki-authentication) +* [SAML single sign-on](#saml) +* [OpenID Connect single sign-on](#oidc) +* [Kerberos single sign-on](#kerberos) +* [Anonymous authentication](#anonymous-authentication) +* [HTTP authentication](#http-authentication) +* [Embedded content authentication](#embedded-content-authentication) ## Multiple authentication providers [multiple-authentication-providers] @@ -76,28 +81,23 @@ If you have multiple authentication providers configured, you can use the `auth_ ## Basic authentication [basic-authentication] -To successfully log in to {{kib}}, basic authentication requires a username and password. Basic authentication is enabled by default, and is based on the Native, LDAP, or Active Directory security realm that is provided by {{es}}. The basic authentication provider uses a {{kib}} provided login form, and supports authentication using the `Authorization` request header `Basic` scheme. +To successfully log in to {{kib}}, basic authentication requires a username and password. Basic authentication is enabled by default, and is based on the [Native](native.md), [LDAP](ldap.md), or [Active Directory](active-directory.md) security realm that is provided by {{es}}. The basic authentication provider uses a {{kib}} provided login form, and supports authentication using the `Authorization` request header `Basic` scheme. ::::{note} You can configure only one Basic provider per {{kib}} instance. :::: - -For more information about basic authentication and built-in users, see [User authentication](../../../deploy-manage/users-roles/cluster-or-deployment-auth/user-authentication.md). - - ## Token authentication [token-authentication] -Token authentication is a [subscription feature](https://www.elastic.co/subscriptions). This allows users to log in using the same {{kib}} provided login form as basic authentication, and is based on the Native security realm or LDAP security realm that is provided by {{es}}. The token authentication provider is built on {{es}} token APIs. - -Prior to configuring {{kib}}, ensure token support is enabled in {{es}}. See the [{{es}} token API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-get-token) documentation for more information. +Token authentication is a [subscription feature](https://www.elastic.co/subscriptions). This allows users to log in using the same {{kib}} provided login form as basic authentication, and is based on the [Native](native.md) or [LDAP](ldap.md) security realm that is provided by {{es}}. The token authentication provider is built on {{es}} token APIs. -To enable the token authentication provider in {{kib}}, set the following value in your `kibana.yml`: +Prior to configuring {{kib}}, ensure that token support is enabled in {{es}}. See the [{{es}} token API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-get-token) documentation for more information. ::::{note} -You can configure only one Token provider per {{kib}} instance. +You can configure only one token provider per {{kib}} instance. 
::::

+To enable the token authentication provider in {{kib}}, set the following value in your `kibana.yml`:

 ```yaml
 xpack.security.authc.providers:
@@ -105,7 +105,7 @@ xpack.security.authc.providers:
   order: 0
 ```

-Switching to the token authentication provider from basic one will make {{kib}} to reject requests from applications like `curl` that usually use `Authorization` request header with the `Basic` scheme for authentication. If you still want to support such applications you’ll have to either switch to using `Bearer` scheme with the tokens [created by {{es}} token API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-get-token) or add `Basic` scheme to the list of supported schemes for the [HTTP authentication](../../../deploy-manage/users-roles/cluster-or-deployment-auth/user-authentication.md#http-authentication).
+Switching to the token authentication provider from the basic one will cause {{kib}} to reject requests from applications like `curl` that usually use the `Authorization` request header with the `Basic` scheme for authentication. If you still want to support such applications, you’ll have to either switch to using the `Bearer` scheme with tokens [created by the {{es}} token API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-get-token), or add the `Basic` scheme to the list of supported schemes for [HTTP authentication](#http-authentication).


 ## Public key infrastructure (PKI) authentication [pki-authentication]
@@ -135,6 +135,8 @@ xpack.security.authc.providers:
   order: 0
 ```

+If you're using {{ece}} or {{ech}}, then you must [upload this file as a custom bundle](/deploy-manage/deploy/elastic-cloud/upload-custom-plugins-bundles.md) before it can be referenced. If you're using {{eck}}, then install the file as a [custom configuration file](/deploy-manage/deploy/cloud-on-k8s/custom-configuration-files-plugins.md#use-a-volume-and-volume-mount-together-with-a-configmap-or-secret). If you're using a self-managed cluster, then the file must be present on each node.
+
 ::::{note}
 Trusted CAs can also be specified in a PKCS #12 keystore bundled with your {{kib}} server certificate/key using `server.ssl.keystore.path` or in a separate trust store using `server.ssl.truststore.path`.
 ::::
@@ -200,12 +202,12 @@ xpack.security.authc.providers:
 Basic authentication is supported *only* if the `basic` authentication provider is explicitly declared in `xpack.security.authc.providers` setting, in addition to `saml`.

-To support basic authentication for the applications like `curl` or when the `Authorization: Basic base64(username:password)` HTTP header is included in the request (for example, by reverse proxy), add `Basic` scheme to the list of supported schemes for the [HTTP authentication](../../../deploy-manage/users-roles/cluster-or-deployment-auth/user-authentication.md#http-authentication).
+To support basic authentication for applications like `curl`, or when the `Authorization: Basic base64(username:password)` HTTP header is included in the request (for example, by a reverse proxy), add the `Basic` scheme to the list of supported schemes for [HTTP authentication](#http-authentication).


 ## OpenID Connect single sign-on [oidc]

-OpenID Connect (OIDC) authentication is part of single sign-on (SSO), a [subscription feature](https://www.elastic.co/subscriptions). Similar to SAML, authentication with OIDC allows users to log in to {{kib}} using an OIDC Provider such as Google, or Okta. OIDC should also be configured in {{es}}.
For more details, see [Configuring single sign-on to the {{stack}} using OpenID Connect](../../../deploy-manage/users-roles/cluster-or-deployment-auth/openid-connect.md).
+OpenID Connect (OIDC) authentication is part of single sign-on (SSO), a [subscription feature](https://www.elastic.co/subscriptions). Similar to SAML, authentication with OIDC allows users to log in to {{kib}} using an OIDC Provider such as Google or Okta. OIDC should also be configured in {{es}}. For more details, see [Configuring single sign-on to the {{stack}} using OpenID Connect](/deploy-manage/users-roles/cluster-or-deployment-auth/openid-connect.md).

 Enable OIDC authentication by specifying which OIDC realm in {{es}} to use:
@@ -249,12 +251,12 @@ xpack.security.authc.providers:
 Basic authentication is supported *only* if the `basic` authentication provider is explicitly declared in `xpack.security.authc.providers` setting, in addition to `oidc`.

-To support basic authentication for the applications like `curl` or when the `Authorization: Basic base64(username:password)` HTTP header is included in the request (for example, by reverse proxy), add `Basic` scheme to the list of supported schemes for the [HTTP authentication](../../../deploy-manage/users-roles/cluster-or-deployment-auth/user-authentication.md#http-authentication).
+To support basic authentication for applications like `curl`, or when the `Authorization: Basic base64(username:password)` HTTP header is included in the request (for example, by a reverse proxy), add the `Basic` scheme to the list of supported schemes for [HTTP authentication](#http-authentication).


 ### Single sign-on provider details [_single_sign_on_provider_details]

-The following sections apply both to [SAML single sign-on](../../../deploy-manage/users-roles/cluster-or-deployment-auth/user-authentication.md#saml) and [OpenID Connect single sign-on](../../../deploy-manage/users-roles/cluster-or-deployment-auth/user-authentication.md#oidc)
+The following sections apply both to [SAML single sign-on](#saml) and [OpenID Connect single sign-on](#oidc).


 #### Access and refresh tokens [_access_and_refresh_tokens]
@@ -275,7 +277,7 @@ During logout, both the {{kib}} session and {{es}} access/refresh token pair are

 ## Kerberos single sign-on [kerberos]

-Kerberos authentication is part of single sign-on (SSO), a [subscription feature](https://www.elastic.co/subscriptions). Make sure that Kerberos is enabled and configured in {{es}} before setting it up in {{kib}}. See [Kerberos authentication](../../../deploy-manage/users-roles/cluster-or-deployment-auth/kerberos.md).
+Kerberos authentication is part of single sign-on (SSO), a [subscription feature](https://www.elastic.co/subscriptions). Make sure that Kerberos is enabled and configured in {{es}} before setting it up in {{kib}}. See [Kerberos authentication](/deploy-manage/users-roles/cluster-or-deployment-auth/kerberos.md).

 Next, to enable Kerberos in {{kib}}, you will need to enable the Kerberos authentication provider in the `kibana.yml` configuration file, as follows:
@@ -306,7 +308,6 @@ xpack.security.authc.providers:
 ::::


-
 ## Anonymous authentication [anonymous-authentication]

 ::::{important}
@@ -324,7 +325,7 @@ You can configure only one anonymous authentication provider per {{kib}} instanc
 ::::

-You must have a user account that can authenticate to {{es}} using a username and password, for instance from the Native or LDAP security realms, so that you can use these credentials to impersonate the anonymous users.
Here is how your `kibana.yml` might look: +You must have a user account that can authenticate to {{es}} using a username and password, for instance from the [Native](native.md) or [LDAP](ldap.md) security realms, so that you can use these credentials to impersonate the anonymous users. Here is how your `kibana.yml` might look: ```yaml xpack.security.authc.providers: @@ -361,9 +362,9 @@ For information on how to embed, refer to [Embed {{kib}} content in a web page]( #### Anonymous access session [anonymous-access-session] -{{kib}} maintains a separate [session](../../../deploy-manage/security/kibana-session-management.md) for every anonymous user, as it does for all other authentication mechanisms. +{{kib}} maintains a separate [session](/deploy-manage/security/kibana-session-management.md) for every anonymous user, as it does for all other authentication mechanisms. -You can configure [session idle timeout](../../../deploy-manage/security/kibana-session-management.md#session-idle-timeout) and [session lifespan](../../../deploy-manage/security/kibana-session-management.md#session-lifespan) for anonymous sessions the same as you do for any other session with the exception that idle timeout is explicitly disabled for anonymous sessions by default. The global [`xpack.security.session.idleTimeout`](asciidocalypse://docs/kibana/docs/reference/configuration-reference/security-settings.md#security-session-and-cookie-settings) setting doesn’t affect anonymous sessions. To change the idle timeout for anonymous sessions, you must configure the provider-level [`xpack.security.authc.providers.anonymous..session.idleTimeout`](asciidocalypse://docs/kibana/docs/reference/configuration-reference/security-settings.md#anonymous-authentication-provider-settings) setting. +You can configure [session idle timeout](/deploy-manage/security/kibana-session-management.md#session-idle-timeout) and [session lifespan](/deploy-manage/security/kibana-session-management.md#session-lifespan) for anonymous sessions the same as you do for any other session with the exception that idle timeout is explicitly disabled for anonymous sessions by default. The global [`xpack.security.session.idleTimeout`](asciidocalypse://docs/kibana/docs/reference/configuration-reference/security-settings.md#security-session-and-cookie-settings) setting doesn’t affect anonymous sessions. To change the idle timeout for anonymous sessions, you must configure the provider-level [`xpack.security.authc.providers.anonymous..session.idleTimeout`](asciidocalypse://docs/kibana/docs/reference/configuration-reference/security-settings.md#anonymous-authentication-provider-settings) setting. ## HTTP authentication [http-authentication] @@ -384,7 +385,7 @@ API keys are intended for programmatic access to {{kib}} and {{es}}. Do not use :::: -By default {{kib}} supports [`ApiKey`](../../../deploy-manage/api-keys/elasticsearch-api-keys.md) authentication scheme *and* any scheme supported by the currently enabled authentication provider. For example, `Basic` authentication scheme is automatically supported when basic authentication provider is enabled, or `Bearer` scheme when any of the token based authentication providers is enabled (Token, SAML, OpenID Connect, PKI or Kerberos). 
But it’s also possible to add support for any other authentication scheme in the `kibana.yml` configuration file, as follows:
+By default {{kib}} supports [`ApiKey`](/deploy-manage/api-keys/elasticsearch-api-keys.md) authentication scheme *and* any scheme supported by the currently enabled authentication provider. For example, `Basic` authentication scheme is automatically supported when basic authentication provider is enabled, or `Bearer` scheme when any of the token based authentication providers is enabled (Token, SAML, OpenID Connect, PKI or Kerberos). But it’s also possible to add support for any other authentication scheme in the `kibana.yml` configuration file, as follows:

 ::::{note}
 Don’t forget to explicitly specify the default `apikey` and `bearer` schemes when you just want to add a new one to the list.
@@ -432,6 +433,4 @@ To make this iframe leverage anonymous access automatically, you will need to mo
 ::::

-For more information, refer to [Embed code](../../../explore-analyze/report-and-share.md#embed-code).
-
-
+For more information, refer to [Embed code](../../../explore-analyze/report-and-share.md#embed-code).
\ No newline at end of file
diff --git a/deploy-manage/users-roles/cluster-or-deployment-auth/ldap.md b/deploy-manage/users-roles/cluster-or-deployment-auth/ldap.md
index 3aaa17419..67001f840 100644
--- a/deploy-manage/users-roles/cluster-or-deployment-auth/ldap.md
+++ b/deploy-manage/users-roles/cluster-or-deployment-auth/ldap.md
@@ -2,30 +2,316 @@
 mapped_urls:
   - https://www.elastic.co/guide/en/elasticsearch/reference/current/ldap-realm.html
   - https://www.elastic.co/guide/en/cloud-enterprise/current/ece-securing-clusters-ldap.html
+applies_to:
+  deployment:
+    self:
+    ess:
+    ece:
+    eck:
+navigation_title: LDAP
 ---

-# LDAP
+# LDAP user authentication [ldap-realm]

-% What needs to be done: Refine
+You can configure the {{stack}} {{security-features}} to communicate with a Lightweight Directory Access Protocol (LDAP) server to authenticate users. See [Configuring an LDAP realm](#ldap-realm-configuration).

-% GitHub issue: https://github.com/elastic/docs-projects/issues/347
+To integrate with LDAP, you configure an `ldap` realm and map LDAP groups to user roles.

-% Use migrated content from existing pages that map to this page:
+:::{tip}
+This topic describes implementing LDAP at the cluster or deployment level, for the purposes of authenticating with {{es}} and {{kib}}.

-% - [ ] ./raw-migrated-files/elasticsearch/elasticsearch-reference/ldap-realm.md
-% - [ ] ./raw-migrated-files/cloud/cloud-enterprise/ece-securing-clusters-ldap.md
+You can also configure an {{ece}} installation to use an LDAP server to authenticate users. [Learn more](/deploy-manage/users-roles/cloud-enterprise-orchestrator/ldap.md).
+:::

-% Internal links rely on the following IDs being on this page (e.g. as a heading ID, paragraph ID, etc):
+## How it works

-$$$ldap-realm-configuration$$$
+LDAP stores users and groups hierarchically, similar to the way folders are grouped in a file system. An LDAP directory’s hierarchy is built from containers such as the *organizational unit* (`ou`), *organization* (`o`), and *domain component* (`dc`).

-$$$tls-ldap$$$
+The path to an entry is a *Distinguished Name* (DN) that uniquely identifies a user or group. User and group names typically have attributes such as a *common name* (`cn`) or *unique ID* (`uid`).
A DN is specified as a string, for example `"cn=admin,dc=example,dc=com"` (white spaces are ignored).

-$$$mapping-roles-ldap$$$
+The `ldap` realm supports two modes of operation, a user search mode and a mode with specific templates for user DNs.

-$$$ldap-user-metadata$$$
+::::{important}
+When you configure realms in `elasticsearch.yml`, only the realms you specify are used for authentication. If you also want to use the `native` or `file` realms, you must include them in the realm chain.
+::::

-**This page is a work in progress.** The documentation team is working to combine content pulled from the following pages:
+## Step 1: Add a new realm configuration [ldap-realm-configuration]

-* [/raw-migrated-files/elasticsearch/elasticsearch-reference/ldap-realm.md](/raw-migrated-files/elasticsearch/elasticsearch-reference/ldap-realm.md)
-* [/raw-migrated-files/cloud/cloud-enterprise/ece-securing-clusters-ldap.md](/raw-migrated-files/cloud/cloud-enterprise/ece-securing-clusters-ldap.md)
\ No newline at end of file
+The `ldap` realm supports two modes of operation, a user search mode and a mode with specific templates for user DNs:
+
+* **LDAP user search**: The most common mode of operation. In this mode, a specific user with permission to search the LDAP directory is used to search for the DN of the authenticating user based on the provided username and an LDAP attribute. Once found, the user is authenticated by attempting to bind to the LDAP server using the found DN and the provided password.
+
+* **DN templates**: If your LDAP environment uses a few specific standard naming conditions for users, you can use user DN templates to configure the realm. The advantage of this method is that a search does not have to be performed to find the user DN. However, multiple bind operations might be needed to find the correct user DN.
+
+
+### Set up LDAP user search mode
+
+To configure an `ldap` realm with user search:
+
+1. Add a realm configuration to `elasticsearch.yml` under the `xpack.security.authc.realms.ldap` namespace.
+
+    At a minimum, you must specify the `url` and `order` of the LDAP server, and set `user_search.base_dn` to the container DN where the users are searched for. See [LDAP realm settings](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/configuration-reference/security-settings.md#ref-ldap-settings) for all of the options you can set for an `ldap` realm.
+
+    For example, the following snippet shows an LDAP realm configured with a user search (a hedged `ldapsearch` check of these settings is sketched after the callouts):
+
+    ```yaml
+    xpack:
+      security:
+        authc:
+          realms:
+            ldap:
+              ldap1:
+                order: 2 <1>
+                url: "ldap://ldap.example.com:389" <2>
+                bind_dn: "cn=ldapuser, ou=users, o=services, dc=example, dc=com" <3>
+                user_search:
+                  base_dn: "ou=users, o=services, dc=example, dc=com" <4>
+                  filter: "(cn={0})" <5>
+                group_search:
+                  base_dn: "ou=groups, o=services, dc=example, dc=com" <6>
+    ```
+
+    1. The order in which the LDAP realm will be consulted during an authentication attempt.
+    2. The LDAP URL pointing to the LDAP server that should handle authentication.
+    3. The DN of the bind user.
+    4. The base DN under which your users are located in LDAP.
+    5. Optionally specify an additional LDAP filter used to search the directory in attempts to match an entry with the username provided by the user. Defaults to `(uid={0})`. `{0}` is substituted with the username provided by the user for authentication.
+    6. The base DN under which groups are located in LDAP.
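+
+    Optionally, before restarting {{es}}, you can verify the bind user, base DN, and filter outside of {{es}}. The following is a hedged sketch using the OpenLDAP `ldapsearch` client with the example names above; `jdoe` is a hypothetical username, so replace it with a real entry from your directory.
+
+    ```sh
+    # Bind as the bind_dn user (prompts for its password) and run the same search
+    # that the ldap realm performs for a user who logs in as "jdoe"
+    ldapsearch -x -H "ldap://ldap.example.com:389" \
+      -D "cn=ldapuser, ou=users, o=services, dc=example, dc=com" -W \
+      -b "ou=users, o=services, dc=example, dc=com" "(cn=jdoe)" dn
+    ```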
+
+    ::::{warning}
+    In {{ece}}, you must apply the user settings to each [deployment template](/deploy-manage/deploy/cloud-enterprise/configure-deployment-templates.md).
+    ::::
+
+2. Configure the password for the `bind_dn` user by adding the `xpack.security.authc.realms.ldap.<realm-name>.secure_bind_password` setting [to the {{es}} keystore](/deploy-manage/security/secure-settings.md).
+
+    :::{warning}
+    In {{ech}} and {{ece}}, after you configure `secure_bind_password`, any attempt to restart the deployment will fail until you complete the rest of the configuration steps. If you want to roll back the LDAP realm configuration, you need to remove the `xpack.security.authc.realms.ldap.<realm-name>.secure_bind_password` setting that was just added.
+    :::
+
+3. (Optional) Configure how the {{security-features}} interact with multiple LDAP servers.
+
+    The `load_balance.type` setting can be used at the realm level. The {{es}} {{security-features}} support both failover and load balancing modes of operation. See [LDAP realm settings](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/configuration-reference/security-settings.md#ref-ldap-settings).
+
+4. (Optional) To protect passwords, [encrypt communications between {{es}} and the LDAP server](#tls-ldap).
+
+    * **For self-managed clusters and {{eck}} deployments**, clients and nodes that connect using SSL/TLS to the LDAP server need to have the LDAP server’s certificate or the server’s root CA certificate installed in their keystore or trust store.
+
+    * **For {{ece}} and {{ech}} deployments**, if your LDAP server is configured to use LDAP over TLS and it uses a self-signed certificate or a certificate that is signed by your organization’s CA, you need to enable the deployment to trust this certificate.
+5. Restart {{es}}.
+
+### Set up LDAP with user DN templates
+
+To configure an `ldap` realm with user DN templates:
+
+1. Add a realm configuration to `elasticsearch.yml` in the `xpack.security.authc.realms.ldap` namespace. At a minimum, you must specify the `url` and `order` of the LDAP server, and specify at least one template with the `user_dn_templates` option. See [LDAP realm settings](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/configuration-reference/security-settings.md#ref-ldap-settings) for all of the options you can set for an `ldap` realm.
+
+    For example, the following snippet shows an LDAP realm configured with user DN templates:
+
+    ```yaml
+    xpack:
+      security:
+        authc:
+          realms:
+            ldap:
+              ldap1:
+                order: 2 <1>
+                url: "ldap://ldap.example.com:389" <2>
+                user_dn_templates: <3>
+                  - "uid={0}, ou=users, o=engineering, dc=example, dc=com"
+                  - "uid={0}, ou=users, o=marketing, dc=example, dc=com"
+                group_search:
+                  base_dn: "ou=groups, o=services, dc=example, dc=com" <4>
+    ```
+
+    1. The order in which the LDAP realm will be consulted during an authentication attempt.
+    2. The LDAP URL pointing to the LDAP server that should handle authentication.
+    3. The templates that should be tried for constructing the user DN and authenticating to LDAP. If a user attempts to authenticate with username `user1` and password `password1`, authentication will be attempted with the DN `uid=user1, ou=users, o=engineering, dc=example, dc=com` and if not successful, also with `uid=user1, ou=users, o=marketing, dc=example, dc=com` and the given password.
If authentication with one of the constructed DNs is successful, all subsequent LDAP operations are run with this user.
+    4. The base DN under which groups are located in LDAP.
+
+    The `bind_dn` setting is not used in template mode. All LDAP operations run as the authenticating user.
+
+    ::::{warning}
+    In {{ece}}, you must apply the user settings to each [deployment template](../../../deploy-manage/deploy/cloud-enterprise/configure-deployment-templates.md).
+    ::::
+
+2. (Optional) Configure how the {{security-features}} interact with multiple LDAP servers.
+
+    The `load_balance.type` setting can be used at the realm level. The {{es}} {{security-features}} support both failover and load balancing modes of operation. See [LDAP realm settings](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/configuration-reference/security-settings.md#ref-ldap-settings).
+
+3. (Optional) To protect passwords, [encrypt communications between {{es}} and the LDAP server](#tls-ldap).
+
+    * **For self-managed clusters and {{eck}} deployments**, clients and nodes that connect using SSL/TLS to the LDAP server need to have the LDAP server’s certificate or the server’s root CA certificate installed in their keystore or trust store.
+
+    * **For {{ece}} and {{ech}} deployments**, if your LDAP server is configured to use LDAP over TLS and it uses a self-signed certificate or a certificate that is signed by your organization’s CA, you need to enable the deployment to trust this certificate.
+4. Restart {{es}}.
+
+## Step 2: Map LDAP groups to roles [mapping-roles-ldap]
+
+An integral part of a realm authentication process is to resolve the roles associated with the authenticated user. Roles define the privileges a user has in the cluster.
+
+Because users are managed externally in the LDAP server, the expectation is that their roles are managed there as well. LDAP groups often represent user roles for different systems in the organization.
+
+The `ldap` realm enables you to map LDAP users to roles using their LDAP groups or other metadata.
+
+You can map LDAP groups to roles in the following ways:
+
+* Using the role mappings page in {{kib}}.
+* Using the [role mapping API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-put-role-mapping).
+* Using a role mapping file.
+
+For more information, see [Mapping users and groups to roles](/deploy-manage/users-roles/cluster-or-deployment-auth/mapping-users-groups-to-roles.md).
+
+::::{note}
+The LDAP realm supports [authorization realms](../../../deploy-manage/users-roles/cluster-or-deployment-auth/realm-chains.md#authorization_realms) as an alternative to role mapping.
+::::
+
+### Example: Using the role mapping API
+
+```console
+POST /_security/role_mapping/ldap-superuser <1>
+{
+  "enabled": true,
+  "roles": [ "superuser" ], <2>
+  "rules": {
+    "all" : [
+      { "field": { "realm.name": "ldap1" } }, <3>
+      { "field": { "groups": "cn=administrators, ou=groups, o=services, dc=example, dc=com" } } <4>
+    ]
+  },
+  "metadata": { "version": 1 }
+}
+```
+
+1. The name of the role mapping.
+2. The name of the role we want to assign, in this case `superuser`.
+3. The name of our LDAP realm.
+4. The DN of the LDAP group whose members should get the `superuser` role in the deployment.
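+
+After you add a mapping, you can check which roles and LDAP metadata {{es}} resolves for a directory user by calling the authenticate API with that user's credentials. This is a minimal sketch; `jdoe` and the endpoint URL are assumptions, so substitute a real user from the mapped group and your own cluster address. The response includes the assigned `roles` together with the `ldap_dn` and `ldap_groups` metadata described later on this page.
+
+```sh
+# Authenticate as the LDAP user (curl prompts for the password) and inspect
+# the resolved roles and LDAP metadata in the JSON response
+curl -u jdoe "https://es.example.com:9200/_security/_authenticate?pretty"
+```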
+
+
+### Example: Using a role mapping file
+
+:::{tip}
+If you're using {{ece}} or {{ech}}, then you must [upload this file as a custom bundle](/deploy-manage/deploy/elastic-cloud/upload-custom-plugins-bundles.md) before it can be referenced. If you're using {{eck}}, then install the file as a [custom configuration file](/deploy-manage/deploy/cloud-on-k8s/custom-configuration-files-plugins.md#use-a-volume-and-volume-mount-together-with-a-configmap-or-secret). If you're using a self-managed cluster, then the file must be present on each node.
+:::
+
+```yaml
+monitoring: <1>
+  - "cn=admins,dc=example,dc=com" <2>
+user:
+  - "cn=users,dc=example,dc=com" <3>
+  - "cn=admins,dc=example,dc=com"
+```
+
+1. The name of the mapped role.
+2. The LDAP distinguished name (DN) of the `admins` group.
+3. The LDAP distinguished name (DN) of the `users` group.
+
+Referencing the file in `elasticsearch.yml`:
+
+```yaml
+xpack:
+  security:
+    authc:
+      realms:
+        ldap:
+          ldap1:
+            order: 2
+            url: "ldaps://ldap.example.com:636"
+            bind_dn: "cn=ldapuser, ou=users, o=services, dc=example, dc=com"
+            user_search:
+              base_dn: "ou=users, o=services, dc=example, dc=com"
+            group_search:
+              base_dn: "ou=groups, o=services, dc=example, dc=com"
+            ssl:
+              verification_mode: certificate
+              certificate_authorities: ["/app/config/cacerts/ca.crt"]
+            files:
+              role_mapping: "/app/config/mappings/role-mappings.yml"
+```
+
+## User metadata in LDAP realms [ldap-user-metadata]
+
+When a user is authenticated via an LDAP realm, the following properties are populated in the user’s metadata:
+
+| Field | Description |
+| --- | --- |
+| `ldap_dn` | The distinguished name of the user. |
+| `ldap_groups` | The distinguished name of each of the groups that were resolved for the user (regardless of whether those groups were mapped to a role). |
+
+This metadata is returned in the [authenticate API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-authenticate), and can be used with [templated queries](../../../deploy-manage/users-roles/cluster-or-deployment-auth/controlling-access-at-document-field-level.md#templating-role-query) in roles.
+
+Additional fields can be included in the user’s metadata by configuring the `metadata` setting on the LDAP realm. This metadata is available for use with the [role mapping API](../../../deploy-manage/users-roles/cluster-or-deployment-auth/mapping-users-groups-to-roles.md#mapping-roles-api) or in [templated role queries](../../../deploy-manage/users-roles/cluster-or-deployment-auth/controlling-access-at-document-field-level.md#templating-role-query).
+
+The example below includes the user’s common name (`cn`) as an additional field in their metadata.
+
+```yaml
+xpack:
+  security:
+    authc:
+      realms:
+        ldap:
+          ldap1:
+            order: 0
+            metadata: cn
+```
+
+
+## Load balancing and failover [ldap-load-balancing]
+
+The `load_balance.type` setting can be used at the realm level to configure how the {{security-features}} should interact with multiple LDAP servers. The {{security-features}} support both failover and load balancing modes of operation.
+
+See [Load balancing and failover](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/configuration-reference/security-settings.md#load-balancing).
+
+
+## Encrypting communications between {{es}} and LDAP [tls-ldap]
+
+To protect the user credentials that are sent for authentication in an LDAP realm, it’s highly recommended to encrypt communications between {{es}} and your LDAP server.
Connecting using SSL/TLS ensures that the identity of the LDAP server is authenticated before {{es}} transmits the user credentials, and that the contents of the connection are encrypted. Clients and nodes that connect using TLS to the LDAP server need to have the LDAP server’s certificate or the server’s root CA certificate installed in their keystore or trust store.
+
+If you're using {{ech}} or {{ece}}, then you must [upload your certificate as a custom bundle](/deploy-manage/deploy/elastic-cloud/upload-custom-plugins-bundles.md) before it can be referenced.
+
+If you're using {{eck}}, then install the certificate as a [custom configuration file](/deploy-manage/deploy/cloud-on-k8s/custom-configuration-files-plugins.md#use-a-volume-and-volume-mount-together-with-a-configmap-or-secret).
+
+:::{tip}
+If you're using {{ece}} or {{ech}}, then these steps are required only if TLS is enabled and the LDAP server is using self-signed certificates.
+:::
+
+::::{admonition} Certificate formats
+The following example uses a PEM encoded certificate. If your CA certificate is available as a `JKS` or `PKCS#12` keystore, you can reference it in the user settings. For example:
+
+```yaml
+xpack.security.authc.realms.ldap.ldap1.ssl.truststore.path: "/app/config/truststore/ca.p12"
+```
+
+If the keystore is also password protected (which isn’t typical for keystores that only contain CA certificates), you can also provide the password for the keystore by adding `xpack.security.authc.realms.ldap.ldap1.ssl.truststore.password: password` in the user settings.
+::::
+
+The following example demonstrates how to trust a CA certificate (`cacert.pem`), which is located within the configuration directory.
+
+```yaml
+xpack:
+  security:
+    authc:
+      realms:
+        ldap:
+          ldap1:
+            order: 0
+            url: "ldaps://ldap.example.com:636"
+            ssl:
+              certificate_authorities: [ "cacert.pem" ]
+```
+
+You can also specify the individual server certificates rather than the CA certificate, but this is only recommended if you have a single LDAP server or the certificates are self-signed.
+
+For more information about these settings, see [LDAP realm settings](https://www.elastic.co/guide/en/elasticsearch/reference/current/security-settings.html#ref-ldap-settings).
+
+::::{note}
+By default, when you configure {{es}} to connect to an LDAP server using SSL/TLS, it attempts to verify the hostname or IP address specified with the `url` attribute in the realm configuration with the values in the certificate. If the values in the certificate and realm configuration do not match, {{es}} does not allow a connection to the LDAP server. This is done to protect against man-in-the-middle attacks. If necessary, you can disable this behavior by setting the `ssl.verification_mode` property to `certificate`.
+::::
+
+## Using {{kib}} with LDAP [ldap-realm-kibana]
+
+The LDAP security realm uses the {{kib}}-provided [basic authentication](/deploy-manage/users-roles/cluster-or-deployment-auth/kibana-authentication.md#basic-authentication) login form. Basic authentication is enabled by default.
+
+You can also use LDAP with [token authentication](/deploy-manage/users-roles/cluster-or-deployment-auth/kibana-authentication.md#token-authentication) in {{kib}}.
\ No newline at end of file
diff --git a/deploy-manage/users-roles/cluster-or-deployment-auth/looking-up-users-without-authentication.md b/deploy-manage/users-roles/cluster-or-deployment-auth/looking-up-users-without-authentication.md
index d04b0a9d5..bbb72bf91 100644
--- a/deploy-manage/users-roles/cluster-or-deployment-auth/looking-up-users-without-authentication.md
+++ b/deploy-manage/users-roles/cluster-or-deployment-auth/looking-up-users-without-authentication.md
@@ -1,6 +1,12 @@
 ---
 mapped_pages:
   - https://www.elastic.co/guide/en/elasticsearch/reference/current/user-lookup.html
+applies_to:
+  deployment:
+    ess:
+    ece:
+    eck:
+    self:
 ---

 # Looking up users without authentication [user-lookup]
@@ -31,7 +37,7 @@ If you want to use a realm only for user lookup and prevent users from authentic
 ::::

-The user lookup feature is an internal capability that is used to implement the `run-as` and delegated authorization features - there are no APIs for user lookup. If you wish to test your user lookup configuration, then you can do this with `run_as`. Use the [Authenticate](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-authenticate) API, authenticate as a `superuser` (e.g. the builtin `elastic` user) and specify the [`es-security-runas-user` request header](submitting-requests-on-behalf-of-other-users.md).
+The user lookup feature is an internal capability that is used to implement the `run_as` and delegated authorization features - there are no APIs for user lookup. If you want to test your user lookup configuration, then you can do this with `run_as`. Use the [Authenticate](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-authenticate) API, authenticate as a `superuser` (for example, the built-in `elastic` user), and specify the [`es-security-runas-user` request header](submitting-requests-on-behalf-of-other-users.md).

 ::::{note}
 The [Get users](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-get-user) API and [User profiles](user-profiles.md) feature are alternative ways to retrieve information about a {{stack}} user. Those APIs are not related to the user lookup feature.
diff --git a/deploy-manage/users-roles/cluster-or-deployment-auth/manage-authentication-for-multiple-clusters.md b/deploy-manage/users-roles/cluster-or-deployment-auth/manage-authentication-for-multiple-clusters.md
index 7227bec77..87e53b094 100644
--- a/deploy-manage/users-roles/cluster-or-deployment-auth/manage-authentication-for-multiple-clusters.md
+++ b/deploy-manage/users-roles/cluster-or-deployment-auth/manage-authentication-for-multiple-clusters.md
@@ -1,46 +1,38 @@
 ---
 mapped_pages:
   - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-auth-config-using-stack-config-policy.html
+applies_to:
+  deployment:
+    eck:
 ---

 # Manage authentication for multiple clusters [k8s-auth-config-using-stack-config-policy]

 ::::{warning}
-We have identified an issue with Elasticsearch 8.15.1 and 8.15.2 that prevents security role mappings configured via Stack configuration policies to work correctly. Avoid these versions and upgrade to 8.16.0 to remedy this issue if you are affected.
+An issue with {{stack}} 8.15.1 and 8.15.2 prevents security role mappings configured through {{stack}} configuration policies from working correctly. Avoid these versions and upgrade to 8.16.0 to remedy this issue if you are affected.
 ::::

-
 ::::{note}
 This requires a valid Enterprise license or Enterprise trial license.
Check [the license documentation](../../license/manage-your-license-in-eck.md) for more details about managing licenses. :::: -ECK `2.11.0` extends the functionality of [Elastic Stack configuration policies](../../deploy/cloud-on-k8s/elastic-stack-configuration-policies.md) so that it becomes possible to configure Elasticsearch security realms for more than one Elastic stack at once. The authentication will apply to all Elasticsearch clusters and Kibana instances managed by the Elastic Stack configuration policy. +In {{eck}}, you can use {{stack}} configuration policies to configure {{es}} security realms for more than one cluster at once. The authentication will apply to all {{es}} clusters and {{kib}} instances managed by the {{stack}} configuration policy. Examples for configuring some of the authentication methods can be found below: -* [LDAP authentication using Elastic Stack configuration policy](#k8s-ldap-using-stack-config-policy) -* [OpenID Connect authentication using Elastic Stack configuration policy](#k8s-oidc-stack-config-policy) -* [JWT authentication using Elastic Stack configuration policy](#k8s-jwt-stack-config-policy) - -## LDAP using Elastic stack configuration policy [k8s-ldap-using-stack-config-policy] - -::::{warning} -We have identified an issue with Elasticsearch 8.15.1 and 8.15.2 that prevents security role mappings configured via Stack configuration policies to work correctly. Avoid these versions and upgrade to 8.16.0 to remedy this issue if you are affected. -:::: - - -::::{note} -This requires a valid Enterprise license or Enterprise trial license. Check [the license documentation](../../license/manage-your-license-in-eck.md) for more details about managing licenses. -:::: +* [LDAP authentication using {{stack}} configuration policy](#k8s-ldap-using-stack-config-policy) +* [OpenID Connect authentication using {{stack}} configuration policy](#k8s-oidc-stack-config-policy) +* [JWT authentication using {{stack}} configuration policy](#k8s-jwt-stack-config-policy) +## LDAP using {{stack}} configuration policy [k8s-ldap-using-stack-config-policy] ::::{tip} -Make sure you check the complete [guide to setting up LDAP with Elasticsearch](/deploy-manage/users-roles/cluster-or-deployment-auth/ldap.md). +Make sure you check the complete [guide to setting up LDAP with {{es}}](/deploy-manage/users-roles/cluster-or-deployment-auth/ldap.md). :::: -### To configure LDAP using Elastic Stack configuration policy with user search: [k8s_to_configure_ldap_using_elastic_stack_configuration_policy_with_user_search] +### Configure LDAP using {{stack}} configuration policy with user search[k8s_to_configure_ldap_using_elastic_stack_configuration_policy_with_user_search] 1. Add a realm configuration to the `config` field under `elasticsearch` in the `xpack.security.authc.realms.ldap` namespace. At a minimum, you must specify the URL of the LDAP server and the order of the LDAP realm compared to other configured security realms. You also have to set `user_search.base_dn` to the container DN where the users are searched for. Refer to [LDAP realm settings](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/configuration-reference/security-settings.md#ref-ldap-settings) for all of the options you can set for an LDAP realm. For example, the following snippet shows an LDAP realm configured with a user search: @@ -61,7 +53,7 @@ Make sure you check the complete [guide to setting up LDAP with Elasticsearch](/ unmapped_groups_as_roles: false ``` -2. 
The password for the `bind_dn` user should be configured by adding the appropriate `secure_bind_password` setting to the Elasticsearch keystore. This can be done using the Elastic Stack configuration policy by following the below steps: +2. The password for the `bind_dn` user should be configured by adding the appropriate `secure_bind_password` setting to the [{{es}} keystore](/deploy-manage/security/secure-settings.md). This can be done using the {{stack}} configuration policy by following the below steps: 1. Create a secret that has the `secure_bind_password` in the same namespace as the operator @@ -69,7 +61,7 @@ Make sure you check the complete [guide to setting up LDAP with Elasticsearch](/ kubectl create secret generic ldap-secret --from-literal=xpack.security.authc.realms.ldap.ldap1.secure_bind_password= ``` - 2. Add the secret name to the `secureSettings` field under `elasticsearch` in the Elastic Stack configuration policy + 2. Add the secret name to the `secureSettings` field under `elasticsearch` in the {{stack}} configuration policy ```yaml spec: @@ -81,7 +73,7 @@ Make sure you check the complete [guide to setting up LDAP with Elasticsearch](/ - secretName: ldap-secret ``` -3. Map LDAP groups to roles. In the below example, LDAP users get the Elasticsearch `superuser` role. `dn: "cn=users,dc=example,dc=org"` is the LDAP distinguished name (DN) of the users group. +3. Map LDAP groups to roles. In the below example, LDAP users get the {{es}} `superuser` role. `dn: "cn=users,dc=example,dc=org"` is the LDAP distinguished name (DN) of the users group. ```yaml securityRoleMappings: @@ -95,7 +87,7 @@ Make sure you check the complete [guide to setting up LDAP with Elasticsearch](/ ``` -Simple full example Elastic Stack config policy to configure LDAP realm with user search +Example {{stack}} config policy to configure LDAP realm with user search: ```yaml apiVersion: stackconfigpolicy.k8s.elastic.co/v1alpha1 @@ -133,7 +125,7 @@ spec: ``` -### To configure an LDAP realm with user DN templates: [k8s_to_configure_an_ldap_realm_with_user_dn_templates] +### Configure an LDAP realm with user DN templates[k8s_to_configure_an_ldap_realm_with_user_dn_templates] Add a realm configuration to `elasticsearch.yml` in the xpack.security.authc.realms.ldap namespace. At a minimum, you must specify the url and order of the LDAP server, and specify at least one template with the user_dn_templates option. Check [LDAP realm settings](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/configuration-reference/security-settings.md#ref-ldap-settings) for all of the options you can set for an ldap realm. @@ -155,7 +147,7 @@ xpack: unmapped_groups_as_roles: false ``` -Example Elastic Stack config policy to configure LDAP realm with user DN templates: +Example {{stack}} config policy to configure LDAP realm with user DN templates: ```yaml apiVersion: stackconfigpolicy.k8s.elastic.co/v1alpha1 @@ -192,26 +184,16 @@ The `bind_dn` setting is not used in template mode. All LDAP operations run as t -## OIDC using Elastic stack configuration policy [k8s-oidc-stack-config-policy] - -::::{warning} -We have identified an issue with Elasticsearch 8.15.1 and 8.15.2 that prevents security role mappings configured via Stack configuration policies to work correctly. Avoid these versions and upgrade to 8.16.0 to remedy this issue if you are affected. -:::: - - -::::{note} -This requires a valid Enterprise license or Enterprise trial license. 
Check [the license documentation](../../license/manage-your-license-in-eck.md) for more details about managing licenses. -:::: - +## OIDC using {{stack}} configuration policy [k8s-oidc-stack-config-policy] ::::{tip} -Make sure you check the complete [guide to setting up OpenID Connect with Elasticsearch](/deploy-manage/users-roles/cluster-or-deployment-auth/openid-connect.md). +Make sure you check the complete [guide to setting up OpenID Connect with {{es}}](/deploy-manage/users-roles/cluster-or-deployment-auth/openid-connect.md). :::: -Configuring OpenID Connect using Elastic Stack configuration policy +Configuring OpenID Connect using {{stack}} configuration policy -1. Add OIDC realm to the `elasticsearch.yml` file using the `config` field under `elasticsearch` in the Elastic Stack configuration policy, also enable token service. +1. Add OIDC realm to the `elasticsearch.yml` file using the `config` field under `elasticsearch` in the {{stack}} configuration policy, also enable token service. ::::{note} Below snippet is an example of using Google as OpenID provider, the values will change depending on the provider being used. @@ -242,7 +224,7 @@ Configuring OpenID Connect using Elastic Stack configuration policy claim_patterns.principal: "^([^@]+)@elastic\\.co$" ``` -2. Another piece of configuration of the OpenID Connect realm is to set the Client Secret that was assigned to the Relying Parties (RP) during registration in the OpenID Connect Provider (OP). This is a secure setting and as such is not defined in the realm configuration in `elasticsearch.yml` but added to the Elasticsearch keystore. To set this up using Elastic Stack configuration policy, use the following steps. +2. Another piece of configuration of the OpenID Connect realm is to set the Client Secret that was assigned to the Relying Parties (RP) during registration in the OpenID Connect Provider (OP). This is a secure setting and as such is not defined in the realm configuration in `elasticsearch.yml` but added to the {{es}} keystore. To set this up using {{stack}} configuration policy, use the following steps. 1. Create a secret in the operator namespace that has the Client Secret @@ -258,7 +240,7 @@ Configuring OpenID Connect using Elastic Stack configuration policy - secretName: oidc-client-secret ``` -3. When a user authenticates using OpenID Connect, they are identified to the Elastic Stack, but this does not automatically grant them access to perform any actions or access any data. Your OpenID Connect users cannot do anything until they are assigned roles. Roles can be assigned by adding role mappings to the Elastic Stack configuration policy. The below example is giving a specific user access as a superuser to Elasticsearch, if you want to assign roles to all users authenticating with OIDC, you can remove the username field. +3. When a user authenticates using OpenID Connect, they are identified to the {{stack}}, but this does not automatically grant them access to perform any actions or access any data. Your OpenID Connect users cannot do anything until they are assigned roles. Roles can be assigned by adding role mappings to the {{stack}} configuration policy. The below example is giving a specific user access as a superuser to {{es}}, if you want to assign roles to all users authenticating with OIDC, you can remove the username field. ```yaml elasticsearch: @@ -274,7 +256,7 @@ Configuring OpenID Connect using Elastic Stack configuration policy enabled: true ``` -4. 
Update Kibana to use OpenID Connect as the authentication provider: +4. Update {{kib}} to use OpenID Connect as the authentication provider: ```yaml kibana: @@ -287,7 +269,7 @@ Configuring OpenID Connect using Elastic Stack configuration policy ``` -Example full Elastic Stack configuration policy to configure oidc +Example full {{stack}} configuration policy to configure OIDC: ```yaml apiVersion: stackconfigpolicy.k8s.elastic.co/v1alpha1 @@ -341,9 +323,11 @@ spec: order: 1 ``` -1. The Kibana URL should be an environment variable that should be configured on the Elasticsearch Clusters managed by the Elastic Stack Configuration policy. This can be done by adding an environment variable to the pod template in the Elasticsearch CR.```yaml +1. The {{kib}} URL should be set as an environment variable on the {{es}} clusters managed by the {{stack}} configuration policy. This can be done by adding an environment variable to the pod template in the {{es}} CR. + +```yaml apiVersion: elasticsearch.k8s.elastic.co/v1 kind: Elasticsearch metadata: name: quickstart namespace: kvalliy @@ -368,31 +352,19 @@ spec: ::::{note} -The OpenID Connect Provider (OP) should have support to configure multiple Redirect URLs, so that the same `rp.client_id` and `client_secret` can be used for all the Elasticsearch clusters managed by the Elastic Stack configuration policy. -:::: - - -## JWT using Elastic Stack configuration policy [k8s-jwt-stack-config-policy] - -::::{warning} -We have identified an issue with Elasticsearch 8.15.1 and 8.15.2 that prevents security role mappings configured via Stack configuration policies to work correctly. Avoid these versions and upgrade to 8.16.0 to remedy this issue if you are affected. -:::: - - -::::{note} -This requires a valid Enterprise license or Enterprise trial license. Check [the license documentation](../../license/manage-your-license-in-eck.md) for more details about managing licenses. +The OpenID Connect Provider (OP) should support configuring multiple Redirect URLs, so that the same `rp.client_id` and `client_secret` can be used for all the {{es}} clusters managed by the {{stack}} configuration policy. :::: + +## JWT using {{stack}} configuration policy [k8s-jwt-stack-config-policy] ::::{tip} -Make sure you check the complete [guide to setting up JWT with Elasticsearch](/deploy-manage/users-roles/cluster-or-deployment-auth/jwt.md). +Make sure you check the complete [guide to setting up JWT with {{es}}](/deploy-manage/users-roles/cluster-or-deployment-auth/jwt.md). :::: -Configuring JWT with Elastic Stack configuration policy +Configuring JWT with {{stack}} configuration policy -1. Add your JWT realm to the `elasticsearch.yml` file using the `config` field under `elasticsearch` in the Elastic Stack configuration policy +1. Add your JWT realm to the `elasticsearch.yml` file using the `config` field under `elasticsearch` in the {{stack}} configuration policy ```yaml elasticsearch: @@ -409,7 +381,7 @@ Configuring JWT with Elastic Stack configuration policy claims.principal: sub ``` -2. Add the `shared_secret` setting that will be used for client authentication to the Elasticsearch keystore. +2. Add the `shared_secret` setting that will be used for client authentication to the {{es}} keystore. 1.
Create a secret in the operator namespace containing the shared secret @@ -417,7 +389,7 @@ Configuring JWT with Elastic Stack configuration policy kubectl create secret generic shared-secret --from-literal=xpack.security.authc.realms.jwt.jwt1.client_authentication.shared_secret= ``` - 2. Add the secret name to the `secureSettings` field under `elasticsearch` in the Elastic Stack configuration policy + 2. Add the secret name to the `secureSettings` field under `elasticsearch` in the {{stack}} configuration policy ```yaml elasticsearch: @@ -425,7 +397,7 @@ Configuring JWT with Elastic Stack configuration policy : - secretName: shared-secret ``` -3. Add an additional volume to the Elasticsearch pods that contain the JSON Web Keys, it should be mounted to the path that is configured for the `xpack.security.authc.realms.jwt.jwt1.pkc_jwkset_path` config. The file path is resolved relative to the Elasticsearch configuration directory. +3. Add an additional volume to the {{es}} pods that contain the JSON Web Keys, it should be mounted to the path that is configured for the `xpack.security.authc.realms.jwt.jwt1.pkc_jwkset_path` config. The file path is resolved relative to the {{es}} configuration directory. 1. Create a secret in the operator namespace that has the jwk set @@ -433,7 +405,7 @@ Configuring JWT with Elastic Stack configuration policy kubectl create secret generic jwks-secret --from-file=jwkset.json ``` - 2. Add the secret name and mountpath to the `secretMounts` field under `elasticsearch` in the Elastic Stack configuration policy + 2. Add the secret name and mountpath to the `secretMounts` field under `elasticsearch` in the {{stack}} configuration policy ```yaml secretMounts: @@ -441,7 +413,7 @@ Configuring JWT with Elastic Stack configuration policy mountPath: "/usr/share/elasticsearch/config/jwks" ``` -4. You can use the `securityRoleMappings` field under `elasticsearch` in the Elastic Stack configuration policy to define role mappings that determine which roles should be assigned to each user based on their username, groups, or other metadata. +4. You can use the `securityRoleMappings` field under `elasticsearch` in the {{stack}} configuration policy to define role mappings that determine which roles should be assigned to each user based on their username, groups, or other metadata. 
```yaml securityRoleMappings: @@ -455,7 +427,7 @@ Configuring JWT with Elastic Stack configuration policy ``` -The following example demonstrates how an Elastic Stack configuration policy can be used to configure a JWT realm: +The following example demonstrates how an {{stack}} configuration policy can be used to configure a JWT realm: ```yaml apiVersion: stackconfigpolicy.k8s.elastic.co/v1alpha1 diff --git a/deploy-manage/users-roles/cluster-or-deployment-auth/manage-elastic-user-cloud.md b/deploy-manage/users-roles/cluster-or-deployment-auth/manage-elastic-user-cloud.md new file mode 100644 index 000000000..84ce3f85c --- /dev/null +++ b/deploy-manage/users-roles/cluster-or-deployment-auth/manage-elastic-user-cloud.md @@ -0,0 +1,41 @@ +--- +mapped_pages: + - https://www.elastic.co/guide/en/cloud-enterprise/current/ece-password-reset-elastic.html + - https://www.elastic.co/guide/en/cloud/current/ec-password-reset.html + - https://www.elastic.co/guide/en/cloud-heroku/current/ech-password-reset.html +applies_to: + deployment: + ece: + ess: +navigation_title: ECH and ECE +--- + +# Reset the `elastic` user password in {{ech}} and {{ece}} [ec-password-reset] + +You might need to reset the password for the `elastic` superuser if you can't authenticate with the `elastic` user ID and are effectively locked out from an Elasticsearch cluster or Kibana. + +::::{note} +Elastic does not manage the `elastic` user and does not have access to the account or its credentials. If you lose the password, you have to reset it. +:::: + +::::{note} +Resetting the `elastic` user password does not interfere with Marketplace integrations. +:::: + +::::{note} +The `elastic` user should not be used unless you have no other way to access your deployment. [Create API keys for ingesting data](asciidocalypse://docs/beats/docs/reference/filebeat/beats-api-keys.md), and create user accounts with [appropriate roles for user access](../../../deploy-manage/users-roles/cluster-or-deployment-auth/quickstart.md). +:::: + +To reset the password: + +1. Log in to the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). +2. Find your deployment on the home page and select **Manage** to access it directly. Or, select **Hosted deployments** to go to the **Deployments** page to view all of your deployments. + + On the **Deployments** page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. + +3. From your deployment menu, go to **Security**. +4. Select **Reset password**. +5. Copy down the auto-generated password for the `elastic` user. + +The password is not accessible after you close the window, so if you lose it, you need to reset the password again. + diff --git a/deploy-manage/users-roles/cluster-or-deployment-auth/managed-credentials-eck.md b/deploy-manage/users-roles/cluster-or-deployment-auth/managed-credentials-eck.md new file mode 100644 index 000000000..de8a31a7a --- /dev/null +++ b/deploy-manage/users-roles/cluster-or-deployment-auth/managed-credentials-eck.md @@ -0,0 +1,72 @@ +--- +mapped_urls: + - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-users-and-roles.html + - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-rotate-credentials.html +applies_to: + deployment: + eck: +navigation_title: ECK managed credentials +--- + +# {{eck}} managed credentials + +When deploying an {{stack}} application, the operator generates a set of credentials essential for the operation of that application.
For example, these generated credentials include the default `elastic` user for {{es}} and the security token for APM Server. + +To list all auto-generated credentials in a namespace, run the following command: + +```sh +kubectl get secret -l eck.k8s.elastic.co/credentials=true +``` + +## Default elastic user [k8s-default-elastic-user] + +When the {{es}} resource is created, a default user named `elastic` is created automatically, and is assigned the `superuser` role. + +Its password can be retrieved from a Kubernetes secret, whose name is based on the {{es}} resource name: `-es-elastic-user`. + +For example, the password of the `elastic` user for an {{es}} cluster named `quickstart` can be retrieved with: + +```sh +kubectl get secret quickstart-es-elastic-user -o go-template='{{.data.elastic | base64decode}}' +``` + +### Disabling the default `elastic` user [k8s_disabling_the_default_elastic_user] + +If you prefer to manage all users via SSO, for example using [SAML Authentication](../../../deploy-manage/users-roles/cluster-or-deployment-auth/saml.md) or OpenID Connect, you can disable the default `elastic` superuser by setting the `auth.disableElasticUser` field in the {{es}} resource to `true`: + +```yaml +apiVersion: elasticsearch.k8s.elastic.co/v1 +kind: Elasticsearch +metadata: + name: elasticsearch-sample +spec: + version: 8.16.1 + auth: + disableElasticUser: true + nodeSets: + - name: default + count: 1 +``` + +## Rotate auto-generated credentials [k8s-rotate-credentials] + +You can force the auto-generated credentials to be regenerated with new values by deleting the appropriate Secret. For example, to change the password for the `elastic` user from the [quickstart example](../../../deploy-manage/deploy/cloud-on-k8s/deploy-an-orchestrator.md), use the following command: + +```sh +kubectl delete secret quickstart-es-elastic-user +``` + +::::{warning} +If you are using the `elastic` user credentials in your own applications, they will fail to connect to {{es}} and {{kib}} after you run this command. It is not recommended to use `elastic` user credentials for production use cases. Always [create your own users with restricted roles](../../../deploy-manage/users-roles/cluster-or-deployment-auth/native.md) to access {{es}}. +:::: + + +To regenerate all auto-generated credentials in a namespace, run the following command: + +```sh +kubectl delete secret -l eck.k8s.elastic.co/credentials=true +``` + +::::{warning} +This command regenerates auto-generated credentials of **all** {{stack}} applications in the namespace. +:::: diff --git a/deploy-manage/users-roles/cluster-or-deployment-auth/native.md b/deploy-manage/users-roles/cluster-or-deployment-auth/native.md index 8292c1455..0f73cdfbe 100644 --- a/deploy-manage/users-roles/cluster-or-deployment-auth/native.md +++ b/deploy-manage/users-roles/cluster-or-deployment-auth/native.md @@ -4,31 +4,112 @@ mapped_urls: - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-users-and-roles.html - https://www.elastic.co/guide/en/elasticsearch/reference/current/change-passwords-native-users.html - https://www.elastic.co/guide/en/kibana/current/tutorial-secure-access-to-kibana.html +applies_to: + deployment: + self: all + ess: all + ece: all + eck: all +navigation_title: "Native" --- -# Native +# Native user authentication [native-realm] -% What needs to be done: Refine +The easiest way to manage and authenticate users is with the internal `native` realm.
You can use [Elasticsearch REST APIs](#native-users-api) or [Kibana](#managing-native-users) to add and remove users, assign user roles, and manage user passwords. -% GitHub issue: https://github.com/elastic/docs-projects/issues/347 +In self-managed {{es}} clusters, you can also reset passwords for users in the native realm [using the command line](#reset-pw-cmd-line). -% Use migrated content from existing pages that map to this page: +:::{tip} +This topic describes using the native realm at the cluster or deployment level, for the purposes of authenticating with {{es}} and {{kib}}. -% - [ ] ./raw-migrated-files/elasticsearch/elasticsearch-reference/native-realm.md -% - [ ] ./raw-migrated-files/cloud-on-k8s/cloud-on-k8s/k8s-users-and-roles.md -% Notes: native realm content -% - [ ] ./raw-migrated-files/elasticsearch/elasticsearch-reference/change-passwords-native-users.md -% - [ ] ./raw-migrated-files/kibana/kibana/tutorial-secure-access-to-kibana.md +You can also manage and authenticate users natively at the following levels: -% Internal links rely on the following IDs being on this page (e.g. as a heading ID, paragraph ID, etc): +* For an [{{ece}} installation](/deploy-manage/users-roles/cloud-enterprise-orchestrator/native-user-authentication.md). +* For an [{{ecloud}} organization](/deploy-manage/users-roles/cloud-organization/manage-users.md). +::: -$$$k8s-default-elastic-user$$$ -$$$managing-native-users$$$ +## Configure a native realm [native-realm-configuration] -**This page is a work in progress.** The documentation team is working to combine content pulled from the following pages: +The native realm is available and enabled by default. You can disable it explicitly with the following setting. -* [/raw-migrated-files/elasticsearch/elasticsearch-reference/native-realm.md](/raw-migrated-files/elasticsearch/elasticsearch-reference/native-realm.md) -* [/raw-migrated-files/cloud-on-k8s/cloud-on-k8s/k8s-users-and-roles.md](/raw-migrated-files/cloud-on-k8s/cloud-on-k8s/k8s-users-and-roles.md) -* [/raw-migrated-files/elasticsearch/elasticsearch-reference/change-passwords-native-users.md](/raw-migrated-files/elasticsearch/elasticsearch-reference/change-passwords-native-users.md) -* [/raw-migrated-files/kibana/kibana/tutorial-secure-access-to-kibana.md](/raw-migrated-files/kibana/kibana/tutorial-secure-access-to-kibana.md) \ No newline at end of file +```yaml +xpack.security.authc.realms.native.native1: + enabled: false +``` + +You can configure a `native` realm in the `xpack.security.authc.realms.native` namespace in `elasticsearch.yml`. Explicitly configuring a native realm enables you to set the order in which it appears in the realm chain, temporarily disable the realm, and control its cache options. + +1. Add a realm configuration to `elasticsearch.yml` under the `xpack.security.authc.realms.native` namespace. It is recommended that you explicitly set the `order` attribute for the realm. + + ::::{note} + You can configure only one native realm on {{es}} nodes. + :::: + + + See [Native realm settings](https://www.elastic.co/guide/en/elasticsearch/reference/current/security-settings.html#ref-native-settings) for all of the options you can set for the `native` realm.
For example, the following snippet shows a `native` realm configuration that sets the `order` to zero so the realm is checked first: + + ```yaml + xpack.security.authc.realms.native.native1: + order: 0 + ``` + + ::::{note} + To limit exposure to credential theft and mitigate credential compromise, the native realm stores passwords and caches user credentials according to security best practices. By default, a hashed version of user credentials is stored in memory, using a salted `sha-256` hash algorithm and a hashed version of passwords is stored on disk salted and hashed with the `bcrypt` hash algorithm. To use different hash algorithms, see [User cache and password hash algorithms](https://www.elastic.co/guide/en/elasticsearch/reference/current/security-settings.html#hashing-settings). + :::: + +2. Restart {{es}}. + + +## Manage native users using {{kib}} [managing-native-users] + +Elastic enables you to easily manage users in {{kib}} on the **Stack Management > Security > Users** page. From this page, you can create users, edit users, assign roles to users, and change user passwords. You can also deactivate or delete existing users. + +### Example: Create a user [_create_a_user] + +1. Navigate to **Stack Management**, and under **Security**, select **Users**. +2. Click **Create user**. +3. Give the user a descriptive username, and choose a secure password. +4. Optional: assign [roles](/deploy-manage/users-roles/cluster-or-deployment-auth/user-roles.md) to the user. +5. Click **Create user**. + +:::{image} ../../../images/kibana-tutorial-secure-access-example-1-user.png +:alt: Create user UI +:class: screenshot +::: + +## Manage native users using the `user` API [native-users-api] + +You can manage users through the Elasticsearch `user` API. + +For example, you can change a user's password: + +```console +POST /_security/user/user1/_password +{ + "password" : "new-test-password" +} +``` + +For more information and examples, see [Users](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-security). + +## Reset passwords for native users using the command line [reset-pw-cmd-line] + +```{applies_to} +deployment: + self: all +``` + +You can also reset passwords for users in the native realm through the command line using the [`elasticsearch-reset-password`](https://www.elastic.co/guide/en/elasticsearch/reference/current/reset-password.html) tool. + +For example, the following command changes the password for a user with the username `user1` to an auto-generated value, and prints the new password to the terminal: + +```shell +bin/elasticsearch-reset-password -u user1 +``` + +To explicitly set a password for a user, include the `-i` parameter with the intended password. 
+ +```shell +bin/elasticsearch-reset-password -u user1 -i +``` diff --git a/deploy-manage/users-roles/cluster-or-deployment-auth/oidc-examples.md b/deploy-manage/users-roles/cluster-or-deployment-auth/oidc-examples.md new file mode 100644 index 000000000..0bf9ef297 --- /dev/null +++ b/deploy-manage/users-roles/cluster-or-deployment-auth/oidc-examples.md @@ -0,0 +1,383 @@ +--- +mapped_urls: + - https://www.elastic.co/guide/en/cloud/current/ec-securing-clusters-oidc-op.html +navigation_title: With Azure, Google, or Okta +applies_to: + deployment: + self: + ess: + ece: + eck: +--- + +# Set up OpenID Connect with Azure, Google, or Okta [ec-securing-clusters-oidc-op] + +This page explains how to implement OIDC, from the OAuth client credentials generation to the realm configuration for Elasticsearch and Kibana, with the following OpenID Connect Providers (OPs): + +* [Azure](#ec-securing-oidc-azure) +* [Google](#ec-securing-oidc-google) +* [Okta](#ec-securing-oidc-okta) + +For further detail about configuring OIDC, refer to [](/deploy-manage/users-roles/cluster-or-deployment-auth/openid-connect.md) + +## Setting up OpenID Connect with Azure [ec-securing-oidc-azure] + +Follow these steps to configure OpenID Connect single sign-on on in {{es}} with an Azure OP. + +For more information about OpenID connect in Azure, refer to [Azure OAuth 2.0 and OpenID documentation](https://docs.microsoft.com/en-us/azure/active-directory/develop/active-directory-v2-protocols). + +1. Configure the OAuth client ID. + + 1. Create a new application: + + 1. Sign into the [Azure Portal](https://portal.azure.com/) and go to **Entra** (formerly Azure Active Directory). From there, select **App registrations** > **New registration** to register a new application. + + :::{image} ../../../images/cloud-ec-oidc-new-app-azure.png + :alt: A screenshot of the Azure Owned Applications tab on the New Registration page + ::: + + 2. Enter a **Name** for your application, for example `ec-oauth2`. + 3. Select a **Supported Account Type** according to your preferences. + 4. Set the **Redirect URI**. + + It will typically be `/api/security/oidc/callback`, where `` is the base URL for your {{kib}} instance. + + If you're using {{ech}}, then set this value to `/api/security/oidc/callback`. + 5. Select **Register**. + 6. Confirm that your new **Application (client) ID** appears in the app details. + + 2. Create a client ID and secret: + + 1. From the application that you created, go to **Certificates & secrets** and create a new secret under **Client secrets** > **New client secret**. + + :::{image} ../../../images/cloud-ec-oidc-oauth-create-credentials-azure.png + :alt: A screenshot of the Azure Add a Client Secret dialog + ::: + + 2. Provide a **Description**, for example `Kibana`. + 3. Select an expiration for the secret. + 4. Select **Add** and copy your newly created client secret for later use. + +2. Add your client secret [to the {{es}} keystore](/deploy-manage/security/secure-settings.md). + + For OIDC, the client secret setting name in the keystore should be in the form `xpack.security.authc.realms.oidc..rp.client_secret`. + +3. Configure Elasticsearch with the OIDC realm. + + To learn more about the available endpoints provided by Microsoft Azure, refer to the **Endpoints** details in the application that you configured. 
+ + :::{image} ../../../images/cloud-ec-oidc-endpoints-azure.png + :alt: A screenshot of the Azure Endpoints dialog with fields for Display Name + ::: + + To configure Elasticsearch for OIDC, [update your Elasticsearch user settings](../../../deploy-manage/deploy/elastic-cloud/edit-stack-settings.md) with the following configuration: + + ```sh + xpack: + security: + authc: + realms: + oidc: + oidc1: + order: 2 + rp.client_id: "" + rp.response_type: "code" + rp.requested_scopes: ["openid", "email"] + rp.redirect_uri: "KIBANA_ENDPOINT_URL/api/security/oidc/callback" + op.issuer: "https://login.microsoftonline.com//v2.0" + op.authorization_endpoint: "https://login.microsoftonline.com//oauth2/v2.0/authorize" + op.token_endpoint: "https://login.microsoftonline.com//oauth2/v2.0/token" + op.userinfo_endpoint: "https://graph.microsoft.com/oidc/userinfo" + op.endsession_endpoint: "https://login.microsoftonline.com//oauth2/v2.0/logout" + rp.post_logout_redirect_uri: "KIBANA_ENDPOINT_URL/logged_out" + op.jwkset_path: "https://login.microsoftonline.com//discovery/v2.0/keys" + claims.principal: email + claim_patterns.principal: "^([^@]+)@YOUR_DOMAIN\\.TLD$" + ``` + + Where: + + * `` is your Client ID, available in the application details on Azure. + * `` is your Directory ID, available in the application details on Azure. + * `KIBANA_ENDPOINT_URL` is your Kibana endpoint. + * `YOUR_DOMAIN` and `TLD` in the `claim_patterns.principal` regular expression are your organization email domain and top level domain. + + + If you're using {{ece}} or {{ech}}, and you're using machine learning or a deployment with hot-warm architecture, you must include this configuration in the user settings section for each node type. + +4. Create a role mapping. + + The following role mapping for OIDC restricts access to a specific user `(firstname.lastname)` based on the `claim_patterns.principal` email address. This prevents other users on the same domain from having access to your deployment. You can remove the rule or adjust it at your convenience. + + More details are available in our [Configuring role mappings documentation](/deploy-manage/users-roles/cluster-or-deployment-auth/openid-connect.md#oidc-role-mappings). + + ```json + POST /_security/role_mapping/oidc_kibana + { + "enabled": true, + "roles": [ "superuser" ], + "rules" : { + "all" : [ + { + "field" : { + "realm.name" : "oidc1" + } + }, + { + "field" : { + "username" : [ + "" + ] + } + } + ] + }, + "metadata": { "version": 1 } + } + ``` + + If you use an email in the `claim_patterns.principal`, you won’t need to add the domain in the role_mapping (for example, `firstname.lastname@your_domain.tld` should be `firstname.lastname`). + +5. Configure Kibana with the OIDC realm. [Update your Kibana user settings](../../../deploy-manage/deploy/elastic-cloud/edit-stack-settings.md) with the following configuration: + + ```sh + xpack.security.authc.providers: + oidc.oidc1: + order: 0 + realm: oidc1 + description: "Log in with Azure" + basic.basic1: + order: 1 + ``` + +## Setting up OpenID Connect with Google [ec-securing-oidc-google] + +Follow these steps to configure OpenID Connect single sign-on on in {{es}} with a Google OP. + +For more information about OpenID connect in Google, refer to [Google OpenID Connect documentation](https://developers.google.com/identity/protocols/oauth2/openid-connect). + +1. Configure the OAuth client ID. + + 1. Create a new project: + + 1. 
Sign in to the Google Cloud and open the [New Project page](https://console.cloud.google.com/projectcreate). Create a new project. + + 2. Create a client ID and secret: + + 1. Navigate to the **APIs & Services** and open the [Credentials](https://console.cloud.google.com/apis/credentials) tab to create your OAuth client ID. + + :::{image} ../../../images/cloud-ec-oidc-oauth-create-credentials-google.png + :alt: A screenshot of the Google Cloud console Create Credentials dialog with the OAuth client ID field highlighted + ::: + + 2. For **Application Type** choose `Web application`. + 3. Choose a **Name** for your OAuth 2 client, for example `ec-oauth2`. + 4. Add an **Authorized redirect URI**. + + It will typically be `/api/security/oidc/callback`, where `` is the base URL for your {{kib}} instance. + + If you're using {{ech}}, then set this value to `/api/security/oidc/callback`. + 5. Select **Create** and copy your client ID and your client secret for later use. + +2. Add your client secret [to the {{es}} keystore](/deploy-manage/security/secure-settings.md). + + For OIDC, the client secret setting name in the keystore should be in the form `xpack.security.authc.realms.oidc..rp.client_secret`. + +3. Configure Elasticsearch with the OIDC realm. + + To learn more about the endpoints provided by Google, refer to this [OpenID configuration](https://accounts.google.com/.well-known/openid-configuration). + + To configure Elasticsearch for OIDC, [update your Elasticsearch user settings](../../../deploy-manage/deploy/elastic-cloud/edit-stack-settings.md) with the following configuration: + + ```sh + xpack: + security: + authc: + realms: + oidc: + oidc1: + order: 2 + rp.client_id: "YOUR_CLIENT_ID" + rp.response_type: "code" + rp.requested_scopes: ["openid", "email"] + rp.redirect_uri: "/api/security/oidc/callback" + op.issuer: "https://accounts.google.com" + op.authorization_endpoint: "https://accounts.google.com/o/oauth2/v2/auth" + op.token_endpoint: "https://oauth2.googleapis.com/token" + op.userinfo_endpoint: "https://openidconnect.googleapis.com/v1/userinfo" + op.jwkset_path: "https://www.googleapis.com/oauth2/v3/certs" + claims.principal: email + claim_patterns.principal: "^([^@]+)@YOUR_DOMAIN\\.TLD$" + ``` + + Where: + + * `YOUR_CLIENT_ID` is your Client ID. + * `/api/security/oidc/callback` is your Kibana endpoint. + + It will typically be `/api/security/oidc/callback`, where `` is the base URL for your {{kib}} instance. + + If you're using {{ech}}, then set this value to `/api/security/oidc/callback`. + * `YOUR_DOMAIN` and `TLD` in the `claim_patterns.principal` regular expression are your organization email domain and top level domain. + + + If you're using {{ece}} or {{ech}}, and you're using machine learning or a deployment with hot-warm architecture, you must include this configuration in the user settings section for each node type. + +1. Create a role mapping. + + The following role mapping for OIDC restricts access to a specific user `(firstname.lastname)` based on the `claim_patterns.principal` email address. This prevents other users on the same domain from having access to your deployment. You can remove the rule or adjust it at your convenience. + + More details are available in our [Configuring role mappings documentation](/deploy-manage/users-roles/cluster-or-deployment-auth/openid-connect.md#oidc-role-mappings). 
+ + ```json + POST /_security/role_mapping/oidc_kibana + { + "enabled": true, + "roles": [ "superuser" ], + "rules" : { + "all" : [ + { + "field" : { + "realm.name" : "oidc1" + } + }, + { + "field" : { + "username" : [ + "" + ] + } + } + ] + }, + "metadata": { "version": 1 } + } + ``` + + If you use an email in the `claim_patterns.principal`, you won’t need to add the domain in the role_mapping (for example, `firstname.lastname@your_domain.tld` should be `firstname.lastname`). + +2. Configure Kibana with the OIDC realm. [Update your Kibana user settings](../../../deploy-manage/deploy/elastic-cloud/edit-stack-settings.md) with the following configuration: + + ```sh + xpack.security.authc.providers: + oidc.oidc1: + order: 0 + realm: oidc1 + description: "Log in with Google" + basic.basic1: + order: 1 + ``` + +## Setting up OpenID Connect with Okta [ec-securing-oidc-okta] + +Follow these steps to configure OpenID Connect single sign-on on for {{es}} with an Okta OP. + +For more information about OpenID connect in Okta, refer to [Okta OAuth 2.0 documentation](https://developer.okta.com/docs/guides/implement-oauth-for-okta/create-oauth-app/). + +1. Configure the OAuth client ID. + + 1. Create a new application: + + 1. Go to **Applications** > **Add Application**. + + :::{image} ../../../images/cloud-ec-oidc-new-app-okta.png + :alt: A screenshot of the Get Started tab on the Okta Create A New Application page + ::: + + 2. For the **Platform** page settings, select **Web** then **Next**. + 3. In the **Application settings** choose a **Name** for your application, for example `Kibana OIDC`. + 4. Set the **Base URI** to `KIBANA_ENDPOINT_URL`. + 5. Set the **Login redirect URI**. + + It will typically be `/api/security/oidc/callback`. + + If you're using {{ech}}, then set this value to `/api/security/oidc/callback`. + 6. Set the **Logout redirect URI** as `KIBANA_ENDPOINT_URL/logged_out`. + 7. Choose **Done** and copy your client ID and client secret values for later use. + +2. Add your client secret [to the {{es}} keystore](/deploy-manage/security/secure-settings.md). + + For OIDC, the client secret setting name in the keystore should be in the form `xpack.security.authc.realms.oidc..rp.client_secret`. + +3. Configure Elasticsearch with the OIDC realm. + + To learn more about the available endpoints provided by Okta, refer to the following OpenID configuration: `https://{{yourOktadomain}}/.well-known/openid-configuration` + + To configure Elasticsearch for OIDC, [update your Elasticsearch user settings](../../../deploy-manage/deploy/elastic-cloud/edit-stack-settings.md) with the following configuration: + + ```sh + xpack: + security: + authc: + realms: + oidc: + oidc1: + order: 2 + rp.client_id: "YOUR_CLIENT_ID" + rp.response_type: "code" + rp.requested_scopes: ["openid", "email"] + rp.redirect_uri: "KIBANA_ENDPOINT_URL/api/security/oidc/callback" + op.issuer: "https://YOUR_OKTA_DOMAIN" + op.authorization_endpoint: "https://YOUR_OKTA_DOMAIN/oauth2/v1/authorize" + op.token_endpoint: "https://YOUR_OKTA_DOMAIN/oauth2/v1/token" + op.userinfo_endpoint: "https://YOUR_OKTA_DOMAIN/oauth2/v1/userinfo" + op.endsession_endpoint: "https://YOUR_OKTA_DOMAIN/oauth2/v1/logout" + op.jwkset_path: "https://YOUR_OKTA_DOMAIN/oauth2/v1/keys" + claims.principal: email + claim_patterns.principal: "^([^@]+)@YOUR_DOMAIN\\.TLD$" + ``` + + Where: + + * `YOUR_CLIENT_ID` is the Client ID that you set up in the previous steps. 
+ * `KIBANA_ENDPOINT_URL` is your Kibana endpoint, available from the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). + * `YOUR_OKTA_DOMAIN` is the URL of your Okta domain shown on your Okta dashboard. + * `YOUR_DOMAIN` and `TLD` in the `claim_patterns.principal` regular expression are your organization email domain and top level domain. + + +Remember to add this configuration for each node type in the [User settings](../../../deploy-manage/deploy/elastic-cloud/edit-stack-settings.md) if you use several node types based on your deployment architecture (Dedicated Master, High IO, and/or High Storage). + +1. Create a role mapping. + + The following role mapping for OIDC restricts access to a specific user `(firstname.lastname)` based on the `claim_patterns.principal` email address. This prevents other users on the same domain from having access to your deployment. You can remove the rule or adjust it at your convenience. + + More details are available in our [Configuring role mappings documentation](/deploy-manage/users-roles/cluster-or-deployment-auth/openid-connect.md#oidc-role-mappings). + + ```json + POST /_security/role_mapping/oidc_kibana + { + "enabled": true, + "roles": [ "superuser" ], + "rules" : { + "all" : [ + { + "field" : { + "realm.name" : "oidc1" + } + }, + { + "field" : { + "username" : [ + "" + ] + } + } + ] + }, + "metadata": { "version": 1 } + } + ``` + + If you use an email in the `claim_patterns.principal`, you won’t need to add the domain in the role_mapping (for example, `firstname.lastname@your_domain.tld` should be `firstname.lastname`). + +5. Configure {{kib}} with the OIDC realm. [Update your {{kib}} user settings](../../../deploy-manage/deploy/elastic-cloud/edit-stack-settings.md) with the following configuration: + + ```sh + xpack.security.authc.providers: + oidc.oidc1: + order: 0 + realm: oidc1 + description: "Log in with Okta" + basic.basic1: + order: 1 + ``` \ No newline at end of file diff --git a/deploy-manage/users-roles/cluster-or-deployment-auth/openid-connect.md b/deploy-manage/users-roles/cluster-or-deployment-auth/openid-connect.md index 973329cce..6217ff8cd 100644 --- a/deploy-manage/users-roles/cluster-or-deployment-auth/openid-connect.md +++ b/deploy-manage/users-roles/cluster-or-deployment-auth/openid-connect.md @@ -4,77 +4,545 @@ mapped_urls: - https://www.elastic.co/guide/en/elasticsearch/reference/current/oidc-guide.html - https://www.elastic.co/guide/en/cloud-enterprise/current/ece-secure-clusters-oidc.html - https://www.elastic.co/guide/en/cloud/current/ec-secure-clusters-oidc.html - - https://www.elastic.co/guide/en/cloud/current/ec-securing-clusters-oidc-op.html - https://www.elastic.co/guide/en/cloud-heroku/current/ech-secure-clusters-oidc.html +navigation_title: OpenID Connect +applies_to: + deployment: + self: + ess: + ece: + eck: --- -# OpenID Connect +# OpenID Connect authentication [oidc-realm] -% What needs to be done: Refine +The OpenID Connect realm enables {{es}} to serve as an OpenID Connect Relying Party (RP) and provides single sign-on (SSO) support in {{kib}}. -% GitHub issue: https://github.com/elastic/docs-projects/issues/347 +It is specifically designed to support authentication using an interactive web browser, so it does not operate as a standard authentication realm. Instead, there are {{kib}} and {{es}} {{security-features}} that work together to enable interactive OpenID Connect sessions. 
-% Use migrated content from existing pages that map to this page: +This means that the OpenID Connect realm is not suitable for use by standard REST clients. If you configure an OpenID Connect realm for use in {{kib}}, you should also configure another realm, such as the [native realm](/deploy-manage/users-roles/cluster-or-deployment-auth/native.md), in your authentication chain. -% - [ ] ./raw-migrated-files/elasticsearch/elasticsearch-reference/oidc-realm.md -% - [ ] ./raw-migrated-files/elasticsearch/elasticsearch-reference/oidc-guide.md -% Notes: some steps not needed for cloud / don't work -% - [ ] ./raw-migrated-files/cloud/cloud-enterprise/ece-secure-clusters-oidc.md -% - [ ] ./raw-migrated-files/cloud/cloud/ec-secure-clusters-oidc.md -% - [ ] ./raw-migrated-files/cloud/cloud/ec-securing-clusters-oidc-op.md -% - [ ] ./raw-migrated-files/cloud/cloud-heroku/ech-secure-clusters-oidc.md +Because this feature is designed with {{kib}} in mind, most sections of this guide assume {{kib}} is used. To learn how a custom web application could use the OpenID Connect REST APIs to authenticate the users to {{es}} with OpenID Connect, refer to [OpenID Connect without {{kib}}](#oidc-without-kibana). -% Internal links rely on the following IDs being on this page (e.g. as a heading ID, paragraph ID, etc): +For a detailed description of how to implement OpenID Connect with various OpenID Connect Providers (OPs), refer to [Set up OpenID Connect with Azure, Google, or Okta](/deploy-manage/users-roles/cluster-or-deployment-auth/oidc-examples.md). -$$$ec-securing-oidc-azure$$$ +::::{note} +OpenID Connect realm support in {{kib}} is designed with the expectation that it will be the primary authentication method for the users of that {{kib}} instance. The [Configuring {{kib}}](/deploy-manage/users-roles/cluster-or-deployment-auth/openid-connect.md#oidc-configure-kibana) section describes what this entails and how you can set it up to support other realms if necessary. +:::: -$$$ec-oidc-client-secret$$$ +## The OpenID Connect Provider [oidc-guide-op] -$$$ec-oidc-user-settings$$$ +The OpenID Connect Provider (OP) is the entity in OpenID Connect that is responsible for authenticating the user and for granting the necessary tokens with the authentication and user information to be consumed by the Relying Parties. -$$$ec-securing-oidc-google$$$ +In order for the {{stack}} to be able to use your OpenID Connect Provider for authentication, a trust relationship needs to be established between the OP and the RP. In the OpenID Connect Provider, this means registering the RP as a client. OpenID Connect defines a dynamic client registration protocol but this is usually geared towards real-time client registration and not the trust establishment process for cross security domain single sign on. All OPs will also allow for the manual registration of an RP as a client, via a user interface or (less often) via the consumption of a metadata document. -$$$ec-securing-oidc-okta$$$ +The process for registering the {{stack}} RP will be different from OP to OP, so you should follow your provider's documentation. The information for the RP that you commonly need to provide for registration are the following: -$$$ec-summary-and-references$$$ +* `Relying Party Name`: An arbitrary identifier for the relying party. There are no constraints on this value, either from the specification or the {{stack}} implementation. 
+* `Redirect URI`: The URI where the OP will redirect the user’s browser after authentication, sometimes referred to as a `Callback URI`. The appropriate value for this will depend on your setup, and whether or not {{kib}} sits behind a proxy or load balancer. + + It will typically be `${kibana-url}/api/security/oidc/callback` (for the authorization code flow) or `${kibana-url}/api/security/oidc/implicit` (for the implicit flow) where *${kibana-url}* is the base URL for your {{kib}} instance. -$$$ece-oidc-client-secret$$$ + If you're using {{ech}}, then set this value to `/api/security/oidc/callback`. -$$$ece-oidc-user-settings$$$ +At the end of the registration process, the OP will assign a Client Identifier and a Client Secret for the RP ({{stack}}) to use. Note these two values as they will be used in the {{es}} configuration. -$$$ech-oidc-client-secret$$$ -$$$ech-oidc-user-settings$$$ +## Prerequisites [oidc-elasticsearch-authentication] -$$$oidc-claim-to-property$$$ +Before you set up an OpenID Connect realm, you must have an OpenID Connect Provider where the {{stack}} Relying Party will be registered. -$$$oidc-claims-mappings$$$ +If you're using a self-managed cluster, then perform the following additional steps: -$$$oidc-configure-kibana$$$ +* Enable TLS for HTTP. -$$$oidc-create-realm$$$ + If your {{es}} cluster is operating in production mode, you must configure the HTTP interface to use SSL/TLS before you can enable OIDC authentication. For more information, see [Encrypt HTTP client communications for {{es}}](../../../deploy-manage/security/set-up-basic-security-plus-https.md#encrypt-http-communication). -$$$oidc-elasticsearch-authentication$$$ + If you started {{es}} [with security enabled](/deploy-manage/deploy/self-managed/installing-elasticsearch.md), then TLS is already enabled for HTTP. -$$$oidc-enable-http$$$ + {{ech}}, {{ece}}, and {{eck}} have TLS enabled by default. -$$$oidc-enable-token$$$ +* Enable the token service. -$$$oidc-guide-op$$$ + The {{es}} OIDC implementation makes use of the {{es}} token service. If you configure TLS on the HTTP interface, this service is automatically enabled. It can be explicitly configured by adding the following setting in your `elasticsearch.yml` file: -$$$oidc-role-mappings$$$ + ```yaml + xpack.security.authc.token.enabled: true + ``` -$$$oidc-user-metadata$$$ + {{ech}}, {{ece}}, and {{eck}} have TLS enabled by default. + +## Create an OpenID Connect realm [oidc-create-realm] + +OpenID Connect based authentication is enabled by configuring the appropriate realm within the authentication chain for {{es}}. + +This realm has a few mandatory settings, and a number of optional settings. The available settings are described in detail in [OpenID Connect realm settings](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/configuration-reference/security-settings.md#ref-oidc-settings). This guide will explore the most common settings. + +1. Create an OpenID Connect (the realm type is `oidc`) realm in your `elasticsearch.yml` file similar to what is shown below. + + If you're using {{ece}} or {{ech}}, and you're using machine learning or a deployment with hot-warm architecture, you must include this configuration in the user settings section for each node type. + + ::::{note} + The values used below are meant to be an example and are not intended to apply to every use case. The details below the configuration snippet provide insights and suggestions to help you pick the proper values, depending on your OP configuration. 
+ :::: + + ```yaml + xpack.security.authc.realms.oidc.oidc1: + order: 2 + rp.client_id: "the_client_id" + rp.response_type: code + rp.redirect_uri: "https://kibana.example.org:5601/api/security/oidc/callback" + op.issuer: "https://op.example.org" + op.authorization_endpoint: "https://op.example.org/oauth2/v1/authorize" + op.token_endpoint: "https://op.example.org/oauth2/v1/token" + op.jwkset_path: oidc/jwkset.json + op.userinfo_endpoint: "https://op.example.org/oauth2/v1/userinfo" + op.endsession_endpoint: "https://op.example.org/oauth2/v1/logout" + rp.post_logout_redirect_uri: "https://kibana.example.org:5601/security/logged_out" + claims.principal: sub + claims.groups: "http://example.info/claims/groups" + ``` + + ::::{dropdown} Common settings + + xpack.security.authc.realms.oidc. + : The OpenID Connect realm name. The realm name can only contain alphanumeric characters, underscores, and hyphens. + + order + : The order of the OpenID Connect realm in your authentication chain. Allowed values are between `2` and 100. Set to `2` unless you plan on configuring multiple SSO realms for this cluster. + + rp.client_id + : The Client Identifier that was assigned to the {{stack}} RP by the OP upon registration. The value is usually an opaque, arbitrary string. + + rp.response_type + : An identifier that controls which OpenID Connect authentication flow this RP supports and also which flow this RP requests the OP should follow. Supported values are: + + * `code`: The Authorization Code flow. If your OP supports the Authorization Code flow, you should select this instead of the Implicit Flow. + * `id_token token`: The Implicit flow, where {{es}} also requests an oAuth2 access token from the OP that can be used for followup requests (`UserInfo`). Select this option if the OP offers a `UserInfo` endpoint in its configuration, or if you know that the claims that you need to use for role mapping aren't available in the ID Token. + * `id_token`: The Implicit flow, without an oAuth2 token request. Select this option if all necessary claims will be contained in the ID Token, or if the OP doesn’t offer a UserInfo endpoint. + + + rp.redirect_uri + : The redirect URI where the OP will redirect the browser after authentication. This needs to be *exactly* the same as the one [configured with the OP upon registration](/deploy-manage/users-roles/cluster-or-deployment-auth/openid-connect.md#oidc-guide-op) and will typically be `${kibana-url}/api/security/oidc/callback` where *${kibana-url}* is the base URL for your {{kib}} instance. + + op.issuer + : A verifiable Identifier for your OpenID Connect Provider. An Issuer Identifier is usually a case sensitive URL. The value for this setting should be provided by your OpenID Connect Provider. + + op.authorization_endpoint + : The URL for the Authorization Endpoint in the OP. This is where the user’s browser will be redirected to start the authentication process. The value for this setting should be provided by your OpenID Connect Provider. + + op.token_endpoint + : The URL for the Token Endpoint in the OpenID Connect Provider. This is the endpoint where {{es}} will send a request to exchange the code for an ID Token. This setting is optional when you use the implicit flow. The value for this setting should be provided by your OpenID Connect Provider. + + op.jwkset_path + : The path to a file or a URL containing a JSON Web Key Set with the key material that the OpenID Connect Provider uses for signing tokens and claims responses. 
Your OpenID Connect Provider should provide you with this file or a URL where it is available. + + If your OpenID Connect Provider doesn’t publish its JWKS at an https URL, or if you want to use a local copy, you can upload the JWKS as a file. + + :::{tip} + * In self-managed clusters, the specified path is resolved relative to the {{es}} config directory. {{es}} will automatically monitor this file for changes and will reload the configuration whenever it is updated. + * If you're using {{ece}} or {{ech}}, then you must [upload this file as a custom bundle](/deploy-manage/deploy/elastic-cloud/upload-custom-plugins-bundles.md) before it can be referenced. + * If you're using {{eck}}, then install the file as a [custom configuration file](/deploy-manage/deploy/cloud-on-k8s/custom-configuration-files-plugins.md#use-a-volume-and-volume-mount-together-with-a-configmap-or-secret). + ::: + + op.userinfo_endpoint + : (Optional) The URL for the UserInfo Endpoint in the OpenID Connect Provider. This is the endpoint of the OP that can be queried to get further user information, if required. The value for this setting should be provided by your OpenID Connect Provider. + + op.endsession_endpoint + : (Optional) The URL to the End Session Endpoint in the OpenID Connect Provider. This is the endpoint where the user’s browser will be redirected after local logout, if the realm is configured for RP-initiated single logout and the OP supports it. The value for this setting should be provided by your OpenID Connect Provider. + + rp.post_logout_redirect_uri + : (Optional) The Redirect URL where the OpenID Connect Provider should redirect the user after a successful single logout (assuming `op.endsession_endpoint` above is also set). This should be set to a value that will not trigger a new OpenID Connect Authentication, such as `${kibana-url}/security/logged_out` or `${kibana-url}/login?msg=LOGGED_OUT` where *${kibana-url}* is the base URL for your {{kib}} instance. + + claims.principal + : See [Claims mapping](/deploy-manage/users-roles/cluster-or-deployment-auth/openid-connect.md#oidc-claims-mappings). + + claims.groups + : See [Claims mapping](/deploy-manage/users-roles/cluster-or-deployment-auth/openid-connect.md#oidc-claims-mappings). + + :::: + +1. Set the `Client Secret` that was assigned to the RP during registration in the OP. To set the client secret, add the `xpack.security.authc.realms.oidc..rp.client_secret` setting [to the {{es}} keystore](/deploy-manage/security/secure-settings.md). + +:::{warning} +In {{ech}} and {{ece}}, after you configure Client Secret, any attempt to restart the deployment will fail until you complete the rest of the configuration steps. If you want to roll back the Active Directory realm configurations, you need to remove the `xpack.security.authc.realms.oidc.oidc1.rp.client_secret` that was just added. +::: + +::::{note} +According to the OpenID Connect specification, the OP should also make their configuration available at a well known URL, which is the concatenation of their `Issuer` value with the `.well-known/openid-configuration` string. For example: `https://op.org.com/.well-known/openid-configuration`. + +That document should contain all the necessary information to configure the OpenID Connect realm in {{es}}. +:::: + +## Map claims [oidc-claims-mappings] + +When authenticating to {{kib}} using OpenID Connect, the OP will provide information about the user in the form of **OpenID Connect Claims**. 
These claims can be included either in the ID Token, or be retrieved from the UserInfo endpoint of the OP. + +An **OpenID Connect Claim** is a piece of information asserted by the OP for the authenticated user, and consists of a name/value pair that contains information about the user. + +**OpenID Connect Scopes** are identifiers that are used to request access to specific lists of claims. The standard defines a set of scope identifiers that can be requested. + +* **Mandatory scopes**: `openid` + +* **Commonly used scopes**: + * `profile`: Requests access to the `name`,`family_name`,`given_name`,`middle_name`,`nickname`, `preferred_username`,`profile`,`picture`,`website`,`gender`,`birthdate`,`zoneinfo`,`locale`, and `updated_at` claims. + * `email`: Requests access to the `email` and `email_verified` claims. + +The RP requests specific scopes during the authentication request. If the OP Privacy Policy allows it and the authenticating user consents to it, the related claims are returned to the RP (either in the ID Token or as a UserInfo response). + +The list of the supported claims will vary depending on the OP you are using, but [standard claims](https://openid.net/specs/openid-connect-core-1_0.md#StandardClaims) are usually supported. + +### How claims appear in user metadata [oidc-user-metadata] + +By default, users who authenticate through OpenID Connect have additional metadata fields. These fields include every OpenID claim that is provided in the authentication response, regardless of whether it is mapped to an {{es}} user property. + +For example, in the metadata field `oidc(claim_name)`, "claim_name" is the name of the claim as it was contained in the ID Token or in the User Info response. Note that these will include all the [ID Token claims](https://openid.net/specs/openid-connect-core-1_0.md#IDToken) that pertain to the authentication event, rather than the user themselves. + +This behavior can be disabled by adding `populate_user_metadata: false` as a setting in the OIDC realm. + + +### Map claims to user properties [oidc-claim-to-property] + +The goal of claims mapping is to configure {{es}} in such a way as to be able to map the values of specified returned claims to one of the [user properties](/deploy-manage/users-roles/cluster-or-deployment-auth/openid-connect.md#oidc-user-properties) that are supported by {{es}}. These user properties are then utilized to identify the user in the {{kib}} UI or the audit logs, and can also be used to create [role mapping](/deploy-manage/users-roles/cluster-or-deployment-auth/openid-connect.md#oidc-role-mappings) rules. + +To configure claims mapping: + +1. Using your OP configuration, identify the claims that it might support. + + The list provided in the OP’s metadata or in the configuration page of the OP is a list of potentially supported claims. However, for privacy reasons it might not be a complete one, or not all supported claims will be available for all authenticated users. +2. Review the list of [user properties](#oidc-user-properties) that {{es}} supports, and decide which of them are useful to you, and can be provided by your OP in the form of claims. At a minimum, the `principal` user property is required. +3. Configure your OP to "release" those claims to your {{stack}} Relying Party. This process greatly varies by provider. You can use a static configuration while others will support that the RP requests the scopes that correspond to the claims to be "released" on authentication time. 
See [`rp.requested_scopes`](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/configuration-reference/security-settings.md#ref-oidc-settings) for details about how to configure the scopes to request. To ensure interoperability and minimize the errors, you should only request scopes that the OP supports, and that you intend to map to {{es}} user properties. + + :::{note} + You can only map claims with values that are strings, numbers, Boolean values, or an array consisting of strings, numbers, and Boolean values. + ::: + +4. Configure the OpenID Connect realm in {{es}} to associate the [{{es}} user properties](#oidc-user-properties) to the name of the claims that your OP will release. + + The [sample configuration](#oidc-create-realm) configures the `principal` and `groups` user properties as follows: + + * `claims.principal: sub`: Instructs {{es}} to look for the OpenID Connect claim named `sub` in the ID Token that the OP issued for the user (or in the UserInfo response) and assign the value of this claim to the `principal` user property. + + `sub` is a commonly used claim for the principal property as it is an identifier of the user in the OP and it is also a required claim of the ID Token. This means that `sub` is available in most OPs. However, the OP may provide another claim that is a better fit for your needs. + * `claims.groups: "http://example.info/claims/groups"`: Instructs {{es}} to look for the claim with the name `http://example.info/claims/groups`, either in the ID Token or in the UserInfo response, and map the value(s) of it to the user property `groups` in {{es}}. + + There is no standard claim in the specification that is used for expressing roles or group memberships of the authenticated user in the OP, so the name of the claim that should be mapped here will vary between providers. Consult your OP documentation for more details. + + :::{tip} + In this example, the value is a URI, treated as a string and not a URL pointing to a location that will be retrieved. + ::: + + + +### Mappable {{es}} user properties [oidc-user-properties] + +The {{es}} OpenID Connect realm can be configured to map OpenID Connect claims to the following properties on the authenticated user: + +principal +: *(Required)* This is the username that will be applied to a user that authenticates against this realm. The `principal` appears in places such as the {{es}} audit logs. + +::::{note} +If the principal property fails to be mapped from a claim, the authentication fails. +:::: + + +groups +: *(Recommended)* If you want to use your OP’s concept of groups or roles as the basis for a user’s {{es}} privileges, you should map them with this property. The `groups` are passed directly to your [role mapping rules](/deploy-manage/users-roles/cluster-or-deployment-auth/openid-connect.md#oidc-role-mappings). + +name +: *(Optional)* The user’s full name. + +mail +: *(Optional)* The user’s email address. + +dn +: *(Optional)* The user’s X.500 Distinguished Name. + + +### Extract partial values from OpenID Connect claims [_extracting_partial_values_from_openid_connect_claims] + +There are some occasions where the value of a claim contains more information than you want to use within {{es}}. A common example of this is one where the OP works exclusively with email addresses, but you want the user’s `principal` to use the `local-name` part of the email address. For example if their email address was `james.wong@staff.example.com`, then you might want their principal to be `james.wong`. 
+
+This can be achieved using the `claim_patterns` setting in the {{es}} realm, as demonstrated in the realm configuration below:
+
+```yaml
+xpack.security.authc.realms.oidc.oidc1:
+  order: 2
+  rp.client_id: "the_client_id"
+  rp.response_type: code
+  rp.redirect_uri: "https://kibana.example.org:5601/api/security/oidc/callback"
+  op.authorization_endpoint: "https://op.example.org/oauth2/v1/authorize"
+  op.token_endpoint: "https://op.example.org/oauth2/v1/token"
+  op.userinfo_endpoint: "https://op.example.org/oauth2/v1/userinfo"
+  op.endsession_endpoint: "https://op.example.org/oauth2/v1/logout"
+  op.issuer: "https://op.example.org"
+  op.jwkset_path: oidc/jwkset.json
+  claims.principal: email_verified
+  claim_patterns.principal: "^([^@]+)@staff\\.example\\.com$"
+```
+
+In this case, the user’s `principal` is mapped from the `email_verified` claim, but a regular expression is applied to the value before it is assigned to the user. If the regular expression matches, then the result of the first group is used as the effective value. If the regular expression does not match, then the claim mapping fails.
+
+In this example, the email address must belong to the `staff.example.com` domain, and then the local-part (anything before the `@`) is used as the principal. Any users who try to log in using a different email domain will fail because the regular expression will not match against their email address, and thus their principal user property - which is mandatory - will not be populated.
+
+::::{important}
+Small mistakes in these regular expressions can have significant security consequences. For example, if we accidentally left off the trailing `$` from the example above, then we would match any email address where the domain starts with `staff.example.com`, and this would accept an email address such as `admin@staff.example.com.attacker.net`. It is important that you make sure your regular expressions are as precise as possible so that you don't open an avenue for user impersonation attacks.
+::::
+
+
+## Configure third party initiated single sign-on [third-party-login]
+
+The OpenID Connect realm in {{es}} supports third party initiated login as described in the [specification](https://openid.net/specs/openid-connect-core-1_0.html#ThirdPartyInitiatedLogin).
+
+This allows the OP, or a third party other than the RP, to initiate the authentication process while requesting the OP to be used for the authentication. The {{stack}} RP should already be configured for this OP for this process to succeed.
+
+
+## Configure RP-initiated logout [oidc-logout]
+
+The OpenID Connect realm in {{es}} supports RP-initiated logout functionality as described in the [specification](https://openid.net/specs/openid-connect-rpinitiated-1_0.html).
+
+In this process, the OpenID Connect RP (the {{stack}} in this case) will redirect the user’s browser to a predefined URL of the OP after successfully completing a local logout. The OP can then also log out the user, depending on the configuration, and should finally redirect the user back to the RP.
+
+RP-initiated logout is controlled by two settings:
+
+* `op.endsession_endpoint`: The URL in the OP that the browser will be redirected to.
+* `rp.post_logout_redirect_uri`: The URL to redirect the user back to after the OP logs them out.
+
+When configuring `rp.post_logout_redirect_uri`, do not point to a URL that will trigger re-authentication of the user. For instance, when using OpenID Connect to support single sign-on to {{kib}}, this could be set to either `${kibana-url}/security/logged_out`, which will show a user-friendly message to the user, or `${kibana-url}/login?msg=LOGGED_OUT`, which will take the user to the login selector in {{kib}}.
+
+
+## Configure SSL [oidc-ssl-config]
+
+OpenID Connect depends on TLS to provide security properties such as encryption in transit and endpoint authentication. The RP is required to establish back-channel communication with the OP in order to exchange the code for an ID Token during the Authorization code grant flow and in order to get additional user information from the `UserInfo` endpoint. If you configure `op.jwks_path` as a URL, {{es}} will need to get the OP’s signing keys from the file hosted there. As such, it is important that {{es}} can validate and trust the server certificate that the OP uses for TLS. Because the system trust store is used for the client context of outgoing https connections, if your OP is using a certificate from a trusted CA, no additional configuration is needed.
+
+However, if the issuer of your OP’s certificate is not trusted by the JVM on which {{es}} is running (e.g. it uses an organization CA), then you must configure {{es}} to trust that CA.
+
+If you're using {{ech}} or {{ece}}, then you must [upload your certificate as a custom bundle](/deploy-manage/deploy/elastic-cloud/upload-custom-plugins-bundles.md) before it can be referenced.
+
+If you're using {{eck}}, then install the certificate as a [custom configuration file](/deploy-manage/deploy/cloud-on-k8s/custom-configuration-files-plugins.md#use-a-volume-and-volume-mount-together-with-a-configmap-or-secret).
+
+The following example demonstrates how to trust a CA certificate (`/oidc/company-ca.pem`), which is located within the configuration directory.
+
+```yaml
+xpack.security.authc.realms.oidc.oidc1:
+  order: 1
+  ...
+  ssl.certificate_authorities: ["/oidc/company-ca.pem"]
+```
+
+## Map OIDC users to roles [oidc-role-mappings]
+
+When a user authenticates using OpenID Connect, they are identified to the {{stack}}, but this does not automatically grant them access to perform any actions or access any data.
+
+Your OpenID Connect users can't do anything until they are assigned roles. You can map OpenID Connect users to roles in the following ways:
+
+* Using the role mappings page in {{kib}}.
+* Using the [role mapping API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-put-role-mapping).
+* By delegating authorization to another realm through [authorization realms](/deploy-manage/users-roles/cluster-or-deployment-auth/realm-chains.md#authorization_realms).
+
+For more information, see [Mapping users and groups to roles](/deploy-manage/users-roles/cluster-or-deployment-auth/mapping-users-groups-to-roles.md).
+
+::::{note}
+You can't use [role mapping files](/deploy-manage/users-roles/cluster-or-deployment-auth/mapping-users-groups-to-roles.md#mapping-roles-file) to grant roles to users authenticating using OpenID Connect.
+::::
+
+### Example: using the role mapping API
+
+If you want all your users authenticating with OpenID Connect to get access to {{kib}}, issue the following request to {{es}}:
+
+```sh
+POST /_security/role_mapping/CLOUD_OIDC_TO_KIBANA_ADMIN <1>
+{
+  "enabled": true,
+  "roles": [ "kibana_admin" ], <2>
+  "rules": { <3>
+    "field": { "realm.name": "oidc-realm-name" } <4>
+  },
+  "metadata": { "version": 1 }
+}
+```
+
+1. The name of the new role mapping.
+2. The role mapped to the users.
+3. The fields to match against.
+4. The name of the OpenID Connect realm. This needs to be the same value as the one used in the cluster configuration.
+
+### Example: Role mapping API, using OpenID Claim information
+
+The user properties that are mapped via the realm configuration are used to process role mapping rules, and these rules determine which roles a user is granted.
+
+The user fields that are provided to the role mapping are derived from the OpenID Connect claims as follows:
+
+* `username`: The `principal` user property
+* `dn`: The `dn` user property
+* `groups`: The `groups` user property
+* `metadata`: See [User metadata](/deploy-manage/users-roles/cluster-or-deployment-auth/openid-connect.md#oidc-user-metadata)
+
+
+If your OP has the ability to provide groups or roles to RPs using an OpenID Claim, then you should map this claim to the `claims.groups` setting in the {{es}} realm (see [Mapping claims to user properties](/deploy-manage/users-roles/cluster-or-deployment-auth/openid-connect.md#oidc-claim-to-property)), and then make use of it in a role mapping.
+
+For example, the following mapping grants the {{es}} `finance_data` role to any users who authenticate via the `oidc1` realm with the `finance-team` group membership:
+
+```console
+PUT /_security/role_mapping/oidc-finance
+{
+  "roles": [ "finance_data" ],
+  "enabled": true,
+  "rules": { "all": [
+        { "field": { "realm.name": "oidc1" } },
+        { "field": { "groups": "finance-team" } }
+  ] }
+}
+```
+
+### Delegating OIDC authorization to another realm
+
+If your users also exist in a repository that can be directly accessed by {{es}}, such as an LDAP directory, then you can use [authorization realms](/deploy-manage/users-roles/cluster-or-deployment-auth/authorization-delegation.md) instead of role mappings.
+
+In this case, you perform the following steps:
+
+1. In your OpenID Connect realm, assign a claim to act as the lookup user ID, by configuring the `claims.principal` setting.
+2. Create a new realm that can look up users from your local repository (e.g. an `ldap` realm).
+3. In your OpenID Connect realm, set `authorization_realms` to the name of the realm you created in step 2.
+
+## Configure {{kib}} [oidc-configure-kibana]
+
+OpenID Connect authentication in {{kib}} requires additional settings beyond the standard {{kib}} security configuration.
+
+If you're using a self-managed cluster, then, because OIDC requires {{es}} nodes to use TLS on the HTTP interface, you must configure {{kib}} to use an `https` URL to connect to {{es}}, and you may need to configure `elasticsearch.ssl.certificateAuthorities` to trust the certificates that {{es}} has been configured to use.
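+For illustration only, here is a minimal `kibana.yml` sketch of that connection setup. The host name and certificate path are placeholders rather than values used elsewhere in this guide, so adjust them to your environment:
+
+```yaml
+# Kibana must reach Elasticsearch over HTTPS, because OIDC requires TLS on the Elasticsearch HTTP interface
+elasticsearch.hosts: ["https://es.example.org:9200"]
+# Only needed if the certificate presented by Elasticsearch is not already trusted by Kibana
+elasticsearch.ssl.certificateAuthorities: ["/path/to/http-ca.crt"]
+```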
+
+OpenID Connect authentication in {{kib}} is subject to the following timeout settings in `kibana.yml`:
+
+* [`xpack.security.session.idleTimeout`](/deploy-manage/security/kibana-session-management.md#session-idle-timeout)
+* [`xpack.security.session.lifespan`](/deploy-manage/security/kibana-session-management.md#session-lifespan)
+
+You may want to adjust these timeouts based on your security requirements.
+
+### Add the OIDC provider to {{kib}}
+
+::::{tip}
+You can configure multiple authentication providers in {{kib}} and let users choose the provider they want to use. For more information, check [the {{kib}} authentication documentation](/deploy-manage/users-roles/cluster-or-deployment-auth/user-authentication.md).
+::::
+
+The three additional settings that are required for OpenID Connect support are shown below:
+
+```yaml
+xpack.security.authc.providers:
+  oidc.oidc1:
+    order: 0
+    realm: "oidc1"
+```
+
+The configuration values used in the example above are:
+
+`xpack.security.authc.providers`
+: Add an `oidc` provider to instruct {{kib}} to use OpenID Connect single sign-on as the authentication method. This instructs {{kib}} to attempt to initiate an SSO flow every time a user attempts to access a URL in {{kib}}, if the user is not already authenticated.
+
+`xpack.security.authc.providers.oidc.<provider-name>.realm`
+: The name of the OpenID Connect realm in {{es}} that should handle authentication for this {{kib}} instance.
+
+### Supporting OIDC and basic authentication in {{kib}}
+
+If you also want to allow users to log in with a username and password, you must enable the `basic` authentication provider too. This will allow users that haven’t already authenticated with OpenID Connect to log in using the {{kib}} login form:
+
+```yaml
+xpack.security.authc.providers:
+  oidc.oidc1:
+    order: 0
+    realm: "oidc1"
+    description: "Log in with my OpenID Connect" <1>
+  basic.basic1:
+    order: 1
+```
+
+1. This arbitrary string defines how OpenID Connect login is titled in the Login Selector UI that is shown when you enable multiple authentication providers in {{kib}}. You can also configure the optional `icon` and `hint` settings for any authentication provider.
+
+
+## OpenID Connect without {{kib}} [oidc-without-kibana]
+
+The OpenID Connect realm is designed to allow users to authenticate to {{kib}}. As a result, most sections of this guide assume {{kib}} is used. This section describes how a custom web application could use the relevant OpenID Connect REST APIs to authenticate the users to {{es}} with OpenID Connect.
+
+Single sign-on realms such as OpenID Connect and SAML make use of the Token Service in {{es}} and in principle exchange a SAML or OpenID Connect Authentication response for an {{es}} access token and a refresh token. The access token is used as credentials for subsequent calls to {{es}}. The refresh token enables the user to get new {{es}} access tokens after the current one expires.
+
+::::{note}
+The {{es}} Token Service can be seen as a minimal OAuth2 authorization server, and the access token and refresh token mentioned above are tokens that pertain *only* to this authorization server. They are generated and consumed *only* by {{es}} and are in no way related to the tokens (access token and ID Token) that the OpenID Connect Provider issues.
+:::: + + +### Register the RP with an OpenID Connect Provider [_register_the_rp_with_an_openid_connect_provider] + +The Relying Party ({{es}} and the custom web app) will need to be registered as client with the OpenID Connect Provider. Note that when registering the `Redirect URI`, it needs to be a URL in the custom web app. + + +### OpenID Connect Realm [_openid_connect_realm] + +An OpenID Connect realm needs to be created and configured accordingly in {{es}}. See [Configure {{es}} for OpenID Connect authentication](/deploy-manage/users-roles/cluster-or-deployment-auth/openid-connect.md#oidc-elasticsearch-authentication) + + +### Service Account user for accessing the APIs [_service_account_user_for_accessing_the_apis] + +The realm is designed with the assumption that there needs to be a privileged entity acting as an authentication proxy. In this case, the custom web application is the authentication proxy handling the authentication of end users ( more correctly, "delegating" the authentication to the OpenID Connect Provider ). The OpenID Connect APIs require authentication and the necessary authorization level for the authenticated user. For this reason, a Service Account user needs to be created and assigned a role that gives them the `manage_oidc` cluster privilege. The use of the `manage_token` cluster privilege will be necessary after the authentication takes place, so that the user can maintain access or be subsequently logged out. + +```console +POST /_security/role/facilitator-role +{ + "cluster" : ["manage_oidc", "manage_token"] +} +``` + +```console +POST /_security/user/facilitator +{ + "password" : "", + "roles" : [ "facilitator-role"] +} +``` + + +### Handling the authentication flow [_handling_the_authentication_flow] + +On a high level, the custom web application would need to perform the following steps in order to authenticate a user with OpenID Connect: + +1. Make an HTTP POST request to `_security/oidc/prepare`, authenticating as the `facilitator` user, using the name of the OpenID Connect realm in the {{es}} configuration in the request body. For more details, see [OpenID Connect prepare authentication](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-oidc-prepare-authentication). + + ```console + POST /_security/oidc/prepare + { + "realm" : "oidc1" + } + ``` + +2. Handle the response to `/_security/oidc/prepare`. The response from {{es}} will contain 3 parameters: `redirect`, `state`, `nonce`. The custom web application would need to store the values for `state` and `nonce` in the user’s session (client side in a cookie or server side if session information is persisted this way) and redirect the user’s browser to the URL that will be contained in the `redirect` value. +3. Handle a subsequent response from the OP. After the user is successfully authenticated with the OpenID Connect Provider, they will be redirected back to the callback/redirect URI. Upon receiving this HTTP GET request, the custom web app will need to make an HTTP POST request to `_security/oidc/authenticate`, again - authenticating as the `facilitator` user - passing the URL where the user’s browser was redirected to, as a parameter, along with the values for `nonce` and `state` it had saved in the user’s session previously. If more than one OpenID Connect realms are configured, the custom web app can specify the name of the realm to be used for handling this, but this parameter is optional. 
For more details, see [OpenID Connect authenticate](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-oidc-authenticate). + + ```console + POST /_security/oidc/authenticate + { + "redirect_uri" : "https://oidc-kibana.elastic.co:5603/api/security/oidc/callback?code=jtI3Ntt8v3_XvcLzCFGq&state=4dbrihtIAt3wBTwo6DxK-vdk-sSyDBV8Yf0AjdkdT5I", + "state" : "4dbrihtIAt3wBTwo6DxK-vdk-sSyDBV8Yf0AjdkdT5I", + "nonce" : "WaBPH0KqPVdG5HHdSxPRjfoZbXMCicm5v1OiAj0DUFM", + "realm" : "oidc1" + } + ``` + + Elasticsearch will validate this and if all is correct will respond with an access token that can be used as a `Bearer` token for subsequent requests and a refresh token that can be later used to refresh the given access token as described in [Get token](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-get-token). + +4. At some point, if necessary, the custom web application can log the user out by using the [OIDC logout API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-oidc-logout) passing the access token and refresh token as parameters. For example: + + ```console + POST /_security/oidc/logout + { + "token" : "dGhpcyBpcyBub3QgYSByZWFsIHRva2VuIGJ1dCBpdCBpcyBvbmx5IHRlc3QgZGF0YS4gZG8gbm90IHRyeSB0byByZWFkIHRva2VuIQ==", + "refresh_token": "vLBPvmAB6KvwvJZr27cS" + } + ``` + + If the realm is configured accordingly, this may result in a response with a `redirect` parameter indicating where the user needs to be redirected in the OP in order to complete the logout process. -$$$oidc-user-properties$$$ -$$$oidc-claims-mapping$$$ -**This page is a work in progress.** The documentation team is working to combine content pulled from the following pages: -* [/raw-migrated-files/elasticsearch/elasticsearch-reference/oidc-realm.md](/raw-migrated-files/elasticsearch/elasticsearch-reference/oidc-realm.md) -* [/raw-migrated-files/elasticsearch/elasticsearch-reference/oidc-guide.md](/raw-migrated-files/elasticsearch/elasticsearch-reference/oidc-guide.md) -* [/raw-migrated-files/cloud/cloud-enterprise/ece-secure-clusters-oidc.md](/raw-migrated-files/cloud/cloud-enterprise/ece-secure-clusters-oidc.md) -* [/raw-migrated-files/cloud/cloud/ec-secure-clusters-oidc.md](/raw-migrated-files/cloud/cloud/ec-secure-clusters-oidc.md) -* [/raw-migrated-files/cloud/cloud/ec-securing-clusters-oidc-op.md](/raw-migrated-files/cloud/cloud/ec-securing-clusters-oidc-op.md) -* [/raw-migrated-files/cloud/cloud-heroku/ech-secure-clusters-oidc.md](/raw-migrated-files/cloud/cloud-heroku/ech-secure-clusters-oidc.md) \ No newline at end of file diff --git a/deploy-manage/users-roles/cluster-or-deployment-auth/operator-only-functionality.md b/deploy-manage/users-roles/cluster-or-deployment-auth/operator-only-functionality.md index 816a814f0..03c6f9c33 100644 --- a/deploy-manage/users-roles/cluster-or-deployment-auth/operator-only-functionality.md +++ b/deploy-manage/users-roles/cluster-or-deployment-auth/operator-only-functionality.md @@ -1,12 +1,17 @@ --- mapped_pages: - https://www.elastic.co/guide/en/elasticsearch/reference/current/operator-only-functionality.html +applies_to: + deployment: + ess: + ece: + eck: --- # Operator-only functionality [operator-only-functionality] -::::{note} -{cloud-only} +::::{admonition} Indirect use only +This feature is designed for indirect use by {{ech}}, {{ece}}, and {{eck}}. Direct use is not supported. 
:::: diff --git a/deploy-manage/users-roles/cluster-or-deployment-auth/operator-privileges-for-snapshot-restore.md b/deploy-manage/users-roles/cluster-or-deployment-auth/operator-privileges-for-snapshot-restore.md index b32204ed3..3378ca70d 100644 --- a/deploy-manage/users-roles/cluster-or-deployment-auth/operator-privileges-for-snapshot-restore.md +++ b/deploy-manage/users-roles/cluster-or-deployment-auth/operator-privileges-for-snapshot-restore.md @@ -1,16 +1,21 @@ --- mapped_pages: - https://www.elastic.co/guide/en/elasticsearch/reference/current/operator-only-snapshot-and-restore.html +applies_to: + deployment: + ess: + ece: + eck: --- # Operator privileges for snapshot and restore [operator-only-snapshot-and-restore] -::::{note} -{cloud-only} +::::{admonition} Indirect use only +This feature is designed for indirect use by {{ech}}, {{ece}}, and {{eck}}. Direct use is not supported. :::: -Invoking [operator-only APIs](operator-only-functionality.md#operator-only-apis) or updating [operator-only dynamic cluster settings](operator-only-functionality.md#operator-only-dynamic-cluster-settings) typically results in changes in the cluster state. The cluster state can be included in a cluster [snapshot](../../tools/snapshot-and-restore.md). Snapshots are a great way to preserve the data of a cluster, which can later be restored to bootstrap a new cluster, perform migration, or disaster recovery, for example. In a traditional self-managed environment, the intention is for the restore process to copy the entire cluster state over when requested. However, in a more managed environment, such as [{{ess}}](https://cloud.elastic.co/registration?page=docs&placement=docs-body), data that is associated with [operator-only functionality](operator-only-functionality.md) is explicitly managed by the infrastructure code. +Invoking [operator-only APIs](operator-only-functionality.md#operator-only-apis) or updating [operator-only dynamic cluster settings](operator-only-functionality.md#operator-only-dynamic-cluster-settings) typically results in changes in the cluster state. The cluster state can be included in a cluster [snapshot](../../tools/snapshot-and-restore.md). Snapshots are a great way to preserve the data of a cluster, which can later be restored to bootstrap a new cluster, perform migration, or disaster recovery, for example. In a traditional self-managed environment, the intention is for the restore process to copy the entire cluster state over when requested. However, in a more managed environment, such as [{{ech}}](https://cloud.elastic.co/registration?page=docs&placement=docs-body), data that is associated with [operator-only functionality](operator-only-functionality.md) is explicitly managed by the infrastructure code. Restoring snapshot data associated with operator-only functionality could be problematic because: @@ -18,7 +23,7 @@ Restoring snapshot data associated with operator-only functionality could be pro 2. Even when the infrastructure code can correct the values immediately after a restore, there will always be a short period of time when the cluster could be in an inconsistent state. 3. The infrastructure code prefers to configure operator-only functionality from a single place, that is to say, through API calls. -Therefore, [**when the operator privileges feature is enabled**](configure-operator-privileges.md), snapshot data that is associated with any operator-only functionality is **not** restored. 
+Therefore, [when the operator privileges feature is enabled](configure-operator-privileges.md), snapshot data that is associated with any operator-only functionality is **not** restored. ::::{note} That information is still included when taking a snapshot so that all data is always preserved. diff --git a/deploy-manage/users-roles/cluster-or-deployment-auth/operator-privileges.md b/deploy-manage/users-roles/cluster-or-deployment-auth/operator-privileges.md index 9e530ee70..1ea88cddb 100644 --- a/deploy-manage/users-roles/cluster-or-deployment-auth/operator-privileges.md +++ b/deploy-manage/users-roles/cluster-or-deployment-auth/operator-privileges.md @@ -1,16 +1,20 @@ --- mapped_pages: - https://www.elastic.co/guide/en/elasticsearch/reference/current/operator-privileges.html +applies_to: + deployment: + ess: + ece: + eck: --- # Operator privileges [operator-privileges] -::::{note} -{cloud-only} +::::{admonition} Indirect use only +This feature is designed for indirect use by {{ech}}, {{ece}}, and {{eck}}. Direct use is not supported. :::: - -With a typical {{es}} deployment, people who administer the cluster also operate the cluster at the infrastructure level. User authorization based on [role-based access control (RBAC)](user-roles.md) is effective and reliable for this environment. However, in more managed environments, such as [{{ess}}](https://cloud.elastic.co/registration?page=docs&placement=docs-body), there is a distinction between the operator of the cluster infrastructure and the administrator of the cluster. +With a typical {{es}} deployment, people who administer the cluster also operate the cluster at the infrastructure level. User authorization based on [role-based access control (RBAC)](user-roles.md) is effective and reliable for this environment. However, in more managed environments, such as [{{ech}}](https://cloud.elastic.co/registration?page=docs&placement=docs-body), there is a distinction between the operator of the cluster infrastructure and the administrator of the cluster. Operator privileges limit some functionality to operator users *only*. Operator users are just regular {{es}} users with access to specific [operator-only functionality](operator-only-functionality.md). These privileges are not available to cluster administrators, even if they log in as a highly privileged user such as the `elastic` user or another user with the `superuser` role. By limiting system access, operator privileges enhance the {{es}} security model while safeguarding user capabilities. diff --git a/deploy-manage/users-roles/cluster-or-deployment-auth/orchestrator-managed-users-overview.md b/deploy-manage/users-roles/cluster-or-deployment-auth/orchestrator-managed-users-overview.md new file mode 100644 index 000000000..cee9b2e04 --- /dev/null +++ b/deploy-manage/users-roles/cluster-or-deployment-auth/orchestrator-managed-users-overview.md @@ -0,0 +1,26 @@ +--- +applies_to: + deployment: + ess: + ece: + eck: +navigation_title: Orchestrator-managed users +--- + +# Orchestrator-managed users + +The {{es}} provides default user credentials to help you get up and running. + +In self-managed clusters, these users are created as [built-in users](/deploy-manage/users-roles/cluster-or-deployment-auth/built-in-users.md). + +In orchestrated deployments (ECH, ECE, and ECK), the `elastic` user is managed by the platform, while other default users are not accessible to end users. The way that credentials are managed for this and other internal users depends on your orchestrator. 
+In this section, you'll learn how to manage credentials for orchestrator-managed users:
+
+* In {{ece}} and {{ech}}, [learn how to reset the password for the `elastic` user](/deploy-manage/users-roles/cluster-or-deployment-auth/manage-elastic-user-cloud.md).
+* In {{eck}}, [learn how to manage the `elastic` user, and how to rotate all auto-generated credentials used by ECK](/deploy-manage/users-roles/cluster-or-deployment-auth/managed-credentials-eck.md).
+
+:::{tip}
+To learn more about built-in users in self-managed clusters, and how to reset built-in user passwords, refer to:
+
+* [](/deploy-manage/users-roles/cluster-or-deployment-auth/built-in-users.md)
+* [](/deploy-manage/users-roles/cluster-or-deployment-auth/built-in-sm.md).
+:::
\ No newline at end of file
diff --git a/deploy-manage/users-roles/cluster-or-deployment-auth/pki.md b/deploy-manage/users-roles/cluster-or-deployment-auth/pki.md
index 46008c5e4..5467e38bb 100644
--- a/deploy-manage/users-roles/cluster-or-deployment-auth/pki.md
+++ b/deploy-manage/users-roles/cluster-or-deployment-auth/pki.md
@@ -1,6 +1,12 @@
 ---
 mapped_pages:
   - https://www.elastic.co/guide/en/elasticsearch/reference/current/pki-realm.html
+applies_to:
+  deployment:
+    self:
+    ess:
+    ece:
+    eck:
 ---
 
 # PKI [pki-realm]
@@ -11,7 +17,7 @@ You can also use PKI certificates to authenticate to {{kib}}, however this requi
 
 ## PKI authentication for clients connecting directly to {{es}} [pki-realm-for-direct-clients]
 
-To use PKI in {{es}}, you configure a PKI realm, enable client authentication on the desired network layers (transport or http), and map the Distinguished Names (DNs) from the Subject field in the user certificates to roles. You create the mappings in a role mapping file or use the role mappings API.
+To use PKI in {{es}}, you configure a PKI realm, enable client authentication on the desired network layers (transport or http), and map the Distinguished Names (DNs) from the `Subject` field in the user certificates to roles. You create the mappings in a role mapping file or use the role mappings API.
 
 1. Add a realm configuration for a `pki` realm to `elasticsearch.yml` under the `xpack.security.authc.realms.pki` namespace. You must explicitly set the `order` attribute. See [PKI realm settings](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/configuration-reference/security-settings.md#ref-pki-settings) for all of the options you can set for a `pki` realm.
@@ -54,14 +60,13 @@ To use PKI in {{es}}, you configure a PKI realm, enable client authentication on
 3. Optional: If you want the same users to also be authenticated using certificates when they connect to {{kib}}, you must configure the {{es}} PKI realm to allow delegation. See [PKI authentication for clients connecting to {{kib}}](#pki-realm-for-proxied-clients).
 4. Restart {{es}} because realm configuration is not reloaded automatically. If you’re following through with the next steps, you might wish to hold the restart for last.
-5. [Enable SSL/TLS](../../security/secure-cluster-communications.md#encrypt-internode-communication).
-6. Enable client authentication on the desired network layers (transport or http).
+5. If you're using a self-managed cluster, then [enable SSL/TLS](../../security/secure-cluster-communications.md#encrypt-internode-communication).
+6. If you're using a self-managed cluster or {{eck}}, then enable client authentication on the desired network layers (transport or http).
::::{important} - To use PKI when clients connect directly to {{es}}, you must enable SSL/TLS with client authentication. That is to say, you must set `xpack.security.transport.ssl.client_authentication` and `xpack.security.http.ssl.client_authentication` to `optional` or `required`. If the setting value is `optional`, clients without certificates can authenticate with other credentials. + To use PKI when clients connect directly to {{es}}, you must enable SSL/TLS with client authentication by setting `xpack.security.transport.ssl.client_authentication` and `xpack.security.http.ssl.client_authentication` to `optional` or `required`. If the setting value is `optional`, clients without certificates can authenticate with other credentials. :::: - When clients connect directly to {{es}} and are not proxy-authenticated, the PKI realm relies on the TLS settings of the node’s network interface. The realm can be configured to be more restrictive than the underlying network connection. That is, it is possible to configure the node such that some connections are accepted by the network interface but then fail to be authenticated by the PKI realm. However, the reverse is not possible. The PKI realm cannot authenticate a connection that has been refused by the network interface. In particular this means: @@ -72,34 +77,44 @@ To use PKI in {{es}}, you configure a PKI realm, enable client authentication on For an explanation of these settings, see [General TLS settings](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/configuration-reference/security-settings.md#ssl-tls-settings). - The relevant network interface (transport or http) must be configured to trust any certificate that is to be used within the PKI realm. However, it is possible to configure the PKI realm to trust only a *subset* of the certificates accepted by the network interface. This is useful when the SSL/TLS layer trusts clients with certificates that are signed by a different CA than the one that signs your users' certificates. +7. Optional: Configure the PKI realm to trust a subset of certificates. + + The relevant network interface (transport or http) must be configured to trust any certificate that is to be used within the PKI realm. However, it is possible to configure the PKI realm to trust only a *subset* of the certificates accepted by the network interface. This is useful when the SSL/TLS layer trusts clients with certificates that are signed by a different CA than the one that signs your users' certificates. - To configure the PKI realm with its own truststore, specify the `truststore.path` option. The path must be located within the Elasticsearch configuration directory (`ES_PATH_CONF`). For example: + 1. To configure the PKI realm with its own trust store, specify the `truststore.path` option. The path must be located within the {{es}} configuration directory (`ES_PATH_CONF`). For example: - ```yaml - xpack: - security: - authc: - realms: - pki: - pki1: - order: 1 - truststore: - path: "pki1_truststore.jks" - ``` + ```yaml + xpack: + security: + authc: + realms: + pki: + pki1: + order: 1 + truststore: + path: "pki1_truststore.jks" + ``` - If the truststore is password protected, the password should be configured by adding the appropriate `secure_password` setting to the {{es}} keystore. 
For example, the following command adds the password for the example realm above: + :::{tip} + If you're using {{ece}} or {{ech}}, then you must [upload this file as a custom bundle](/deploy-manage/deploy/elastic-cloud/upload-custom-plugins-bundles.md) before it can be referenced. + + If you're using {{eck}}, then install the file as a [custom configuration file](/deploy-manage/deploy/cloud-on-k8s/custom-configuration-files-plugins.md#use-a-volume-and-volume-mount-together-with-a-configmap-or-secret). + + If you're using a self-managed cluster, then the file must be present on each node. + ::: + + The `certificate_authorities` option can be used as an alternative to the `truststore.path` setting, when the certificate files are PEM formatted. The setting accepts a list. The two options are exclusive, they cannot be both used simultaneously. + + 2. If the trust store is password protected, the password should be configured by adding the appropriate `secure_password` setting [to the {{es}} keystore](/deploy-manage/security/secure-settings.md). For example, in a self-managed cluster, the following command adds the password for the example realm above: ```shell bin/elasticsearch-keystore add \ xpack.security.authc.realms.pki.pki1.truststore.secure_password ``` - The `certificate_authorities` option can be used as an alternative to the `truststore.path` setting, when the certificate files are PEM formatted. The setting accepts a list. The two options are exclusive, they cannot be both used simultaneously. - -7. Map roles for PKI users. +8. Map roles for PKI users. - You map roles for PKI users through the [role mapping APIs](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-security) or by using a file stored on each node. Both configuration options are merged together. When a user authenticates against a PKI realm, the privileges for that user are the union of all privileges defined by the roles to which the user is mapped. + You map roles for PKI users through the [role mapping APIs](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-security) or by using a file. Both configuration options are merged together. When a user authenticates against a PKI realm, the privileges for that user are the union of all privileges defined by the roles to which the user is mapped. You identify a user by the distinguished name in their certificate. For example, the following mapping configuration maps `John Doe` to the `user` role using the role mapping API: @@ -127,6 +142,13 @@ To use PKI in {{es}}, you configure a PKI realm, enable client authentication on 1. The name of a role. 2. The distinguished name (DN) of a PKI user. + :::{tip} + If you're using {{ece}} or {{ech}}, then you must [upload this file as a custom bundle](/deploy-manage/deploy/elastic-cloud/upload-custom-plugins-bundles.md) before it can be referenced. + + If you're using {{eck}}, then install the file as a [custom configuration file](/deploy-manage/deploy/cloud-on-k8s/custom-configuration-files-plugins.md#use-a-volume-and-volume-mount-together-with-a-configmap-or-secret). + + If you're using a self-managed cluster, then the file must be present on each node. + ::: The file’s path defaults to `ES_PATH_CONF/role_mapping.yml`. You can specify a different path (which must be within `ES_PATH_CONF`) by using the `files.role_mapping` realm setting (e.g. `xpack.security.authc.realms.pki.pki1.files.role_mapping`). 
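+    As a sketch only (the file name is an arbitrary example, not one used elsewhere in this guide), pointing the `pki1` realm at a custom mapping file inside the configuration directory could look like this:
+
+    ```yaml
+    xpack.security.authc.realms.pki.pki1:
+      order: 1
+      # The path must be inside ES_PATH_CONF; this file name is a placeholder
+      files.role_mapping: "pki_role_mapping.yml"
+    ```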
@@ -140,15 +162,13 @@ To use PKI in {{es}}, you configure a PKI realm, enable client authentication on The PKI realm supports [authorization realms](realm-chains.md#authorization_realms) as an alternative to role mapping. :::: - - ## PKI authentication for clients connecting to {{kib}} [pki-realm-for-proxied-clients] -By default, the PKI realm relies on the node’s network interface to perform the SSL/TLS handshake and extract the client certificate. This behaviour requires that clients connect directly to {{es}} so that their SSL connection is terminated by the {{es}} node. If SSL/TLS authentication is to be performed by {{kib}}, the PKI realm must be configured to permit delegation. +By default, the PKI realm relies on the node’s network interface to perform the SSL/TLS handshake and extract the client certificate. This behavior requires that clients connect directly to {{es}} so that their SSL connection is terminated by the {{es}} node. If SSL/TLS authentication is to be performed by {{kib}}, the PKI realm must be configured to permit delegation. Specifically, when clients presenting X.509 certificates connect to {{kib}}, {{kib}} performs the SSL/TLS authentication. {{kib}} then forwards the client’s certificate chain (by calling an {{es}} API) to have them further validated by the PKI realms that have been configured for delegation. -To permit authentication delegation for a specific {{es}} PKI realm, start by configuring the realm for the usual case, as detailed in the [PKI authentication for clients connecting directly to {{es}}](#pki-realm-for-direct-clients) section. In this scenario, when you enable TLS, it is mandatory that you [encrypt HTTP client communications](../../security/secure-http-communications.md#encrypt-http-communication). +To permit authentication delegation for a specific {{es}} PKI realm, start by [configuring the realm](#pki-realm-for-direct-clients). In this scenario, for self-managed clusters, it is mandatory that you [encrypt HTTP client communications](../../security/secure-http-communications.md#encrypt-http-communication) when you enable TLS. You must also explicitly configure a `truststore` (or, equivalently `certificate_authorities`) even though it is the same trust configuration that you have configured on the network layer. The `xpack.security.authc.token.enabled` and `delegation.enabled` settings must also be `true`. For example: @@ -166,7 +186,7 @@ xpack: path: "pki1_truststore.jks" ``` -After you restart {{es}}, this realm can validate delegated PKI authentication. You must then [configure {{kib}} to allow PKI certificate authentication](user-authentication.md#pki-authentication). +After you restart {{es}}, this realm can validate delegated PKI authentication. You must then [configure {{kib}} to allow PKI certificate authentication](/deploy-manage/users-roles/cluster-or-deployment-auth/kibana-authentication.md#pki-authentication). A PKI realm with `delegation.enabled` still works unchanged for clients connecting directly to {{es}}. Directly authenticated users and users that are PKI authenticated by delegation to {{kib}} both follow the same [role mapping rules](mapping-users-groups-to-roles.md) or [authorization realms configurations](realm-chains.md#authorization_realms). @@ -194,5 +214,6 @@ PUT /_security/role_mapping/direct_pki_only 1. If this metadata field is set (that is to say, it is **not** `null`), the user has been authenticated in the delegation scenario. 
+## Use PKI authentication for {{kib}} [pki-realm-kibana] - +If you want to use PKI authentication to authenticate using your browser and {{kib}}, you need to enable the relevant authentication provider in {{kib}} configuration. See [{{kib}} authentication](/deploy-manage/users-roles/cluster-or-deployment-auth/kibana-authentication.md#pki-authentication). \ No newline at end of file diff --git a/deploy-manage/users-roles/cluster-or-deployment-auth/quickstart.md b/deploy-manage/users-roles/cluster-or-deployment-auth/quickstart.md index d55b31160..1f62145b6 100644 --- a/deploy-manage/users-roles/cluster-or-deployment-auth/quickstart.md +++ b/deploy-manage/users-roles/cluster-or-deployment-auth/quickstart.md @@ -1,59 +1,64 @@ --- mapped_pages: - https://www.elastic.co/guide/en/kibana/current/tutorial-secure-access-to-kibana.html +applies_to: + deployment: + ess: all + ece: all + eck: all + self: all --- # Quickstart [tutorial-secure-access-to-kibana] -{{kib}} is home to an ever-growing suite of powerful features, which help you get the most out of your data. Your data is important, and should be protected. {{kib}} allows you to secure access to your data and control how users are able to interact with your data. +If you plan to use native Elasticsearch user and role management, then you can manage your users and roles completely within your {{kib}} instance. -For example, some users might only need to view your stunning dashboards, while others might need to manage your fleet of Elastic agents and run machine learning jobs to detect anomalous behavior in your network. - -This guide introduces you to three of {{kib}}'s security features: spaces, roles, and users. By the end of this tutorial, you will learn how to manage these entities, and how you can leverage them to secure access to both {{kib}} and your data. +You can use native access management features to give your users access to only the surfaces and features they need. For example, some users might only need to view your dashboards, while others might need to manage your fleet of Elastic agents and run machine learning jobs to detect anomalous behavior in your network. +This guide introduces you to three basic user and access management features: [spaces](/deploy-manage/manage-spaces.md), [roles](/deploy-manage/users-roles/cluster-or-deployment-auth/user-roles.md), and [native users](/deploy-manage/users-roles/cluster-or-deployment-auth/native.md). By the end of this tutorial, you will learn how to manage these entities, and how you can leverage them to secure access to {{es}}, {{kib}}, and your data. ## Spaces [_spaces] Do you have multiple teams using {{kib}}? Do you want a “playground” to experiment with new visualizations or rules? If so, then [{{kib}} Spaces](../../manage-spaces.md) can help. -Think of a space as another instance of {{kib}}. A space allows you to organize your [dashboards](../../../explore-analyze/dashboards.md), [rules](../../../explore-analyze/alerts-cases/alerts.md), [machine learning jobs](../../../explore-analyze/machine-learning/machine-learning-in-kibana.md), and much more into their own categories. For example, you might have a Marketing space for your marketeers to track the results of their campaigns, and an Engineering space for your developers to [monitor application performance](/solutions/observability/apps/application-performance-monitoring-apm.md). +Think of a space as another instance of {{kib}}. 
A space allows you to organize your [dashboards](../../../explore-analyze/dashboards.md), [rules](../../../explore-analyze/alerts-cases/alerts.md), [machine learning jobs](../../../explore-analyze/machine-learning/machine-learning-in-kibana.md), and much more into their own categories. For example, you might have a **Marketing** space for your marketers to track the results of their campaigns, and an **Engineering** space for your developers to [monitor application performance](/solutions/observability/apps/application-performance-monitoring-apm.md). The assets you create in one space are isolated from other spaces, so when you enter a space, you only see the assets that belong to that space. -Refer to the [Spaces documentation](../../manage-spaces.md) for more information. +Refer to the [Spaces documentation](/deploy-manage/manage-spaces.md) for more information. ## Roles [_roles] -Once your spaces are setup, the next step to securing access is to provision your roles. Roles are a collection of privileges that allow you to perform actions in {{kib}} and Elasticsearch. Roles are assigned to users, and to [system accounts](built-in-users.md) that power the Elastic Stack. +After your spaces are set up, the next step to securing access is to provision your roles. Roles are a collection of privileges that allow you to perform actions in {{kib}} and Elasticsearch. Roles are assigned to users, and to [system accounts](built-in-users.md) that power the Elastic Stack. You can create your own roles, or use any of the [built-in roles](built-in-roles.md). Some built-in roles are intended for Elastic Stack components and should not be assigned to end users directly. -One of the more useful built-in roles is `kibana_admin`. Assigning this role to your users will grant access to all of {{kib}}'s features. This includes the ability to manage Spaces. +An example of a built-in role is `kibana_admin`. Assigning this role to your users will grant access to all of {{kib}}'s features. This includes the ability to manage spaces. -The built-in roles are great for getting started with the Elastic Stack, and for system administrators who do not need more restrictive access. With so many features, it’s not possible to ship more granular roles to accommodate everyone’s needs. This is where custom roles come in. +Built-in roles are great for getting started with the Elastic Stack, and for system administrators who do not need more restrictive access. However, if you need to control access with more precision, you can create [custom roles](/deploy-manage/users-roles/cluster-or-deployment-auth/defining-roles.md). As an administrator, you have the ability to create your own roles to describe exactly the kind of access your users should have. For example, you might create a `marketing_user` role, which you then assign to all users in your marketing department. This role would grant access to all of the necessary data and features for this team to be successful, without granting them access they don’t require. ## Users [_users] -Once your roles are setup, the next step to securing access is to create your users, and assign them one or more roles. {{kib}}'s user management allows you to provision accounts for each of your users. +After your roles are set up, the next step to securing access is to create your users, and assign them one or more roles. {{kib}}'s user management allows you to provision accounts for each of your users. ::::{tip} -Want Single Sign-on? 
{{kib}} supports a wide range of SSO implementations, including SAML, OIDC, LDAP/AD, and Kerberos. [Learn more about {{kib}}'s SSO features](user-authentication.md). +Want Single Sign-on? {{kib}} supports a wide range of SSO implementations, including SAML, OIDC, LDAP/AD, and Kerberos. [Learn more about SSO options](/deploy-manage/users-roles/cluster-or-deployment-auth/user-authentication.md). :::: ## Example: Create a user with access only to dashboards [tutorial-secure-kibana-dashboards-only] -Let’s work through an example together. Consider a marketing analyst who wants to monitor the effectiveness of their campaigns. They should be able to see their team’s dashboards, but not be allowed to view or manage anything else in {{kib}}. All of the team’s dashboards are located in the Marketing space. +Let’s work through an example together. Consider a marketing analyst who wants to monitor the effectiveness of their campaigns. They should be able to see their team’s dashboards, but not be allowed to view or manage anything else in {{kib}}. ### Create a space [_create_a_space] -Create a Marketing space for your marketing analysts to use. +Create a **Marketing** space for your marketing analysts to use. 1. Go to the **Spaces** management page using the navigation menu or the [global search field](/explore-analyze/find-and-organize/find-apps-and-objects.md). 2. Click **Create a space**. @@ -93,7 +98,7 @@ To create the role: 5. To grant access to dashboards in the `Marketing` space, locate the {{kib}} section, and click **Add {{kib}} privilege**: 1. From the **Spaces** dropdown, select the `Marketing` space. - 2. Expand the **Analytics*** section, and select the ***Read*** privilege for ***Dashboard**. + 2. Expand the **Analytics** section, and select the **Read** privilege for **Dashboard**. 3. Click **Add Kibana privilege**. 6. Click **Create role**. @@ -127,7 +132,7 @@ Now that you created a role, create a user account. Verify that the user and role are working correctly. -1. Logout of {{kib}} if you are already logged in. +1. Log out of {{kib}} if you are already logged in. 2. In the login screen, enter the username and password for the account you created. You’re taken into the `Marketing` space, and the main navigation shows only the **Dashboard** application. @@ -139,11 +144,9 @@ Verify that the user and role are working correctly. -## What’s next? [_whats_next_2] +## Next steps -This guide is an introduction to {{kib}}'s security features. Check out these additional resources to learn more about authenticating and authorizing your users. +This guide is an introduction to basic auth features. Check out these additional resources to learn more about authenticating and authorizing your users. * View the [authentication guide](user-authentication.md) to learn more about single-sign on and other login features. * View the [authorization guide](defining-roles.md) to learn more about authorizing access to {{kib}}'s features. - -Still have questions? Ask on our [Kibana discuss forum](https://discuss.elastic.co/c/kibana) and a fellow community member or Elastic engineer will help out. 
diff --git a/deploy-manage/users-roles/cluster-or-deployment-auth/realm-chains.md b/deploy-manage/users-roles/cluster-or-deployment-auth/realm-chains.md index a7c87cb9f..8d4e1ee05 100644 --- a/deploy-manage/users-roles/cluster-or-deployment-auth/realm-chains.md +++ b/deploy-manage/users-roles/cluster-or-deployment-auth/realm-chains.md @@ -1,22 +1,35 @@ --- mapped_pages: - https://www.elastic.co/guide/en/elasticsearch/reference/current/realm-chains.html +applies_to: + deployment: + ess: all + ece: all + eck: all + self: all --- # Realm chains [realm-chains] -[Realms](authentication-realms.md) live within a *realm chain*. It is essentially a prioritized list of configured realms (typically of various types). Realms are consulted in ascending order (that is to say, the realm with the lowest `order` value is consulted first). You must make sure each configured realm has a distinct `order` setting. In the event that two or more realms have the same `order`, the node will fail to start. +[Realms](authentication-realms.md) live within a *realm chain*. It is essentially a prioritized list of configured realms typically of various types. Realms are consulted in ascending order: the realm with the lowest `order` value is consulted first. -During the authentication process, {{stack}} {{security-features}} consult and try to authenticate the request one realm at a time. Once one of the realms successfully authenticates the request, the authentication is considered to be successful. The authenticated user is associated with the request, which then proceeds to the authorization phase. If a realm cannot authenticate the request, the next realm in the chain is consulted. If all realms in the chain cannot authenticate the request, the authentication is considered to be unsuccessful and an authentication error is returned (as HTTP status code `401`). +You must make sure each configured realm has a distinct `order` setting. In the event that two or more realms have the same `order`, the node will fail to start. + +During the authentication process, {{es}} consults and tries to authenticate the request one realm at a time. Once one of the realms successfully authenticates the request, the authentication is considered to be successful. The authenticated user is associated with the request, which then proceeds to the authorization phase. If a realm can't authenticate the request, the next realm in the chain is consulted. If none of the realms in the chain can authenticate the request, the authentication is considered to be unsuccessful and an authentication error is returned (as HTTP status code `401`). ::::{note} -Some systems (e.g. Active Directory) have a temporary lock-out period after several successive failed login attempts. If the same username exists in multiple realms, unintentional account lockouts are possible. For more information, see [Users are frequently locked out of Active Directory](../../../troubleshoot/elasticsearch/security/trouble-shoot-active-directory.md). +Some systems, such as Active Directory, have a temporary lock-out period after several successive failed login attempts. If the same username exists in multiple realms, unintentional account lockouts are possible. For more information, see [Users are frequently locked out of Active Directory](/troubleshoot/elasticsearch/security/trouble-shoot-active-directory.md). :::: +## Configure a realm chain + +The default realm chain contains the `file` and `native` realms. 
To explicitly configure a realm chain, specify the chain in the `elasticsearch.yml` file. -The default realm chain contains the `file` and `native` realms. To explicitly configure a realm chain, you specify the chain in the `elasticsearch.yml` file. If your realm chain does not contain `file` or `native` realm or does not disable them explicitly, `file` and `native` realms will be added automatically to the beginning of the realm chain in that order. To opt-out from the automatic behaviour, you can explicitly configure the `file` and `native` realms with the `order` and `enabled` settings. +If your realm chain does not contain `file` or `native` realm or does not disable them explicitly, `file` and `native` realms will be added automatically to the beginning of the realm chain in that order. To opt out from this automatic behavior, you can explicitly configure the `file` and `native` realms with the `order` and `enabled` settings. -The following snippet configures a realm chain that enables the `file` realm as well as two LDAP realms and an Active Directory realm, but disables the `native` realm. +Each realm has a unique name that identifies it. Each type of realm dictates its own set of required and optional settings. There are also settings that are common to all realms. To explore these settings, refer to [Realm settings](https://www.elastic.co/guide/en/elasticsearch/reference/current/security-settings.html#realm-settings). + +The following snippet configures a realm chain that enables the `file` realm, two LDAP realms, and an Active Directory realm, and disables the `native` realm. ```yaml xpack.security.authc.realms: @@ -42,19 +55,17 @@ xpack.security.authc.realms: enabled: false ``` -As can be seen above, each realm has a unique name that identifies it. Each type of realm dictates its own set of required and optional settings. That said, there are [settings that are common to all realms](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/configuration-reference/security-settings.md#ref-realm-settings). - ## Delegating authorization to another realm [authorization_realms] -Some realms have the ability to perform *authentication* internally, but delegate the lookup and assignment of roles (that is, *authorization*) to another realm. +Some realms have the ability to perform authentication internally, but delegate the lookup and assignment of roles (authorization) to another realm. -For example, you may wish to use a PKI realm to authenticate your users with TLS client certificates, then lookup that user in an LDAP realm and use their LDAP group assignments to determine their roles in Elasticsearch. +For example, you might want to use a PKI realm to authenticate your users with TLS client certificates, then look up that user in an LDAP realm and use their LDAP group assignments to determine their roles in {{es}}. -Any realm that supports retrieving users (without needing their credentials) can be used as an *authorization realm* (that is, its name may appear as one of the values in the list of `authorization_realms`). See [Looking up users without authentication](looking-up-users-without-authentication.md) for further explanation on which realms support this. +Any realm that supports retrieving users without needing their credentials can be used as an authorization realm. Refer to [Looking up users without authentication](looking-up-users-without-authentication.md) to learn which realms can be used as authorization realms. 
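+As a rough sketch only (the realm names `pki1` and `ldap1` are hypothetical, and most LDAP settings are omitted), the PKI-plus-LDAP scenario described above could be wired together in `elasticsearch.yml` like this; the `authorization_realms` setting involved is explained next:
+
+```yaml
+xpack.security.authc.realms:
+  pki.pki1:
+    order: 1
+    # Authenticate with client certificates, but look up roles through the ldap1 realm
+    authorization_realms: ldap1
+  ldap.ldap1:
+    order: 2
+    url: "ldaps://ldap.example.com:636"
+    # Additional required LDAP settings (such as user search configuration) omitted for brevity
+```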
For realms that support this feature, it can be enabled by configuring the `authorization_realms` setting on the authenticating realm. Check the list of [supported settings](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/configuration-reference/security-settings.md#realm-settings) for each realm to see if they support the `authorization_realms` setting. -If delegated authorization is enabled for a realm, it authenticates the user in its standard manner (including relevant caching) then looks for that user in the configured list of authorization realms. It tries each realm in the order they are specified in the `authorization_realms` setting. The user is retrieved by principal - the user must have identical usernames in the *authentication* and *authorization realms*. If the user cannot be found in any of the authorization realms, authentication fails. +If delegated authorization is enabled for a realm, it authenticates the user in its standard manner, including relevant caching, then looks for that user in the configured list of authorization realms. It tries each realm in the order they are specified in the `authorization_realms` setting. The user is retrieved by principal - the user must have identical usernames in the authentication and authorization realms. If the user can't be found in any of the authorization realms, then authentication fails. See [Configuring authorization delegation](authorization-delegation.md) for more details. diff --git a/deploy-manage/users-roles/cluster-or-deployment-auth/saml-entra.md b/deploy-manage/users-roles/cluster-or-deployment-auth/saml-entra.md new file mode 100644 index 000000000..b1213b102 --- /dev/null +++ b/deploy-manage/users-roles/cluster-or-deployment-auth/saml-entra.md @@ -0,0 +1,145 @@ +--- +mapped_urls: + - https://www.elastic.co/guide/en/cloud/current/ec-securing-clusters-saml-azure.html +navigation_title: With Microsoft Entra ID +applies_to: + deployment: + self: + ess: + ece: + eck: +--- +# Set up SAML with Microsoft Entra ID [ec-securing-clusters-saml-azure] + +This guide provides a walk-through of how to configure Microsoft Entra ID, formerly known as Azure Active Directory, as an identity provider for SAML single sign-on (SSO) authentication, used for accessing {{kib}} in {{ech}}. + +For more information about SAML configuration, refer to: + +* [Secure your clusters with SAML](/deploy-manage/users-roles/cluster-or-deployment-auth/saml.md) +* [Single Sign-On SAML protocol](https://docs.microsoft.com/en-us/azure/active-directory/develop/single-sign-on-saml-protocol) + + +## Configure SAML with Microsoft Entra ID to access {{kib}} [ec-securing-clusters-saml-azure-kibana] + +Follow these steps to configure SAML with Microsoft Entra ID as an identity provider to access {{kib}}. + +1. Configure the Entra Identity Provider: + + 1. Log in to the [Azure Portal](https://portal.azure.com/) and navigate to **Entra** (formerly Azure Active Directory). + 2. Click **Enterprise applications** and then **New application** to register a new application. + 3. Click **Create your own application**, provide a name, and select the **Integrate any other application you don’t find in the gallery** option. + + :::{image} ../../../images/cloud-ec-saml-azuread-create-app.png + :alt: The Azure Create your own application flyout + ::: + + 4. Navigate to the new application, click **Users and groups**, and add all necessary users and groups. Only the users and groups that you add here will have SSO access to the {{stack}}. 
+
+        :::{image} ../../../images/cloud-ec-saml-azuread-users-and-groups.png
+        :alt: The Entra User and groups page
+        :::
+
+    5. Navigate to **Single sign-on** and edit the basic SAML configuration, adding the following information:
+
+        * `Identifier (Entity ID)` - a string that uniquely identifies a SAML service provider. We recommend using your {{kib}} URL, but you can use any identifier.
+
+            For example, `https://saml-azure.kb.northeurope.azure.elastic-cloud.com:443`.
+
+        * `Reply URL` - This is the {{kib}} URL with `/api/security/saml/callback` appended.
+
+            For example, `https://saml-azure.kb.northeurope.azure.elastic-cloud.com:443/api/security/saml/callback`.
+
+        * `Logout URL` - This is the {{kib}} URL with `/logout` appended.
+
+            For example, `https://saml-azure.kb.northeurope.azure.elastic-cloud.com:443/logout`.
+
+        :::{image} ../../../images/cloud-ec-saml-azuread-kibana-config.png
+        :alt: The Entra SAML configuration page with {{kib}} settings
+        :::
+
+    6. Navigate to **SAML-based Single sign-on**, open the **User Attributes & Claims** configuration, and update the fields to suit your needs. These settings control what information from Microsoft Entra ID will be made available to the {{stack}} during SSO. This information can be used to identify a user in the {{stack}} and/or to assign different roles to users in the {{stack}}. We suggest that you configure a proper value for the `Unique User Identifier (Name ID)` claim that identifies the user uniquely and is not prone to changes.
+
+        :::{image} ../../../images/cloud-ec-saml-azuread-user-attributes.png
+        :alt: The Entra ID User Attributes & Claims page
+        :::
+
+    7. From the SAML configuration page, make a note of the `App Federation Metadata URL`.
+
+2. Configure {{es}} and {{kib}} for SAML:
+
+    1. [Update your {{es}} user settings](/deploy-manage/deploy/elastic-cloud/edit-stack-settings.md) with the following configuration:
+
+        ```yaml
+        xpack.security.authc.realms.saml.kibana-realm:
+          order: 2
+          attributes.principal: nameid
+          attributes.groups: "http://schemas.microsoft.com/ws/2008/06/identity/claims/groups"
+          idp.metadata.path: "https://login.microsoftonline.com/<Tenant ID>/federationmetadata/2007-06/federationmetadata.xml?appid=<Application ID>"
+          idp.entity_id: "https://sts.windows.net/<Tenant ID>/"
+          sp.entity_id: "<Kibana Endpoint URL>"
+          sp.acs: "<Kibana Endpoint URL>/api/security/saml/callback"
+          sp.logout: "<Kibana Endpoint URL>/logout"
+        ```
+
+        Where:
+
+        * `<Application ID>` is your Application ID, available in the application details in Azure.
+        * `<Tenant ID>` is your Tenant ID, available in the tenant overview page in Azure.
+        * `<Kibana Endpoint URL>` is your {{kib}} endpoint, available from the {{ech}} console. Ensure this is the same value that you set for `Identifier (Entity ID)` in the earlier Microsoft Entra ID configuration step.
+
+        For `idp.metadata.path`, we’ve shown the format to construct the URL. This value should be identical to the `App Federation Metadata URL` setting that you made a note of in the previous step.
+
+        If you're using {{ece}} or {{ech}}, and you're using machine learning or a deployment with hot-warm architecture, you must include this configuration in the user settings section for each node type.
+
+    2. Next, configure {{kib}} to enable SAML authentication:
+        1. 
[Update your {{kib}} user settings](/deploy-manage/deploy/elastic-cloud/edit-stack-settings.md) with the following configuration: + + ```yaml + xpack.security.authc.providers: + saml.kibana-realm: + order: 0 + realm: kibana-realm + description: "Log in with Microsoft Entra ID" + ``` + + The configuration values used in the example above are: + + `xpack.security.authc.providers` + : Add `saml` provider to instruct {{kib}} to use SAML SSO as the authentication method. + + `xpack.security.authc.providers.saml..realm` + : Set this to the name of the SAML realm that you have used in your [{{es}} realm configuration](/deploy-manage/users-roles/cluster-or-deployment-auth/saml.md#saml-create-realm). For this example, use the realm name that you configured in the previous step: `kibana-realm`. + + 2. Create a role mapping. + + The following role mapping for SAML SSO restricts access to a specific user `(email)` based on the `attributes.principal` email address. This prevents other users on the same domain from having access to your deployment. You can remove the rule or adjust it at your convenience. + + ```json + POST /_security/role_mapping/SAML_kibana + { + "enabled": true, + "roles": [ "superuser" ], + "rules" : { + "all" : [ + { + "field" : { + "realm.name" : "kibana-realm" + } + }, + { + "field" : { + "username" : [ + "" + ] + } + } + ] + }, + "metadata": { "version": 1 } + } + ``` + + For more information, refer to [Configure role mapping](/deploy-manage/users-roles/cluster-or-deployment-auth/saml.md#saml-role-mapping) in the {{es}} SAML documentation. + + +You should now have successfully configured SSO access to {{kib}} with Microsoft Entra ID as the identity provider. diff --git a/deploy-manage/users-roles/cluster-or-deployment-auth/saml.md b/deploy-manage/users-roles/cluster-or-deployment-auth/saml.md index 183c987c2..6d3893716 100644 --- a/deploy-manage/users-roles/cluster-or-deployment-auth/saml.md +++ b/deploy-manage/users-roles/cluster-or-deployment-auth/saml.md @@ -6,84 +6,841 @@ mapped_urls: - https://www.elastic.co/guide/en/cloud-enterprise/current/ece-securing-clusters-SAML.html - https://www.elastic.co/guide/en/cloud/current/ec-securing-clusters-SAML.html - https://www.elastic.co/guide/en/cloud/current/ec-sign-outgoing-saml-message.html - - https://www.elastic.co/guide/en/cloud/current/ec-securing-clusters-saml-azure.html - https://www.elastic.co/guide/en/cloud-heroku/current/ech-securing-clusters-SAML.html - https://www.elastic.co/guide/en/cloud-heroku/current/echsign-outgoing-saml-message.html - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-saml-authentication.html - https://www.elastic.co/guide/en/elasticsearch/reference/current/saml-guide-stack.html +navigation_title: SAML +applies_to: + deployment: + self: + ess: + ece: + eck: --- -# SAML +# SAML authentication [saml-realm] -% What needs to be done: Refine +The {{stack}} supports SAML single-sign-on (SSO) into {{kib}}, using {{es}} as a backend service. -% GitHub issue: https://github.com/elastic/docs-projects/issues/347 +The {{security-features}} provide this support using the Web Browser SSO profile of the SAML 2.0 protocol. This protocol is specifically designed to support authentication using an interactive web browser, so it does not operate as a standard authentication realm. Instead, there are {{kib}} and {{es}} {{security-features}} that work together to enable interactive SAML sessions. 
-% Use migrated content from existing pages that map to this page:
+This means that the SAML realm is not suitable for use by standard REST clients. If you configure a SAML realm for use in {{kib}}, you should also configure another realm, such as the [native realm](/deploy-manage/users-roles/cluster-or-deployment-auth/native.md) in your authentication chain.
-% - [ ] ./raw-migrated-files/elasticsearch/elasticsearch-reference/saml-realm.md
-% - [ ] ./raw-migrated-files/cloud/cloud-enterprise/ece_sign_outgoing_saml_message.md
-% - [ ] ./raw-migrated-files/cloud/cloud-enterprise/ece_optional_settings.md
-% - [ ] ./raw-migrated-files/cloud/cloud-enterprise/ece-securing-clusters-SAML.md
-% - [ ] ./raw-migrated-files/cloud/cloud/ec-securing-clusters-SAML.md
-% - [ ] ./raw-migrated-files/cloud/cloud/ec-sign-outgoing-saml-message.md
-% - [ ] ./raw-migrated-files/cloud/cloud/ec-securing-clusters-saml-azure.md
-% - [ ] ./raw-migrated-files/cloud/cloud-heroku/ech-securing-clusters-SAML.md
-% - [ ] ./raw-migrated-files/cloud/cloud-heroku/echsign-outgoing-saml-message.md
-% - [ ] ./raw-migrated-files/cloud-on-k8s/cloud-on-k8s/k8s-saml-authentication.md
-% - [ ] ./raw-migrated-files/elasticsearch/elasticsearch-reference/saml-guide-stack.md
-% Notes: some steps not needed for cloud / don't work needs clarification that there is an orch level
+Because this feature is designed with {{kib}} in mind, most sections of this guide assume {{kib}} is used. To learn how a custom web application could use the SAML REST APIs to authenticate users to {{es}} with SAML, refer to [SAML without {{kib}}](#saml-no-kibana).
-% Internal links rely on the following IDs being on this page (e.g. as a heading ID, paragraph ID, etc):
+The SAML support in {{kib}} is designed with the expectation that it will be the primary (or sole) authentication method for users of that {{kib}} instance. After you enable SAML authentication in {{kib}}, it will affect all users who try to log in. The [Configuring {{kib}}](/deploy-manage/users-roles/cluster-or-deployment-auth/saml.md#saml-configure-kibana) section provides more detail about how this works.
-$$$saml-create-realm$$$
+For a detailed walk-through of how to implement SAML authentication for {{kib}} with Microsoft Entra ID as an identity provider, refer to our guide [Set up SAML with Microsoft Entra ID](/deploy-manage/users-roles/cluster-or-deployment-auth/saml-entra.md).
-$$$saml-attributes-mapping$$$
+To configure SAML, you need to perform the following steps:
-$$$saml-attribute-mapping-nameid$$$
+1. [Configure the prerequisites](#prerequisites)
+2. [Create one or more SAML realms](#saml-create-realm)
+3. [Configure role mappings](#saml-role-mapping)
+4. [Configure {{kib}} to use SAML as the authentication provider](#saml-configure-kibana)
-$$$saml-kibana-basic$$$
+Additional steps outlined in this document are optional.
-$$$ec-securing-clusters-saml-azure-kibana$$$
+::::{note}
+{{stack}} SSO is a [subscription feature](https://www.elastic.co/subscriptions).
+::::
-$$$ec-securing-clusters-saml-azure-enterprise-search$$$
+::::{tip}
+This topic describes implementing SAML SSO at the deployment or cluster level, for the purposes of authenticating with a {{kib}} instance.
-$$$saml-role-mapping$$$
+Depending on your deployment type, you can also configure SSO for the following use cases:
-$$$saml-configure-kibana$$$
+* If you're using {{ech}} or {{serverless-full}}, then you can configure SAML SSO [at the organization level](/deploy-manage/users-roles/cloud-organization/configure-saml-authentication.md). SAML SSO configured at this level can be used to control access to both the {{ecloud}} Console and to specific {{ech}} deployments and {{serverless-full}} projects. [Learn more about deployment-level vs. organization-level SSO](/deploy-manage/users-roles/cloud-organization.md#organization-deployment-sso).
+* If you're using {{ece}}, then you can configure SAML [at the installation level](/deploy-manage/users-roles/cloud-enterprise-orchestrator/saml.md), and then configure [SSO](/deploy-manage/users-roles/cloud-enterprise-orchestrator/configure-sso-for-deployments.md) for deployments.
+::::
-$$$saml-logout$$$
+## Identity provider requirements [saml-guide-idp]
-$$$saml-enable-http$$$
+In SAML terminology, the {{stack}} is operating as a *Service Provider*.
-$$$saml-enable-token$$$
+The other component that is needed to enable SAML single sign-on is the *Identity Provider*, which is a service that handles your credentials and performs the actual authentication of users.
-$$$saml-es-user-properties$$$
+If you are interested in configuring SSO into {{kib}}, then you need to provide {{es}} with information about your *Identity Provider*, and you will need to register the {{stack}} as a known *Service Provider* within that Identity Provider. There are also a few configuration changes that are required in {{kib}} to activate the SAML authentication provider.
-$$$saml-enc-sign$$$
+### Supported IdPs
-$$$saml-user-metadata$$$
+The {{stack}} supports the SAML 2.0 Web Browser SSO and Single Logout profiles, and can integrate with any Identity Provider (IdP) that supports at least the SAML 2.0 Web Browser SSO profile. It has been tested with a number of popular IdP implementations, such as [Microsoft Active Directory Federation Services (ADFS)](https://www.elastic.co/blog/how-to-configure-elasticsearch-saml-authentication-with-adfs), [Microsoft Entra ID](/deploy-manage/users-roles/cluster-or-deployment-auth/saml-entra.md), and [Okta](https://www.elastic.co/blog/how-to-set-up-okta-saml-login-kibana-elastic-cloud).
-$$$saml-elasticsearch-authentication$$$
+### Required IdP information
-$$$saml-no-kibana-sp-init-sso$$$
+The {{stack}} accepts a standard XML-formatted SAML *metadata* document, which defines the capabilities and features of your IdP. You should be able to download or generate such a document within your IdP administration interface. You can pass this IdP document as a URL, or download it and make the file available to {{es}}. For more information, see [`idp.metadata.path`](#idp-metadata-path).
-$$$req-authn-context$$$
+The IdP will have been assigned an identifier or *EntityID*, which is most commonly expressed in *Uniform Resource Identifier* (URI) form. Your admin interface might tell you what this is, or you might need to read the metadata document to find it: look for the `entityID` attribute on the `EntityDescriptor` element.
-$$$saml-guide-idp$$$
+Most IdPs will provide an appropriate metadata file with all the features that the {{stack}} requires, and should only require the configuration steps described below.
The minimum requirements that the {{stack}} has for the IdP’s metadata are:
+
+* An `<EntityDescriptor>` with an `entityID` that matches the {{es}} [configuration](/deploy-manage/users-roles/cluster-or-deployment-auth/saml.md#saml-create-realm)
+* An `<IDPSSODescriptor>` that supports the SAML 2.0 protocol (`urn:oasis:names:tc:SAML:2.0:protocol`).
+* At least one `<KeyDescriptor>` that is configured for *signing* (that is, it has `use="signing"` or leaves the `use` unspecified)
+* A `<SingleSignOnService>` with binding of HTTP-Redirect (`urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect`)
+* If you want to support [Single Logout](/deploy-manage/users-roles/cluster-or-deployment-auth/saml.md#saml-logout), a `<SingleLogoutService>` with binding of HTTP-Redirect (`urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect`)
+
+### Signing requirements
+
+The {{stack}} requires that all messages from the IdP are signed. For authentication `<Response>` messages, the signature can be applied to either the response itself, or to the individual assertions. For `<LogoutRequest>` messages, the message itself must be signed, and the signature should be provided as a URL parameter, as required by the `HTTP-Redirect` binding.
+
+## Prerequisites
+
+Before you set up SAML single sign-on, you must have a [SAML IdP](#saml-guide-idp) configured.
+
+If you're using a self-managed cluster, then perform the following additional steps:
+
+* Enable TLS for HTTP.
+
+    If your {{es}} cluster is operating in production mode, you must configure the HTTP interface to use SSL/TLS before you can enable SAML authentication. For more information, see [Encrypt HTTP client communications for {{es}}](/deploy-manage/security/set-up-basic-security-plus-https.md#encrypt-http-communication).
+
+    If you started {{es}} [with security enabled](/deploy-manage/deploy/self-managed/installing-elasticsearch.md), then TLS is already enabled for HTTP.
+
+    {{ech}}, {{ece}}, and {{eck}} have TLS enabled by default.
+
+* Enable the token service.
+
+    The {{es}} SAML implementation makes use of the {{es}} token service. If you configure TLS on the HTTP interface, this service is automatically enabled. It can be explicitly configured by adding the following setting in your `elasticsearch.yml` file:
+
+    ```yaml
+    xpack.security.authc.token.enabled: true
+    ```
+
+    Because {{ech}}, {{ece}}, and {{eck}} have TLS enabled by default, the token service is also enabled by default on those platforms.
+
+## Create a SAML realm [saml-create-realm]
+
+SAML authentication is enabled by configuring a SAML realm within the authentication chain for {{es}}.
+
+This realm has a few mandatory settings and a number of optional settings. The available settings are described in detail in [Security settings](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/configuration-reference/security-settings.md):
+
+* [SAML realm settings](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/configuration-reference/security-settings.md#ref-saml-settings)
+* [SAML realm signing settings](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/configuration-reference/security-settings.md#ref-saml-signing-settings)
+* [SAML realm encryption settings](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/configuration-reference/security-settings.md#ref-saml-encryption-settings)
+* [SAML realm SSL settings](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/configuration-reference/security-settings.md#ref-saml-ssl-settings)
+
+This guide will walk you through the most common settings.
+
+Create a realm by adding the following to your `elasticsearch.yml` configuration file.
Each configuration value is explained below. + +If you're using {{ece}} or {{ech}}, and you're using machine learning or a deployment with hot-warm architecture, you must include this configuration in the user settings section for each node type. + +```yaml +xpack.security.authc.realms.saml.saml1: + order: 2 + idp.metadata.path: saml/idp-metadata.xml + idp.entity_id: "https://sso.example.com/" + sp.entity_id: "https://kibana.example.com/" + sp.acs: "https://kibana.example.com/api/security/saml/callback" + sp.logout: "https://kibana.example.com/logout" + attributes.principal: "urn:oid:0.9.2342.19200300.100.1.1" + attributes.groups: "urn:oid:1.3.6.1.4.1.5923.1.5.1." +``` + +::::{dropdown} Common settings + +xpack.security.authc.realms.saml.saml1 +: Defines a new `saml` authentication realm named "saml1". See [Realms](/deploy-manage/users-roles/cluster-or-deployment-auth/authentication-realms.md) for more explanation of realms. + +order +: The order of the realm within the realm chain. Realms with a lower order have highest priority and are consulted first. We recommend giving password-based realms such as file, native, LDAP, and Active Directory the lowest order (highest priority), followed by SSO realms such as SAML and OpenID Connect. If you have multiple realms of the same type, give the most frequently accessed realm the lowest order to have it consulted first. + + If you're using {{eck}}, then make sure not to disable Elasticsearch’s file realm set by ECK, as ECK relies on the file realm for its operation. Set the `order` setting of the SAML realm to a greater value than the `order` value set for the file and native realms, which is by default -100 and -99 respectively. + +idp.metadata.path +: $$$idp-metadata-path$$$ The path to the metadata file for your Identity Provider. The metadata file path can either be a path, or an HTTPS URL. + + :::{tip} + If you want to pass a file path, then review the following: + + * File path settings are resolved relative to the {{es}} config directory. {{es}} will automatically monitor this file for changes and will reload the configuration whenever it is updated. + * If you're using {{ece}} or {{ech}}, then you must upload the file [as a custom bundle](/deploy-manage/deploy/elastic-cloud/upload-custom-plugins-bundles.md) before it can be referenced. + * If you're using {{eck}}, then install the file as [custom configuration files](/deploy-manage/deploy/cloud-on-k8s/custom-configuration-files-plugins.md#use-a-volume-and-volume-mount-together-with-a-configmap-or-secret). + ::: + +idp.entity_id +: The identifier (SAML EntityID) that your IdP uses. It should match the `entityID` attribute within the metadata file. + +sp.entity_id +: A unique identifier for your {{kib}} instance, expressed as a URI. You will use this value when you add {{kib}} as a service provider within your IdP. We recommend that you use the base URL for your {{kib}} instance as the entity ID. + +sp.acs +: The *Assertion Consumer Service* (ACS) endpoint is the URL within {{kib}} that accepts authentication messages from the IdP. This ACS endpoint supports the SAML HTTP-POST binding only. It must be a URL that is accessible from the web browser of the user who is attempting to login to {{kib}}, it does not need to be directly accessible by {{es}} or the IdP. 
The correct value may vary depending on how you have installed {{kib}} and whether there are any proxies involved, but it will typically be `${kibana-url}/api/security/saml/callback` where *${kibana-url}* is the base URL for your {{kib}} instance.
+
+sp.logout
+: The URL within {{kib}} that accepts logout messages from the IdP. Like the `sp.acs` URL, it must be accessible from the web browser, but does not need to be directly accessible by {{es}} or the IdP. The correct value may vary depending on how you have installed {{kib}} and whether there are any proxies involved, but it will typically be `${kibana-url}/logout` where *${kibana-url}* is the base URL for your {{kib}} instance.
+
+attributes.principal
+: See [Attribute mapping](#saml-attributes-mapping).
+
+attributes.groups
+: See [Attribute mapping](#saml-attributes-mapping).
+::::
+
+## Map SAML attributes to {{es}} attributes [saml-attributes-mapping]
+
+When a user connects to {{kib}} through your Identity Provider, the Identity Provider will supply a SAML Assertion about the user. The assertion will contain an *Authentication Statement* indicating that the user has successfully authenticated to the IdP and one or more *Attribute Statements* that will include *Attributes* for the user.
+
+These attributes might include information like:
+
+* The user’s username
+* The user’s email address
+* The user’s groups or roles
+
+Attributes in SAML are [usually](#saml-attribute-mapping-nameid) named using a URI such as `urn:oid:0.9.2342.19200300.100.1.1` or `http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn`, and have one or more values associated with them.
+
+These attribute identifiers vary between IdPs, and most IdPs offer ways to customize the URIs and their associated values.
+
+{{es}} uses these attributes to infer information about the user who has logged in, and they can be used for [role mapping](#saml-role-mapping).
+
+### How attributes appear in user metadata [saml-user-metadata]
+
+By default, users who authenticate via SAML will have some additional metadata fields:
+
+* `saml_nameid` will be set to the value of the `NameID` element in the SAML authentication response
+* `saml_nameid_format` will be set to the full URI of the NameID’s `format` attribute
+* Every SAML Attribute that is provided in the authentication response (regardless of whether it is mapped to an {{es}} user property) will be added as the metadata field `saml(name)` where "name" is the full URI name of the attribute. For example `saml(urn:oid:0.9.2342.19200300.100.1.3)`.
+* Every SAML Attribute that has a *friendlyName* will also be added as the metadata field `saml_friendlyName` where "friendlyName" is the friendly name of the attribute. For example `saml_mail`.
+
+This behavior can be disabled by adding `populate_user_metadata: false` as a setting in the SAML realm.
+
+### Map attributes
+
+In order for SAML attributes to be useful in {{es}}, {{es}} and the IdP need to have a common value for the names of the attributes. This is done manually, by configuring the IdP and the SAML realm to use the same URI name for each logical user attribute.
+
+The recommended steps for configuring these SAML attributes are as follows:
+
+1. Consult your IdP to see what user attributes it can provide. This varies greatly between providers, but you should be able to obtain a list from the documentation, or from your local admin.
+2. 
Review the list of [user properties](#saml-es-user-properties) that {{es}} supports, and decide which of them are useful to you, and can be provided by your IdP. At a *minimum*, the `principal` attribute is required.
+3. Configure your IdP to "release" those attributes to your {{kib}} SAML service provider. This process varies by provider: some will provide a user interface for this, while others may require that you edit configuration files.
+
+    Because {{es}} does not require that any specific URIs are used, you can use whichever URIs are recommended by the IdP or your local administrator for each attribute.
+4. Configure the SAML realm in {{es}} to associate the [{{es}} user properties](#saml-es-user-properties) to the URIs that you configured in your IdP. The [sample configuration](#saml-create-realm) configures the `principal` and `groups` attributes.
+
+### Special attribute names [saml-attribute-mapping-nameid]
+
+In general, {{es}} expects that the configured value for an attribute will be a URI, such as `urn:oid:0.9.2342.19200300.100.1.1`. However, there are some additional names that can be used:
+
+`nameid`
+: This uses the SAML `NameID` value (all leading and trailing whitespace removed) instead of a SAML attribute. SAML `NameID` elements are an optional, but frequently provided, field within a SAML Assertion that the IdP may use to identify the Subject of that Assertion. In some cases the `NameID` will relate to the user’s login identifier (username) within the IdP, but in many cases it will be an internally generated identifier that has no obvious meaning outside of the IdP.
+
+`nameid:persistent`
+: This uses the SAML `NameID` value (all leading and trailing whitespace removed), but only if the NameID format is `urn:oasis:names:tc:SAML:2.0:nameid-format:persistent`. A SAML `NameID` element has an optional `Format` attribute that indicates the semantics of the provided name. It is common for IdPs to be configured with "transient" NameIDs that present a new identifier for each session. Since it is rarely useful to use a transient NameID as part of an attribute mapping, the "nameid:persistent" attribute name can be used as a safety mechanism that will cause an error if you attempt to map from a `NameID` that does not have a persistent value.
+
+:::{note}
+Identity Providers can be either statically configured to release a `NameID` with a specific format, or they can be configured to try to conform with the requirements of the SP. The SP declares its requirements as part of the Authentication Request, using an element which is called the `NameIDPolicy`. If this is needed, you can set the relevant [settings](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/configuration-reference/security-settings.md#ref-saml-settings) named `nameid_format` in order to request that the IdP releases a `NameID` with a specific format.
+:::
+
+*friendlyName*
+: A SAML attribute may have a *friendlyName* in addition to its URI-based name. For example, the attribute with a name of `urn:oid:0.9.2342.19200300.100.1.1` might also have a friendlyName of `uid`. You may use these friendly names within an attribute mapping, but it is recommended that you use the URI-based names, as friendlyNames are neither standardized nor mandatory.
+
+The example below configures a realm to use a persistent nameid for the principal, and the attribute with the friendlyName "roles" for the user’s groups.
+ +```yaml +xpack.security.authc.realms.saml.saml1: + order: 2 + idp.metadata.path: saml/idp-metadata.xml + idp.entity_id: "https://sso.example.com/" + sp.entity_id: "https://kibana.example.com/" + sp.acs: "https://kibana.example.com/api/security/saml/callback" + attributes.principal: "nameid:persistent" + attributes.groups: "roles" + nameid_format: "urn:oasis:names:tc:SAML:2.0:nameid-format:persistent" +``` + + +### Mappable {{es}} user properties [saml-es-user-properties] + +The {{es}} SAML realm can be configured to map SAML `attributes` to the following properties on the authenticated user: + +principal +: *(Required)* This is the *username* that will be applied to a user that authenticates against this realm. The `principal` appears in places such as the {{es}} audit logs. + +groups +: *(Recommended)* If you want to use your IdP’s concept of groups or roles as the basis for a user’s {{es}} privileges, you should map them with this attribute. The `groups` are passed directly to your [role mapping rules](/deploy-manage/users-roles/cluster-or-deployment-auth/saml.md#saml-role-mapping). + + :::{note} + Some IdPs are configured to send the `groups` list as a single value, comma-separated string. To map this SAML attribute to the `attributes.groups` setting in the {{es}} realm, you can configure a string delimiter using the `attribute_delimiters.group` setting.

For example, splitting the SAML attribute value `engineering,elasticsearch-admins,employees` on a delimiter value of `,` will result in `engineering`, `elasticsearch-admins`, and `employees` as the list of groups for the user. + :::: + +name +: *(Optional)* The user’s full name. + +mail +: *(Optional)* The user’s email address. + +dn +: *(Optional)* The user’s X.500 *Distinguished Name*. + + +### Extract partial values from SAML attributes [_extracting_partial_values_from_saml_attributes] + +There are some occasions where the IdP’s attribute may contain more information than you want to use within {{es}}. A common example of this is one where the IdP works exclusively with email addresses, but you want the user’s `principal` to use the `local-name` part of the email address. For example if their email address was `james.wong@staff.example.com`, then you might want their principal to be `james.wong`. + +This can be achieved using the `attribute_patterns` setting in the {{es}} realm, as demonstrated in the realm configuration below: + +```yaml +xpack.security.authc.realms.saml.saml1: + order: 2 + idp.metadata.path: saml/idp-metadata.xml + idp.entity_id: "https://sso.example.com/" + sp.entity_id: "https://kibana.example.com/" + sp.acs: "https://kibana.example.com/api/security/saml/callback" + attributes.principal: "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress" + attribute_patterns.principal: "^([^@]+)@staff\\.example\\.com$" +``` + +In this case, the user’s `principal` is mapped from an email attribute, but a regular expression is applied to the value before it is assigned to the user. If the regular expression matches, then the result of the first group is used as effective value. If the regular expression does not match then the attribute mapping fails. + +In this example, the email address must belong to the `staff.example.com` domain, and then the local-part (anything before the `@`) is used as the principal. Any users who try to login using a different email domain will fail because the regular expression will not match against their email address, and thus their principal attribute - which is mandatory - will not be populated. + +::::{important} +Small mistakes in these regular expressions can have significant security consequences. For example, if we accidentally left off the trailing `$` from the example above, then we would match any email address where the domain starts with `staff.example.com`, and this would accept an email address such as `admin@staff.example.com.attacker.net`. It is important that you make sure your regular expressions are as precise as possible so that you don't open an avenue for user impersonation attacks. +:::: + +## Request specific authentication methods [req-authn-context] + +It is sometimes necessary for a SAML SP to be able to impose specific restrictions regarding the authentication that will take place at an IdP, in order to assess the level of confidence that it can place in the corresponding authentication response. The restrictions might have to do with the authentication method (password, client certificates, etc), the user identification method during registration, and other details. {{es}} implements [SAML 2.0 Authentication Context](https://docs.oasis-open.org/security/saml/v2.0/saml-authn-context-2.0-os.pdf), which can be used for this purpose as defined in SAML 2.0 Core Specification. 
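+For example, a SAML realm that wants the IdP to authenticate users over a password-protected transport might add the following to its configuration. This is only a sketch: the realm name matches the earlier examples, and which class references are appropriate depends entirely on your IdP:
+
+```yaml
+xpack.security.authc.realms.saml.saml1:
+  # ... other realm settings as shown in the earlier examples ...
+  req_authn_context_class_ref: "urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport"
+```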
+
+The SAML SP defines a set of Authentication Context Class Reference values, which describe the restrictions to be imposed on the IdP, and sends these in the Authentication Request. The IdP attempts to grant these restrictions. If it cannot grant them, the authentication attempt fails. If the user is successfully authenticated, the Authentication Statement of the SAML Response contains an indication of the restrictions that were satisfied.
+
+You can define the Authentication Context Class Reference values by using the `req_authn_context_class_ref` option in the SAML realm configuration. See [SAML realm settings](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/configuration-reference/security-settings.md#ref-saml-settings).
+
+{{es}} supports only the `exact` comparison method for the Authentication Context. When it receives the Authentication Response from the IdP, {{es}} examines the value of the Authentication Context Class Reference that is part of the Authentication Statement of the SAML Assertion. If it matches one of the requested values, the authentication is considered successful. Otherwise, the authentication attempt fails.
+
+## Configure SAML logout [saml-logout]
+
+The SAML protocol supports the concept of Single Logout (SLO). The level of support for SLO varies between Identity Providers. You should consult the documentation for your IdP to determine what Logout services it offers.
+
+By default, the {{stack}} will support SAML SLO if the following are true:
+
+* Your IdP metadata specifies that the IdP offers an SLO service
+* Your IdP releases a NameID in the subject of the SAML assertion that it issues for your users
+* You configure `sp.logout`
+* The setting `idp.use_single_logout` is not `false`
+
+### IdP SLO service [_idp_slo_service]
+
+One of the values that {{es}} reads from the IdP’s SAML metadata is the `<SingleLogoutService>`. For Single Logout to work with the {{stack}}, {{es}} requires that this exists and supports a binding of `urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect`.
+
+The {{stack}} will send both `<LogoutRequest>` and `<LogoutResponse>` messages to this service as appropriate.
+
+### The sp.logout setting [_the_sp_logout_setting]
+
+The {{es}} realm setting `sp.logout` specifies a URL in {{kib}} to which the IdP can send both `<LogoutRequest>` and `<LogoutResponse>` messages. This service uses the SAML HTTP-Redirect binding.
+
+{{es}} will process `<LogoutRequest>` messages, and perform a global signout that invalidates any existing {{es}} security tokens that are associated with the provided SAML session.
+
+If you don't configure a value for `sp.logout`, {{es}} will refuse all `<LogoutRequest>` messages.
+
+::::{note}
+It is common for IdPs to require that `LogoutRequest` messages be signed, so you may need to configure [signing credentials](#saml-enc-sign).
+::::
+
+### The idp.use_single_logout setting [_the_idp_use_single_logout_setting]
+
+If your IdP provides a `<SingleLogoutService>` but you do not want to use it, you can configure `idp.use_single_logout: false` in your SAML realm, and {{es}} will ignore the SLO service that your IdP provides. In this case, when a user logs out of {{kib}}, it will invalidate their {{es}} session (security token), but will not perform any logout at the IdP.
+
+### Using {{kib}} without single logout [_using_kib_without_single_logout]
+
+If your IdP does not support Single Logout, or you choose not to use it, then {{kib}} will perform a "local logout" only.
+ +This means that {{kib}} will invalidate the session token it is using to communicate with {{es}}, but will not be able to perform any sort of invalidation of the Identity Provider session. In most cases, this will mean that {{kib}} users are still considered to be logged in to the IdP. Consequently, if the user navigates to the {{kib}} landing page, they will be automatically reauthenticated, and will commence a new {{kib}} session without needing to enter any credentials. + +The possible solutions to this problem are: + +* Ask your IdP administrator or vendor to provide a Single Logout service +* If your Idp does provide a Single Logout Service, make sure it is included in the IdP metadata file, and do *not* set `idp.use_single_logout` to `false`. +* Advise your users to close their browser after logging out of {{kib}} +* Enable the `force_authn` setting on your SAML realm. This setting causes the {{stack}} to request fresh authentication from the IdP every time a user attempts to log in to {{kib}}. This setting defaults to `false` because it can be a more cumbersome user experience, but it can also be an effective protection to stop users piggy-backing on existing IdP sessions. + +## Encryption and signing [saml-enc-sign] + +The {{stack}} supports generating signed SAML messages (for authentication and/or logout), verifying signed SAML messages from the IdP (for both authentication and logout) and can process encrypted content. + +You can configure {{es}} for signing, encryption or both, using a single key or individual keys. + +The {{stack}} uses X.509 certificates with RSA private keys for SAML cryptography. These keys can be generated using any standard SSL tool, including the `elasticsearch-certutil` tool. + +Your IdP may require that the {{stack}} have a cryptographic key for signing SAML messages, and that you provide the corresponding signing certificate within the Service Provider configuration (either within the {{stack}} SAML metadata file, or manually configured within the IdP administration interface). + +While most IdPs do not expect authentication requests to be signed, it is commonly the case that signatures are required for logout requests. Your IdP will validate these signatures against the signing certificate that has been configured for the {{stack}} Service Provider. + +Encryption certificates are rarely needed, but the {{stack}} supports them for cases where IdPs or local policies mandate their use. + +### Generate certificates and keys [_generating_certificates_and_keys] + +{{es}} supports certificates and keys in either PEM, PKCS#12 or JKS format. Some Identity Providers are more restrictive in the formats they support, and will require you to provide the certificates as a file in a particular format. You should consult the documentation for your IdP to determine what formats they support. + +#### Example: Using `openssl` + +```sh +openssl req -new -x509 -days 3650 -nodes -sha256 -out saml-sign.crt -keyout saml-sign.key +``` + +#### Example: Using `elasticsearch-certutil` + +```{applies_to} +deployment: + self: +``` + +Using the [`elasticsearch-certutil` tool](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/command-line-tools/certutil.md), you can generate a signing certificate with the following command. Because PEM format is the most commonly supported format, the example generates a certificate in that format. 
+
+```sh
+bin/elasticsearch-certutil cert --self-signed --pem --days 1100 --name saml-sign --out saml-sign.zip
+```
+
+This will do the following:
+
+* generate a certificate and key pair (the `cert` subcommand)
+* create the files in PEM format (`--pem` option)
+* generate a certificate that is valid for 3 years (`--days 1100`)
+* name the certificate `saml-sign` (`--name` option)
+* save the certificate and key in the `saml-sign.zip` file (`--out` option)
+
+The generated zip archive will contain 3 files:
+
+* `saml-sign.crt`, the public certificate to be used for signing
+* `saml-sign.key`, the private key for the certificate
+* `ca.crt`, a CA certificate that is not needed, and can be ignored.
+
+Encryption certificates can be generated with the same process.
+
+### Sign outgoing SAML messages [_configuring_es_for_signing]
+
+By default, {{es}} will sign *all* outgoing SAML messages if a signing key has been configured.
+
+:::{tip}
+* In self-managed clusters, file path settings are resolved relative to the {{es}} config directory. {{es}} will automatically monitor these files for changes and will reload the configuration whenever they are updated.
+* If you're using {{ece}} or {{ech}}, then you must upload any certificate or keystore files [as a custom bundle](/deploy-manage/deploy/elastic-cloud/upload-custom-plugins-bundles.md) before they can be referenced. You can add these files to your existing SAML bundle.
+* If you're using {{eck}}, then install the files as [custom configuration files](/deploy-manage/deploy/cloud-on-k8s/custom-configuration-files-plugins.md#use-a-volume-and-volume-mount-together-with-a-configmap-or-secret).
+:::
+
+::::{tab-set}
+:::{tab-item} PEM formatted keys
+
+If you want to use **PEM formatted** keys and certificates for signing, then you should configure the following settings on the SAML realm:
+
+`signing.certificate`
+: The path to the PEM formatted certificate file. e.g. `saml/saml-sign.crt`
+
+`signing.key`
+: The path to the PEM formatted key file. e.g. `saml/saml-sign.key`
+
+`signing.secure_key_passphrase`
+: The passphrase for the key, if the file is encrypted. This is a secure setting that must be uploaded to your [{{es}} keystore](/deploy-manage/security/secure-settings.md).
+
+:::
+:::{tab-item} PKCS#12 or Java Keystore
+If you want to use **PKCS#12 formatted** files or a **Java Keystore** for signing, then you should configure the following settings on the SAML realm:
+
+`signing.keystore.path`
+: The path to the PKCS#12 or JKS keystore. e.g. `saml/saml-sign.p12`
+
+`signing.keystore.alias`
+: The alias of the key within the keystore. e.g. `signing-key`
+
+`signing.keystore.secure_password`
+: The passphrase for the keystore, if the file is encrypted. This is a secure setting that must be uploaded to your [{{es}} keystore](/deploy-manage/security/secure-settings.md).
+:::
+::::
+
+#### Sign only certain message types
+
+If you want to sign some, but not all outgoing **SAML messages**, then configure `signing.saml_messages` with a comma-separated list of message types to sign. Supported values are `AuthnRequest`, `LogoutRequest`, and `LogoutResponse`; the default value is `*`.
+
+For example:
+
+```yaml
+xpack:
+  security:
+    authc:
+      realms:
+        saml-realm-name:
+          order: 2
+          ...
+          signing.saml_messages: AuthnRequest <1>
+          ...
+```
+
+1. This configuration ensures that only SAML authentication requests will be sent signed to the Identity Provider.
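+As a recap of this section, a complete PEM-based signing configuration combines the realm settings with the `signing.*` settings. The following is a sketch only; the file paths assume the certificate and key generated earlier were placed in a `saml` directory inside the {{es}} config directory, or uploaded in your custom bundle:
+
+```yaml
+xpack.security.authc.realms.saml.saml1:
+  # ... realm settings as shown in the earlier examples ...
+  # Sign outgoing SAML messages with the PEM certificate and key generated above.
+  signing.certificate: saml/saml-sign.crt
+  signing.key: saml/saml-sign.key
+```
+
+If the key is encrypted, add its passphrase to the {{es}} keystore using the realm's `signing.secure_key_passphrase` setting rather than placing it in `elasticsearch.yml`.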
+ + +### Configuring {{es}} for encrypted messages [_configuring_es_for_encrypted_messages] + +The {{es}} {{security-features}} support a single key for message decryption. If a key is configured, then {{es}} attempts to use it to decrypt `EncryptedAssertion` and `EncryptedAttribute` elements in Authentication responses, and `EncryptedID` elements in Logout requests. + +{{es}} rejects any SAML message that contains an `EncryptedAssertion` that cannot be decrypted. + +If an `Assertion` contains both encrypted and plain-text attributes, then failure to decrypt the encrypted attributes will not cause an automatic rejection. Rather, {{es}} processes the available plain-text attributes (and any `EncryptedAttributes` that could be decrypted). + +:::{tip} +* In self-managed clusters, file path settings is resolved relative to the {{es}} config directory. {{es}} will automatically monitor this file for changes and will reload the configuration whenever it is updated. +* If you're using {{ece}} or {{ech}}, then you must upload any certificate or keystore files [as a custom bundle](/deploy-manage/deploy/elastic-cloud/upload-custom-plugins-bundles.md) before it can be referenced. You can add this file to your existing SAML bundle. +* If you're using {{eck}}, then install the files as [custom configuration files](/deploy-manage/deploy/cloud-on-k8s/custom-configuration-files-plugins.md#use-a-volume-and-volume-mount-together-with-a-configmap-or-secret). +::: + +::::{tab-set} +:::{tab-item} PEM-formatted keys + +If you want to use **PEM formatted** keys and certificates for SAML encryption, then you should configure the following settings on the SAML realm: + +`encryption.certificate` +: The path to the PEM formatted certificate file. e.g. `saml/saml-crypt.crt` + +`encryption.key` +: The path to the PEM formatted key file. e.g. `saml/saml-crypt.key` + +`encryption.secure_key_passphrase` +: The passphrase for the key, if the file is encrypted. This is a secure setting that must be uploaded to your [{{es}} keystore](/deploy-manage/security/secure-settings.md). + +::: +:::{tab-item} PKCS#12 or Java Keystore + +If you want to use **PKCS#12 formatted** files or a **Java Keystore** for SAML encryption, then you should configure the following settings on the SAML realm: + +`encryption.keystore.path` +: The path to the PKCS#12 or JKS keystore. e.g. `saml/saml-crypt.p12` + +`encryption.keystore.alias` +: The alias of the key within the keystore. e.g. `encryption-key` + +`encryption.keystore.secure_password` +: The passphrase for the keystore, if the file is encrypted. This is a secure setting that must be uploaded to your [{{es}} keystore](/deploy-manage/security/secure-settings.md). + +::: +:::: + +## Generate SAML metadata for the Service Provider [saml-sp-metadata] + +Some Identity Providers support importing a metadata file from the Service Provider. This will automatically configure many of the integration options between the IdP and the SP. + +The {{stack}} supports generating such a metadata file using the [SAML service provider metadata API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-saml-service-provider-metadata) or the [`bin/elasticsearch-saml-metadata` command](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/command-line-tools/saml-metadata.md). + +### Using the SAML service provider metadata API + +You can generate the SAML metadata by issuing the API request to {{es}} and store it as an XML file using tools like `jq`. 
For example, the following command generates the metadata for the SAML realm `realm1` and saves it to a `metadata.xml` file: + +```console +curl -u user_name:password -X GET http://localhost:9200/_security/saml/metadata/saml1 -H 'Content-Type: application/json' | jq -r '.[]' > metadata.xml +``` + +### Using the `elasticsearch-saml-metadata` command + +You can generate the SAML metadata by running the [`bin/elasticsearch-saml-metadata` command](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/command-line-tools/saml-metadata.md). + +```{applies_to} +deployment: + self: + eck: +``` + +::::{tab-set} +::: {tab-item} Self-managed +```sh +bin/elasticsearch-saml-metadata --realm saml1 +``` +::: +::: {tab-item} ECK +To generate the Service Provider metadata using the `elasticsearch-saml-metadata` command in {{eck}}, you need to run the command using `kubectl`, and then copy the generated metadata file to your local machine. For example: + +```sh +# Create metadata +kubectl exec -it elasticsearch-sample-es-default-0 -- sh -c "/usr/share/elasticsearch/bin/elasticsearch-saml-metadata --realm saml1" + +# Copy metadata file +kubectl cp elasticsearch-sample-es-default-0:/usr/share/elasticsearch/saml-elasticsearch-metadata.xml saml-elasticsearch-metadata.xml +``` +::: +:::: + + + +## Configure role mappings [saml-role-mapping] + +When a user authenticates using SAML, they are identified to the {{stack}}, but this does not automatically grant them access to perform any actions or access any data. + +Your SAML users cannot do anything until they are assigned roles. This can be done through either the [add role mapping API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-put-role-mapping) or with [authorization realms](/deploy-manage/users-roles/cluster-or-deployment-auth/realm-chains.md#authorization_realms). + +You can map SAML users to roles in the following ways: + +* Using the role mappings page in {{kib}}. +* Using the [role mapping API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-put-role-mapping). +* By delegating authorization to another realm. + +::::{note} +You can't use [role mapping files](/deploy-manage/users-roles/cluster-or-deployment-auth/mapping-users-groups-to-roles.md#mapping-roles-file) to grant roles to users authenticating using SAML. +:::: + +### Example: Using the role mapping API + +This is an example of a simple role mapping that grants the `example_role` role to any user who authenticates against the `saml1` realm: + +```console +PUT /_security/role_mapping/saml-example +{ + "roles": [ "example_role" ], <1> + "enabled": true, + "rules": { + "field": { "realm.name": "saml1" } + } +} +``` + +1. The `example_role` role is **not** a builtin Elasticsearch role. This example assumes that you have created a custom role of your own, with appropriate access to your [data streams, indices,](/deploy-manage/users-roles/cluster-or-deployment-auth/defining-roles.md#roles-indices-priv) and [Kibana features](/deploy-manage/users-roles/cluster-or-deployment-auth/kibana-privileges.md#kibana-feature-privileges). + +### Example: Role mapping API, using SAML attributes + +The attributes that are mapped via the realm configuration are used to process role mapping rules, and these rules determine which roles a user is granted. 
+
+The user fields that are provided to the role mapping are derived from the SAML attributes as follows:
+
+* `username`: The `principal` attribute
+* `dn`: The `dn` attribute
+* `groups`: The `groups` attribute
+* `metadata`: See [User metadata](/deploy-manage/users-roles/cluster-or-deployment-auth/saml.md#saml-user-metadata)
+
+If your IdP has the ability to provide groups or roles to Service Providers, then you should map the corresponding SAML attribute to the `attributes.groups` setting in the {{es}} realm, and then make use of it in a role mapping.
+
+For example, the following mapping grants the {{es}} `finance_data` role to any users who authenticate via the `saml1` realm with the `finance-team` group:
+
+```console
+PUT /_security/role_mapping/saml-finance
+{
+  "roles": [ "finance_data" ],
+  "enabled": true,
+  "rules": { "all": [
+        { "field": { "realm.name": "saml1" } },
+        { "field": { "groups": "finance-team" } } <1>
+  ] }
+}
+```
+
+1. The `groups` attribute supports using wildcards (`*`). Refer to the [create or update role mappings API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-put-role-mapping) for more information.
+
+### Delegating SAML authorization to another realm
+
+If your users also exist in a repository that can be directly accessed by {{es}} (such as an LDAP directory), then you can use [authorization realms](/deploy-manage/users-roles/cluster-or-deployment-auth/realm-chains.md#authorization_realms) instead of role mappings.
+
+In this case, you perform the following steps:
+
+1. In your SAML realm, assign a SAML attribute to act as the lookup userid by configuring the `attributes.principal` setting.
+2. Create a new realm that can look up users from your local repository (for example, an `ldap` realm).
+3. In your SAML realm, set `authorization_realms` to the name of the realm you created in step 2.
+
+## Configure {{kib}} [saml-configure-kibana]
+
+SAML authentication in {{kib}} requires additional settings beyond the standard {{kib}} security configuration.
+
+If you're using a self-managed cluster, then, because SAML requires {{es}} nodes to use TLS on the HTTP interface, you must configure {{kib}} to use an `https` URL to connect to {{es}}, and you may need to configure `elasticsearch.ssl.certificateAuthorities` to trust the certificates that {{es}} has been configured to use.
+
+SAML authentication in {{kib}} is subject to the following timeout settings in `kibana.yml`:
+
+* [`xpack.security.session.idleTimeout`](/deploy-manage/security/kibana-session-management.md#session-idle-timeout)
+* [`xpack.security.session.lifespan`](/deploy-manage/security/kibana-session-management.md#session-lifespan)
+
+You may want to adjust these timeouts based on your security requirements.
+
+### Add the SAML provider to {{kib}}
+
+::::{tip}
+You can configure multiple authentication providers in {{kib}} and let users choose the provider they want to use. For more information, check [the {{kib}} authentication documentation](/deploy-manage/users-roles/cluster-or-deployment-auth/user-authentication.md).
+::::
+
+The three additional settings that are required for SAML support are shown below:
+
+```yaml
+xpack.security.authc.providers:
+  saml.saml1:
+    order: 0
+    realm: "saml1"
+```
+
+The configuration values used in the example above are:
+
+`xpack.security.authc.providers`
+: Add `saml` provider to instruct {{kib}} to use SAML SSO as the authentication method.
+
+`xpack.security.authc.providers.saml.<provider-name>.realm`
+: Set this to the name of the SAML realm that you have used in your [{{es}} realm configuration](/deploy-manage/users-roles/cluster-or-deployment-auth/saml.md#saml-create-realm), for instance: `saml1`
+
+### Supporting SAML and basic authentication in {{kib}} [saml-kibana-basic]
+
+The SAML support in {{kib}} is designed with the expectation that it will be the primary (or sole) authentication method for users of that {{kib}} instance. However, it is possible to support both SAML and Basic authentication within a single {{kib}} instance by setting `xpack.security.authc.providers` as per the example below:
+
+```yaml
+xpack.security.authc.providers:
+  saml.saml1:
+    order: 0
+    realm: "saml1"
+  basic.basic1:
+    order: 1
+```
+
+If {{kib}} is configured in this way, users are presented with a choice at the Login Selector UI. They can log in with SAML, or they can provide a username and password and rely on one of the other security realms within {{es}}. Only users who have a username and password for a configured {{es}} authentication realm can log in via the {{kib}} login form.
+
+Alternatively, when the `basic` authentication provider is enabled, you can place a reverse proxy in front of {{kib}}, and configure it to send a basic authentication header (`Authorization: Basic ....`) for each request. If this header is present and valid, {{kib}} will not initiate the SAML authentication process.
+
+### Operating multiple {{kib}} instances [_operating_multiple_kib_instances]
+
+If you want to have multiple {{kib}} instances that authenticate against the same {{es}} cluster, then each {{kib}} instance that is configured for SAML authentication requires its own SAML realm.
+
+Each SAML realm must have its own unique Entity ID (`sp.entity_id`), and its own *Assertion Consumer Service* (`sp.acs`). Each {{kib}} instance will be mapped to the correct realm by looking up the matching `sp.acs` value.
+
+These realms may use the same Identity Provider, but are not required to.
+
+The following is an example of three different {{kib}} instances, two of which use the same internal IdP, and one which uses a different IdP:
+
+```yaml
+xpack.security.authc.realms.saml.saml_finance:
+  order: 2
+  idp.metadata.path: saml/idp-metadata.xml
+  idp.entity_id: "https://sso.example.com/"
+  sp.entity_id: "https://kibana.finance.example.com/"
+  sp.acs: "https://kibana.finance.example.com/api/security/saml/callback"
+  sp.logout: "https://kibana.finance.example.com/logout"
+  attributes.principal: "urn:oid:0.9.2342.19200300.100.1.1"
+  attributes.groups: "urn:oid:1.3.6.1.4.1.5923.1.5.1."
+xpack.security.authc.realms.saml.saml_sales:
+  order: 3
+  idp.metadata.path: saml/idp-metadata.xml
+  idp.entity_id: "https://sso.example.com/"
+  sp.entity_id: "https://kibana.sales.example.com/"
+  sp.acs: "https://kibana.sales.example.com/api/security/saml/callback"
+  sp.logout: "https://kibana.sales.example.com/logout"
+  attributes.principal: "urn:oid:0.9.2342.19200300.100.1.1"
+  attributes.groups: "urn:oid:1.3.6.1.4.1.5923.1.5.1."
+xpack.security.authc.realms.saml.saml_eng: + order: 4 + idp.metadata.path: saml/idp-external.xml + idp.entity_id: "https://engineering.sso.example.net/" + sp.entity_id: "https://kibana.engineering.example.com/" + sp.acs: "https://kibana.engineering.example.com/api/security/saml/callback" + sp.logout: "https://kibana.engineering.example.com/logout" + attributes.principal: "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn" +``` + +It is possible to have one or more {{kib}} instances that use SAML, while other instances use basic authentication against another realm type (e.g. [Native](/deploy-manage/users-roles/cluster-or-deployment-auth/native.md) or [LDAP](/deploy-manage/users-roles/cluster-or-deployment-auth/ldap.md)). + +## Troubleshooting SAML realm configuration [saml-troubleshooting] + +The SAML 2.0 specification offers a lot of options and flexibility for the implementers of the standard which in turn adds to the complexity and the number of configuration options that are available both at the Service Provider ({{stack}}) and at the Identity Provider. Additionally, different security domains have different security requirements that need specific configuration to be satisfied. A conscious effort has been made to mask this complexity with sane defaults and the detailed documentation above but in case you encounter issues while configuring a SAML realm, you can look through our [SAML troubleshooting documentation](../../../troubleshoot/elasticsearch/security/trb-security-saml.md) that has suggestions and resolutions for common issues. + + +## SAML without {{kib}} [saml-no-kibana] + +The SAML realm in {{es}} is designed to allow users to authenticate to {{kib}} and as such, most of the parts of the guide above make the assumption that {{kib}} is used. This section describes how a custom web application could use the relevant SAML REST APIs in order to authenticate the users to {{es}} with SAML. + +::::{note} +This section assumes that you are familiar with the SAML 2.0 standard and more specifically with the SAML 2.0 Web Browser Single Sign On profile. +:::: + +Single sign-on realms such as OpenID Connect and SAML make use of the Token Service in {{es}} and in principle exchange a SAML or OpenID Connect Authentication response for an {{es}} access token and a refresh token. The access token is used as credentials for subsequent calls to {{es}}. The refresh token enables the user to get new {{es}} access tokens after the current one expires. + +### SAML realm [saml-no-kibana-realm] + +You must create a SAML realm and configure it accordingly in {{es}}. See [Configure {{es}} for SAML authentication](/deploy-manage/users-roles/cluster-or-deployment-auth/saml.md#saml-create-realm) + + +### Service Account user for accessing the APIs [saml-no-kibana-user] + +The realm is designed with the assumption that there needs to be a privileged entity acting as an authentication proxy. In this case, the custom web application is the authentication proxy handling the authentication of end users (more correctly, "delegating" the authentication to the SAML Identity Provider). The SAML related APIs require authentication and the necessary authorization level for the authenticated user. For this reason, you must create a Service Account user and assign it a role that gives it the `manage_saml` cluster privilege. 
+
+```console
+POST /_security/role/saml-service-role
+{
+  "cluster" : ["manage_saml", "manage_token"]
+}
+```
+
+```console
+POST /_security/user/saml-service-user
+{
+  "password" : "",
+  "roles" : ["saml-service-role"]
+}
+```
+
+
+### Handling the SP-initiated authentication flow [saml-no-kibana-sp-init-sso]
+
+At a high level, the custom web application would need to perform the following steps in order to authenticate a user with SAML against {{es}}:
+
+1. Make an HTTP POST request to `_security/saml/prepare`, authenticating as the `saml-service-user` user. In the request body, use either the name of the SAML realm from the {{es}} configuration or the value of the Assertion Consumer Service URL. See the [SAML prepare authentication API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-saml-prepare-authentication) for more details.
+
+    ```console
+    POST /_security/saml/prepare
+    {
+      "realm" : "saml1"
+    }
+    ```
+
+2. Handle the response from `/_security/saml/prepare`. The response from {{es}} will contain 3 parameters: `redirect`, `realm` and `id`. The custom web application would need to store the value for `id` in the user’s session (client side in a cookie or server side if session information is persisted this way). It must also redirect the user’s browser to the URL that was returned in the `redirect` parameter. The `id` value must not be discarded, as it is used as a nonce in SAML to mitigate replay attacks.
+3. Handle a subsequent response from the SAML IdP. After the user is successfully authenticated with the Identity Provider, they will be redirected back to the Assertion Consumer Service URL. This `sp.acs` needs to be defined as a URL that the custom web application handles. When it receives this HTTP POST request, the custom web application must parse it and make an HTTP POST request itself to the `_security/saml/authenticate` API. It must authenticate as the `saml-service-user` user and pass the Base64-encoded SAML Response that was sent as the body of the request. It must also pass the value for `id` that it had saved in the user’s session previously.
+
+    See [SAML authenticate API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-saml-authenticate) for more details.
+
+    ```console
+    POST /_security/saml/authenticate
+    {
+      "content" : "PHNhbWxwOlJlc3BvbnNlIHhtbG5zOnNhbWxwPSJ1cm46b2FzaXM6bmFtZXM6dGM6U0FNTDoyLjA6cHJvdG9jb2wiIHhtbG5zOnNhbWw9InVybjpvYXNpczpuYW1lczp0YzpTQU1MOjIuMD.....",
+      "ids" : ["4fee3b046395c4e751011e97f8900b5273d56685"]
+    }
+    ```
+
+    {{es}} will validate this and, if all is correct, will respond with an access token that can be used as a `Bearer` token for subsequent requests. It also supplies a refresh token that can later be used to refresh the given access token, as described in the [get token API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-get-token).
+
+4. The response to calling `/_security/saml/authenticate` will contain only the username of the authenticated user. If you need to get the values for the SAML Attributes that were contained in the SAML Response for that user, you can call the Authenticate API `/_security/_authenticate/` using the access token as a `Bearer` token, and the SAML attribute values will be included in the response as part of the [User metadata](/deploy-manage/users-roles/cluster-or-deployment-auth/saml.md#saml-user-metadata).
+
+
+### Handling the IdP-initiated authentication flow [saml-no-kibana-idp-init-sso]
+
+{{es}} can also handle the IdP-initiated Single Sign On flow of the SAML 2.0 Web Browser SSO profile. In this case, the authentication starts with an unsolicited authentication response from the SAML Identity Provider. The difference from the [SP-initiated SSO](/deploy-manage/users-roles/cluster-or-deployment-auth/saml.md#saml-no-kibana-sp-init-sso) is that the web application needs to handle requests to the `sp.acs` that will not come as responses to previous redirections. As such, it will not have a session for the user already, and it will not have any stored values for the `id` parameter. The request to the `_security/saml/authenticate` API will look like the one below in this case:
+
+```console
+POST /_security/saml/authenticate
+{
+  "content" : "PHNhbWxwOlJlc3BvbnNlIHhtbG5zOnNhbWxwPSJ1cm46b2FzaXM6bmFtZXM6dGM6U0FNTDoyLjA6cHJvdG9jb2wiIHhtbG5zOnNhbWw9InVybjpvYXNpczpuYW1lczp0YzpTQU1MOjIuMD.....",
+  "ids" : []
+}
+```
+
+
+### Handling the logout flow [saml-no-kibana-slo]
+
+1. At some point, if necessary, the custom web application can log the user out by using the [SAML logout API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-saml-logout) and passing the access token and refresh token as parameters. For example:
+
+    ```console
+    POST /_security/saml/logout
+    {
+      "token" : "46ToAxZVaXVVZTVKOVF5YU04ZFJVUDVSZlV3",
+      "refresh_token": "mJdXLtmvTUSpoLwMvdBt_w"
+    }
+    ```
+
+    If the SAML realm is configured accordingly and the IdP supports it (see [SAML logout](/deploy-manage/users-roles/cluster-or-deployment-auth/saml.md#saml-logout)), this request will trigger a SAML SP-initiated Single Logout. In this case, the response will include a `redirect` parameter indicating where the user needs to be redirected at the IdP in order to complete the logout.
+
+2. Alternatively, the IdP might initiate the Single Logout flow at some point. In order to handle this, the Logout URL (`sp.logout`) needs to be handled by the custom web application.
The query part of the URL that the user will be redirected to will contain a SAML Logout request and this query part needs to be relayed to {{es}} using the [SAML invalidate API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-saml-invalidate) + + ```console + POST /_security/saml/invalidate + { + "query" : "SAMLRequest=nZFda4MwFIb%2FiuS%2BmviRpqFaClKQdbvo2g12M2KMraCJ9cRR9utnW4Wyi13sMie873MeznJ1aWrnS3VQGR0j4mLkKC1NUeljjA77zYyhVbIE0dR%2By7fmaHq7U%2BdegXWGpAZ%2B%2F4pR32luBFTAtWgUcCv56%2Fp5y30X87Yz1khTIycdgpUW9kY7WdsC9zxoXTvMvWuVV98YyMnSGH2SYE5pwALBIr9QKiwDGpW0oGVUznGeMyJZKFkQ4jBf5HnhUymjIhzCAL3KNFihbYx8TBYzzGaY7EnIyZwHzCWMfiDnbRIftkSjJr%2BFu0e9v%2B0EgOquRiiZjKpiVFp6j50T4WXoyNJ%2FEWC9fdqc1t%2F1%2B2F3aUpjzhPiXpqMz1%2FHSn4A&SigAlg=http%3A%2F%2Fwww.w3.org%2F2001%2F04%2Fxmldsig-more%23rsa-sha256&Signature=MsAYz2NFdovMG2mXf6TSpu5vlQQyEJAg%2B4KCwBqJTmrb3yGXKUtIgvjqf88eCAK32v3eN8vupjPC8LglYmke1ZnjK0%2FKxzkvSjTVA7mMQe2AQdKbkyC038zzRq%2FYHcjFDE%2Bz0qISwSHZY2NyLePmwU7SexEXnIz37jKC6NMEhus%3D", + "realm" : "saml1" + } + ``` + + The custom web application will then need to also handle the response, which will include a `redirect` parameter with a URL in the IdP that contains the SAML Logout response. The application should redirect the user there to complete the logout. + + +For SP-initiated Single Logout, the IdP may send back a logout response which can be verified by {{es}} using the [SAML complete logout API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-saml-complete-logout). -$$$saml-sp-metadata$$$ -**This page is a work in progress.** The documentation team is working to combine content pulled from the following pages: -* [/raw-migrated-files/elasticsearch/elasticsearch-reference/saml-realm.md](/raw-migrated-files/elasticsearch/elasticsearch-reference/saml-realm.md) -* [/raw-migrated-files/cloud/cloud-enterprise/ece_sign_outgoing_saml_message.md](/raw-migrated-files/cloud/cloud-enterprise/ece_sign_outgoing_saml_message.md) -* [/raw-migrated-files/cloud/cloud-enterprise/ece_optional_settings.md](/raw-migrated-files/cloud/cloud-enterprise/ece_optional_settings.md) -* [/raw-migrated-files/cloud/cloud-enterprise/ece-securing-clusters-SAML.md](/raw-migrated-files/cloud/cloud-enterprise/ece-securing-clusters-SAML.md) -* [/raw-migrated-files/cloud/cloud/ec-securing-clusters-SAML.md](/raw-migrated-files/cloud/cloud/ec-securing-clusters-SAML.md) -* [/raw-migrated-files/cloud/cloud/ec-sign-outgoing-saml-message.md](/raw-migrated-files/cloud/cloud/ec-sign-outgoing-saml-message.md) -* [/raw-migrated-files/cloud/cloud/ec-securing-clusters-saml-azure.md](/raw-migrated-files/cloud/cloud/ec-securing-clusters-saml-azure.md) -* [/raw-migrated-files/cloud/cloud-heroku/ech-securing-clusters-SAML.md](/raw-migrated-files/cloud/cloud-heroku/ech-securing-clusters-SAML.md) -* [/raw-migrated-files/cloud/cloud-heroku/echsign-outgoing-saml-message.md](/raw-migrated-files/cloud/cloud-heroku/echsign-outgoing-saml-message.md) -* [/raw-migrated-files/cloud-on-k8s/cloud-on-k8s/k8s-saml-authentication.md](/raw-migrated-files/cloud-on-k8s/cloud-on-k8s/k8s-saml-authentication.md) -* [/raw-migrated-files/elasticsearch/elasticsearch-reference/saml-guide-stack.md](/raw-migrated-files/elasticsearch/elasticsearch-reference/saml-guide-stack.md) \ No newline at end of file diff --git a/deploy-manage/users-roles/cluster-or-deployment-auth/security-domains.md b/deploy-manage/users-roles/cluster-or-deployment-auth/security-domains.md index 1c1f0237c..ceae5141c 100644 --- 
a/deploy-manage/users-roles/cluster-or-deployment-auth/security-domains.md +++ b/deploy-manage/users-roles/cluster-or-deployment-auth/security-domains.md @@ -1,6 +1,12 @@ --- mapped_pages: - https://www.elastic.co/guide/en/elasticsearch/reference/current/security-domain.html +applies_to: + deployment: + ess: all + ece: all + eck: all + self: all --- # Security domains [security-domain] @@ -21,7 +27,7 @@ Security domains make resource sharing across realms possible by grouping those ### Managing roles across realms [security-domain-realm-roles] -{{es}} provides multiple ways to consistently apply roles across realms. For example, you can use [authorization delegation](authorization-delegation.md) to ensure that a user is assigned the same roles from multiple realms. You can also manually configure multiple realms that are backed by the same directory service. Though it’s possible to configure different [roles](user-roles.md#roles) for the same user when authenticating with different realms, it is *not* recommended. +{{es}} provides multiple ways to consistently apply roles across realms. For example, you can use [authorization delegation](authorization-delegation.md) to ensure that a user is assigned the same roles from multiple realms. You can also manually configure multiple realms that are backed by the same directory service. Though it’s possible to configure different [roles](user-roles.md#roles) for the same user when authenticating with different realms, it is not recommended. @@ -65,7 +71,7 @@ To configure a security domain: 2. Restart {{es}}. ::::{important} - {{es}} can fail to start if the domain configuration is invalid, such as: + {{es}} can fail to start if the domain configuration is invalid. Invalid configurations include: * The same realm is configured under multiple domains. * Any undefined realm, synthetic realm, or the reserved realm is configured to be under a domain. diff --git a/deploy-manage/users-roles/cluster-or-deployment-auth/service-accounts.md b/deploy-manage/users-roles/cluster-or-deployment-auth/service-accounts.md index 0686dc8f3..941a67771 100644 --- a/deploy-manage/users-roles/cluster-or-deployment-auth/service-accounts.md +++ b/deploy-manage/users-roles/cluster-or-deployment-auth/service-accounts.md @@ -1,6 +1,12 @@ --- mapped_pages: - https://www.elastic.co/guide/en/elasticsearch/reference/current/service-accounts.html +applies_to: + deployment: + ess: + ece: + eck: + self: --- # Service accounts [service-accounts] @@ -47,7 +53,7 @@ Service tokens can be backed by either the `.security` index (recommended) or th You must create a service token to use a service account. You can create a service token using either: * The [create service account token API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-create-service-token), which saves the new service token in the `.security` index and returns the bearer token in the HTTP response. 
-* The [elasticsearch-service-tokens](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/command-line-tools/service-tokens-command.md) CLI tool, which saves the new service token in the `$ES_HOME/config/service_tokens` file and outputs the bearer token to your terminal +* Self-managed and {{eck}} deployments only: The [elasticsearch-service-tokens](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/command-line-tools/service-tokens-command.md) CLI tool, which saves the new service token in the `$ES_HOME/config/service_tokens` file and outputs the bearer token to your terminal We recommend that you create service tokens via the REST API rather than the CLI. The API stores service tokens within the `.security` index which means that the tokens are available for authentication on all nodes, and will be backed up within cluster snapshots. The use of the CLI is intended for cases where there is an external orchestration process (such as [{{ece}}](https://www.elastic.co/guide/en/cloud-enterprise/current) or [{{eck}}](https://www.elastic.co/guide/en/cloud-on-k8s/current)) that will manage the creation and distribution of the `service_tokens` file. diff --git a/deploy-manage/users-roles/cluster-or-deployment-auth/token-based-authentication-services.md b/deploy-manage/users-roles/cluster-or-deployment-auth/token-based-authentication-services.md index a998a6dfe..dfe380194 100644 --- a/deploy-manage/users-roles/cluster-or-deployment-auth/token-based-authentication-services.md +++ b/deploy-manage/users-roles/cluster-or-deployment-auth/token-based-authentication-services.md @@ -1,6 +1,12 @@ --- mapped_pages: - https://www.elastic.co/guide/en/elasticsearch/reference/current/token-authentication-services.html +applies_to: + deployment: + ess: + ece: + eck: + self: --- # Token-based authentication services [token-authentication-services] diff --git a/deploy-manage/users-roles/cluster-or-deployment-auth/user-authentication.md b/deploy-manage/users-roles/cluster-or-deployment-auth/user-authentication.md index b6b7406dc..8f70dd022 100644 --- a/deploy-manage/users-roles/cluster-or-deployment-auth/user-authentication.md +++ b/deploy-manage/users-roles/cluster-or-deployment-auth/user-authentication.md @@ -2,11 +2,12 @@ mapped_urls: - https://www.elastic.co/guide/en/elasticsearch/reference/current/setting-up-authentication.html - https://www.elastic.co/guide/en/kibana/current/kibana-authentication.html -applies: - hosted: all - ece: all - eck: all - stack: all +applies_to: + deployment: + ess: all + ece: all + eck: all + self: all --- # User authentication @@ -19,61 +20,54 @@ applies: % Use migrated content from existing pages that map to this page: -% - [ ] ./raw-migrated-files/elasticsearch/elasticsearch-reference/setting-up-authentication.md % - [ ] ./raw-migrated-files/kibana/kibana/kibana-authentication.md -% Notes: this is a good overview -% Internal links rely on the following IDs being on this page (e.g. as a heading ID, paragraph ID, etc): +Authentication identifies an individual. To gain access to restricted resources, a user must prove their identity, using passwords, credentials, or some other means (typically referred to as authentication tokens). -$$$pki-authentication$$$ +The {{stack}} authenticates users by identifying the users behind the requests that hit the cluster and verifying that they are who they claim to be. 
The authentication process is handled by one or more authentication services called [*realms*](/deploy-manage/users-roles/cluster-or-deployment-auth/authentication-realms.md). -$$$anonymous-authentication$$$ +You can manage and authenticate users natively, or integrate with external user management systems such as LDAP and Active Directory. If none of the built-in realms meet your needs, you can also build your own custom realm and plug it into the {{stack}}. -$$$basic-authentication$$$ +When {{security-features}} are enabled, depending on the realms you’ve configured, you must attach your user credentials to requests sent to {{es}}. For example, when using realms that support usernames and passwords, you can attach a [basic auth](https://en.wikipedia.org/wiki/Basic_access_authentication) header to the requests. -$$$embedded-content-authentication$$$ +The {{security-features}} provide two services: the token service and the API key service. You can use these services to exchange the current authentication for a token or key. This token or key can then be used as credentials for authenticating new requests. The API key service is enabled by default. The token service is enabled by default when TLS/SSL is enabled for HTTP. -$$$http-authentication$$$ +Review the following topics to learn about authentication in your {{es}} cluster. -$$$kerberos$$$ +:::{tip} +If you use {{ece}} or {{ech}}, then you can also manage authentication at the level of your [{{ece}} orchestrator](/deploy-manage/users-roles/cloud-enterprise-orchestrator.md) or [{{ecloud}} organization](/deploy-manage/users-roles/cloud-organization.md). -$$$multiple-authentication-providers$$$ - -$$$oidc$$$ - -$$$saml$$$ - -$$$token-authentication$$$ - - - -Review the following topics to learn about authentication in your Elasticsearch cluster: +If you use {{serverless-full}}, then you can only manage authentication at the [{{ecloud}} organization level](/deploy-manage/users-roles/cloud-organization.md). 
+::: ### Set up user authentication -* Learn about the available [realms](/deploy-manage/users-roles/cluster-or-deployment-auth/authentication-realms.md) that you can use to authenticate users -* Manage passwords for [built-in users](/deploy-manage/users-roles/cluster-or-deployment-auth/built-in-users.md) -* Manage users [natively](/deploy-manage/users-roles/cluster-or-deployment-auth/native.md) -* Integrate with external authentication providers using [external realms](/deploy-manage/users-roles/cluster-or-deployment-auth/external-authentication.md): - * [Active Directory](/deploy-manage/users-roles/cluster-or-deployment-auth/active-directory.md) - * [JWT](/deploy-manage/users-roles/cluster-or-deployment-auth/jwt.md) - * [Kerberos](/deploy-manage/users-roles/cluster-or-deployment-auth/kerberos.md) - * [LDAP](/deploy-manage/users-roles/cluster-or-deployment-auth/ldap.md) - * [OpenID Connect](/deploy-manage/users-roles/cluster-or-deployment-auth/openid-connect.md) - * [SAML](/deploy-manage/users-roles/cluster-or-deployment-auth/saml.md) - * [PKI](/deploy-manage/users-roles/cluster-or-deployment-auth/pki.md) - * [Implement a custom realm](/deploy-manage/users-roles/cluster-or-deployment-auth/custom.md) -* Configure [file-based authentication](/deploy-manage/users-roles/cluster-or-deployment-auth/file-based.md) -* Enable [anonymous access](/deploy-manage/users-roles/cluster-or-deployment-auth/anonymous-access.md) -* Set up a [user access agreement](/deploy-manage/users-roles/cluster-or-deployment-auth/access-agreement.md) +* Set up an authentication method: + * Learn about the available [realms](/deploy-manage/users-roles/cluster-or-deployment-auth/authentication-realms.md) that you can use to authenticate users. + * Manage passwords for [default users](/deploy-manage/users-roles/cluster-or-deployment-auth/built-in-users.md). + * Manage users using [internal realms](/deploy-manage/users-roles/cluster-or-deployment-auth/internal-authentication.md): + * Manage users [natively](/deploy-manage/users-roles/cluster-or-deployment-auth/native.md) + * Configure [file-based authentication](/deploy-manage/users-roles/cluster-or-deployment-auth/file-based.md) + * Integrate with external authentication providers using [external realms](/deploy-manage/users-roles/cluster-or-deployment-auth/external-authentication.md): + * [Active Directory](/deploy-manage/users-roles/cluster-or-deployment-auth/active-directory.md) + * [JWT](/deploy-manage/users-roles/cluster-or-deployment-auth/jwt.md) + * [Kerberos](/deploy-manage/users-roles/cluster-or-deployment-auth/kerberos.md) + * [LDAP](/deploy-manage/users-roles/cluster-or-deployment-auth/ldap.md) + * [OpenID Connect](/deploy-manage/users-roles/cluster-or-deployment-auth/openid-connect.md) + * [SAML](/deploy-manage/users-roles/cluster-or-deployment-auth/saml.md) + * [PKI](/deploy-manage/users-roles/cluster-or-deployment-auth/pki.md) + * [Implement a custom realm](/deploy-manage/users-roles/cluster-or-deployment-auth/custom.md) +* Configure [authentication mechanisms for {{kib}}](kibana-authentication.md). +* Enable [anonymous access](/deploy-manage/users-roles/cluster-or-deployment-auth/anonymous-access.md). +* Set up a [user access agreement](/deploy-manage/users-roles/cluster-or-deployment-auth/access-agreement.md). ### Advanced topics * Learn about [internal users](/deploy-manage/users-roles/cluster-or-deployment-auth/internal-users.md), which are responsible for the operations that take place inside an Elasticsearch cluster. 
-* Learn about [service accounts](/deploy-manage/users-roles/cluster-or-deployment-auth/service-accounts.md), which are used for integration with external services that connect to Elasticsearch -* Learn about the [services used for token-based authentication](/deploy-manage/users-roles/cluster-or-deployment-auth/token-based-authentication-services.md) -* Learn about the [services used by orchestrators](/deploy-manage/users-roles/cluster-or-deployment-auth/operator-privileges.md) (applies to {{ece}}, {{ech}}, and {{eck}}) -* Manage [user profiles](/deploy-manage/users-roles/cluster-or-deployment-auth/user-profiles.md) -* Learn about [user lookup technologies](/deploy-manage/users-roles/cluster-or-deployment-auth/looking-up-users-without-authentication.md) -* [Manage the user cache](/deploy-manage/users-roles/cluster-or-deployment-auth/controlling-user-cache.md) +* Learn about [service accounts](/deploy-manage/users-roles/cluster-or-deployment-auth/service-accounts.md), which are used for integration with external services that connect to Elasticsearch. +* Learn about the [services used for token-based authentication](/deploy-manage/users-roles/cluster-or-deployment-auth/token-based-authentication-services.md). +* Learn about the [services used by orchestrators](/deploy-manage/users-roles/cluster-or-deployment-auth/operator-privileges.md). +* Manage [user profiles](/deploy-manage/users-roles/cluster-or-deployment-auth/user-profiles.md). +* Learn about [user lookup technologies](/deploy-manage/users-roles/cluster-or-deployment-auth/looking-up-users-without-authentication.md). +* [Manage the user cache](/deploy-manage/users-roles/cluster-or-deployment-auth/controlling-user-cache.md). * Manage authentication for [multiple clusters](/deploy-manage/users-roles/cluster-or-deployment-auth/manage-authentication-for-multiple-clusters.md) using {{stack}} configuration policies ({{eck}} only) diff --git a/deploy-manage/users-roles/cluster-or-deployment-auth/user-profiles.md b/deploy-manage/users-roles/cluster-or-deployment-auth/user-profiles.md index c9303a2cb..43268da6b 100644 --- a/deploy-manage/users-roles/cluster-or-deployment-auth/user-profiles.md +++ b/deploy-manage/users-roles/cluster-or-deployment-auth/user-profiles.md @@ -1,11 +1,16 @@ --- mapped_pages: - https://www.elastic.co/guide/en/elasticsearch/reference/current/user-profile.html +applies_to: + deployment: + ess: + ece: + eck: --- # User profiles [user-profile] -::::{note} +::::{admonition} Indirect use only The user profile feature is designed only for use by {{kib}} and Elastic’s {{observability}}, and {{elastic-sec}} solutions. Individual users and external applications should not call this API directly. Elastic reserves the right to change or remove this feature in future releases without prior notice. 
:::: diff --git a/deploy-manage/users-roles/cluster-or-deployment-auth/user-roles.md b/deploy-manage/users-roles/cluster-or-deployment-auth/user-roles.md index 8ab0b9ab5..c59837341 100644 --- a/deploy-manage/users-roles/cluster-or-deployment-auth/user-roles.md +++ b/deploy-manage/users-roles/cluster-or-deployment-auth/user-roles.md @@ -1,11 +1,12 @@ --- mapped_pages: - https://www.elastic.co/guide/en/elasticsearch/reference/current/authorization.html -applies: - hosted: all - ece: all - eck: all - stack: all +applies_to: + deployment: + ess: all + ece: all + eck: all + self: all --- # User roles [authorization] @@ -83,7 +84,7 @@ The method for assigning roles to users varies depending on which realms you use The {{security-features}} also provide an attribute-based access control (ABAC) mechanism, which enables you to use attributes to restrict access to documents in search queries and aggregations. For example, you can assign attributes to users and documents, then implement an access policy in a role definition. Users with that role can read a specific document only if they have all the required attributes. -For more information, see [Document-level attribute-based access control with X-Pack 6.1](https://www.elastic.co/blog/attribute-based-access-control-with-xpack). +For more information, see [Document-level attribute-based access control with {{es}}](https://www.elastic.co/blog/attribute-based-access-control-elasticsearch). diff --git a/deploy-manage/users-roles/custom-roles.md b/deploy-manage/users-roles/custom-roles.md index 3b5c685ff..8aec4afc6 100644 --- a/deploy-manage/users-roles/custom-roles.md +++ b/deploy-manage/users-roles/custom-roles.md @@ -1,13 +1,13 @@ --- mapped_urls: - https://www.elastic.co/guide/en/serverless/current/custom-roles.html -applies: +applies_to: serverless: all --- This content applies to: [![Elasticsearch](../../images/serverless-es-badge.svg "")](../../solutions/search.md) [![Security](../../images/serverless-sec-badge.svg "")](../../solutions/security/elastic-security-serverless.md) -# Project custom roles [custom-roles] +# Serverless project custom roles [custom-roles] Built-in [organization-level roles](/deploy-manage/users-roles/cloud-organization/user-roles.md#ec_organization_level_roles) and [instance access roles](/deploy-manage/users-roles/cloud-organization/user-roles.md#ec_instance_access_roles) are great for getting started with {{serverless-full}}, and for system administrators who do not need more restrictive access. 
diff --git a/docset.yml b/docset.yml index 5fcde28fb..bd70147d2 100644 --- a/docset.yml +++ b/docset.yml @@ -3,6 +3,25 @@ exclude: - 'README.md' cross_links: - asciidocalypse + - kibana + - integration-docs + - integrations + - logstash + - elasticsearch + - cloud + - beats + - go-elasticsearch + - elasticsearch-java + - elasticsearch-net + - elasticsearch-php + - elasticsearch-py + - elasticsearch-ruby + - elasticsearch-js + - ecs + - ecs-logging + - search-ui + - cloud-on-k8s + toc: - file: index.md - toc: get-started @@ -12,6 +31,9 @@ toc: - toc: deploy-manage - toc: cloud-account - toc: troubleshoot + - toc: release-notes + - toc: reference + - toc: extend - toc: raw-migrated-files subs: @@ -172,7 +194,6 @@ subs: ess-trial: "https://cloud.elastic.co/registration?page=docs&placement=docs-body" ess-product: "https://www.elastic.co/cloud/elasticsearch-service?page=docs&placement=docs-body" ess-console: "https://cloud.elastic.co?page=docs&placement=docs-body" - ess-console-name: "Elasticsearch Service Console" ess-deployments: "https://cloud.elastic.co/deployments?page=docs&placement=docs-body" ece-ref: "https://www.elastic.co/guide/en/cloud-enterprise/current" eck-ref: "https://www.elastic.co/guide/en/cloud-on-k8s/current" diff --git a/explore-analyze/alerts-cases/alerts/notifications-domain-allowlist.md b/explore-analyze/alerts-cases/alerts/notifications-domain-allowlist.md index c2fe82a78..94e0e85e4 100644 --- a/explore-analyze/alerts-cases/alerts/notifications-domain-allowlist.md +++ b/explore-analyze/alerts-cases/alerts/notifications-domain-allowlist.md @@ -33,7 +33,7 @@ This updates the notifications settings for {{es}} and {{kib}} to reflect what i ### Use the {{ecloud}} Control CLI [use-the-ecloud-control-cli] -Updating multiple deployments through the UI can take a lot of time. Instead, you can use the [{{ecloud}} Control](asciidocalypse://docs/ecctl/docs/reference/cloud/ecctl/index.md) command-line interface (`ecctl`) to automate the deployment update. +Updating multiple deployments through the UI can take a lot of time. Instead, you can use the [{{ecloud}} Control](asciidocalypse://docs/ecctl/docs/reference/index.md) command-line interface (`ecctl`) to automate the deployment update. The following example script shows how to update all deployments of an organization: diff --git a/explore-analyze/alerts-cases/watcher/watcher-ui.md b/explore-analyze/alerts-cases/watcher/watcher-ui.md index 93965a7b8..d8eb217ff 100644 --- a/explore-analyze/alerts-cases/watcher/watcher-ui.md +++ b/explore-analyze/alerts-cases/watcher/watcher-ui.md @@ -38,7 +38,7 @@ If you are creating a threshold watch, you must also have the `view_index_metada A threshold alert is one of the most common types of watches that you can create. This alert periodically checks when your data is above, below, equals, or is in between a certain threshold within a given time interval. -The following example walks you through creating a threshold alert. The alert is triggered when the maximum total CPU usage on a machine goes above a certain percentage. The example uses [Metricbeat](https://www.elastic.co/products/beats/metricbeat) to collect metrics from your systems and services. [Learn more](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-metricbeat/metricbeat-installation-configuration.md) on how to install and get started with Metricbeat. +The following example walks you through creating a threshold alert. 
The alert is triggered when the maximum total CPU usage on a machine goes above a certain percentage. The example uses [Metricbeat](https://www.elastic.co/products/beats/metricbeat) to collect metrics from your systems and services. [Learn more](asciidocalypse://docs/beats/docs/reference/metricbeat/metricbeat-installation-configuration.md) on how to install and get started with Metricbeat. ### Define the watch input and schedule [_define_the_watch_input_and_schedule] diff --git a/explore-analyze/find-and-organize/saved-objects.md b/explore-analyze/find-and-organize/saved-objects.md index c76adde28..ce93453cf 100644 --- a/explore-analyze/find-and-organize/saved-objects.md +++ b/explore-analyze/find-and-organize/saved-objects.md @@ -149,7 +149,7 @@ After you upgrade, or if you set up a new {{kib}} instance using version 8.x or #### Accessing saved objects using old URLs [saved-object-ids-impact-when-using-legacy-urls] -When you upgrade {{kib}} and saved object IDs change, the "deep link" URLs to access those saved objects will also change. To reduce the impact, each existing URL is preserved with a special [legacy URL alias](asciidocalypse://docs/kibana/docs/extend/contribute-to-kibana/legacy-url-aliases.md). This means that if you use a bookmark for a saved object ID that was changed, you’ll be redirected to the new URL for that saved object. +When you upgrade {{kib}} and saved object IDs change, the "deep link" URLs to access those saved objects will also change. To reduce the impact, each existing URL is preserved with a special [legacy URL alias](asciidocalypse://docs/kibana/docs/extend/legacy-url-aliases.md). This means that if you use a bookmark for a saved object ID that was changed, you’ll be redirected to the new URL for that saved object. #### Importing and copying saved objects [saved-object-ids-impact-when-using-import-and-copy] diff --git a/explore-analyze/machine-learning/anomaly-detection/ml-configuring-categories.md b/explore-analyze/machine-learning/anomaly-detection/ml-configuring-categories.md index 51a5bc987..d6c898cde 100644 --- a/explore-analyze/machine-learning/anomaly-detection/ml-configuring-categories.md +++ b/explore-analyze/machine-learning/anomaly-detection/ml-configuring-categories.md @@ -84,7 +84,7 @@ Another advanced option is the `categorization_filters` property, which can cont ## Per-partition categorization [ml-per-partition-categorization] -If you enable per-partition categorization, categories are determined independently for each partition. For example, if your data includes messages from multiple types of logs from different applications, you can use a field like the ECS [`event.dataset` field](asciidocalypse://docs/ecs/docs/reference/ecs/ecs-event.md) as the `partition_field_name` and categorize the messages for each type of log separately. +If you enable per-partition categorization, categories are determined independently for each partition. For example, if your data includes messages from multiple types of logs from different applications, you can use a field like the ECS [`event.dataset` field](asciidocalypse://docs/ecs/docs/reference/ecs-event.md) as the `partition_field_name` and categorize the messages for each type of log separately. If your job has multiple detectors, every detector that uses the `mlcategory` keyword must also define a `partition_field_name`. You must use the same `partition_field_name` value in all of these detectors. Otherwise, when you create or update a job and enable per-partition categorization, it fails. 
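As a minimal sketch of the per-partition categorization setup described above (the job name, `message` categorization field, bucket span, and `@timestamp` time field are illustrative assumptions, not part of the original page), an anomaly detection job that partitions categories on `event.dataset` might look like this:

```console
PUT _ml/anomaly_detectors/per_partition_categorization_example
{
  "analysis_config": {
    "bucket_span": "15m",
    "categorization_field_name": "message",
    "per_partition_categorization": { "enabled": true },
    "detectors": [
      {
        "function": "count",
        "by_field_name": "mlcategory",
        "partition_field_name": "event.dataset"
      }
    ]
  },
  "data_description": {
    "time_field": "@timestamp"
  }
}
```

Every detector that references `mlcategory` here uses the same `partition_field_name`, which is the constraint described in the preceding paragraph.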
diff --git a/explore-analyze/machine-learning/data-frame-analytics/ml-trained-models.md b/explore-analyze/machine-learning/data-frame-analytics/ml-trained-models.md index 70d0cd4b9..66ff63e90 100644 --- a/explore-analyze/machine-learning/data-frame-analytics/ml-trained-models.md +++ b/explore-analyze/machine-learning/data-frame-analytics/ml-trained-models.md @@ -115,4 +115,4 @@ If you also want to copy the {{dfanalytics-job}} to the new cluster, you can exp ## Importing an external model to the {{stack}} [import-external-model-to-es] -It is possible to import a model to your {{es}} cluster even if the model is not trained by Elastic {{dfanalytics}}. Eland supports [importing models](asciidocalypse://docs/eland/docs/reference/elasticsearch/elasticsearch-client-eland/machine-learning.md) directly through its APIs. Please refer to the latest [Eland documentation](https://eland.readthedocs.io/en/latest/index.md) for more information on supported model types and other details of using Eland to import models with. +It is possible to import a model to your {{es}} cluster even if the model is not trained by Elastic {{dfanalytics}}. Eland supports [importing models](asciidocalypse://docs/eland/docs/reference/machine-learning.md) directly through its APIs. Please refer to the latest [Eland documentation](https://eland.readthedocs.io/en/latest/index.md) for more information on supported model types and other details of using Eland to import models with. diff --git a/explore-analyze/machine-learning/nlp/ml-nlp-import-model.md b/explore-analyze/machine-learning/nlp/ml-nlp-import-model.md index 9892b78a3..23ceccf46 100644 --- a/explore-analyze/machine-learning/nlp/ml-nlp-import-model.md +++ b/explore-analyze/machine-learning/nlp/ml-nlp-import-model.md @@ -9,7 +9,7 @@ mapped_pages: # Import the trained model and vocabulary [ml-nlp-import-model] ::::{important} -If you want to install a trained model in a restricted or closed network, refer to [these instructions](asciidocalypse://docs/eland/docs/reference/elasticsearch/elasticsearch-client-eland/machine-learning.md#ml-nlp-pytorch-air-gapped). +If you want to install a trained model in a restricted or closed network, refer to [these instructions](asciidocalypse://docs/eland/docs/reference/machine-learning.md#ml-nlp-pytorch-air-gapped). :::: After you choose a model, you must import it and its tokenizer vocabulary to your cluster. When you import the model, it must be chunked and imported one chunk at a time for storage in parts due to its size. @@ -22,7 +22,7 @@ Trained models must be in a TorchScript representation for use with {{stack-ml-f ## Import with the Eland client installed [ml-nlp-import-script] -1. Install the [Eland Python client](asciidocalypse://docs/eland/docs/reference/elasticsearch/elasticsearch-client-eland/installation.md) with PyTorch extra dependencies. +1. Install the [Eland Python client](asciidocalypse://docs/eland/docs/reference/installation.md) with PyTorch extra dependencies. ```shell python -m pip install 'eland[pytorch]' @@ -43,7 +43,7 @@ Trained models must be in a TorchScript representation for use with {{stack-ml-f 3. Specify the identifier for the model in the Hugging Face model hub. 4. Specify the type of NLP task. Supported values are `fill_mask`, `ner`, `question_answering`, `text_classification`, `text_embedding`, `text_expansion`, `text_similarity`, and `zero_shot_classification`. 
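As a hedged sketch of how steps 3 and 4 typically come together on the command line (all values below are placeholders, and the exact flag names can vary between Eland versions), the import command might look something like this:

```shell
# Placeholders throughout; substitute your cluster URL and credentials,
# the Hugging Face model identifier (step 3), and the task type (step 4).
eland_import_hub_model \
  --url "https://<username>:<password>@<es-host>:<port>" \
  --hub-model-id <hub-model-id> \
  --task-type ner \
  --start
```

The `--start` flag deploys the model immediately after the upload completes.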
-For more details, refer to [asciidocalypse://docs/eland/docs/reference/elasticsearch/elasticsearch-client-eland/machine-learning.md#ml-nlp-pytorch](asciidocalypse://docs/eland/docs/reference/elasticsearch/elasticsearch-client-eland/machine-learning.md#ml-nlp-pytorch). +For more details, refer to [asciidocalypse://docs/eland/docs/reference/elasticsearch/elasticsearch-client-eland/machine-learning.md#ml-nlp-pytorch](asciidocalypse://docs/eland/docs/reference/machine-learning.md#ml-nlp-pytorch). ## Import with Docker [ml-nlp-import-docker] diff --git a/explore-analyze/machine-learning/nlp/ml-nlp-ner-example.md b/explore-analyze/machine-learning/nlp/ml-nlp-ner-example.md index 4e9f55def..b1c2d3305 100644 --- a/explore-analyze/machine-learning/nlp/ml-nlp-ner-example.md +++ b/explore-analyze/machine-learning/nlp/ml-nlp-ner-example.md @@ -43,7 +43,7 @@ docker run -it --rm docker.elastic.co/eland/eland \ --start ``` -You need to provide an administrator username and its password and replace the `$CLOUD_ID` with the ID of your Cloud deployment. This Cloud ID can be copied from the deployments page on your Cloud website. +You need to provide an administrator username and its password and replace the `$CLOUD_ID` with the ID of your Cloud deployment. This Cloud ID can be copied from the **Deployments** page on your Cloud website. Since the `--start` option is used at the end of the Eland import command, {{es}} deploys the model ready to use. If you have multiple models and want to select which model to deploy, you can use the **{{ml-app}} > Model Management** user interface in {{kib}} to manage the starting and stopping of models. diff --git a/explore-analyze/machine-learning/nlp/ml-nlp-text-emb-vector-search-example.md b/explore-analyze/machine-learning/nlp/ml-nlp-text-emb-vector-search-example.md index 0fb5a4f1f..ca3609c6c 100644 --- a/explore-analyze/machine-learning/nlp/ml-nlp-text-emb-vector-search-example.md +++ b/explore-analyze/machine-learning/nlp/ml-nlp-text-emb-vector-search-example.md @@ -47,7 +47,7 @@ docker run -it --rm docker.elastic.co/eland/eland \ --start ``` -You need to provide an administrator username and password and replace the `$CLOUD_ID` with the ID of your Cloud deployment. This Cloud ID can be copied from the deployments page on your Cloud website. +You need to provide an administrator username and password and replace the `$CLOUD_ID` with the ID of your Cloud deployment. This Cloud ID can be copied from the **Deployments** page on your Cloud website. Since the `--start` option is used at the end of the Eland import command, {{es}} deploys the model ready to use. If you have multiple models and want to select which model to deploy, you can use the **{{ml-app}} > Model Management** user interface in {{kib}} to manage the starting and stopping of models. diff --git a/explore-analyze/query-filter/tools.md b/explore-analyze/query-filter/tools.md index 925c6af3b..56232579b 100644 --- a/explore-analyze/query-filter/tools.md +++ b/explore-analyze/query-filter/tools.md @@ -8,16 +8,16 @@ mapped_pages: # Query tools [devtools-kibana] -Elasticsearch offers tools that you can use to query your data, manage those queries, and optimize them to be as efficient as possible. +Access these specialized tools in Kibana and the Serverless UI to develop, debug, and refine your search queries while monitoring their performance and efficiency. -| | | -| --- | --- | -| [Saved queries](tools/saved-queries.md) | Save your searches and queries to reuse them later. 
| +| Tool | Function | +|------|----------| +| [Saved queries](tools/saved-queries.md) | Save your searches and queries to reuse them later. | | [Console](tools/console.md) | Interact with the REST APIs of {{es}} and {{kib}}, including sending requests and viewing API documentation. | | [{{searchprofiler}}](tools/search-profiler.md) | Inspect and analyze your search queries. | -| [Grok Debugger   ](tools/grok-debugger.md) | Build and debug grok patterns before you use them in your data processing pipelines. | -| [Painless Lab](../scripting/painless-lab.md) | [beta] Test and debug Painless scripts in real-time. | -| [Playground](tools/playground.md) | Combine your Elasticsearch data with the power of large language models (LLMs) for retrieval augmented generation (RAG), using a chat interface. | +| [Grok Debugger](tools/grok-debugger.md) | Build and debug grok patterns before you use them in your data processing pipelines. | +| [Painless Lab](../scripting/painless-lab.md) | [beta] Test and debug Painless scripts in real-time. | +| [Playground](tools/playground.md) | Combine your Elasticsearch data with the power of large language models (LLMs) for retrieval augmented generation (RAG), using a chat interface. | diff --git a/explore-analyze/query-filter/tools/grok-debugger.md b/explore-analyze/query-filter/tools/grok-debugger.md index e0550a363..7e9e9f135 100644 --- a/explore-analyze/query-filter/tools/grok-debugger.md +++ b/explore-analyze/query-filter/tools/grok-debugger.md @@ -10,7 +10,7 @@ mapped_pages: You can build and debug grok patterns in the {{kib}} **Grok Debugger** before you use them in your data processing pipelines. Grok is a pattern matching syntax that you can use to parse arbitrary text and structure it. Grok is good for parsing syslog, apache, and other webserver logs, mysql logs, and in general, any log format that is written for human consumption. -Grok patterns are supported in {{es}} [runtime fields](../../../manage-data/data-store/mapping/runtime-fields.md), the {{es}} [grok ingest processor](asciidocalypse://docs/elasticsearch/docs/reference/ingestion-tools/enrich-processor/grok-processor.md), and the {{ls}} [grok filter](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/plugins-filters-grok.md). For syntax, see [Grokking grok](../../scripting/grok.md). +Grok patterns are supported in {{es}} [runtime fields](../../../manage-data/data-store/mapping/runtime-fields.md), the {{es}} [grok ingest processor](asciidocalypse://docs/elasticsearch/docs/reference/ingestion-tools/enrich-processor/grok-processor.md), and the {{ls}} [grok filter](asciidocalypse://docs/logstash/docs/reference/plugins-filters-grok.md). For syntax, see [Grokking grok](../../scripting/grok.md). The {{stack}} ships with more than 120 reusable grok patterns. For a complete list of patterns, see [{{es}} grok patterns](https://github.com/elastic/elasticsearch/tree/master/libs/grok/src/main/resources/patterns) and [{{ls}} grok patterns](https://github.com/logstash-plugins/logstash-patterns-core/tree/master/patterns). diff --git a/explore-analyze/report-and-share.md b/explore-analyze/report-and-share.md index 8530287a4..602a2a86a 100644 --- a/explore-analyze/report-and-share.md +++ b/explore-analyze/report-and-share.md @@ -56,7 +56,7 @@ When sharing an object with unsaved changes, you get a temporary link that might To access the object shared with the link, users need to authenticate. 
-Anonymous users can also access the link if you have configured [Anonymous authentication](/deploy-manage/users-roles/cluster-or-deployment-auth/user-authentication.md#anonymous-authentication) and your anonymous service account has privileges to access what you want to share. +Anonymous users can also access the link if you have configured [Anonymous authentication](/deploy-manage/users-roles/cluster-or-deployment-auth/kibana-authentication.md#anonymous-authentication) and your anonymous service account has privileges to access what you want to share. :::{image} ../images/share-dashboard.gif :alt: getting a shareable link for a dashboard diff --git a/explore-analyze/scripting/grok.md b/explore-analyze/scripting/grok.md index fdfa80441..370afb18b 100644 --- a/explore-analyze/scripting/grok.md +++ b/explore-analyze/scripting/grok.md @@ -46,7 +46,7 @@ The first value is a number, followed by what appears to be an IP address. You c To ease migration to the [Elastic Common Schema (ECS)](https://www.elastic.co/guide/en/ecs/current), a new set of ECS-compliant patterns is available in addition to the existing patterns. The new ECS pattern definitions capture event field names that are compliant with the schema. -The ECS pattern set has all of the pattern definitions from the legacy set, and is a drop-in replacement. Use the [`ecs-compatability`](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/plugins-filters-grok.md#plugins-filters-grok-ecs_compatibility) setting to switch modes. +The ECS pattern set has all of the pattern definitions from the legacy set, and is a drop-in replacement. Use the [`ecs-compatability`](asciidocalypse://docs/logstash/docs/reference/plugins-filters-grok.md#plugins-filters-grok-ecs_compatibility) setting to switch modes. New features and enhancements will be added to the ECS-compliant files. The legacy patterns may still receive bug fixes which are backwards compatible. diff --git a/explore-analyze/scripting/modules-scripting-engine.md b/explore-analyze/scripting/modules-scripting-engine.md index ac4c6ff36..06e0f1d25 100644 --- a/explore-analyze/scripting/modules-scripting-engine.md +++ b/explore-analyze/scripting/modules-scripting-engine.md @@ -10,7 +10,7 @@ mapped_pages: A `ScriptEngine` is a backend for implementing a scripting language. It may also be used to write scripts that need to use advanced internals of scripting. For example, a script that wants to use term frequencies while scoring. -The plugin [documentation](asciidocalypse://docs/elasticsearch/docs/extend/create-elasticsearch-plugins/index.md) has more information on how to write a plugin so that Elasticsearch will properly load it. To register the `ScriptEngine`, your plugin should implement the `ScriptPlugin` interface and override the `getScriptEngine(Settings settings)` method. +The plugin [documentation](asciidocalypse://docs/elasticsearch/docs/extend/index.md) has more information on how to write a plugin so that Elasticsearch will properly load it. To register the `ScriptEngine`, your plugin should implement the `ScriptPlugin` interface and override the `getScriptEngine(Settings settings)` method. The following is an example of a custom `ScriptEngine` which uses the language name `expert_scripts`. It implements a single script called `pure_df` which may be used as a search script to override each document’s score as the document frequency of a provided term. 
diff --git a/explore-analyze/transforms/transform-checkpoints.md b/explore-analyze/transforms/transform-checkpoints.md index b2cf637aa..09699505e 100644 --- a/explore-analyze/transforms/transform-checkpoints.md +++ b/explore-analyze/transforms/transform-checkpoints.md @@ -39,7 +39,7 @@ If the cluster experiences unsuitable performance degradation due to the {{trans ## Using the ingest timestamp for syncing the {{transform}} [sync-field-ingest-timestamp] -In most cases, it is strongly recommended to use the ingest timestamp of the source indices for syncing the {{transform}}. This is the most optimal way for {{transforms}} to be able to identify new changes. If your data source follows the [ECS standard](asciidocalypse://docs/ecs/docs/reference/ecs/index.md), you might already have an [`event.ingested`](asciidocalypse://docs/ecs/docs/reference/ecs/ecs-event.md#field-event-ingested) field. In this case, use `event.ingested` as the `sync`.`time`.`field` property of your {{transform}}. +In most cases, it is strongly recommended to use the ingest timestamp of the source indices for syncing the {{transform}}. This is the most optimal way for {{transforms}} to be able to identify new changes. If your data source follows the [ECS standard](asciidocalypse://docs/ecs/docs/reference/index.md), you might already have an [`event.ingested`](asciidocalypse://docs/ecs/docs/reference/ecs-event.md#field-event-ingested) field. In this case, use `event.ingested` as the `sync`.`time`.`field` property of your {{transform}}. If you don’t have a `event.ingested` field or it isn’t populated, you can set it by using an ingest pipeline. Create an ingest pipeline either using the [ingest pipeline API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-ingest-put-pipeline) (like the example below) or via {{kib}} under **Stack Management > Ingest Pipelines**. Use a [`set` processor](asciidocalypse://docs/elasticsearch/docs/reference/ingestion-tools/enrich-processor/set-processor.md) to set the field and associate it with the value of the ingest timestamp. diff --git a/explore-analyze/visualize/legacy-editors/timelion.md b/explore-analyze/visualize/legacy-editors/timelion.md index 937708294..abaa0e958 100644 --- a/explore-analyze/visualize/legacy-editors/timelion.md +++ b/explore-analyze/visualize/legacy-editors/timelion.md @@ -80,7 +80,7 @@ You collected data from your operating system using Metricbeat, and you want to Set up Metricbeat, then create the dashboard. -1. To set up Metricbeat, go to [Metricbeat quick start: installation and configuration](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-metricbeat/metricbeat-installation-configuration.md) +1. To set up Metricbeat, go to [Metricbeat quick start: installation and configuration](asciidocalypse://docs/beats/docs/reference/metricbeat/metricbeat-installation-configuration.md) 2. Go to **Dashboards**. 3. On the **Dashboards** page, click **Create dashboard**. diff --git a/extend/index.md b/extend/index.md new file mode 100644 index 000000000..9a118b7f3 --- /dev/null +++ b/extend/index.md @@ -0,0 +1,20 @@ +# Extend and contribute + +This section contains information on how to extend or contribute to our various products. + +## Contributing to Elastic Projects + +You can contribute to various projects, including: + +- [Kibana](asciidocalypse://docs/extend/index.md): Enhance our data visualization platform by contributing to Kibana. 
+- [Logstash](asciidocalypse://docs/extend/index.md): Help us improve the data processing pipeline with your contributions to Logstash. +- [Beats](asciidocalypse://docs/extend/index.md): Add new features or beats to our lightweight data shippers. + +## Creating Integrations + +Extend the capabilities of Elastic by creating integrations that connect Elastic products with other tools and systems. Visit our [Integrations Guide](asciidocalypse://docs/extend/index.md) to get started. + +## Elasticsearch Plugins + +Develop custom plugins to add new functionalities to Elasticsearch. Check out our [Elasticsearch Plugins Development Guide](asciidocalypse://docs/extend/index.md) for detailed instructions and best practices. + diff --git a/extend/toc.yml b/extend/toc.yml new file mode 100644 index 000000000..f2ab23679 --- /dev/null +++ b/extend/toc.yml @@ -0,0 +1,2 @@ +toc: + - file: index.md \ No newline at end of file diff --git a/get-started/introduction.md b/get-started/introduction.md index 8fb48ab1e..e0241383b 100644 --- a/get-started/introduction.md +++ b/get-started/introduction.md @@ -55,10 +55,10 @@ The {{stack}} is used for a wide and growing range of use cases. Here are a few - **Semantic search**: Understand the intent and contextual meaning behind search queries using tools like synonyms, dense vector embeddings, and learned sparse query-document expansion. - **Hybrid search**: Combine full-text search with vector search using state-of-the-art ranking algorithms. - **Build search experiences**: Add hybrid search capabilities to apps or websites, or build enterprise search engines over your organization’s internal data sources. -- **Retrieval augmented generation (RAG)**: Use {{ess}} as a retrieval engine to supplement generative AI models with more relevant, up-to-date, or proprietary data for a range of use cases. +- **Retrieval augmented generation (RAG)**: Use {{ecloud}} as a retrieval engine to supplement generative AI models with more relevant, up-to-date, or proprietary data for a range of use cases. - **Geospatial search**: Search for locations and calculate spatial relationships using geospatial queries. -This is just a sample of search, observability, and security use cases enabled by {{ess}}. Refer to Elastic [customer success stories](https://www.elastic.co/customers/success-stories) for concrete examples across a range of industries. +This is just a sample of search, observability, and security use cases enabled by {{ecloud}}. Refer to Elastic [customer success stories](https://www.elastic.co/customers/success-stories) for concrete examples across a range of industries. % TODO: cleanup these links, consolidate with Explore and analyze diff --git a/get-started/the-stack.md b/get-started/the-stack.md index f3f4f1667..12d944d97 100644 --- a/get-started/the-stack.md +++ b/get-started/the-stack.md @@ -46,7 +46,7 @@ APM $$$stack-components-beats$$$ {{beats}} -: {{beats}} are data shippers that you install as agents on your servers to send operational data to {{es}}. {{beats}} are available for many standard observability data scenarios, including audit data, log files and journals, cloud data, availability, metrics, network traffic, and Windows event logs. [Learn more about {{beats}}](asciidocalypse://docs/beats/docs/reference/ingestion-tools/index.md). +: {{beats}} are data shippers that you install as agents on your servers to send operational data to {{es}}. 
{{beats}} are available for many standard observability data scenarios, including audit data, log files and journals, cloud data, availability, metrics, network traffic, and Windows event logs. [Learn more about {{beats}}](asciidocalypse://docs/beats/docs/reference/index.md). $$$stack-components-ingest-pipelines$$$ @@ -56,7 +56,7 @@ $$$stack-components-ingest-pipelines$$$ $$$stack-components-logstash$$$ {{ls}} -: {{ls}} is a data collection engine with real-time pipelining capabilities. It can dynamically unify data from disparate sources and normalize the data into destinations of your choice. {{ls}} supports a broad array of input, filter, and output plugins, with many native codecs further simplifying the ingestion process. [Learn more about {{ls}}](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/index.md). +: {{ls}} is a data collection engine with real-time pipelining capabilities. It can dynamically unify data from disparate sources and normalize the data into destinations of your choice. {{ls}} supports a broad array of input, filter, and output plugins, with many native codecs further simplifying the ingestion process. [Learn more about {{ls}}](asciidocalypse://docs/logstash/docs/reference/index.md). ### Store [_store] diff --git a/get-started/versioning-availability.md b/get-started/versioning-availability.md index fb859ed04..c427dcbfa 100644 --- a/get-started/versioning-availability.md +++ b/get-started/versioning-availability.md @@ -18,20 +18,31 @@ It's important to understand this versioning system, for compatibility and [upgr ## Availability of features -Elastic products and features have different availability states across deployment types: - -- **Generally Available**: Feature is production-ready (default if not specified) -- **Beta**: Feature is nearing general availability but is not yet ready for production usage -- **Technical preview**: Feature is in early development -- **Coming**: Feature is announced for a future release -- **Discontinued**: Feature is being phased out -- **Unavailable**: Feature is not supported in this deployment type or version +Elastic products and features have different availability states across deployment types and lifecycle stages. Features may have different availability states between: -- Elastic Stack versions (for example, 9.0, 9.1) -- Serverless projects (Security, {{es}}, Observability) -- Deployment types (and versions) +- **Deployment type**: The environment where the feature is available (Stack, Serverless, ECE, ECK, etc.) +- **Lifecycle state**: The development or support status of the feature (GA, Beta, etc.) 
+- **Version**: The specific version the lifecycle state applies to + +#### Lifecycle states + +| State | Description | +|-------|-------------| +| **Generally Available (GA)** | Production-ready feature (default if not specified) | +| **Beta** | Feature is nearing general availability but not yet production-ready | +| **Technical preview** | Feature is in early development stage | +| **Coming** | Feature announced for a future release | +| **Discontinued** | Feature is being phased out | +| **Unavailable** | Feature is not supported in this deployment type or version | + +### Where feature availability may differ + +Features may have different states between: + +- **[Elastic Stack](the-stack.md)** versions (e.g., 9.0, 9.1) +- **Deployment types** (and deployment versions) - [Elastic Cloud Hosted](/deploy-manage/deploy/elastic-cloud/cloud-hosted.md) - [Elastic Cloud Serverless](/deploy-manage/deploy/elastic-cloud/serverless.md) - [Self-managed deployments](/deploy-manage/deploy/self-managed.md) @@ -39,9 +50,60 @@ Features may have different availability states between: - ECE deployment versions (for example, 4.0.0) - [Elastic Cloud on Kubernetes (ECK)](/deploy-manage/deploy/cloud-on-k8s.md) - ECK deployment versions (for example, 3.0.0) +- **Serverless project types** + - Security + - Elasticsearch + - Observability + +### Important tips when reading the docs + +- Always check feature lifecycle state for your specific deployment type and version +- Pay attention to Elastic Stack version requirements +- Note that Serverless features may vary by project type + +### Availability badges in the docs + +Our documentation uses badges to help you quickly identify where and when features are available for your specific environment. + +Badges can appear in two places: +1. **Page headers**: Shows the overall availability across all deployment types +2. **Section headers**: Indicates specific availability for content in that section + +### How to read the badges + +Here are some examples to help you understand how to read the availability badges. + +#### Example #1: Stack only feature + +```yaml {applies_to} +stack: ga 9.1 +``` +- **Deployment type**: Elastic Stack +- **Version**: 9.1 +- **Lifecycle**: Generally Available (GA) — default state + +#### Example #2: Serverless-only feature with project differences + +```yaml {applies_to} +serverless: + security: beta + elasticsearch: ga +``` +- **Deployment type**: Serverless +- **Lifecycle**: + - Beta for Security projects + - Generally Available for Elasticsearch projects + +#### Example #3: Discontinued feature on one deployment type -When reading the Elastic documentation be sure to: +```yaml {applies_to} +deployment: + ece: discontinued 4.1.0 +``` +- **Deployment type**: Elastic Cloud Enterprise +- **Lifecycle**: Discontinued +- **Version**: 4.1.0 -- Check feature availability for your deployment type and version -- Note stack version requirements -- Be aware that Serverless features may vary by project type \ No newline at end of file +:::{tip} +For contributors and those interested in the technical details, see the [Elastic docs syntax guide](https://elastic.github.io/docs-builder/syntax/applies/) for more information on how these badges are implemented. 
+::: \ No newline at end of file diff --git a/images/observability-apm-help-me-decide.svg b/images/observability-apm-help-me-decide.svg new file mode 100644 index 000000000..dfd0c7fe7 --- /dev/null +++ b/images/observability-apm-help-me-decide.svg @@ -0,0 +1,47 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/images/security-nav-overview.gif b/images/security-nav-overview.gif deleted file mode 100644 index 00c0bb0a7..000000000 Binary files a/images/security-nav-overview.gif and /dev/null differ diff --git a/images/serverless--events-correlation-tab-eql-query.png b/images/serverless--events-correlation-tab-eql-query.png deleted file mode 100644 index 56d45538a..000000000 Binary files a/images/serverless--events-correlation-tab-eql-query.png and /dev/null differ diff --git a/images/serverless--events-esql-tab.png b/images/serverless--events-esql-tab.png deleted file mode 100644 index deb79f0e1..000000000 Binary files a/images/serverless--events-esql-tab.png and /dev/null differ diff --git a/images/serverless--events-timeline-disable-filter.png b/images/serverless--events-timeline-disable-filter.png deleted file mode 100644 index 9a73b5b87..000000000 Binary files a/images/serverless--events-timeline-disable-filter.png and /dev/null differ diff --git a/images/serverless--events-timeline-field-exists.png b/images/serverless--events-timeline-field-exists.png deleted file mode 100644 index c78c05415..000000000 Binary files a/images/serverless--events-timeline-field-exists.png and /dev/null differ diff --git a/images/serverless--events-timeline-filter-exclude.png b/images/serverless--events-timeline-filter-exclude.png deleted file mode 100644 index 8df9ee851..000000000 Binary files a/images/serverless--events-timeline-filter-exclude.png and /dev/null differ diff --git a/images/serverless--events-timeline-filter-value.png b/images/serverless--events-timeline-filter-value.png deleted file mode 100644 index 7e51f9041..000000000 Binary files a/images/serverless--events-timeline-filter-value.png and /dev/null differ diff --git a/images/serverless--events-timeline-sidebar.png b/images/serverless--events-timeline-sidebar.png deleted file mode 100644 index 76d45ff77..000000000 Binary files a/images/serverless--events-timeline-sidebar.png and /dev/null differ diff --git a/images/serverless--events-timeline-ui-filter-options.png b/images/serverless--events-timeline-ui-filter-options.png deleted file mode 100644 index e3aeddcec..000000000 Binary files a/images/serverless--events-timeline-ui-filter-options.png and /dev/null differ diff --git a/images/serverless--events-timeline-ui-renderer.png b/images/serverless--events-timeline-ui-renderer.png deleted file mode 100644 index 207d5e5cc..000000000 Binary files a/images/serverless--events-timeline-ui-renderer.png and /dev/null differ diff --git a/images/serverless--events-timeline-ui-updated.png b/images/serverless--events-timeline-ui-updated.png deleted file mode 100644 index 63450436c..000000000 Binary files a/images/serverless--events-timeline-ui-updated.png and /dev/null differ diff --git a/images/serverless--events-timeline-ui.png b/images/serverless--events-timeline-ui.png deleted file mode 100644 index 929c7e1a9..000000000 Binary files a/images/serverless--events-timeline-ui.png and /dev/null differ diff --git a/index.md b/index.md index 09d5ec543..7412976c3 100644 --- a/index.md +++ b/index.md @@ -1 +1 @@ -# Elastic documentation!!!! \ No newline at end of file +# Elastic documentation!!!! 
diff --git a/manage-data/data-store/data-streams/set-up-data-stream.md b/manage-data/data-store/data-streams/set-up-data-stream.md index ab310ac96..581ee8196 100644 --- a/manage-data/data-store/data-streams/set-up-data-stream.md +++ b/manage-data/data-store/data-streams/set-up-data-stream.md @@ -21,7 +21,7 @@ You can also [convert an index alias to a data stream](#convert-index-alias-to-d ::::{important} If you use {{fleet}}, {{agent}}, or {{ls}}, skip this tutorial. They all set up data streams for you. -For {{fleet}} and {{agent}}, check out this [data streams documentation](asciidocalypse://docs/docs-content/docs/reference/ingestion-tools/fleet/data-streams.md). For {{ls}}, check out the [data streams settings](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/plugins-outputs-elasticsearch.md#plugins-outputs-elasticsearch-data_stream) for the `elasticsearch output` plugin. +For {{fleet}} and {{agent}}, check out this [data streams documentation](asciidocalypse://docs/docs-content/docs/reference/ingestion-tools/fleet/data-streams.md). For {{ls}}, check out the [data streams settings](asciidocalypse://docs/logstash/docs/reference/plugins-outputs-elasticsearch.md#plugins-outputs-elasticsearch-data_stream) for the `elasticsearch output` plugin. :::: diff --git a/solutions/search/search-approaches/near-real-time-search.md b/manage-data/data-store/near-real-time-search.md similarity index 93% rename from solutions/search/search-approaches/near-real-time-search.md rename to manage-data/data-store/near-real-time-search.md index c2387ee26..57b3fc8bd 100644 --- a/solutions/search/search-approaches/near-real-time-search.md +++ b/manage-data/data-store/near-real-time-search.md @@ -3,7 +3,6 @@ mapped_pages: - https://www.elastic.co/guide/en/elasticsearch/reference/current/near-real-time.html applies_to: stack: - serverless: --- # Near real-time search [near-real-time] @@ -14,7 +13,7 @@ Lucene, the Java libraries on which {{es}} is based, introduced the concept of p Sitting between {{es}} and the disk is the filesystem cache. Documents in the in-memory indexing buffer ([Figure 1](#img-pre-refresh)) are written to a new segment ([Figure 2](#img-post-refresh)). The new segment is written to the filesystem cache first (which is cheap) and only later is it flushed to disk (which is expensive). However, after a file is in the cache, it can be opened and read just like any other file. -:::{image} ../../../images/elasticsearch-reference-lucene-in-memory-buffer.png +:::{image} /images/elasticsearch-reference-lucene-in-memory-buffer.png :alt: A Lucene index with new documents in the in-memory buffer :title: A Lucene index with new documents in the in-memory buffer :name: img-pre-refresh @@ -22,7 +21,7 @@ Sitting between {{es}} and the disk is the filesystem cache. Documents in the in Lucene allows new segments to be written and opened, making the documents they contain visible to search ​without performing a full commit. This is a much lighter process than a commit to disk, and can be done frequently without degrading performance. 
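As a minimal sketch of what this looks like in practice, the following Python client example (the index name, document, and connection details are hypothetical) indexes a document, requests a refresh, and then searches for it. The explicit refresh is shown only to make the behavior visible; in normal operation the periodic refresh (once per second by default) does this for you, and forcing frequent refreshes creates many small segments.

```python
from elasticsearch import Elasticsearch

# Hypothetical endpoint and credentials; substitute your own.
es = Elasticsearch("https://localhost:9200", api_key="YOUR_API_KEY")

# The document first lands in the in-memory indexing buffer.
es.index(index="my-index", id="1", document={"title": "near real-time search"})

# A refresh writes the buffered document to a new segment in the filesystem cache,
# making it searchable without waiting for a full commit to disk.
es.indices.refresh(index="my-index")

# The document is now visible to search, even though it may not yet be flushed to disk.
resp = es.search(index="my-index", query={"match": {"title": "near real-time"}})
print(resp["hits"]["total"])
```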
-:::{image} ../../../images/elasticsearch-reference-lucene-written-not-committed.png +:::{image} /images/elasticsearch-reference-lucene-written-not-committed.png :alt: The buffer contents are written to a segment, which is searchable, but is not yet committed :title: The buffer contents are written to a segment, which is searchable, but is not yet committed :name: img-post-refresh diff --git a/manage-data/ingest/ingest-reference-architectures.md b/manage-data/ingest/ingest-reference-architectures.md index 88c3f0684..47bebdb00 100644 --- a/manage-data/ingest/ingest-reference-architectures.md +++ b/manage-data/ingest/ingest-reference-architectures.md @@ -24,6 +24,6 @@ You can host {{es}} on your own hardware or send your data to {{es}} on {{ecloud | [*{{agent}} to {{ls}} to Elasticsearch*](./ingest-reference-architectures/agent-ls.md)

![Image showing {{agent}} to {{ls}} to {{es}}](../../images/ingest-ea-ls-es.png "") | You need additional capabilities offered by {{ls}}:

* [**enrichment**](./ingest-reference-architectures/ls-enrich.md) between {{agent}} and {{es}}
* [**persistent queue (PQ) buffering**](./ingest-reference-architectures/lspq.md) to accommodate network issues and downstream unavailability
* [**proxying**](./ingest-reference-architectures/ls-networkbridge.md) in cases where {{agent}}s have network restrictions for connecting outside of the {{agent}} network
* data needs to be [**routed to multiple**](./ingest-reference-architectures/ls-multi.md) {{es}} clusters and other destinations depending on the content
| | [*{{agent}} to proxy to Elasticsearch*](./ingest-reference-architectures/agent-proxy.md)

![Image showing connections between {{agent}} and {{es}} using a proxy](../../images/ingest-ea-proxy-es.png "") | Agents have [network restrictions](./ingest-reference-architectures/agent-proxy.md) that prevent connecting outside of the {{agent}} network. Note that [{{ls}} as proxy](./ingest-reference-architectures/ls-networkbridge.md) is one option.
| | [*{{agent}} to {{es}} with Kafka as middleware message queue*](./ingest-reference-architectures/agent-kafka-es.md)

![Image showing {{agent}} collecting data and using Kafka as a message queue enroute to {{es}}](../../images/ingest-ea-kafka.png "") | Kafka is your [middleware message queue](./ingest-reference-architectures/agent-kafka-es.md):

* [Kafka ES sink connector](./ingest-reference-architectures/agent-kafka-essink.md) to write from Kafka to {{es}}
* [{{ls}} to read from Kafka and route to {{es}}](./ingest-reference-architectures/agent-kafka-ls.md)
| -| [*{{ls}} to Elasticsearch*](./ingest-reference-architectures/ls-for-input.md)

![Image showing {{ls}} collecting data and sending to {{es}}](../../images/ingest-ls-es.png "") | You need to collect data from a source that {{agent}} can’t read (such as databases, AWS Kinesis). Check out the [{{ls}} input plugins](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/input-plugins.md).
| +| [*{{ls}} to Elasticsearch*](./ingest-reference-architectures/ls-for-input.md)

![Image showing {{ls}} collecting data and sending to {{es}}](../../images/ingest-ls-es.png "") | You need to collect data from a source that {{agent}} can’t read (such as databases, AWS Kinesis). Check out the [{{ls}} input plugins](asciidocalypse://docs/logstash/docs/reference/input-plugins.md).
| | [*Elastic air-gapped architectures*](./ingest-reference-architectures/airgapped-env.md)

![Image showing {{stack}} in an air-gapped environment](../../images/ingest-ea-airgapped.png "") | You want to deploy {{agent}} and {{stack}} in an air-gapped environment (no access to outside networks)
| diff --git a/manage-data/ingest/ingest-reference-architectures/agent-kafka-essink.md b/manage-data/ingest/ingest-reference-architectures/agent-kafka-essink.md index 5376333f7..61d96a3c9 100644 --- a/manage-data/ingest/ingest-reference-architectures/agent-kafka-essink.md +++ b/manage-data/ingest/ingest-reference-architectures/agent-kafka-essink.md @@ -32,8 +32,8 @@ Info on {{agent}} and agent integrations: Info on {{ls}} and {{ls}} plugins: * [{{ls}} Reference](https://www.elastic.co/guide/en/logstash/current) -* [{{ls}} {{agent}} input](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/plugins-inputs-elastic_agent.md) -* [{{ls}} Kafka output](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/plugins-outputs-kafka.md) +* [{{ls}} {{agent}} input](asciidocalypse://docs/logstash/docs/reference/plugins-inputs-elastic_agent.md) +* [{{ls}} Kafka output](asciidocalypse://docs/logstash/docs/reference/plugins-outputs-kafka.md) Info on {{es}}: diff --git a/manage-data/ingest/ingest-reference-architectures/agent-kafka-ls.md b/manage-data/ingest/ingest-reference-architectures/agent-kafka-ls.md index c6a4c0f7c..81c8f3ded 100644 --- a/manage-data/ingest/ingest-reference-architectures/agent-kafka-ls.md +++ b/manage-data/ingest/ingest-reference-architectures/agent-kafka-ls.md @@ -32,10 +32,10 @@ Info on {{agent}} and agent integrations: Info on {{ls}} and {{ls}} Kafka plugins: * [{{ls}} Reference](https://www.elastic.co/guide/en/logstash/current) -* [{{ls}} {{agent}} input](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/plugins-inputs-elastic_agent.md) -* [{{ls}} Kafka input](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/plugins-inputs-kafka.md) -* [{{ls}} Kafka output](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/plugins-outputs-kafka.md) -* [{{ls}} Elasticsearch output](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/plugins-outputs-elasticsearch.md) +* [{{ls}} {{agent}} input](asciidocalypse://docs/logstash/docs/reference/plugins-inputs-elastic_agent.md) +* [{{ls}} Kafka input](asciidocalypse://docs/logstash/docs/reference/plugins-inputs-kafka.md) +* [{{ls}} Kafka output](asciidocalypse://docs/logstash/docs/reference/plugins-outputs-kafka.md) +* [{{ls}} Elasticsearch output](asciidocalypse://docs/logstash/docs/reference/plugins-outputs-elasticsearch.md) Info on {{es}}: diff --git a/manage-data/ingest/ingest-reference-architectures/agent-ls-airgapped.md b/manage-data/ingest/ingest-reference-architectures/agent-ls-airgapped.md index 5d5933a0a..e000c2d82 100644 --- a/manage-data/ingest/ingest-reference-architectures/agent-ls-airgapped.md +++ b/manage-data/ingest/ingest-reference-architectures/agent-ls-airgapped.md @@ -30,5 +30,5 @@ Info for air-gapped environments: ## Geoip database management in air-gapped environments [ls-geoip] -The [{{ls}} geoip filter](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/plugins-filters-geoip.md) requires regular database updates to remain up-to-date with the latest information. If you are using the {{ls}} geoip filter plugin in an air-gapped environment, you can manage updates through a proxy, a custom endpoint, or manually. Check out [Manage your own database updates](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/plugins-filters-geoip.md#plugins-filters-geoip-manage_update) for more info. 
+The [{{ls}} geoip filter](asciidocalypse://docs/logstash/docs/reference/plugins-filters-geoip.md) requires regular database updates to remain up-to-date with the latest information. If you are using the {{ls}} geoip filter plugin in an air-gapped environment, you can manage updates through a proxy, a custom endpoint, or manually. Check out [Manage your own database updates](asciidocalypse://docs/logstash/docs/reference/plugins-filters-geoip.md#plugins-filters-geoip-manage_update) for more info. diff --git a/manage-data/ingest/ingest-reference-architectures/ls-enrich.md b/manage-data/ingest/ingest-reference-architectures/ls-enrich.md index 6d2f9404f..7a99bcaa4 100644 --- a/manage-data/ingest/ingest-reference-architectures/ls-enrich.md +++ b/manage-data/ingest/ingest-reference-architectures/ls-enrich.md @@ -35,10 +35,10 @@ Info on configuring {{agent}}: For info on {{ls}} for enriching data, check out these sections in the [Logstash Reference](https://www.elastic.co/guide/en/logstash/current): -* [{{ls}} {{agent}} input](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/plugins-inputs-elastic_agent.md) -* [{{ls}} plugins for enriching data](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/lookup-enrichment.md) -* [Logstash filter plugins](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/filter-plugins.md) -* [{{ls}} {{es}} output](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/plugins-outputs-elasticsearch.md) +* [{{ls}} {{agent}} input](asciidocalypse://docs/logstash/docs/reference/plugins-inputs-elastic_agent.md) +* [{{ls}} plugins for enriching data](asciidocalypse://docs/logstash/docs/reference/lookup-enrichment.md) +* [Logstash filter plugins](asciidocalypse://docs/logstash/docs/reference/filter-plugins.md) +* [{{ls}} {{es}} output](asciidocalypse://docs/logstash/docs/reference/plugins-outputs-elasticsearch.md) Info on {{es}}: diff --git a/manage-data/ingest/ingest-reference-architectures/ls-for-input.md b/manage-data/ingest/ingest-reference-architectures/ls-for-input.md index 3b7c63007..1ce70c2da 100644 --- a/manage-data/ingest/ingest-reference-architectures/ls-for-input.md +++ b/manage-data/ingest/ingest-reference-architectures/ls-for-input.md @@ -29,8 +29,8 @@ Info on {{ls}} and {{ls}} input and output plugins: * [{{ls}} plugin support matrix](https://www.elastic.co/support/matrix#logstash_plugins) * [{{ls}} Reference](https://www.elastic.co/guide/en/logstash/current) -* [{{ls}} input plugins](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/input-plugins.md) -* [{{es}} output plugin](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/plugins-outputs-elasticsearch.md) +* [{{ls}} input plugins](asciidocalypse://docs/logstash/docs/reference/input-plugins.md) +* [{{es}} output plugin](asciidocalypse://docs/logstash/docs/reference/plugins-outputs-elasticsearch.md) Info on {{es}} and ingest pipelines: diff --git a/manage-data/ingest/ingest-reference-architectures/ls-multi.md b/manage-data/ingest/ingest-reference-architectures/ls-multi.md index a34657703..6507f7022 100644 --- a/manage-data/ingest/ingest-reference-architectures/ls-multi.md +++ b/manage-data/ingest/ingest-reference-architectures/ls-multi.md @@ -62,8 +62,8 @@ Info on configuring {{agent}}: Info on {{ls}} and {{ls}} outputs: * [{{ls}} Reference](https://www.elastic.co/guide/en/logstash/current) -* [{{ls}} {{es}} output 
plugin](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/plugins-outputs-elasticsearch.md) -* [{{ls}} output plugins](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/output-plugins.md) +* [{{ls}} {{es}} output plugin](asciidocalypse://docs/logstash/docs/reference/plugins-outputs-elasticsearch.md) +* [{{ls}} output plugins](asciidocalypse://docs/logstash/docs/reference/output-plugins.md) Info on {{es}}: diff --git a/manage-data/ingest/ingest-reference-architectures/ls-networkbridge.md b/manage-data/ingest/ingest-reference-architectures/ls-networkbridge.md index d44e40b93..0f7bc72ea 100644 --- a/manage-data/ingest/ingest-reference-architectures/ls-networkbridge.md +++ b/manage-data/ingest/ingest-reference-architectures/ls-networkbridge.md @@ -16,7 +16,7 @@ Use when : Agents have network restrictions for connecting to {{es}} on {{stack}} deployed outside of the agent network Example -: You can send data from multiple {{agent}}s through your demilitarized zone (DMZ) to {{ls}}, and then use {{ls}} as a proxy through your firewall to {{ess}}. This approach helps reduce the number of firewall exceptions needed to forward data from large numbers of {{agent}}s. +: You can send data from multiple {{agent}}s through your demilitarized zone (DMZ) to {{ls}}, and then use {{ls}} as a proxy through your firewall to {{ecloud}}. This approach helps reduce the number of firewall exceptions needed to forward data from large numbers of {{agent}}s. ## Resources [ls-networkbridge-resources] @@ -29,7 +29,7 @@ Info on configuring {{agent}}: Info on {{ls}} and {{ls}} plugins: * [{{ls}} Reference](https://www.elastic.co/guide/en/logstash/current) -* [{{es}} output plugin](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/plugins-outputs-elasticsearch.md) +* [{{es}} output plugin](asciidocalypse://docs/logstash/docs/reference/plugins-outputs-elasticsearch.md) Info on {{es}}: diff --git a/manage-data/ingest/ingest-reference-architectures/lspq.md b/manage-data/ingest/ingest-reference-architectures/lspq.md index 21b1cf037..c19a5992b 100644 --- a/manage-data/ingest/ingest-reference-architectures/lspq.md +++ b/manage-data/ingest/ingest-reference-architectures/lspq.md @@ -25,12 +25,12 @@ Info on configuring {{agent}}: For info on {{ls}} plugins: -* [{{agent}} input](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/plugins-inputs-elastic_agent.md) -* [{{es}} output plugin](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/plugins-outputs-elasticsearch.md) +* [{{agent}} input](asciidocalypse://docs/logstash/docs/reference/plugins-inputs-elastic_agent.md) +* [{{es}} output plugin](asciidocalypse://docs/logstash/docs/reference/plugins-outputs-elasticsearch.md) For info on using {{ls}} for buffering and data resiliency, check out this section in the [Logstash Reference](https://www.elastic.co/guide/en/logstash/current): -* [{{ls}} Persistent Queues (PQ)](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/persistent-queues.md) +* [{{ls}} Persistent Queues (PQ)](asciidocalypse://docs/logstash/docs/reference/persistent-queues.md) Info on {{es}}: diff --git a/manage-data/ingest/ingesting-data-from-applications/ingest-data-from-beats-to-elasticsearch-service-with-logstash-as-proxy.md b/manage-data/ingest/ingesting-data-from-applications/ingest-data-from-beats-to-elasticsearch-service-with-logstash-as-proxy.md index 378fe0dfa..f533cf708 100644 --- 
a/manage-data/ingest/ingesting-data-from-applications/ingest-data-from-beats-to-elasticsearch-service-with-logstash-as-proxy.md +++ b/manage-data/ingest/ingesting-data-from-applications/ingest-data-from-beats-to-elasticsearch-service-with-logstash-as-proxy.md @@ -122,7 +122,7 @@ If you have multiple servers with metrics data, repeat the following steps to co **About Metricbeat modules** -Metricbeat has [many modules](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-metricbeat/metricbeat-modules.md) available that collect common metrics. You can [configure additional modules](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-metricbeat/configuration-metricbeat.md) as needed. For this example we’re using Metricbeat’s default configuration, which has the [System module](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-metricbeat/metricbeat-module-system.md) enabled. The System module allows you to monitor servers with the default set of metrics: *cpu*, *load*, *memory*, *network*, *process*, *process_summary*, *socket_summary*, *filesystem*, *fsstat*, and *uptime*. +Metricbeat has [many modules](asciidocalypse://docs/beats/docs/reference/metricbeat/metricbeat-modules.md) available that collect common metrics. You can [configure additional modules](asciidocalypse://docs/beats/docs/reference/metricbeat/configuration-metricbeat.md) as needed. For this example we’re using Metricbeat’s default configuration, which has the [System module](asciidocalypse://docs/beats/docs/reference/metricbeat/metricbeat-module-system.md) enabled. The System module allows you to monitor servers with the default set of metrics: *cpu*, *load*, *memory*, *network*, *process*, *process_summary*, *socket_summary*, *filesystem*, *fsstat*, and *uptime*. **Load the Metricbeat Kibana dashboards** @@ -144,7 +144,7 @@ sudo ./metricbeat setup \ 1. Specify the Cloud ID of your {{ech}} or {{ece}} deployment. You can include or omit the `:` prefix at the beginning of the Cloud ID. Both versions work fine. Find your Cloud ID by going to the {{kib}} main menu and selecting Management > Integrations, and then selecting View deployment details. 2. Specify the username and password provided to you when creating the deployment. Make sure to keep the colon between ** and **.::::{important} -Depending on variables including the installation location, environment and local permissions, you might need to [change the ownership](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-libbeat/config-file-permissions.md) of the metricbeat.yml. +Depending on variables including the installation location, environment and local permissions, you might need to [change the ownership](asciidocalypse://docs/beats/docs/reference/libbeat/config-file-permissions.md) of the metricbeat.yml. You might encounter similar permissions hurdles as you work through multiple sections of this document. These permission requirements are there for a good reason, a security safeguard to prevent unauthorized access and modification of key Elastic files. @@ -193,7 +193,7 @@ The next step is to configure Filebeat to send operational data to Logstash. As **Enable the Filebeat system module** -Filebeat has [many modules](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-filebeat/filebeat-modules.md) available that collect common log types. You can [configure additional modules](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-filebeat/configuration-filebeat-modules.md) as needed. 
For this example we’re using Filebeat’s [System module](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-filebeat/filebeat-module-system.md). This module reads in the various system log files (with information including login successes or failures, sudo command usage, and other key usage details) based on the detected operating system. For this example, a Linux-based OS is used and Filebeat ingests logs from the */var/log/* folder. It’s important to verify that Filebeat is given permission to access your logs folder through standard file and folder permissions. +Filebeat has [many modules](asciidocalypse://docs/beats/docs/reference/filebeat/filebeat-modules.md) available that collect common log types. You can [configure additional modules](asciidocalypse://docs/beats/docs/reference/filebeat/configuration-filebeat-modules.md) as needed. For this example we’re using Filebeat’s [System module](asciidocalypse://docs/beats/docs/reference/filebeat/filebeat-module-system.md). This module reads in the various system log files (with information including login successes or failures, sudo command usage, and other key usage details) based on the detected operating system. For this example, a Linux-based OS is used and Filebeat ingests logs from the */var/log/* folder. It’s important to verify that Filebeat is given permission to access your logs folder through standard file and folder permissions. 1. Go to */filebeat-/modules.d/* where ** is the directory where Filebeat is installed. 2. Filebeat requires at least one fileset to be enabled. In file */filebeat-/modules.d/system.yml.disabled*, under both `syslog` and `auth` set `enabled` to `true`: @@ -232,7 +232,7 @@ sudo ./filebeat setup \ 1. Specify the Cloud ID of your {{ech}} or {{ece}} deployment. You can include or omit the `:` prefix at the beginning of the Cloud ID. Both versions work fine. Find your Cloud ID by going to the {{kib}} main menu and selecting Management > Integrations, and then selecting View deployment details. 2. Specify the username and password provided to you when creating the deployment. Make sure to keep the colon between ** and **.::::{important} -Depending on variables including the installation location, environment, and local permissions, you might need to [change the ownership](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-libbeat/config-file-permissions.md) of the filebeat.yml. +Depending on variables including the installation location, environment, and local permissions, you might need to [change the ownership](asciidocalypse://docs/beats/docs/reference/libbeat/config-file-permissions.md) of the filebeat.yml. :::: @@ -295,7 +295,7 @@ Now the Filebeat and Metricbeat are set up, let’s configure a {{ls}} pipeline 1. {{ls}} listens for Beats input on the default port of 5044. Only one line is needed to do this. {{ls}} can handle input from many Beats of the same and also of varying types (Metricbeat, Filebeat, and others). 2. This sends output to the standard output, which displays through your command line interface. This plugin enables you to verify the data before you send it to {{es}}, in a later step. -3. Save the new *beats.conf* file in your Logstash folder. To learn more about the file format and options, check [{{ls}} Configuration Examples](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/config-examples.md). +3. Save the new *beats.conf* file in your Logstash folder. 
To learn more about the file format and options, check [{{ls}} Configuration Examples](asciidocalypse://docs/logstash/docs/reference/config-examples.md). ## Output {{ls}} data to stdout [ec-beats-logstash-stdout] @@ -437,7 +437,7 @@ In this section, you configure {{ls}} to send the Metricbeat and Filebeat data t ``` 1. Use the Cloud ID of your {{ech}} or {{ece}} deployment. You can include or omit the `:` prefix at the beginning of the Cloud ID. Both versions work fine. Find your Cloud ID by going to the {{kib}} main menu and selecting Management > Integrations, and then selecting View deployment details. - 2. the default usename is `elastic`. It is not recommended to use the `elastic` account for ingesting data as this is a superuser. We recommend using a user with reduced permissions, or an API Key with permissions specific to the indices or data streams that will be written to. Check the [Grant access to secured resources](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-filebeat/feature-roles.md) for information on the writer role and API Keys. Use the password provided when you created the deployment if using the `elastic` user, or the password used when creating a new ingest user with the roles specified in the [Grant access to secured resources](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-filebeat/feature-roles.md) documentation. + 2. the default username is `elastic`. It is not recommended to use the `elastic` account for ingesting data as this is a superuser. We recommend using a user with reduced permissions, or an API Key with permissions specific to the indices or data streams that will be written to. Check the [Grant access to secured resources](asciidocalypse://docs/beats/docs/reference/filebeat/feature-roles.md) for information on the writer role and API Keys. Use the password provided when you created the deployment if using the `elastic` user, or the password used when creating a new ingest user with the roles specified in the [Grant access to secured resources](asciidocalypse://docs/beats/docs/reference/filebeat/feature-roles.md) documentation. Following are some additional details about the configuration file settings: @@ -529,9 +529,9 @@ In this section, you configure {{ls}} to send the Metricbeat and Filebeat data t ::::{note} In this guide, you manually launch each of the Elastic stack applications through the command line interface. In production, you may prefer to configure {{ls}}, Metricbeat, and Filebeat to run as System Services.
Check the following pages for the steps to configure each application to run as a service: -* [Running {{ls}} as a service on Debian or RPM](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/running-logstash.md) -* [Metricbeat and systemd](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-metricbeat/running-with-systemd.md) -* [Start filebeat](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-filebeat/filebeat-starting.md) +* [Running {{ls}} as a service on Debian or RPM](asciidocalypse://docs/logstash/docs/reference/running-logstash.md) +* [Metricbeat and systemd](asciidocalypse://docs/beats/docs/reference/metricbeat/running-with-systemd.md) +* [Start filebeat](asciidocalypse://docs/beats/docs/reference/filebeat/filebeat-starting.md) :::: diff --git a/manage-data/ingest/ingesting-data-from-applications/ingest-data-from-relational-database-into-elasticsearch-service.md b/manage-data/ingest/ingesting-data-from-applications/ingest-data-from-relational-database-into-elasticsearch-service.md index fd78194bf..081af2985 100644 --- a/manage-data/ingest/ingesting-data-from-applications/ingest-data-from-relational-database-into-elasticsearch-service.md +++ b/manage-data/ingest/ingesting-data-from-applications/ingest-data-from-relational-database-into-elasticsearch-service.md @@ -40,7 +40,7 @@ $$$ece-db-logstash-pipeline$$$ $$$ece-db-logstash-prerequisites$$$ -This guide explains how to ingest data from a relational database into {{ess}} through [{{ls}}](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/index.md), using the Logstash [JDBC input plugin](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/plugins-inputs-jdbc.md). It demonstrates how Logstash can be used to efficiently copy records and to receive updates from a relational database, and then send them into {{es}} in an {{ech}} or {{ece}} deployment. +This guide explains how to ingest data from a relational database into {{ecloud}} through [{{ls}}](asciidocalypse://docs/logstash/docs/reference/index.md), using the Logstash [JDBC input plugin](asciidocalypse://docs/logstash/docs/reference/plugins-inputs-jdbc.md). It demonstrates how Logstash can be used to efficiently copy records and to receive updates from a relational database, and then send them into {{es}} in an {{ech}} or {{ece}} deployment. The code and methods presented here have been tested with MySQL. They should work with other relational databases. @@ -98,7 +98,7 @@ The Logstash JDBC input plugin does not include any database connection drivers. ## Prepare a source MySQL database [ec-db-logstash-database] -Let’s look at a simple database from which you’ll import data and send it to a {{ech}} or {{ece}} deployment. This example uses a MySQL database with timestamped records. The timestamps enable you to determine easily what’s changed in the database since the most recent data transfer. +Let’s look at a simple database from which you’ll import data and send it to an {{ech}} or {{ece}} deployment. This example uses a MySQL database with timestamped records. The timestamps enable you to determine easily what’s changed in the database since the most recent data transfer. ### Consider the database structure and design [ec-db-logstash-database-structure] @@ -234,13 +234,13 @@ Let’s set up a sample Logstash input pipeline to ingest data from your new JDB : The Logstash JDBC plugin does not come packaged with JDBC driver libraries. 
The JDBC driver library must be passed explicitly into the plugin using the `jdbc_driver_library` configuration option. tracking_column - : This parameter specifies the field `unix_ts_in_secs` that tracks the last document read by Logstash from MySQL, stored on disk in [logstash_jdbc_last_run](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/plugins-inputs-jdbc.md#plugins-inputs-jdbc-last_run_metadata_path). The parameter determines the starting value for documents that Logstash requests in the next iteration of its polling loop. The value stored in `logstash_jdbc_last_run` can be accessed in a SELECT statement as `sql_last_value`. + : This parameter specifies the field `unix_ts_in_secs` that tracks the last document read by Logstash from MySQL, stored on disk in [logstash_jdbc_last_run](asciidocalypse://docs/logstash/docs/reference/plugins-inputs-jdbc.md#plugins-inputs-jdbc-last_run_metadata_path). The parameter determines the starting value for documents that Logstash requests in the next iteration of its polling loop. The value stored in `logstash_jdbc_last_run` can be accessed in a SELECT statement as `sql_last_value`. unix_ts_in_secs : The field generated by the SELECT statement, which contains the `modification_time` as a standard [Unix timestamp](https://en.wikipedia.org/wiki/Unix_time) (seconds since the epoch). The field is referenced by the `tracking column`. A Unix timestamp is used for tracking progress rather than a normal timestamp, as a normal timestamp may cause errors due to the complexity of correctly converting back and forth between UMT and the local timezone. sql_last_value - : This is a [built-in parameter](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/plugins-inputs-jdbc.md#_predefined_parameters) containing the starting point of the current iteration of the Logstash polling loop, and it is referenced in the SELECT statement line of the JDBC input configuration. This parameter is set to the most recent value of `unix_ts_in_secs`, which is read from `.logstash_jdbc_last_run`. This value is the starting point for documents returned by the MySQL query that is executed in the Logstash polling loop. Including this variable in the query guarantees that we’re not resending data that is already stored in Elasticsearch. + : This is a [built-in parameter](asciidocalypse://docs/logstash/docs/reference/plugins-inputs-jdbc.md#_predefined_parameters) containing the starting point of the current iteration of the Logstash polling loop, and it is referenced in the SELECT statement line of the JDBC input configuration. This parameter is set to the most recent value of `unix_ts_in_secs`, which is read from `.logstash_jdbc_last_run`. This value is the starting point for documents returned by the MySQL query that is executed in the Logstash polling loop. Including this variable in the query guarantees that we’re not resending data that is already stored in Elasticsearch. schedule : This uses cron syntax to specify how often Logstash should poll MySQL for changes. The specification `*/5 * * * * *` tells Logstash to contact MySQL every 5 seconds. Input from this plugin can be scheduled to run periodically according to a specific schedule. This scheduling syntax is powered by [rufus-scheduler](https://github.com/jmettraux/rufus-scheduler). The syntax is cron-like with some extensions specific to Rufus (for example, timezone support). @@ -330,7 +330,7 @@ In this section, we configure Logstash to send the MySQL data to Elasticsearch. ``` 1. 
Use the Cloud ID of your {{ech}} or {{ece}} deployment. You can include or omit the `:` prefix at the beginning of the Cloud ID. Both versions work fine. Find your Cloud ID by going to the {{kib}} main menu and selecting Management > Integrations, and then selecting View deployment details. - 2. the default username is `elastic`. It is not recommended to use the `elastic` account for ingesting data as this is a superuser. We recommend using a user with reduced permissions, or an API Key with permissions specific to the indices or data streams that will be written to. Check [Configuring security in Logstash](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/secure-connection.md) for information on roles and API Keys. Use the password provided when you created the deployment if using the `elastic` user, or the password used when creating a new ingest user with the roles specified in the [Configuring security in Logstash](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/secure-connection.md) documentation. + 2. the default username is `elastic`. It is not recommended to use the `elastic` account for ingesting data as this is a superuser. We recommend using a user with reduced permissions, or an API Key with permissions specific to the indices or data streams that will be written to. Check [Configuring security in Logstash](asciidocalypse://docs/logstash/docs/reference/secure-connection.md) for information on roles and API Keys. Use the password provided when you created the deployment if using the `elastic` user, or the password used when creating a new ingest user with the roles specified in the [Configuring security in Logstash](asciidocalypse://docs/logstash/docs/reference/secure-connection.md) documentation. Following are some additional details about the configuration file settings: diff --git a/manage-data/ingest/ingesting-data-from-applications/ingest-data-with-nodejs-on-elasticsearch-service.md b/manage-data/ingest/ingesting-data-from-applications/ingest-data-with-nodejs-on-elasticsearch-service.md index ae0b05b82..967823fa4 100644 --- a/manage-data/ingest/ingesting-data-from-applications/ingest-data-with-nodejs-on-elasticsearch-service.md +++ b/manage-data/ingest/ingesting-data-from-applications/ingest-data-with-nodejs-on-elasticsearch-service.md @@ -168,7 +168,7 @@ async function run() { run().catch(console.log) ``` -When using the [client.index](asciidocalypse://docs/elasticsearch-js/docs/reference/elasticsearch/elasticsearch-client-javascript-api/api-reference.md#_index) API, the request automatically creates the `game-of-thrones` index if it doesn’t already exist, as well as document IDs for each indexed document if they are not explicitly specified. +When using the [client.index](asciidocalypse://docs/elasticsearch-js/docs/reference/api-reference.md#_index) API, the request automatically creates the `game-of-thrones` index if it doesn’t already exist, as well as document IDs for each indexed document if they are not explicitly specified. ## Search and modify data [ec_search_and_modify_data] @@ -215,7 +215,7 @@ async function update() { update().catch(console.log) ``` -This [more comprehensive list of API examples](asciidocalypse://docs/elasticsearch-js/docs/reference/elasticsearch/elasticsearch-client-javascript-api/examples.md) includes bulk operations, checking the existence of documents, updating by query, deleting, scrolling, and SQL queries. 
To learn more, check the complete [API reference](asciidocalypse://docs/elasticsearch-js/docs/reference/elasticsearch/elasticsearch-client-javascript-api/api-reference.md). +This [more comprehensive list of API examples](asciidocalypse://docs/elasticsearch-js/docs/reference/examples.md) includes bulk operations, checking the existence of documents, updating by query, deleting, scrolling, and SQL queries. To learn more, check the complete [API reference](asciidocalypse://docs/elasticsearch-js/docs/reference/api-reference.md). ## Switch to API key authentication [ec_switch_to_api_key_authentication] @@ -302,11 +302,11 @@ Security Connections ({{ech}} only) -: If your application connecting to {{ech}} runs under the Java security manager, you should at least disable the caching of positive hostname resolutions. To learn more, check the [Java API Client documentation](asciidocalypse://docs/elasticsearch-java/docs/reference/elasticsearch/elasticsearch-client-java-api-client/_others.md). +: If your application connecting to {{ech}} runs under the Java security manager, you should at least disable the caching of positive hostname resolutions. To learn more, check the [Java API Client documentation](asciidocalypse://docs/elasticsearch-java/docs/reference/_others.md). Schema : When the example code was run an index mapping was created automatically. The field types were selected by {{es}} based on the content seen when the first record was ingested, and updated as new fields appeared in the data. It would be more efficient to specify the fields and field types in advance to optimize performance. Refer to the Elastic Common Schema documentation and Field Type documentation when you are designing the schema for your production use cases. Ingest -: For more advanced scenarios, this [bulk ingestion](asciidocalypse://docs/elasticsearch-js/docs/reference/elasticsearch/elasticsearch-client-javascript-api/bulk_examples.md) reference gives an example of the `bulk` API that makes it possible to perform multiple operations in a single call. This bulk example also explicitly specifies document IDs. If you have a lot of documents to index, using bulk to batch document operations is significantly faster than submitting requests individually. +: For more advanced scenarios, this [bulk ingestion](asciidocalypse://docs/elasticsearch-js/docs/reference/bulk_examples.md) reference gives an example of the `bulk` API that makes it possible to perform multiple operations in a single call. This bulk example also explicitly specifies document IDs. If you have a lot of documents to index, using bulk to batch document operations is significantly faster than submitting requests individually. diff --git a/manage-data/ingest/ingesting-data-from-applications/ingest-data-with-python-on-elasticsearch-service.md b/manage-data/ingest/ingesting-data-from-applications/ingest-data-with-python-on-elasticsearch-service.md index 71109ae44..89996e033 100644 --- a/manage-data/ingest/ingesting-data-from-applications/ingest-data-with-python-on-elasticsearch-service.md +++ b/manage-data/ingest/ingesting-data-from-applications/ingest-data-with-python-on-elasticsearch-service.md @@ -293,7 +293,7 @@ es.get(index='lord-of-the-rings', id='2EkAzngB_pyHD3p65UMt') 'birthplace': 'The Shire'}} ``` -For frequently used API calls with the Python client, check [Examples](asciidocalypse://docs/elasticsearch-py/docs/reference/elasticsearch/elasticsearch-client-python-api/examples.md). 
+For frequently used API calls with the Python client, check [Examples](asciidocalypse://docs/elasticsearch-py/docs/reference/examples.md). ## Switch to API key authentication [ec_switch_to_api_key_authentication_2] @@ -353,7 +353,7 @@ es = Elasticsearch( Check [Create API key API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-create-api-key) to learn more about API Keys and [Security privileges](../../../deploy-manage/users-roles/cluster-or-deployment-auth/elasticsearch-privileges.md) to understand which privileges are needed. If you are not sure what the right combination of privileges for your custom application is, you can enable [audit logging](../../../deploy-manage/monitor/logging-configuration/enabling-audit-logs.md) on {{es}} to find out what privileges are being used. To learn more about how logging works on {{ech}} or {{ece}}, check [Monitoring Elastic Cloud deployment logs and metrics](https://www.elastic.co/blog/monitoring-elastic-cloud-deployment-logs-and-metrics). -For more information on refreshing an index, searching, updating, and deleting, check the [elasticsearch-py examples](asciidocalypse://docs/elasticsearch-py/docs/reference/elasticsearch/elasticsearch-client-python-api/examples.md). +For more information on refreshing an index, searching, updating, and deleting, check the [elasticsearch-py examples](asciidocalypse://docs/elasticsearch-py/docs/reference/examples.md). ### Best practices [ec_best_practices_2] @@ -368,5 +368,5 @@ Schema : When the example code is run, an index mapping is created automatically. The field types are selected by {{es}} based on the content seen when the first record was ingested, and updated as new fields appeared in the data. It would be more efficient to specify the fields and field types in advance to optimize performance. Refer to the Elastic Common Schema documentation and Field Type documentation when you design the schema for your production use cases. Ingest -: For more advanced scenarios, [Bulk helpers](asciidocalypse://docs/elasticsearch-py/docs/reference/elasticsearch/elasticsearch-client-python-api/client-helpers.md#bulk-helpers) gives examples for the `bulk` API that makes it possible to perform multiple operations in a single call. If you have a lot of documents to index, using bulk to batch document operations is significantly faster than submitting requests individually. +: For more advanced scenarios, [Bulk helpers](asciidocalypse://docs/elasticsearch-py/docs/reference/client-helpers.md#bulk-helpers) gives examples for the `bulk` API that makes it possible to perform multiple operations in a single call. If you have a lot of documents to index, using bulk to batch document operations is significantly faster than submitting requests individually. diff --git a/manage-data/ingest/ingesting-data-from-applications/ingest-logs-from-nodejs-web-application-using-filebeat.md b/manage-data/ingest/ingesting-data-from-applications/ingest-logs-from-nodejs-web-application-using-filebeat.md index 1b17ed3c7..93eba8809 100644 --- a/manage-data/ingest/ingesting-data-from-applications/ingest-logs-from-nodejs-web-application-using-filebeat.md +++ b/manage-data/ingest/ingesting-data-from-applications/ingest-logs-from-nodejs-web-application-using-filebeat.md @@ -49,7 +49,7 @@ $$$ece-node-logs-send-ess$$$ $$$ece-node-logs-view-kibana$$$ -This guide demonstrates how to ingest logs from a Node.js web application and deliver them securely into an {{ech}} or {{ece}} deployment. 
You’ll set up Filebeat to monitor a JSON-structured log file that has standard Elastic Common Schema (ECS) formatted fields, and you’ll then view real-time visualizations of the log events in Kibana as requests are made to the Node.js server. While Node.js is used for this example, this approach to monitoring log output is applicable across many client types. Check the list of [available ECS logging plugins](asciidocalypse://docs/ecs-logging/docs/reference/ecs/ecs-logging-overview/intro.md#_get_started). +This guide demonstrates how to ingest logs from a Node.js web application and deliver them securely into an {{ech}} or {{ece}} deployment. You’ll set up Filebeat to monitor a JSON-structured log file that has standard Elastic Common Schema (ECS) formatted fields, and you’ll then view real-time visualizations of the log events in Kibana as requests are made to the Node.js server. While Node.js is used for this example, this approach to monitoring log output is applicable across many client types. Check the list of [available ECS logging plugins](asciidocalypse://docs/ecs-logging/docs/reference/intro.md#_get_started). *Time required: 1.5 hours* @@ -71,7 +71,7 @@ For the three following packages, you can create a working directory to install npm install winston ``` -* The [Elastic Common Schema (ECS) formatter](asciidocalypse://docs/ecs-logging-nodejs/docs/reference/ecs/ecs-logging-nodejs/winston.md) for the Node.js winston logger - This plugin formats your Node.js logs into an ECS structured JSON format ideally suited for ingestion into Elasticsearch. To install the ECS winston logger, run the following command in your working directory so that the package is installed in the same location as the winston package: +* The [Elastic Common Schema (ECS) formatter](asciidocalypse://docs/ecs-logging-nodejs/docs/reference/winston.md) for the Node.js winston logger - This plugin formats your Node.js logs into an ECS structured JSON format ideally suited for ingestion into Elasticsearch. To install the ECS winston logger, run the following command in your working directory so that the package is installed in the same location as the winston package: ```sh npm install @elastic/ecs-winston-format @@ -347,7 +347,7 @@ For this example, Filebeat uses the following four decoding options. json.expand_keys: true ``` -To learn more about these settings, check [JSON input configuration options](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-filebeat/filebeat-input-log.md#filebeat-input-log-config-json) and [Decode JSON fields](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-filebeat/decode-json-fields.md) in the Filebeat Reference. +To learn more about these settings, check [JSON input configuration options](asciidocalypse://docs/beats/docs/reference/filebeat/filebeat-input-log.md#filebeat-input-log-config-json) and [Decode JSON fields](asciidocalypse://docs/beats/docs/reference/filebeat/decode-json-fields.md) in the Filebeat Reference. Append the four JSON decoding options to the *Filebeat inputs* section of *filebeat.yml*, so that the section now looks like this: @@ -383,7 +383,7 @@ Filebeat comes with predefined assets for parsing, indexing, and visualizing you ``` ::::{important} -Depending on variables including the installation location, environment, and local permissions, you might need to [change the ownership](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-libbeat/config-file-permissions.md) of filebeat.yml. 
You can also try running the command as *root*: *sudo ./filebeat setup -e* or you can disable strict permission checks by running the command with the `--strict.perms=false` option. +Depending on variables including the installation location, environment, and local permissions, you might need to [change the ownership](asciidocalypse://docs/beats/docs/reference/libbeat/config-file-permissions.md) of filebeat.yml. You can also try running the command as *root*: *sudo ./filebeat setup -e* or you can disable strict permission checks by running the command with the `--strict.perms=false` option. :::: @@ -484,7 +484,7 @@ In this command: * The *-c* flag specifies the path to the Filebeat config file. ::::{note} -Just in case the command doesn’t work as expected, check the [Filebeat quick start](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-filebeat/filebeat-installation-configuration.md#start) for the detailed command syntax for your operating system. You can also try running the command as *root*: *sudo ./filebeat -e -c filebeat.yml*. +Just in case the command doesn’t work as expected, check the [Filebeat quick start](asciidocalypse://docs/beats/docs/reference/filebeat/filebeat-installation-configuration.md#start) for the detailed command syntax for your operating system. You can also try running the command as *root*: *sudo ./filebeat -e -c filebeat.yml*. :::: @@ -567,5 +567,5 @@ You can add titles to the visualizations, resize and position them as you like, 2. As your final step, remember to stop Filebeat, the Node.js web server, and the client. Enter *CTRL + C* in the terminal window for each application to stop them. -You now know how to monitor log files from a Node.js web application, deliver the log event data securely into an {{ech}} or {{ece}} deployment, and then visualize the results in Kibana in real time. Consult the [Filebeat documentation](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-filebeat/filebeat-overview.md) to learn more about the ingestion and processing options available for your data. You can also explore our [documentation](../../../manage-data/ingest.md) to learn all about ingesting data. +You now know how to monitor log files from a Node.js web application, deliver the log event data securely into an {{ech}} or {{ece}} deployment, and then visualize the results in Kibana in real time. Consult the [Filebeat documentation](asciidocalypse://docs/beats/docs/reference/filebeat/filebeat-overview.md) to learn more about the ingestion and processing options available for your data. You can also explore our [documentation](../../../manage-data/ingest.md) to learn all about ingesting data. diff --git a/manage-data/ingest/ingesting-data-from-applications/ingest-logs-from-python-application-using-filebeat.md b/manage-data/ingest/ingesting-data-from-applications/ingest-logs-from-python-application-using-filebeat.md index bfd7a25f5..8f622c092 100644 --- a/manage-data/ingest/ingesting-data-from-applications/ingest-logs-from-python-application-using-filebeat.md +++ b/manage-data/ingest/ingesting-data-from-applications/ingest-logs-from-python-application-using-filebeat.md @@ -33,13 +33,13 @@ $$$ece-python-logs-send-ess$$$ $$$ece-python-logs-view-kibana$$$ -This guide demonstrates how to ingest logs from a Python application and deliver them securely into an Elasticsearch Service deployment. 
You’ll set up Filebeat to monitor a JSON-structured log file that has standard Elastic Common Schema (ECS) formatted fields, and you’ll then view real-time visualizations of the log events in {{kib}} as they occur. While Python is used for this example, this approach to monitoring log output is applicable across many client types. Check the list of [available ECS logging plugins](asciidocalypse://docs/ecs-logging/docs/reference/ecs/ecs-logging-overview/intro.md). +This guide demonstrates how to ingest logs from a Python application and deliver them securely into an {{ech}} deployment. You’ll set up Filebeat to monitor a JSON-structured log file that has standard Elastic Common Schema (ECS) formatted fields, and you’ll then view real-time visualizations of the log events in {{kib}} as they occur. While Python is used for this example, this approach to monitoring log output is applicable across many client types. Check the list of [available ECS logging plugins](asciidocalypse://docs/ecs-logging/docs/reference/intro.md). *Time required: 1 hour* ## Prerequisites [ec_prerequisites_2] -To complete these steps you need to have [Python](https://www.python.org/) installed on your system as well as the [Elastic Common Schema (ECS) logger](asciidocalypse://docs/ecs-logging-python/docs/reference/ecs/ecs-logging-python/installation.md) for the Python logging library. +To complete these steps you need to have [Python](https://www.python.org/) installed on your system as well as the [Elastic Common Schema (ECS) logger](asciidocalypse://docs/ecs-logging-python/docs/reference/installation.md) for the Python logging library. To install *ecs-logging-python*, run: @@ -140,7 +140,7 @@ In this step, you’ll create a Python script that generates logs in JSON format Having your logs written in a JSON format with ECS fields allows for easy parsing and analysis, and for standardization with other applications. A standard, easily parsible format becomes increasingly important as the volume and type of data captured in your logs expands over time. - Together with the standard fields included for each log entry is an extra *http.request.body.content* field. This extra field is there just to give you some additional, interesting data to work with, and also to demonstrate how you can add optional fields to your log data. Check the [ECS Field Reference](asciidocalypse://docs/ecs/docs/reference/ecs/ecs-field-reference.md) for the full list of available fields. + Together with the standard fields included for each log entry is an extra *http.request.body.content* field. This extra field is there just to give you some additional, interesting data to work with, and also to demonstrate how you can add optional fields to your log data. Check the [ECS Field Reference](asciidocalypse://docs/ecs/docs/reference/ecs-field-reference.md) for the full list of available fields. 2. Let’s give the Python script a test run. Open a terminal instance in the location where you saved *elvis.py* and run the following: @@ -226,7 +226,7 @@ For this example, Filebeat uses the following four decoding options. json.expand_keys: true ``` -To learn more about these settings, check [JSON input configuration options](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-filebeat/filebeat-input-log.md#filebeat-input-log-config-json) and [Decode JSON fields](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-filebeat/decode-json-fields.md) in the Filebeat Reference. 
+To learn more about these settings, check [JSON input configuration options](asciidocalypse://docs/beats/docs/reference/filebeat/filebeat-input-log.md#filebeat-input-log-config-json) and [Decode JSON fields](asciidocalypse://docs/beats/docs/reference/filebeat/decode-json-fields.md) in the Filebeat Reference.

 Append the four JSON decoding options to the *Filebeat inputs* section of *filebeat.yml*, so that the section now looks like this:

@@ -262,7 +262,7 @@ Filebeat comes with predefined assets for parsing, indexing, and visualizing you
 ```

 ::::{important}
-Depending on variables including the installation location, environment, and local permissions, you might need to [change the ownership](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-libbeat/config-file-permissions.md) of filebeat.yml. You can also try running the command as *root*: *sudo ./filebeat setup -e* or you can disable strict permission checks by running the command with the `--strict.perms=false` option.
+Depending on variables including the installation location, environment, and local permissions, you might need to [change the ownership](asciidocalypse://docs/beats/docs/reference/libbeat/config-file-permissions.md) of filebeat.yml. You can also try running the command as *root*: *sudo ./filebeat setup -e* or you can disable strict permission checks by running the command with the `--strict.perms=false` option.
 ::::


@@ -368,7 +368,7 @@ In this command:

 * The *-c* flag specifies the path to the Filebeat config file.

 ::::{note}
-Just in case the command doesn’t work as expected, check the [Filebeat quick start](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-filebeat/filebeat-installation-configuration.md#start) for the detailed command syntax for your operating system. You can also try running the command as *root*: *sudo ./filebeat -e -c filebeat.yml*.
+Just in case the command doesn’t work as expected, check the [Filebeat quick start](asciidocalypse://docs/beats/docs/reference/filebeat/filebeat-installation-configuration.md#start) for the detailed command syntax for your operating system. You can also try running the command as *root*: *sudo ./filebeat -e -c filebeat.yml*.
 ::::


@@ -446,5 +446,5 @@ You can add titles to the visualizations, resize and position them as you like,
 2. As your final step, remember to stop Filebeat and the Python script. Enter *CTRL + C* in both your Filebeat terminal and in your `elvis.py` terminal.

-You now know how to monitor log files from a Python application, deliver the log event data securely into an {{ech}} or {{ece}} deployment, and then visualize the results in Kibana in real time. Consult the [Filebeat documentation](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-filebeat/filebeat-overview.md) to learn more about the ingestion and processing options available for your data. You can also explore our [documentation](../../../manage-data/ingest.md) to learn all about all about ingesting data.
+You now know how to monitor log files from a Python application, deliver the log event data securely into an {{ech}} or {{ece}} deployment, and then visualize the results in Kibana in real time. Consult the [Filebeat documentation](asciidocalypse://docs/beats/docs/reference/filebeat/filebeat-overview.md) to learn more about the ingestion and processing options available for your data. You can also explore our [documentation](../../../manage-data/ingest.md) to learn all about ingesting data.
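For readers who want to see the ECS logging step from the Python guide above in one place, the following is a minimal sketch rather than the guide's actual script. It assumes the `ecs-logging-python` package named in the prerequisites; the logger name and the `elvis.json` output path are illustrative placeholders, and the extra `http.request.body.content` field mirrors the optional field discussed in the guide.

```python
import logging

import ecs_logging  # provided by the ecs-logging-python package

# Write ECS-formatted JSON log lines to a file that Filebeat can monitor.
logger = logging.getLogger("app")  # placeholder logger name
logger.setLevel(logging.INFO)

handler = logging.FileHandler("elvis.json")  # assumed path; point your filebeat.yml input at the same file
handler.setFormatter(ecs_logging.StdlibFormatter())
logger.addHandler(handler)

# The extra field mirrors the optional http.request.body.content field described above.
logger.info("request received", extra={"http.request.body.content": "example payload"})
```

Each call writes one JSON object per line, which is the shape the JSON decoding options in *filebeat.yml* expect Filebeat to find.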
diff --git a/manage-data/ingest/ingesting-timeseries-data.md b/manage-data/ingest/ingesting-timeseries-data.md index d25a36e38..81b6ace3c 100644 --- a/manage-data/ingest/ingesting-timeseries-data.md +++ b/manage-data/ingest/ingesting-timeseries-data.md @@ -31,7 +31,7 @@ Ready to try [{{agent}}](https://www.elastic.co/guide/en/fleet/current)? Check o ## {{beats}} [ingest-beats] -[Beats](asciidocalypse://docs/beats/docs/reference/ingestion-tools/index.md) are the original Elastic lightweight data shippers, and their capabilities live on in Elastic Agent. When you use Elastic Agent, you’re getting core Beats functionality, but with more added features. +[Beats](asciidocalypse://docs/beats/docs/reference/index.md) are the original Elastic lightweight data shippers, and their capabilities live on in Elastic Agent. When you use Elastic Agent, you’re getting core Beats functionality, but with more added features. Beats require that you install a separate Beat for each type of data you want to collect. A single Elastic Agent installed on a host can collect and transport multiple types of data. @@ -47,10 +47,10 @@ In addition to supporting upstream OTel development, Elastic provides [Elastic D ## Logstash [ingest-logstash] -[{{ls}}](https://www.elastic.co/guide/en/logstash/current) is a versatile open source data ETL (extract, transform, load) engine that can expand your ingest capabilities. {{ls}} can *collect data* from a wide variety of data sources with {{ls}} [input plugins](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/input-plugins.md), *enrich and transform* the data with {{ls}} [filter plugins](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/filter-plugins.md), and *output* the data to {{es}} and other destinations with the {{ls}} [output plugins](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/output-plugins.md). +[{{ls}}](https://www.elastic.co/guide/en/logstash/current) is a versatile open source data ETL (extract, transform, load) engine that can expand your ingest capabilities. {{ls}} can *collect data* from a wide variety of data sources with {{ls}} [input plugins](asciidocalypse://docs/logstash/docs/reference/input-plugins.md), *enrich and transform* the data with {{ls}} [filter plugins](asciidocalypse://docs/logstash/docs/reference/filter-plugins.md), and *output* the data to {{es}} and other destinations with the {{ls}} [output plugins](asciidocalypse://docs/logstash/docs/reference/output-plugins.md). Many users never need to use {{ls}}, but it’s available if you need it for: -* **Data collection** (if an Elastic integration isn’t available). {{agent}} and Elastic [integrations](https://docs.elastic.co/en/integrations/all_integrations) provide many features out-of-the-box, so be sure to search or browse integrations for your data source. If you don’t find an Elastic integration for your data source, check {{ls}} for an [input plugin](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/input-plugins.md) for your data source. -* **Additional processing.** One of the most common {{ls}} use cases is [extending Elastic integrations](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/using-logstash-with-elastic-integrations.md). You can take advantage of the extensive, built-in capabilities of Elastic Agent and Elastic Integrations, and then use {{ls}} for additional data processing before sending the data on to {{es}}. 
+* **Data collection** (if an Elastic integration isn’t available). {{agent}} and Elastic [integrations](https://docs.elastic.co/en/integrations/all_integrations) provide many features out-of-the-box, so be sure to search or browse integrations for your data source. If you don’t find an Elastic integration for your data source, check {{ls}} for an [input plugin](asciidocalypse://docs/logstash/docs/reference/input-plugins.md) for your data source. +* **Additional processing.** One of the most common {{ls}} use cases is [extending Elastic integrations](asciidocalypse://docs/logstash/docs/reference/using-logstash-with-elastic-integrations.md). You can take advantage of the extensive, built-in capabilities of Elastic Agent and Elastic Integrations, and then use {{ls}} for additional data processing before sending the data on to {{es}}. * **Advanced use cases.** {{ls}} can help with advanced use cases, such as when you need [persistence or buffering](/manage-data/ingest/ingest-reference-architectures/lspq.md), additional [data enrichment](/manage-data/ingest/ingest-reference-architectures/ls-enrich.md), [proxying](/manage-data/ingest/ingest-reference-architectures/ls-networkbridge.md) as a way to bridge network connections, or the ability to route data to [multiple destinations](/manage-data/ingest/ingest-reference-architectures/ls-multi.md). diff --git a/manage-data/ingest/tools.md b/manage-data/ingest/tools.md index f1fcbc913..c9e09b3e5 100644 --- a/manage-data/ingest/tools.md +++ b/manage-data/ingest/tools.md @@ -41,17 +41,17 @@ Depending on the type of data you want to ingest, you have a number of methods a | Tools | Usage | Links to more information | | ------- | --------------- | ------------------------- | -| Integrations | Ingest data using a variety of Elastic integrations. | [Elastic Integrations](asciidocalypse://docs/integration-docs/docs/reference/ingestion-tools/integrations/index.md) | +| Integrations | Ingest data using a variety of Elastic integrations. | [Elastic Integrations](asciidocalypse://docs/integration-docs/docs/reference/index.md) | | File upload | Upload data from a file and inspect it before importing it into {{es}}. | [Upload data files](/manage-data/ingest/upload-data-files.md) | | APIs | Ingest data through code by using the APIs of one of the language clients or the {{es}} HTTP APIs. | [Document APIs](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-document) | | OpenTelemetry | Collect and send your telemetry data to Elastic Observability | [Elastic Distributions of OpenTelemetry](https://github.com/elastic/opentelemetry?tab=readme-ov-file#elastic-distributions-of-opentelemetry) | | Fleet and Elastic Agent | Add monitoring for logs, metrics, and other types of data to a host using Elastic Agent, and centrally manage it using Fleet. | [Fleet and {{agent}} overview](asciidocalypse://docs/docs-content/docs/reference/ingestion-tools/fleet/index.md)
[{{fleet}} and {{agent}} restrictions (Serverless)](asciidocalypse://docs/docs-content/docs/reference/ingestion-tools/fleet/fleet-agent-serverless-restrictions.md)
[{{beats}} and {{agent}} capabilities](/manage-data/ingest/tools.md)|| | {{elastic-defend}} | {{elastic-defend}} provides organizations with prevention, detection, and response capabilities with deep visibility for EPP, EDR, SIEM, and Security Analytics use cases across Windows, macOS, and Linux operating systems running on both traditional endpoints and public cloud environments. | [Configure endpoint protection with {{elastic-defend}}](/solutions/security/configure-elastic-defend.md) | -| {{ls}} | Dynamically unify data from a wide variety of data sources and normalize it into destinations of your choice with {{ls}}. | [Logstash (Serverless)](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/index.md)
[Logstash pipelines](/manage-data/ingest/transform-enrich/logstash-pipelines.md) | -| {{beats}} | Use {{beats}} data shippers to send operational data to Elasticsearch directly or through Logstash. | [{{beats}} (Serverless)](asciidocalypse://docs/beats/docs/reference/ingestion-tools/index.md)
[What are {{beats}}?](asciidocalypse://docs/beats/docs/reference/ingestion-tools/index.md)
[{{beats}} and {{agent}} capabilities](/manage-data/ingest/tools.md)| +| {{ls}} | Dynamically unify data from a wide variety of data sources and normalize it into destinations of your choice with {{ls}}. | [Logstash (Serverless)](asciidocalypse://docs/logstash/docs/reference/index.md)
[Logstash pipelines](/manage-data/ingest/transform-enrich/logstash-pipelines.md) | +| {{beats}} | Use {{beats}} data shippers to send operational data to Elasticsearch directly or through Logstash. | [{{beats}} (Serverless)](asciidocalypse://docs/beats/docs/reference/index.md)
[What are {{beats}}?](asciidocalypse://docs/beats/docs/reference/index.md)
[{{beats}} and {{agent}} capabilities](/manage-data/ingest/tools.md)| | APM | Collect detailed performance information on response time for incoming requests, database queries, calls to caches, external HTTP requests, and more. | [Application performance monitoring (APM)](/solutions/observability/apps/application-performance-monitoring-apm.md) | | Application logs | Ingest application logs using Filebeat, {{agent}}, or the APM agent, or reformat application logs into Elastic Common Schema (ECS) logs and then ingest them using Filebeat or {{agent}}. | [Stream application logs](/solutions/observability/logs/stream-application-logs.md)
[ECS formatted application logs](/solutions/observability/logs/ecs-formatted-application-logs.md) | -| Elastic Serverless forwarder for AWS | Ship logs from your AWS environment to cloud-hosted, self-managed Elastic environments, or {{ls}}. | [Elastic Serverless Forwarder](asciidocalypse://docs/elastic-serverless-forwarder/docs/reference/ingestion-tools/esf/index.md) | +| Elastic Serverless forwarder for AWS | Ship logs from your AWS environment to cloud-hosted, self-managed Elastic environments, or {{ls}}. | [Elastic Serverless Forwarder](asciidocalypse://docs/elastic-serverless-forwarder/docs/reference/index.md) | | Connectors | Use connectors to extract data from an original data source and sync it to an {{es}} index. | [Ingest content with Elastic connectors ](asciidocalypse://docs/elasticsearch/docs/reference/ingestion-tools/search-connectors/index.md)
[Connector clients](asciidocalypse://docs/elasticsearch/docs/reference/ingestion-tools/search-connectors/index.md) | | Web crawler | Discover, extract, and index searchable content from websites and knowledge bases using the web crawler. | [Elastic Open Web Crawler](https://github.com/elastic/crawler#readme) | \ No newline at end of file diff --git a/manage-data/ingest/transform-enrich.md b/manage-data/ingest/transform-enrich.md index 9a20e3e42..9245f1233 100644 --- a/manage-data/ingest/transform-enrich.md +++ b/manage-data/ingest/transform-enrich.md @@ -36,7 +36,7 @@ Finally, to help ensure optimal query results, you may want to customize how tex {{ls}} and the {{ls}} `elastic_integration filter` : If you're using {{ls}} as your primary ingest tool, you can take advantage of its built-in pipeline capabilities to transform your data. You configure a pipeline by stringing together a series of input, output, filtering, and optional codec plugins to manipulate all incoming data. -: If you're ingesting using {{agent}} with Elastic {{integrations}}, you can use the {{ls}} [`elastic_integration filter`](https://www.elastic.co/guide/en/logstash/current/) and other [{{ls}} filters](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/filter-plugins.md) to [extend Elastic integrations](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/using-logstash-with-elastic-integrations.md) by transforming data before it goes to {{es}}. +: If you're ingesting using {{agent}} with Elastic {{integrations}}, you can use the {{ls}} [`elastic_integration filter`](https://www.elastic.co/guide/en/logstash/current/) and other [{{ls}} filters](asciidocalypse://docs/logstash/docs/reference/filter-plugins.md) to [extend Elastic integrations](asciidocalypse://docs/logstash/docs/reference/using-logstash-with-elastic-integrations.md) by transforming data before it goes to {{es}}. Index mapping : Index mapping lets you control the structure that incoming data has within an {{es}} index. You can define all of the fields that are included in the index and their respective data types. For example, you can set fields for dates, numbers, or geolocations, and define the fields to have specific formats. diff --git a/manage-data/ingest/transform-enrich/ingest-pipelines-serverless.md b/manage-data/ingest/transform-enrich/ingest-pipelines-serverless.md index 50c7c3608..001008faa 100644 --- a/manage-data/ingest/transform-enrich/ingest-pipelines-serverless.md +++ b/manage-data/ingest/transform-enrich/ingest-pipelines-serverless.md @@ -33,7 +33,7 @@ In **{{project-settings}} → {{manage-app}} → {{ingest-pipelines-app}}**, you To create a pipeline, click **Create pipeline → New pipeline**. For an example tutorial, see [Example: Parse logs](example-parse-logs.md). -The **New pipeline from CSV** option lets you use a file with comma-separated values (CSV) to create an ingest pipeline that maps custom data to the Elastic Common Schema (ECS). Mapping your custom data to ECS makes the data easier to search and lets you reuse visualizations from other data sets. To get started, check [Map custom data to ECS](asciidocalypse://docs/ecs/docs/reference/ecs/ecs-converting.md). +The **New pipeline from CSV** option lets you use a file with comma-separated values (CSV) to create an ingest pipeline that maps custom data to the Elastic Common Schema (ECS). Mapping your custom data to ECS makes the data easier to search and lets you reuse visualizations from other data sets. 
To get started, check [Map custom data to ECS](asciidocalypse://docs/ecs/docs/reference/ecs-converting.md). ## Test pipelines [ingest-pipelines-test-pipelines] diff --git a/manage-data/ingest/transform-enrich/ingest-pipelines.md b/manage-data/ingest/transform-enrich/ingest-pipelines.md index ef7b1eb01..67d1dc244 100644 --- a/manage-data/ingest/transform-enrich/ingest-pipelines.md +++ b/manage-data/ingest/transform-enrich/ingest-pipelines.md @@ -45,7 +45,7 @@ In {{kib}}, open the main menu and click **Stack Management > Ingest Pipelines** To create a pipeline, click **Create pipeline > New pipeline**. For an example tutorial, see [Example: Parse logs](example-parse-logs.md). ::::{tip} -The **New pipeline from CSV** option lets you use a CSV to create an ingest pipeline that maps custom data to the [Elastic Common Schema (ECS)](https://www.elastic.co/guide/en/ecs/current). Mapping your custom data to ECS makes the data easier to search and lets you reuse visualizations from other datasets. To get started, check [Map custom data to ECS](asciidocalypse://docs/ecs/docs/reference/ecs/ecs-converting.md). +The **New pipeline from CSV** option lets you use a CSV to create an ingest pipeline that maps custom data to the [Elastic Common Schema (ECS)](https://www.elastic.co/guide/en/ecs/current). Mapping your custom data to ECS makes the data easier to search and lets you reuse visualizations from other datasets. To get started, check [Map custom data to ECS](asciidocalypse://docs/ecs/docs/reference/ecs-converting.md). :::: diff --git a/manage-data/ingest/transform-enrich/logstash-pipelines.md b/manage-data/ingest/transform-enrich/logstash-pipelines.md index 615c50a0e..747becfef 100644 --- a/manage-data/ingest/transform-enrich/logstash-pipelines.md +++ b/manage-data/ingest/transform-enrich/logstash-pipelines.md @@ -28,7 +28,7 @@ After you configure {{ls}} to use centralized pipeline management, you can no lo ## Manage pipelines [logstash-pipelines-manage-pipelines] -1. [Configure centralized pipeline management](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/configuring-centralized-pipelines.md). +1. [Configure centralized pipeline management](asciidocalypse://docs/logstash/docs/reference/configuring-centralized-pipelines.md). 2. To add a new pipeline, go to **{{project-settings}} → {{manage-app}} → {{ls-pipelines-app}}** and click **Create pipeline**. Provide the following details, then click **Create and deploy**. Pipeline ID @@ -61,4 +61,4 @@ After you configure {{ls}} to use centralized pipeline management, you can no lo To delete one or more pipelines, select their checkboxes then click **Delete**. -For more information about pipeline behavior, go to [Centralized Pipeline Management](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/logstash-centralized-pipeline-management.md#_pipeline_behavior). +For more information about pipeline behavior, go to [Centralized Pipeline Management](asciidocalypse://docs/logstash/docs/reference/logstash-centralized-pipeline-management.md#_pipeline_behavior). diff --git a/manage-data/ingest/transform-enrich/set-up-an-enrich-processor.md b/manage-data/ingest/transform-enrich/set-up-an-enrich-processor.md index 4a89084f5..ddf1c3d16 100644 --- a/manage-data/ingest/transform-enrich/set-up-an-enrich-processor.md +++ b/manage-data/ingest/transform-enrich/set-up-an-enrich-processor.md @@ -43,7 +43,7 @@ To begin, add documents to one or more source indices. 
These documents should co You can manage source indices just like regular {{es}} indices using the [document](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-document) and [index](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-indices) APIs. -You also can set up [{{beats}}](asciidocalypse://docs/beats/docs/reference/ingestion-tools/index.md), such as a [{{filebeat}}](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-filebeat/filebeat-installation-configuration.md), to automatically send and index documents to your source indices. See [Getting started with {{beats}}](asciidocalypse://docs/beats/docs/reference/ingestion-tools/index.md). +You also can set up [{{beats}}](asciidocalypse://docs/beats/docs/reference/index.md), such as a [{{filebeat}}](asciidocalypse://docs/beats/docs/reference/filebeat/filebeat-installation-configuration.md), to automatically send and index documents to your source indices. See [Getting started with {{beats}}](asciidocalypse://docs/beats/docs/reference/index.md). ## Create an enrich policy [create-enrich-policy] diff --git a/manage-data/lifecycle/curator.md b/manage-data/lifecycle/curator.md index c70191f14..ee168418d 100644 --- a/manage-data/lifecycle/curator.md +++ b/manage-data/lifecycle/curator.md @@ -9,4 +9,4 @@ applies_to: Similar to {{ilm-cap}} ({{ilm-init}}), Elasticsearch Curator can help you manage index lifecycles. **If {{ilm-init}} provides the functionality to manage your index lifecycle and you have at least a Basic license, use {{ilm-init}} instead of Curator.** Many {{stack}} components use {{ilm-init}} by default. -If you're looking for additional functionality for managing your index lifecycle, you can read more about how Elasticsearch Curator may help in [Curator index management](asciidocalypse://docs/curator/docs/reference/elasticsearch/elasticsearch-client-curator/index.md). +If you're looking for additional functionality for managing your index lifecycle, you can read more about how Elasticsearch Curator may help in [Curator index management](asciidocalypse://docs/curator/docs/reference/index.md). diff --git a/manage-data/lifecycle/data-tiers.md b/manage-data/lifecycle/data-tiers.md index 03f67d3b8..43e78eda4 100644 --- a/manage-data/lifecycle/data-tiers.md +++ b/manage-data/lifecycle/data-tiers.md @@ -28,7 +28,7 @@ The data tiers that you use, and the way that you use them, depends on the data * [Frozen tier](/manage-data/lifecycle/data-tiers.md#frozen-tier) nodes hold time series data that is accessed rarely and never updated. The frozen tier stores [partially mounted indices](/deploy-manage/tools/snapshot-and-restore/searchable-snapshots.md#partially-mounted) of [{{search-snaps}}](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/index-lifecycle-actions/ilm-searchable-snapshot.md) exclusively. This extends the storage capacity even further — by up to 20 times compared to the warm tier. ::::{tip} -The performance of an {{es}} node is often limited by the performance of the underlying storage and hardware profile. For example hardware profiles, refer to Elastic Cloud’s [instance configurations](asciidocalypse://docs/cloud/docs/reference/cloud/cloud-hosted/hardware.md). Review our recommendations for optimizing your storage for [indexing](/deploy-manage/production-guidance/optimize-performance/indexing-speed.md#indexing-use-faster-hardware) and [search](/deploy-manage/production-guidance/optimize-performance/search-speed.md#search-use-faster-hardware). 
+The performance of an {{es}} node is often limited by the performance of the underlying storage and hardware profile. For example hardware profiles, refer to Elastic Cloud’s [instance configurations](asciidocalypse://docs/cloud/docs/reference/cloud-hosted/hardware.md). Review our recommendations for optimizing your storage for [indexing](/deploy-manage/production-guidance/optimize-performance/indexing-speed.md#indexing-use-faster-hardware) and [search](/deploy-manage/production-guidance/optimize-performance/search-speed.md#search-use-faster-hardware). :::: ::::{important} @@ -137,10 +137,10 @@ To make sure that all data can be migrated from the data tier you want to disabl ::::{tab-item} {{ech}} - 1. Log in to the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body). + 1. Log in to the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). 2. From the **Deployments** page, select your deployment. - On the deployments page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. + On the **Deployments** page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. 3. Filter the list of instances by the Data tier you want to disable. diff --git a/manage-data/lifecycle/index-lifecycle-management/migrate-index-allocation-filters-to-node-roles.md b/manage-data/lifecycle/index-lifecycle-management/migrate-index-allocation-filters-to-node-roles.md index 71be7632e..8fd0b551a 100644 --- a/manage-data/lifecycle/index-lifecycle-management/migrate-index-allocation-filters-to-node-roles.md +++ b/manage-data/lifecycle/index-lifecycle-management/migrate-index-allocation-filters-to-node-roles.md @@ -15,12 +15,12 @@ While we recommend relying on automatic data tier allocation to manage your data :::: -{{ess}} and {{ece}} can perform the migration automatically. For self-managed deployments, you need to manually update your configuration, ILM policies, and indices to switch to node roles. +{{ech}} and {{ece}} can perform the migration automatically. For self-managed deployments, you need to manually update your configuration, ILM policies, and indices to switch to node roles. -## Automatically migrate to node roles on {{ess}} or {{ece}} [cloud-migrate-to-node-roles] +## Automatically migrate to node roles on {{ech}} or {{ece}} [cloud-migrate-to-node-roles] -If you are using node attributes from the default deployment template in {{ess}} or {{ece}}, you will be prompted to switch to node roles when you: +If you are using node attributes from the default deployment template in {{ech}} or {{ece}}, you will be prompted to switch to node roles when you: * Upgrade to {{es}} 7.10 or higher * Deploy a warm, cold, or frozen data tier @@ -50,7 +50,7 @@ To switch to using node roles: Configure the appropriate roles for each data node to assign it to one or more data tiers: `data_hot`, `data_content`, `data_warm`, `data_cold`, or `data_frozen`. A node can also have other [roles](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/configuration-reference/node-settings.md). By default, new nodes are configured with all roles. -When you add a data tier to an {{ess}} deployment, one or more nodes are automatically configured with the corresponding role. 
To explicitly change the role of a node in an {{ess}} deployment, use the [Update deployment API](../../../deploy-manage/deploy/elastic-cloud/manage-deployments-using-elastic-cloud-api.md#ec_update_a_deployment). Replace the node’s `node_type` configuration with the appropriate `node_roles`. For example, the following configuration adds the node to the hot and content tiers, and enables it to act as an ingest node, remote, and transform node. +When you add a data tier to an {{ech}} deployment, one or more nodes are automatically configured with the corresponding role. To explicitly change the role of a node in an {{ech}} deployment, use the [Update deployment API](../../../deploy-manage/deploy/elastic-cloud/manage-deployments-using-elastic-cloud-api.md#ec_update_a_deployment). Replace the node’s `node_type` configuration with the appropriate `node_roles`. For example, the following configuration adds the node to the hot and content tiers, and enables it to act as an ingest node, remote, and transform node. ```yaml "node_roles": [ @@ -85,7 +85,7 @@ The policy must specify the corresponding phase for each data tier in your archi When you create a data stream, its first backing index is now automatically assigned to `data_hot` nodes. Similarly, when you directly create an index, it is automatically assigned to `data_content` nodes. -On {{ess}} deployments, remove the `cloud-hot-warm-allocation-0` index template that set the hot shard allocation attribute on all indices. +On {{ech}} deployments, remove the `cloud-hot-warm-allocation-0` index template that set the hot shard allocation attribute on all indices. ```console DELETE _template/.cloud-hot-warm-allocation-0 diff --git a/manage-data/lifecycle/index-lifecycle-management/migrate-index-management.md b/manage-data/lifecycle/index-lifecycle-management/migrate-index-management.md index 4c6cd1027..4f923f1d5 100644 --- a/manage-data/lifecycle/index-lifecycle-management/migrate-index-management.md +++ b/manage-data/lifecycle/index-lifecycle-management/migrate-index-management.md @@ -26,10 +26,10 @@ To configure ILM Migration in the console: ::::{tab-set} :::{tab-item} {{ech}} -1. Log in to the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body). +1. Log in to the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). 2. From the **Deployments** page, select your deployment. - On the deployments page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. + On the **Deployments** page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. 3. Near the top of the deployment overview, you should get a message to migrate from index curation to index lifecycle management (ILM) along with a **Start migration** button. 4. Select which index curation pattern you wish to migrate. @@ -40,7 +40,7 @@ To configure ILM Migration in the console: 1. [Log into the Cloud UI](../../../deploy-manage/deploy/cloud-enterprise/log-into-cloud-ui.md). 2. From the **Deployments** page, select your deployment. - On the deployments page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. 
+ On the **Deployments** page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. 3. Near the top of the deployment overview, you should get a message to migrate from index curation to index lifecycle management (ILM) along with a **Start migration** button. 4. Select which index curation pattern you wish to migrate. diff --git a/manage-data/lifecycle/index-lifecycle-management/tutorial-customize-built-in-policies.md b/manage-data/lifecycle/index-lifecycle-management/tutorial-customize-built-in-policies.md index d728c5a3a..47db1c63f 100644 --- a/manage-data/lifecycle/index-lifecycle-management/tutorial-customize-built-in-policies.md +++ b/manage-data/lifecycle/index-lifecycle-management/tutorial-customize-built-in-policies.md @@ -38,7 +38,7 @@ To complete this tutorial, you’ll need: * An {{es}} cluster with hot and warm data tiers. - * {{ess}}: Elastic Stack deployments on {{ess}} include a hot tier by default. To add a warm tier, edit your deployment and click **Add capacity** for the warm data tier. + * {{ech}}: Elastic Stack deployments on {{ecloud}} include a hot tier by default. To add a warm tier, edit your deployment and click **Add capacity** for the warm data tier. :::{image} ../../../images/elasticsearch-reference-tutorial-ilm-ess-add-warm-data-tier.png :alt: Add a warm data tier to your deployment diff --git a/manage-data/migrate/migrate-from-a-self-managed-cluster-with-a-self-signed-certificate-using-remote-reindex.md b/manage-data/migrate/migrate-from-a-self-managed-cluster-with-a-self-signed-certificate-using-remote-reindex.md index 5190be210..9854e0fa0 100644 --- a/manage-data/migrate/migrate-from-a-self-managed-cluster-with-a-self-signed-certificate-using-remote-reindex.md +++ b/manage-data/migrate/migrate-from-a-self-managed-cluster-with-a-self-signed-certificate-using-remote-reindex.md @@ -10,9 +10,9 @@ navigation_title: Reindex from a self-managed cluster # Migrate from a self-managed cluster with a self-signed certificate using remote reindex [ec-remote-reindex] -The following instructions show you how to configure remote reindex on Elasticsearch Service from a cluster that uses a self-signed CA. +The following instructions show you how to configure remote reindex on {{ech}} from a cluster that uses a self-signed CA. -Let’s assume that the self-managed cluster that uses a self-signed certificate is called `Source`, and you want to migrate data from `Source` to `Destination` on Elasticsearch Service. +Let’s assume that the self-managed cluster that uses a self-signed certificate is called `Source`, and you want to migrate data from `Source` to `Destination` on {{ech}}. ## Step 1: Create the `Source` certificate in a bundle [ec-remote-reindex-step1] @@ -37,14 +37,14 @@ Both the folder and file names must correspond to the settings configured in [St -## Step 2: Upload the zip bundle to your Elasticsearch Service account [ec-remote-reindex-step2] +## Step 2: Upload the zip bundle to your {{ecloud}} account [ec-remote-reindex-step2] To upload your file, follow the steps in the section [Add your extension](../../deploy-manage/deploy/elastic-cloud/upload-custom-plugins-bundles.md#ec-add-your-plugin). Enter wildcard `*` for **Version** in order to be compatible for all future upgrades, and select `A bundle containing dictionary or script` as **Type**. 
-## Step 3: Create a new deployment on Elasticsearch Service [ec-remote-reindex-step3] +## Step 3: Create a new {{ech}} deployment [ec-remote-reindex-step3] -From the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body) create a new deployment. This will be the `Destination` cluster. +From the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body) create a new deployment. This will be the `Destination` cluster. ::::{note} The `Destination` cluster should be the same or newer version as the `Source` cluster. If you already have a cluster available, you can skip this step. @@ -71,7 +71,7 @@ The `Destination` cluster should be the same or newer version as the `Source` cl ## Step 5: Reindex from remote `Source` cluster. [ec-remote-reindex-step5] -You can now run `reindex` on the Elasticsearch Service `Destination` cluster from `Source` cluster: +You can now run `reindex` on the {{ech}} `Destination` cluster from `Source` cluster: ```text POST _reindex diff --git a/manage-data/toc.yml b/manage-data/toc.yml index 1c5e611a7..89b2b5eeb 100644 --- a/manage-data/toc.yml +++ b/manage-data/toc.yml @@ -4,6 +4,7 @@ toc: - file: data-store.md children: - file: data-store/index-basics.md + - file: data-store/near-real-time-search.md - file: data-store/data-streams.md children: - file: data-store/data-streams/set-up-data-stream.md diff --git a/manage-data/use-case-use-elasticsearch-to-manage-time-series-data.md b/manage-data/use-case-use-elasticsearch-to-manage-time-series-data.md index fbf019506..04836910a 100644 --- a/manage-data/use-case-use-elasticsearch-to-manage-time-series-data.md +++ b/manage-data/use-case-use-elasticsearch-to-manage-time-series-data.md @@ -22,15 +22,15 @@ The steps for setting up data tiers vary based on your deployment type: :::::::{tab-set} -::::::{tab-item} Elasticsearch Service -1. Log in to the [{{ess}} Console](https://cloud.elastic.co/registration?page=docs&placement=docs-body). -2. Add or select your deployment from the {{ess}} home page or the deployments page. +::::::{tab-item} {{ech}} +1. Log in to the [{{ecloud}} Console](https://cloud.elastic.co/registration?page=docs&placement=docs-body). +2. Add or select your deployment from the {{ecloud}} home page or the **Deployments** page. 3. From your deployment menu, select **Edit deployment**. 4. To enable a data tier, click **Add capacity**. **Enable autoscaling** -[Autoscaling](../deploy-manage/autoscaling.md) automatically adjusts your deployment’s capacity to meet your storage needs. To enable autoscaling, select **Autoscale this deployment** on the **Edit deployment** page. Autoscaling is only available for {{ess}}. +[Autoscaling](../deploy-manage/autoscaling.md) automatically adjusts your deployment’s capacity to meet your storage needs. To enable autoscaling, select **Autoscale this deployment** on the **Edit deployment** page. Autoscaling is only available for {{ech}}. :::::: ::::::{tab-item} Self-managed @@ -76,8 +76,8 @@ To use {{search-snaps}}, you must register a supported snapshot repository. The :::::::{tab-set} -::::::{tab-item} Elasticsearch Service -When you create a cluster, {{ess}} automatically registers a default [`found-snapshots`](../deploy-manage/tools/snapshot-and-restore.md) repository. This repository supports {{search-snaps}}. +::::::{tab-item} {{ech}} +When you create a cluster, {{ech}} automatically registers a default [`found-snapshots`](../deploy-manage/tools/snapshot-and-restore.md) repository. 
This repository supports {{search-snaps}}. The `found-snapshots` repository is specific to your cluster. To use another cluster’s default repository, refer to the Cloud [Snapshot and restore](../deploy-manage/tools/snapshot-and-restore.md) documentation. diff --git a/raw-migrated-files/apm-agent-android/apm-agent-android/release-notes.md b/raw-migrated-files/apm-agent-android/apm-agent-android/release-notes.md index c01780bcb..ca48fcb62 100644 --- a/raw-migrated-files/apm-agent-android/apm-agent-android/release-notes.md +++ b/raw-migrated-files/apm-agent-android/apm-agent-android/release-notes.md @@ -5,6 +5,6 @@ This functionality is in technical preview and may be changed or removed in a fu :::: -* [Android agent version 0.x](asciidocalypse://docs/apm-agent-android/docs/release-notes/apm-android-agent.md) +* [Android agent version 0.x](asciidocalypse://docs/apm-agent-android/docs/release-notes/index.md) diff --git a/raw-migrated-files/cloud-on-k8s/cloud-on-k8s/k8s-rotate-credentials.md b/raw-migrated-files/cloud-on-k8s/cloud-on-k8s/k8s-rotate-credentials.md deleted file mode 100644 index a6070796d..000000000 --- a/raw-migrated-files/cloud-on-k8s/cloud-on-k8s/k8s-rotate-credentials.md +++ /dev/null @@ -1,30 +0,0 @@ -# Rotate auto-generated credentials [k8s-rotate-credentials] - -When deploying an Elastic Stack application, the operator generates a set of credentials essential for the operation of that application. For example, these generated credentials include the default `elastic` user for Elasticsearch and the security token for APM Server. - -To list all auto-generated credentials in a namespace, run the following command: - -```sh -kubectl get secret -l eck.k8s.elastic.co/credentials=true -``` - -You can force the auto-generated credentials to be regenerated with new values by deleting the appropriate Secret. For example, to change the password for the `elastic` user from the [quickstart example](../../../deploy-manage/deploy/cloud-on-k8s/deploy-an-orchestrator.md), use the following command: - -```sh -kubectl delete secret quickstart-es-elastic-user -``` - -::::{warning} -If you are using the `elastic` user credentials in your own applications, they will fail to connect to Elasticsearch and Kibana after you run this command. It is not recommended to use `elastic` user credentials for production use cases. Always [create your own users with restricted roles](../../../deploy-manage/users-roles/cluster-or-deployment-auth/native.md) to access Elasticsearch. -:::: - - -To regenerate all auto-generated credentials in a namespace, run the following command: - -```sh -kubectl delete secret -l eck.k8s.elastic.co/credentials=true -``` - -::::{warning} -This command regenerates auto-generated credentials of **all** Elastic Stack applications in the namespace. -:::: diff --git a/raw-migrated-files/cloud-on-k8s/cloud-on-k8s/k8s-saml-authentication.md b/raw-migrated-files/cloud-on-k8s/cloud-on-k8s/k8s-saml-authentication.md deleted file mode 100644 index 8a0c633e8..000000000 --- a/raw-migrated-files/cloud-on-k8s/cloud-on-k8s/k8s-saml-authentication.md +++ /dev/null @@ -1,147 +0,0 @@ -# SAML Authentication [k8s-saml-authentication] - -The Elastic Stack supports SAML single sign-on (SSO) into Kibana, using Elasticsearch as a backend service. - -::::{note} -Elastic Stack SSO requires a valid Enterprise license or Enterprise trial license. Check [the license documentation](../../../deploy-manage/license/manage-your-license-in-eck.md) for more details about managing licenses. 
-:::: - - -::::{tip} -Make sure you check the complete [Configuring SAML single sign-on on the Elastic Stack](/deploy-manage/users-roles/cluster-or-deployment-auth/saml.md) guide before setting up SAML SSO for Kibana and Elasticsearch deployments managed by ECK. -:::: - - -## Add a SAML realm to X-Pack security settings [k8s_add_a_saml_realm_to_x_pack_security_settings] - -To enable SAML SSO for the Elastic Stack, you have to configure the SAML realm in Elasticsearch and enable the usage of the SAML realm and authentication provider in Kibana. - -### Elasticsearch [k8s_elasticsearch] - -To add the SAML realm to Elasticsearch, use the `spec` section of the manifest. The SAML realm configuration contains an `idp.metadata.path` field that should be set to the path where your IdP’s SAML metadata file is located in the Elasticsearch pods. - -::::{note} -The `sp.*` SAML settings must point to Kibana endpoints that are accessible from the web browser used to open Kibana. -:::: - - -Check Elastic [Stack SAML documentation](/deploy-manage/users-roles/cluster-or-deployment-auth/saml.md#saml-guide-idp) for more information on `idp.*` and `sp.*` settings. - -Make sure not to disable Elasticsearch’s file realm set by ECK, as ECK relies on the file realm for its operation. Set the `order` setting of the SAML realm to a greater value than the `order` value set for the file and native realms, which is by default -100 and -99 respectively. We recommend setting the priority of SAML realms to be lower than other realms, as shown in the next example. - -```yaml -apiVersion: elasticsearch.k8s.elastic.co/v1 -kind: Elasticsearch -metadata: - name: elasticsearch-sample -spec: - version: 8.16.1 - nodeSets: - - name: default - count: 1 - config: - xpack.security.authc.realms: - saml: - saml1: - attributes.principal: nameid - idp.entity_id: https://sso.example.com/ - idp.metadata.path: /usr/share/elasticsearch/config/saml/idp-saml-metadata.xml - order: 2 - sp.acs: https://kibana.example.com/api/security/saml/callback - sp.entity_id: https://kibana.example.com/ - sp.logout: https://kibana.example.com/logout -``` - -The `idp.metadata.path` setting should point to your Identity Provider’s metadata file. The metadata file path can either be a path within the Elasticsearch container (full path or relative to Elasticsearch’s config directory), or an HTTPS URL. - -If a path is provided, you need to make the metadata file available in the Elasticsearch container by creating a Kubernetes secret, containing the metadata file, and mounting it to the Elasticsearch container. - -After saving your Identity Provider’s metadata file, create the secret. For example: - -```sh -kubectl create secret generic idp-saml-metadata --from-file=idp-saml-metadata.xml -``` - -Next, create a volume from the secret and mount it for the Elasticsearch containers. For example: - -```yaml -apiVersion: elasticsearch.k8s.elastic.co/v1 -kind: Elasticsearch -metadata: - name: elasticsearch-sample -spec: - version: 8.16.1 - nodeSets: - - name: default - count: 1 - config: - ... - podTemplate: - spec: - containers: - - name: elasticsearch - volumeMounts: - - name: idp-saml-metadata - mountPath: /usr/share/elasticsearch/config/saml - volumes: - - name: idp-saml-metadata - secret: - secretName: idp-saml-metadata -``` - -::::{note} -To configure Elasticsearch for signing messages and/or for encrypted messages, keys and certificates should be mounted from a Kubernetes secret similar to how the SAML metadata file is mounted in the previous example. 
Passphrases, if needed, should be added to Elasticsearch’s keystore using ECK’s Secure Settings feature. For more information, check [the Secure Settings documentation](../../../deploy-manage/security/secure-settings.md) and [the Encryption and signing section](/deploy-manage/users-roles/cluster-or-deployment-auth/saml.md#saml-enc-sign) in the Stack SAML guide. -:::: - - - -### Kibana [k8s_kibana] - -To enable SAML authentication in Kibana, you have to add SAML as an authentication provider and specify the SAML realm that you used in your Elasticsearch configuration. - -::::{tip} -You can configure multiple authentication providers in Kibana and let users choose the provider they want to use. For more information, check [the Kibana authentication documentation](/deploy-manage/users-roles/cluster-or-deployment-auth/user-authentication.md). -:::: - - -For example: - -```yaml -apiVersion: kibana.k8s.elastic.co/v1 -kind: Kibana -metadata: - name: kibana-sample -spec: - version: 8.16.1 - count: 1 - elasticsearchRef: - name: elasticsearch-sample - config: - xpack.security.authc.providers: - saml.saml1: - order: 0 - realm: "saml1" -``` - -::::{important} -Your SAML users cannot login to Kibana until they are assigned roles. For more information, refer to [the Configuring role mapping section](/deploy-manage/users-roles/cluster-or-deployment-auth/saml.md#saml-role-mapping) in the Stack SAML guide. -:::: - - - - -## Generating Service Provider metadata [k8s_generating_service_provider_metadata] - -The Elastic Stack supports generating service provider metadata, that can be imported to the identity provider, and configure many of the integration options between the identity provider and the service provider, automatically. For more information, check [the Generating SP metadata section](/deploy-manage/users-roles/cluster-or-deployment-auth/saml.md#saml-sp-metadata) in the Stack SAML guide. - -To generate the Service Provider metadata using [the elasticsearch-saml-metadata command](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/command-line-tools/saml-metadata.md), you will have to run the command using `kubectl`, and then copy the generated metadata file to your local machine. For example: - -```sh -# Create metadata -kubectl exec -it elasticsearch-sample-es-default-0 -- sh -c "/usr/share/elasticsearch/bin/elasticsearch-saml-metadata --realm saml1" - -# Copy metadata file -kubectl cp elasticsearch-sample-es-default-0:/usr/share/elasticsearch/saml-elasticsearch-metadata.xml saml-elasticsearch-metadata.xml -``` - - diff --git a/raw-migrated-files/cloud-on-k8s/cloud-on-k8s/k8s-users-and-roles.md b/raw-migrated-files/cloud-on-k8s/cloud-on-k8s/k8s-users-and-roles.md index be2292b9a..b0e98338c 100644 --- a/raw-migrated-files/cloud-on-k8s/cloud-on-k8s/k8s-users-and-roles.md +++ b/raw-migrated-files/cloud-on-k8s/cloud-on-k8s/k8s-users-and-roles.md @@ -1,143 +1,5 @@ # Users and roles [k8s-users-and-roles] -## Default elastic user [k8s-default-elastic-user] - -When the Elasticsearch resource is created, a default user named `elastic` is created automatically, and is assigned the `superuser` role. - -Its password can be retrieved in a Kubernetes secret, whose name is based on the Elasticsearch resource name: `-es-elastic-user`. 
- -For example, the password of the `elastic` user for an Elasticsearch cluster named `quickstart` can be retrieved with: - -```sh -kubectl get secret quickstart-es-elastic-user -o go-template='{{.data.elastic | base64decode}}' -``` - -To rotate this password, refer to: [Rotate auto-generated credentials](../../../deploy-manage/users-roles/cluster-or-deployment-auth/built-in-users.md). - -### Disabling the default `elastic` user [k8s_disabling_the_default_elastic_user] - -If your prefer to manage all users via SSO, for example using [SAML Authentication](../../../deploy-manage/users-roles/cluster-or-deployment-auth/saml.md) or OpenID Connect, you can disable the default `elastic` superuser by setting the `auth.disableElasticUser` field in the Elasticsearch resource to `true`: - -```yaml -apiVersion: elasticsearch.k8s.elastic.co/v1 -kind: Elasticsearch -metadata: - name: elasticsearch-sample -spec: - version: 8.16.1 - auth: - disableElasticUser: true - nodeSets: - - name: default - count: 1 -``` - - - -## Creating custom users [k8s_creating_custom_users] - -::::{warning} -Do not run the `elasticsearch-service-tokens` command inside an Elasticsearch Pod managed by the operator. This would overwrite the service account tokens used internally to authenticate the Elastic stack applications. -:::: - - -### Native realm [k8s_native_realm] - -You can create custom users in the [Elasticsearch native realm](/deploy-manage/users-roles/cluster-or-deployment-auth/native.md) using [Elasticsearch user management APIs](https://www.elastic.co/docs/api/doc/elasticsearch/group/endpoint-security). - - -### File realm [k8s_file_realm] - -Custom users can also be created by providing the desired [file realm content](/deploy-manage/users-roles/cluster-or-deployment-auth/file-based.md) or a username and password in Kubernetes secrets, referenced in the Elasticsearch resource. - -```yaml -apiVersion: elasticsearch.k8s.elastic.co/v1 -kind: Elasticsearch -metadata: - name: elasticsearch-sample -spec: - version: 8.16.1 - auth: - fileRealm: - - secretName: my-filerealm-secret-1 - - secretName: my-filerealm-secret-2 - nodeSets: - - name: default - count: 1 -``` - -You can reference several secrets in the Elasticsearch specification. ECK aggregates their content into a single secret, mounted in every Elasticsearch Pod. - -Referenced secrets may be of one of two types: - -1. a combination of username and password as in [Kubernetes basic authentication secrets](https://kubernetes.io/docs/concepts/configuration/secret/#basic-authentication-secret) -2. a raw file realm content secret - -A basic authentication secret can optionally also contain a `roles` file. It must contain a comma separated list of roles to be associated with the user. The following example illustrates this combination: - -```yaml -apiVersion: v1 -kind: Secret -metadata: - name: secret-basic-auth -type: kubernetes.io/basic-auth -stringData: - username: rdeniro # required field for kubernetes.io/basic-auth - password: mypassword # required field for kubernetes.io/basic-auth - roles: kibana_admin,ingest_admin # optional, not part of kubernetes.io/basic-auth -``` - -::::{note} -If you specify the password for the `elastic` user through such a basic authentication secret then the secret holding the password described in [Default elastic user](../../../deploy-manage/users-roles/cluster-or-deployment-auth/native.md#k8s-default-elastic-user) will not be created by the operator. -:::: - - -The second option, a file realm secret, is composed of 2 entries. 
You can provide either one entry or both entries in each secret: - -* `users`: content of the `users` file. It specifies user names and password hashes, as described in the [file realm documentation](/deploy-manage/users-roles/cluster-or-deployment-auth/file-based.md). -* `users_roles`: content of the `users_roles` file. It associates each role to a list of users, as described in the [file realm documentation](/deploy-manage/users-roles/cluster-or-deployment-auth/file-based.md). - -If you specify multiple users with the same name in more than one secret, the last one takes precedence. If you specify multiple roles with the same name in more than one secret, a single entry per role is derived from the concatenation of its corresponding users from all secrets. - -The following Secret specifies three users and their respective roles: - -```yaml -kind: Secret -apiVersion: v1 -metadata: - name: my-filerealm-secret -stringData: - users: |- - rdeniro:$2a$10$BBJ/ILiyJ1eBTYoRKxkqbuDEdYECplvxnqQ47uiowE7yGqvCEgj9W - alpacino:$2a$10$cNwHnElYiMYZ/T3K4PvzGeJ1KbpXZp2PfoQD.gfaVdImnHOwIuBKS - jacknich:{PBKDF2}50000$z1CLJt0MEFjkIK5iEfgvfnA6xq7lF25uasspsTKSo5Q=$XxCVLbaKDimOdyWgLCLJiyoiWpA/XDMe/xtVgn1r5Sg= - users_roles: |- - admin:rdeniro - power_user:alpacino,jacknich - user:jacknich -``` - -You can populate the content of both `users` and `users_roles` using the [elasticsearch-users](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch/command-line-tools/users-command.md) tool. - -For example, invoking the tool in a Docker container: - -```sh -# create a folder with the 2 files -mkdir filerealm -touch filerealm/users filerealm/users_roles - -# create user 'myuser' with role 'monitoring_user' -docker run \ - -v $(pwd)/filerealm:/usr/share/elasticsearch/config \ - docker.elastic.co/elasticsearch/elasticsearch:8.16.1 \ - bin/elasticsearch-users useradd myuser -p mypassword -r monitoring_user - -# create a Kubernetes secret with the file realm content -kubectl create secret generic my-file-realm-secret --from-file filerealm -``` - - - ## Creating custom roles [k8s_creating_custom_roles] [Roles](https://www.elastic.co/guide/en/elasticsearch/reference/current/defining-roles.html) can be specified using the [Role management API](https://www.elastic.co/guide/en/elasticsearch/reference/current/defining-roles.html#roles-management-api), or the [Role management UI in Kibana](https://www.elastic.co/guide/en/elasticsearch/reference/current/defining-roles.html#roles-management-ui). diff --git a/raw-migrated-files/cloud/cloud-enterprise/Elastic-Cloud-Enterprise-overview.md b/raw-migrated-files/cloud/cloud-enterprise/Elastic-Cloud-Enterprise-overview.md deleted file mode 100644 index c89107e57..000000000 --- a/raw-migrated-files/cloud/cloud-enterprise/Elastic-Cloud-Enterprise-overview.md +++ /dev/null @@ -1,30 +0,0 @@ -# Introducing Elastic Cloud Enterprise [Elastic-Cloud-Enterprise-overview] - -This page provides a high-level introduction to Elastic Cloud Enterprise (ECE). - -::::{note} -Try one of the [getting started guides](https://www.elastic.co/guide/en/starting-with-the-elasticsearch-platform-and-its-solutions/current/getting-started-guides.html) to discover the core concepts of the Elastic Stack and understand how Elastic can help you. -:::: - - -**What is ECE?** - -ECE evolves from the Elastic hosted Cloud SaaS offering into a standalone product. You can deploy ECE on public or private clouds, virtual machines, or your own premises. 
- -**Why ECE?** - -* Host your regulated or sensitive data on your internal network. -* Reuse your existing investment in on-premise infrastructure and reduce total cost. -* Maximize the hardware utilization for the various clusters. -* Centralize the management of multiple Elastic deployments across teams or geographies. - -**ECE features** - -* All services are containerized through Docker. -* High Availability through multiple Availability Zones. -* Deployment state coordination using ZooKeeper. -* Easy access for admins through the Cloud UI and API. -* Support for off-line installations. -* Automated restore and snapshot. - -Check the [glossary](asciidocalypse://docs/docs-content/docs/reference/glossary/index.md) to get familiar with the terminology for ECE as well as other Elastic products and solutions. diff --git a/raw-migrated-files/cloud/cloud-enterprise/ece-administering-ece.md b/raw-migrated-files/cloud/cloud-enterprise/ece-administering-ece.md deleted file mode 100644 index 7ab3fd8a6..000000000 --- a/raw-migrated-files/cloud/cloud-enterprise/ece-administering-ece.md +++ /dev/null @@ -1,13 +0,0 @@ -# Administering your installation [ece-administering-ece] - -Now that you have Elastic Cloud Enterprise up and running, take a look at the things you can do to keep your installation humming along, from adding more capacity to dealing with hosts that require maintenance or have failed: - -* [Scale Out Your Installation](../../../deploy-manage/maintenance/ece/scale-out-installation.md) - Need to add more capacity? Here’s how. -* [Assign Roles to Hosts](../../../deploy-manage/deploy/cloud-enterprise/assign-roles-to-hosts.md) - Make sure new hosts can be used for their intended purpose after you install ECE on them. -* [Enable Maintenance Mode](../../../deploy-manage/maintenance/ece/enable-maintenance-mode.md) - Perform administrative actions on allocators safely by putting them into maintenance mode first. -* [Move Nodes From Allocators](../../../deploy-manage/maintenance/ece/move-nodes-instances-from-allocators.md) - Moves all Elasticsearch clusters and Kibana instances to another allocator, so that the allocator is no longer used for handling user requests. -* [Delete Hosts](../../../deploy-manage/maintenance/ece/delete-ece-hosts.md) - Remove a host from your ECE installation, either because it is no longer needed or because it is faulty. -* [Perform Host Maintenance](../../../deploy-manage/maintenance/ece/perform-ece-hosts-maintenance.md) - Apply operating system patches and other maintenance to hosts safely without removing them from your ECE installation. -* [Manage Elastic Stack Versions](../../../deploy-manage/deploy/cloud-enterprise/manage-elastic-stack-versions.md) - View, add, or update versions of the Elastic Stack that are available on your ECE installation. -* [Upgrade Your Installation](../../../deploy-manage/upgrade/orchestrator/upgrade-cloud-enterprise.md) - A new version of Elastic Cloud Enterprise is available and you want to upgrade. Here’s how. - diff --git a/raw-migrated-files/cloud/cloud-enterprise/ece-api-console.md b/raw-migrated-files/cloud/cloud-enterprise/ece-api-console.md index 1fd58b7d5..c0d20fa1c 100644 --- a/raw-migrated-files/cloud/cloud-enterprise/ece-api-console.md +++ b/raw-migrated-files/cloud/cloud-enterprise/ece-api-console.md @@ -7,10 +7,10 @@ API console is intended for admin purposes. Avoid running normal workload like i :::: -You are unable to make Elastic Cloud Enterprise platform changes from the Elasticsearch API. 
If you want to work with the platform, check the [Elastic Cloud Enterprise RESTful API](asciidocalypse://docs/cloud/docs/reference/cloud/cloud-enterprise/restful-api.md). +You are unable to make Elastic Cloud Enterprise platform changes from the Elasticsearch API. If you want to work with the platform, check the [Elastic Cloud Enterprise RESTful API](asciidocalypse://docs/cloud/docs/reference/cloud-enterprise/restful-api.md). 1. [Log into the Cloud UI](../../../deploy-manage/deploy/cloud-enterprise/log-into-cloud-ui.md). -2. On the deployments page, select your deployment. +2. On the **Deployments** page, select your deployment. Narrow the list by name, ID, or choose from several other filters. To further define the list, use a combination of filters. diff --git a/raw-migrated-files/cloud/cloud-enterprise/ece-configuring-keystore.md b/raw-migrated-files/cloud/cloud-enterprise/ece-configuring-keystore.md index b21397ec0..1297b1ebf 100644 --- a/raw-migrated-files/cloud/cloud-enterprise/ece-configuring-keystore.md +++ b/raw-migrated-files/cloud/cloud-enterprise/ece-configuring-keystore.md @@ -14,7 +14,7 @@ There are three types of secrets that you can use: Add keys and secret values to the keystore. 1. [Log into the Cloud UI](../../../deploy-manage/deploy/cloud-enterprise/log-into-cloud-ui.md). -2. On the deployments page, select your deployment. +2. On the **Deployments** page, select your deployment. Narrow the list by name, ID, or choose from several other filters. To further define the list, use a combination of filters. @@ -35,7 +35,7 @@ Only some settings are designed to be read from the keystore. However, the keyst When your keys and secret values are no longer needed, delete them from the keystore. 1. [Log into the Cloud UI](../../../deploy-manage/deploy/cloud-enterprise/log-into-cloud-ui.md). -2. On the deployments page, select your deployment. +2. On the **Deployments** page, select your deployment. Narrow the list by name, ID, or choose from several other filters. To further define the list, use a combination of filters. diff --git a/raw-migrated-files/cloud/cloud-enterprise/ece-getting-started-node-js.md b/raw-migrated-files/cloud/cloud-enterprise/ece-getting-started-node-js.md index 91a0fb265..41e59a552 100644 --- a/raw-migrated-files/cloud/cloud-enterprise/ece-getting-started-node-js.md +++ b/raw-migrated-files/cloud/cloud-enterprise/ece-getting-started-node-js.md @@ -158,7 +158,7 @@ async function run() { run().catch(console.log) ``` -When using the [client.index](asciidocalypse://docs/elasticsearch-js/docs/reference/elasticsearch/elasticsearch-client-javascript-api/api-reference.md#_index) API, the request automatically creates the `game-of-thrones` index if it doesn’t already exist, as well as document IDs for each indexed document if they are not explicitly specified. +When using the [client.index](asciidocalypse://docs/elasticsearch-js/docs/reference/api-reference.md#_index) API, the request automatically creates the `game-of-thrones` index if it doesn’t already exist, as well as document IDs for each indexed document if they are not explicitly specified. ## Search and modify data [ece_search_and_modify_data] @@ -205,7 +205,7 @@ async function update() { update().catch(console.log) ``` -This [more comprehensive list of API examples](asciidocalypse://docs/elasticsearch-js/docs/reference/elasticsearch/elasticsearch-client-javascript-api/examples.md) includes bulk operations, checking the existence of documents, updating by query, deleting, scrolling, and SQL queries. 
To learn more, check the complete [API reference](asciidocalypse://docs/elasticsearch-js/docs/reference/elasticsearch/elasticsearch-client-javascript-api/api-reference.md). +This [more comprehensive list of API examples](asciidocalypse://docs/elasticsearch-js/docs/reference/examples.md) includes bulk operations, checking the existence of documents, updating by query, deleting, scrolling, and SQL queries. To learn more, check the complete [API reference](asciidocalypse://docs/elasticsearch-js/docs/reference/api-reference.md). ## Switch to API key authentication [ece_switch_to_api_key_authentication] @@ -294,5 +294,5 @@ Schema : When the example code was run an index mapping was created automatically. The field types were selected by {{es}} based on the content seen when the first record was ingested, and updated as new fields appeared in the data. It would be more efficient to specify the fields and field types in advance to optimize performance. Refer to the Elastic Common Schema documentation and Field Type documentation when you are designing the schema for your production use cases. Ingest -: For more advanced scenarios, this [bulk ingestion](asciidocalypse://docs/elasticsearch-js/docs/reference/elasticsearch/elasticsearch-client-javascript-api/bulk_examples.md) reference gives an example of the `bulk` API that makes it possible to perform multiple operations in a single call. This bulk example also explicitly specifies document IDs. If you have a lot of documents to index, using bulk to batch document operations is significantly faster than submitting requests individually. +: For more advanced scenarios, this [bulk ingestion](asciidocalypse://docs/elasticsearch-js/docs/reference/bulk_examples.md) reference gives an example of the `bulk` API that makes it possible to perform multiple operations in a single call. This bulk example also explicitly specifies document IDs. If you have a lot of documents to index, using bulk to batch document operations is significantly faster than submitting requests individually. diff --git a/raw-migrated-files/cloud/cloud-enterprise/ece-getting-started-python.md b/raw-migrated-files/cloud/cloud-enterprise/ece-getting-started-python.md index 01e4a6fcc..ac72d9da9 100644 --- a/raw-migrated-files/cloud/cloud-enterprise/ece-getting-started-python.md +++ b/raw-migrated-files/cloud/cloud-enterprise/ece-getting-started-python.md @@ -282,7 +282,7 @@ es.get(index='lord-of-the-rings', id='2EkAzngB_pyHD3p65UMt') 'birthplace': 'The Shire'}} ``` -For frequently used API calls with the Python client, check [Examples](asciidocalypse://docs/elasticsearch-py/docs/reference/elasticsearch/elasticsearch-client-python-api/examples.md). +For frequently used API calls with the Python client, check [Examples](asciidocalypse://docs/elasticsearch-py/docs/reference/examples.md). ## Switch to API key authentication [ece_switch_to_api_key_authentication_2] @@ -342,7 +342,7 @@ es = Elasticsearch( Check [Create API key API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-create-api-key) to learn more about API Keys and [Security privileges](../../../deploy-manage/users-roles/cluster-or-deployment-auth/elasticsearch-privileges.md) to understand which privileges are needed. If you are not sure what the right combination of privileges for your custom application is, you can enable [audit logging](../../../deploy-manage/monitor/logging-configuration/enabling-audit-logs.md) on {{es}} to find out what privileges are being used. 
To learn more about how logging works on {{ece}}, check [Monitoring Elastic Cloud deployment logs and metrics](https://www.elastic.co/blog/monitoring-elastic-cloud-deployment-logs-and-metrics). -For more information on refreshing an index, searching, updating, and deleting, check the [elasticsearch-py examples](asciidocalypse://docs/elasticsearch-py/docs/reference/elasticsearch/elasticsearch-client-python-api/examples.md). +For more information on refreshing an index, searching, updating, and deleting, check the [elasticsearch-py examples](asciidocalypse://docs/elasticsearch-py/docs/reference/examples.md). ### Best practices [ece_best_practices_2] @@ -357,5 +357,5 @@ Schema : When the example code is run, an index mapping is created automatically. The field types are selected by {{es}} based on the content seen when the first record was ingested, and updated as new fields appeared in the data. It would be more efficient to specify the fields and field types in advance to optimize performance. Refer to the Elastic Common Schema documentation and Field Type documentation when you design the schema for your production use cases. Ingest -: For more advanced scenarios, [Bulk helpers](asciidocalypse://docs/elasticsearch-py/docs/reference/elasticsearch/elasticsearch-client-python-api/client-helpers.md#bulk-helpers) gives examples for the `bulk` API that makes it possible to perform multiple operations in a single call. If you have a lot of documents to index, using bulk to batch document operations is significantly faster than submitting requests individually. +: For more advanced scenarios, [Bulk helpers](asciidocalypse://docs/elasticsearch-py/docs/reference/client-helpers.md#bulk-helpers) gives examples for the `bulk` API that makes it possible to perform multiple operations in a single call. If you have a lot of documents to index, using bulk to batch document operations is significantly faster than submitting requests individually. diff --git a/raw-migrated-files/cloud/cloud-enterprise/ece-getting-started-search-use-cases-beats-logstash.md b/raw-migrated-files/cloud/cloud-enterprise/ece-getting-started-search-use-cases-beats-logstash.md index 03be09943..ce0c668c9 100644 --- a/raw-migrated-files/cloud/cloud-enterprise/ece-getting-started-search-use-cases-beats-logstash.md +++ b/raw-migrated-files/cloud/cloud-enterprise/ece-getting-started-search-use-cases-beats-logstash.md @@ -62,7 +62,7 @@ If you have multiple servers with metrics data, repeat the following steps to co **About Metricbeat modules** -Metricbeat has [many modules](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-metricbeat/metricbeat-modules.md) available that collect common metrics. You can [configure additional modules](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-metricbeat/configuration-metricbeat.md) as needed. For this example we’re using Metricbeat’s default configuration, which has the [System module](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-metricbeat/metricbeat-module-system.md) enabled. The System module allows you to monitor servers with the default set of metrics: *cpu*, *load*, *memory*, *network*, *process*, *process_summary*, *socket_summary*, *filesystem*, *fsstat*, and *uptime*. +Metricbeat has [many modules](asciidocalypse://docs/beats/docs/reference/metricbeat/metricbeat-modules.md) available that collect common metrics. 
You can [configure additional modules](asciidocalypse://docs/beats/docs/reference/metricbeat/configuration-metricbeat.md) as needed. For this example we’re using Metricbeat’s default configuration, which has the [System module](asciidocalypse://docs/beats/docs/reference/metricbeat/metricbeat-module-system.md) enabled. The System module allows you to monitor servers with the default set of metrics: *cpu*, *load*, *memory*, *network*, *process*, *process_summary*, *socket_summary*, *filesystem*, *fsstat*, and *uptime*. **Load the Metricbeat Kibana dashboards** @@ -89,7 +89,7 @@ sudo ./metricbeat setup \ 1. Specify the Cloud ID of your Elastic Cloud Enterprise deployment. You can include or omit the `:` prefix at the beginning of the Cloud ID. Both versions work fine. Find your Cloud ID by going to the {{kib}} main menu and selecting Management > Integrations, and then selecting View deployment details. 2. Specify the username and password provided to you when creating the deployment. Make sure to keep the colon between ** and **. 3. The four lines related to `ssl` are only used when you have a self-signed certificate for your Elastic Cloud Enterprise proxy. If needed, specify the full path to the PEM formatted root certificate (Root CA) used for the Elastic Cloud Enterprise proxy. You can retrieve the certificate chain from your ECE system by following the instructions in [Get existing ECE security certificates](../../../deploy-manage/security/secure-your-elastic-cloud-enterprise-installation/manage-security-certificates.md#ece-existing-security-certificates). Save the final certificate in the chain to a file. In this command example, the file is named `elastic-ece-ca-cert.pem`.::::{important} -Depending on variables including the installation location, environment and local permissions, you might need to [change the ownership](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-libbeat/config-file-permissions.md) of the metricbeat.yml. +Depending on variables including the installation location, environment and local permissions, you might need to [change the ownership](asciidocalypse://docs/beats/docs/reference/libbeat/config-file-permissions.md) of the metricbeat.yml. You might encounter similar permissions hurdles as you work through multiple sections of this document. These permission requirements are there for a good reason, a security safeguard to prevent unauthorized access and modification of key Elastic files. @@ -138,7 +138,7 @@ The next step is to configure Filebeat to send operational data to Logstash. As **Enable the Filebeat system module** -Filebeat has [many modules](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-filebeat/filebeat-modules.md) available that collect common log types. You can [configure additional modules](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-filebeat/configuration-filebeat-modules.md) as needed. For this example we’re using Filebeat’s [System module](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-filebeat/filebeat-module-system.md). This module reads in the various system log files (with information including login successes or failures, sudo command usage, and other key usage details) based on the detected operating system. For this example, a Linux-based OS is used and Filebeat ingests logs from the */var/log/* folder. It’s important to verify that Filebeat is given permission to access your logs folder through standard file and folder permissions.
+Filebeat has [many modules](asciidocalypse://docs/beats/docs/reference/filebeat/filebeat-modules.md) available that collect common log types. You can [configure additional modules](asciidocalypse://docs/beats/docs/reference/filebeat/configuration-filebeat-modules.md) as needed. For this example we’re using Filebeat’s [System module](asciidocalypse://docs/beats/docs/reference/filebeat/filebeat-module-system.md). This module reads in the various system log files (with information including login successes or failures, sudo command usage, and other key usage details) based on the detected operating system. For this example, a Linux-based OS is used and Filebeat ingests logs from the */var/log/* folder. It’s important to verify that Filebeat is given permission to access your logs folder through standard file and folder permissions. 1. Go to */filebeat-/modules.d/* where ** is the directory where Filebeat is installed. 2. Filebeat requires at least one fileset to be enabled. In file */filebeat-/modules.d/system.yml.disabled*, under both `syslog` and `auth` set `enabled` to `true`: @@ -182,7 +182,7 @@ sudo ./filebeat setup \ 1. Specify the Cloud ID of your Elastic Cloud Enterprise deployment. You can include or omit the `:` prefix at the beginning of the Cloud ID. Both versions work fine. Find your Cloud ID by going to the {{kib}} main menu and selecting Management > Integrations, and then selecting View deployment details. 2. Specify the username and password provided to you when creating the deployment. Make sure to keep the colon between ** and **. 3. The four lines related to `ssl` are only needed if you are using self-signed certificates.::::{important} -Depending on variables including the installation location, environment, and local permissions, you might need to [change the ownership](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-libbeat/config-file-permissions.md) of the filebeat.yml. +Depending on variables including the installation location, environment, and local permissions, you might need to [change the ownership](asciidocalypse://docs/beats/docs/reference/libbeat/config-file-permissions.md) of the filebeat.yml. :::: @@ -245,7 +245,7 @@ Now the Filebeat and Metricbeat are set up, let’s configure a {{ls}} pipeline 1. {{ls}} listens for Beats input on the default port of 5044. Only one line is needed to do this. {{ls}} can handle input from many Beats of the same and also of varying types (Metricbeat, Filebeat, and others). 2. This sends output to the standard output, which displays through your command line interface. This plugin enables you to verify the data before you send it to {{es}}, in a later step. -3. Save the new *beats.conf* file in your Logstash folder. To learn more about the file format and options, check [{{ls}} Configuration Examples](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/config-examples.md). +3. Save the new *beats.conf* file in your Logstash folder. To learn more about the file format and options, check [{{ls}} Configuration Examples](asciidocalypse://docs/logstash/docs/reference/config-examples.md). ## Output {{ls}} data to stdout [ece-beats-logstash-stdout] @@ -388,7 +388,7 @@ In this section, you configure {{ls}} to send the Metricbeat and Filebeat data t ``` 1. Use the Cloud ID of your Elastic Cloud Enterprise deployment. You can include or omit the `:` prefix at the beginning of the Cloud ID. Both versions work fine. 
Find your Cloud ID by going to the {{kib}} main menu and selecting Management > Integrations, and then selecting View deployment details. - 2. the default usename is `elastic`. It is not recommended to use the `elastic` account for ingesting data as this is a superuser. We recommend using a user with reduced permissions, or an API Key with permissions specific to the indices or data streams that will be written to. Check the [Grant access to secured resources](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-filebeat/feature-roles.md) for information on the writer role and API Keys. Use the password provided when you created the deployment if using the `elastic` user, or the password used when creating a new ingest user with the roles specified in the [Grant access to secured resources](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-filebeat/feature-roles.md) documentation. + 2. the default usename is `elastic`. It is not recommended to use the `elastic` account for ingesting data as this is a superuser. We recommend using a user with reduced permissions, or an API Key with permissions specific to the indices or data streams that will be written to. Check the [Grant access to secured resources](asciidocalypse://docs/beats/docs/reference/filebeat/feature-roles.md) for information on the writer role and API Keys. Use the password provided when you created the deployment if using the `elastic` user, or the password used when creating a new ingest user with the roles specified in the [Grant access to secured resources](asciidocalypse://docs/beats/docs/reference/filebeat/feature-roles.md) documentation. 3. The cacert line is only needed if you are using a self-signed certificate. @@ -481,9 +481,9 @@ In this section, you configure {{ls}} to send the Metricbeat and Filebeat data t ::::{note} In this guide, you manually launch each of the Elastic stack applications through the command line interface. In production, you may prefer to configure {{ls}}, Metricbeat, and Filebeat to run as System Services. 
Check the following pages for the steps to configure each application to run as a service: -* [Running {{ls}} as a service on Debian or RPM](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/running-logstash.md) -* [Metricbeat and systemd](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-metricbeat/running-with-systemd.md) -* [Start filebeat](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-filebeat/filebeat-starting.md) +* [Running {{ls}} as a service on Debian or RPM](asciidocalypse://docs/logstash/docs/reference/running-logstash.md) +* [Metricbeat and systemd](asciidocalypse://docs/beats/docs/reference/metricbeat/running-with-systemd.md) +* [Start filebeat](asciidocalypse://docs/beats/docs/reference/filebeat/filebeat-starting.md) :::: diff --git a/raw-migrated-files/cloud/cloud-enterprise/ece-getting-started-search-use-cases-db-logstash.md b/raw-migrated-files/cloud/cloud-enterprise/ece-getting-started-search-use-cases-db-logstash.md index eacecd40b..95c1b5816 100644 --- a/raw-migrated-files/cloud/cloud-enterprise/ece-getting-started-search-use-cases-db-logstash.md +++ b/raw-migrated-files/cloud/cloud-enterprise/ece-getting-started-search-use-cases-db-logstash.md @@ -1,6 +1,6 @@ # Ingest data from a relational database into Elastic Cloud Enterprise [ece-getting-started-search-use-cases-db-logstash] -This guide explains how to ingest data from a relational database into Elastic Cloud Enterprise through [Logstash](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/index.md), using the Logstash [JDBC input plugin](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/plugins-inputs-jdbc.md). It demonstrates how Logstash can be used to efficiently copy records and to receive updates from a relational database, and then send them into {{es}} in an Elastic Cloud Enterprise deployment. +This guide explains how to ingest data from a relational database into Elastic Cloud Enterprise through [Logstash](asciidocalypse://docs/logstash/docs/reference/index.md), using the Logstash [JDBC input plugin](asciidocalypse://docs/logstash/docs/reference/plugins-inputs-jdbc.md). It demonstrates how Logstash can be used to efficiently copy records and to receive updates from a relational database, and then send them into {{es}} in an Elastic Cloud Enterprise deployment. The code and methods presented here have been tested with MySQL. They should work with other relational databases. @@ -189,13 +189,13 @@ Let’s set up a sample Logstash input pipeline to ingest data from your new JDB : The Logstash JDBC plugin does not come packaged with JDBC driver libraries. The JDBC driver library must be passed explicitly into the plugin using the `jdbc_driver_library` configuration option. tracking_column - : This parameter specifies the field `unix_ts_in_secs` that tracks the last document read by Logstash from MySQL, stored on disk in [logstash_jdbc_last_run](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/plugins-inputs-jdbc.md#plugins-inputs-jdbc-last_run_metadata_path). The parameter determines the starting value for documents that Logstash requests in the next iteration of its polling loop. The value stored in `logstash_jdbc_last_run` can be accessed in a SELECT statement as `sql_last_value`. 
+ : This parameter specifies the field `unix_ts_in_secs` that tracks the last document read by Logstash from MySQL, stored on disk in [logstash_jdbc_last_run](asciidocalypse://docs/logstash/docs/reference/plugins-inputs-jdbc.md#plugins-inputs-jdbc-last_run_metadata_path). The parameter determines the starting value for documents that Logstash requests in the next iteration of its polling loop. The value stored in `logstash_jdbc_last_run` can be accessed in a SELECT statement as `sql_last_value`. unix_ts_in_secs : The field generated by the SELECT statement, which contains the `modification_time` as a standard [Unix timestamp](https://en.wikipedia.org/wiki/Unix_time) (seconds since the epoch). The field is referenced by the `tracking column`. A Unix timestamp is used for tracking progress rather than a normal timestamp, as a normal timestamp may cause errors due to the complexity of correctly converting back and forth between UMT and the local timezone. sql_last_value - : This is a [built-in parameter](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/plugins-inputs-jdbc.md#_predefined_parameters) containing the starting point of the current iteration of the Logstash polling loop, and it is referenced in the SELECT statement line of the JDBC input configuration. This parameter is set to the most recent value of `unix_ts_in_secs`, which is read from `.logstash_jdbc_last_run`. This value is the starting point for documents returned by the MySQL query that is executed in the Logstash polling loop. Including this variable in the query guarantees that we’re not resending data that is already stored in Elasticsearch. + : This is a [built-in parameter](asciidocalypse://docs/logstash/docs/reference/plugins-inputs-jdbc.md#_predefined_parameters) containing the starting point of the current iteration of the Logstash polling loop, and it is referenced in the SELECT statement line of the JDBC input configuration. This parameter is set to the most recent value of `unix_ts_in_secs`, which is read from `.logstash_jdbc_last_run`. This value is the starting point for documents returned by the MySQL query that is executed in the Logstash polling loop. Including this variable in the query guarantees that we’re not resending data that is already stored in Elasticsearch. schedule : This uses cron syntax to specify how often Logstash should poll MySQL for changes. The specification `*/5 * * * * *` tells Logstash to contact MySQL every 5 seconds. Input from this plugin can be scheduled to run periodically according to a specific schedule. This scheduling syntax is powered by [rufus-scheduler](https://github.com/jmettraux/rufus-scheduler). The syntax is cron-like with some extensions specific to Rufus (for example, timezone support). @@ -286,7 +286,7 @@ In this section, we configure Logstash to send the MySQL data to Elasticsearch. ``` 1. Use the Cloud ID of your Elastic Cloud Enterprise deployment. You can include or omit the `:` prefix at the beginning of the Cloud ID. Both versions work fine. Find your Cloud ID by going to the {{kib}} main menu and selecting Management > Integrations, and then selecting View deployment details. - 2. the default username is `elastic`. It is not recommended to use the `elastic` account for ingesting data as this is a superuser. We recommend using a user with reduced permissions, or an API Key with permissions specific to the indices or data streams that will be written to. 
Check [Configuring security in Logstash](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/secure-connection.md) for information on roles and API Keys. Use the password provided when you created the deployment if using the `elastic` user, or the password used when creating a new ingest user with the roles specified in the [Configuring security in Logstash](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/secure-connection.md) documentation. + 2. the default username is `elastic`. It is not recommended to use the `elastic` account for ingesting data as this is a superuser. We recommend using a user with reduced permissions, or an API Key with permissions specific to the indices or data streams that will be written to. Check [Configuring security in Logstash](asciidocalypse://docs/logstash/docs/reference/secure-connection.md) for information on roles and API Keys. Use the password provided when you created the deployment if using the `elastic` user, or the password used when creating a new ingest user with the roles specified in the [Configuring security in Logstash](asciidocalypse://docs/logstash/docs/reference/secure-connection.md) documentation. 3. This line is only used when you have a self signed certificate for your Elastic Cloud Enterprise proxy. If needed, specify the full path to the PEM formatted root certificate (Root CA) used for the Elastic Cloud Enterprise proxy. You can retrieve the certificate chain from your ECE system by following the instructions in [Get existing ECE security certificates](../../../deploy-manage/security/secure-your-elastic-cloud-enterprise-installation/manage-security-certificates.md#ece-existing-security-certificates). Save the final certificate in the chain to a file. In this example, the file is named `elastic-ece-ca-cert.pem`. diff --git a/raw-migrated-files/cloud/cloud-enterprise/ece-getting-started-search-use-cases-node-logs.md b/raw-migrated-files/cloud/cloud-enterprise/ece-getting-started-search-use-cases-node-logs.md index 6f93dbf7b..60a66bd43 100644 --- a/raw-migrated-files/cloud/cloud-enterprise/ece-getting-started-search-use-cases-node-logs.md +++ b/raw-migrated-files/cloud/cloud-enterprise/ece-getting-started-search-use-cases-node-logs.md @@ -1,6 +1,6 @@ # Ingest logs from a Node.js web application using Filebeat [ece-getting-started-search-use-cases-node-logs] -This guide demonstrates how to ingest logs from a Node.js web application and deliver them securely into an Elastic Cloud Enterprise deployment. You’ll set up Filebeat to monitor a JSON-structured log file that has standard Elastic Common Schema (ECS) formatted fields, and you’ll then view real-time visualizations of the log events in Kibana as requests are made to the Node.js server. While Node.js is used for this example, this approach to monitoring log output is applicable across many client types. Check the list of [available ECS logging plugins](asciidocalypse://docs/ecs-logging/docs/reference/ecs/ecs-logging-overview/intro.md#_get_started). +This guide demonstrates how to ingest logs from a Node.js web application and deliver them securely into an Elastic Cloud Enterprise deployment. You’ll set up Filebeat to monitor a JSON-structured log file that has standard Elastic Common Schema (ECS) formatted fields, and you’ll then view real-time visualizations of the log events in Kibana as requests are made to the Node.js server. 
While Node.js is used for this example, this approach to monitoring log output is applicable across many client types. Check the list of [available ECS logging plugins](asciidocalypse://docs/ecs-logging/docs/reference/intro.md#_get_started). This guide presents: @@ -33,7 +33,7 @@ For the three following packages, you can create a working directory to install npm install winston ``` -* The [Elastic Common Schema (ECS) formatter](asciidocalypse://docs/ecs-logging-nodejs/docs/reference/ecs/ecs-logging-nodejs/winston.md) for the Node.js winston logger - This plugin formats your Node.js logs into an ECS structured JSON format ideally suited for ingestion into Elasticsearch. To install the ECS winston logger, run the following command in your working directory so that the package is installed in the same location as the winston package: +* The [Elastic Common Schema (ECS) formatter](asciidocalypse://docs/ecs-logging-nodejs/docs/reference/winston.md) for the Node.js winston logger - This plugin formats your Node.js logs into an ECS structured JSON format ideally suited for ingestion into Elasticsearch. To install the ECS winston logger, run the following command in your working directory so that the package is installed in the same location as the winston package: ```sh npm install @elastic/ecs-winston-format @@ -302,7 +302,7 @@ For this example, Filebeat uses the following four decoding options. json.expand_keys: true ``` -To learn more about these settings, check [JSON input configuration options](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-filebeat/filebeat-input-log.md#filebeat-input-log-config-json) and [Decode JSON fields](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-filebeat/decode-json-fields.md) in the Filebeat Reference. +To learn more about these settings, check [JSON input configuration options](asciidocalypse://docs/beats/docs/reference/filebeat/filebeat-input-log.md#filebeat-input-log-config-json) and [Decode JSON fields](asciidocalypse://docs/beats/docs/reference/filebeat/decode-json-fields.md) in the Filebeat Reference. Append the four JSON decoding options to the *Filebeat inputs* section of *filebeat.yml*, so that the section now looks like this: @@ -338,7 +338,7 @@ Filebeat comes with predefined assets for parsing, indexing, and visualizing you ``` ::::{important} -Depending on variables including the installation location, environment, and local permissions, you might need to [change the ownership](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-libbeat/config-file-permissions.md) of filebeat.yml. You can also try running the command as *root*: *sudo ./filebeat setup -e* or you can disable strict permission checks by running the command with the `--strict.perms=false` option. +Depending on variables including the installation location, environment, and local permissions, you might need to [change the ownership](asciidocalypse://docs/beats/docs/reference/libbeat/config-file-permissions.md) of filebeat.yml. You can also try running the command as *root*: *sudo ./filebeat setup -e* or you can disable strict permission checks by running the command with the `--strict.perms=false` option. :::: @@ -439,7 +439,7 @@ In this command: * The *-c* flag specifies the path to the Filebeat config file. 
::::{note} -Just in case the command doesn’t work as expected, check the [Filebeat quick start](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-filebeat/filebeat-installation-configuration.md#start) for the detailed command syntax for your operating system. You can also try running the command as *root*: *sudo ./filebeat -e -c filebeat.yml*. +Just in case the command doesn’t work as expected, check the [Filebeat quick start](asciidocalypse://docs/beats/docs/reference/filebeat/filebeat-installation-configuration.md#start) for the detailed command syntax for your operating system. You can also try running the command as *root*: *sudo ./filebeat -e -c filebeat.yml*. :::: @@ -522,5 +522,5 @@ You can add titles to the visualizations, resize and position them as you like, 2. As your final step, remember to stop Filebeat, the Node.js web server, and the client. Enter *CTRL + C* in the terminal window for each application to stop them. -You now know how to monitor log files from a Node.js web application, deliver the log event data securely into an Elastic Cloud Enterprise deployment, and then visualize the results in Kibana in real time. Consult the [Filebeat documentation](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-filebeat/filebeat-overview.md) to learn more about the ingestion and processing options available for your data. You can also explore our [documentation](https://www.elastic.co/guide/en/cloud-enterprise/current/ece-cloud-ingest-data.html#ece-ingest-methods) to learn all about working in Elastic Cloud Enterprise. +You now know how to monitor log files from a Node.js web application, deliver the log event data securely into an Elastic Cloud Enterprise deployment, and then visualize the results in Kibana in real time. Consult the [Filebeat documentation](asciidocalypse://docs/beats/docs/reference/filebeat/filebeat-overview.md) to learn more about the ingestion and processing options available for your data. You can also explore our [documentation](https://www.elastic.co/guide/en/cloud-enterprise/current/ece-cloud-ingest-data.html#ece-ingest-methods) to learn all about working in Elastic Cloud Enterprise. diff --git a/raw-migrated-files/cloud/cloud-enterprise/ece-getting-started-search-use-cases-python-logs.md b/raw-migrated-files/cloud/cloud-enterprise/ece-getting-started-search-use-cases-python-logs.md index 8e6bd2c79..b2f41b287 100644 --- a/raw-migrated-files/cloud/cloud-enterprise/ece-getting-started-search-use-cases-python-logs.md +++ b/raw-migrated-files/cloud/cloud-enterprise/ece-getting-started-search-use-cases-python-logs.md @@ -1,6 +1,6 @@ # Ingest logs from a Python application using Filebeat [ece-getting-started-search-use-cases-python-logs] -This guide demonstrates how to ingest logs from a Python application and deliver them securely into an Elastic Cloud Enterprise deployment. You’ll set up Filebeat to monitor a JSON-structured log file that has standard Elastic Common Schema (ECS) formatted fields, and you’ll then view real-time visualizations of the log events in {{kib}} as they occur. While Python is used for this example, this approach to monitoring log output is applicable across many client types. Check the list of [available ECS logging plugins](asciidocalypse://docs/ecs-logging/docs/reference/ecs/ecs-logging-overview/intro.md). +This guide demonstrates how to ingest logs from a Python application and deliver them securely into an Elastic Cloud Enterprise deployment. 
You’ll set up Filebeat to monitor a JSON-structured log file that has standard Elastic Common Schema (ECS) formatted fields, and you’ll then view real-time visualizations of the log events in {{kib}} as they occur. While Python is used for this example, this approach to monitoring log output is applicable across many client types. Check the list of [available ECS logging plugins](asciidocalypse://docs/ecs-logging/docs/reference/intro.md). You are going to learn how to: @@ -14,7 +14,7 @@ You are going to learn how to: ## Prerequisites [ece_prerequisites_2] -To complete these steps you need to have [Python](https://www.python.org/) installed on your system as well as the [Elastic Common Schema (ECS) logger](asciidocalypse://docs/ecs-logging-python/docs/reference/ecs/ecs-logging-python/installation.md) for the Python logging library. +To complete these steps you need to have [Python](https://www.python.org/) installed on your system as well as the [Elastic Common Schema (ECS) logger](asciidocalypse://docs/ecs-logging-python/docs/reference/installation.md) for the Python logging library. To install *ecs-logging-python*, run: @@ -99,7 +99,7 @@ In this step, you’ll create a Python script that generates logs in JSON format Having your logs written in a JSON format with ECS fields allows for easy parsing and analysis, and for standardization with other applications. A standard, easily parsible format becomes increasingly important as the volume and type of data captured in your logs expands over time. - Together with the standard fields included for each log entry is an extra *http.request.body.content* field. This extra field is there just to give you some additional, interesting data to work with, and also to demonstrate how you can add optional fields to your log data. Check the [ECS Field Reference](asciidocalypse://docs/ecs/docs/reference/ecs/ecs-field-reference.md) for the full list of available fields. + Together with the standard fields included for each log entry is an extra *http.request.body.content* field. This extra field is there just to give you some additional, interesting data to work with, and also to demonstrate how you can add optional fields to your log data. Check the [ECS Field Reference](asciidocalypse://docs/ecs/docs/reference/ecs-field-reference.md) for the full list of available fields. 2. Let’s give the Python script a test run. Open a terminal instance in the location where you saved *elvis.py* and run the following: @@ -193,7 +193,7 @@ For this example, Filebeat uses the following four decoding options. json.expand_keys: true ``` -To learn more about these settings, check [JSON input configuration options](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-filebeat/filebeat-input-log.md#filebeat-input-log-config-json) and [Decode JSON fields](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-filebeat/decode-json-fields.md) in the Filebeat Reference. +To learn more about these settings, check [JSON input configuration options](asciidocalypse://docs/beats/docs/reference/filebeat/filebeat-input-log.md#filebeat-input-log-config-json) and [Decode JSON fields](asciidocalypse://docs/beats/docs/reference/filebeat/decode-json-fields.md) in the Filebeat Reference. 
Append the four JSON decoding options to the *Filebeat inputs* section of *filebeat.yml*, so that the section now looks like this: @@ -229,7 +229,7 @@ Filebeat comes with predefined assets for parsing, indexing, and visualizing you ``` ::::{important} -Depending on variables including the installation location, environment, and local permissions, you might need to [change the ownership](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-libbeat/config-file-permissions.md) of filebeat.yml. You can also try running the command as *root*: *sudo ./filebeat setup -e* or you can disable strict permission checks by running the command with the `--strict.perms=false` option. +Depending on variables including the installation location, environment, and local permissions, you might need to [change the ownership](asciidocalypse://docs/beats/docs/reference/libbeat/config-file-permissions.md) of filebeat.yml. You can also try running the command as *root*: *sudo ./filebeat setup -e* or you can disable strict permission checks by running the command with the `--strict.perms=false` option. :::: @@ -335,7 +335,7 @@ In this command: * The *-c* flag specifies the path to the Filebeat config file. ::::{note} -Just in case the command doesn’t work as expected, check the [Filebeat quick start](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-filebeat/filebeat-installation-configuration.md#start) for the detailed command syntax for your operating system. You can also try running the command as *root*: *sudo ./filebeat -e -c filebeat.yml*. +Just in case the command doesn’t work as expected, check the [Filebeat quick start](asciidocalypse://docs/beats/docs/reference/filebeat/filebeat-installation-configuration.md#start) for the detailed command syntax for your operating system. You can also try running the command as *root*: *sudo ./filebeat -e -c filebeat.yml*. :::: @@ -413,5 +413,5 @@ You can add titles to the visualizations, resize and position them as you like, 2. As your final step, remember to stop Filebeat and the Python script. Enter *CTRL + C* in both your Filebeat terminal and in your `elvis.py` terminal. -You now know how to monitor log files from a Python application, deliver the log event data securely into an Elastic Cloud Enterprise deployment, and then visualize the results in Kibana in real time. Consult the [Filebeat documentation](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-filebeat/filebeat-overview.md) to learn more about the ingestion and processing options available for your data. You can also explore our [documentation](https://www.elastic.co/guide/en/cloud-enterprise/current/ece-cloud-ingest-data.html#ece-ingest-methods) to learn all about working in Elastic Cloud Enterprise. +You now know how to monitor log files from a Python application, deliver the log event data securely into an Elastic Cloud Enterprise deployment, and then visualize the results in Kibana in real time. Consult the [Filebeat documentation](asciidocalypse://docs/beats/docs/reference/filebeat/filebeat-overview.md) to learn more about the ingestion and processing options available for your data. You can also explore our [documentation](https://www.elastic.co/guide/en/cloud-enterprise/current/ece-cloud-ingest-data.html#ece-ingest-methods) to learn all about working in Elastic Cloud Enterprise. 
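The *Filebeat inputs* section that the Node.js and Python guides above tell you to extend is not visible in the surrounding diff context. Purely as an orientation aid, the following is a minimal sketch of what such a section might look like once the JSON decoding options are appended. The `log` input type, the log path, and the three companion `json.*` settings shown next to the `json.expand_keys` option quoted in the guides are assumptions for illustration, not a copy of the guides' exact configuration.

```yaml
# Minimal sketch only -- not the exact configuration from the guides above.
# The input type and path are illustrative assumptions; point the path at the
# ECS JSON log file that your Node.js or Python application writes.
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /path/to/your/ecs-json.log        # hypothetical path
  json.keys_under_root: true            # place decoded JSON keys at the top level of the event
  json.overwrite_keys: true             # let decoded fields win over Filebeat's own keys on conflict
  json.add_error_key: true              # record decoding failures under error.message
  json.expand_keys: true                # expand dotted keys such as http.request.body.content
```

With decoding configured this way, each log line is parsed as JSON and its ECS fields arrive in {{es}} as structured fields rather than as a single message string.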
diff --git a/raw-migrated-files/cloud/cloud-enterprise/ece-manage-enterprise-search-settings.md b/raw-migrated-files/cloud/cloud-enterprise/ece-manage-enterprise-search-settings.md index e8245f20f..461f2a4a6 100644 --- a/raw-migrated-files/cloud/cloud-enterprise/ece-manage-enterprise-search-settings.md +++ b/raw-migrated-files/cloud/cloud-enterprise/ece-manage-enterprise-search-settings.md @@ -11,7 +11,7 @@ Refer to the [Configuration settings reference](https://www.elastic.co/guide/en/ To add user settings: 1. [Log into the Cloud UI](../../../deploy-manage/deploy/cloud-enterprise/log-into-cloud-ui.md). -2. On the deployments page, select your deployment. +2. On the **Deployments** page, select your deployment. Narrow the list by name, ID, or choose from several other filters. To further define the list, use a combination of filters. diff --git a/raw-migrated-files/cloud/cloud-enterprise/ece-secure-clusters-kerberos.md b/raw-migrated-files/cloud/cloud-enterprise/ece-secure-clusters-kerberos.md deleted file mode 100644 index 913f67cb9..000000000 --- a/raw-migrated-files/cloud/cloud-enterprise/ece-secure-clusters-kerberos.md +++ /dev/null @@ -1,58 +0,0 @@ -# Secure your clusters with Kerberos [ece-secure-clusters-kerberos] - -You can secure your Elasticsearch clusters and Kibana instances in a deployment by using the Kerberos-5 protocol to authenticate users. - -::::{note} -The Kerberos credentials are valid against the deployment, not the ECE platform. -:::: - - - -## Before you begin [ece_before_you_begin_20] - -The steps in this section require an understanding of Kerberos. To learn more about Kerberos, check our documentation on [configuring Elasticsearch for Kerberos authentication](/deploy-manage/users-roles/cluster-or-deployment-auth/kerberos.md). - - -## Configure the cluster to use Kerberos [ece-configure-kerberos-settings] - -With a custom bundle containing the Kerberos files and changes to the cluster configuration, you can enforce user authentication through the Kerberos protocol. - -1. Create or use an existing deployment that includes a Kibana instance. -2. Create a [custom bundle](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch-plugins/cloud-enterprise/ece-add-plugins.md) that contains your `krb5.conf` and `keytab` files, and add it to your cluster. - - ::::{tip} - You should use these exact filenames for Elastic Cloud Enterprise to recognize the file in the bundle. - :::: - -3. Edit your cluster configuration, sometimes also referred to as the deployment plan, to define Kerberos settings as described in [Elasticsearch documentation](/deploy-manage/users-roles/cluster-or-deployment-auth/kerberos.md). - - ```sh - xpack.security.authc.realms.kerberos.cloud-krb: - order: 2 - keytab.path: es.keytab - remove_realm_name: false - ``` - -4. Update Kibana in the [user settings configuration](../../../deploy-manage/deploy/cloud-enterprise/edit-stack-settings.md) to use Kerberos as the authentication provider: - - ```sh - xpack.security.authc.providers: - kerberos.kerberos1: - order: 0 - ``` - - This configuration disables all other realms and only allows users to authenticate with Kerberos. If you wish to allow your native realm users to authenticate, you need to also enable the `basic` `provider` like this: - - ```sh - xpack.security.authc.providers: - kerberos.kerberos1: - order: 0 - description: "Log in with Kerberos" <1> - basic.basic1: - order: 1 - ``` - - 1. 
This arbitrary string defines how Kerberos login is titled in the Login Selector UI that is shown when you enable multiple authentication providers in Kibana. You can also configure the optional `icon` and `hint` settings for any authentication provider. - -5. Use the Kibana endpoint URL to log in. - diff --git a/raw-migrated-files/cloud/cloud-enterprise/ece-secure-clusters-oidc.md b/raw-migrated-files/cloud/cloud-enterprise/ece-secure-clusters-oidc.md deleted file mode 100644 index 70c5a3c38..000000000 --- a/raw-migrated-files/cloud/cloud-enterprise/ece-secure-clusters-oidc.md +++ /dev/null @@ -1,257 +0,0 @@ -# Secure your clusters with OpenID Connect [ece-secure-clusters-oidc] - -You can secure your deployment using OpenID Connect for single sign-on. OpenID Connect is an identity layer on top of the OAuth 2.0 protocol. The end user identity gets verified by an authorization server and basic profile information is sent back to the client. - -::::{note} -The OpenID Connect credentials are valid against the deployment, not the ECE platform. You can configure [role-based access control](../../../deploy-manage/users-roles/cloud-enterprise-orchestrator/manage-users-roles.md) for the platform separately. -:::: - - - -## Before you begin [ece_before_you_begin_19] - -To prepare for using OpenID Connect for authentication for deployments: - -* Create or use an existing deployment. Make note of the Kibana endpoint URL, it will be referenced as `` in the following steps. -* The steps in this section required a moderate understanding of [OpenID Connect](https://openid.net/specs/openid-connect-core-1_0.md#Authentication) in general and the Authorization Code Grant flow specifically. For more information about OpenID Connect and how it works with the Elastic Stack check: - - * Our [configuration guide for Elasticsearch](/deploy-manage/users-roles/cluster-or-deployment-auth/openid-connect.md#oidc-elasticsearch-authentication). - - - -## Configure the OpenID Connect Provider [ece-configure-oidc-provider] - -The OpenID *Connect Provider* (OP) is the entity in OpenID Connect that is responsible for authenticating the user and for granting the necessary tokens with the authentication and user information to be consumed by the *Relying Parties* (RP). - -In order for Elastic Cloud Enterprise (acting as an RP) to be able use your OpenID Connect Provider for authentication, a trust relationship needs to be established between the OP and the RP. In the OpenID Connect Provider, this means registering the RP as a client. - -The process for registering the Elastic Cloud Enterprise RP will be different from OP to OP and following the provider’s relevant documentation is prudent. The information for the RP that you commonly need to provide for registration are the following: - -`Relying Party Name` -: An arbitrary identifier for the relying party. Neither the specification nor our implementation impose any constraints on this value. - -`Redirect URI` -: This is the URI where the OP will redirect the user’s browser after authentication. The appropriate value for this is `/api/security/oidc/callback`. This can also be called the `Callback URI`. - -At the end of the registration process, the OP assigns a Client Identifier and a Client Secret for the RP (Elastic Cloud Enterprise) to use. Note these two values as they are used in the cluster configuration. 
- - -## Configure your cluster to use OpenID Connect [ece-secure-deployment-oidc] - -You’ll need to [add the client secret](../../../deploy-manage/users-roles/cluster-or-deployment-auth/openid-connect.md#ece-oidc-client-secret) to the keystore and then [update the Elasticsearch user settings](../../../deploy-manage/users-roles/cluster-or-deployment-auth/openid-connect.md#ece-oidc-user-settings) to refer to that secret and use the OpenID Connect realm. - - -### Configure the Client Secret [ece-oidc-client-secret] - -Configure the Client Secret that was assigned to the PR by the OP during registration to the Elasticsearch keystore. - -This is a sensitive setting, it won’t be stored in plaintext in the cluster configuration but rather as a secure setting. In order to do so, follow these steps: - -1. On the deployments page, select your deployment. - - Narrow the list by name, ID, or choose from several other filters. To further define the list, use a combination of filters. - -2. From your deployment menu, select **Security**. -3. Under the **Elasticsearch keystore** section, select **Add settings**. -4. On the **Create setting** window, select the secret **Type** to be `Single string`. -5. Set the **Setting name**` to `xpack.security.authc.realms.oidc..rp.client_secret` and add the Client Secret you received from the OP during registration in the `Secret` field. - - ::::{note} - `` refers to the name of the OpenID Connect Realm. You can select any name that contains alphanumeric characters, underscores and hyphens. Replace `` with the realm name you selected. - :::: - - - ::::{note} - After you configure the Client Secret, any attempt to restart the deployment will fail until you complete the rest of the configuration steps. If you wish to rollback the OpenID Connect related configuration effort, you need to remove the `xpack.security.authc.realms.oidc..rp.client_secret` that was just added by using the "remove" button by the setting name under `Security keys`. - :::: - -6. You must also edit your cluster configuration, sometimes also referred to as the deployment plan, in order to add the appropriate settings. - - -### Configure the user settings [ece-oidc-user-settings] - -The Elasticsearch cluster needs to be configured to use the OpenID Connect realm for user authentication and to map the applicable roles to the users. If you are using machine learning or a deployment with hot-warm architecture, you must include this OpenID Connect related configuration in the user settings section for each node type. - -1. [Update your Elasticsearch user settings](../../../deploy-manage/deploy/cloud-enterprise/edit-stack-settings.md) for the `oidc` realm and specify the relevant configuration: - - ```sh - xpack: - security: - authc: - realms: - oidc: - oidc1: <1> - order: 2 <2> - rp.client_id: "client-id" <3> - rp.response_type: "code" - rp.redirect_uri: "/api/security/oidc/callback" <4> - op.issuer: "" <5> - op.authorization_endpoint: "" <6> - op.token_endpoint: "" <7> - op.userinfo_endpoint: "" <8> - op.jwkset_path: "" <9> - claims.principal: sub <10> - claims.groups: "http://example.info/claims/groups" <11> - ``` - - 1. The `oidc` realm name: `cloud-oidc` is reserved for internal use only and can’t be used. Please select another name, as shown here. - 2. The order of the OpenID Connect realm in your authentication chain. Allowed values are between `2` and `100`. Set to `2` unless you plan on configuring multiple SSO realms for this cluster. - 3. 
This, usually opaque, arbitrary string, is the Client Identifier that was assigned to the Elastic Cloud Enterprise RP by the OP upon registration. - 4. Replace `` with the value noted in the previous step - 5. A url, used as a unique identifier for the OP. The value for this setting should be provided by your OpenID Connect Provider. - 6. The URL for the Authorization Endpoint in the OP. This is where the user’s browser will be redirected to start the authentication process. The value for this setting should be provided by your OpenID Connect Provider. - 7. The URL for the Token Endpoint in the OpenID Connect Provider. This is the endpoint where Elastic Cloud Enterprise will send a request to exchange the code for an ID Token, as part of the Authorization Code flow. The value for this setting should be provided by your OpenID Connect Provider. - 8. (Optional) The URL for the UserInfo Endpoint in the OpenID Connect Provider. This is the endpoint of the OP that can be queried to get further user information, if required. The value for this setting should be provided by your OpenID Connect Provider. - 9. The path to a file or an HTTPS URL pointing to a JSON Web Key Set with the key material that the OpenID Connect Provider uses for signing tokens and claims responses. Your OpenID Connect Provider should provide you with this file. - 10. Defines the OpenID Connect claim that is going to be mapped to the principal (username) of the authenticated user in Kibana. In this example, we map the value of the `sub` claim, but this is not a requirement, other claims can be used too. Check [the claims mapping documentation](/deploy-manage/users-roles/cluster-or-deployment-auth/openid-connect.md#oidc-claims-mapping) for details and available options. - 11. Defines the OpenID Connect claim that is going to be used for role mapping. Note that the value `"http://example.info/claims/groups"` that is used here, is an arbitrary example. Check [the claims mapping documentation](/deploy-manage/users-roles/cluster-or-deployment-auth/openid-connect.md#oidc-claims-mapping) for a very detailed description of how the claim mapping works and how can these be used for role mapping. The name of this claim should be determined by the configuration of your OpenID Connect Provider. NOTE: According to the OpenID Connect specification, the OP should also make their configuration available at a well known URL, which is the concatenation of their `Issuer` value with the `.well-known/openid-configuration` string. To configure the OpenID Connect realm, refer to the `https://op.org.com/.well-known/openid-configuration` documentation. - -2. By default, users authenticating through OpenID Connect have no roles assigned to them. For example, if you want all your users authenticating with OpenID Connect to get access to Kibana, issue the following request to Elasticsearch: - - ```sh - POST /_security/role_mapping/CLOUD_OIDC_TO_KIBANA_ADMIN <1> - { - "enabled": true, - "roles": [ "kibana_admin" ], <2> - "rules": { <3> - "field": { "realm.name": "oidc-realm-name" } <4> - }, - "metadata": { "version": 1 } - } - ``` - - 1. The name of the new role mapping. - 2. The role mapped to the users. - 3. The fields to match against. - 4. The name of the OpenID Connect realm. This needs to be the same value as the one used in the cluster configuration. - -3. 
Update Kibana in the [advanced configuration editor](../../../deploy-manage/deploy/cloud-enterprise/advanced-cluster-configuration.md) to use OpenID Connect as the authentication provider: - - ```sh - xpack.security.authc.providers: - oidc.oidc1: - order: 0 - realm: oidc-realm-name <1> - ``` - - 1. The name of the OpenID Connect realm. This needs to be the same value as the one used in the cluster configuration. - - - This configuration disables all other realms and only allows users to authenticate with OpenID Connect. If you wish to allow your native realm users to authenticate, you need to also enable the `basic` `provider` like this: - - ```sh - xpack.security.authc.providers: - oidc.oidc1: - order: 0 - realm: oidc-realm-name - description: "Log in with my OpenID Connect" <1> - basic.basic1: - order: 1 - ``` - - 1. This arbitrary string defines how OpenID Connect login is titled in the Login Selector UI that is shown when you enable multiple authentication providers in Kibana. If you have a Kibana instance, you can also configure the optional `icon` and `hint` settings for any authentication provider. - -4. Optional: If your OpenID Connect Provider doesn’t publish its JWKS at an https URL, or if you want to use a local copy, you can upload the JWKS as a file. - - 1. Prepare a ZIP file with a [custom bundle](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch-plugins/cloud-enterprise/ece-add-plugins.md) that contains your OpenID Connect Provider’s JWKS file (`op_jwks.json`) inside of an `oidc` folder. - - This bundle allows all Elasticsearch containers to access the metadata file. - - 2. Update your Elasticsearch cluster configuration using the [advanced configuration editor](../../../deploy-manage/deploy/cloud-enterprise/advanced-cluster-configuration.md) to use the bundle you prepared in the previous step. You need to modify the `user_bundles` JSON attribute similar to the following example snippet: - - ```sh - { - "cluster_name": "REPLACE_WITH_YOUR_CLUSTER_NAME", - "plan": { - - ... - - "elasticsearch": { - "version": "8.13.1", - "user_bundles": [ - { - "name": "oidc-keys", - "url": "https://www.MYURL.com/oidc-keys.zip", - "elasticsearch_version": "8.*" - } - ] - } - } - ``` - - ::::{note} - The URLs that point to the ZIP file containing the bundle must be accessible to the deployment. Custom bundles are unzipped under the path `/app/config/BUNDLE_DIRECTORY_STRUCTURE`, where `BUNDLE_DIRECTORY_STRUCTURE` is the directory structure in the ZIP file. Make sure to save the file location where custom bundles get unzipped, as you will need it in the next step. - :::: - - - In our example, the OpenID Connect Provider JWK set file will be located in the path `/app/config/oidc/op_jwks.json`: - - ```sh - $ tree . - . - └── oidc - └── op_jwks.json - ``` - - 3. Adjust your `oidc` realm configuration accordingly: - - - -## Configure SSL [ece-oidc-ssl-configuration] - -OpenID Connect depends on TLS to provider security properties such as encryption in transit and endpoint authentication. The RP is required to establish back-channel communication with the OP in order to exchange the code for an ID Token during the Authorization code grant flow and in order to get additional user information from the UserInfo endpoint. As such, it is important that Elastic Cloud Enterprise can validate and trust the server certificate that the OP uses for TLS. 
Since the system truststore is used for the client context of outgoing https connections, if your OP is using a certificate from a trusted CA, no additional configuration is needed. - -However, if your OP uses a certificate that is issued for instance, by a CA used only in your Organization, you must configure Elastic Cloud Enterprise to trust that CA. - -1. Prepare a ZIP file with a [custom bundle](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch-plugins/cloud-enterprise/ece-add-plugins.md) that contains the CA certificate (`company-ca.pem`) that signed the certificate your OpenID Connect Provider uses for TLS inside of an `oidc-tls` folder -2. Update your Elasticsearch cluster configuration using the [advanced configuration editor](../../../deploy-manage/deploy/cloud-enterprise/advanced-cluster-configuration.md) to use the bundle you prepared in the previous step. You need to modify the `user_bundles` JSON attribute similar to the following example snippet: - - ```sh - { - "cluster_name": "REPLACE_WITH_YOUR_CLUSTER_NAME", - "plan": { - - ... - - "elasticsearch": { - "version": "8.13.1", - "user_bundles": [ - { - "name": "oidc-tls-ca", - "url": "https://www.MYURL.com/oidc-tls-ca.zip", - "elasticsearch_version": "8.*" - } - ] - } - } - ``` - - ::::{note} - The URLs that point to the ZIP file containing the bundle must be accessible to the deployment. Custom bundles are unzipped under the path `/app/config/BUNDLE_DIRECTORY_STRUCTURE`, where `BUNDLE_DIRECTORY_STRUCTURE` is the directory structure in the ZIP file. Make sure to save the file location where custom bundles get unzipped, as you will need it in the next step. - :::: - - - In our example, the CA certificate file will be located in the path `/app/config/oidc-tls/company-ca.pem`: - - ```sh - $ tree . - . - └── oidc-tls - └── company-ca.pem - ``` - -3. Adjust your `oidc` realm configuration accordingly: - - - - - -## Optional Settings [ece-oidc-optional-settings] - -The following optional oidc realm settings are supported and can be set if needed: - -* `op.endsession_endpoint` The URL to the End Session Endpoint in the OpenID Connect Provider. This is the endpoint where the user’s browser will be redirected after local logout, if the realm is configured for RP initiated Single Logout and the OP supports it. The value for this setting should be provided by your OpenID Connect Provider. -* `rp.post_logout_redirect_uri` The Redirect URL where the OpenID Connect Provider should redirect the user after a successful Single Logout. This should be set to a value that will not trigger a new OpenID Connect Authentication, `/security/logged_out` is a good choice for this parameter. -* `rp.signature_algorithm` The signature algorithm that will be used by {{es}} in order to verify the signature of the ID tokens it will receive from the OpenID Connect Provider. Defaults to `RSA256`. -* `rp.requested_scopes` The scope values that will be requested by the OpenID Connect Provider as part of the Authentication Request. Defaults to `openid`, which is the only required scope for authentication. If your use case requires that you receive additional claims, you might need to request additional scopes, one of `profile`, `email`, `address`, `phone`. Note that `openid` should always be included in the list of requested scopes. 
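-
-If you are unsure which values to use for the endpoint-related settings above, most OPs publish them in the discovery document at the `.well-known/openid-configuration` URL mentioned earlier. The following is a minimal sketch, assuming a hypothetical issuer `https://op.example.org` and that `curl` and `jq` are available; replace the issuer with the `op.issuer` value of your own provider.
-
-```sh
-# Query the OP discovery document (hypothetical issuer URL) and print the fields
-# that map to the realm settings described in this guide.
-curl -s "https://op.example.org/.well-known/openid-configuration" | \
-  jq '{issuer, authorization_endpoint, token_endpoint, userinfo_endpoint, jwks_uri, end_session_endpoint}'
-# end_session_endpoint is optional and may be null if the OP does not support
-# RP-initiated Single Logout.
-```
-
-The `jwks_uri` value returned here is what you would use for `op.jwkset_path` when you reference the JWKS over HTTPS instead of uploading it as a bundle.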
- diff --git a/raw-migrated-files/cloud/cloud-enterprise/ece-securing-clusters-JWT.md b/raw-migrated-files/cloud/cloud-enterprise/ece-securing-clusters-JWT.md deleted file mode 100644 index f3382884b..000000000 --- a/raw-migrated-files/cloud/cloud-enterprise/ece-securing-clusters-JWT.md +++ /dev/null @@ -1,140 +0,0 @@ -# Secure your clusters with JWT [ece-securing-clusters-JWT] - -These steps show how you can secure your Elasticsearch clusters instances in a deployment by using a JSON Web Token (JWT) realm for authentication. - -::::{note} -The JWT credentials are valid against the deployment, not the ECE platform. You can configure [role-based access control](../../../deploy-manage/users-roles/cloud-enterprise-orchestrator/manage-users-roles.md) for the platform separately. -:::: - - - -### Before you begin [ece_before_you_begin_21] - -Elastic Cloud Enterprise supports JWT of ID Token format with Elastic Stack version 8.2 and later. Support for JWT of certain access token format is available since 8.7. - - -### Configure your 8.2 or above cluster to use JWT of ID Token format [ece_configure_your_8_2_or_above_cluster_to_use_jwt_of_id_token_format] - -```sh -xpack: - security: - authc: - realms: - jwt: <1> - jwt-realm-name: <2> - order: 2 <3> - client_authentication.type: "shared_secret" <4> - allowed_signature_algorithms: "HS256,HS384,HS512,RS256,RS384,RS512,ES256,ES384,ES512,PS256,PS384,PS512" <5> - allowed_issuer: "issuer1" <6> - allowed_audiences: "elasticsearch1,elasticsearch2" <7> - claims.principal: "sub" <8> - claims.groups: "groups" <9> -``` - -1. Specifies the authentication realm service. -2. Defines the JWT realm name. -3. The order of the JWT realm in your authentication chain. Allowed values are between `2` and `100`, inclusive. -4. Defines the client authenticate type. -5. Defines the JWT `alg` header values allowed by the realm. -6. Defines the JWT `iss` claim value allowed by the realm. -7. Defines the JWT `aud` claim values allowed by the realm. -8. Defines the JWT claim name used for the principal (username). No default. -9. Defines the JWT claim name used for the groups. No default. - - -By default, users authenticating through JWT have no roles assigned to them. If you want all users in the group `elasticadmins` in your identity provider to be assigned the `superuser` role in your Elasticsearch cluster, issue the following request to Elasticsearch: - -```sh -POST /_security/role_mapping/CLOUD_JWT_ELASTICADMIN_TO_SUPERUSER <1> -{ - "enabled": true, - "roles": [ "superuser" ], <2> - "rules": { "all" : [ <3> - { "field": { "realm.name": "jwt-realm-name" } }, <4> - { "field": { "groups": "elasticadmins" } } - ]}, - "metadata": { "version": 1 } -} -``` - -1. The mapping name. -2. The Elastic Stack role to map to. -3. A rule specifying the JWT role to map from. -4. `realm.name` can be any string containing only alphanumeric characters, underscores, and hyphens. - - -::::{note} -In order to use the field `groups` in the mapping rule, you need to have mapped the JWT Attribute that conveys the group membership to `claims.groups` in the previous step. 
-:::: - - - -### Configure your 8.7 or above cluster to use JWT of access token format [ece_configure_your_8_7_or_above_cluster_to_use_jwt_of_access_token_format] - -```sh -xpack: - security: - authc: - realms: - jwt: - jwt-realm-name: - order: 2 - token_type: "access_token" <1> - client_authentication.type: "shared_secret" - allowed_signature_algorithms: [ "RS256", "HS256" ] - allowed_subjects: [ "123456-compute@developer.example.com" ] <2> - allowed_issuer: "issuer1" - allowed_audiences: [ "elasticsearch1", "elasticsearch2" ] - required_claims: <3> - token_use: "access" - fallback_claims.sub: "client_id" <4> - fallback_claims.aud: "scope" <5> - claims.principal: "sub" <6> - claims.groups: "groups" -``` - -1. Specifies token type accepted by this JWT realm -2. Specifies subjects allowed by the realm. This setting is mandatory for `access_token` JWT realms. -3. Additional claims required for successful authentication. The claim name can be any valid variable names and the claim values must be either string or array of strings. -4. The name of the JWT claim to extract the subject information if the `sub` claim does not exist. This setting is only available for `access_token` JWT realms. -5. The name of the JWT claim to extract the audiences information if the `aud` claim does not exist. This setting is only available for `access_token` JWT realms. -6. Since the fallback claim for `sub` is defined as `client_id`, the principal will also be extracted from `client_id` if the `sub` claim does not exist - - -::::{note} -Refer to [JWT authentication documentation](/deploy-manage/users-roles/cluster-or-deployment-auth/jwt.md) for more details and examples. -:::: - - - -## Update the trust settings of a deployment [ece_update_the_trust_settings_of_a_deployment] - -A deployment can be configured to trust all, specific, or no deployments in the same ECE environment, other remote ECE environments, {{ecloud}}, or self-managed environments. - -This can be done in the Security page of your deployment: - -1. [Log into the Cloud UI](../../../deploy-manage/deploy/cloud-enterprise/log-into-cloud-ui.md). -2. On the deployments page, select your deployment. - - Narrow the list by name, ID, or choose from several other filters. To further define the list, use a combination of filters. - -3. From the **Security** menu, find the **Trust Management** section. - -:::{image} ../../../images/cloud-enterprise-ce-deployment-trusted-environments.png -:alt: Trusted Environments at the Deployment Level -:class: screenshot -::: - -The page shows a list of all the deployments that this deployment trusts, grouped by environment. Initially only the **Local Environment** appears, which represents the current ECE environment, but you can trust deployments in [other ECE environments](../../../deploy-manage/remote-clusters/ece-remote-cluster-other-ece.md#ece-trust-remote-environments), in [{{ecloud}}](../../../deploy-manage/remote-clusters/ece-remote-cluster-ece-ess.md#ece-trust-ec), or any [self-managed environment](../../../deploy-manage/remote-clusters/ece-remote-cluster-self-managed.md#ece-trust-self-managed). - -``` -page, under *Security* > *Trust Management*: -``` -1. Select **Add trusted environment** to configure trust with deployments in another ECE environment whose trust relationship has been created in the previous step. -2. For each trusted ECE environment you can edit the trust level to trust all deployments or just specific ones. 
For the specific ones option, you can introduce a list of Elasticsearch cluster IDs to trust from that ECE environment. The Elasticsearch `Cluster ID` can be found in the deployment overview page under **Applications**. - -:::{image} ../../../images/cloud-enterprise-ce-deployment-trusted-environments.png -:alt: Trusted Environments at the Deployment Level -:class: screenshot -::: - diff --git a/raw-migrated-files/cloud/cloud-enterprise/ece-securing-clusters-SAML.md b/raw-migrated-files/cloud/cloud-enterprise/ece-securing-clusters-SAML.md deleted file mode 100644 index 7a2a19aaf..000000000 --- a/raw-migrated-files/cloud/cloud-enterprise/ece-securing-clusters-SAML.md +++ /dev/null @@ -1,195 +0,0 @@ -# Secure your clusters with SAML [ece-securing-clusters-SAML] - -These steps show how you can secure your Elasticsearch clusters and Kibana instances in a deployment by using a Security Assertion Markup Language (SAML) identity provider (IdP) for cross-domain, single sign-on authentication. - -::::{note} -The SAML credentials are valid against the deployment, not the ECE platform. You can configure [role-based access control](../../../deploy-manage/users-roles/cloud-enterprise-orchestrator/manage-users-roles.md) for the platform separately. -:::: - - - -### Configure your 8.0 or above cluster to use SAML [ece_configure_your_8_0_or_above_cluster_to_use_saml] - -You must edit your cluster configuration, sometimes also referred to as the deployment plan, to point to the SAML IdP before you can complete the configuration in Kibana. If you are using machine learning or a deployment with hot-warm architecture, you must include this SAML IdP configuration in the user settings section for each node type. - -1. Create or use an existing deployment that includes a Kibana instance. -2. Copy the Kibana endpoint URL. -3. $$$step-3$$$[Update your Elasticsearch user settings](../../../deploy-manage/deploy/cloud-enterprise/edit-stack-settings.md) for the `saml` realm and specify your IdP provider configuration: - - ```sh - xpack: - security: - authc: - realms: - saml: <1> - saml-realm-name: <2> - order: 2 <3> - attributes.principal: "nameid:persistent" <4> - attributes.groups: "groups" <5> - idp.metadata.path: "" <6> - idp.entity_id: "" <7> - sp.entity_id: "KIBANA_ENDPOINT_URL/" <8> - sp.acs: "KIBANA_ENDPOINT_URL/api/security/saml/callback" - sp.logout: "KIBANA_ENDPOINT_URL/logout" - ``` - - 1. Specifies the authentication realm service. - 2. Defines the SAML realm name. The SAML realm name can only contain alphanumeric characters, underscores, and hyphens. - 3. The order of the SAML realm in your authentication chain. Allowed values are between `2` and `100`. Set to `2` unless you plan on configuring multiple SSO realms for this cluster. - 4. Defines the SAML attribute that is going to be mapped to the principal (username) of the authenticated user in Kibana. In this non-normative example, `nameid:persistent` maps the `NameID` with the `urn:oasis:names:tc:SAML:2.0:nameid-format:persistent` format from the Subject of the SAML Assertion. You can use any SAML attribute that carries the necessary value for your use case in this setting, such as `uid` or `mail`. Refer to [the attribute mapping documentation](/deploy-manage/users-roles/cluster-or-deployment-auth/saml.md#saml-attributes-mapping) for details and available options. - 5. Defines the SAML attribute used for role mapping when configured in Kibana. Common choices are `groups` or `roles`. 
The values for both `attributes.principal` and `attributes.groups` depend on the IdP provider, so be sure to review their documentation. Refer to [the attribute mapping documentation](/deploy-manage/users-roles/cluster-or-deployment-auth/saml.md#saml-attributes-mapping) for details and available options. - 6. The file path or the HTTPS URL where your IdP metadata is available, such as `https://idpurl.com/sso/saml/metadata`. If you configure a URL you need to make ensure that your Elasticsearch cluster can access it. - 7. The SAML EntityID of your IdP. This can be read from the configuration page of the IdP, or its SAML metadata, such as `https://idpurl.com/entity_id`. - 8. Replace `KIBANA_ENDPOINT_URL` with the one noted in the previous step, such as `sp.entity_id: https://eddac6b924f5450c91e6ecc6d247b514.us-east-1.aws.found.io:443/` including the slash at the end. - -4. By default, users authenticating through SAML have no roles assigned to them. For example, if you want all your users authenticating with SAML to get access to Kibana, issue the following request to Elasticsearch: - - ```sh - POST /_security/role_mapping/CLOUD_SAML_TO_KIBANA_ADMIN <1> - { - "enabled": true, - "roles": [ "kibana_admin" ], <2> - "rules": { <3> - "field": { "realm.name": "saml-realm-name" } <4> - }, - "metadata": { "version": 1 } - } - ``` - - 1. The mapping name. - 2. The Elastic Stack role to map to. - 3. A rule specifying the SAML role to map from. - 4. `realm.name` can be any string containing only alphanumeric characters, underscores, and hyphens. - -5. Alternatively, if you want the users that belong to the group `elasticadmins` in your identity provider to be assigned the `superuser` role in your Elasticsearch cluster, issue the following request to Elasticsearch: - - ```sh - POST /_security/role_mapping/CLOUD_SAML_ELASTICADMIN_TO_SUPERUSER <1> - { - "enabled": true, - "roles": [ "superuser" ], <2> - "rules": { "all" : [ <3> - { "field": { "realm.name": "saml-realm-name" } }, <4> - { "field": { "groups": "elasticadmins" } } - ]}, - "metadata": { "version": 1 } - } - ``` - - 1. The mapping name. - 2. The Elastic Stack role to map to. - 3. A rule specifying the SAML role to map from. - 4. `realm.name` can be any string containing only alphanumeric characters, underscores, and hyphens. - - - ::::{note} - In order to use the field `groups` in the mapping rule, you need to have mapped the SAML Attribute that conveys the group membership to `attributes.groups` in the previous step. - :::: - -6. Update Kibana in the [user settings configuration](../../../deploy-manage/deploy/cloud-enterprise/edit-stack-settings.md) to use SAML as the authentication provider: - - ```sh - xpack.security.authc.providers: - saml.saml1: - order: 0 - realm: saml-realm-name <1> - ``` - - 1. The name of the SAML realm that you have configured earlier, for instance `saml-realm-name`. The SAML realm name can only contain alphanumeric characters, underscores, and hyphens. - - - This configuration disables all other realms and only allows users to authenticate with SAML. If you wish to allow your native realm users to authenticate, you need to also enable the `basic` `provider` like this: - - ```sh - xpack.security.authc.providers: - saml.saml1: - order: 0 - realm: saml-realm-name - description: "Log in with my SAML" <1> - basic.basic1: - order: 1 - ``` - - 1. This arbitrary string defines how SAML login is titled in the Login Selector UI that is shown when you enable multiple authentication providers in Kibana. 
You can also configure the optional `icon` and `hint` settings for any authentication provider. - -7. Optional: Generate SAML metadata for the Service Provider. - - The SAML 2.0 specification provides a mechanism for Service Providers to describe their capabilities and configuration using a metadata file. If your SAML Identity Provider requires or allows you to configure it to trust the Elastic Stack Service Provider through the use of a metadata file, you can generate the SAML metadata by issuing the following request to Elasticsearch: - - ```console - GET /_security/saml/metadata/realm_name <1> - ``` - - 1. The name of the SAML realm in Elasticsearch. - - - You can generate the SAML metadata by issuing the API request to Elasticsearch and storing metadata as an XML file using tools like `jq`. - - The following command, for example, generates the metadata for the SAML realm `saml1` and saves it to `metadata.xml` file: - - ```console - curl -X GET -H "Content-Type: application/json" -u user_name:password https://:443/_security/saml/metadata/saml1 <1> - |jq -r '.[]' > metadata.xml - ``` - - 1. The elasticsearch endpoint for the given deployment where the `saml1` realm is configured. - -8. Optional: If your Identity Provider doesn’t publish its SAML metadata at an HTTP URL, or if your Elasticsearch cluster cannot reach that URL, you can upload the SAML metadata as a file. - - 1. Prepare a ZIP file with a [custom bundle](../../../solutions/search/full-text/search-with-synonyms.md) that contains your Identity Provider’s metadata (`metadata.xml`) inside of a `saml` folder. - - This bundle allows all Elasticsearch containers to access the metadata file. - - 2. Update your Elasticsearch cluster configuration using the [advanced configuration editor](../../../deploy-manage/deploy/cloud-enterprise/advanced-cluster-configuration.md) to use the bundle you prepared in the previous step. You need to modify the `user_bundles` JSON attribute similar to the following example snippet: - - ```sh - { - "cluster_name": "REPLACE_WITH_YOUR_CLUSTER_NAME", - "plan": { - - ... - - "elasticsearch": { - "version": "8.13.1", - "user_bundles": [ - { - "name": "saml-metadata", - "url": "https://www.MYURL.com/saml-metadata.zip", - "elasticsearch_version": "8.13.1" - } - ] - } - } - ``` - - ::::{note} - The URLs that point to the ZIP file containing the bundle must be accessible to the deployment. Custom bundles are unzipped under the path `/app/config/BUNDLE_DIRECTORY_STRUCTURE`, where `BUNDLE_DIRECTORY_STRUCTURE` is the directory structure in the ZIP file. Make sure to save the file location where custom bundles get unzipped, as you will need it in the next step. - :::: - - - In our example, the SAML metadata file will be located in the path `/app/config/saml/metadata.xml`: - - ```sh - $ tree . - . - └── saml - └── metadata.xml - ``` - - 3. Adjust your `saml` realm configuration accordingly: - - ```sh - idp.metadata.path: /app/config/saml/metadata.xml <1> - ``` - - 1. The path to the SAML metadata file that was uploaded. - -9. Use the Kibana endpoint URL to log in. - - -## Configure your 7.x cluster to use SAML [ece-7x-saml] - -For 7.x deployments, the instructions are similar to those for 8.x, but your Elasticsearch request should use `POST /_security/role_mapping/CLOUD_SAML_TO_KIBANA_ADMIN` (for Step 4) or `POST /_security/role_mapping/CLOUD_SAML_ELASTICADMIN_TO_SUPERUSER` (for Step 5). - -All of the other steps are the same. 
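-
-If you need to prepare the metadata bundle from the optional step above, the following shell sketch shows one way to lay out and package the ZIP file. The folder and archive names match the example; the path to your downloaded IdP metadata and the location where you host the ZIP are placeholders that you must adapt.
-
-```sh
-# Place the IdP metadata in a `saml` folder so that it is extracted to
-# /app/config/saml/metadata.xml inside the Elasticsearch containers.
-mkdir -p saml
-cp /path/to/downloaded/idp-metadata.xml saml/metadata.xml
-
-# Package the folder; the directory structure inside the ZIP is preserved.
-zip -r saml-metadata.zip saml
-
-# Verify the layout before hosting the file at an HTTPS URL that the
-# deployment can reach (the `url` value in the user_bundles attribute).
-unzip -l saml-metadata.zip
-```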
- diff --git a/raw-migrated-files/cloud/cloud-enterprise/ece-securing-clusters-ad.md b/raw-migrated-files/cloud/cloud-enterprise/ece-securing-clusters-ad.md deleted file mode 100644 index 9366cae24..000000000 --- a/raw-migrated-files/cloud/cloud-enterprise/ece-securing-clusters-ad.md +++ /dev/null @@ -1,333 +0,0 @@ -# Secure your clusters with Active Directory [ece-securing-clusters-ad] - -These steps show how you can secure your {{es}} clusters and Kibana instances with the Lightweight Directory Access Protocol (LDAP) using an Active Directory. - - -## Before you begin [ece_before_you_begin_18] - -To learn more about how securing {{es}} clusters with Active Directory works, check [Active Directory user authentication](/deploy-manage/users-roles/cluster-or-deployment-auth/active-directory.md). - -::::{note} -The AD credentials are valid against the deployment, not the ECE platform. You can configure [role-based access control](../../../deploy-manage/users-roles/cloud-enterprise-orchestrator/manage-users-roles.md) for the platform separately. -:::: - - - -## Configure authentication with Active Directory [ece-securing-clusters-ad-configuration] - -You can configure the deployment to authenticate users by communicating with an Active Directory Domain Controller. To integrate with Active Directory, you need to configure an `active_directory` realm and map Active Directory groups to user roles in {{es}}. - -Contrary to the `ldap` realm, the `active_directory` realm only supports a user search mode, but you can choose whether to use a bind user. - - -### Configure an Active Directory realm without a bind user [ece-ad-configuration-without-bind-user] - -The Active Directory realm authenticates users using an LDAP bind request. By default, all LDAP operations run as the authenticated user if you don’t specify a `bind_dn`. Alternatively, you can choose to [configure your realm with a bind user](../../../deploy-manage/users-roles/cluster-or-deployment-auth/active-directory.md#ece-ad-configuration-with-bind-user). - -1. [Add your user settings](../../../deploy-manage/deploy/cloud-enterprise/edit-stack-settings.md) for the `active_directory` realm as follows: - - ```yaml - xpack: - security: - authc: - realms: - active_directory: - my_ad: - order: 2 <1> - domain_name: ad.example.com <2> - url: ldap://ad.example.com:389 <3> - ``` - - 1. The order in which the `active_directory` realm is consulted during an authentication attempt. - 2. The primary domain in Active Directory. Binding to Active Directory fails if the domain name is not mapped in DNS. - 3. The LDAP URL pointing to the Active Directory Domain Controller that should handle authentication. If your Domain Controller is configured to use LDAP over TLS and it uses a self-signed certificate or a certificate that is signed by your organization’s CA, refer to [using self-signed certificates](../../../deploy-manage/users-roles/cluster-or-deployment-auth/active-directory.md#ece-ad-configuration-encrypt-communications). - - - -### Configure an Active Directory realm with a bind user [ece-ad-configuration-with-bind-user] - -You can choose to configure an Active Directory realm using a bind user. When you specify a `bind_dn`, this specific user is used to search for the Distinguished Name (`DN`) of the authenticating user based on the provided username and an LDAP attribute. If found, this user is authenticated by attempting to bind to the LDAP server using the found `DN` and the provided password. - -1. 
[Add your user settings](../../../deploy-manage/deploy/cloud-enterprise/edit-stack-settings.md) for the `active_directory` realm as follows: - - ::::{important} - You must apply the user settings to each [deployment template](../../../deploy-manage/deploy/cloud-enterprise/configure-deployment-templates.md). - :::: - - - ```sh - xpack: - security: - authc: - realms: - active_directory: - my_ad: - order: 2 <1> - domain_name: ad.example.com <2> - url: ldap://ad.example.com:389 <3> - bind_dn: es_svc_user@ad.example.com <4> - ``` - - 1. The order in which the `active_directory` realm is consulted during an authentication attempt. - 2. The primary domain in Active Directory. Binding to Active Directory fails if the domain name is not mapped in DNS. - 3. The LDAP URL pointing to the Active Directory Domain Controller that should handle authentication. If your Domain Controller is configured to use LDAP over TLS and it uses a self-signed certificate or a certificate that is signed by your organization’s CA, refer to [using self-signed certificates](../../../deploy-manage/users-roles/cluster-or-deployment-auth/active-directory.md#ece-ad-configuration-encrypt-communications). - 4. The user to run as for all Active Directory search requests. - -2. Configure the password for the `bind_dn` user by adding the appropriate `secure_bind_password` setting to the {{es}} keystore. - - 1. From the **Deployments** page, select your deployment. - - Narrow the list by name, ID, or choose from several other filters. To further define the list, use a combination of filters. - - 2. From your deployment menu, select **Security**. - 3. Under the **{{es}} keystore** section, select **Add settings**. - 4. On the **Create setting** window, select the secret **Type** to be `Secret String`. - 5. Set the **Setting name** to `xpack.security.authc.realms.active_directory..secure_bind_password` and add the password for the `bind_dn` user in the `secret` field. - - ::::{warning} - After you configure `secure_bind_password`, any attempt to restart the deployment will fail until you complete the rest of the configuration steps. If you wish to rollback the Active Directory realm related configuration effort, you need to remove the `xpack.security.authc.realms.active_directory.my_ad.secure_bind_password` that was just added by clicking **Remove** by the setting name under `Existing Keystores`. - :::: - - - -### Using self-signed certificates [ece-ad-configuration-encrypt-communications] - -If your LDAP server uses a self-signed certificate or a certificate that is signed by your organization’s CA, you need to enable the deployment to trust this certificate. These steps are required only if TLS is enabled and the Active Directory controller is using self-signed certificates. - -You’ll prepare a custom bundle that contains your certificate [in the same way that you would on {{ess}}](/deploy-manage/deploy/elastic-cloud/upload-custom-plugins-bundles.md). Custom bundles are extracted in the path `/app/config/BUNDLE_DIRECTORY_STRUCTURE`, where `BUNDLE_DIRECTORY_STRUCTURE` is the directory structure within the bundle ZIP file itself. For example: - -```sh -$ tree . -. -└── adcert - └── ca.crt -``` - -In the following example, the keystore file would be extracted to `/app/config/adcert/ca.crt`, where `ca.crt` is the name of the certificate. - -::::{admonition} Certificate formats -The following example uses a PEM encoded certificate. 
If your CA certificate is available as a `JKS` or `PKCS#12` keystore, you can upload that file in a ZIP bundle and reference it in the user settings. For example, you can create a ZIP file from a `truststore` folder that contains a keystore named `ca.p12` and reference that file:
-
-```yaml
-xpack.security.authc.realms.active_directory.my_ad.ssl.truststore.path:
-"/app/config/truststore/ca.p12"
-```
-
-If the keystore is also password protected (which isn’t typical for keystores that only contain CA certificates), you can also provide the password for the keystore by adding `xpack.security.authc.realms.active_directory.my_ad.ssl.truststore.password: password` in the user settings.
-
-::::
-
-
-1. Create a ZIP file that contains your CA certificate file, such as `adcert.zip`.
-2. Update your plan in the [advanced configuration editor](../../../deploy-manage/deploy/cloud-enterprise/advanced-cluster-configuration.md) so that it uses the bundle you prepared in the previous step. You need to modify the `user_bundles` JSON attribute similar to the following example:
-
-    ::::{note}
-    You must specify the `user_bundles` attribute for each [deployment template](../../../deploy-manage/deploy/cloud-enterprise/configure-deployment-templates.md). You can alter `7.*` to `8.*` when needed.
-    ::::
-
-    ```json
-    {
-      "cluster_name": "REPLACE_WITH_YOUR_CLUSTER_NAME",
-      "plan": {
-
-      ...
-
-        "elasticsearch": {
-          "version": "7.*",
-          "user_bundles": [
-            {
-              "name": "adcert",
-              "url": "https://www.myurl.com/adcert.zip", <1>
-              "elasticsearch_version": "7.*" <2>
-            }
-          ]
-        }
-      }
-    ```
-
-    1. The URL that points to the `adcert.zip` file must be accessible to the cluster. Uploaded files are stored using Amazon’s highly-available S3 service.
-    2. This bundle is compatible with any {{es}} `7.*` version.
-
-        ::::{tip}
-        Using a wildcard for the minor version ensures that the bundle is compatible with the specified {{es}} major version, and eliminates the need to upload a new bundle when upgrading to a new minor version.
-        ::::
-
-3. Update [your user settings](../../../deploy-manage/deploy/cloud-enterprise/edit-stack-settings.md) for the `active_directory` realm as follows:
-
-    ```yaml
-    xpack:
-      security:
-        authc:
-          realms:
-            active_directory:
-              my_ad:
-                order: 2
-                domain_name: ad.example.com
-                url: ldaps://ad.example.com:636 <1>
-                bind_dn: es_svc_user@ad.example.com
-                ssl:
-                  certificate_authorities: ["/app/config/adcert/ca.crt"] <2>
-    ```
-
-    1. The `ldaps` URL pointing to the Active Directory Domain Controller.
-    2. The path to the CA certificate that was extracted from the custom bundle in the previous step.
-
-    The `ssl.verification_mode` setting (not shown) indicates the type of verification to use when using `ldaps` to protect against man-in-the-middle attacks and certificate forgery. The value for this property defaults to `full`. When you configure {{es}} to connect to a Domain Controller using TLS, it attempts to verify the hostname or IP address specified by the `url` attribute in the realm configuration with the Subject Alternative Names (SAN) in the certificate. If the SAN values in the certificate and realm configuration don’t match, {{es}} does not allow a connection to the Domain Controller. You can disable this behavior by setting the `ssl.verification_mode` property to `certificate`.
-
-
-
-## Mapping Active Directory groups to roles [ece-securing-clusters-ad-role-mapping]
-
-You have two ways of mapping Active Directory groups to roles for your users. The preferred one is to use the [role mapping API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-put-role-mapping).
If for some reason this is not possible, you can use a [role mapping file](/deploy-manage/users-roles/cluster-or-deployment-auth/mapping-users-groups-to-roles.md) to specify the mappings instead. - -::::{important} -Only Active Directory security groups are supported. You cannot map distribution groups to roles. -:::: - - - -### Using the Role Mapping API [ece_using_the_role_mapping_api_2] - -Let’s assume that you want all your users that authenticate through AD to have read-only access to a certain index `my-index` and the AD users that are members of the `cn=administrators, dc=example, dc=com` group in LDAP, to become superusers in your deployment: - -1. Create the read-only role - - ```sh - POST /_security/role/read-only-my-index <1> - { - "indices": [ - { - "names": [ "my-index" ], - "privileges": [ "read" ] - } - ] - } - ``` - - 1. The name of the role. - -2. Create the relevant role mapping rule for read only users - - ```sh - POST /_security/role_mapping/ad-read-only <1> - { - "enabled": true, - "roles": [ "read-only-my-index" ], <2> - "rules": { - "field": { "realm.name": "my_ad" } <3> - }, - "metadata": { "version": 1 } - } - ``` - - 1. The name of the role mapping. - 2. The name of the role we created earlier. - 3. The name of our Active Directory realm. - -3. Create the relevant role mapping rule for superusers - - ```sh - POST /_security/role_mapping/ldap-superuser <1> - { - "enabled": true, - "roles": [ "superuser" ], <2> - "rules": { - "all" : [ - { "field": { "realm.name": "my_ad" } },<3> - { "field": { "groups": "cn=administrators, dc=example, dc=com" } }<4> - ] - }, - "metadata": { "version": 1 } - } - ``` - - 1. The name of the role mapping. - 2. The name of the role we want to assign, in this case `superuser`. - 3. The name of our active_directory realm. - 4. The DN of the AD group whose members should get the `superuser` role in the deployment. - - - -### Using the Role Mapping files [ece_using_the_role_mapping_files_2] - -Let’s assume that you want all your users that authenticate through AD and are members of the `cn=my-users,dc=example, dc=com` group in AD to have read-only access to a certain index `my-index` and only the users `cn=Senior Manager, cn=management, dc=example, dc=com` and `cn=Senior Admin, cn=management, dc=example, dc=com` to become superusers in your deployment: - -1. Create a file named `role-mappings.yml` with the following contents: - - ```sh - superuser: - - cn=Senior Manager, cn=management, dc=example, dc=com - - cn=Senior Admin, cn=management, dc=example, dc=com - read-only-user: - - cn=my-users, dc=example, dc=com - ``` - -2. Prepare the custom bundle ZIP file `mappings.zip`, that contains the `role-mappings.yml` file [in the same way that you would on Elastic Cloud](/deploy-manage/deploy/elastic-cloud/upload-custom-plugins-bundles.md). -3. Custom bundles get unzipped under the path `/app/config/BUNDLE_DIRECTORY_STRUCTURE`, where `BUNDLE_DIRECTORY_STRUCTURE` is the directory structure within the bundle ZIP file itself. For example: - - ```sh - $ tree . - . - └── mappings - └── role-mappings.yml - ``` - - In our example, the role mappings file is extracted to `/app/config/mappings/role-mappings.yml` - -4. Update your plan in the [advanced configuration editor](../../../deploy-manage/deploy/cloud-enterprise/advanced-cluster-configuration.md) so that it uses the bundle you prepared in the previous step. 
Modify the `user_bundles` JSON attribute as shown in the following example: - - ::::{note} - You must specify the `user_bundles` attribute for each [deployment template](../../../deploy-manage/deploy/cloud-enterprise/configure-deployment-templates.md). You can alter `7.*` to `8.*` when needed. - :::: - - - ```sh - { - "cluster_name": "REPLACE_WITH_YOUR_CLUSTER_NAME", - "plan": { - - ... - - "elasticsearch": { - "version": "7.*", - "user_bundles": [ - { - "name": "role-mappings", - "url": "https://www.myurl.com/mappings.zip", <1> - "elasticsearch_version": "7.*" <2> - } - ] - } - } - ``` - - 1. The URL that points to `mappings.zip` must be accessible to the cluster. - 2. The bundle is compatible with any {{es}} `7.*` version. - - - ::::{tip} - Using a wildcard for the minor version ensures bundles are compatible with the stated {{es}} major version to avoid the need to re-upload a new bundle with minor versions upgrades. - :::: - -5. Update [your user settings](../../../deploy-manage/deploy/cloud-enterprise/edit-stack-settings.md) for the `ldap` realm as follows: - - ```yaml - xpack: - security: - authc: - realms: - active_directory: - my_ad: - order: 2 - domain_name: ad.example.com - url: ldaps://ad.example.com:636 <1> - bind_dn: es_svc_user@ad.example.com - ssl: - certificate_authorities: ["/app/config/cacerts/ca.crt"] - verification_mode: certificate - files: - role_mapping: "/app/config/mappings/role-mappings.yml" <1> - ``` - - 1. The path where our role mappings file is unzipped. - - diff --git a/raw-migrated-files/cloud/cloud-enterprise/ece-securing-clusters-ldap.md b/raw-migrated-files/cloud/cloud-enterprise/ece-securing-clusters-ldap.md deleted file mode 100644 index 7c0fb4a6f..000000000 --- a/raw-migrated-files/cloud/cloud-enterprise/ece-securing-clusters-ldap.md +++ /dev/null @@ -1,329 +0,0 @@ -# Secure your clusters with LDAP [ece-securing-clusters-ldap] - -These steps show how you can secure your {{es}} clusters and Kibana instances with the Lightweight Directory Access Protocol (LDAP) using an LDAP server. - - -## Before you begin [ece_before_you_begin_17] - -To learn more about how securing {{es}} clusters with LDAP works, check [LDAP user authentication](/deploy-manage/users-roles/cluster-or-deployment-auth/ldap.md). - -::::{note} -The LDAP credentials are valid against the deployment, not the ECE platform. You can configure [role-based access control](../../../deploy-manage/users-roles/cloud-enterprise-orchestrator/manage-users-roles.md) for the platform separately. -:::: - - - -## Configure authentication with LDAP [ece-securing-clusters-ldap-configuration] - -You can configure the deployment to authenticate users by communicating with an LDAP server. To integrate with LDAP, you need to configure an `ldap` realm and map LDAP groups to user roles in {{es}}. - -1. Determine which mode you want to use. The `ldap` realm supports two modes of operation, a user search mode and and a mode with specific templates for user DNs. - - LDAP user search is the most common mode of operation. In this mode, a specific user with permission to search the LDAP directory is used to search for the DN of the authenticating user based on the provided username and an LDAP attribute. Once found, the user is authenticated by attempting to bind to the LDAP server using the found DN and the provided password. - - If your LDAP environment uses a few specific standard naming conditions for users, you can use user DN templates to configure the realm. 
The advantage of this method is that a search does not have to be performed to find the user DN. However, multiple bind operations might be needed to find the correct user DN. - -2. To configure an LDAP realm with user search, [add your user settings](../../../deploy-manage/deploy/cloud-enterprise/edit-stack-settings.md) for the `ldap` realm as follows: - - ```yaml - xpack: - security: - authc: - realms: - ldap: - ldap1: - order: 2 <1> - url: "ldap://ldap.example.com:389" <2> - bind_dn: "cn=ldapuser, ou=users, o=services, dc=example, dc=com" <3> - user_search: - base_dn: "ou=users, o=services, dc=example, dc=com" <4> - filter: "(cn=\{0})" <5> - group_search: - base_dn: "ou=groups, o=services, dc=example, dc=com" <6> - ``` - - 1. The order in which the LDAP realm will be consulted during an authentication attempt. - 2. The LDAP URL pointing to the LDAP server that should handle authentication. If your LDAP server is configured to use LDAP over TLS and it uses a self-signed certificate or a certificate that is signed by your organization’s CA, refer to the following configuration instructions. - 3. The DN of the bind user. - 4. The base DN under which your users are located in LDAP. - 5. Optionally specify an additional LDAP filter used to search the directory in attempts to match an entry with the username provided by the user. Defaults to `(uid={{0}})`. `{{0}}` is substituted with the username provided by the user for authentication. - 6. The base DN under which groups are located in LDAP. - - -::::{warning} -You must apply the user settings to each [deployment template](../../../deploy-manage/deploy/cloud-enterprise/configure-deployment-templates.md). -:::: - - -1. The password for the `bind_dn` user should be configured by adding the appropriate `secure_bind_password` setting to the {{es}} keystore. - - 1. From the **Deployments** page, select your deployment. - - Narrow the list by name, ID, or choose from several other filters. To further define the list, use a combination of filters. - - 2. From your deployment menu, select **Security**. - 3. Under the **Elasticsearch keystore** section, select **Add settings**. - 4. On the **Create setting** window, select the secret **Type** to be `Secret String`. - 5. Set the **Setting name**` to `xpack.security.authc.realms.ldap.ldap1.secure_bind_password` and add the password for the `bind_dn` user in the `secret` field. - - ::::{note} - After you configure secure_bind_password, any attempt to restart the deployment will fail until you complete the rest of the configuration steps. If you wish to rollback the LDAP realm related configuration effort, you need to remove the `xpack.security.authc.realms.ldap.ldap1.secure_bind_password` that was just added by using the "remove" button by the setting name under `Existing Keystores`. - :::: - -2. Alternatively, to configure an LDAP realm with user user DN templates, [add your user settings](../../../deploy-manage/deploy/cloud-enterprise/edit-stack-settings.md) for the `ldap` realm as follows: - - ```yaml - xpack: - security: - authc: - realms: - ldap: - ldap1: - order: 2 <1> - url: "ldap://ldap.example.com:389" <2> - user_dn_templates: <3> - - "uid={0}, ou=users, o=engineering, dc=example, dc=com" - - "uid={0}, ou=users, o=marketing, dc=example, dc=com" - group_search: - base_dn: ou=groups, o=services, dc=example, dc=com" <4> - ``` - - 1. The order in which the LDAP realm will be consulted during an authentication attempt. - 2. The LDAP URL pointing to the LDAP server that should handle authentication. 
If your LDAP server is configured to use LDAP over TLS and it uses a self-signed certificate or a certificate that is signed by your organization’s CA, refer to the following configuration instructions. - 3. The templates that should be tried for constructing the user DN and authenticating to LDAP. If a user attempts to authenticate with username `user1` and password `password1`, authentication will be attempted with the DN `uid=user1, ou=users, o=engineering, dc=example, dc=com` and if not successful, also with `uid=user1, ou=users, o=marketing, dc=example, dc=com` and the given password. If authentication with one of the constructed DNs is successful, all subsequent LDAP operations are run with this user. - 4. The base DN under which groups are located in LDAP. - -3. (Optional) Encrypt communications between the deployment and the LDAP Server. If your LDAP server uses a self-signed certificate or a certificate that is signed by your organization’s CA, you need to enable the deployment to trust this certificate. - - 1. Prepare the custom bundle ZIP file `ldapcert.zip`, that contains the CA certificate file (for example `ca.crt`) [in the same way that you would on Elastic Cloud](/deploy-manage/deploy/elastic-cloud/upload-custom-plugins-bundles.md). - 2. Custom bundles are unzipped under the path `/app/config/BUNDLE_DIRECTORY_STRUCTURE`, where `BUNDLE_DIRECTORY_STRUCTURE` is the directory structure within the bundle ZIP file itself. For example: - - ```sh - $ tree . - . - └── ldapcert - └── ca.crt - ``` - - In our example, the unzipped keystore file is extracted to `/app/config/ldapcert/ca.crt`, where `ca.cert` is the name of the certificate. - - 3. Update your plan in the [advanced configuration editor](../../../deploy-manage/deploy/cloud-enterprise/advanced-cluster-configuration.md) so that it uses the bundle you prepared in the previous step. Modify the `user_bundles` JSON attribute as shown in the following example: - - ::::{note} - You must specify the `user_bundles` attribute for each [deployment template](../../../deploy-manage/deploy/cloud-enterprise/configure-deployment-templates.md). Switch version `7.*` to the version `8.*` if needed. - :::: - - - ```yaml - { - "cluster_name": "REPLACE_WITH_YOUR_CLUSTER_NAME", - "plan": { - - ... - - "elasticsearch": { - "version": "7.*", - "user_bundles": [ - { - "name": "ldap-cert", - "url": "https://www.myurl.com/ldapcert.zip", <1> - "elasticsearch_version": "7.*" <2> - } - ] - } - } - ``` - - 1. The URL that points to `ldapcert.zip` must be accessible to the cluster. - 2. The bundle is compatible with any {{es}} `7.*` version. - - - ::::{tip} - Using a wildcard for the minor version ensures bundles are compatible with the stated {{es}} major version to avoid a need to re-upload a new bundle with minor versions upgrades. - :::: - - 4. Update [your user settings](../../../deploy-manage/deploy/cloud-enterprise/edit-stack-settings.md) for the `ldap` realm as follows: - - ```yaml - xpack: - security: - authc: - realms: - ldap: - ldap1: - order: 2 - url: "ldaps://ldap.example.com:636" <1> - bind_dn: "cn=ldapuser, ou=users, o=services, dc=example, dc=com" - user_search: - base_dn: "ou=users, o=services, dc=example, dc=com" - group_search: - base_dn: ou=groups, o=services, dc=example, dc=com" - ssl: - verification_mode: certificate <2> - certificate_authorities: ["/app/config/cacert/ca.crt"] - ``` - - 1. The `ldaps` URL pointing to the LDAP server. - 2. 
(Optional) By default, when you configure {{es}} to connect to an LDAP server using SSL/TLS, it attempts to verify the hostname or IP address specified with the url attribute in the realm configuration with the values in the certificate. If the values in the certificate and realm configuration do not match, {{es}} does not allow a connection to the LDAP server. This is done to protect against man-in-the-middle attacks. If necessary, you can disable this behavior by setting the `ssl.verification_mode` property to `certificate`. - - -::::{note} -If your CA certificate is available as a `JKS` or `PKCS#12` keystore, you can upload that file in the ZIP bundle (for example create a ZIP archive from a `truststore` folder that contains a file named `ca.jks`) and then reference it in the user settings with `xpack.security.authc.realms.ldap.ldap1.ssl.truststore.path: "/app/config/truststore/ca.jks"`. If the keystore is also password protected which is unusual for keystores that contain only CA certificates, you can also provide the password for the keystore by adding `xpack.security.authc.realms.ldap.ldap1.ssl.truststore.password: password` in the user settings. -:::: - - - -## Mapping LDAP groups to roles [ece-securing-clusters-ldap-role-mapping] - -You have two ways of mapping LDAP groups to roles for your users. The preferred one is to use the [role mapping API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-put-role-mapping). If for some reason this is not possible, you can use a [role mapping file](/deploy-manage/users-roles/cluster-or-deployment-auth/mapping-users-groups-to-roles.md) to specify the mappings instead. - - -### Using the Role Mapping API [ece_using_the_role_mapping_api] - -Let’s assume that you want all your users that authenticate through LDAP to have read-only access to a certain index `my-index` and the LDAP users that are members of the `cn=administrators, ou=groups, o=services, dc=example, dc=com` group in LDAP, to become superusers in your deployment: - -1. Create the read-only role. - - ```sh - POST /_security/role/read-only-my-index <1> - { - "indices": [ - { - "names": [ "my-index" ], - "privileges": [ "read" ] - } - ] - } - ``` - - 1. The name of the role. - -2. Create the relevant role mapping rule for read-only users. - - ```sh - POST /_security/role_mapping/ldap-read-only <1> - { - "enabled": true, - "roles": [ "read-only-my-index" ], <2> - "rules": { - "field": { "realm.name": "ldap1" } <3> - }, - "metadata": { "version": 1 } - } - ``` - - 1. The name of the role mapping. - 2. The name of the role we created earlier. - 3. The name of our LDAP realm. - -3. Create the relevant role mapping rule for superusers. - - ```sh - POST /_security/role_mapping/ldap-superuser <1> - { - "enabled": true, - "roles": [ "superuser" ], <2> - "rules": { - "all" : [ - { "field": { "realm.name": "ldap1" } },<3> - { "field": { "groups": "cn=administrators, ou=groups, o=services, dc=example, dc=com" } }<4> - ] - }, - "metadata": { "version": 1 } - } - ``` - - 1. The name of the role mapping. - 2. The name of the role we want to assign, in this case `superuser`. - 3. The name of our LDAP realm. - 4. The DN of the LDAP group whose members should get the `superuser` role in the deployment. 
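-
-One way to issue the mapping requests above from the command line is with `curl`. The sketch below assumes a hypothetical deployment endpoint and the `elastic` user’s password; the final request simply verifies that the mapping was stored.
-
-```sh
-# Hypothetical Elasticsearch endpoint and credentials; replace with your own.
-ES_URL="https://REPLACE_WITH_YOUR_ES_ENDPOINT:9243"
-
-# Create the read-only mapping for the ldap1 realm (same body as above).
-curl -X POST -u elastic:PASSWORD -H "Content-Type: application/json" \
-  "$ES_URL/_security/role_mapping/ldap-read-only" -d '
-{
-  "enabled": true,
-  "roles": [ "read-only-my-index" ],
-  "rules": { "field": { "realm.name": "ldap1" } },
-  "metadata": { "version": 1 }
-}'
-
-# Confirm that the mapping exists.
-curl -u elastic:PASSWORD "$ES_URL/_security/role_mapping/ldap-read-only?pretty"
-```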
- - - -### Using the Role Mapping files [ece_using_the_role_mapping_files] - -Let’s assume that you want all your users that authenticate through LDAP and are members of the `cn=my-users, ou=groups, o=services, dc=example, dc=com` group in LDAP to have read-only access to a certain index `my-index` and only the users `cn=Senior Manager, ou=users, o=services, dc=example, dc=com` and `cn=Senior Admin, ou=users, o=services, dc=example, dc=com` to become superusers in your deployment: - -1. Create a file named `role-mappings.yml` with the following contents: - - ```sh - superuser: - - cn=Senior Manager, ou=users, o=services, dc=example, dc=com - - cn=Senior Admin, ou=users, o=services, dc=example, dc=com - read-only-user: - - cn=my-users, ou=groups, o=services, dc=example, dc=com - ``` - -2. Prepare the custom bundle ZIP file `mappings.zip`, that contains the `role-mappings.yml` file [in the same way that you would on Elastic Cloud](/deploy-manage/deploy/elastic-cloud/upload-custom-plugins-bundles.md). -3. Custom bundles are unzipped under the path `/app/config/BUNDLE_DIRECTORY_STRUCTURE`, where `BUNDLE_DIRECTORY_STRUCTURE` is the directory structure within the bundle ZIP file itself. For example: - - ```sh - $ tree . - . - └── mappings - └── role-mappings.yml - ``` - - In our example, the file is extracted to `/app/config/mappings/role-mappings.yml`. - -4. Update your plan in the [advanced configuration editor](../../../deploy-manage/deploy/cloud-enterprise/advanced-cluster-configuration.md) so that it uses the bundle you prepared in the previous step. Modify the `user_bundles` JSON attribute as shown in the following example: - - ::::{note} - You must specify the `user_bundles` attribute for each [deployment template](../../../deploy-manage/deploy/cloud-enterprise/configure-deployment-templates.md). Switch `7.*` to `8.*` if needed. - :::: - - - ```sh - { - "cluster_name": "REPLACE_WITH_YOUR_CLUSTER_NAME", - "plan": { - - ... - - "elasticsearch": { - "version": "7.*", - "user_bundles": [ - { - "name": "role-mappings", - "url": "https://www.myurl.com/mappings.zip", <1> - "elasticsearch_version": "7.*" <2> - } - ] - } - } - ``` - - 1. The URL that points to `mappings.zip` must be accessible to the cluster. - 2. The bundle is compatible with any {{es}} `7.*` version. - - - ::::{tip} - Using a wildcard for the minor version ensures bundles are compatible with the stated {{es}} major version to avoid the need to re-upload a new bundle with minor versions upgrades. - :::: - -5. Update [your user settings](../../../deploy-manage/deploy/cloud-enterprise/edit-stack-settings.md) for the `ldap` realm as follows: - - ```yaml - xpack: - security: - authc: - realms: - ldap: - ldap1: - order: 2 - url: "ldaps://ldap.example.com:636" - bind_dn: "cn=ldapuser, ou=users, o=services, dc=example, dc=com" - user_search: - base_dn: "ou=users, o=services, dc=example, dc=com" - group_search: - base_dn: ou=groups, o=services, dc=example, dc=com" - ssl: - verification_mode: certificate - certificate_authorities: ["/app/config/cacerts/ca.crt"] - files: - role_mapping: "/app/config/mappings/role-mappings.yml" <1> - ``` - - 1. The path where our role mappings file is unzipped. 
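-
-Once the realm, the bundle, and the role mappings are in place, you can check the end-to-end configuration by authenticating as an LDAP user against the deployment. This is a sketch assuming hypothetical LDAP credentials and a placeholder endpoint; the authenticate API reports which realm handled the request and which roles were mapped.
-
-```sh
-# Hypothetical LDAP user credentials and deployment endpoint; replace with your own.
-curl -u ldap_username:ldap_password \
-  "https://REPLACE_WITH_YOUR_ES_ENDPOINT:9243/_security/_authenticate?pretty"
-# In the response, "authentication_realm.name" should be "ldap1" and "roles"
-# should contain the roles assigned through your role mappings.
-```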
- - diff --git a/raw-migrated-files/cloud/cloud-enterprise/ece-securing-clusters.md b/raw-migrated-files/cloud/cloud-enterprise/ece-securing-clusters.md index 83899715d..37a85c65a 100644 --- a/raw-migrated-files/cloud/cloud-enterprise/ece-securing-clusters.md +++ b/raw-migrated-files/cloud/cloud-enterprise/ece-securing-clusters.md @@ -7,7 +7,7 @@ Elastic Cloud Enterprise supports most of the security features that are part of * Reset the [`elastic` user password](../../../deploy-manage/users-roles/cluster-or-deployment-auth/built-in-users.md). * Use third-party authentication providers like [SAML](../../../deploy-manage/users-roles/cluster-or-deployment-auth/saml.md), [LDAP](../../../deploy-manage/users-roles/cluster-or-deployment-auth/ldap.md), [Active Directory](../../../deploy-manage/users-roles/cluster-or-deployment-auth/active-directory.md), [OpenID Connect](../../../deploy-manage/users-roles/cluster-or-deployment-auth/openid-connect.md), or [Kerberos](../../../deploy-manage/users-roles/cluster-or-deployment-auth/kerberos.md) to provide dynamic [role mappings](../../../deploy-manage/users-roles/cluster-or-deployment-auth/mapping-users-groups-to-roles.md) for role based or attribute based access control. * Use {{kib}} Spaces and roles to [secure access to {{kib}}](../../../deploy-manage/users-roles/cluster-or-deployment-auth/quickstart.md). - * Authorize and authenticate service accounts for {{beats}} by [granting access using API keys](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-filebeat/beats-api-keys.md). + * Authorize and authenticate service accounts for {{beats}} by [granting access using API keys](asciidocalypse://docs/beats/docs/reference/filebeat/beats-api-keys.md). * Block unwanted traffic with [traffic filter](../../../deploy-manage/security/traffic-filtering.md). diff --git a/raw-migrated-files/cloud/cloud-enterprise/ece-traffic-filtering-deployment-configuration.md b/raw-migrated-files/cloud/cloud-enterprise/ece-traffic-filtering-deployment-configuration.md index b6c7e97dc..0f5cb7449 100644 --- a/raw-migrated-files/cloud/cloud-enterprise/ece-traffic-filtering-deployment-configuration.md +++ b/raw-migrated-files/cloud/cloud-enterprise/ece-traffic-filtering-deployment-configuration.md @@ -101,13 +101,13 @@ Follow the instructions that match your use case: ## Troubleshooting [ece-traffic-filter-troubleshooting] -This section offers suggestions on how to troubleshoot your traffic filters. Before you start make sure you check the [Limitations and known problems](asciidocalypse://docs/cloud/docs/release-notes/known-issues/cloud-enterprise.md). +This section offers suggestions on how to troubleshoot your traffic filters. Before you start make sure you check the [Limitations and known problems](asciidocalypse://docs/cloud/docs/release-notes/cloud-enterprise/known-issues.md). ### Review the rule sets associated with a deployment [ece-review-rule-sets] 1. [Log into the Cloud UI](../../../deploy-manage/deploy/cloud-enterprise/log-into-cloud-ui.md). -2. On the deployments page, select your deployment. +2. On the **Deployments** page, select your deployment. Narrow the list by name, ID, or choose from several other filters. To further define the list, use a combination of filters. 
diff --git a/raw-migrated-files/cloud/cloud-enterprise/ece-upgrade-deployment.md b/raw-migrated-files/cloud/cloud-enterprise/ece-upgrade-deployment.md index 21603a73d..4d57619e6 100644 --- a/raw-migrated-files/cloud/cloud-enterprise/ece-upgrade-deployment.md +++ b/raw-migrated-files/cloud/cloud-enterprise/ece-upgrade-deployment.md @@ -35,7 +35,7 @@ When upgrading from one recent major Elasticsearch version to the next, we recom To upgrade a cluster in Elastic Cloud Enterprise: 1. [Log into the Cloud UI](../../../deploy-manage/deploy/cloud-enterprise/log-into-cloud-ui.md). -2. On the deployments page, select your deployment. +2. On the **Deployments** page, select your deployment. Narrow the list by name, ID, or choose from several other filters. To further define the list, use a combination of filters. @@ -43,7 +43,7 @@ To upgrade a cluster in Elastic Cloud Enterprise: 4. Select one of the available software versions. Let the user interface guide you through the steps for upgrading a deployment. When you save your changes, your deployment configuration is updated to the new version. ::::{tip} - You cannot downgrade after upgrading, so plan ahead to make sure that your applications still work after upgrading. For more information on changes that might affect your applications, check [Breaking changes](asciidocalypse://docs/elasticsearch/docs/release-notes/breaking-changes/elasticsearch.md). + You cannot downgrade after upgrading, so plan ahead to make sure that your applications still work after upgrading. For more information on changes that might affect your applications, check [Breaking changes](asciidocalypse://docs/elasticsearch/docs/release-notes/breaking-changes.md). :::: 5. If you are upgrading to version 6.6 and earlier, major upgrades require a full cluster restart to complete the upgrade process. diff --git a/raw-migrated-files/cloud/cloud-enterprise/ece_optional_settings.md b/raw-migrated-files/cloud/cloud-enterprise/ece_optional_settings.md deleted file mode 100644 index feecd44dc..000000000 --- a/raw-migrated-files/cloud/cloud-enterprise/ece_optional_settings.md +++ /dev/null @@ -1,9 +0,0 @@ -# Optional settings [ece_optional_settings] - -The following optional realm settings are supported: - -* `force_authn` Specifies whether to set the `ForceAuthn` attribute when requesting that the IdP authenticate the current user. If set to `true`, the IdP is required to verify the user’s identity, irrespective of any existing sessions they might have. Defaults to `false`. -* `idp.use_single_logout` Indicates whether to utilise the Identity Provider’s `` (if one exists in the IdP metadata file). Defaults to `true`. - -After completing these steps, you can log in to Kibana by authenticating against your SAML IdP. If you encounter any issues with the configuration, refer to the [SAML troubleshooting page](/troubleshoot/elasticsearch/security/trb-security-saml.md) which contains information about common issues and suggestions for their resolution. - diff --git a/raw-migrated-files/cloud/cloud-enterprise/ece_sign_outgoing_saml_message.md b/raw-migrated-files/cloud/cloud-enterprise/ece_sign_outgoing_saml_message.md deleted file mode 100644 index 7425b5efb..000000000 --- a/raw-migrated-files/cloud/cloud-enterprise/ece_sign_outgoing_saml_message.md +++ /dev/null @@ -1,55 +0,0 @@ -# Sign outgoing SAML messages [ece-sign-outgoing-saml-message] - -If configured, Elastic Stack will sign outgoing SAML messages. - -As a prerequisite, you need to generate a signing key and a self-signed certificate. 
You need to share this certificate with your SAML Identity Provider so that it can verify the received messages. The key needs to be unencrypted. The exact procedure is system dependent, you can use for example `openssl`: - -```sh -openssl req -new -x509 -days 3650 -nodes -sha256 -out saml-sign.crt -keyout saml-sign.key -``` - -Place the files under the `saml` folder and add them to the existing SAML bundle, or [create a new one](ece-add-custom-bundle-plugin.md). - -In our example, the certificate and the key will be located in the path `/app/config/saml/saml-sign.{crt,key}`: - -```sh -$ tree . -. -└── saml - ├── saml-sign.crt - └── saml-sign.key -``` - -Make sure that the bundle is included with your deployment. - -Adjust your realm configuration accordingly: - -```sh - signing.certificate: /app/config/saml/saml-sign.crt <1> - signing.key: /app/config/saml/saml-sign.key <2> -``` - -1. The path to the SAML signing certificate that was uploaded. -2. The path to the SAML signing key that was uploaded. - - -When configured with a signing key and certificate, Elastic Stack will sign all outgoing messages (SAML Authentication Requests, SAML Logout Requests, SAML Logout Responses) by default. This behavior can be altered by configuring `signing.saml_messages` appropriately with the comma separated list of messages to sign. Supported values are `AuthnRequest`, `LogoutRequest` and `LogoutResponse` and the default value is `*`. - -For example: - -```sh -xpack: - security: - authc: - realms: - saml-realm-name: - order: 2 - ... - signing.saml_messages: AuthnRequest <1> - ... -``` - -1. This configuration ensures that only SAML authentication requests will be sent signed to the Identity Provider. - - - diff --git a/raw-migrated-files/cloud/cloud-heroku/ech-add-user-settings.md b/raw-migrated-files/cloud/cloud-heroku/ech-add-user-settings.md index 20aa2ba01..11c447e1b 100644 --- a/raw-migrated-files/cloud/cloud-heroku/ech-add-user-settings.md +++ b/raw-migrated-files/cloud/cloud-heroku/ech-add-user-settings.md @@ -12,7 +12,7 @@ You can also update [dynamic cluster settings](../../../deploy-manage/deploy/sel To add or edit user settings: 1. Log in to the [Elasticsearch Add-On for Heroku console](https://cloud.elastic.co?page=docs&placement=docs-body). -2. On the deployments page, select your deployment. +2. On the **Deployments** page, select your deployment. Narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. diff --git a/raw-migrated-files/cloud/cloud-heroku/ech-adding-elastic-plugins.md b/raw-migrated-files/cloud/cloud-heroku/ech-adding-elastic-plugins.md deleted file mode 100644 index 5ef4c99dc..000000000 --- a/raw-migrated-files/cloud/cloud-heroku/ech-adding-elastic-plugins.md +++ /dev/null @@ -1,25 +0,0 @@ -# Add plugins provided with Elasticsearch Add-On for Heroku [ech-adding-elastic-plugins] - -You can use a variety of official plugins that are compatible with your version of {{es}}. When you upgrade to a new {{es}} version, these plugins are simply upgraded with the rest of your deployment. - -## Before you begin [echbefore_you_begin_4] - -Some restrictions apply when adding plugins. To learn more, check [Restrictions for {{es}} and {{kib}} plugins](../../../deploy-manage/deploy/elastic-cloud/ech-restrictions.md#ech-restrictions-plugins). - -Only Gold, Platinum, Enterprise and Private subscriptions, running version 2.4.6 or later, have access to uploading custom plugins. 
All subscription levels, including Standard, can upload scripts and dictionaries. - -To enable a plugin for a deployment: - -1. Log in to the [Elasticsearch Add-On for Heroku console](https://cloud.elastic.co?page=docs&placement=docs-body). -2. On the deployments page, select your deployment. - - Narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. - -3. From the **Actions** dropdown, select **Edit deployment**. -4. Select **Manage user settings and extensions**. -5. Select the **Extensions** tab. -6. Select the plugins that you want to enable. -7. Select **Back**. -8. Select **Save**. The {{es}} cluster is then updated with new nodes that have the plugin installed. - - diff --git a/raw-migrated-files/cloud/cloud-heroku/ech-adding-plugins.md b/raw-migrated-files/cloud/cloud-heroku/ech-adding-plugins.md deleted file mode 100644 index 9b5c01d9d..000000000 --- a/raw-migrated-files/cloud/cloud-heroku/ech-adding-plugins.md +++ /dev/null @@ -1,23 +0,0 @@ -# Add plugins and extensions [ech-adding-plugins] - -Plugins extend the core functionality of {{es}}. There are many suitable plugins, including: - -* Discovery plugins, such as the cloud AWS plugin that allows discovering nodes on EC2 instances. -* Analysis plugins, to provide analyzers targeted at languages other than English. -* Scripting plugins, to provide additional scripting languages. - -Plugins can come from different sources: the official ones created or at least maintained by Elastic, community-sourced plugins from other users, and plugins that you provide. Some of the official plugins are always provided with our service, and can be [enabled per deployment](../../../deploy-manage/deploy/elastic-cloud/add-plugins-provided-with-elastic-cloud-hosted.md). - -There are two ways to add plugins to a deployment in Elasticsearch Add-On for Heroku: - -* [Enable one of the official plugins already available in Elasticsearch Add-On for Heroku](../../../deploy-manage/deploy/elastic-cloud/add-plugins-provided-with-elastic-cloud-hosted.md). -* [Upload a custom plugin and then enable it per deployment](../../../deploy-manage/deploy/elastic-cloud/upload-custom-plugins-bundles.md). - -Custom plugins can include the official {{es}} plugins not provided with Elasticsearch Add-On for Heroku, any of the community-sourced plugins, or [plugins that you write yourself](asciidocalypse://docs/elasticsearch/docs/extend/create-elasticsearch-plugins/index.md). Uploading custom plugins is available only to Gold, Platinum, and Enterprise subscriptions. For more information, check [Upload custom plugins and bundles](../../../deploy-manage/deploy/elastic-cloud/upload-custom-plugins-bundles.md). - -To learn more about the official and community-sourced plugins, refer to [{{es}} Plugins and Integrations](asciidocalypse://docs/elasticsearch/docs/reference/elasticsearch-plugins/index.md). - -Plugins are not supported for {{kib}}. To learn more, check [Restrictions for {{es}} and {{kib}} plugins](../../../deploy-manage/deploy/elastic-cloud/ech-restrictions.md#ech-restrictions-plugins). 
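After the plan change completes, one quick way to confirm which plugins are actually installed on the new nodes is the cat plugins API, for example:

```sh
GET /_cat/plugins?v=true
```

Each row lists a node together with the plugin component and version it runs.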
- - - diff --git a/raw-migrated-files/cloud/cloud-heroku/ech-configure-settings.md b/raw-migrated-files/cloud/cloud-heroku/ech-configure-settings.md deleted file mode 100644 index 23f101ef5..000000000 --- a/raw-migrated-files/cloud/cloud-heroku/ech-configure-settings.md +++ /dev/null @@ -1,47 +0,0 @@ -# Configure your deployment [ech-configure-settings] - -You might want to change the configuration of your deployment to: - -* Add features, such as machine learning or APM (application performance monitoring). -* Increase or decrease capacity by changing the amount of reserved memory and storage for different parts of your deployment. -* Enable [autoscaling](../../../deploy-manage/autoscaling.md) so that the available resources for deployment components, such as data tiers and machine learning nodes, adjust automatically as the demands on them change over time. -* Enable high availability by adjusting the number of availability zones that parts of your deployment run on. -* Upgrade to new versions of {{es}}. You can upgrade from one major version to another, such as from 7.17.27 to 8.17.1, or from one minor version to another, such as 8.6 to 8.7. You can’t downgrade versions. -* Change what plugins are available on your {{es}} cluster. - -::::{note} -During the free trial, {{ess}} deployments are restricted to a fixed size. You can resize your deployments when your trial is converted into a paid subscription. -:::: - - -You can change the configuration of a running deployment from the **Configuration** pane in the [Elasticsearch Add-On for Heroku console](https://cloud.elastic.co?page=docs&placement=docs-body). - -With the exception of major version upgrades for Elastic Stack products, Elasticsearch Add-On for Heroku can perform configuration changes without having to interrupt your deployment. You can continue searching and indexing. The changes can also be done in bulk. For example: in one action you can add more memory, upgrade, adjust the number of {{es}} plugins and adjust the number of availability zones. - -We perform all of these changes by creating instances with the new configurations that join your existing deployment before removing the old ones. For example: if you are changing your {{es}} cluster configuration, we create new {{es}} nodes, recover your indexes, and start routing requests to the new nodes. Only when all new {{es}} nodes are ready, do we bring down the old ones. - -By doing it this way, we reduce the risk of making configuration changes. If any of the new instances have a problems, the old ones are still there, processing requests. - -::::{note} -If you use a Platform-as-a-Service provider like Heroku, the administration console is slightly different and does not allow you to make changes that will affect the price. That must be done in the platform provider’s add-on system. You can still do things like change {{es}} version or plugins. -:::: - - -To change the {{es}} cluster in your deployment: - -1. Log in to the [Elasticsearch Add-On for Heroku console](https://cloud.elastic.co?page=docs&placement=docs-body). -2. On the deployments page, select your deployment. - - Narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. - -3. From your deployment menu, select **{{es}}** and then **Edit**. -4. Let the user interface guide you through the cluster configuration for your cluster. 
For a full list of the supported settings, check [What Deployment Settings Are Available?](../../../deploy-manage/deploy/elastic-cloud/ech-configure-deployment-settings.md) - - If you are changing an existing deployment, you can make multiple changes to your {{es}} cluster with a single configuration update, such as changing the capacity and upgrading to a new {{es}} version in one step. - -5. Save your changes. The new configuration takes a few moments to create. - -Review the changes to your configuration on the **Activity** page, with a tab for {{es}} and one for {{kib}}. - - - diff --git a/raw-migrated-files/cloud/cloud-heroku/ech-configure.md b/raw-migrated-files/cloud/cloud-heroku/ech-configure.md deleted file mode 100644 index 87dadfc13..000000000 --- a/raw-migrated-files/cloud/cloud-heroku/ech-configure.md +++ /dev/null @@ -1,11 +0,0 @@ -# Configuring your deployment [ech-configure] - -The information in this section covers: - -* [Plan for production](../../../deploy-manage/production-guidance/plan-for-production-elastic-cloud.md) - Plan for a highly available and scalable deployment. -* [Configure your deployment](../../../deploy-manage/deploy/elastic-cloud/ech-configure-settings.md) - Customize your cluster through a full list of settings. -* [Enable Kibana](../../../deploy-manage/deploy/elastic-cloud/access-kibana.md) - Explore your data with the Elastic Stack visualization platform. -* [Enable Logging and Monitoring](../../../deploy-manage/monitor/stack-monitoring/elastic-cloud-stack-monitoring.md) - Monitor your cluster’s health and performance and ingest your deployment’s logs. -* [Upgrade versions](../../../deploy-manage/upgrade/deployment-or-cluster.md) - Stay current with the latest Elastic Stack versions. -* [Delete your deployment](../../../deploy-manage/uninstall/delete-a-cloud-deployment.md) - No undo. Data is lost and billing stops. - diff --git a/raw-migrated-files/cloud/cloud-heroku/ech-configuring-keystore.md b/raw-migrated-files/cloud/cloud-heroku/ech-configuring-keystore.md index 9499f91c0..d1c979f76 100644 --- a/raw-migrated-files/cloud/cloud-heroku/ech-configuring-keystore.md +++ b/raw-migrated-files/cloud/cloud-heroku/ech-configuring-keystore.md @@ -14,7 +14,7 @@ There are three types of secrets that you can use: Add keys and secret values to the keystore. 1. Log in to the [Elasticsearch Add-On for Heroku console](https://cloud.elastic.co?page=docs&placement=docs-body). -2. On the deployments page, select your deployment. +2. On the **Deployments** page, select your deployment. Narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. @@ -35,7 +35,7 @@ Only some settings are designed to be read from the keystore. However, the keyst When your keys and secret values are no longer needed, delete them from the keystore. 1. Log in to the [Elasticsearch Add-On for Heroku console](https://cloud.elastic.co?page=docs&placement=docs-body). -2. On the deployments page, select your deployment. +2. On the **Deployments** page, select your deployment. Narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. 
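To make the keystore workflow more concrete, here is a sketch of one common use case, a custom S3 snapshot repository. The repository name and bucket below are placeholders, and the assumption is that the secure settings `s3.client.default.access_key` and `s3.client.default.secret_key` have already been added through the keystore steps above, so the repository definition itself never contains credentials:

```sh
PUT /_snapshot/my-s3-repository
{
  "type": "s3",
  "settings": {
    "bucket": "my-bucket",
    "client": "default"
  }
}
```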
diff --git a/raw-migrated-files/cloud/cloud-heroku/ech-custom-bundles.md b/raw-migrated-files/cloud/cloud-heroku/ech-custom-bundles.md deleted file mode 100644 index c4c6dccf1..000000000 --- a/raw-migrated-files/cloud/cloud-heroku/ech-custom-bundles.md +++ /dev/null @@ -1,167 +0,0 @@ -# Upload custom plugins and bundles [ech-custom-bundles] - -There are several cases where you might need your own files to be made available to your {{es}} cluster’s nodes: - -* Your own custom plugins, or third-party plugins that are not amongst the [officially available plugins](../../../deploy-manage/deploy/elastic-cloud/add-plugins-provided-with-elastic-cloud-hosted.md). -* Custom dictionaries, such as synonyms, stop words, compound words, and so on. -* Cluster configuration files, such as an Identity Provider metadata file used when you [secure your clusters with SAML](../../../deploy-manage/users-roles/cluster-or-deployment-auth/saml.md). - -To facilitate this, we make it possible to upload a ZIP file that contains the files you want to make available. Uploaded files are stored using Amazon’s highly-available S3 service. This is necessary so we do not have to rely on the availability of third-party services, such as the official plugin repository, when provisioning nodes. - -Custom plugins and bundles are collectively referred to as extensions. - -## Before you begin [echbefore_you_begin_5] - -The selected plugins/bundles are downloaded and provided when a node starts. Changing a plugin does not change it for nodes already running it. Refer to [Updating Plugins and Bundles](../../../deploy-manage/deploy/elastic-cloud/upload-custom-plugins-bundles.md#ech-update-bundles-and-plugins). - -With great power comes great responsibility: your plugins can extend your deployment with new functionality, but also break it. Be careful. We obviously cannot guarantee that your custom code works. - -::::{important} -You cannot edit or delete a custom extension after it has been used in a deployment. To remove it from your deployment, you can disable the extension and update your deployment configuration. -:::: - - -Uploaded files cannot be bigger than 20MB for most subscription levels, for Platinum and Enterprise the limit is 8GB. - -It is important that plugins and dictionaries that you reference in mappings and configurations are available at all times. For example, if you try to upgrade {{es}} and de-select a dictionary that is referenced in your mapping, the new nodes will be unable to recover the cluster state and function. This is true even if the dictionary is referenced by an empty index you do not actually use. - - -## Prepare your files for upload [ech-prepare-custom-bundles] - -Plugins are uploaded as ZIP files. You need to choose whether your uploaded file should be treated as a *plugin* or as a *bundle*. Bundles are not installed as plugins. If you need to upload both a custom plugin and custom dictionaries, upload them separately. - -To prepare your files, create one of the following: - -Plugins -: A plugin is a ZIP file that contains a plugin descriptor file and binaries. - - The plugin descriptor file is called either `stable-plugin-descriptor.properties` for plugins built against the stable plugin API, or `plugin-descriptor.properties` for plugins built against the classic plugin API. A plugin ZIP file should only contain one plugin descriptor file. - - {{es}} assumes that the uploaded ZIP file contains binaries. If it finds any source code, it fails with an error message, causing provisioning to fail. 
Make sure you upload binaries, and not source code. - - ::::{note} - Plugins larger than 5GB should have the plugin descriptor file at the top of the archive. This order can be achieved by specifying at time of creating the ZIP file: - - ```sh - zip -r name-of-plugin.zip name-of-descriptor-file.properties * - ``` - - :::: - - -Bundles -: The entire content of a bundle is made available to the node by extracting to the {{es}} container’s `/app/config` directory. This is useful to make custom dictionaries available. Dictionaries should be placed in a `/dictionaries` folder in the root path of your ZIP file. - - Here are some examples of bundles: - - **Script** - - ```text - $ tree . - . - └── scripts - └── test.js - ``` - - The script `test.js` can be referred in queries as `"script": "test"`. - - **Dictionary of synonyms** - - ```text - $ tree . - . - └── dictionaries - └── synonyms.txt - ``` - - The dictionary `synonyms.txt` can be used as `synonyms.txt` or using the full path `/app/config/synonyms.txt` in the `synonyms_path` of the `synonym-filter`. - - To learn more about analyzing with synonyms, check [Synonym token filter](asciidocalypse://docs/elasticsearch/docs/reference/data-analysis/text-analysis/analysis-synonym-tokenfilter.md) and [Formatting Synonyms](https://www.elastic.co/guide/en/elasticsearch/guide/2.x/synonym-formats.html). - - **GeoIP database bundle** - - ```text - $ tree . - . - └── ingest-geoip - └── MyGeoLite2-City.mmdb - ``` - - Note that the extension must be `-(City|Country|ASN).mmdb`, and it must be a different name than the original file name `GeoLite2-City.mmdb` which already exists in Elasticsearch Add-On for Heroku. To use this bundle, you can refer it in the GeoIP ingest pipeline as `MyGeoLite2-City.mmdb` under `database_file`. - - - -## Add your extension [ech-add-your-plugin] - -You must upload your files before you can apply them to your cluster configuration: - -1. Log in to the [Elasticsearch Add-On for Heroku console](https://cloud.elastic.co?page=docs&placement=docs-body). -2. On the deployments page, select your deployment. -3. Under **Features**, select **Extensions**. -4. Select **Upload extension**. -5. Complete the extension fields, including the {{es}} version. - - * Plugins must use full version notation down to the patch level, such as `7.10.1`. You cannot use wildcards. This version notation should match the version in your plugin’s plugin descriptor file. For classic plugins, it should also match the target deployment version. - * Bundles should specify major or minor versions with wildcards, such as `7.*` or `*`. Wildcards are recommended to ensure the bundle is compatible across all versions of these releases. - -6. Select the extension **Type**. -7. Under **Plugin file**, choose the file to upload. -8. Select **Create extension**. - -After creating your extension, you can [enable them for existing {{es}} deployments](../../../deploy-manage/deploy/elastic-cloud/upload-custom-plugins-bundles.md#ech-update-bundles) or enable them when creating new deployments. - -::::{note} -Creating extensions larger than 200MB should be done through the extensions API. - -:::: - - - -## Update your deployment configuration [ech-update-bundles] - -After uploading your files, you can select to enable them when creating a new {{es}} deployment. For existing deployments, you must update your deployment configuration to use the new files: - -1. Log in to the [Elasticsearch Add-On for Heroku console](https://cloud.elastic.co?page=docs&placement=docs-body). -2. 
On the deployments page, select your deployment. - - Narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. - -3. From the **Actions** dropdown, select **Edit deployment**. -4. Select **Manage user settings and extensions**. -5. Select the **Extensions** tab. -6. Select the custom extension. -7. Select **Back**. -8. Select **Save**. The {{es}} cluster is then updated with new nodes that have the plugin installed. - - -## Update your extension [ech-update-bundles-and-plugins] - -While you can update the ZIP file for any plugin or bundle, these are downloaded and made available only when a node is started. - -You should be careful when updating an extension. If you update an existing extension with a new file, and if the file is broken for some reason, all the nodes could be in trouble, as a restart or move node could make even HA clusters non-available. - -If the extension is not in use by any deployments, then you are free to update the files or extension details as much as you like. However, if the extension is in use, and if you need to update it with a new file, it is recommended to [create a new extension](../../../deploy-manage/deploy/elastic-cloud/upload-custom-plugins-bundles.md#ech-add-your-plugin) rather than updating the existing one that is in use. - -By following this method, only the one node would be down even if the extension file is faulty. This would ensure that HA clusters remain available. - -This method also supports having a test/staging deployment to test out the extension changes before applying them on a production deployment. - -You may delete the old extension after updating the deployment successfully. - -To update an extension with a new file version, - -1. Prepare a new plugin or bundle. -2. On the **Extensions** page, [upload a new extension](../../../deploy-manage/deploy/elastic-cloud/upload-custom-plugins-bundles.md#ech-add-your-plugin). -3. Make your new files available by uploading them. -4. On the deployments page, select your deployment. - - Narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. - -5. From the **Actions** dropdown, select **Edit deployment**. -6. Select **Manage user settings and extensions**. -7. Select the **Extensions** tab. -8. Select the new extension and de-select the old one. -9. Select **Back**. -10. Select **Save**. - - diff --git a/raw-migrated-files/cloud/cloud-heroku/ech-enable-kibana2.md b/raw-migrated-files/cloud/cloud-heroku/ech-enable-kibana2.md index 2ee7b3f1b..c199b85b1 100644 --- a/raw-migrated-files/cloud/cloud-heroku/ech-enable-kibana2.md +++ b/raw-migrated-files/cloud/cloud-heroku/ech-enable-kibana2.md @@ -5,7 +5,7 @@ If your deployment didn’t include a Kibana instance initially, use these instr To enable Kibana on your deployment: 1. Log in to the [Elasticsearch Add-On for Heroku console](https://cloud.elastic.co?page=docs&placement=docs-body). -2. On the deployments page, select your deployment. +2. On the **Deployments** page, select your deployment. Narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. 
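As a usage sketch for the synonyms dictionary bundle described in the custom bundles section above, an index could reference the uploaded `synonyms.txt` from a synonym token filter roughly as follows (the index, filter, and analyzer names are placeholders):

```sh
PUT /my-index
{
  "settings": {
    "analysis": {
      "filter": {
        "my_synonym_filter": {
          "type": "synonym",
          "synonyms_path": "synonyms.txt"
        }
      },
      "analyzer": {
        "my_synonym_analyzer": {
          "tokenizer": "standard",
          "filter": [ "lowercase", "my_synonym_filter" ]
        }
      }
    }
  }
}
```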
diff --git a/raw-migrated-files/cloud/cloud-heroku/ech-enable-logging-and-monitoring.md b/raw-migrated-files/cloud/cloud-heroku/ech-enable-logging-and-monitoring.md index 34dc5ff16..50c6b171d 100644 --- a/raw-migrated-files/cloud/cloud-heroku/ech-enable-logging-and-monitoring.md +++ b/raw-migrated-files/cloud/cloud-heroku/ech-enable-logging-and-monitoring.md @@ -115,7 +115,7 @@ Elasticsearch Add-On for Heroku manages the installation and configuration of th To enable monitoring on your deployment: 1. Log in to the [Elasticsearch Add-On for Heroku console](https://cloud.elastic.co?page=docs&placement=docs-body). -2. On the deployments page, select your deployment. +2. On the **Deployments** page, select your deployment. Narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. @@ -146,7 +146,7 @@ Enabling logs and monitoring requires some extra resource on a deployment. For p With monitoring enabled for your deployment, you can access the [logs](https://www.elastic.co/guide/en/kibana/current/observability.html) and [stack monitoring](../../../deploy-manage/monitor/monitoring-data/visualizing-monitoring-data.md) through Kibana. 1. Log in to the [Elasticsearch Add-On for Heroku console](https://cloud.elastic.co?page=docs&placement=docs-body). -2. On the deployments page, select your deployment. +2. On the **Deployments** page, select your deployment. Narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. @@ -214,7 +214,7 @@ With logging and monitoring enabled for a deployment, metrics are collected for Audit logs are useful for tracking security events on your {{es}} and/or {{kib}} clusters. To enable {{es}} audit logs on your deployment: 1. Log in to the [Elasticsearch Add-On for Heroku console](https://cloud.elastic.co?page=docs&placement=docs-body). -2. On the deployments page, select your deployment. +2. On the **Deployments** page, select your deployment. Narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. diff --git a/raw-migrated-files/cloud/cloud-heroku/ech-getting-started.md b/raw-migrated-files/cloud/cloud-heroku/ech-getting-started.md index 17e9ef97f..b17a27b6d 100644 --- a/raw-migrated-files/cloud/cloud-heroku/ech-getting-started.md +++ b/raw-migrated-files/cloud/cloud-heroku/ech-getting-started.md @@ -2,7 +2,7 @@ This documentation applies to Heroku users who want to make use of the Elasticsearch Add-On for Heroku that is available from the [Heroku Dashboard](https://dashboard.heroku.com/) or that can be installed from the CLI. -The add-on runs on the Elasticsearch Service and provides access to [Elasticsearch](https://www.elastic.co/products/elasticsearch), the open source, distributed, RESTful search engine. Many other features of the Elastic Stack are also readily available to Heroku users through the [Elasticsearch Add-On for Heroku console](https://cloud.elastic.co?page=docs&placement=docs-body) after you install the add-on. For example, you can use Kibana to visualize your Elasticsearch data. +The add-on runs on {{ecloud}} and provides access to [Elasticsearch](https://www.elastic.co/products/elasticsearch), the open source, distributed, RESTful search engine. 
Many other features of the Elastic Stack are also readily available to Heroku users through the [Elasticsearch Add-On for Heroku console](https://cloud.elastic.co?page=docs&placement=docs-body) after you install the add-on. For example, you can use Kibana to visualize your Elasticsearch data. [Elasticsearch Machine Learning](/explore-analyze/machine-learning.md), [Elastic Enterprise Search](https://www.elastic.co/guide/en/enterprise-search/current/index.html), [Elastic APM](/solutions/observability/apps/application-performance-monitoring-apm.md) and [Elastic Fleet Server](asciidocalypse://docs/docs-content/docs/reference/ingestion-tools/fleet/index.md) are not supported by the Elasticsearch Add-On for Heroku. diff --git a/raw-migrated-files/cloud/cloud-heroku/ech-manage-apm-settings.md b/raw-migrated-files/cloud/cloud-heroku/ech-manage-apm-settings.md index ac3621048..e9ba2386e 100644 --- a/raw-migrated-files/cloud/cloud-heroku/ech-manage-apm-settings.md +++ b/raw-migrated-files/cloud/cloud-heroku/ech-manage-apm-settings.md @@ -23,7 +23,7 @@ User settings are appended to the `apm-server.yml` configuration file for your i To add user settings: 1. Log in to the [Elasticsearch Add-On for Heroku console](https://cloud.elastic.co?page=docs&placement=docs-body). -2. On the deployments page, select your deployment. +2. On the **Deployments** page, select your deployment. Narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. diff --git a/raw-migrated-files/cloud/cloud-heroku/ech-manage-kibana-settings.md b/raw-migrated-files/cloud/cloud-heroku/ech-manage-kibana-settings.md index 69908a57a..4a944696f 100644 --- a/raw-migrated-files/cloud/cloud-heroku/ech-manage-kibana-settings.md +++ b/raw-migrated-files/cloud/cloud-heroku/ech-manage-kibana-settings.md @@ -10,7 +10,7 @@ Be aware that some settings that could break your cluster if set incorrectly and To change Kibana settings: 1. Log in to the [Elasticsearch Add-On for Heroku console](https://cloud.elastic.co?page=docs&placement=docs-body). -2. On the deployments page, select your deployment. +2. On the **Deployments** page, select your deployment. Narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. @@ -304,7 +304,7 @@ To learn more, check [configuring Kibana to use OpenID Connect](/deploy-manage/u ### Anonymous authentication [echanonymous_authentication] -If you want to allow anonymous authentication in Kibana, these settings are supported in Elasticsearch Add-On for Heroku. To learn more on how to enable anonymous access, check [Enabling anonymous access](/deploy-manage/users-roles/cluster-or-deployment-auth/anonymous-access.md) and [Configuring Kibana to use anonymous authentication](/deploy-manage/users-roles/cluster-or-deployment-auth/user-authentication.md#anonymous-authentication). +If you want to allow anonymous authentication in Kibana, these settings are supported in Elasticsearch Add-On for Heroku. To learn more on how to enable anonymous access, check [Enabling anonymous access](/deploy-manage/users-roles/cluster-or-deployment-auth/anonymous-access.md) and [Configuring Kibana to use anonymous authentication](/deploy-manage/users-roles/cluster-or-deployment-auth/kibana-authentication.md#anonymous-authentication). 
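To make the anonymous authentication option above more concrete, the Kibana side of the configuration typically pairs an `anonymous` provider with the credentials of a pre-created user that has the desired roles. A sketch, with placeholder credentials:

```sh
xpack.security.authc.providers:
  anonymous.anonymous1:
    order: 0
    credentials:
      username: "anonymous_service_account"
      password: "anonymous_service_account_password"
```

See the linked anonymous access documentation for the {{es}} side of the setup.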
#### Supported versions before 8.0.0 [echsupported_versions_before_8_0_0]

diff --git a/raw-migrated-files/cloud/cloud-heroku/ech-password-reset.md b/raw-migrated-files/cloud/cloud-heroku/ech-password-reset.md
deleted file mode 100644
index 07a1fc5a3..000000000
--- a/raw-migrated-files/cloud/cloud-heroku/ech-password-reset.md
+++ /dev/null
@@ -1,36 +0,0 @@
-# Reset the `elastic` user password [ech-password-reset]
-
-You might need to reset the password for the `elastic` superuser if you cannot authenticate with the `elastic` user ID and are effectively locked out from an Elasticsearch cluster or Kibana.
-
-::::{note}
-Elastic does not manage the `elastic` user and does not have access to the account or its credentials. If you lose the password, you have to reset it.
-::::
-
-
-::::{note}
-Resetting the `elastic` user password does not interfere with Marketplace integrations.
-::::
-
-
-::::{note}
-The `elastic` user should not be used unless you have no other way to access your deployment. [Create API keys for ingesting data](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-filebeat/beats-api-keys.md), and create user accounts with [appropriate roles for user access](../../../deploy-manage/users-roles/cluster-or-deployment-auth/quickstart.md).
-::::
-
-
-To reset the password:
-
-1. Log in to the [Elasticsearch Add-On for Heroku console](https://cloud.elastic.co?page=docs&placement=docs-body).
-2. On the deployments page, select your deployment.
-
-    Narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list.
-
-3. From your deployment menu, go to **Security**.
-4. Select **Reset password**.
-5. Copy down the auto-generated password for the `elastic` user:
-
-    ![The password for the elastic user after resetting](../../../images/cloud-heroku-reset-password.png "")
-
-6. Close the window.
-
-The password is not accessible after you close the window, so if you lose it, you need to reset the password again.
-
diff --git a/raw-migrated-files/cloud/cloud-heroku/ech-saas-metrics-accessing.md b/raw-migrated-files/cloud/cloud-heroku/ech-saas-metrics-accessing.md
index b1003f569..73a5e27d7 100644
--- a/raw-migrated-files/cloud/cloud-heroku/ech-saas-metrics-accessing.md
+++ b/raw-migrated-files/cloud/cloud-heroku/ech-saas-metrics-accessing.md
@@ -7,7 +7,7 @@ For advanced views or production monitoring, [enable logging and monitoring](../
 To access cluster performance metrics:
 
 1. Log in to the [Elasticsearch Add-On for Heroku console](https://cloud.elastic.co?page=docs&placement=docs-body).
-2. On the deployments page, select your deployment.
+2. On the **Deployments** page, select your deployment.
 
     Narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. For example, you might want to select **Is unhealthy** and **Has master problems** to get a short list of deployments that need attention.
 
@@ -116,7 +116,7 @@ Cluster performance metrics are shown per node and are color-coded to indicate w
 
 For clusters that suffer out-of-memory failures, it can be difficult to determine whether the clusters are in a completely healthy state afterwards. For this reason, Elasticsearch Add-On for Heroku automatically reboots clusters that suffer out-of-memory failures.
 
-You will receive an email notification to let you know that a restart occurred.
For repeated alerts, the emails are aggregated so that you do not receive an excessive number of notifications. Either [resizing your cluster to reduce memory pressure](../../../deploy-manage/deploy/elastic-cloud/ech-customize-deployment-components.md#ech-cluster-size) or reducing the workload that a cluster is being asked to handle can help avoid these cluster restarts. +You will receive an email notification to let you know that a restart occurred. For repeated alerts, the emails are aggregated so that you do not receive an excessive number of notifications. Either [resizing your cluster to reduce memory pressure](../../../deploy-manage/deploy/elastic-cloud/configure.md) or reducing the workload that a cluster is being asked to handle can help avoid these cluster restarts. diff --git a/raw-migrated-files/cloud/cloud-heroku/ech-secure-clusters-kerberos.md b/raw-migrated-files/cloud/cloud-heroku/ech-secure-clusters-kerberos.md deleted file mode 100644 index a412bf9d0..000000000 --- a/raw-migrated-files/cloud/cloud-heroku/ech-secure-clusters-kerberos.md +++ /dev/null @@ -1,57 +0,0 @@ -# Secure your clusters with Kerberos [ech-secure-clusters-kerberos] - -You can secure your Elasticsearch clusters and Kibana instances in a deployment by using the Kerberos-5 protocol to authenticate users. - - -## Before you begin [echbefore_you_begin_10] - -The steps in this section require an understanding of Kerberos. To learn more about Kerberos, check our documentation on [configuring Elasticsearch for Kerberos authentication](/deploy-manage/users-roles/cluster-or-deployment-auth/kerberos.md). - - -## Configure the cluster to use Kerberos [ech-configure-kerberos-settings] - -With a custom bundle containing the Kerberos files and changes to the cluster configuration, you can enforce user authentication through the Kerberos protocol. - -1. Create or use an existing deployment that includes a Kibana instance. -2. Create a [custom bundle](../../../deploy-manage/deploy/elastic-cloud/upload-custom-plugins-bundles.md) that contains your `krb5.conf` and `keytab` files, and add it to your cluster. - - ::::{tip} - You should use these exact filenames for Elasticsearch Add-On for Heroku to recognize the file in the bundle. - :::: - -3. Edit your cluster configuration, sometimes also referred to as the deployment plan, to define Kerberos settings as described in [Elasticsearch documentation](/deploy-manage/users-roles/cluster-or-deployment-auth/kerberos.md). - - ```sh - xpack.security.authc.realms.kerberos.cloud-krb: - order: 2 - keytab.path: es.keytab - remove_realm_name: false - ``` - - ::::{important} - The name of the realm must be `cloud-krb`, and the order must be 2: `xpack.security.authc.realms.kerberos.cloud-krb.order: 2` - :::: - -4. Update Kibana in the [user settings configuration](../../../deploy-manage/deploy/elastic-cloud/edit-stack-settings.md) to use Kerberos as the authentication provider: - - ```sh - xpack.security.authc.providers: - kerberos.kerberos1: - order: 0 - ``` - - This configuration disables all other realms and only allows users to authenticate with Kerberos. If you wish to allow your native realm users to authenticate, you need to also enable the `basic` `provider` like this: - - ```sh - xpack.security.authc.providers: - kerberos.kerberos1: - order: 0 - description: "Log in with Kerberos" <1> - basic.basic1: - order: 1 - ``` - - 1. 
This arbitrary string defines how Kerberos login is titled in the Login Selector UI that is shown when you enable multiple authentication providers in Kibana. You can also configure the optional `icon` and `hint` settings for any authentication provider.
-
-5. Use the Kibana endpoint URL to log in.
-
diff --git a/raw-migrated-files/cloud/cloud-heroku/ech-secure-clusters-oidc.md b/raw-migrated-files/cloud/cloud-heroku/ech-secure-clusters-oidc.md
deleted file mode 100644
index 860f44920..000000000
--- a/raw-migrated-files/cloud/cloud-heroku/ech-secure-clusters-oidc.md
+++ /dev/null
@@ -1,232 +0,0 @@
-# Secure your clusters with OpenID Connect [ech-secure-clusters-oidc]
-
-You can secure your deployment using OpenID Connect for single sign-on. OpenID Connect is an identity layer on top of the OAuth 2.0 protocol. The end user identity gets verified by an authorization server, and basic profile information is sent back to the client.
-
-
-## Before you begin [echbefore_you_begin_9]
-
-To prepare for using OpenID Connect to authenticate to your deployments:
-
-* Create or use an existing deployment. Make note of the Kibana endpoint URL; it will be referenced as `KIBANA_ENDPOINT_URL` in the following steps.
-* The steps in this section require a moderate understanding of [OpenID Connect](https://openid.net/specs/openid-connect-core-1_0.md#Authentication) in general and the Authorization Code Grant flow specifically. For more information about OpenID Connect and how it works with the Elastic Stack, check:
-
-    * Our [configuration guide for Elasticsearch](/deploy-manage/users-roles/cluster-or-deployment-auth/openid-connect.md#oidc-elasticsearch-authentication).
-
-
-
-## Configure the OpenID Connect Provider [ech-configure-oidc-provider]
-
-The OpenID *Connect Provider* (OP) is the entity in OpenID Connect that is responsible for authenticating the user and for granting the necessary tokens with the authentication and user information to be consumed by the *Relying Parties* (RP).
-
-In order for Elasticsearch Add-On for Heroku (acting as an RP) to be able to use your OpenID Connect Provider for authentication, a trust relationship needs to be established between the OP and the RP. In the OpenID Connect Provider, this means registering the RP as a client.
-
-The process for registering the Elasticsearch Add-On for Heroku RP will differ from OP to OP, so it is prudent to follow your provider’s relevant documentation. The information about the RP that you commonly need to provide for registration is the following:
-
-`Relying Party Name`
-: An arbitrary identifier for the relying party. Neither the specification nor our implementation impose any constraints on this value.
-
-`Redirect URI`
-: This is the URI where the OP will redirect the user’s browser after authentication. The appropriate value for this is `KIBANA_ENDPOINT_URL/api/security/oidc/callback`. This can also be called the `Callback URI`.
-
-At the end of the registration process, the OP assigns a Client Identifier and a Client Secret for the RP (Elasticsearch Add-On for Heroku) to use. Note these two values, as they are used in the cluster configuration.
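Most OpenID Connect Providers also publish the endpoint values needed in the next section at a discovery URL derived from their Issuer, so it can be worth inspecting it before you start. The host below is only a placeholder:

```sh
curl https://op.example.org/.well-known/openid-configuration
```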
- - -## Configure your cluster to use OpenID Connect [ech-secure-deployment-oidc] - -You’ll need to [add the client secret](../../../deploy-manage/users-roles/cluster-or-deployment-auth/openid-connect.md#ech-oidc-client-secret) to the keystore and then [update the Elasticsearch user settings](../../../deploy-manage/users-roles/cluster-or-deployment-auth/openid-connect.md#ech-oidc-user-settings) to refer to that secret and use the OpenID Connect realm. - - -### Configure the Client Secret [ech-oidc-client-secret] - -Configure the Client Secret that was assigned to the PR by the OP during registration to the Elasticsearch keystore. - -This is a sensitive setting, it won’t be stored in plaintext in the cluster configuration but rather as a secure setting. In order to do so, follow these steps: - -1. On the deployments page, select your deployment. - - Narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. - -2. From your deployment menu, select **Security**. -3. Under the **Elasticsearch keystore** section, select **Add settings**. -4. On the **Create setting** window, select the secret **Type** to be `Single string`. -5. Set the **Setting name**` to `xpack.security.authc.realms.oidc..rp.client_secret` and add the Client Secret you received from the OP during registration in the `Secret` field. - - ::::{note} - `` refers to the name of the OpenID Connect Realm. You can select any name that contains alphanumeric characters, underscores and hyphens. Replace `` with the realm name you selected. - :::: - - - ::::{note} - After you configure the Client Secret, any attempt to restart the deployment will fail until you complete the rest of the configuration steps. If you wish to rollback the OpenID Connect related configuration effort, you need to remove the `xpack.security.authc.realms.oidc..rp.client_secret` that was just added by using the "remove" button by the setting name under `Security keys`. - :::: - -6. You must also edit your cluster configuration, sometimes also referred to as the deployment plan, in order to add the appropriate settings. - - -### Configure the user settings [ech-oidc-user-settings] - -The Elasticsearch cluster needs to be configured to use the OpenID Connect realm for user authentication and to map the applicable roles to the users. If you are using machine learning or a deployment with hot-warm architecture, you must include this OpenID Connect related configuration in the user settings section for each node type. - -1. [Update your Elasticsearch user settings](../../../deploy-manage/deploy/elastic-cloud/edit-stack-settings.md) for the `oidc` realm and specify the relevant configuration: - - ```sh - xpack: - security: - authc: - realms: - oidc: - oidc-realm-name: <1> - order: 2 <2> - rp.client_id: "client-id" <3> - rp.response_type: "code" - rp.redirect_uri: "/api/security/oidc/callback" <4> - op.issuer: "" <5> - op.authorization_endpoint: "" <6> - op.token_endpoint: "" <7> - op.userinfo_endpoint: "" <8> - op.jwkset_path: "" <9> - claims.principal: sub <10> - claims.groups: "http://example.info/claims/groups" <11> - ``` - - 1. Defines the OpenID Connect realm name. The realm name can only contain alphanumeric characters, underscores, and hyphens - 2. The order of the OpenID Connect realm in your authentication chain. Allowed values are between `2` and `100`. Set to `2` unless you plan on configuring multiple SSO realms for this cluster. - 3. 
This, usually opaque, arbitrary string, is the Client Identifier that was assigned to the Elasticsearch Add-On for Heroku RP by the OP upon registration. - 4. Replace `` with the value noted in the previous step - 5. A url, used as a unique identifier for the OP. The value for this setting should be provided by your OpenID Connect Provider. - 6. The URL for the Authorization Endpoint in the OP. This is where the user’s browser will be redirected to start the authentication process. The value for this setting should be provided by your OpenID Connect Provider. - 7. The URL for the Token Endpoint in the OpenID Connect Provider. This is the endpoint where Elasticsearch Add-On for Heroku will send a request to exchange the code for an ID Token, as part of the Authorization Code flow. The value for this setting should be provided by your OpenID Connect Provider. - 8. (Optional) The URL for the UserInfo Endpoint in the OpenID Connect Provider. This is the endpoint of the OP that can be queried to get further user information, if required. The value for this setting should be provided by your OpenID Connect Provider. - 9. The path to a file or an HTTPS URL pointing to a JSON Web Key Set with the key material that the OpenID Connect Provider uses for signing tokens and claims responses. Your OpenID Connect Provider should provide you with this file. - 10. Defines the OpenID Connect claim that is going to be mapped to the principal (username) of the authenticated user in Kibana. In this example, we map the value of the `sub` claim, but this is not a requirement, other claims can be used too. See [the claims mapping documentation](/deploy-manage/users-roles/cluster-or-deployment-auth/openid-connect.md#oidc-elasticsearch-authentication) for details and available options. - 11. Defines the OpenID Connect claim that is going to be used for role mapping. Note that the value `"http://example.info/claims/groups"` that is used here, is an arbitrary example. Check [the claims mapping documentation](/deploy-manage/users-roles/cluster-or-deployment-auth/openid-connect.md#oidc-elasticsearch-authentication) for a very detailed description of how the claim mapping works and how can these be used for role mapping. The name of this claim should be determined by the configuration of your OpenID Connect Provider. NOTE: According to the OpenID Connect specification, the OP should also make their configuration available at a well known URL, which is the concatenation of their `Issuer` value with the `.well-known/openid-configuration` string. To configure the OpenID Connect realm, refer to the `https://op.org.com/.well-known/openid-configuration` documentation. - -2. By default, users authenticating through OpenID Connect have no roles assigned to them. For example, if you want all your users authenticating with OpenID Connect to get access to Kibana, issue the following request to Elasticsearch: - - ```sh - POST /_security/role_mapping/CLOUD_OIDC_TO_KIBANA_ADMIN <1> - { - "enabled": true, - "roles": [ "kibana_admin" ], <2> - "rules": { <3> - "field": { "realm.name": "oidc-realm-name" } <4> - }, - "metadata": { "version": 1 } - } - ``` - - 1. The name of the new role mapping. - 2. The role mapped to the users. - 3. The fields to match against. - 4. The name of the OpenID Connect realm. This needs to be the same value as the one used in the cluster configuration. - -3. 
Update Kibana in the [user settings configuration](../../../deploy-manage/deploy/elastic-cloud/edit-stack-settings.md) to use OpenID Connect as the authentication provider: - - ```sh - xpack.security.authc.providers: - oidc.oidc1: - order: 0 - realm: oidc-realm-name <1> - ``` - - 1. The name of the OpenID Connect realm. This needs to be the same value as the one used in the cluster configuration. - - - This configuration disables all other realms and only allows users to authenticate with OpenID Connect. If you wish to allow your native realm users to authenticate, you need to also enable the `basic` `provider` like this: - - ```sh - xpack.security.authc.providers: - oidc.oidc1: - order: 0 - realm: oidc-realm-name - description: "Log in with my OpenID Connect" <1> - basic.basic1: - order: 1 - ``` - - 1. This arbitrary string defines how OpenID Connect login is titled in the Login Selector UI that is shown when you enable multiple authentication providers in Kibana. If you have a Kibana instance, you can also configure the optional `icon` and `hint` settings for any authentication provider. - -4. Optional: If your OpenID Connect Provider doesn’t publish its JWKS at an https URL, or if you want to use a local copy, you can upload the JWKS as a file. - - 1. Prepare a ZIP file with a [custom bundle](../../../deploy-manage/deploy/elastic-cloud/upload-custom-plugins-bundles.md) that contains your OpenID Connect Provider’s JWKS file (`op_jwks.json`) inside of an `oidc` folder. - - This bundle allows all Elasticsearch containers to access the metadata file. - - 2. Update your Elasticsearch cluster on the [deployments page](../../../deploy-manage/deploy/elastic-cloud/add-plugins-provided-with-elastic-cloud-hosted.md) to use the bundle you prepared in the previous step. - - - Custom bundles are unzipped under the path `/app/config/BUNDLE_DIRECTORY_STRUCTURE`, where `BUNDLE_DIRECTORY_STRUCTURE` is the directory structure in the ZIP file. Make sure to save the file location where custom bundles get unzipped, as you will need it in the next step. - - In our example, the OpenID Connect Provider JWK set file will be located in the path `/app/config/oidc/op_jwks.json`: - - ```sh - $ tree . - . - └── oidc - └── op_jwks.json - ``` - - 3. Adjust your `oidc` realm configuration accordingly: - - ```sh - xpack: - security: - authc: - realms: - oidc: - oidc-realm-name: - ... - op.jwks_path: /app/config/oidc/op_jwks.json <1> - ``` - - 1. The path to the JWKS file that was uploaded - - - -## Configure SSL [ech-oidc-ssl-configuration] - -OpenID Connect depends on TLS to provider security properties such as encryption in transit and endpoint authentication. The RP is required to establish back-channel communication with the OP in order to exchange the code for an ID Token during the Authorization code grant flow and in order to get additional user information from the UserInfo endpoint. As such, it is important that Elasticsearch Add-On for Heroku can validate and trust the server certificate that the OP uses for TLS. Since the system truststore is used for the client context of outgoing https connections, if your OP is using a certificate from a trusted CA, no additional configuration is needed. - -However, if your OP uses a certificate that is issued for instance, by a CA used only in your Organization, you must configure Elasticsearch Add-On for Heroku to trust that CA. - -1. 
Prepare a ZIP file with a [custom bundle](../../../deploy-manage/deploy/elastic-cloud/upload-custom-plugins-bundles.md) that contains the CA certificate (`company-ca.pem`) that signed the certificate your OpenID Connect Provider uses for TLS inside of an `oidc-tls` folder -2. Update your Elasticsearch cluster on the [deployments page](../../../deploy-manage/deploy/elastic-cloud/add-plugins-provided-with-elastic-cloud-hosted.md) to use the bundle you prepared in the previous step. - - - Custom bundles are unzipped under the path `/app/config/BUNDLE_DIRECTORY_STRUCTURE`, where `BUNDLE_DIRECTORY_STRUCTURE` is the directory structure in the ZIP file. Make sure to save the file location where custom bundles get unzipped, as you will need it in the next step. - - In our example, the CA certificate file will be located in the path `/app/config/oidc-tls/company-ca.pem`: - - ```sh - $ tree . - . - └── oidc-tls - └── company-ca.pem - ``` - -3. Adjust your `oidc` realm configuration accordingly: - - ```sh - xpack: - security: - authc: - realms: - oidc: - oidc-realm-name: - ... - ssl.certificate_authorities: ["/app/config/oidc-tls/company-ca.pem"] <1> - ``` - - 1. The path where the CA Certificate file was uploaded - - - -## Optional Settings [ech-oidc-optional-settings] - -The following optional oidc realm settings are supported and can be set if needed: - -* `op.endsession_endpoint` The URL to the End Session Endpoint in the OpenID Connect Provider. This is the endpoint where the user’s browser will be redirected after local logout, if the realm is configured for RP initiated Single Logout and the OP supports it. The value for this setting should be provided by your OpenID Connect Provider. -* `rp.post_logout_redirect_uri` The Redirect URL where the OpenID Connect Provider should redirect the user after a successful Single Logout. This should be set to a value that will not trigger a new OpenID Connect Authentication, `/security/logged_out` is a good choice for this parameter. -* `rp.signature_algorithm` The signature algorithm that will be used by {{es}} in order to verify the signature of the ID tokens it will receive from the OpenID Connect Provider. Defaults to `RSA256`. -* `rp.requested_scopes` The scope values that will be requested by the OpenID Connect Provider as part of the Authentication Request. Defaults to `openid`, which is the only required scope for authentication. If your use case requires that you receive additional claims, you might need to request additional scopes, one of `profile`, `email`, `address`, `phone`. Note that `openid` should always be included in the list of requested scopes. - diff --git a/raw-migrated-files/cloud/cloud-heroku/ech-securing-clusters-JWT.md b/raw-migrated-files/cloud/cloud-heroku/ech-securing-clusters-JWT.md deleted file mode 100644 index bfa9fec67..000000000 --- a/raw-migrated-files/cloud/cloud-heroku/ech-securing-clusters-JWT.md +++ /dev/null @@ -1,103 +0,0 @@ -# Secure your clusters with JWT [ech-securing-clusters-JWT] - -These steps show how you can secure your Elasticsearch clusters in a deployment by using a JSON Web Token (JWT) realm for authentication. - - -## Before you begin [echbefore_you_begin_11] - -Elasticsearch Add-On for Heroku supports JWT of ID Token format with Elastic Stack version 8.2 and later. Support for JWT of certain access token format is available since 8.7. 
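Before configuring a realm, it can help to see how a client presents a JWT to {{es}} once the realm is enabled: the token itself goes in the `Authorization` header and, because the realms below use `client_authentication.type: "shared_secret"`, the shared secret goes in the `ES-Client-Authentication` header. A sketch with placeholder values:

```sh
curl -H "Authorization: Bearer JWT_VALUE" \
     -H "ES-Client-Authentication: SharedSecret SHARED_SECRET_VALUE" \
     "https://ELASTICSEARCH_ENDPOINT/_security/_authenticate"
```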
- - -## Configure your 8.2 or above cluster to use JWT of ID Token format [echconfigure_your_8_2_or_above_cluster_to_use_jwt_of_id_token_format] - -```sh -xpack: - security: - authc: - realms: - jwt: <1> - jwt-realm-name: <2> - order: 2 <3> - client_authentication.type: "shared_secret" <4> - allowed_signature_algorithms: "HS256,HS384,HS512,RS256,RS384,RS512,ES256,ES384,ES512,PS256,PS384,PS512" <5> - allowed_issuer: "issuer1" <6> - allowed_audiences: "elasticsearch1,elasticsearch2" <7> - claims.principal: "sub" <8> - claims.groups: "groups" <9> -``` - -1. Specifies the authentication realm service. -2. Defines the JWT realm name. -3. The order of the JWT realm in your authentication chain. Allowed values are between `2` and `100`, inclusive. -4. Defines the client authentication type. -5. Defines the JWT `alg` header values allowed by the realm. -6. Defines the JWT `iss` claim value allowed by the realm. -7. Defines the JWT `aud` claim values allowed by the realm. -8. Defines the JWT claim name used for the principal (username). No default. -9. Defines the JWT claim name used for the groups. No default. - - -By default, users authenticating through JWT have no roles assigned to them. If you want all users in the group `elasticadmins` in your identity provider to be assigned the `superuser` role in your Elasticsearch cluster, issue the following request to Elasticsearch: - -```sh -POST /_security/role_mapping/CLOUD_JWT_ELASTICADMIN_TO_SUPERUSER <1> -{ - "enabled": true, - "roles": [ "superuser" ], <2> - "rules": { "all" : [ <3> - { "field": { "realm.name": "jwt-realm-name" } }, <4> - { "field": { "groups": "elasticadmins" } } - ]}, - "metadata": { "version": 1 } -} -``` - -1. The mapping name. -2. The Elastic Stack role to map to. -3. A rule specifying the JWT role to map from. -4. `realm.name` can be any string containing only alphanumeric characters, underscores, and hyphens. - - -::::{note} -In order to use the field `groups` in the mapping rule, you need to have mapped the JWT claim that conveys the group membership to `claims.groups` in the previous step. -:::: - - - -## Configure your 8.7 or above cluster to use JWT of access token format [echconfigure_your_8_7_or_above_cluster_to_use_jwt_of_access_token_format] - -```sh -xpack: - security: - authc: - realms: - jwt: - jwt-realm-name: - order: 2 - token_type: "access_token" <1> - client_authentication.type: "shared_secret" - allowed_signature_algorithms: [ "RS256", "HS256" ] - allowed_subjects: [ "123456-compute@developer.example.com" ] <2> - allowed_issuer: "issuer1" - allowed_audiences: [ "elasticsearch1", "elasticsearch2" ] - required_claims: <3> - token_use: "access" - fallback_claims.sub: "client_id" <4> - fallback_claims.aud: "scope" <5> - claims.principal: "sub" <6> - claims.groups: "groups" -``` - -1. Specifies the token type accepted by this JWT realm. -2. Specifies subjects allowed by the realm. This setting is mandatory for `access_token` JWT realms. -3. Additional claims required for successful authentication. The claim name can be any valid variable name, and the claim values must be either a string or an array of strings. -4. The name of the JWT claim to extract the subject information if the `sub` claim does not exist. This setting is only available for `access_token` JWT realms. -5. The name of the JWT claim to extract the audiences information if the `aud` claim does not exist. This setting is only available for `access_token` JWT realms. -6.
Since the fallback claim for `sub` is defined as `client_id`, the principal will also be extracted from `client_id` if the `sub` claim does not exist - - -::::{note} -Refer to [JWT authentication documentation](/deploy-manage/users-roles/cluster-or-deployment-auth/jwt.md) for more details and examples. -:::: - - diff --git a/raw-migrated-files/cloud/cloud-heroku/ech-securing-clusters-SAML.md b/raw-migrated-files/cloud/cloud-heroku/ech-securing-clusters-SAML.md deleted file mode 100644 index 94470d3f3..000000000 --- a/raw-migrated-files/cloud/cloud-heroku/ech-securing-clusters-SAML.md +++ /dev/null @@ -1,176 +0,0 @@ -# Secure your clusters with SAML [ech-securing-clusters-SAML] - -These steps show how you can secure your Elasticsearch clusters and Kibana instances in a deployment by using a Security Assertion Markup Language (SAML) identity provider (IdP) for cross-domain, single sign-on authentication. - - -## Configure your 8.0 or above cluster to use SAML [echconfigure_your_8_0_or_above_cluster_to_use_saml] - -You must edit your cluster configuration, sometimes also referred to as the deployment plan, to point to the SAML IdP before you can complete the configuration in Kibana. If you are using machine learning or a deployment with hot-warm architecture, you must include this SAML IdP configuration in the user settings section for each node type. - -1. Create or use an existing deployment that includes a Kibana instance. -2. Copy the Kibana endpoint URL. -3. $$$step-3$$$[Update your Elasticsearch user settings](../../../deploy-manage/deploy/elastic-cloud/edit-stack-settings.md) for the `saml` realm and specify your IdP provider configuration: - - ```sh - xpack: - security: - authc: - realms: - saml: <1> - saml-realm-name: <2> - order: 2 <3> - attributes.principal: "nameid:persistent" <4> - attributes.groups: "groups" <5> - idp.metadata.path: "" <6> - idp.entity_id: "" <7> - sp.entity_id: "KIBANA_ENDPOINT_URL/" <8> - sp.acs: "KIBANA_ENDPOINT_URL/api/security/saml/callback" - sp.logout: "KIBANA_ENDPOINT_URL/logout" - ``` - - 1. Specifies the authentication realm service. - 2. Defines the SAML realm name. The SAML realm name can only contain alphanumeric characters, underscores, and hyphens. - 3. The order of the SAML realm in your authentication chain. Allowed values are between `2` and `100`. Set to `2` unless you plan on configuring multiple SSO realms for this cluster. - 4. Defines the SAML attribute that is going to be mapped to the principal (username) of the authenticated user in Kibana. In this non-normative example, `nameid:persistent` maps the `NameID` with the `urn:oasis:names:tc:SAML:2.0:nameid-format:persistent` format from the Subject of the SAML Assertion. You can use any SAML attribute that carries the necessary value for your use case in this setting, such as `uid` or `mail`. Refer to [the attribute mapping documentation](/deploy-manage/users-roles/cluster-or-deployment-auth/saml.md#saml-attributes-mapping) for details and available options. - 5. Defines the SAML attribute used for role mapping when configured in Kibana. Common choices are `groups` or `roles`. The values for both `attributes.principal` and `attributes.groups` depend on the IdP provider, so be sure to review their documentation. Refer to [the attribute mapping documentation](/deploy-manage/users-roles/cluster-or-deployment-auth/saml.md#saml-attributes-mapping) for details and available options. - 6. 
The file path or the HTTPS URL where your IdP metadata is available, such as `https://idpurl.com/sso/saml/metadata`. If you configure a URL, ensure that your Elasticsearch cluster can access it. - 7. The SAML EntityID of your IdP. This can be read from the configuration page of the IdP, or its SAML metadata, such as `https://idpurl.com/entity_id`. - 8. Replace `KIBANA_ENDPOINT_URL` with the one noted in the previous step, such as `sp.entity_id: https://eddac6b924f5450c91e6ecc6d247b514.us-east-1.aws.found.io:443/` including the trailing slash. - -4. By default, users authenticating through SAML have no roles assigned to them. For example, if you want all your users authenticating with SAML to get access to Kibana, issue the following request to Elasticsearch: - - ```sh - POST /_security/role_mapping/CLOUD_SAML_TO_KIBANA_ADMIN <1> - { - "enabled": true, - "roles": [ "kibana_admin" ], <2> - "rules": { <3> - "field": { "realm.name": "saml-realm-name" } <4> - }, - "metadata": { "version": 1 } - } - ``` - - 1. The mapping name. - 2. The Elastic Stack role to map to. - 3. A rule specifying the SAML role to map from. - 4. `realm.name` can be any string containing only alphanumeric characters, underscores, and hyphens. - -5. Alternatively, if you want the users that belong to the group `elasticadmins` in your identity provider to be assigned the `superuser` role in your Elasticsearch cluster, issue the following request to Elasticsearch: - - ```sh - POST /_security/role_mapping/CLOUD_SAML_ELASTICADMIN_TO_SUPERUSER <1> - { - "enabled": true, - "roles": [ "superuser" ], <2> - "rules": { "all" : [ <3> - { "field": { "realm.name": "saml-realm-name" } }, <4> - { "field": { "groups": "elasticadmins" } } - ]}, - "metadata": { "version": 1 } - } - ``` - - 1. The mapping name. - 2. The Elastic Stack role to map to. - 3. A rule specifying the SAML role to map from. - 4. `realm.name` can be any string containing only alphanumeric characters, underscores, and hyphens. - - - ::::{note} - In order to use the field `groups` in the mapping rule, you need to have mapped the SAML Attribute that conveys the group membership to `attributes.groups` in the previous step. - :::: - -6. Update Kibana in the [user settings configuration](../../../deploy-manage/deploy/elastic-cloud/edit-stack-settings.md) to use SAML as the authentication provider: - - ```sh - xpack.security.authc.providers: - saml.saml1: - order: 0 - realm: saml-realm-name <1> - ``` - - 1. The name of the SAML realm that you have configured earlier, for instance `saml-realm-name`. The SAML realm name can only contain alphanumeric characters, underscores, and hyphens. - - - This configuration disables all other realms and only allows users to authenticate with SAML. If you wish to allow your native realm users to authenticate, you need to also enable the `basic` `provider` like this: - - ```sh - xpack.security.authc.providers: - saml.saml1: - order: 0 - realm: saml-realm-name - description: "Log in with my SAML" <1> - basic.basic1: - order: 1 - ``` - - 1. This arbitrary string defines how SAML login is titled in the Login Selector UI that is shown when you enable multiple authentication providers in Kibana. You can also configure the optional `icon` and `hint` settings for any authentication provider. - - - -7. Optional: Generate SAML metadata for the Service Provider. - - The SAML 2.0 specification provides a mechanism for Service Providers to describe their capabilities and configuration using a metadata file.
If your SAML Identity Provider requires or allows you to configure it to trust the Elastic Stack Service Provider through the use of a metadata file, you can generate the SAML metadata by issuing the following request to Elasticsearch: - - ```console - GET /_security/saml/metadata/realm_name <1> - ``` - - 1. The name of the SAML realm in Elasticsearch. - - You can generate the SAML metadata by issuing the API request to Elasticsearch and storing the metadata as an XML file using tools like `jq`. - - The following command, for example, generates the metadata for the SAML realm `saml1` and saves it to the `metadata.xml` file: - - ```console - curl -X GET -H "Content-Type: application/json" -u user_name:password https://ELASTICSEARCH_ENDPOINT:443/_security/saml/metadata/saml1 | jq -r '.[]' > metadata.xml <1> - ``` - - 1. The Elasticsearch endpoint for the given deployment where the `saml1` realm is configured. Replace `ELASTICSEARCH_ENDPOINT` with your deployment’s Elasticsearch endpoint. - -8. Optional: If your Identity Provider doesn’t publish its SAML metadata at an HTTP URL, or if your Elasticsearch cluster cannot reach that URL, you can upload the SAML metadata as a file. - - 1. Prepare a ZIP file with a [custom bundle](../../../deploy-manage/deploy/elastic-cloud/upload-custom-plugins-bundles.md) that contains your Identity Provider’s metadata (`metadata.xml`) inside of a `saml` folder. - - This bundle allows all Elasticsearch containers to access the metadata file. - - 2. Update your Elasticsearch cluster on the [deployments page](../../../deploy-manage/deploy/elastic-cloud/add-plugins-provided-with-elastic-cloud-hosted.md) to use the bundle you prepared in the previous step. - - - Custom bundles are unzipped under the path `/app/config/BUNDLE_DIRECTORY_STRUCTURE`, where `BUNDLE_DIRECTORY_STRUCTURE` is the directory structure in the ZIP file. Make sure to save the file location where custom bundles get unzipped, as you will need it in the next step. - - In our example, the SAML metadata file will be located in the path `/app/config/saml/metadata.xml`: - - ```sh - $ tree . - . - └── saml - └── metadata.xml - ``` - - 3. Adjust your `saml` realm configuration accordingly: - - ```sh - idp.metadata.path: /app/config/saml/metadata.xml <1> - ``` - - 1. The path to the SAML metadata file that was uploaded. - -9. Use the Kibana endpoint URL to log in. - - -## Configure your 7.x cluster to use SAML [ech-7x-saml] - -For 7.x deployments, the instructions are similar to those for 8.x, but your Elasticsearch request should use `POST /_security/role_mapping/CLOUD_SAML_TO_KIBANA_ADMIN` (for Step 4) or `POST /_security/role_mapping/CLOUD_SAML_ELASTICADMIN_TO_SUPERUSER` (for Step 5). - -All of the other steps are the same. - - diff --git a/raw-migrated-files/cloud/cloud-heroku/ech-security.md b/raw-migrated-files/cloud/cloud-heroku/ech-security.md index 9eebeec8c..fdbe40c6c 100644 --- a/raw-migrated-files/cloud/cloud-heroku/ech-security.md +++ b/raw-migrated-files/cloud/cloud-heroku/ech-security.md @@ -7,7 +7,7 @@ The security of Elasticsearch Add-On for Heroku is described on the [{{ecloud}} * Reset the [`elastic` user password](../../../deploy-manage/users-roles/cluster-or-deployment-auth/built-in-users.md).
* Use third-party authentication providers and services like [SAML](../../../deploy-manage/users-roles/cluster-or-deployment-auth/saml.md), [OpenID Connect](../../../deploy-manage/users-roles/cluster-or-deployment-auth/openid-connect.md), or [Kerberos](../../../deploy-manage/users-roles/cluster-or-deployment-auth/kerberos.md) to provide dynamic [role mappings](../../../deploy-manage/users-roles/cluster-or-deployment-auth/mapping-users-groups-to-roles.md) for role based or attribute based access control. * Use {{kib}} Spaces and roles to [secure access to {{kib}}](../../../deploy-manage/users-roles/cluster-or-deployment-auth/quickstart.md). - * Authorize and authenticate service accounts for {{beats}} by [granting access using API keys](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-filebeat/beats-api-keys.md). + * Authorize and authenticate service accounts for {{beats}} by [granting access using API keys](asciidocalypse://docs/beats/docs/reference/filebeat/beats-api-keys.md). * Roles can provide full, or read only, access to your data and can be created in Kibana or directly in Elasticsearch. Check [defining roles](../../../deploy-manage/users-roles/cluster-or-deployment-auth/defining-roles.md) for full details. diff --git a/raw-migrated-files/cloud/cloud-heroku/ech-traffic-filtering-deployment-configuration.md b/raw-migrated-files/cloud/cloud-heroku/ech-traffic-filtering-deployment-configuration.md index f805d72b8..399b710d8 100644 --- a/raw-migrated-files/cloud/cloud-heroku/ech-traffic-filtering-deployment-configuration.md +++ b/raw-migrated-files/cloud/cloud-heroku/ech-traffic-filtering-deployment-configuration.md @@ -120,7 +120,7 @@ This section offers suggestions on how to troubleshoot your traffic filters. Bef ### Review the rule sets associated with a deployment [ech-review-rule-sets] 1. Log in to the [Elasticsearch Add-On for Heroku console](https://cloud.elastic.co?page=docs&placement=docs-body). -2. On the deployments page, select your deployment. +2. On the **Deployments** page, select your deployment. Narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. @@ -135,7 +135,7 @@ On this screen you can view and remove existing filters and attach new filters. To identify which rule sets are automatically applied to new deployments in your account: 1. Log in to the [Elasticsearch Add-On for Heroku console](https://cloud.elastic.co?page=docs&placement=docs-body). -2. On the deployments page, select your deployment. +2. On the **Deployments** page, select your deployment. 3. Under the **Features** tab, open the **Traffic filters** page. 4. You can find the list of traffic filter rule sets. 5. Select each of the rule sets — **Include by default** is checked when this rule set is automatically applied to all new deployments in its region. diff --git a/raw-migrated-files/cloud/cloud-heroku/ech-traffic-filtering-ip.md b/raw-migrated-files/cloud/cloud-heroku/ech-traffic-filtering-ip.md index 09fa4a66e..0da1f519e 100644 --- a/raw-migrated-files/cloud/cloud-heroku/ech-traffic-filtering-ip.md +++ b/raw-migrated-files/cloud/cloud-heroku/ech-traffic-filtering-ip.md @@ -14,7 +14,7 @@ You can combine any rules into a set, so we recommend that you group rules accor To create a rule set: 1. Log in to the [Elasticsearch Add-On for Heroku console](https://cloud.elastic.co?page=docs&placement=docs-body). -2. On the deployments page, select your deployment. +2. 
On the **Deployments** page, select your deployment. 3. Under the **Features** tab, open the **Traffic filters** page. 4. Select **Create filter**. 5. Select **IP filtering rule set**. @@ -58,7 +58,7 @@ If you want to remove any traffic restrictions from a deployment or delete a rul You can edit a rule set name or change the allowed traffic sources using IPv4, or a range of addresses with CIDR. 1. Log in to the [Elasticsearch Add-On for Heroku console](https://cloud.elastic.co?page=docs&placement=docs-body). -2. On the deployments page, select your deployment. +2. On the **Deployments** page, select your deployment. 3. Under the **Features** tab, open the **Traffic filters** page. 4. Find the rule set you want to edit. 5. Select the **Edit** icon. diff --git a/raw-migrated-files/cloud/cloud-heroku/ech-traffic-filtering-vpc.md b/raw-migrated-files/cloud/cloud-heroku/ech-traffic-filtering-vpc.md index a10f8d8e1..eb6f03094 100644 --- a/raw-migrated-files/cloud/cloud-heroku/ech-traffic-filtering-vpc.md +++ b/raw-migrated-files/cloud/cloud-heroku/ech-traffic-filtering-vpc.md @@ -188,7 +188,7 @@ Having trouble finding your VPC endpoint ID? You can find it in the AWS console. Once you know your VPC endpoint ID you can create a private link traffic filter rule set. 1. Log in to the [Elasticsearch Add-On for Heroku console](https://cloud.elastic.co?page=docs&placement=docs-body). -2. On the deployments page, select your deployment. +2. On the **Deployments** page, select your deployment. 3. Under the **Features** tab, open the **Traffic filters** page. 4. Select **Create filter**. 5. Select **Private link endpoint**. @@ -248,7 +248,7 @@ The settings `xpack.fleet.agents.fleet_server.hosts` and `xpack.fleet.outputs` t You can edit a rule set name or change the VPC endpoint ID. 1. Log in to the [Elasticsearch Add-On for Heroku console](https://cloud.elastic.co?page=docs&placement=docs-body). -2. On the deployments page, select your deployment. +2. On the **Deployments** page, select your deployment. 3. Under the **Features** tab, open the **Traffic filters** page. 4. Find the rule set you want to edit. 5. Select the **Edit** icon. diff --git a/raw-migrated-files/cloud/cloud-heroku/ech-upgrade-deployment.md b/raw-migrated-files/cloud/cloud-heroku/ech-upgrade-deployment.md index 32d45a69c..d2c939215 100644 --- a/raw-migrated-files/cloud/cloud-heroku/ech-upgrade-deployment.md +++ b/raw-migrated-files/cloud/cloud-heroku/ech-upgrade-deployment.md @@ -30,7 +30,7 @@ When upgrading from one recent major Elasticsearch version to the next, we recom * [Upgrade to Elasticsearch 5.x](https://www.elastic.co/guide/en/cloud-heroku/current/ech-upgrading-v5.html) ::::{warning} -If you have a custom plugin installed, you must [update the plugin](../../../deploy-manage/deploy/elastic-cloud/upload-custom-plugins-bundles.md#ech-update-bundles-and-plugins) so that it matches the Elasticsearch version that you are upgrading to. When the custom plugin does not match the Elasticsearch version, the upgrade fails. +If you have a custom plugin installed, you must [update the plugin](../../../deploy-manage/deploy/elastic-cloud/upload-custom-plugins-bundles.md) so that it matches the Elasticsearch version that you are upgrading to. When the custom plugin does not match the Elasticsearch version, the upgrade fails. :::: @@ -42,7 +42,7 @@ To successfully replace and override a plugin which is being upgraded, the `name To upgrade a cluster in Elasticsearch Add-On for Heroku: 1.
Log in to the [Elasticsearch Add-On for Heroku console](https://cloud.elastic.co?page=docs&placement=docs-body). -2. On the deployments page, select your deployment. +2. On the **Deployments** page, select your deployment. Narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. @@ -56,7 +56,7 @@ To upgrade a cluster in Elasticsearch Add-On for Heroku: 7. If you are upgrading to version 6.6 or earlier, major upgrades require a full cluster restart to complete the upgrade process. 8. If you had Kibana enabled, the UI will prompt you to also upgrade Kibana. The Kibana upgrade takes place separately from the Elasticsearch version upgrade and needs to be triggered manually: - 1. On the deployments page, select your deployment. + 1. On the **Deployments** page, select your deployment. 2. From your deployment menu, select **Kibana**. 3. If the button is available, select **Upgrade Kibana**. If the button is not available, Kibana does not need to be upgraded further. 4. Confirm the upgrade. diff --git a/raw-migrated-files/cloud/cloud-heroku/echsign-outgoing-saml-message.md b/raw-migrated-files/cloud/cloud-heroku/echsign-outgoing-saml-message.md deleted file mode 100644 index 206ea4b12..000000000 --- a/raw-migrated-files/cloud/cloud-heroku/echsign-outgoing-saml-message.md +++ /dev/null @@ -1,64 +0,0 @@ -# Sign outgoing SAML messages [echsign-outgoing-saml-message] - -If configured, Elastic Stack will sign outgoing SAML messages. - -As a prerequisite, you need to generate a signing key and a self-signed certificate. You need to share this certificate with your SAML Identity Provider so that it can verify the received messages. The key needs to be unencrypted. The exact procedure is system dependent; for example, you can use `openssl`: - -```sh -openssl req -new -x509 -days 3650 -nodes -sha256 -out saml-sign.crt -keyout saml-sign.key -``` - -Place the files under the `saml` folder and add them to the existing SAML bundle, or [create a new one](../../../deploy-manage/deploy/elastic-cloud/upload-custom-plugins-bundles.md). - -In our example, the certificate and the key will be located in the path `/app/config/saml/saml-sign.{crt,key}`: - -```sh -$ tree . -. -└── saml - ├── saml-sign.crt - └── saml-sign.key -``` - -Make sure that the bundle is included with your deployment. - -Adjust your realm configuration accordingly: - -```sh - signing.certificate: /app/config/saml/saml-sign.crt <1> - signing.key: /app/config/saml/saml-sign.key <2> -``` - -1. The path to the SAML signing certificate that was uploaded. -2. The path to the SAML signing key that was uploaded. - - -When configured with a signing key and certificate, Elastic Stack will sign all outgoing messages (SAML Authentication Requests, SAML Logout Requests, SAML Logout Responses) by default. This behavior can be altered by configuring `signing.saml_messages` appropriately with a comma-separated list of messages to sign. Supported values are `AuthnRequest`, `LogoutRequest`, and `LogoutResponse`, and the default value is `*`. - -For example: - -```sh -xpack: - security: - authc: - realms: - saml: - saml-realm-name: - order: 2 - ... - signing.saml_messages: AuthnRequest <1> - ... -``` - -1. This configuration ensures that only SAML authentication requests will be sent signed to the Identity Provider.
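Optionally, before sharing the signing certificate with your Identity Provider, you can double-check its subject and validity period with `openssl` (assuming the file names used in the example above):

```sh
openssl x509 -in saml-sign.crt -noout -subject -dates
```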
- - -## Optional settings [echoptional_settings] - -The following optional realm settings are supported: - -* `force_authn` Specifies whether to set the `ForceAuthn` attribute when requesting that the IdP authenticate the current user. If set to `true`, the IdP is required to verify the user’s identity, irrespective of any existing sessions they might have. Defaults to `false`. -* `idp.use_single_logout` Indicates whether to utilise the Identity Provider’s `<SingleLogoutService>` (if one exists in the IdP metadata file). Defaults to `true`. - -After completing these steps, you can log in to Kibana by authenticating against your SAML IdP. If you encounter any issues with the configuration, refer to the [SAML troubleshooting page](/troubleshoot/elasticsearch/security/trb-security-saml.md), which contains information about common issues and suggestions for their resolution. - - diff --git a/raw-migrated-files/cloud/cloud/ec-about.md b/raw-migrated-files/cloud/cloud/ec-about.md index dca303eef..3833dbad4 100644 --- a/raw-migrated-files/cloud/cloud/ec-about.md +++ b/raw-migrated-files/cloud/cloud/ec-about.md @@ -1,11 +1,11 @@ -# About Elasticsearch Service [ec-about] +# About {{ech}} [ec-about] The information in this section covers: * [Subscription Levels](../../../deploy-manage/license.md) * [Version Policy](../../../deploy-manage/deploy/elastic-cloud/available-stack-versions.md) -* [Elasticsearch Service Hardware](asciidocalypse://docs/cloud/docs/reference/cloud/cloud-hosted/hardware.md) -* [Elasticsearch Service Regions](asciidocalypse://docs/cloud/docs/reference/cloud/cloud-hosted/regions.md) +* [{{ech}} Hardware](asciidocalypse://docs/cloud/docs/reference/cloud-hosted/hardware.md) +* [{{ech}} Regions](asciidocalypse://docs/cloud/docs/reference/cloud-hosted/regions.md) * [Service Status](../../../deploy-manage/cloud-organization/service-status.md) * [Getting help](../../../troubleshoot/index.md) * [Restrictions and known problems](../../../deploy-manage/deploy/elastic-cloud/restrictions-known-problems.md) diff --git a/raw-migrated-files/cloud/cloud/ec-access-kibana.md b/raw-migrated-files/cloud/cloud/ec-access-kibana.md index 7454c4871..682c66292 100644 --- a/raw-migrated-files/cloud/cloud/ec-access-kibana.md +++ b/raw-migrated-files/cloud/cloud/ec-access-kibana.md @@ -6,10 +6,10 @@ For new Elasticsearch clusters, we automatically create a Kibana instance for yo To access Kibana: -1. Log in to the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body). +1. Log in to the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). 2. On the **Deployments** page, select your deployment. - On the deployments page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. + On the **Deployments** page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. 3. Under **Applications**, select the Kibana **Launch** link and wait for Kibana to open. @@ -37,10 +37,10 @@ If your deployment didn’t include a Kibana instance initially, use these instr To enable Kibana on your deployment: -1. Log in to the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body). -2. Find your deployment on the home page in the Elasticsearch Service card and select **Manage** to access it directly.
Or, select **Hosted deployments** to go to the deployments page to view all of your deployments. +1. Log in to the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). +2. Find your deployment on the home page in the **Hosted deployments** card and select **Manage** to access it directly. Or, select **Hosted deployments** to go to the **Deployments** page to view all of your deployments. - On the deployments page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. + On the **Deployments** page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. 3. From your deployment menu, go to the **Kibana** page. 4. Select **Enable**. diff --git a/raw-migrated-files/cloud/cloud/ec-activity-page.md b/raw-migrated-files/cloud/cloud/ec-activity-page.md index adf4e6f65..caec9c856 100644 --- a/raw-migrated-files/cloud/cloud/ec-activity-page.md +++ b/raw-migrated-files/cloud/cloud/ec-activity-page.md @@ -4,10 +4,10 @@ The deployment **Activity** page gives you a convenient way to follow all config To view the activity for a deployment: -1. Log in to the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body). +1. Log in to the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). 2. On the **Deployments** page, select your deployment. - On the deployments page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. + On the **Deployments** page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. 3. In your deployment menu, select **Activity**. 4. You can: @@ -30,7 +30,7 @@ Summary : A summary of what change was applied, when the change was performed, and how long it took. Applied by -: The user who submitted the configuration change. `System` indicates configuration changes initiated automatically by the Elasticsearch Service platform. +: The user who submitted the configuration change. `System` indicates configuration changes initiated automatically by the {{ecloud}} platform. Actions : Select **Details** for an expanded view of each step in the configuration change, including the start time, end time, and duration. You can select **Reapply** to re-run the configuration change. diff --git a/raw-migrated-files/cloud/cloud/ec-add-user-settings.md b/raw-migrated-files/cloud/cloud/ec-add-user-settings.md index 4952b0577..9c8dcb5eb 100644 --- a/raw-migrated-files/cloud/cloud/ec-add-user-settings.md +++ b/raw-migrated-files/cloud/cloud/ec-add-user-settings.md @@ -1,20 +1,20 @@ # Edit {{es}} user settings [ec-add-user-settings] -Change how {{es}} runs by providing your own user settings. Elasticsearch Service appends these settings to each node’s `elasticsearch.yml` configuration file. +Change how {{es}} runs by providing your own user settings. {{ech}} appends these settings to each node’s `elasticsearch.yml` configuration file. -Elasticsearch Service automatically rejects `elasticsearch.yml` settings that could break your cluster. 
For a list of supported settings, check [Supported {{es}} settings](../../../deploy-manage/deploy/elastic-cloud/edit-stack-settings.md#ec-es-elasticsearch-settings). +{{ech}} automatically rejects `elasticsearch.yml` settings that could break your cluster. For a list of supported settings, check [Supported {{es}} settings](../../../deploy-manage/deploy/elastic-cloud/edit-stack-settings.md#ec-es-elasticsearch-settings). ::::{warning} -You can also update [dynamic cluster settings](../../../deploy-manage/deploy/self-managed/configure-elasticsearch.md#dynamic-cluster-setting) using {{es}}'s [update cluster settings API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-put-settings). However, Elasticsearch Service doesn’t reject unsafe setting changes made using this API. Use with caution. +You can also update [dynamic cluster settings](../../../deploy-manage/deploy/self-managed/configure-elasticsearch.md#dynamic-cluster-setting) using {{es}}'s [update cluster settings API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cluster-put-settings). However, {{ech}} doesn’t reject unsafe setting changes made using this API. Use with caution. :::: To add or edit user settings: -1. Log in to the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body). -2. Find your deployment on the home page in the Elasticsearch Service card and select **Manage** to access it directly. Or, select **Hosted deployments** to go to the deployments page to view all of your deployments. +1. Log in to the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). +2. Find your deployment on the home page in the **Hosted deployments** card and select **Manage** to access it directly. Or, select **Hosted deployments** to go to the **Deployments** page to view all of your deployments. - On the deployments page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. + On the **Deployments** page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. 3. From your deployment menu, go to the **Edit** page. 4. In the **Elasticsearch** section, select **Manage user settings and extensions**. @@ -28,7 +28,7 @@ In some cases, you may get a warning saying "User settings are different across ## Supported {{es}} settings [ec-es-elasticsearch-settings] -Elasticsearch Service supports the following `elasticsearch.yml` settings. +{{ech}} supports the following `elasticsearch.yml` settings. ### General settings [ec_general_settings] diff --git a/raw-migrated-files/cloud/cloud/ec-autoscaling.md b/raw-migrated-files/cloud/cloud/ec-autoscaling.md index 98d8c2612..a1399649a 100644 --- a/raw-migrated-files/cloud/cloud/ec-autoscaling.md +++ b/raw-migrated-files/cloud/cloud/ec-autoscaling.md @@ -40,7 +40,7 @@ Currently, autoscaling behavior is as follows: ::::{note} -For any Elasticsearch Service Elasticsearch component the number of availability zones is not affected by autoscaling. You can always set the number of availability zones manually and the autoscaling mechanism will add or remove capacity per availability zone. +The number of availability zones for each component of your {{ech}} deployments is not affected by autoscaling. 
You can always set the number of availability zones manually and the autoscaling mechanism will add or remove capacity per availability zone. :::: @@ -80,10 +80,10 @@ The following are known limitations and restrictions with autoscaling: To enable or disable autoscaling on a deployment: -1. Log in to the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body). +1. Log in to the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). 2. On the **Deployments** page, select your deployment. - On the deployments page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. + On the **Deployments** page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. 3. In your deployment menu, select **Edit**. 4. Select desired autoscaling configuration for this deployment using **Enable Autoscaling for:** dropdown menu. @@ -98,10 +98,10 @@ When autoscaling has been disabled, you need to adjust the size of data tiers an Each autoscaling setting is configured with a default value. You can adjust these if necessary, as follows: -1. Log in to the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body). +1. Log in to the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). 2. On the **Deployments** page, select your deployment. - On the deployments page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. + On the **Deployments** page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. 3. In your deployment menu, select **Edit**. 4. To update a data tier: diff --git a/raw-migrated-files/cloud/cloud/ec-billing-stop.md b/raw-migrated-files/cloud/cloud/ec-billing-stop.md index 9e932f7d3..10609f59d 100644 --- a/raw-migrated-files/cloud/cloud/ec-billing-stop.md +++ b/raw-migrated-files/cloud/cloud/ec-billing-stop.md @@ -9,10 +9,10 @@ Got a deployment you no longer need and don’t want to be charged for any longe To stop being charged for a deployment: -1. Log in to the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body). -2. Find your deployment on the home page in the Elasticsearch Service card and select **Manage** to access it directly. Or, select **Hosted deployments** to go to the deployments page to view all of your deployments. +1. Log in to the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). +2. Find your deployment on the home page in the **Hosted deployments** card and select **Manage** to access it directly. Or, select **Hosted deployments** to go to the **Deployments** page to view all of your deployments. - On the deployments page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. + On the **Deployments** page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. 3. 
Select **Delete deployment** and confirm the deletion. diff --git a/raw-migrated-files/cloud/cloud/ec-cloud-ingest-data.md b/raw-migrated-files/cloud/cloud/ec-cloud-ingest-data.md index cf0a58e2e..36f8fffcd 100644 --- a/raw-migrated-files/cloud/cloud/ec-cloud-ingest-data.md +++ b/raw-migrated-files/cloud/cloud/ec-cloud-ingest-data.md @@ -143,16 +143,16 @@ One reason for preprocessing your data is to control the structure of the data t ### Data integrity [ec-data-integrity] -Logstash boosts data resiliency for important data that you don’t want to lose. Logstash offers an on-disk [persistent queue (PQ)](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/persistent-queues.md) that absorbs bursts of events without an external buffering mechanism. It attempts to deliver messages stored in the PQ until delivery succeeds at least once. +Logstash boosts data resiliency for important data that you don’t want to lose. Logstash offers an on-disk [persistent queue (PQ)](asciidocalypse://docs/logstash/docs/reference/persistent-queues.md) that absorbs bursts of events without an external buffering mechanism. It attempts to deliver messages stored in the PQ until delivery succeeds at least once. -The Logstash [dead letter queue (DLQ)](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/dead-letter-queues.md) provides on-disk storage for events that Logstash can’t process, giving you a chance to evaluate them. You can use the dead_letter_queue input plugin to easily reprocess DLQ events. +The Logstash [dead letter queue (DLQ)](asciidocalypse://docs/logstash/docs/reference/dead-letter-queues.md) provides on-disk storage for events that Logstash can’t process, giving you a chance to evaluate them. You can use the dead_letter_queue input plugin to easily reprocess DLQ events. ### Data flow [ec-data-flow] If you need to collect data from multiple Beats or Elastic Agents, consider using Logstash as a proxy. Logstash can receive data from multiple endpoints, even on different networks, and send the data on to Elasticsearch through a single firewall rule. You get more security for less work than if you set up individual rules for each endpoint. -Logstash can send to multiple [outputs](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/output-plugins.md) from a single pipeline to help you get the most value from your data. +Logstash can send to multiple [outputs](asciidocalypse://docs/logstash/docs/reference/output-plugins.md) from a single pipeline to help you get the most value from your data. ## Where to go from here [ec-data-ingest-where-to-go] @@ -182,7 +182,7 @@ For users who want to build their own solution, we can help you get started inge [Introduction to Fleet management](asciidocalypse://docs/docs-content/docs/reference/ingestion-tools/fleet/index.md) : {{fleet}} provides a web-based UI in Kibana for centrally managing Elastic Agents and their policies. -[{{ls}} introduction](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/index.md) +[{{ls}} introduction](asciidocalypse://docs/logstash/docs/reference/index.md) : Use {{ls}} to dynamically unify data from disparate sources and normalize the data into destinations of your choice. 
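As a minimal sketch of the Logstash resiliency options described under Data integrity above, both queues are enabled in `logstash.yml`. The setting names are standard {{ls}} settings; the values shown are illustrative and should be tuned for your environment:

```sh
# logstash.yml (illustrative values)
queue.type: persisted            # use the on-disk persistent queue instead of the default in-memory queue
queue.max_bytes: 4gb             # upper bound for the persistent queue on disk
dead_letter_queue.enable: true   # write events that cannot be processed to the dead letter queue
```

Events written to the dead letter queue can later be reprocessed with the `dead_letter_queue` input plugin, as mentioned above.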
@@ -212,7 +212,7 @@ For users who want to build their own solution, we can help you get started inge [{{agent}} processors](asciidocalypse://docs/docs-content/docs/reference/ingestion-tools/fleet/agent-processors.md) : Use the {{agent}} lightweight processors to parse, filter, transform, and enrich data at the source. -[Creating a {{ls}} pipeline](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/creating-logstash-pipeline.md) +[Creating a {{ls}} pipeline](asciidocalypse://docs/logstash/docs/reference/creating-logstash-pipeline.md) : Create a {{ls}} pipeline by stringing together plugins—​inputs, outputs, filters, and sometimes codecs—​in order to process your data during ingestion. diff --git a/raw-migrated-files/cloud/cloud/ec-configuring-keystore.md b/raw-migrated-files/cloud/cloud/ec-configuring-keystore.md index bf99ff565..7c267874d 100644 --- a/raw-migrated-files/cloud/cloud/ec-configuring-keystore.md +++ b/raw-migrated-files/cloud/cloud/ec-configuring-keystore.md @@ -1,6 +1,6 @@ # Secure your settings [ec-configuring-keystore] -Some of the settings that you configure in Elasticsearch Service are sensitive, such as passwords, and relying on file system permissions to protect these settings is insufficient. To protect your sensitive settings, use the Elasticsearch keystore. With the Elasticsearch keystore, you can add a key and its secret value, then use the key in place of the secret value when you configure your sensitive settings. +Some of the settings that you configure in {{ech}} are sensitive, such as passwords, and relying on file system permissions to protect these settings is insufficient. To protect your sensitive settings, use the Elasticsearch keystore. With the Elasticsearch keystore, you can add a key and its secret value, then use the key in place of the secret value when you configure your sensitive settings. There are three types of secrets that you can use: @@ -13,10 +13,10 @@ There are three types of secrets that you can use: Add keys and secret values to the keystore. -1. Log in to the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body). -2. Find your deployment on the home page in the Elasticsearch Service card and select **Manage** to access it directly. Or, select **Hosted deployments** to go to the deployments page to view all of your deployments. +1. Log in to the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). +2. Find your deployment on the home page in the **Hosted deployments** card and select **Manage** to access it directly. Or, select **Hosted deployments** to go to the **Deployments** page to view all of your deployments. - On the deployments page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. + On the **Deployments** page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. 3. From your deployment menu, select **Security**. 4. Locate **Elasticsearch keystore** and select **Add settings**. @@ -34,10 +34,10 @@ Only some settings are designed to be read from the keystore. However, the keyst When your keys and secret values are no longer needed, delete them from the keystore. -1. Log in to the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body). -2. 
Find your deployment on the home page in the Elasticsearch Service card and select **Manage** to access it directly. Or, select **Hosted deployments** to go to the deployments page to view all of your deployments. +1. Log in to the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). +2. Find your deployment on the home page in the **Hosted deployments** card and select **Manage** to access it directly. Or, select **Hosted deployments** to go to the **Deployments** page to view all of your deployments. - On the deployments page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. + On the **Deployments** page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. 3. From your deployment menu, select **Security**. 4. From the **Existing keystores** list, use the delete icon next to the **Setting Name** that you want to delete. diff --git a/raw-migrated-files/cloud/cloud/ec-custom-bundles.md b/raw-migrated-files/cloud/cloud/ec-custom-bundles.md index de5c18b93..61e50495b 100644 --- a/raw-migrated-files/cloud/cloud/ec-custom-bundles.md +++ b/raw-migrated-files/cloud/cloud/ec-custom-bundles.md @@ -87,7 +87,7 @@ Bundles └── MyGeoLite2-City.mmdb ``` - Note that the extension must be `-(City|Country|ASN).mmdb`, and it must be a different name than the original file name `GeoLite2-City.mmdb` which already exists in Elasticsearch Service. To use this bundle, you can refer it in the GeoIP ingest pipeline as `MyGeoLite2-City.mmdb` under `database_file`. + Note that the extension must be `-(City|Country|ASN).mmdb`, and it must have a different name than the original file name `GeoLite2-City.mmdb`, which already exists in {{ech}}. To use this bundle, you can refer to it in the GeoIP ingest pipeline as `MyGeoLite2-City.mmdb` under `database_file`. @@ -95,8 +95,8 @@ Bundles You must upload your files before you can apply them to your cluster configuration: -1. Log in to the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body). -2. Find your deployment on the home page in the Elasticsearch Service card and select **Manage** to access it directly. Or, select **Hosted deployments** to go to the deployments page to view all of your deployments. +1. Log in to the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). +2. Find your deployment on the home page in the **Hosted deployments** card and select **Manage** to access it directly. Or, select **Hosted deployments** to go to the **Deployments** page to view all of your deployments. 3. Under **Features**, select **Extensions**. 4. Select **Upload extension**. 5. Complete the extension fields, including the {{es}} version. @@ -123,10 +123,10 @@ Refer to [Managing plugins and extensions through the API](../../../deploy-manag After uploading your files, you can select to enable them when creating a new {{es}} deployment. For existing deployments, you must update your deployment configuration to use the new files: -1. Log in to the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body). -2. Find your deployment on the home page in the Elasticsearch Service card and select **Manage** to access it directly. Or, select **Hosted deployments** to go to the deployments page to view all of your deployments.
+1. Log in to the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). +2. Find your deployment on the home page in the **Hosted deployments** card and select **Manage** to access it directly. Or, select **Hosted deployments** to go to the **Deployments** page to view all of your deployments. - On the deployments page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. + On the **Deployments** page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. 3. From the **Actions** dropdown, select **Edit deployment**. 4. Select **Manage user settings and extensions**. @@ -155,9 +155,9 @@ To update an extension with a new file version, 1. Prepare a new plugin or bundle. 2. On the **Extensions** page, [upload a new extension](../../../deploy-manage/deploy/elastic-cloud/upload-custom-plugins-bundles.md#ec-add-your-plugin). 3. Make your new files available by uploading them. -4. Find your deployment on the home page in the Elasticsearch Service card and select **Manage** to access it directly. Or, select **Hosted deployments** to go to the deployments page to view all of your deployments. +4. Find your deployment on the home page in the **Hosted deployments** card and select **Manage** to access it directly. Or, select **Hosted deployments** to go to the **Deployments** page to view all of your deployments. - On the deployments page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. + On the **Deployments** page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. 5. From the **Actions** dropdown, select **Edit deployment**. 6. Select **Manage user settings and extensions**. diff --git a/raw-migrated-files/cloud/cloud/ec-custom-repository.md b/raw-migrated-files/cloud/cloud/ec-custom-repository.md index 6d63d5504..cf790ff52 100644 --- a/raw-migrated-files/cloud/cloud/ec-custom-repository.md +++ b/raw-migrated-files/cloud/cloud/ec-custom-repository.md @@ -2,7 +2,7 @@ Specify your own repositories to snapshot to and restore from. This can be useful, for example, to do long-term archiving of old indexes, restore snapshots across Elastic Cloud accounts, or to be certain you have an exit strategy, should you need to move away from our service. -Elasticsearch Service supports these repositories: +{{ech}} supports these repositories: * [Amazon Web Services (AWS)](../../../deploy-manage/tools/snapshot-and-restore/ec-aws-custom-repository.md) * [Google Cloud Storage (GCS)](../../../deploy-manage/tools/snapshot-and-restore/ec-gcs-snapshotting.md) diff --git a/raw-migrated-files/cloud/cloud/ec-customize-deployment.md b/raw-migrated-files/cloud/cloud/ec-customize-deployment.md deleted file mode 100644 index a0683e87a..000000000 --- a/raw-migrated-files/cloud/cloud/ec-customize-deployment.md +++ /dev/null @@ -1,58 +0,0 @@ -# Change your configuration [ec-customize-deployment] - -You might want to change the configuration of your deployment to: - -* Add features, such as machine learning or APM (application performance monitoring). 
-* Increase or decrease capacity by changing the amount of reserved memory and storage for different parts of your deployment. - - ::::{note} - During the free trial, Elasticsearch Service deployments are restricted to a limited size. You can increase the size of your deployments when your trial is converted to a paid subscription. - :::: - -* Enable [autoscaling](../../../deploy-manage/autoscaling.md) so that the available resources for deployment components, such as data tiers and machine learning nodes, adjust automatically as the demands on them change over time. -* Enable high availability, also known as fault tolerance, by adjusting the number of data center availability zones that parts of your deployment run on. -* Upgrade to new versions of {{es}}. You can upgrade from one major version to another, such as from 6.8.23 to 7.17.27, or from one minor version to another, such as 6.1 to 6.2. You can’t downgrade versions. -* Change what plugins are available on your {{es}} cluster. - -With the exception of major version upgrades for Elastic Stack products, Elasticsearch Service can perform configuration changes without having to interrupt your deployment. You can continue searching and indexing. The changes can also be done in bulk. For example: in one action, you can add more memory, upgrade, adjust the number of {{es}} plugins, and adjust the number of availability zones. - -We perform all of these changes by creating instances with the new configurations that join your existing deployment before removing the old ones. For example: if you are changing your {{es}} cluster configuration, we create new {{es}} nodes, recover your indexes, and start routing requests to the new nodes. Only when all new {{es}} nodes are ready do we bring down the old ones. - -By doing it this way, we reduce the risk of making configuration changes. If any of the new instances have problems, the old ones are still there, processing requests. - -::::{note} -If you use a Platform-as-a-Service provider like Heroku, the administration console is slightly different and does not allow you to make changes that will affect the price. That must be done in the platform provider’s add-on system. You can still do things like change the {{es}} version or plugins. -:::: - - -To change your deployment: - -1. Log in to the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body). -2. Find your deployment on the home page in the Elasticsearch Service card and select **Manage** to access it directly. Or, select **Hosted deployments** to go to the deployments page to view all of your deployments. - - On the deployments page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. - -3. From the deployment menu, select **Edit**. -4. Let the user interface guide you through the configuration of your cluster. For a full list of the supported settings, check [What Deployment Settings Are Available?](../../../deploy-manage/deploy/elastic-cloud/ec-configure-deployment-settings.md) - - If you are changing an existing deployment, you can make multiple changes to your {{es}} cluster with a single configuration update, such as changing the capacity and upgrading to a new {{es}} version in one step. - -5. Save your changes. The new configuration takes a few moments to create.
- -Review the changes to your configuration on the **Activity** page, with a tab for {{es}} and one for {{kib}}. - -::::{tip} -If you are creating a new deployment, select **Edit settings** to change the cloud provider, region, hardware profile, and stack version; or select **Advanced settings** for more complex configuration settings. -:::: - - -That’s it! If you haven’t already, [start exploring with {{kib}}](../../../deploy-manage/deploy/elastic-cloud/access-kibana.md), our visualization tool. If you’re not familiar with adding data yet, {{kib}} can show you how to index your data into {{es}}, or try our basic steps for working with [{{es}}](../../../manage-data/data-store/manage-data-from-the-command-line.md). - -::::{tip} -Some features are not available during the 14-day free trial. If a feature is greyed out, [add a credit card](../../../deploy-manage/cloud-organization/billing/add-billing-details.md) to unlock the feature. -:::: - - - - - diff --git a/raw-migrated-files/cloud/cloud/ec-editing-user-settings.md b/raw-migrated-files/cloud/cloud/ec-editing-user-settings.md index 478ea8a25..cada29207 100644 --- a/raw-migrated-files/cloud/cloud/ec-editing-user-settings.md +++ b/raw-migrated-files/cloud/cloud/ec-editing-user-settings.md @@ -1,6 +1,6 @@ # Edit your user settings [ec-editing-user-settings] -From the Elasticsearch Service console you can customize Elasticsearch, Kibana, and related products to suit your needs. These editors append your changes to the appropriate YAML configuration file and they affect all users of that cluster. In each editor you can: +From the {{ecloud}} Console you can customize Elasticsearch, Kibana, and related products to suit your needs. These editors append your changes to the appropriate YAML configuration file and they affect all users of that cluster. In each editor you can: * [Dictate the behavior of Elasticsearch and its security features](../../../deploy-manage/deploy/elastic-cloud/edit-stack-settings.md). * [Manage Kibana’s settings](../../../deploy-manage/deploy/elastic-cloud/edit-stack-settings.md). diff --git a/raw-migrated-files/cloud/cloud/ec-enable-logging-and-monitoring.md b/raw-migrated-files/cloud/cloud/ec-enable-logging-and-monitoring.md index aae765e7a..27c12b67b 100644 --- a/raw-migrated-files/cloud/cloud/ec-enable-logging-and-monitoring.md +++ b/raw-migrated-files/cloud/cloud/ec-enable-logging-and-monitoring.md @@ -7,15 +7,15 @@ The deployment logging and monitoring feature lets you monitor your deployment i Monitoring consists of two components: -* A monitoring and logging agent that is installed on each node in your deployment. The agents collect and index metrics to {{es}}, either on the same deployment or by sending logs and metrics to an external monitoring deployment. Elasticsearch Service manages the installation and configuration of the monitoring agent for you, and you should not modify any of the settings. +* A monitoring and logging agent that is installed on each node in your deployment. The agents collect and index metrics to {{es}}, either on the same deployment or by sending logs and metrics to an external monitoring deployment. {{ech}} manages the installation and configuration of the monitoring agent for you, and you should not modify any of the settings. * The stack monitoring application in Kibana that visualizes the monitoring metrics through a dashboard and the logs application that allows you to search and analyze deployment logs. 
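Once monitoring is enabled (see the steps later in this section), one quick way to confirm that the agent is shipping data is to check for monitoring indices on the deployment that receives the metrics. For example, from Kibana Dev Tools on that deployment (the index pattern is an assumption based on the default `.monitoring-*` naming):

```sh
GET _cat/indices/.monitoring-*?v&s=index
```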
-The steps in this section cover only the enablement of the monitoring and logging features in Elasticsearch Service. For more information on how to use the monitoring features, refer to [Monitor a cluster](../../../deploy-manage/monitor.md). +The steps in this section cover only the enablement of the monitoring and logging features in {{ech}}. For more information on how to use the monitoring features, refer to [Monitor a cluster](../../../deploy-manage/monitor.md). ### Before you begin [ec-logging-and-monitoring-limitations] -Some limitations apply when you use monitoring on Elasticsearch Service. To learn more, check the monitoring [restrictions and limitations](../../../deploy-manage/monitor/stack-monitoring/elastic-cloud-stack-monitoring.md#ec-restrictions-monitoring). +Some limitations apply when you use monitoring on {{ech}}. To learn more, check the monitoring [restrictions and limitations](../../../deploy-manage/monitor/stack-monitoring/elastic-cloud-stack-monitoring.md#ec-restrictions-monitoring). ### Monitoring for production use [ec-logging-and-monitoring-production] @@ -40,13 +40,13 @@ Logs and metrics that get sent to a dedicated monitoring {{es}} deployment [may #### Stack versions 8.0 and above [ec-logging-and-monitoring-retention-8] -When you enable monitoring in Elasticsearch Service, your monitoring indices are retained for a certain period by default. After the retention period has passed, the monitoring indices are deleted automatically. The retention period is configured in the `.monitoring-8-ilm-policy` index lifecycle policy. To view or edit the policy open {{kib}} **Stack management > Data > Index Lifecycle Policies**. +When you enable monitoring in {{ech}}, your monitoring indices are retained for a certain period by default. After the retention period has passed, the monitoring indices are deleted automatically. The retention period is configured in the `.monitoring-8-ilm-policy` index lifecycle policy. To view or edit the policy open {{kib}} **Stack management > Data > Index Lifecycle Policies**. ### Sending monitoring data to itself (self monitoring) [ec-logging-and-monitoring-retention-self-monitoring] $$$ec-logging-and-monitoring-retention-7$$$ -When you enable self-monitoring in Elasticsearch Service, your monitoring indices are retained for a certain period by default. After the retention period has passed, the monitoring indices are deleted automatically. Monitoring data is retained for three days by default or as specified by the [`xpack.monitoring.history.duration` user setting](../../../deploy-manage/deploy/elastic-cloud/edit-stack-settings.md#xpack-monitoring-history-duration). +When you enable self-monitoring in {{ech}}, your monitoring indices are retained for a certain period by default. After the retention period has passed, the monitoring indices are deleted automatically. Monitoring data is retained for three days by default or as specified by the [`xpack.monitoring.history.duration` user setting](../../../deploy-manage/deploy/elastic-cloud/edit-stack-settings.md#xpack-monitoring-history-duration). To retain monitoring indices as is without deleting them automatically, you must disable the [cleaner service](../../../deploy-manage/monitor/stack-monitoring/es-local-exporter.md#local-exporter-cleaner) by adding a disabled local exporter in your cluster settings. 
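As a sketch of what that settings change can look like (the exporter name `my_local_exporter` is just a placeholder, and you can send the request from the {{kib}} Dev Tools console or any HTTP client):

```
PUT /_cluster/settings
{
  "persistent": {
    "xpack.monitoring.exporters.my_local_exporter": {
      "type": "local",
      "enabled": false
    }
  }
}
```

Because nothing deletes the monitoring indices after this change, keep an eye on how much disk space they consume.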
@@ -69,7 +69,7 @@ PUT /_cluster/settings When [monitoring for production use](../../../deploy-manage/monitor/stack-monitoring/elastic-cloud-stack-monitoring.md#ec-logging-and-monitoring-production), where you configure your deployments **to send monitoring data to a dedicated monitoring deployment** for indexing, this retention period does not apply. Monitoring indices on a dedicated monitoring deployment are retained until you remove them. There are three options open to you: -* To enable the automatic deletion of monitoring indices from dedicated monitoring deployments, [enable monitoring](../../../deploy-manage/monitor/stack-monitoring/elastic-cloud-stack-monitoring.md#ec-enable-logging-and-monitoring-steps) on your dedicated monitoring deployment in Elasticsearch Service to send monitoring data to itself. When an {{es}} deployment sends monitoring data to itself, all monitoring indices are deleted automatically after the retention period, regardless of the origin of the monitoring data. +* To enable the automatic deletion of monitoring indices from dedicated monitoring deployments, [enable monitoring](../../../deploy-manage/monitor/stack-monitoring/elastic-cloud-stack-monitoring.md#ec-enable-logging-and-monitoring-steps) on your dedicated monitoring deployment in {{ech}} to send monitoring data to itself. When an {{es}} deployment sends monitoring data to itself, all monitoring indices are deleted automatically after the retention period, regardless of the origin of the monitoring data. * Alternatively, you can enable the cleaner service on the monitoring deployment by creating a local exporter. You can define the retention period at the same time. For example @@ -110,14 +110,14 @@ When sending monitoring data to a deployment, you can configure [Index Lifecycle ### Enable logging and monitoring [ec-enable-logging-and-monitoring-steps] -Elasticsearch Service manages the installation and configuration of the monitoring agent for you. When you enable monitoring on a deployment, you are configuring where the monitoring agent for your current deployment should send its logs and metrics. +{{ech}} manages the installation and configuration of the monitoring agent for you. When you enable monitoring on a deployment, you are configuring where the monitoring agent for your current deployment should send its logs and metrics. To enable monitoring on your deployment: -1. Log in to the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body). -2. Find your deployment on the home page in the Elasticsearch Service card and select **Manage** to access it directly. Or, select **Hosted deployments** to go to the deployments page to view all of your deployments. +1. Log in to the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). +2. Find your deployment on the home page in the **Hosted deployments** card and select **Manage** to access it directly. Or, select **Hosted deployments** to go to the **Deployments** page to view all of your deployments. - On the deployments page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. + On the **Deployments** page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. 3. From your deployment menu, go to the **Logs and metrics** page. 4. Select **Enable**. 
@@ -145,10 +145,10 @@ Enabling logs and monitoring requires some extra resource on a deployment. For p With monitoring enabled for your deployment, you can access the [logs](https://www.elastic.co/guide/en/kibana/current/observability.html) and [stack monitoring](../../../deploy-manage/monitor/monitoring-data/visualizing-monitoring-data.md) through Kibana. -1. Log in to the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body). -2. Find your deployment on the home page in the Elasticsearch Service card and select **Manage** to access it directly. Or, select **Hosted deployments** to go to the deployments page to view all of your deployments. +1. Log in to the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). +2. Find your deployment on the home page in the **Hosted deployments** card and select **Manage** to access it directly. Or, select **Hosted deployments** to go to the **Deployments** page to view all of your deployments. - On the deployments page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. + On the **Deployments** page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. 3. From your deployment menu, go to the **Logs and Metrics** page. 4. Select the corresponding **View** button to check the logs or metrics data. @@ -214,10 +214,10 @@ With logging and monitoring enabled for a deployment, metrics are collected for Audit logs are useful for tracking security events on your {{es}} and/or {{kib}} clusters. To enable {{es}} audit logs on your deployment: -1. Log in to the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body). -2. Find your deployment on the home page in the Elasticsearch Service card and select **Manage** to access it directly. Or, select **Hosted deployments** to go to the deployments page to view all of your deployments. +1. Log in to the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). +2. Find your deployment on the home page in the **Hosted deployments** card and select **Manage** to access it directly. Or, select **Hosted deployments** to go to the **Deployments** page to view all of your deployments. - On the deployments page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. + On the **Deployments** page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. 3. From your deployment menu, go to the **Edit** page. 4. To enable audit logs in {{es}}, in the **Elasticsearch** section select **Manage user settings and extensions**. For deployments with existing user settings, you may have to expand the **Edit elasticsearch.yml** caret for each node instead. 
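    As a minimal sketch, the user setting that enables the {{es}} audit trail is shown below; the optional `events.include` filter is only an example and can be adjusted or omitted:

    ```yaml
    # Turn on the Elasticsearch audit trail for this deployment
    xpack.security.audit.enabled: true
    # Optional: limit which event types are written to the audit log
    xpack.security.audit.logfile.events.include: ["authentication_success", "authentication_failed", "access_denied"]
    ```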
diff --git a/raw-migrated-files/cloud/cloud/ec-faq-getting-started.md b/raw-migrated-files/cloud/cloud/ec-faq-getting-started.md index b6dbed257..ce7f62be1 100644 --- a/raw-migrated-files/cloud/cloud/ec-faq-getting-started.md +++ b/raw-migrated-files/cloud/cloud/ec-faq-getting-started.md @@ -1,30 +1,30 @@ -# Elasticsearch Service FAQ [ec-faq-getting-started] +# {{ech}} FAQ [ec-faq-getting-started] -This frequently-asked-questions list helps you with common questions while you get Elasticsearch Service up and running for the first time. For questions about Elasticsearch Service configuration options or billing, check the [Technical FAQ](../../../deploy-manage/index.md) and the [Billing FAQ](../../../deploy-manage/cloud-organization/billing/billing-faq.md). +This frequently-asked-questions list helps you with common questions while you get {{ech}} up and running for the first time. For questions about {{ech}} configuration options or billing, check the [Technical FAQ](../../../deploy-manage/index.md) and the [Billing FAQ](../../../deploy-manage/cloud-organization/billing/billing-faq.md). -* [What is Elasticsearch Service?](../../../deploy-manage/deploy/elastic-cloud/cloud-hosted.md#faq-what) -* [Is Elasticsearch Service the same as Amazon’s {{es}} Service?](../../../deploy-manage/deploy/elastic-cloud/cloud-hosted.md#faq-aws-difference) -* [Can I run the full Elastic Stack in Elasticsearch Service?](../../../deploy-manage/deploy/elastic-cloud/cloud-hosted.md#faq-full-stack) -* [Can I try Elasticsearch Service for free?](../../../deploy-manage/deploy/elastic-cloud/cloud-hosted.md#faq-trial) +* [What is {{ech}}?](../../../deploy-manage/deploy/elastic-cloud/cloud-hosted.md#faq-what) +* [Is {{ech}} the same as Amazon’s {{es}} Service?](../../../deploy-manage/deploy/elastic-cloud/cloud-hosted.md#faq-aws-difference) +* [Can I run the full Elastic Stack in {{ech}}?](../../../deploy-manage/deploy/elastic-cloud/cloud-hosted.md#faq-full-stack) +* [Can I try {{ech}} for free?](../../../deploy-manage/deploy/elastic-cloud/cloud-hosted.md#faq-trial) * [What if I need to change the size of my {{es}} cluster at a later time?](../../../deploy-manage/deploy/elastic-cloud/cloud-hosted.md#faq-config) * [Do you offer support subscriptions?](../../../deploy-manage/deploy/elastic-cloud/cloud-hosted.md#faq-subscriptions) -* [Where is Elasticsearch Service hosted?](../../../deploy-manage/deploy/elastic-cloud/cloud-hosted.md#faq-where) -* [What is the difference between Elasticsearch Service and the Amazon {{es}} Service?](../../../deploy-manage/deploy/elastic-cloud/cloud-hosted.md#faq-vs-aws) -* [Can I use Elasticsearch Service on platforms other than AWS?](../../../deploy-manage/deploy/elastic-cloud/cloud-hosted.md#faq-aws) +* [Where is {{ech}} hosted?](../../../deploy-manage/deploy/elastic-cloud/cloud-hosted.md#faq-where) +* [What is the difference between {{ech}} and the Amazon {{es}} Service?](../../../deploy-manage/deploy/elastic-cloud/cloud-hosted.md#faq-vs-aws) +* [Can I use {{ech}} on platforms other than AWS?](../../../deploy-manage/deploy/elastic-cloud/cloud-hosted.md#faq-aws) * [Do you offer Elastic’s commercial products?](../../../deploy-manage/deploy/elastic-cloud/cloud-hosted.md#faq-elastic) * [Is my {{es}} cluster protected by X-Pack?](../../../deploy-manage/deploy/elastic-cloud/cloud-hosted.md#faq-x-pack) * [Is there a limit on the number of documents or indexes I can have in my cluster?](../../../deploy-manage/deploy/elastic-cloud/cloud-hosted.md#faq-limit) - $$$faq-what$$$What is Elasticsearch 
Service? - : Elasticsearch Service is hosted and managed {{es}} and {{kib}} brought to you by the creators of {{es}}. Elasticsearch Service is part of Elastic Cloud and ships with features that you can only get from the company behind {{es}}, {{kib}}, {{beats}}, and {{ls}}. {{es}} is a full text search engine that suits a range of uses, from search on websites to big data analytics and more. + $$$faq-what$$$What is {{ech}}? + : {{ech}} is hosted and managed {{es}} and {{kib}} brought to you by the creators of {{es}}. {{ech}} is part of Elastic Cloud and ships with features that you can only get from the company behind {{es}}, {{kib}}, {{beats}}, and {{ls}}. {{es}} is a full text search engine that suits a range of uses, from search on websites to big data analytics and more. - $$$faq-aws-difference$$$Is Elasticsearch Service the same as Amazon’s {{es}} Service? - : Elasticsearch Service is not the same as the Amazon {{es}} service. To learn more about the differences, check our [AWS {{es}} Service](https://www.elastic.co/aws-elasticsearch-service) comparison. + $$$faq-aws-difference$$$Is {{ech}} the same as Amazon’s {{es}} Service? + : {{ech}} is not the same as the Amazon {{es}} service. To learn more about the differences, check our [AWS {{es}} Service](https://www.elastic.co/aws-elasticsearch-service) comparison. - $$$faq-full-stack$$$Can I run the full Elastic Stack in Elasticsearch Service? - : Many of the products that are part of the Elastic Stack are readily available in Elasticsearch Service, including {{es}}, {{kib}}, plugins, and features such as monitoring and security. Use other Elastic Stack products directly with Elasticsearch Service. For example, both Logstash and Beats can send their data to Elasticsearch Service. What is run is determined by the [subscription level](https://www.elastic.co/cloud/as-a-service/subscriptions). + $$$faq-full-stack$$$Can I run the full Elastic Stack in {{ech}}? + : Many of the products that are part of the Elastic Stack are readily available in {{ech}}, including {{es}}, {{kib}}, plugins, and features such as monitoring and security. Use other Elastic Stack products directly with {{ech}}. For example, both Logstash and Beats can send their data to {{ech}}. What is run is determined by the [subscription level](https://www.elastic.co/cloud/as-a-service/subscriptions). - $$$faq-trial$$$Can I try Elasticsearch Service for free? + $$$faq-trial$$$Can I try {{ech}} for free? : Yes, sign up for a 14-day free trial. The trial starts the moment a cluster is created. During the free trial period get access to a deployment to explore Elastic solutions for Search, Observability, Security, or the latest version of the Elastic Stack. @@ -34,24 +34,24 @@ This frequently-asked-questions list helps you with common questions while you g : Scale your clusters both up and down from the user console, whenever you like. The resizing of the cluster is transparently done in the background, and highly available clusters are resized without any downtime. If you scale your cluster down, make sure that the downsized cluster can handle your {{es}} memory requirements. Read more about sizing and memory in [Sizing {{es}}](https://www.elastic.co/blog/found-sizing-elasticsearch). $$$faq-subscriptions$$$Do you offer support? - : Yes, all subscription levels for Elasticsearch Service include support, handled by email or through the Elastic Support Portal. Different subscription levels include different levels of support. 
For the Standard subscription level, there is no service-level agreement (SLA) on support response times. Gold and Platinum subscription levels include an SLA on response times to tickets and dedicated resources. To learn more, check [Getting Help](../../../troubleshoot/index.md). + : Yes, all subscription levels for {{ech}} include support, handled by email or through the Elastic Support Portal. Different subscription levels include different levels of support. For the Standard subscription level, there is no service-level agreement (SLA) on support response times. Gold and Platinum subscription levels include an SLA on response times to tickets and dedicated resources. To learn more, check [Getting Help](../../../troubleshoot/index.md). - $$$faq-where$$$Where is Elasticsearch Service hosted? - : We host our {{es}} clusters on Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure. Check out which [regions we support](asciidocalypse://docs/cloud/docs/reference/cloud/cloud-hosted/regions.md) and what [hardware we use](asciidocalypse://docs/cloud/docs/reference/cloud/cloud-hosted/hardware.md). New data centers are added all the time. + $$$faq-where$$$Where is {{ech}} hosted? + : We host our {{es}} clusters on Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure. Check out which [regions we support](asciidocalypse://docs/cloud/docs/reference/cloud-hosted/regions.md) and what [hardware we use](asciidocalypse://docs/cloud/docs/reference/cloud-hosted/hardware.md). New data centers are added all the time. - $$$faq-vs-aws$$$What is the difference between Elasticsearch Service and the Amazon {{es}} Service? - : Elasticsearch Service is the only hosted and managed {{es}} service built, managed, and supported by the company behind {{es}}, {{kib}}, {{beats}}, and {{ls}}. With Elasticsearch Service, you always get the latest versions of the software. Our service is built on best practices and years of experience hosting and managing thousands of {{es}} clusters in the Cloud and on premise. For more information, check the following Amazon and Elastic {{es}} Service [comparison page](https://www.elastic.co/aws-elasticsearch-service). + $$$faq-vs-aws$$$What is the difference between {{ech}} and the Amazon {{es}} Service? + : {{ech}} is the only hosted and managed {{es}} service built, managed, and supported by the company behind {{es}}, {{kib}}, {{beats}}, and {{ls}}. With {{ech}}, you always get the latest versions of the software. Our service is built on best practices and years of experience hosting and managing thousands of {{es}} clusters in the Cloud and on premise. For more information, check the following Amazon and Elastic {{es}} Service [comparison page](https://www.elastic.co/aws-elasticsearch-service). Please note that there is no formal partnership between Elastic and Amazon Web Services (AWS), and Elastic does not provide any support on the AWS {{es}} Service. - $$$faq-aws$$$Can I use Elasticsearch Service on platforms other than AWS? + $$$faq-aws$$$Can I use {{ech}} on platforms other than AWS? : Yes, create deployments on the Google Cloud Platform and Microsoft Azure. $$$faq-elastic$$$Do you offer Elastic’s commercial products? - : Yes, all Elasticsearch Service customers have access to basic authentication, role-based access control, and monitoring. + : Yes, all {{ech}} customers have access to basic authentication, role-based access control, and monitoring. 
- Elasticsearch Service Gold, Platinum and Enterprise customers get complete access to all the capabilities in X-Pack: + {{ech}} Gold, Platinum and Enterprise customers get complete access to all the capabilities in X-Pack: * Security * Alerting @@ -63,7 +63,7 @@ This frequently-asked-questions list helps you with common questions while you g $$$faq-x-pack$$$Is my Elasticsearch cluster protected by X-Pack? - : Yes, X-Pack security features offer the full power to protect your Elasticsearch Service deployment with basic authentication and role-based access control. + : Yes, X-Pack security features offer the full power to protect your {{ech}} deployment with basic authentication and role-based access control. $$$faq-limit$$$Is there a limit on the number of documents or indexes I can have in my cluster? : No. We do not enforce any artificial limit on the number of indexes or documents you can store in your cluster. diff --git a/raw-migrated-files/cloud/cloud/ec-faq-technical.md b/raw-migrated-files/cloud/cloud/ec-faq-technical.md index 7aa63686b..6685499dc 100644 --- a/raw-migrated-files/cloud/cloud/ec-faq-technical.md +++ b/raw-migrated-files/cloud/cloud/ec-faq-technical.md @@ -1,38 +1,38 @@ # Technical FAQ [ec-faq-technical] -This frequently-asked-questions list answers some of your more common questions about configuring Elasticsearch Service. +This frequently-asked-questions list answers some of your more common questions about configuring {{ech}}. * [Can I implement a Hot-Warm architecture?](../../../deploy-manage/index.md#faq-hw-architecture) * [What about dedicated master nodes?](../../../deploy-manage/index.md#faq-master-nodes) * [Can I use a Custom SSL certificate?](../../../deploy-manage/index.md#faq-ssl) -* [Can Elasticsearch Service autoscale?](../../../deploy-manage/index.md#faq-autoscale) +* [Can {{ech}} autoscale?](../../../deploy-manage/index.md#faq-autoscale) * [Do you support IP sniffing?](../../../deploy-manage/index.md#faq-ip-sniffing) -* [Does Elasticsearch Service support encryption at rest?](../../../deploy-manage/index.md#faq-encryption-at-rest) -* [Can I find the static IP addresses for my endpoints on Elasticsearch Service?](../../../deploy-manage/index.md#faq-static-ip-elastic-cloud) +* [Does {{ech}} support encryption at rest?](../../../deploy-manage/index.md#faq-encryption-at-rest) +* [Can I find the static IP addresses for my endpoints on {{ech}}?](../../../deploy-manage/index.md#faq-static-ip-elastic-cloud) $$$faq-hw-architecture$$$Can I implement a hot-warm architecture? - : [*hot-warm architecture*](https://www.elastic.co/blog/hot-warm-architecture) refers to an Elasticsearch setup for larger time-data analytics use cases with two different types of nodes, hot and warm. Elasticsearch Service supports hot-warm architectures in all of the solutions provided by allowing you to add warm nodes to any of your deployments. + : [*hot-warm architecture*](https://www.elastic.co/blog/hot-warm-architecture) refers to an Elasticsearch setup for larger time-data analytics use cases with two different types of nodes, hot and warm. {{ech}} supports hot-warm architectures in all of the solutions provided by allowing you to add warm nodes to any of your deployments. $$$faq-master-nodes$$$What about dedicated master nodes? 
: [Master nodes](https://www.elastic.co/guide/en/elasticsearch/reference/7.17/modules-node.html#master-node) are responsible for cluster-wide actions, such as creating or deleting an index, tracking which nodes are part of the cluster, and deciding which shards to allocate to which nodes. For clusters that have six or more Elasticsearch nodes, dedicated master-eligible nodes are introduced. When your cluster grows, consider separating dedicated master-eligible nodes from dedicated data nodes. We recommend using at least 4GB RAM for dedicated master nodes. $$$faq-ssl$$$Can I use a Custom SSL certificate? - : We don’t support custom SSL certificates, which means that a custom CNAME for an Elasticsearch Service endpoint such as *mycluster.mycompanyname.com* also is not supported. + : We don’t support custom SSL certificates, which means that a custom CNAME for an {{ech}} endpoint such as *mycluster.mycompanyname.com* also is not supported. - $$$faq-autoscale$$$Can Elasticsearch Service autoscale? - : Elasticsearch Service now supports autoscaling. To learn how to enable it through the console or the API, check [Deployment autoscaling](../../../deploy-manage/autoscaling.md). + $$$faq-autoscale$$$Can {{ech}} autoscale? + : {{ech}} now supports autoscaling. To learn how to enable it through the console or the API, check [Deployment autoscaling](../../../deploy-manage/autoscaling.md). $$$faq-ip-sniffing$$$Do you support IP sniffing? - : IP sniffing is not supported by design and will not return the expected results. We prevent IP sniffing from returning the expected results to improve the security of our underlying Elasticsearch Service infrastructure. + : IP sniffing is not supported by design and will not return the expected results. We prevent IP sniffing from returning the expected results to improve the security of our underlying {{ech}} infrastructure. - $$$faq-encryption-at-rest$$$Does Elasticsearch Service support encryption at rest? - : Yes, encryption at rest (EAR) is enabled in Elasticsearch Service by default. We support EAR for both the data stored in your clusters and the snapshots we take for backup, on all cloud platforms and across all regions. + $$$faq-encryption-at-rest$$$Does {{ech}} support encryption at rest? + : Yes, encryption at rest (EAR) is enabled in {{ech}} by default. We support EAR for both the data stored in your clusters and the snapshots we take for backup, on all cloud platforms and across all regions. You can also bring your own key (BYOK) to encrypt your Elastic Cloud deployment data and snapshots. For more information, check [Encrypt your deployment with a customer-managed encryption key](../../../deploy-manage/security/encrypt-deployment-with-customer-managed-encryption-key.md). Note that the encryption happens at the file system level. -$$$faq-static-ip-elastic-cloud$$$We have requirements around restricting access by adding firewall rules to only allow access to certain IP addresses from our Infosec team. Do you provide static IP addresses for the endpoints on Elasticsearch Service? -: We do provide [static IP ranges](../../../deploy-manage/security/elastic-cloud-static-ips.md), but they should be used with caution as noted in the documentation. IP addresses assigned to cloud resources can change without notice. This could be initiated by cloud providers with no knowledge to us. For this reason, we generally do not recommend that you use firewall rules to allow or restrict certain IP ranges. 
If you do wish to secure communication for deployment endpoints on Elasticsearch Service, please use [Private Link](../../../deploy-manage/security/traffic-filtering.md). However, in situations where using Private Link services do not meet requirements (for example, secure traffic **from** Elastic Cloud), static IP ranges can be used. +$$$faq-static-ip-elastic-cloud$$$We have requirements around restricting access by adding firewall rules to only allow access to certain IP addresses from our Infosec team. Do you provide static IP addresses for the endpoints on {{ech}}? +: We do provide [static IP ranges](../../../deploy-manage/security/elastic-cloud-static-ips.md), but they should be used with caution as noted in the documentation. IP addresses assigned to cloud resources can change without notice. This could be initiated by cloud providers without our knowledge. For this reason, we generally do not recommend that you use firewall rules to allow or restrict certain IP ranges. If you do wish to secure communication for deployment endpoints on {{ech}}, please use [Private Link](../../../deploy-manage/security/traffic-filtering.md). However, in situations where using Private Link does not meet requirements (for example, secure traffic **from** Elastic Cloud), static IP ranges can be used. diff --git a/raw-migrated-files/cloud/cloud/ec-get-help.md b/raw-migrated-files/cloud/cloud/ec-get-help.md index c47aab112..a1065d46d 100644 --- a/raw-migrated-files/cloud/cloud/ec-get-help.md +++ b/raw-migrated-files/cloud/cloud/ec-get-help.md @@ -1,13 +1,13 @@ # Getting help [ec-get-help] -With your Elasticsearch Service subscription, you get access to support from the creators of Elasticsearch, Kibana, Beats, Logstash, and much more. We’re here to help! +With your {{ecloud}} subscription, you get access to support from the creators of Elasticsearch, Kibana, Beats, Logstash, and much more. We’re here to help! ## How do I open a support case? [ec_how_do_i_open_a_support_case] All roads lead to the Elastic Support Portal, where you can access to all your cases, subscriptions, and licenses. -As an Elasticsearch Service customer, you will receive an email with instructions how to log in to the Support Portal, where you can track both current and archived cases. If you are a new customer who just signed up for Elasticsearch Service, it can take a few hours for your Support Portal access to be set up. If you have questions, reach out to us at `support@elastic.co`. +As an {{ecloud}} customer, you will receive an email with instructions on how to log in to the Support Portal, where you can track both current and archived cases. If you are a new customer who just signed up for {{ecloud}}, it can take a few hours for your Support Portal access to be set up. If you have questions, reach out to us at `support@elastic.co`. ::::{note} With the release of the new Support Portal, even if you have an existing account, you might be prompted to update your password. @@ -17,7 +17,7 @@ With the release of the new Support Portal, even if you have an existing account There are three ways you can get to the portal: * Go directly to the Support Portal: [http://support.elastic.co](http://support.elastic.co) -* From the Elasticsearch Service Console: Go to the [Support page](https://cloud.elastic.co/support?page=docs&placement=docs-body) or select the support icon, that looks like a life preserver, on any page in the console.
+* From the {{ecloud}} Console: Go to the [Support page](https://cloud.elastic.co/support?page=docs&placement=docs-body) or select the support icon, that looks like a life preserver, on any page in the console. * Contact us by email: `support@elastic.co` If you contact us by email, please use the email address that you registered with, so that we can help you more quickly. If you are using a distribution list as your registered email, you can also register a second email address with us. Just open a case to let us know the name and email address you would like to be added. @@ -25,35 +25,35 @@ There are three ways you can get to the portal: When opening a case, there are a few things you can do to get help faster: -* Include the deployment ID that you want help with, especially if you have several deployments. The deployment ID can be found on the overview page for your cluster in the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body). +* Include the deployment ID that you want help with, especially if you have several deployments. The deployment ID can be found on the overview page for your cluster in the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). * Describe the problem. Include any relevant details, including error messages you encountered, dates and times when the problem occurred, or anything else you think might be helpful. * Upload any pertinent files. ## What level of support can I expect? [ec_what_level_of_support_can_i_expect] -Support is governed by the [Elasticsearch Service Standard Terms of Service](https://www.elastic.co/legal/terms-of-service/cloud). The level of support you can expect to receive applies to your Elasticsearch Service environment only and depends on your subscription level: +Support is governed by the [{{ecloud}} Standard Terms of Service](https://www.elastic.co/legal/terms-of-service/cloud). The level of support you can expect to receive applies to your {{ecloud}} environment only and depends on your subscription level: -Elasticsearch Service Standard subscriptions -: Support is provided by email or through the Elastic Support Portal. The main focus of support is to ensure your Elasticsearch Service deployment shows a green status and is available. There is no guaranteed initial or ongoing response time, but we do strive to engage on every issue within three business days. We do not offer weekend coverage, so we respond Monday through Friday only. To learn more, check [Working with Elastic Support Elasticsearch Service Standard](https://www.elastic.co/support/welcome/cloud). +{{ecloud}} Standard subscriptions +: Support is provided by email or through the Elastic Support Portal. The main focus of support is to ensure your {{ech}} deployment shows a green status and is available. There is no guaranteed initial or ongoing response time, but we do strive to engage on every issue within three business days. We do not offer weekend coverage, so we respond Monday through Friday only. To learn more, check [Working with Elastic Support {{ecloud}} Standard](https://www.elastic.co/support/welcome/cloud). -Elasticsearch Service Gold and Platinum subscriptions -: Support is handled by email or through the Elastic Support Portal. Provides guaranteed response times for support issues, better support coverage hours, and support contacts at Elastic. Also includes support for how-to and development questions. The exact support coverage depends on whether you are a Gold or Platinum customer. 
To learn more, check [Elasticsearch Service Premium Support Services Policy](https://www.elastic.co/legal/support_policy/cloud_premium). +{{ecloud}} Gold and Platinum subscriptions +: Support is handled by email or through the Elastic Support Portal. Provides guaranteed response times for support issues, better support coverage hours, and support contacts at Elastic. Also includes support for how-to and development questions. The exact support coverage depends on whether you are a Gold or Platinum customer. To learn more, check [{{ecloud}} Premium Support Services Policy](https://www.elastic.co/legal/support_policy/cloud_premium). ::::{note} -If you are in free trial, you are also eligible to get the Elasticsearch Service Standard level support for as long as the trial is active. +If you are in free trial, you are also eligible to get the {{ecloud}} Standard level support for as long as the trial is active. :::: -If you are on an Elasticsearch Service Standard subscription and you are interested in moving to Gold or Platinum support, please [contact us](https://www.elastic.co/cloud/contact). We also recommend that you read our best practices guide for getting the most out of your support experience: [https://www.elastic.co/support/welcome](https://www.elastic.co/support/welcome). +If you are on an {{ecloud}} Standard subscription and you are interested in moving to Gold or Platinum support, please [contact us](https://www.elastic.co/cloud/contact). We also recommend that you read our best practices guide for getting the most out of your support experience: [https://www.elastic.co/support/welcome](https://www.elastic.co/support/welcome). ## Join the community forums [ec_join_the_community_forums] -Elasticsearch, Logstash, and Kibana enjoy the benefit of having vibrant and helpful communities. You have our assurance of high-quality support and single source of truth as an Elasticsearch Service customer, but the Elastic community can also be a useful resource for you whenever you need it. +Elasticsearch, Logstash, and Kibana enjoy the benefit of having vibrant and helpful communities. You have our assurance of high-quality support and single source of truth as an {{ecloud}} customer, but the Elastic community can also be a useful resource for you whenever you need it. ::::{tip} -As of May 1, 2017, support for Elasticsearch Service **Standard** customers has moved from the Discuss forum to our link: [Elastic Support Portal](https://support.elastic.co). You should receive login instructions by email. We will also monitor the forum and help you get into the Support Portal, in case you’re unsure where to go. +As of May 1, 2017, support for {{ecloud}} **Standard** customers has moved from the Discuss forum to our link: [Elastic Support Portal](https://support.elastic.co). You should receive login instructions by email. We will also monitor the forum and help you get into the Support Portal, in case you’re unsure where to go. :::: diff --git a/raw-migrated-files/cloud/cloud/ec-getting-started-existing-email.md b/raw-migrated-files/cloud/cloud/ec-getting-started-existing-email.md deleted file mode 100644 index c1b80f4d5..000000000 --- a/raw-migrated-files/cloud/cloud/ec-getting-started-existing-email.md +++ /dev/null @@ -1,17 +0,0 @@ -# Sign up using an existing email address [ec-getting-started-existing-email] - -Your email address is used to uniquely identify you. 
It can’t be used for more than one Elastic Cloud account, whether that account is a trial account, a standard Elasticsearch Service account, or a subscription account through a marketplace. - -In some situations you may want to create a new Elastic Cloud account using an email address that is already associated with an existing account. For this procedure, it’s assumed that you no longer want to use the original account. - -To sign up to Elastic Cloud using an email address associated with another Elastic Cloud account: - -1. Use your current email address (for example, `my.preferred.address@foobar.com`) to log in to the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body). -2. Follow the steps to [update that email address](../../../cloud-account/update-your-email-address.md) to another email address, such as `my.alternate.address@gmail.com`. - -You can now use the email address from Step 1 to do the following: - -* Sign up for a new account. -* [Join an existing organization](../../../deploy-manage/cloud-organization.md). - -For questions or any problems, contact us at `support@elastic.co`. diff --git a/raw-migrated-files/cloud/cloud/ec-getting-started-node-js.md b/raw-migrated-files/cloud/cloud/ec-getting-started-node-js.md index dc50f9c96..4e9b0fb23 100644 --- a/raw-migrated-files/cloud/cloud/ec-getting-started-node-js.md +++ b/raw-migrated-files/cloud/cloud/ec-getting-started-node-js.md @@ -1,17 +1,17 @@ -# Ingest data with Node.js on Elasticsearch Service [ec-getting-started-node-js] +# Ingest data with Node.js on {{ech}} [ec-getting-started-node-js] This guide tells you how to get started with: -* Securely connecting to Elasticsearch Service with Node.js +* Securely connecting to {{ech}} with Node.js * Ingesting data into your deployment from your application -* Searching and modifying your data on Elasticsearch Service +* Searching and modifying your data on {{ech}} If you are an Node.js application programmer who is new to the Elastic Stack, this content helps you get started more easily. *Time required: 45 minutes* -## Get Elasticsearch Service [ec_get_elasticsearch_service] +## Get {{ech}} [ec_get_elasticsearch_service] 1. [Get a free trial](https://cloud.elastic.co/registration?page=docs&placement=docs-body). 2. Log into [Elastic Cloud](https://cloud.elastic.co?page=docs&placement=docs-body). @@ -20,7 +20,7 @@ If you are an Node.js application programmer who is new to the Elastic Stack, th 5. Select **Create deployment** and save your Elastic deployment credentials. You need these credentials later on. 6. When the deployment is ready, click **Continue** and a page of **Setup guides** is displayed. To continue to the deployment homepage click **I’d like to do something else**. -Prefer not to subscribe to yet another service? You can also get Elasticsearch Service through [AWS, Azure, and GCP marketplaces](../../../deploy-manage/deploy/elastic-cloud/subscribe-from-marketplace.md). +Prefer not to subscribe to yet another service? You can also get {{ech}} through [AWS, Azure, and GCP marketplaces](../../../deploy-manage/deploy/elastic-cloud/subscribe-from-marketplace.md). ## Set up your application [ec_set_up_your_application] @@ -73,9 +73,9 @@ The example here shows what the `config` package expects. You need to update `co ## About connecting securely [ec_about_connecting_securely] -When connecting to Elasticsearch Service use a Cloud ID to specify the connection details. You must pass the Cloud ID that is found in {{kib}} or the cloud console. 
+When connecting to {{ech}} use a Cloud ID to specify the connection details. You must pass the Cloud ID that is found in {{kib}} or the cloud console. -To connect to, stream data to, and issue queries with Elasticsearch Service, you need to think about authentication. Two authentication mechanisms are supported, *API key* and *basic authentication*. Here, to get you started quickly, we’ll show you how to use basic authentication, but you can also generate API keys as shown later on. API keys are safer and preferred for production environments. +To connect to, stream data to, and issue queries with {{ech}}, you need to think about authentication. Two authentication mechanisms are supported, *API key* and *basic authentication*. Here, to get you started quickly, we’ll show you how to use basic authentication, but you can also generate API keys as shown later on. API keys are safer and preferred for production environments. ### Basic authentication [ec_basic_authentication] @@ -150,7 +150,7 @@ async function run() { run().catch(console.log) ``` -When using the [client.index](asciidocalypse://docs/elasticsearch-js/docs/reference/elasticsearch/elasticsearch-client-javascript-api/api-reference.md#_index) API, the request automatically creates the `game-of-thrones` index if it doesn’t already exist, as well as document IDs for each indexed document if they are not explicitly specified. +When using the [client.index](asciidocalypse://docs/elasticsearch-js/docs/reference/api-reference.md#_index) API, the request automatically creates the `game-of-thrones` index if it doesn’t already exist, as well as document IDs for each indexed document if they are not explicitly specified. ## Search and modify data [ec_search_and_modify_data] @@ -197,7 +197,7 @@ async function update() { update().catch(console.log) ``` -This [more comprehensive list of API examples](asciidocalypse://docs/elasticsearch-js/docs/reference/elasticsearch/elasticsearch-client-javascript-api/examples.md) includes bulk operations, checking the existence of documents, updating by query, deleting, scrolling, and SQL queries. To learn more, check the complete [API reference](asciidocalypse://docs/elasticsearch-js/docs/reference/elasticsearch/elasticsearch-client-javascript-api/api-reference.md). +This [more comprehensive list of API examples](asciidocalypse://docs/elasticsearch-js/docs/reference/examples.md) includes bulk operations, checking the existence of documents, updating by query, deleting, scrolling, and SQL queries. To learn more, check the complete [API reference](asciidocalypse://docs/elasticsearch-js/docs/reference/api-reference.md). ## Switch to API key authentication [ec_switch_to_api_key_authentication] @@ -278,17 +278,17 @@ Check [Create API key API](https://www.elastic.co/docs/api/doc/elasticsearch/ope ### Best practices [ec_best_practices] Security -: When connecting to Elasticsearch Service, the client automatically enables both request and response compression by default, since it yields significant throughput improvements. Moreover, the client also sets the SSL option `secureProtocol` to `TLSv1_2_method` unless specified otherwise. You can still override this option by configuring it. +: When connecting to {{ech}}, the client automatically enables both request and response compression by default, since it yields significant throughput improvements. Moreover, the client also sets the SSL option `secureProtocol` to `TLSv1_2_method` unless specified otherwise. You can still override this option by configuring it. 
- Do not enable sniffing when using Elasticsearch Service, since the nodes are behind a load balancer. Elasticsearch Service takes care of everything for you. Take a look at [Elasticsearch sniffing best practices: What, when, why, how](https://www.elastic.co/blog/elasticsearch-sniffing-best-practices-what-when-why-how) if you want to know more. + Do not enable sniffing when using {{ech}}, since the nodes are behind a load balancer. {{ech}} takes care of everything for you. Take a look at [Elasticsearch sniffing best practices: What, when, why, how](https://www.elastic.co/blog/elasticsearch-sniffing-best-practices-what-when-why-how) if you want to know more. Connections -: If your application connecting to Elasticsearch Service runs under the Java security manager, you should at least disable the caching of positive hostname resolutions. To learn more, check the [Java API Client documentation](asciidocalypse://docs/elasticsearch-java/docs/reference/elasticsearch/elasticsearch-client-java-api-client/_others.md). +: If your application connecting to {{ech}} runs under the Java security manager, you should at least disable the caching of positive hostname resolutions. To learn more, check the [Java API Client documentation](asciidocalypse://docs/elasticsearch-java/docs/reference/_others.md). Schema : When the example code was run an index mapping was created automatically. The field types were selected by {{es}} based on the content seen when the first record was ingested, and updated as new fields appeared in the data. It would be more efficient to specify the fields and field types in advance to optimize performance. Refer to the Elastic Common Schema documentation and Field Type documentation when you are designing the schema for your production use cases. Ingest -: For more advanced scenarios, this [bulk ingestion](asciidocalypse://docs/elasticsearch-js/docs/reference/elasticsearch/elasticsearch-client-javascript-api/bulk_examples.md) reference gives an example of the `bulk` API that makes it possible to perform multiple operations in a single call. This bulk example also explicitly specifies document IDs. If you have a lot of documents to index, using bulk to batch document operations is significantly faster than submitting requests individually. +: For more advanced scenarios, this [bulk ingestion](asciidocalypse://docs/elasticsearch-js/docs/reference/bulk_examples.md) reference gives an example of the `bulk` API that makes it possible to perform multiple operations in a single call. This bulk example also explicitly specifies document IDs. If you have a lot of documents to index, using bulk to batch document operations is significantly faster than submitting requests individually. 
diff --git a/raw-migrated-files/cloud/cloud/ec-getting-started-python.md b/raw-migrated-files/cloud/cloud/ec-getting-started-python.md index 0d92358eb..5068b510d 100644 --- a/raw-migrated-files/cloud/cloud/ec-getting-started-python.md +++ b/raw-migrated-files/cloud/cloud/ec-getting-started-python.md @@ -1,10 +1,10 @@ -# Ingest data with Python on Elasticsearch Service [ec-getting-started-python] +# Ingest data with Python on {{ech}} [ec-getting-started-python] This guide tells you how to get started with: -* Securely connecting to Elasticsearch Service with Python +* Securely connecting to {{ech}} with Python * Ingesting data into your deployment from your application -* Searching and modifying your data on Elasticsearch Service +* Searching and modifying your data on {{ech}} If you are an Python application programmer who is new to the Elastic Stack, this content can help you get started more easily. @@ -32,7 +32,7 @@ elasticsearch>=7.0.0,<8.0.0 ``` -## Get Elasticsearch Service [ec_get_elasticsearch_service_2] +## Get {{ech}} [ec_get_elasticsearch_service_2] 1. [Get a free trial](https://cloud.elastic.co/registration?page=docs&placement=docs-body). 2. Log into [Elastic Cloud](https://cloud.elastic.co?page=docs&placement=docs-body). @@ -41,14 +41,14 @@ elasticsearch>=7.0.0,<8.0.0 5. Select **Create deployment** and save your Elastic deployment credentials. You need these credentials later on. 6. When the deployment is ready, click **Continue** and a page of **Setup guides** is displayed. To continue to the deployment homepage click **I’d like to do something else**. -Prefer not to subscribe to yet another service? You can also get Elasticsearch Service through [AWS, Azure, and GCP marketplaces](../../../deploy-manage/deploy/elastic-cloud/subscribe-from-marketplace.md). +Prefer not to subscribe to yet another service? You can also get {{ech}} through [AWS, Azure, and GCP marketplaces](../../../deploy-manage/deploy/elastic-cloud/subscribe-from-marketplace.md). ## Connect securely [ec_connect_securely] -When connecting to Elasticsearch Service you need to use your Cloud ID to specify the connection details. Find your Cloud ID by going to the {{kib}} main menu and selecting Management > Integrations, and then selecting View deployment details. +When connecting to {{ech}} you need to use your Cloud ID to specify the connection details. Find your Cloud ID by going to the {{kib}} main menu and selecting Management > Integrations, and then selecting View deployment details. -To connect to, stream data to, and issue queries with Elasticsearch Service, you need to think about authentication. Two authentication mechanisms are supported, *API key* and *basic authentication*. Here, to get you started quickly, we’ll show you how to use basic authentication, but you can also generate API keys as shown later on. API keys are safer and preferred for production environments. +To connect to, stream data to, and issue queries with {{ech}}, you need to think about authentication. Two authentication mechanisms are supported, *API key* and *basic authentication*. Here, to get you started quickly, we’ll show you how to use basic authentication, but you can also generate API keys as shown later on. API keys are safer and preferred for production environments. 
### Basic authentication [ec_basic_authentication_2] @@ -275,7 +275,7 @@ es.get(index='lord-of-the-rings', id='2EkAzngB_pyHD3p65UMt') 'birthplace': 'The Shire'}} ``` -For frequently used API calls with the Python client, check [Examples](asciidocalypse://docs/elasticsearch-py/docs/reference/elasticsearch/elasticsearch-client-python-api/examples.md). +For frequently used API calls with the Python client, check [Examples](asciidocalypse://docs/elasticsearch-py/docs/reference/examples.md). ## Switch to API key authentication [ec_switch_to_api_key_authentication_2] @@ -315,7 +315,7 @@ POST /_security/api_key } ``` -Edit the `example.ini` file you created earlier and add the `id` and `api_key` you just created. You should also remove the lines for `user` and `password` you added earlier after you have tested the `api_key`, and consider changing the `elastic` password using the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body). +Edit the `example.ini` file you created earlier and add the `id` and `api_key` you just created. You should also remove the lines for `user` and `password` you added earlier after you have tested the `api_key`, and consider changing the `elastic` password using the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). ```sh [DEFAULT] @@ -333,22 +333,22 @@ es = Elasticsearch( ) ``` -Check [Create API key API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-create-api-key) to learn more about API Keys and [Security privileges](../../../deploy-manage/users-roles/cluster-or-deployment-auth/elasticsearch-privileges.md) to understand which privileges are needed. If you are not sure what the right combination of privileges for your custom application is, you can enable [audit logging](../../../deploy-manage/monitor/logging-configuration/enabling-audit-logs.md) on {{es}} to find out what privileges are being used. To learn more about how logging works on Elasticsearch Service, check [Monitoring Elastic Cloud deployment logs and metrics](https://www.elastic.co/blog/monitoring-elastic-cloud-deployment-logs-and-metrics). +Check [Create API key API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-security-create-api-key) to learn more about API Keys and [Security privileges](../../../deploy-manage/users-roles/cluster-or-deployment-auth/elasticsearch-privileges.md) to understand which privileges are needed. If you are not sure what the right combination of privileges for your custom application is, you can enable [audit logging](../../../deploy-manage/monitor/logging-configuration/enabling-audit-logs.md) on {{es}} to find out what privileges are being used. To learn more about how logging works on {{ech}}, check [Monitoring Elastic Cloud deployment logs and metrics](https://www.elastic.co/blog/monitoring-elastic-cloud-deployment-logs-and-metrics). -For more information on refreshing an index, searching, updating, and deleting, check the [elasticsearch-py examples](asciidocalypse://docs/elasticsearch-py/docs/reference/elasticsearch/elasticsearch-client-python-api/examples.md). +For more information on refreshing an index, searching, updating, and deleting, check the [elasticsearch-py examples](asciidocalypse://docs/elasticsearch-py/docs/reference/examples.md). 
### Best practices [ec_best_practices_2] Security -: When connecting to Elasticsearch Service, the client automatically enables both request and response compression by default, since it yields significant throughput improvements. Moreover, the client also sets the SSL option `secureProtocol` to `TLSv1_2_method` unless specified otherwise. You can still override this option by configuring it. +: When connecting to {{ech}}, the client automatically enables both request and response compression by default, since it yields significant throughput improvements. Moreover, the client also sets the SSL option `secureProtocol` to `TLSv1_2_method` unless specified otherwise. You can still override this option by configuring it. - Do not enable sniffing when using Elasticsearch Service, since the nodes are behind a load balancer. Elasticsearch Service takes care of everything for you. Take a look at [Elasticsearch sniffing best practices: What, when, why, how](https://www.elastic.co/blog/elasticsearch-sniffing-best-practices-what-when-why-how) if you want to know more. + Do not enable sniffing when using {{ech}}, since the nodes are behind a load balancer. {{ech}} takes care of everything for you. Take a look at [Elasticsearch sniffing best practices: What, when, why, how](https://www.elastic.co/blog/elasticsearch-sniffing-best-practices-what-when-why-how) if you want to know more. Schema : When the example code is run, an index mapping is created automatically. The field types are selected by {{es}} based on the content seen when the first record was ingested, and updated as new fields appeared in the data. It would be more efficient to specify the fields and field types in advance to optimize performance. Refer to the Elastic Common Schema documentation and Field Type documentation when you design the schema for your production use cases. Ingest -: For more advanced scenarios, [Bulk helpers](asciidocalypse://docs/elasticsearch-py/docs/reference/elasticsearch/elasticsearch-client-python-api/client-helpers.md#bulk-helpers) gives examples for the `bulk` API that makes it possible to perform multiple operations in a single call. If you have a lot of documents to index, using bulk to batch document operations is significantly faster than submitting requests individually. +: For more advanced scenarios, [Bulk helpers](asciidocalypse://docs/elasticsearch-py/docs/reference/client-helpers.md#bulk-helpers) gives examples for the `bulk` API that makes it possible to perform multiple operations in a single call. If you have a lot of documents to index, using bulk to batch document operations is significantly faster than submitting requests individually. diff --git a/raw-migrated-files/cloud/cloud/ec-getting-started-search-use-cases-beats-logstash.md b/raw-migrated-files/cloud/cloud/ec-getting-started-search-use-cases-beats-logstash.md index a32172f87..64e663a34 100644 --- a/raw-migrated-files/cloud/cloud/ec-getting-started-search-use-cases-beats-logstash.md +++ b/raw-migrated-files/cloud/cloud/ec-getting-started-search-use-cases-beats-logstash.md @@ -1,19 +1,19 @@ -# Ingest data from Beats to Elasticsearch Service with {{ls}} as a proxy [ec-getting-started-search-use-cases-beats-logstash] +# Ingest data from Beats to {{ech}} with {{ls}} as a proxy [ec-getting-started-search-use-cases-beats-logstash] -This guide explains how to ingest data from Filebeat and Metricbeat to {{ls}} as an intermediary, and then send that data to Elasticsearch Service. 
Using {{ls}} as a proxy limits your Elastic stack traffic through a single, external-facing firewall exception or rule. Consider the following features of this type of setup: +This guide explains how to ingest data from Filebeat and Metricbeat to {{ls}} as an intermediary, and then send that data to {{ech}}. Using {{ls}} as a proxy limits your Elastic stack traffic through a single, external-facing firewall exception or rule. Consider the following features of this type of setup: -* You can send multiple instances of Beats data through your local network’s demilitarized zone (DMZ) to {{ls}}. {{ls}} then acts as a proxy through your firewall to send the Beats data to Elasticsearch Service, as shown in the following diagram: +* You can send multiple instances of Beats data through your local network’s demilitarized zone (DMZ) to {{ls}}. {{ls}} then acts as a proxy through your firewall to send the Beats data to {{ech}}, as shown in the following diagram: ![A diagram showing data from multiple Beats into Logstash](../../../images/cloud-ec-logstash-beats-dataflow.png "") -* This proxying reduces the firewall exceptions or rules necessary for Beats to communicate with Elasticsearch Service. It’s common to have many Beats dispersed across a network, each installed close to the data that it monitors, and each Beat individually communicating with an Elasticsearch Service deployment. Multiple Beats support multiple servers. Rather than configure each Beat to send its data directly to Elasticsearch Service, you can use {{ls}} to proxy this traffic through one firewall exception or rule. +* This proxying reduces the firewall exceptions or rules necessary for Beats to communicate with {{ech}}. It’s common to have many Beats dispersed across a network, each installed close to the data that it monitors, and each Beat individually communicating with an {{ech}} deployment. Multiple Beats support multiple servers. Rather than configure each Beat to send its data directly to {{ech}}, you can use {{ls}} to proxy this traffic through one firewall exception or rule. * This setup is not suitable in simple scenarios when there is only one or a couple of Beats in use. {{ls}} makes the most sense for proxying when there are many Beats. The configuration in this example makes use of the System module, available for both Filebeat and Metricbeat. Filebeat’s System sends server system log details (that is, login success/failures, sudo *superuser do* command usage, and other key usage details). Metricbeat’s System module sends memory, CPU, disk, and other server usage metrics. In the following sections you are going to learn how to: -1. [Get Elasticsearch Service](../../../manage-data/ingest/ingesting-data-from-applications/ingest-data-from-beats-to-elasticsearch-service-with-logstash-as-proxy.md#ec-beats-logstash-trial) +1. [Get {{ech}}](../../../manage-data/ingest/ingesting-data-from-applications/ingest-data-from-beats-to-elasticsearch-service-with-logstash-as-proxy.md#ec-beats-logstash-trial) 2. [Connect securely](../../../manage-data/ingest/ingesting-data-from-applications/ingest-data-from-beats-to-elasticsearch-service-with-logstash-as-proxy.md#ec-beats-logstash-connect-securely) 3. [Set up {{ls}}](../../../manage-data/ingest/ingesting-data-from-applications/ingest-data-from-beats-to-elasticsearch-service-with-logstash-as-proxy.md#ec-beats-logstash-logstash) 4. 
[Set up Metricbeat](../../../manage-data/ingest/ingesting-data-from-applications/ingest-data-from-beats-to-elasticsearch-service-with-logstash-as-proxy.md#ec-beats-logstash-metricbeat) @@ -27,7 +27,7 @@ In the following sections you are going to learn how to: *Time required: 1 hour* -## Get Elasticsearch Service [ec-beats-logstash-trial] +## Get {{ech}} [ec-beats-logstash-trial] 1. [Get a free trial](https://cloud.elastic.co/registration?page=docs&placement=docs-body). 2. Log into [Elastic Cloud](https://cloud.elastic.co?page=docs&placement=docs-body). @@ -36,14 +36,14 @@ In the following sections you are going to learn how to: 5. Select **Create deployment** and save your Elastic deployment credentials. You need these credentials later on. 6. When the deployment is ready, click **Continue** and a page of **Setup guides** is displayed. To continue to the deployment homepage click **I’d like to do something else**. -Prefer not to subscribe to yet another service? You can also get Elasticsearch Service through [AWS, Azure, and GCP marketplaces](../../../deploy-manage/deploy/elastic-cloud/subscribe-from-marketplace.md). +Prefer not to subscribe to yet another service? You can also get {{ech}} through [AWS, Azure, and GCP marketplaces](../../../deploy-manage/deploy/elastic-cloud/subscribe-from-marketplace.md). ## Connect securely [ec-beats-logstash-connect-securely] -When connecting to Elasticsearch Service you can use a Cloud ID to specify the connection details. You must pass the Cloud ID that you can find in the cloud console. Find your Cloud ID by going to the {{kib}} main menu and selecting Management > Integrations, and then selecting View deployment details. +When connecting to {{ech}} you can use a Cloud ID to specify the connection details. You must pass the Cloud ID that you can find in the cloud console. Find your Cloud ID by going to the {{kib}} main menu and selecting Management > Integrations, and then selecting View deployment details. -To connect to, stream data to, and issue queries with Elasticsearch Service, you need to think about authentication. Two authentication mechanisms are supported, *API key* and *basic authentication*. Here, to get you started quickly, we’ll show you how to use basic authentication, but you can also generate API keys as shown later on. API keys are safer and preferred for production environments. +To connect to, stream data to, and issue queries with {{ech}}, you need to think about authentication. Two authentication mechanisms are supported, *API key* and *basic authentication*. Here, to get you started quickly, we’ll show you how to use basic authentication, but you can also generate API keys as shown later on. API keys are safer and preferred for production environments. ## Set up {{ls}} [ec-beats-logstash-logstash] @@ -53,7 +53,7 @@ To connect to, stream data to, and issue queries with Elasticsearch Service, you ## Set up Metricbeat [ec-beats-logstash-metricbeat] -Now that {{ls}} is downloaded and your Elasticsearch Service deployment is set up, you can configure Metricbeat to send operational data to {{ls}}. +Now that {{ls}} is downloaded and your {{ech}} deployment is set up, you can configure Metricbeat to send operational data to {{ls}}. Install Metricbeat as close as possible to the service that you want to monitor. For example, if you have four servers with MySQL running, we recommend that you run Metricbeat on each server. This allows Metricbeat to access your service from *localhost*. 
This setup does not cause any additional network traffic and enables Metricbeat to collect metrics even in the event of network problems. Metrics from multiple Metricbeat instances are combined on the {{ls}} server. @@ -65,11 +65,11 @@ If you have multiple servers with metrics data, repeat the following steps to co **About Metricbeat modules** -Metricbeat has [many modules](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-metricbeat/metricbeat-modules.md) available that collect common metrics. You can [configure additional modules](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-metricbeat/configuration-metricbeat.md) as needed. For this example we’re using Metricbeat’s default configuration, which has the [System module](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-metricbeat/metricbeat-module-system.md) enabled. The System module allows you to monitor servers with the default set of metrics: *cpu*, *load*, *memory*, *network*, *process*, *process_summary*, *socket_summary*, *filesystem*, *fsstat*, and *uptime*. +Metricbeat has [many modules](asciidocalypse://docs/beats/docs/reference/metricbeat/metricbeat-modules.md) available that collect common metrics. You can [configure additional modules](asciidocalypse://docs/beats/docs/reference/metricbeat/configuration-metricbeat.md) as needed. For this example we’re using Metricbeat’s default configuration, which has the [System module](asciidocalypse://docs/beats/docs/reference/metricbeat/metricbeat-module-system.md) enabled. The System module allows you to monitor servers with the default set of metrics: *cpu*, *load*, *memory*, *network*, *process*, *process_summary*, *socket_summary*, *filesystem*, *fsstat*, and *uptime*. **Load the Metricbeat Kibana dashboards** -Metricbeat comes packaged with example dashboards, visualizations, and searches for visualizing Metricbeat data in Kibana. Before you can use the dashboards, you need to create the data view (formerly *index pattern*) *metricbeat-**, and load the dashboards into Kibana. This needs to be done from a local Beats machine that has access to the Elasticsearch Service deployment. +Metricbeat comes packaged with example dashboards, visualizations, and searches for visualizing Metricbeat data in Kibana. Before you can use the dashboards, you need to create the data view (formerly *index pattern*) *metricbeat-**, and load the dashboards into Kibana. This needs to be done from a local Beats machine that has access to the {{ech}} deployment. ::::{note} Beginning with Elastic Stack version 8.0, Kibana *index patterns* have been renamed to *data views*. To learn more, check the Kibana [What’s new in 8.0](https://www.elastic.co/guide/en/kibana/8.0/whats-new.html#index-pattern-rename) page. @@ -85,9 +85,9 @@ sudo ./metricbeat setup \ -E cloud.auth=: <2> ``` -1. Specify the Cloud ID of your Elasticsearch Service deployment. You can include or omit the `:` prefix at the beginning of the Cloud ID. Both versions work fine. Find your Cloud ID by going to the {{kib}} main menu and selecting Management > Integrations, and then selecting View deployment details. +1. Specify the Cloud ID of your {{ech}} deployment. You can include or omit the `:` prefix at the beginning of the Cloud ID. Both versions work fine. Find your Cloud ID by going to the {{kib}} main menu and selecting Management > Integrations, and then selecting View deployment details. 2. Specify the username and password provided to you when creating the deployment. 
Make sure to keep the colon between ** and **.::::{important} -Depending on variables including the installation location, environment and local permissions, you might need to [change the ownership](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-libbeat/config-file-permissions.md) of the metricbeat.yml. +Depending on variables including the installation location, environment and local permissions, you might need to [change the ownership](asciidocalypse://docs/beats/docs/reference/libbeat/config-file-permissions.md) of the metricbeat.yml. You might encounter similar permissions hurdles as you work through multiple sections of this document. These permission requirements are there for a good reason, a security safeguard to prevent unauthorized access and modification of key Elastic files. @@ -136,7 +136,7 @@ The next step is to configure Filebeat to send operational data to Logstash. As **Enable the Filebeat system module** -Filebeat has [many modules](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-filebeat/filebeat-modules.md) available that collect common log types. You can [configure additional modules](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-filebeat/configuration-filebeat-modules.md) as needed. For this example we’re using Filebeat’s [System module](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-filebeat/filebeat-module-system.md). This module reads in the various system log files (with information including login successes or failures, sudo command usage, and other key usage details) based on the detected operating system. For this example, a Linux-based OS is used and Filebeat ingests logs from the */var/log/* folder. It’s important to verify that Filebeat is given permission to access your logs folder through standard file and folder permissions. +Filebeat has [many modules](asciidocalypse://docs/beats/docs/reference/filebeat/filebeat-modules.md) available that collect common log types. You can [configure additional modules](asciidocalypse://docs/beats/docs/reference/filebeat/configuration-filebeat-modules.md) as needed. For this example we’re using Filebeat’s [System module](asciidocalypse://docs/beats/docs/reference/filebeat/filebeat-module-system.md). This module reads in the various system log files (with information including login successes or failures, sudo command usage, and other key usage details) based on the detected operating system. For this example, a Linux-based OS is used and Filebeat ingests logs from the */var/log/* folder. It’s important to verify that Filebeat is given permission to access your logs folder through standard file and folder permissions. 1. Go to */filebeat-/modules.d/* where ** is the directory where Filebeat is installed. 2. Filebeat requires at least one fileset to be enabled. In file */filebeat-/modules.d/system.yml.disabled*, under both `syslog` and `auth` set `enabled` to `true`: @@ -173,9 +173,9 @@ sudo ./filebeat setup \ -E cloud.auth=: <2> ``` -1. Specify the Cloud ID of your Elasticsearch Service deployment. You can include or omit the `:` prefix at the beginning of the Cloud ID. Both versions work fine. Find your Cloud ID by going to the {{kib}} main menu and selecting Management > Integrations, and then selecting View deployment details. +1. Specify the Cloud ID of your {{ech}} deployment. You can include or omit the `:` prefix at the beginning of the Cloud ID. Both versions work fine. 
Find your Cloud ID by going to the {{kib}} main menu and selecting Management > Integrations, and then selecting View deployment details. 2. Specify the username and password provided to you when creating the deployment. Make sure to keep the colon between ** and **.::::{important} -Depending on variables including the installation location, environment, and local permissions, you might need to [change the ownership](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-libbeat/config-file-permissions.md) of the filebeat.yml. +Depending on variables including the installation location, environment, and local permissions, you might need to [change the ownership](asciidocalypse://docs/beats/docs/reference/libbeat/config-file-permissions.md) of the filebeat.yml. :::: @@ -238,7 +238,7 @@ Now the Filebeat and Metricbeat are set up, let’s configure a {{ls}} pipeline 1. {{ls}} listens for Beats input on the default port of 5044. Only one line is needed to do this. {{ls}} can handle input from many Beats of the same and also of varying types (Metricbeat, Filebeat, and others). 2. This sends output to the standard output, which displays through your command line interface. This plugin enables you to verify the data before you send it to {{es}}, in a later step. -3. Save the new *beats.conf* file in your Logstash folder. To learn more about the file format and options, check [{{ls}} Configuration Examples](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/config-examples.md). +3. Save the new *beats.conf* file in your Logstash folder. To learn more about the file format and options, check [{{ls}} Configuration Examples](asciidocalypse://docs/logstash/docs/reference/config-examples.md). ## Output {{ls}} data to stdout [ec-beats-logstash-stdout] @@ -361,7 +361,7 @@ Now, let’s try out the {{ls}} pipeline with the Metricbeats and Filebeats conf ## Output {{ls}} data to {{es}} [ec-beats-logstash-elasticsearch] -In this section, you configure {{ls}} to send the Metricbeat and Filebeat data to {{es}}. You modify the *beats.conf* created earlier, and specify the output credentials needed for our Elasticsearch Service deployment. Then, you start {{ls}} to send the Beats data into {{es}}. +In this section, you configure {{ls}} to send the Metricbeat and Filebeat data to {{es}}. You modify the *beats.conf* created earlier, and specify the output credentials needed for our {{ech}} deployment. Then, you start {{ls}} to send the Beats data into {{es}}. 1. In your */logstash-/* folder, open *beats.conf* for editing. 2. Replace the *output {}* section of the JSON with the following code: @@ -379,8 +379,8 @@ In this section, you configure {{ls}} to send the Metricbeat and Filebeat data t } ``` - 1. Use the Cloud ID of your Elasticsearch Service deployment. You can include or omit the `:` prefix at the beginning of the Cloud ID. Both versions work fine. Find your Cloud ID by going to the {{kib}} main menu and selecting Management > Integrations, and then selecting View deployment details. - 2. the default usename is `elastic`. It is not recommended to use the `elastic` account for ingesting data as this is a superuser. We recommend using a user with reduced permissions, or an API Key with permissions specific to the indices or data streams that will be written to. Check the [Grant access to secured resources](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-filebeat/feature-roles.md) for information on the writer role and API Keys. 
Use the password provided when you created the deployment if using the `elastic` user, or the password used when creating a new ingest user with the roles specified in the [Grant access to secured resources](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-filebeat/feature-roles.md) documentation. + 1. Use the Cloud ID of your {{ech}} deployment. You can include or omit the `:` prefix at the beginning of the Cloud ID. Both versions work fine. Find your Cloud ID by going to the {{kib}} main menu and selecting Management > Integrations, and then selecting View deployment details. + 2. The default username is `elastic`. It is not recommended to use the `elastic` account for ingesting data as this is a superuser. We recommend using a user with reduced permissions, or an API Key with permissions specific to the indices or data streams that will be written to. Check the [Grant access to secured resources](asciidocalypse://docs/beats/docs/reference/filebeat/feature-roles.md) for information on the writer role and API Keys. Use the password provided when you created the deployment if using the `elastic` user, or the password used when creating a new ingest user with the roles specified in the [Grant access to secured resources](asciidocalypse://docs/beats/docs/reference/filebeat/feature-roles.md) documentation. Following are some additional details about the configuration file settings: @@ -392,14 +392,14 @@ In this section, you configure {{ls}} to send the Metricbeat and Filebeat data t If you use Metricbeat version 8.13.1, the index created in {{es}} is named *metricbeat-8.13.1*. Similarly, using the 8.13.1 version of Filebeat, the {{es}} index is named *filebeat-8.13.1*. - * *cloud_id*: This is the ID that uniquely identifies your Elasticsearch Service deployment. - * *ssl*: This should be set to `true` so that Secure Socket Layer (SSL) certificates are used for secure communication between {{ls}} and your Elasticsearch Service deployment. - * *ilm_enabled*: Enables and disables Elasticsearch Service [index lifecycle management](../../../manage-data/lifecycle/index-lifecycle-management.md). + * *cloud_id*: This is the ID that uniquely identifies your {{ech}} deployment. + * *ssl*: This should be set to `true` so that Secure Socket Layer (SSL) certificates are used for secure communication between {{ls}} and your {{ech}} deployment. + * *ilm_enabled*: Enables and disables {{ech}} [index lifecycle management](../../../manage-data/lifecycle/index-lifecycle-management.md). * *api_key*: If you choose to use an API key to authenticate (as discussed in the next step), you can provide it here. -3. **Optional**: For additional security, you can generate an {{es}} API key through the Elasticsearch Service console and configure {{ls}} to use the new key to connect securely to the Elasticsearch Service. +3. **Optional**: For additional security, you can generate an {{es}} API key through the {{ecloud}} Console and configure {{ls}} to use the new key to connect securely to {{ecloud}}. - 1. Log in to the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body). + 1. Log in to the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). 2. Select the deployment and go to **☰** > **Management** > **Dev Tools**. 3. Enter the following: @@ -467,14 +467,14 @@ In this section, you configure {{ls}} to send the Metricbeat and Filebeat data t ./filebeat -c filebeat.yml ``` -7.
{{ls}} now outputs the Filebeat and Metricbeat data to your Elasticsearch Service instance. +7. {{ls}} now outputs the Filebeat and Metricbeat data to your {{ech}} instance. ::::{note} In this guide, you manually launch each of the Elastic stack applications through the command line interface. In production, you may prefer to configure {{ls}}, Metricbeat, and Filebeat to run as System Services. Check the following pages for the steps to configure each application to run as a service: -* [Running {{ls}} as a service on Debian or RPM](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/running-logstash.md) -* [Metricbeat and systemd](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-metricbeat/running-with-systemd.md) -* [Start filebeat](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-filebeat/filebeat-starting.md) +* [Running {{ls}} as a service on Debian or RPM](asciidocalypse://docs/logstash/docs/reference/running-logstash.md) +* [Metricbeat and systemd](asciidocalypse://docs/beats/docs/reference/metricbeat/running-with-systemd.md) +* [Start filebeat](asciidocalypse://docs/beats/docs/reference/filebeat/filebeat-starting.md) :::: @@ -482,7 +482,7 @@ In this guide, you manually launch each of the Elastic stack applications throug ## View data in Kibana [ec-beats-logstash-view-kibana] -In this section, you log into Elasticsearch Service, open Kibana, and view the Kibana dashboards populated with our Metricbeat and Filebeat data. +In this section, you log into {{ech}}, open Kibana, and view the Kibana dashboards populated with our Metricbeat and Filebeat data. **View the Metricbeat dashboard** diff --git a/raw-migrated-files/cloud/cloud/ec-getting-started-search-use-cases-db-logstash.md b/raw-migrated-files/cloud/cloud/ec-getting-started-search-use-cases-db-logstash.md index d40ffe78a..b50fba317 100644 --- a/raw-migrated-files/cloud/cloud/ec-getting-started-search-use-cases-db-logstash.md +++ b/raw-migrated-files/cloud/cloud/ec-getting-started-search-use-cases-db-logstash.md @@ -1,6 +1,6 @@ -# Ingest data from a relational database into Elasticsearch Service [ec-getting-started-search-use-cases-db-logstash] +# Ingest data from a relational database into {{ech}} [ec-getting-started-search-use-cases-db-logstash] -This guide explains how to ingest data from a relational database into Elasticsearch Service through [Logstash](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/index.md), using the Logstash [JDBC input plugin](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/plugins-inputs-jdbc.md). It demonstrates how Logstash can be used to efficiently copy records and to receive updates from a relational database, and then send them into {{es}} in an Elasticsearch Service deployment. +This guide explains how to ingest data from a relational database into {{ech}} through [Logstash](asciidocalypse://docs/logstash/docs/reference/index.md), using the Logstash [JDBC input plugin](asciidocalypse://docs/logstash/docs/reference/plugins-inputs-jdbc.md). It demonstrates how Logstash can be used to efficiently copy records and to receive updates from a relational database, and then send them into {{es}} in an {{ech}} deployment. The code and methods presented here have been tested with MySQL. They should work with other relational databases. @@ -9,7 +9,7 @@ The Logstash Java Database Connectivity (JDBC) input plugin enables you to pull This document presents: 1. 
[Prerequisites](../../../manage-data/ingest/ingesting-data-from-applications/ingest-data-from-relational-database-into-elasticsearch-service.md#ec-db-logstash-prerequisites) -2. [Get Elasticsearch Service](../../../manage-data/ingest/ingesting-data-from-applications/ingest-data-from-relational-database-into-elasticsearch-service.md#ec-db-logstash-trial) +2. [Get {{ech}}](../../../manage-data/ingest/ingesting-data-from-applications/ingest-data-from-relational-database-into-elasticsearch-service.md#ec-db-logstash-trial) 3. [Connect securely](../../../manage-data/ingest/ingesting-data-from-applications/ingest-data-from-relational-database-into-elasticsearch-service.md#ec-db-logstash-connect-securely) 4. [Get the MySQL JDBC driver](../../../manage-data/ingest/ingesting-data-from-applications/ingest-data-from-relational-database-into-elasticsearch-service.md#ec-db-logstash-driver) 5. [Prepare a source MySQL database](../../../manage-data/ingest/ingesting-data-from-applications/ingest-data-from-relational-database-into-elasticsearch-service.md#ec-db-logstash-database) @@ -25,7 +25,7 @@ This document presents: For this tutorial you need a source MySQL instance for Logstash to read from. A free version of MySQL is available from the [MySQL Community Server section](https://dev.mysql.com/downloads/mysql/) of the MySQL Community Downloads site. -## Get Elasticsearch Service [ec-db-logstash-trial] +## Get {{ech}} [ec-db-logstash-trial] 1. [Get a free trial](https://cloud.elastic.co/registration?page=docs&placement=docs-body). 2. Log into [Elastic Cloud](https://cloud.elastic.co?page=docs&placement=docs-body). @@ -34,14 +34,14 @@ For this tutorial you need a source MySQL instance for Logstash to read from. A 5. Select **Create deployment** and save your Elastic deployment credentials. You need these credentials later on. 6. When the deployment is ready, click **Continue** and a page of **Setup guides** is displayed. To continue to the deployment homepage click **I’d like to do something else**. -Prefer not to subscribe to yet another service? You can also get Elasticsearch Service through [AWS, Azure, and GCP marketplaces](../../../deploy-manage/deploy/elastic-cloud/subscribe-from-marketplace.md). +Prefer not to subscribe to yet another service? You can also get {{ech}} through [AWS, Azure, and GCP marketplaces](../../../deploy-manage/deploy/elastic-cloud/subscribe-from-marketplace.md). ## Connect securely [ec-db-logstash-connect-securely] -When connecting to Elasticsearch Service you can use a Cloud ID to specify the connection details. Find your Cloud ID by going to the {{kib}} main menu and selecting Management > Integrations, and then selecting View deployment details. +When connecting to {{ech}} you can use a Cloud ID to specify the connection details. Find your Cloud ID by going to the {{kib}} main menu and selecting Management > Integrations, and then selecting View deployment details. -To connect to, stream data to, and issue queries with Elasticsearch Service, you need to think about authentication. Two authentication mechanisms are supported, *API key* and *basic authentication*. Here, to get you started quickly, we’ll show you how to use basic authentication, but you can also generate API keys as shown later on. API keys are safer and preferred for production environments. +To connect to, stream data to, and issue queries with {{ech}}, you need to think about authentication. Two authentication mechanisms are supported, *API key* and *basic authentication*. 
Here, to get you started quickly, we’ll show you how to use basic authentication, but you can also generate API keys as shown later on. API keys are safer and preferred for production environments. 1. [Download](https://www.elastic.co/downloads/logstash) and unpack Logstash on the local machine that hosts MySQL or another machine granted access to the MySQL machine. @@ -56,7 +56,7 @@ The Logstash JDBC input plugin does not include any database connection drivers. ## Prepare a source MySQL database [ec-db-logstash-database] -Let’s look at a simple database from which you’ll import data and send it to Elasticsearch Service. This example uses a MySQL database with timestamped records. The timestamps enable you to determine easily what’s changed in the database since the most recent data transfer to Elasticsearch Service. +Let’s look at a simple database from which you’ll import data and send it to {{ech}}. This example uses a MySQL database with timestamped records. The timestamps enable you to determine easily what’s changed in the database since the most recent data transfer to {{ech}}. ### Consider the database structure and design [ec-db-logstash-database-structure] @@ -192,13 +192,13 @@ Let’s set up a sample Logstash input pipeline to ingest data from your new JDB : The Logstash JDBC plugin does not come packaged with JDBC driver libraries. The JDBC driver library must be passed explicitly into the plugin using the `jdbc_driver_library` configuration option. tracking_column - : This parameter specifies the field `unix_ts_in_secs` that tracks the last document read by Logstash from MySQL, stored on disk in [logstash_jdbc_last_run](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/plugins-inputs-jdbc.md#plugins-inputs-jdbc-last_run_metadata_path). The parameter determines the starting value for documents that Logstash requests in the next iteration of its polling loop. The value stored in `logstash_jdbc_last_run` can be accessed in a SELECT statement as `sql_last_value`. + : This parameter specifies the field `unix_ts_in_secs` that tracks the last document read by Logstash from MySQL, stored on disk in [logstash_jdbc_last_run](asciidocalypse://docs/logstash/docs/reference/plugins-inputs-jdbc.md#plugins-inputs-jdbc-last_run_metadata_path). The parameter determines the starting value for documents that Logstash requests in the next iteration of its polling loop. The value stored in `logstash_jdbc_last_run` can be accessed in a SELECT statement as `sql_last_value`. unix_ts_in_secs : The field generated by the SELECT statement, which contains the `modification_time` as a standard [Unix timestamp](https://en.wikipedia.org/wiki/Unix_time) (seconds since the epoch). The field is referenced by the `tracking column`. A Unix timestamp is used for tracking progress rather than a normal timestamp, as a normal timestamp may cause errors due to the complexity of correctly converting back and forth between UMT and the local timezone. sql_last_value - : This is a [built-in parameter](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/plugins-inputs-jdbc.md#_predefined_parameters) containing the starting point of the current iteration of the Logstash polling loop, and it is referenced in the SELECT statement line of the JDBC input configuration. This parameter is set to the most recent value of `unix_ts_in_secs`, which is read from `.logstash_jdbc_last_run`. 
This value is the starting point for documents returned by the MySQL query that is executed in the Logstash polling loop. Including this variable in the query guarantees that we’re not resending data that is already stored in Elasticsearch. + : This is a [built-in parameter](asciidocalypse://docs/logstash/docs/reference/plugins-inputs-jdbc.md#_predefined_parameters) containing the starting point of the current iteration of the Logstash polling loop, and it is referenced in the SELECT statement line of the JDBC input configuration. This parameter is set to the most recent value of `unix_ts_in_secs`, which is read from `.logstash_jdbc_last_run`. This value is the starting point for documents returned by the MySQL query that is executed in the Logstash polling loop. Including this variable in the query guarantees that we’re not resending data that is already stored in Elasticsearch. schedule : This uses cron syntax to specify how often Logstash should poll MySQL for changes. The specification `*/5 * * * * *` tells Logstash to contact MySQL every 5 seconds. Input from this plugin can be scheduled to run periodically according to a specific schedule. This scheduling syntax is powered by [rufus-scheduler](https://github.com/jmettraux/rufus-scheduler). The syntax is cron-like with some extensions specific to Rufus (for example, timezone support). @@ -269,7 +269,7 @@ Let’s set up a sample Logstash input pipeline to ingest data from your new JDB ## Output to Elasticsearch [ec-db-logstash-output] -In this section, we configure Logstash to send the MySQL data to Elasticsearch. We modify the configuration file created in the section [Configure a Logstash pipeline with the JDBC input plugin](../../../manage-data/ingest/ingesting-data-from-applications/ingest-data-from-relational-database-into-elasticsearch-service.md#ec-db-logstash-pipeline) so that data is output directly to Elasticsearch. We start Logstash to send the data, and then log into Elasticsearch Service to verify the data in Kibana. +In this section, we configure Logstash to send the MySQL data to Elasticsearch. We modify the configuration file created in the section [Configure a Logstash pipeline with the JDBC input plugin](../../../manage-data/ingest/ingesting-data-from-applications/ingest-data-from-relational-database-into-elasticsearch-service.md#ec-db-logstash-pipeline) so that data is output directly to Elasticsearch. We start Logstash to send the data, and then log into {{ech}} to verify the data in Kibana. 1. Open the `jdbc.conf` file in the Logstash folder for editing. 2. Update the output section with the one that follows: @@ -287,8 +287,8 @@ In this section, we configure Logstash to send the MySQL data to Elasticsearch. } ``` - 1. Use the Cloud ID of your Elasticsearch Service deployment. You can include or omit the `:` prefix at the beginning of the Cloud ID. Both versions work fine. Find your Cloud ID by going to the {{kib}} main menu and selecting Management > Integrations, and then selecting View deployment details. - 2. the default username is `elastic`. It is not recommended to use the `elastic` account for ingesting data as this is a superuser. We recommend using a user with reduced permissions, or an API Key with permissions specific to the indices or data streams that will be written to. Check [Configuring security in Logstash](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/secure-connection.md) for information on roles and API Keys. 
Use the password provided when you created the deployment if using the `elastic` user, or the password used when creating a new ingest user with the roles specified in the [Configuring security in Logstash](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/secure-connection.md) documentation. + 1. Use the Cloud ID of your {{ech}} deployment. You can include or omit the `:` prefix at the beginning of the Cloud ID. Both versions work fine. Find your Cloud ID by going to the {{kib}} main menu and selecting Management > Integrations, and then selecting View deployment details. + 2. The default username is `elastic`. It is not recommended to use the `elastic` account for ingesting data as this is a superuser. We recommend using a user with reduced permissions, or an API Key with permissions specific to the indices or data streams that will be written to. Check [Configuring security in Logstash](asciidocalypse://docs/logstash/docs/reference/secure-connection.md) for information on roles and API Keys. Use the password provided when you created the deployment if using the `elastic` user, or the password used when creating a new ingest user with the roles specified in the [Configuring security in Logstash](asciidocalypse://docs/logstash/docs/reference/secure-connection.md) documentation. Following are some additional details about the configuration file settings: @@ -299,9 +299,9 @@ In this section, we configure Logstash to send the MySQL data to Elasticsearch. api_key : If you choose to use an API key to authenticate (as discussed in the next step), you can provide it here. -3. **Optional**: For additional security, you can generate an Elasticsearch API key through the Elasticsearch Service console and configure Logstash to use the new key to connect securely to Elasticsearch Service. +3. **Optional**: For additional security, you can generate an Elasticsearch API key through the {{ecloud}} Console and configure Logstash to use the new key to connect securely to {{ech}}. - 1. Log in to the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body). + 1. Log in to the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). 2. Select the deployment name and go to **☰** > **Management** > **Dev Tools**. 3. Enter the following: @@ -375,9 +375,9 @@ In this section, we configure Logstash to send the MySQL data to Elasticsearch. bin/logstash -f jdbc.conf ``` -6. Logstash outputs the MySQL data to your Elasticsearch Service deployment. Let’s take a look in Kibana and verify that data: +6. Logstash outputs the MySQL data to your {{ech}} deployment. Let’s take a look in Kibana and verify that data: - 1. Log in to the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body). + 1. Log in to the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). 2. Select the deployment and go to **☰** > **Management** > **Dev Tools** 3. Copy and paste the following API GET request into the Console pane, and then click **▶**. This queries all records in the new `rdbms_idx` index.
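If you would rather check the ingested records from the Python client than from the Dev Tools Console, a roughly equivalent query is sketched below. The Cloud ID and credentials are placeholders for your own deployment; only the `rdbms_idx` index name comes from the guide, and the exact request body used in the Console step may differ.

```python
from elasticsearch import Elasticsearch

# Placeholders: substitute your deployment's Cloud ID and credentials,
# or an API key, as discussed in the optional step above.
es = Elasticsearch(
    cloud_id="deployment-name:CLOUD_ID",
    basic_auth=("elastic", "PASSWORD"),
)

# Return a sample of everything Logstash has written to rdbms_idx.
resp = es.search(index="rdbms_idx", query={"match_all": {}}, size=10)
print("total hits:", resp["hits"]["total"])
for hit in resp["hits"]["hits"]:
    print(hit["_source"])
```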
diff --git a/raw-migrated-files/cloud/cloud/ec-getting-started-search-use-cases-node-logs.md b/raw-migrated-files/cloud/cloud/ec-getting-started-search-use-cases-node-logs.md index 5c05c404d..c1263dec1 100644 --- a/raw-migrated-files/cloud/cloud/ec-getting-started-search-use-cases-node-logs.md +++ b/raw-migrated-files/cloud/cloud/ec-getting-started-search-use-cases-node-logs.md @@ -1,11 +1,11 @@ # Ingest logs from a Node.js web application using Filebeat [ec-getting-started-search-use-cases-node-logs] -This guide demonstrates how to ingest logs from a Node.js web application and deliver them securely into an Elasticsearch Service deployment. You’ll set up Filebeat to monitor a JSON-structured log file that has standard Elastic Common Schema (ECS) formatted fields, and you’ll then view real-time visualizations of the log events in Kibana as requests are made to the Node.js server. While Node.js is used for this example, this approach to monitoring log output is applicable across many client types. Check the list of [available ECS logging plugins](asciidocalypse://docs/ecs-logging/docs/reference/ecs/ecs-logging-overview/intro.md#_get_started). +This guide demonstrates how to ingest logs from a Node.js web application and deliver them securely into an {{ech}} deployment. You’ll set up Filebeat to monitor a JSON-structured log file that has standard Elastic Common Schema (ECS) formatted fields, and you’ll then view real-time visualizations of the log events in Kibana as requests are made to the Node.js server. While Node.js is used for this example, this approach to monitoring log output is applicable across many client types. Check the list of [available ECS logging plugins](asciidocalypse://docs/ecs-logging/docs/reference/intro.md#_get_started). This guide presents: 1. [Prerequisites](../../../manage-data/ingest/ingesting-data-from-applications/ingest-logs-from-nodejs-web-application-using-filebeat.md#ec-node-logs-prerequisites) -2. [Get Elasticsearch Service](../../../manage-data/ingest/ingesting-data-from-applications/ingest-logs-from-nodejs-web-application-using-filebeat.md#ec-node-logs-trial) +2. [Get {{ech}}](../../../manage-data/ingest/ingesting-data-from-applications/ingest-logs-from-nodejs-web-application-using-filebeat.md#ec-node-logs-trial) 3. [Connect securely](../../../manage-data/ingest/ingesting-data-from-applications/ingest-logs-from-nodejs-web-application-using-filebeat.md#ec-node-logs-connect-securely) 4. [Create a Node.js web application with logging](../../../manage-data/ingest/ingesting-data-from-applications/ingest-logs-from-nodejs-web-application-using-filebeat.md#ec-node-logs-create-server-script) 5. [Create a Node.js HTTP request application](../../../manage-data/ingest/ingesting-data-from-applications/ingest-logs-from-nodejs-web-application-using-filebeat.md#ec-node-logs-create-request-script) @@ -33,7 +33,7 @@ For the three following packages, you can create a working directory to install npm install winston ``` -* The [Elastic Common Schema (ECS) formatter](asciidocalypse://docs/ecs-logging-nodejs/docs/reference/ecs/ecs-logging-nodejs/winston.md) for the Node.js winston logger - This plugin formats your Node.js logs into an ECS structured JSON format ideally suited for ingestion into Elasticsearch. 
To install the ECS winston logger, run the following command in your working directory so that the package is installed in the same location as the winston package: +* The [Elastic Common Schema (ECS) formatter](asciidocalypse://docs/ecs-logging-nodejs/docs/reference/winston.md) for the Node.js winston logger - This plugin formats your Node.js logs into an ECS structured JSON format ideally suited for ingestion into Elasticsearch. To install the ECS winston logger, run the following command in your working directory so that the package is installed in the same location as the winston package: ```sh npm install @elastic/ecs-winston-format @@ -47,7 +47,7 @@ For the three following packages, you can create a working directory to install -## Get Elasticsearch Service [ec-node-logs-trial] +## Get {{ech}} [ec-node-logs-trial] 1. [Get a free trial](https://cloud.elastic.co/registration?page=docs&placement=docs-body). 2. Log into [Elastic Cloud](https://cloud.elastic.co?page=docs&placement=docs-body). @@ -56,14 +56,14 @@ For the three following packages, you can create a working directory to install 5. Select **Create deployment** and save your Elastic deployment credentials. You need these credentials later on. 6. When the deployment is ready, click **Continue** and a page of **Setup guides** is displayed. To continue to the deployment homepage click **I’d like to do something else**. -Prefer not to subscribe to yet another service? You can also get Elasticsearch Service through [AWS, Azure, and GCP marketplaces](../../../deploy-manage/deploy/elastic-cloud/subscribe-from-marketplace.md). +Prefer not to subscribe to yet another service? You can also get {{ech}} through [AWS, Azure, and GCP marketplaces](../../../deploy-manage/deploy/elastic-cloud/subscribe-from-marketplace.md). ## Connect securely [ec-node-logs-connect-securely] -When connecting to Elasticsearch Service you can use a Cloud ID to specify the connection details. Find your Cloud ID by going to the {{kib}} main menu and selecting Management > Integrations, and then selecting View deployment details. +When connecting to {{ech}} you can use a Cloud ID to specify the connection details. Find your Cloud ID by going to the {{kib}} main menu and selecting Management > Integrations, and then selecting View deployment details. -To connect to, stream data to, and issue queries with Elasticsearch Service, you need to think about authentication. Two authentication mechanisms are supported, *API key* and *basic authentication*. Here, to get you started quickly, we’ll show you how to use basic authentication, but you can also generate API keys as shown later on. API keys are safer and preferred for production environments. +To connect to, stream data to, and issue queries with {{ech}}, you need to think about authentication. Two authentication mechanisms are supported, *API key* and *basic authentication*. Here, to get you started quickly, we’ll show you how to use basic authentication, but you can also generate API keys as shown later on. API keys are safer and preferred for production environments. ## Create a Node.js web application with logging [ec-node-logs-create-server-script] @@ -227,13 +227,13 @@ In this step, you’ll create a Node.js application that sends HTTP requests to ## Set up Filebeat [ec-node-logs-filebeat] -Filebeat offers a straightforward, easy to configure way to monitor your Node.js log files and port the log data into Elasticsearch Service. 
+Filebeat offers a straightforward, easy to configure way to monitor your Node.js log files and port the log data into {{ech}}. **Get Filebeat** [Download Filebeat](https://www.elastic.co/downloads/beats/filebeat) and unpack it on the local server from which you want to collect data. -**Configure Filebeat to access Elasticsearch Service** +**Configure Filebeat to access {{ech}}** In */filebeat-/* (where ** is the directory where Filebeat is installed and ** is the Filebeat version number), open the *filebeat.yml* configuration file for editing. @@ -297,7 +297,7 @@ For this example, Filebeat uses the following four decoding options. json.expand_keys: true ``` -To learn more about these settings, check [JSON input configuration options](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-filebeat/filebeat-input-log.md#filebeat-input-log-config-json) and [Decode JSON fields](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-filebeat/decode-json-fields.md) in the Filebeat Reference. +To learn more about these settings, check [JSON input configuration options](asciidocalypse://docs/beats/docs/reference/filebeat/filebeat-input-log.md#filebeat-input-log-config-json) and [Decode JSON fields](asciidocalypse://docs/beats/docs/reference/filebeat/decode-json-fields.md) in the Filebeat Reference. Append the four JSON decoding options to the *Filebeat inputs* section of *filebeat.yml*, so that the section now looks like this: @@ -333,7 +333,7 @@ Filebeat comes with predefined assets for parsing, indexing, and visualizing you ``` ::::{important} -Depending on variables including the installation location, environment, and local permissions, you might need to [change the ownership](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-libbeat/config-file-permissions.md) of filebeat.yml. You can also try running the command as *root*: *sudo ./filebeat setup -e* or you can disable strict permission checks by running the command with the `--strict.perms=false` option. +Depending on variables including the installation location, environment, and local permissions, you might need to [change the ownership](asciidocalypse://docs/beats/docs/reference/libbeat/config-file-permissions.md) of filebeat.yml. You can also try running the command as *root*: *sudo ./filebeat setup -e* or you can disable strict permission checks by running the command with the `--strict.perms=false` option. :::: @@ -351,9 +351,9 @@ The Filebeat data view is now available in Elasticsearch. To verify: **Optional: Use an API key to authenticate** -For additional security, instead of using basic authentication you can generate an Elasticsearch API key through the Elasticsearch Service console, and then configure Filebeat to use the new key to connect securely to the Elasticsearch Service deployment. +For additional security, instead of using basic authentication you can generate an Elasticsearch API key through the {{ecloud}} Console, and then configure Filebeat to use the new key to connect securely to the {{ech}} deployment. -1. Log in to the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body). +1. Log in to the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). 2. Select the deployment name and go to **☰** > **Management** > **Dev Tools**. 3. Enter the following request: @@ -434,7 +434,7 @@ In this command: * The *-c* flag specifies the path to the Filebeat config file. 
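As a rough illustration of the optional API-key step above, the same kind of key can also be created from the Python client rather than the Dev Tools Console. The specifics in this sketch are assumptions: the key name and a role descriptor scoped to `filebeat-*` indices. Check the Filebeat documentation for the exact privileges your setup needs.

```python
from elasticsearch import Elasticsearch

# Placeholders: use your own deployment's Cloud ID and credentials.
es = Elasticsearch(cloud_id="deployment-name:CLOUD_ID", basic_auth=("elastic", "PASSWORD"))

# Create a key limited to writing Filebeat indices (illustrative privileges).
resp = es.security.create_api_key(
    name="filebeat-ingest",
    role_descriptors={
        "filebeat_writer": {
            "cluster": ["monitor", "read_ilm"],
            "indices": [
                {
                    "names": ["filebeat-*"],
                    "privileges": ["create_doc", "view_index_metadata"],
                }
            ],
        }
    },
)

# Beats configuration takes the key in "id:api_key" form.
print(f'{resp["id"]}:{resp["api_key"]}')
```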
::::{note} -Just in case the command doesn’t work as expected, check the [Filebeat quick start](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-filebeat/filebeat-installation-configuration.md#start) for the detailed command syntax for your operating system. You can also try running the command as *root*: *sudo ./filebeat -e -c filebeat.yml*. +Just in case the command doesn’t work as expected, check the [Filebeat quick start](asciidocalypse://docs/beats/docs/reference/filebeat/filebeat-installation-configuration.md#start) for the detailed command syntax for your operating system. You can also try running the command as *root*: *sudo ./filebeat -e -c filebeat.yml*. :::: @@ -452,9 +452,9 @@ node webrequests.js Let the script run for a few minutes and maybe brew up a quick coffee or tea ☕ . After that, make sure that the *log.json* file is generated as expected and is populated with several log entries. -**Verify the log entries in Elasticsearch Service** -The next step is to confirm that the log data has successfully found it’s way into Elasticsearch Service. +**Verify the log entries in {{ech}}** +The next step is to confirm that the log data has successfully found its way into {{ech}}. 1. [Login to Kibana](../../../deploy-manage/deploy/elastic-cloud/access-kibana.md). 2. Open the {{kib}} main menu and select **Management** > **{{kib}}** > **Data views**. @@ -517,5 +517,5 @@ You can add titles to the visualizations, resize and position them as you like, 2. As your final step, remember to stop Filebeat, the Node.js web server, and the client. Enter *CTRL + C* in the terminal window for each application to stop them. -You now know how to monitor log files from a Node.js web application, deliver the log event data securely into an Elasticsearch Service deployment, and then visualize the results in Kibana in real time. Consult the [Filebeat documentation](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-filebeat/filebeat-overview.md) to learn more about the ingestion and processing options available for your data. You can also explore our [documentation](../../../manage-data/ingest.md) to learn all about working in Elasticsearch Service. +You now know how to monitor log files from a Node.js web application, deliver the log event data securely into an {{ech}} deployment, and then visualize the results in Kibana in real time. Consult the [Filebeat documentation](asciidocalypse://docs/beats/docs/reference/filebeat/filebeat-overview.md) to learn more about the ingestion and processing options available for your data. You can also explore our [documentation](../../../manage-data/ingest.md) to learn all about working in {{ech}}. diff --git a/raw-migrated-files/cloud/cloud/ec-getting-started-search-use-cases-python-logs.md b/raw-migrated-files/cloud/cloud/ec-getting-started-search-use-cases-python-logs.md index a79693608..5ee8ccf43 100644 --- a/raw-migrated-files/cloud/cloud/ec-getting-started-search-use-cases-python-logs.md +++ b/raw-migrated-files/cloud/cloud/ec-getting-started-search-use-cases-python-logs.md @@ -1,6 +1,6 @@ # Ingest logs from a Python application using Filebeat [ec-getting-started-search-use-cases-python-logs] -This guide demonstrates how to ingest logs from a Python application and deliver them securely into an Elasticsearch Service deployment.
You’ll set up Filebeat to monitor a JSON-structured log file that has standard Elastic Common Schema (ECS) formatted fields, and you’ll then view real-time visualizations of the log events in {{kib}} as they occur. While Python is used for this example, this approach to monitoring log output is applicable across many client types. Check the list of [available ECS logging plugins](asciidocalypse://docs/ecs-logging/docs/reference/ecs/ecs-logging-overview/intro.md). +This guide demonstrates how to ingest logs from a Python application and deliver them securely into an {{ech}} deployment. You’ll set up Filebeat to monitor a JSON-structured log file that has standard Elastic Common Schema (ECS) formatted fields, and you’ll then view real-time visualizations of the log events in {{kib}} as they occur. While Python is used for this example, this approach to monitoring log output is applicable across many client types. Check the list of [available ECS logging plugins](asciidocalypse://docs/ecs-logging/docs/reference/intro.md). You are going to learn how to: @@ -14,7 +14,7 @@ You are going to learn how to: ## Prerequisites [ec_prerequisites_2] -To complete these steps you need to have [Python](https://www.python.org/) installed on your system as well as the [Elastic Common Schema (ECS) logger](asciidocalypse://docs/ecs-logging-python/docs/reference/ecs/ecs-logging-python/installation.md) for the Python logging library. +To complete these steps you need to have [Python](https://www.python.org/) installed on your system as well as the [Elastic Common Schema (ECS) logger](asciidocalypse://docs/ecs-logging-python/docs/reference/installation.md) for the Python logging library. To install *ecs-logging-python*, run: @@ -23,7 +23,7 @@ python -m pip install ecs-logging ``` -## Get Elasticsearch Service [ec_get_elasticsearch_service_3] +## Get {{ech}} [ec_get_elasticsearch_service_3] 1. [Get a free trial](https://cloud.elastic.co/registration?page=docs&placement=docs-body). 2. Log into [Elastic Cloud](https://cloud.elastic.co?page=docs&placement=docs-body). @@ -32,14 +32,14 @@ python -m pip install ecs-logging 5. Select **Create deployment** and save your Elastic deployment credentials. You need these credentials later on. 6. When the deployment is ready, click **Continue** and a page of **Setup guides** is displayed. To continue to the deployment homepage click **I’d like to do something else**. -Prefer not to subscribe to yet another service? You can also get Elasticsearch Service through [AWS, Azure, and GCP marketplaces](../../../deploy-manage/deploy/elastic-cloud/subscribe-from-marketplace.md). +Prefer not to subscribe to yet another service? You can also get {{ech}} through [AWS, Azure, and GCP marketplaces](../../../deploy-manage/deploy/elastic-cloud/subscribe-from-marketplace.md). ## Connect securely [ec_connect_securely_2] -When connecting to Elasticsearch Service you can use a Cloud ID to specify the connection details. Find your Cloud ID by going to the {{kib}} main menu and selecting Management > Integrations, and then selecting View deployment details. +When connecting to {{ech}} you can use a Cloud ID to specify the connection details. Find your Cloud ID by going to the {{kib}} main menu and selecting Management > Integrations, and then selecting View deployment details. -To connect to, stream data to, and issue queries with Elasticsearch Service, you need to think about authentication. Two authentication mechanisms are supported, *API key* and *basic authentication*. 
Here, to get you started quickly, we’ll show you how to use basic authentication, but you can also generate API keys as shown later on. API keys are safer and preferred for production environments. +To connect to, stream data to, and issue queries with {{ech}}, you need to think about authentication. Two authentication mechanisms are supported, *API key* and *basic authentication*. Here, to get you started quickly, we’ll show you how to use basic authentication, but you can also generate API keys as shown later on. API keys are safer and preferred for production environments. ## Create a Python script with logging [ec-python-logs-create-script] @@ -102,7 +102,7 @@ In this step, you’ll create a Python script that generates logs in JSON format Having your logs written in a JSON format with ECS fields allows for easy parsing and analysis, and for standardization with other applications. A standard, easily parsible format becomes increasingly important as the volume and type of data captured in your logs expands over time. - Together with the standard fields included for each log entry is an extra *http.request.body.content* field. This extra field is there just to give you some additional, interesting data to work with, and also to demonstrate how you can add optional fields to your log data. Check the [ECS Field Reference](asciidocalypse://docs/ecs/docs/reference/ecs/ecs-field-reference.md) for the full list of available fields. + Together with the standard fields included for each log entry is an extra *http.request.body.content* field. This extra field is there just to give you some additional, interesting data to work with, and also to demonstrate how you can add optional fields to your log data. Check the [ECS Field Reference](asciidocalypse://docs/ecs/docs/reference/ecs-field-reference.md) for the full list of available fields. 2. Let’s give the Python script a test run. Open a terminal instance in the location where you saved *elvis.py* and run the following: @@ -121,13 +121,13 @@ In this step, you’ll create a Python script that generates logs in JSON format ## Set up Filebeat [ec-python-logs-filebeat] -Filebeat offers a straightforward, easy to configure way to monitor your Python log files and port the log data into Elasticsearch Service. +Filebeat offers a straightforward, easy to configure way to monitor your Python log files and port the log data into {{ech}}. **Get Filebeat** [Download Filebeat](https://www.elastic.co/downloads/beats/filebeat) and unpack it on the local server from which you want to collect data. -**Configure Filebeat to access Elasticsearch Service** +**Configure Filebeat to access {{ech}}** In */filebeat-/* (where ** is the directory where Filebeat is installed and ** is the Filebeat version number), open the *filebeat.yml* configuration file for editing. @@ -188,7 +188,7 @@ For this example, Filebeat uses the following four decoding options. json.expand_keys: true ``` -To learn more about these settings, check [JSON input configuration options](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-filebeat/filebeat-input-log.md#filebeat-input-log-config-json) and [Decode JSON fields](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-filebeat/decode-json-fields.md) in the Filebeat Reference. 
+To learn more about these settings, check [JSON input configuration options](asciidocalypse://docs/beats/docs/reference/filebeat/filebeat-input-log.md#filebeat-input-log-config-json) and [Decode JSON fields](asciidocalypse://docs/beats/docs/reference/filebeat/decode-json-fields.md) in the Filebeat Reference. Append the four JSON decoding options to the *Filebeat inputs* section of *filebeat.yml*, so that the section now looks like this: @@ -224,7 +224,7 @@ Filebeat comes with predefined assets for parsing, indexing, and visualizing you ``` ::::{important} -Depending on variables including the installation location, environment, and local permissions, you might need to [change the ownership](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-libbeat/config-file-permissions.md) of filebeat.yml. You can also try running the command as *root*: *sudo ./filebeat setup -e* or you can disable strict permission checks by running the command with the `--strict.perms=false` option. +Depending on variables including the installation location, environment, and local permissions, you might need to [change the ownership](asciidocalypse://docs/beats/docs/reference/libbeat/config-file-permissions.md) of filebeat.yml. You can also try running the command as *root*: *sudo ./filebeat setup -e* or you can disable strict permission checks by running the command with the `--strict.perms=false` option. :::: @@ -247,9 +247,9 @@ Beginning with Elastic Stack version 8.0, Kibana *index patterns* have been rena **Optional: Use an API key to authenticate** -For additional security, instead of using basic authentication you can generate an Elasticsearch API key through the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body), and then configure Filebeat to use the new key to connect securely to the Elasticsearch Service deployment. +For additional security, instead of using basic authentication you can generate an Elasticsearch API key through the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body), and then configure Filebeat to use the new key to connect securely to the {{ech}} deployment. -1. Log in to the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body). +1. Log in to the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). 2. Select the deployment name and go to **☰** > **Management** > **Dev Tools**. 3. Enter the following request: @@ -330,7 +330,7 @@ In this command: * The *-c* flag specifies the path to the Filebeat config file. ::::{note} -Just in case the command doesn’t work as expected, check the [Filebeat quick start](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-filebeat/filebeat-installation-configuration.md#start) for the detailed command syntax for your operating system. You can also try running the command as *root*: *sudo ./filebeat -e -c filebeat.yml*. +Just in case the command doesn’t work as expected, check the [Filebeat quick start](asciidocalypse://docs/beats/docs/reference/filebeat/filebeat-installation-configuration.md#start) for the detailed command syntax for your operating system. You can also try running the command as *root*: *sudo ./filebeat -e -c filebeat.yml*. :::: @@ -342,9 +342,9 @@ python elvis.py Let the script run for a few minutes and maybe brew up a quick coffee or tea ☕ . After that, make sure that the *elvis.json* file is generated as expected and is populated with several log entries. 
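As a reminder of what produces those entries, the core of the script boils down to something like the following sketch. The logger name, the messages, and the one-second interval here are illustrative rather than the exact values used in *elvis.py*:

```python
import logging
import random
import time

import ecs_logging

# Send ECS-formatted JSON log records to elvis.json.
logger = logging.getLogger("app")
logger.setLevel(logging.DEBUG)
handler = logging.FileHandler("elvis.json")
handler.setFormatter(ecs_logging.StdlibFormatter())
logger.addHandler(handler)

messages = ["started session", "ran a search", "updated a record"]

while True:
    # The extra field appears in each entry as http.request.body.content.
    logger.info(
        random.choice(messages),
        extra={"http.request.body.content": "example request payload"},
    )
    time.sleep(1)
```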
-**Verify the log entries in Elasticsearch Service** +**Verify the log entries in {{ech}}** -The next step is to confirm that the log data has successfully found it’s way into Elasticsearch Service. +The next step is to confirm that the log data has successfully found its way into {{ech}}. 1. [Login to Kibana](../../../deploy-manage/deploy/elastic-cloud/access-kibana.md). 2. Open the {{kib}} main menu and select **Management** > **{{kib}}** > **Data views**. @@ -408,5 +408,5 @@ You can add titles to the visualizations, resize and position them as you like, 2. As your final step, remember to stop Filebeat and the Python script. Enter *CTRL + C* in both your Filebeat terminal and in your `elvis.py` terminal. -You now know how to monitor log files from a Python application, deliver the log event data securely into an Elasticsearch Service deployment, and then visualize the results in Kibana in real time. Consult the [Filebeat documentation](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-filebeat/filebeat-overview.md) to learn more about the ingestion and processing options available for your data. You can also explore our [documentation](../../../manage-data/ingest.md) to learn all about working in Elasticsearch Service. +You now know how to monitor log files from a Python application, deliver the log event data securely into an {{ech}} deployment, and then visualize the results in Kibana in real time. Consult the [Filebeat documentation](asciidocalypse://docs/beats/docs/reference/filebeat/filebeat-overview.md) to learn more about the ingestion and processing options available for your data. You can also explore our [documentation](../../../manage-data/ingest.md) to learn all about working in {{ech}}. diff --git a/raw-migrated-files/cloud/cloud/ec-getting-started.md b/raw-migrated-files/cloud/cloud/ec-getting-started.md index cb03a884f..bccf92ab6 100644 --- a/raw-migrated-files/cloud/cloud/ec-getting-started.md +++ b/raw-migrated-files/cloud/cloud/ec-getting-started.md @@ -1,4 +1,4 @@ -# Introducing Elasticsearch Service [ec-getting-started] +# Introducing {{ech}} [ec-getting-started] ::::{note} Are you just discovering Elastic or are unfamiliar with the core concepts of the Elastic Stack? Would you like to be guided through the very first steps and understand how Elastic can help you? Try one of our [getting started guides](https://www.elastic.co/guide/en/starting-with-the-elasticsearch-platform-and-its-solutions/current/getting-started-guides.html) first. @@ -6,11 +6,11 @@ Are you just discovering Elastic or are unfamiliar with the core concepts of the -## What is Elasticsearch Service? [ec_what_is_elasticsearch_service] +## What is {{ech}}? [ec_what_is_elasticsearch_service] **The Elastic Stack, managed through {{ecloud}} deployments.** -Elasticsearch Service allows you to manage one or more instances of the Elastic Stack through **deployments**. These deployments are hosted on {{ecloud}}, through the cloud provider and regions of your choice, and are tied to your organization account. +{{ech}} allows you to manage one or more instances of the Elastic Stack through **deployments**. These deployments are hosted on {{ecloud}}, through the cloud provider and regions of your choice, and are tied to your organization account. A *deployment* helps you manage an Elasticsearch cluster and instances of other Elastic products, like Kibana or APM instances, in one place. Spin up, scale, upgrade, and delete your Elastic Stack products without having to manage each one separately.
In a deployment, everything works together. @@ -44,7 +44,7 @@ These solutions help you accomplish your use cases: Ingest data into the deploym Of course, you can choose to follow your own path and use Elastic components available in your deployment to ingest, visualize, and analyze your data independently from solutions. -## How to operate Elasticsearch Service? [ec_how_to_operate_elasticsearch_service] +## How to operate {{ech}}? [ec_how_to_operate_elasticsearch_service] **Where to start?** @@ -68,10 +68,10 @@ Control which users and services can access your deployments by [securing your e **Monitor your deployments and keep them healthy** -Elasticsearch Service provides several ways to monitor your deployments, anticipate and prevent issues, or fix them when they occur. Check [Monitoring your deployment](../../../deploy-manage/monitor/stack-monitoring.md) to get more details. +{{ech}} provides several ways to monitor your deployments, anticipate and prevent issues, or fix them when they occur. Check [Monitoring your deployment](../../../deploy-manage/monitor/stack-monitoring.md) to get more details. **And then?** -Now is the time for you to work with your data. The content of the Elasticsearch Service section helps you get your environment up and ready to handle your data the way you need. You can always adjust your deployments and their configuration as your usage evolves over time. +Now is the time for you to work with your data. The content of the {{ecloud}} section helps you get your environment up and ready to handle your data the way you need. You can always adjust your deployments and their configuration as your usage evolves over time. To get the most out of the solutions that the Elastic Stack offers, [log in to {{ecloud}}](https://cloud.elastic.co) or [browse the documentation](https://www.elastic.co/docs). diff --git a/raw-migrated-files/cloud/cloud/ec-ingest-guides.md b/raw-migrated-files/cloud/cloud/ec-ingest-guides.md index 0b163f75c..43a87d82c 100644 --- a/raw-migrated-files/cloud/cloud/ec-ingest-guides.md +++ b/raw-migrated-files/cloud/cloud/ec-ingest-guides.md @@ -2,23 +2,23 @@ The following tutorials demonstrate how you can use the Elasticsearch language clients to ingest data from an application. -[Ingest data with Node.js on Elasticsearch Service](../../../manage-data/ingest/ingesting-data-from-applications/ingest-data-with-nodejs-on-elasticsearch-service.md) -: Get Node.js application data securely into Elasticsearch Service, where it can then be searched and modified. +[Ingest data with Node.js on {{ech}}](../../../manage-data/ingest/ingesting-data-from-applications/ingest-data-with-nodejs-on-elasticsearch-service.md) +: Get Node.js application data securely into {{ech}}, where it can then be searched and modified. -[Ingest data with Python on Elasticsearch Service](../../../manage-data/ingest/ingesting-data-from-applications/ingest-data-with-python-on-elasticsearch-service.md) -: Get Python application data securely into Elasticsearch Service, where it can then be searched and modified. +[Ingest data with Python on {{ech}}](../../../manage-data/ingest/ingesting-data-from-applications/ingest-data-with-python-on-elasticsearch-service.md) +: Get Python application data securely into {{ech}}, where it can then be searched and modified. 
-[Ingest data from Beats to Elasticsearch Service with Logstash as a proxy](../../../manage-data/ingest/ingesting-data-from-applications/ingest-data-from-beats-to-elasticsearch-service-with-logstash-as-proxy.md) -: Get server metrics or other types of data from Filebeat and Metricbeat into Logstash as an intermediary, and then send that data to Elasticsearch Service. Using Logstash as a proxy limits your Elastic Stack traffic through a single, external-facing firewall exception or rule. +[Ingest data from Beats to {{ech}} with Logstash as a proxy](../../../manage-data/ingest/ingesting-data-from-applications/ingest-data-from-beats-to-elasticsearch-service-with-logstash-as-proxy.md) +: Get server metrics or other types of data from Filebeat and Metricbeat into Logstash as an intermediary, and then send that data to {{ech}}. Using Logstash as a proxy limits your Elastic Stack traffic through a single, external-facing firewall exception or rule. -[Ingest data from a relational database into Elasticsearch Service](../../../manage-data/ingest/ingesting-data-from-applications/ingest-data-from-relational-database-into-elasticsearch-service.md) -: Get data from a relational database into Elasticsearch Service using the Logstash JDBC input plugin. Logstash can be used as an efficient way to copy records and to receive updates from a relational database as changes happen, and then send the new data to a deployment. +[Ingest data from a relational database into {{ech}}](../../../manage-data/ingest/ingesting-data-from-applications/ingest-data-from-relational-database-into-elasticsearch-service.md) +: Get data from a relational database into {{ech}} using the Logstash JDBC input plugin. Logstash can be used as an efficient way to copy records and to receive updates from a relational database as changes happen, and then send the new data to a deployment. [Ingest logs from a Python application using Filebeat](../../../manage-data/ingest/ingesting-data-from-applications/ingest-logs-from-python-application-using-filebeat.md) -: Get logs from a Python application and deliver them securely into an Elasticsearch Service deployment. You’ll set up Filebeat to monitor an ECS-formatted log file, and then view real-time visualizations of the log events in Kibana as they occur. +: Get logs from a Python application and deliver them securely into an {{ech}} deployment. You’ll set up Filebeat to monitor an ECS-formatted log file, and then view real-time visualizations of the log events in Kibana as they occur. [Ingest logs from a Node.js web application using Filebeat](../../../manage-data/ingest/ingesting-data-from-applications/ingest-logs-from-nodejs-web-application-using-filebeat.md) -: Get HTTP request logs from a Node.js web application and deliver them securely into an Elasticsearch Service deployment. You’ll set up Filebeat to monitor an ECS-formatted log file and then view real-time visualizations of the log events as HTTP requests occur on your Node.js web server. +: Get HTTP request logs from a Node.js web application and deliver them securely into an {{ech}} deployment. You’ll set up Filebeat to monitor an ECS-formatted log file and then view real-time visualizations of the log events as HTTP requests occur on your Node.js web server. ::::{tip} You can use [Elasticsearch ingest pipelines](../../../manage-data/ingest/transform-enrich/ingest-pipelines.md) to preprocess incoming data. 
This enables you to optimize how your data is indexed, and simplifies tasks such as extracting error codes from a log file and mapping geographic locations to IP addresses. diff --git a/raw-migrated-files/cloud/cloud/ec-maintenance-mode-routing.md b/raw-migrated-files/cloud/cloud/ec-maintenance-mode-routing.md index 98f8ed73d..223d5d9a3 100644 --- a/raw-migrated-files/cloud/cloud/ec-maintenance-mode-routing.md +++ b/raw-migrated-files/cloud/cloud/ec-maintenance-mode-routing.md @@ -13,7 +13,7 @@ It might be helpful to temporarily block upstream requests in order to protect s ## Considerations [ec_considerations] * {{ecloud}} will automatically set and remove routing blocks during plan changes. Elastic recommends avoiding manually overriding these settings for a deployment while its plans are pending. -* The [{{es}} API console](asciidocalypse://docs/cloud/docs/reference/cloud/cloud-hosted/ec-api-console.md) bypasses {{ecloud}} proxy routing blocks against {{es}} to enable administrative tasks while plan changes are pending. You should generally default traffic to the {{es}} endpoint. However, if you enable **Stop routing requests** across all {{es}} nodes, you need to use this UI to administer your cluster. +* The [{{es}} API console](asciidocalypse://docs/cloud/docs/reference/cloud-hosted/ec-api-console.md) bypasses {{ecloud}} proxy routing blocks against {{es}} to enable administrative tasks while plan changes are pending. You should generally default traffic to the {{es}} endpoint. However, if you enable **Stop routing requests** across all {{es}} nodes, you need to use this UI to administer your cluster. * While {{es}} has **Stop routing requests** set across all nodes, other products with the deployment may become unhealthy. This is because {{es}} is a prerequisite for those other products, such as {{kib}}. In {{kib}}, this results in a [**Kibana server is not ready yet**](/troubleshoot/kibana/error-server-not-ready.md) message. * Enabling **Stop routing requests** does not affect your [billing](../../../deploy-manage/cloud-organization/billing.md). If needed, you can stop charges for a deployment by [deleting the deployment](../../../deploy-manage/uninstall/delete-a-cloud-deployment.md). diff --git a/raw-migrated-files/cloud/cloud/ec-manage-apm-settings.md b/raw-migrated-files/cloud/cloud/ec-manage-apm-settings.md index a102997d5..d9fa25299 100644 --- a/raw-migrated-files/cloud/cloud/ec-manage-apm-settings.md +++ b/raw-migrated-files/cloud/cloud/ec-manage-apm-settings.md @@ -22,10 +22,10 @@ User settings are appended to the `apm-server.yml` configuration file for your i To add user settings: -1. Log in to the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body). -2. Find your deployment on the home page in the Elasticsearch Service card and select **Manage** to access it directly. Or, select **Hosted deployments** to go to the deployments page to view all of your deployments. +1. Log in to the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). +2. Find your deployment on the home page in the **Hosted deployments** card and select **Manage** to access it directly. Or, select **Hosted deployments** to go to the **Deployments** page to view all of your deployments. - On the deployments page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. 
+ On the **Deployments** page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. 3. From your deployment menu, go to the **Edit** page. 4. In the **APM** section, select **Edit user settings**. (For existing deployments with user settings, you may have to expand the **Edit apm-server.yml** caret instead.) @@ -33,14 +33,14 @@ To add user settings: 6. Select **Save changes**. ::::{note} -If a setting is not supported by Elasticsearch Service, you will get an error message when you try to save. +If a setting is not supported by {{ech}}, you will get an error message when you try to save. :::: ## Supported standalone APM settings (legacy) [ec-apm-settings] -Elasticsearch Service supports the following setting when running APM in standalone mode (legacy). +{{ech}} supports the following setting when running APM in standalone mode (legacy). ::::{tip} Some settings that could break your cluster if set incorrectly are blocklisted. The following settings are generally safe in cloud environments. For detailed information about APM settings, check the [APM documentation](/solutions/observability/apps/configure-apm-server.md). diff --git a/raw-migrated-files/cloud/cloud/ec-manage-appsearch-settings.md b/raw-migrated-files/cloud/cloud/ec-manage-appsearch-settings.md index 6d95d5c85..f74c36e76 100644 --- a/raw-migrated-files/cloud/cloud/ec-manage-appsearch-settings.md +++ b/raw-migrated-files/cloud/cloud/ec-manage-appsearch-settings.md @@ -9,10 +9,10 @@ Some settings that could break your cluster if set incorrectly are blocked. Revi To add user settings: -1. Log in to the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body). -2. Find your deployment on the home page in the Elasticsearch Service card and select **Manage** to access it directly. Or, select **Hosted deployments** to go to the deployments page to view all of your deployments. +1. Log in to the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). +2. Find your deployment on the home page in the **Hosted deployments** card and select **Manage** to access it directly. Or, select **Hosted deployments** to go to the **Deployments** page to view all of your deployments. - On the deployments page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. + On the **Deployments** page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. 3. From your deployment menu, go to the **Edit** page. 4. At the bottom of the **App Search** section, expand the **User settings overrides** caret. @@ -20,14 +20,14 @@ To add user settings: 6. Select **Save changes**. ::::{note} -If a setting is not supported by Elasticsearch Service, you get an error message when you try to save. +If a setting is not supported by {{ech}}, you get an error message when you try to save. :::: ## Supported App Search settings [ec-appsearch-settings] -Elasticsearch Service supports the following App Search settings. +{{ech}} supports the following App Search settings. `app_search.auth.source` : The origin of authenticated App Search users. Options are `standard`, `elasticsearch-native`, and `elasticsearch-saml`. 
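To make the format concrete, a user settings override is plain YAML, one setting per line. For example, using the `app_search.auth.source` setting described above (the value shown is only an illustration; pick whichever of the listed options matches your setup):

```sh
app_search.auth.source: elasticsearch-native
```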
diff --git a/raw-migrated-files/cloud/cloud/ec-manage-enterprise-search-settings.md b/raw-migrated-files/cloud/cloud/ec-manage-enterprise-search-settings.md index b6b1c5a56..28c7c169a 100644 --- a/raw-migrated-files/cloud/cloud/ec-manage-enterprise-search-settings.md +++ b/raw-migrated-files/cloud/cloud/ec-manage-enterprise-search-settings.md @@ -6,14 +6,14 @@ Enterprise Search is not available in {{stack}} 9.0+. Change how Enterprise Search runs by providing your own user settings. User settings are appended to the `ent-search.yml` configuration file for your instance and provide custom configuration options. -Refer to the [Configuration settings reference](https://www.elastic.co/guide/en/enterprise-search/current/configuration.html#configuration-file) in the Enterprise Search documentation for a full list of configuration settings. Settings supported on Elasticsearch Service are indicated by an {{ecloud}} icon (![logo cloud](https://doc-icons.s3.us-east-2.amazonaws.com/logo_cloud.svg "Supported on {{ecloud}}")). Be sure to refer to the documentation version that matches the Elastic Stack version used in your deployment. +Refer to the [Configuration settings reference](https://www.elastic.co/guide/en/enterprise-search/current/configuration.html#configuration-file) in the Enterprise Search documentation for a full list of configuration settings. Settings supported on {{ech}} are indicated by an {{ecloud}} icon (![logo cloud](https://doc-icons.s3.us-east-2.amazonaws.com/logo_cloud.svg "Supported on {{ecloud}}")). Be sure to refer to the documentation version that matches the Elastic Stack version used in your deployment. To add user settings: -1. Log in to the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body). -2. Find your deployment on the home page in the Elasticsearch Service card and select **Manage** to access it directly. Or, select **Hosted deployments** to go to the deployments page to view all of your deployments. +1. Log in to the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). +2. Find your deployment on the home page in the **Hosted deployments** card and select **Manage** to access it directly. Or, select **Hosted deployments** to go to the **Deployments** page to view all of your deployments. - On the deployments page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. + On the **Deployments** page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. 3. From your deployment menu, go to the **Edit** page. 4. In the **Enterprise Search** section, select **Edit user settings**. For deployments with existing user settings, you may have to expand the **Edit enterprise-search.yml** caret instead. @@ -21,7 +21,7 @@ To add user settings: 6. Select **Save changes**. ::::{note} -If a setting is not supported by Elasticsearch Service, an error message displays when you try to save your settings. +If a setting is not supported by {{ech}}, an error message displays when you try to save your settings. 
:::: diff --git a/raw-migrated-files/cloud/cloud/ec-manage-kibana-settings.md b/raw-migrated-files/cloud/cloud/ec-manage-kibana-settings.md index 1ffa3329b..059023584 100644 --- a/raw-migrated-files/cloud/cloud/ec-manage-kibana-settings.md +++ b/raw-migrated-files/cloud/cloud/ec-manage-kibana-settings.md @@ -1,6 +1,6 @@ # Edit Kibana user settings [ec-manage-kibana-settings] -Elasticsearch Service supports most of the standard Kibana and X-Pack settings. Through a YAML editor in the console, you can append Kibana properties to the `kibana.yml` file. Your changes to the configuration file are read on startup. +{{ech}} supports most of the standard Kibana and X-Pack settings. Through a YAML editor in the console, you can append Kibana properties to the `kibana.yml` file. Your changes to the configuration file are read on startup. ::::{important} Be aware that some settings that could break your cluster if set incorrectly and that the syntax might change between major versions. Before upgrading, be sure to review the full list of the [latest Kibana settings and syntax](asciidocalypse://docs/kibana/docs/reference/configuration-reference/general-settings.md). @@ -9,10 +9,10 @@ Be aware that some settings that could break your cluster if set incorrectly and To change Kibana settings: -1. Log in to the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body). -2. Find your deployment on the home page in the Elasticsearch Service card and select **Manage** to access it directly. Or, select **Hosted deployments** to go to the deployments page to view all of your deployments. +1. Log in to the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). +2. Find your deployment on the home page in the **Hosted deployments** card and select **Manage** to access it directly. Or, select **Hosted deployments** to go to the **Deployments** page to view all of your deployments. - On the deployments page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. + On the **Deployments** page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. 3. From your deployment menu, go to the **Edit** page. 4. In the **Kibana** section, select **Edit user settings**. (For deployments with existing user settings, you may have to expand the **Edit kibana.yml** caret instead.) @@ -22,7 +22,7 @@ To change Kibana settings: Saving your changes initiates a configuration plan change that restarts Kibana automatically for you. ::::{note} -If a setting is not supported by Elasticsearch Service, you will get an error message when you try to save. +If a setting is not supported by {{ech}}, you will get an error message when you try to save. :::: @@ -222,7 +222,7 @@ If a setting is not supported by Elasticsearch Service, you will get an error me ### SAML settings [ec_saml_settings] -If you are using SAML to secure your clusters, these settings are supported in Elasticsearch Service. +If you are using SAML to secure your clusters, these settings are supported in {{ech}}. To learn more, refer to [configuring Kibana to use SAML](/deploy-manage/users-roles/cluster-or-deployment-auth/saml.md#saml-configure-kibana). 
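For reference, a typical `kibana.yml` snippet built from these SAML settings looks roughly like the following sketch. The provider name `saml1` and the realm name `cloud-saml-kibana` are placeholders, and the realm must match the SAML realm configured on the {{es}} side:

```sh
xpack.security.authc.providers:
  saml.saml1:
    order: 0
    realm: cloud-saml-kibana
    description: "Log in with SSO"
```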
@@ -285,7 +285,7 @@ The following additional setting is supported: ### OpenID Connect [ec_openid_connect] -If you are using OpenID Connect to secure your clusters, these settings are supported in Elasticsearch Service. +If you are using OpenID Connect to secure your clusters, these settings are supported in {{ech}}. `xpack.security.authc.providers.oidc..order` : Specifies order of the OpenID Connect authentication provider in the authentication chain. @@ -304,7 +304,7 @@ To learn more, check [configuring Kibana to use OpenID Connect](/deploy-manage/u ### Anonymous authentication [ec_anonymous_authentication] -If you want to allow anonymous authentication in Kibana, these settings are supported in Elasticsearch Service. To learn more on how to enable anonymous access, check [Enabling anonymous access](/deploy-manage/users-roles/cluster-or-deployment-auth/anonymous-access.md) and [Configuring Kibana to use anonymous authentication](/deploy-manage/users-roles/cluster-or-deployment-auth/user-authentication.md#anonymous-authentication). +If you want to allow anonymous authentication in Kibana, these settings are supported in {{ech}}. To learn more on how to enable anonymous access, check [Enabling anonymous access](/deploy-manage/users-roles/cluster-or-deployment-auth/anonymous-access.md) and [Configuring Kibana to use anonymous authentication](/deploy-manage/users-roles/cluster-or-deployment-auth/kibana-authentication.md#anonymous-authentication). #### Supported versions before 8.0.0 [ec_supported_versions_before_8_0_0] diff --git a/raw-migrated-files/cloud/cloud/ec-metrics-memory-pressure.md b/raw-migrated-files/cloud/cloud/ec-metrics-memory-pressure.md index 38a3d326a..33a8d5182 100644 --- a/raw-migrated-files/cloud/cloud/ec-metrics-memory-pressure.md +++ b/raw-migrated-files/cloud/cloud/ec-metrics-memory-pressure.md @@ -25,7 +25,7 @@ If the performance impact from high memory pressure is not acceptable, you need ## Increase the deployment size [ec_increase_the_deployment_size] -Scaling with Elasticsearch Service is easy: simply log in to the Elasticsearch Service console, select your deployment, select edit, and either increase the number of zones or the size per zone. +Scaling with {{ech}} is easy: simply log in to the {{ecloud}} Console, select your deployment, select edit, and either increase the number of zones or the size per zone. ## Reduce the workload [ec_reduce_the_workload] diff --git a/raw-migrated-files/cloud/cloud/ec-monitoring-setup.md b/raw-migrated-files/cloud/cloud/ec-monitoring-setup.md index bd773d949..13d58758a 100644 --- a/raw-migrated-files/cloud/cloud/ec-monitoring-setup.md +++ b/raw-migrated-files/cloud/cloud/ec-monitoring-setup.md @@ -9,7 +9,7 @@ These steps are helpful to set yourself up for success by making monitoring read As you manage, monitor, and troubleshoot your deployment, make sure you have an understanding of the [shared responsibilities](https://www.elastic.co/cloud/shared-responsibility) between Elastic and yourself, so you know what you need to do to keep your deployments running smoothly. -You may also consider subscribing to incident notices reported on the Elasticsearch Service [status page](https://status.elastic.co). +You may also consider subscribing to incident notices reported on the {{ecloud}} [status page](https://status.elastic.co). 
## Enable logs and metrics [ec_enable_logs_and_metrics] @@ -59,7 +59,7 @@ To learn more about what [Elasticsearch monitoring metrics](/deploy-manage/monit :alt: Node tab in Kibana under Stack Monitoring ::: -Some [performance metrics](../../../deploy-manage/monitor/monitoring-data/ec-saas-metrics-accessing.md) are also available directly in the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body) and don’t require looking at your monitoring deployment. If you’re ever in a rush to determine if there is a performance problem, you can get a quick overview by going to the **Performance** page from your deployment menu: +Some [performance metrics](../../../deploy-manage/monitor/monitoring-data/ec-saas-metrics-accessing.md) are also available directly in the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body) and don’t require looking at your monitoring deployment. If you’re ever in a rush to determine if there is a performance problem, you can get a quick overview by going to the **Performance** page from your deployment menu: :::{image} ../../../images/cloud-ec-ce-monitoring-performance.png :alt: Performance page of the Elastic Cloud console @@ -89,7 +89,7 @@ Navigate to the **Discover** or **Stream** pages to check if you’ve misconfigu :alt: Log error in Stream page showing failed SAML authentication ::: -You can also use this page to test how problematic proxy traffic requests show up in audit logs. To illustrate, create a spurious test request from the [Elasticsearch API console](asciidocalypse://docs/cloud/docs/reference/cloud/cloud-hosted/ec-api-console.md): +You can also use this page to test how problematic proxy traffic requests show up in audit logs. To illustrate, create a spurious test request from the [Elasticsearch API console](asciidocalypse://docs/cloud/docs/reference/cloud-hosted/ec-api-console.md): :::{image} ../../../images/cloud-ec-ce-monitoring-api-console.png :alt: Elasticsearch API console showing a spurious request that fails @@ -146,7 +146,7 @@ When issues come up that you need to troubleshoot, you’ll frequently start wit You can run this query and many others from the API consoles available via: * **Kibana** > **Dev Tools**. Check [Run Elasticsearch API requests](/explore-analyze/query-filter/tools/console.md). -* **Elastic Cloud** > **Deployment** > **Elasticsearch** > **API Console**. Check [Access the Elasticsearch API console](asciidocalypse://docs/cloud/docs/reference/cloud/cloud-hosted/ec-api-console.md). +* **Elastic Cloud** > **Deployment** > **Elasticsearch** > **API Console**. Check [Access the Elasticsearch API console](asciidocalypse://docs/cloud/docs/reference/cloud-hosted/ec-api-console.md). You can also learn more about the queries you should run for your deployment by reading our blog [Managing and Troubleshooting Elasticsearch Memory](https://www.elastic.co/blog/managing-and-troubleshooting-elasticsearch-memory). diff --git a/raw-migrated-files/cloud/cloud/ec-monitoring.md b/raw-migrated-files/cloud/cloud/ec-monitoring.md index 0d0ac69f2..2b01e9b48 100644 --- a/raw-migrated-files/cloud/cloud/ec-monitoring.md +++ b/raw-migrated-files/cloud/cloud/ec-monitoring.md @@ -32,7 +32,7 @@ For {{stack}} versions 8.4 and later, the deployment **Health** page provides de To view the health for a deployment: -1. Log in to the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body). +1. Log in to the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). 
2. On the **Deployments** page, select your deployment. 3. In your deployment menu, select **Health**. @@ -67,12 +67,12 @@ The deployment **Health** page does not include information on cluster performan ## Health warnings [ec-es-health-warnings] -In the normal course of using your Elasticsearch Service deployments, health warnings and errors might appear from time to time. Following are the most common scenarios and methods to resolve them. +In the normal course of using your {{ech}} deployments, health warnings and errors might appear from time to time. Following are the most common scenarios and methods to resolve them. Health warning messages : Health warning messages will sometimes appear on the main page for one of your deployments, as well as on the **Logs and metrics** page. - A single warning is rarely cause for concern, as often it just reflects ongoing, routine maintenance activity occurring on the Elasticsearch Service platform. + A single warning is rarely cause for concern, as often it just reflects ongoing, routine maintenance activity occurring on {{ecloud}}. Configuration change failures @@ -128,5 +128,5 @@ We’ve compiled some guidelines to help you ensure the health of your deploymen : Learn about the common causes of increased query response times and decreased performance in your deployment. [Why did my node move to a different host?](../../../troubleshoot/monitoring/node-moves-outages.md) -: Learn about why we may, from time to time, relocate your Elasticsearch Service deployments across hosts. +: Learn about why we may, from time to time, relocate your {{ech}} deployments across hosts. diff --git a/raw-migrated-files/cloud/cloud/ec-password-reset.md b/raw-migrated-files/cloud/cloud/ec-password-reset.md index 697797846..c054ef069 100644 --- a/raw-migrated-files/cloud/cloud/ec-password-reset.md +++ b/raw-migrated-files/cloud/cloud/ec-password-reset.md @@ -13,16 +13,16 @@ Resetting the `elastic` user password does not interfere with Marketplace integr ::::{note} -The `elastic` user should be not be used unless you have no other way to access your deployment. [Create API keys for ingesting data](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-filebeat/beats-api-keys.md), and create user accounts with [appropriate roles for user access](../../../deploy-manage/users-roles/cluster-or-deployment-auth/quickstart.md). +The `elastic` user should not be used unless you have no other way to access your deployment. [Create API keys for ingesting data](asciidocalypse://docs/beats/docs/reference/filebeat/beats-api-keys.md), and create user accounts with [appropriate roles for user access](../../../deploy-manage/users-roles/cluster-or-deployment-auth/quickstart.md). :::: To reset the password: -1. Log in to the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body). -2. Find your deployment on the home page in the Elasticsearch Service card and select **Manage** to access it directly. Or, select **Hosted deployments** to go to the deployments page to view all of your deployments. +1. Log in to the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). +2. Find your deployment on the home page in the **Hosted deployments** card and select **Manage** to access it directly. Or, select **Hosted deployments** to go to the **Deployments** page to view all of your deployments. - On the deployments page you can narrow your deployments by name, ID, or choose from several other filters.
To customize your view, use a combination of filters, or change the format from a grid to a list. + On the **Deployments** page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. 3. From your deployment menu, go to **Security**. 4. Select **Reset password**. diff --git a/raw-migrated-files/cloud/cloud/ec-planning.md b/raw-migrated-files/cloud/cloud/ec-planning.md index 8a046ad51..97eb60a55 100644 --- a/raw-migrated-files/cloud/cloud/ec-planning.md +++ b/raw-migrated-files/cloud/cloud/ec-planning.md @@ -1,6 +1,6 @@ # Plan for production [ec-planning] -Elasticsearch Service supports a wide range of configurations. With such flexibility comes great freedom, but also the first rule of deployment planning: Your deployment needs to be matched to the workloads that you plan to run on your {{es}} clusters and {{kib}} instances. Specifically, this means two things: +{{ech}} supports a wide range of configurations. With such flexibility comes great freedom, but also the first rule of deployment planning: Your deployment needs to be matched to the workloads that you plan to run on your {{es}} clusters and {{kib}} instances. Specifically, this means two things: * [Does your data need to be highly available?](../../../deploy-manage/production-guidance/plan-for-production-elastic-cloud.md#ec-ha) * [Do you know when to scale?](../../../deploy-manage/production-guidance/plan-for-production-elastic-cloud.md#ec-workloads) @@ -8,7 +8,7 @@ Elasticsearch Service supports a wide range of configurations. With such flexibi ## Does your data need to be highly available? [ec-ha] -With Elasticsearch Service, your deployment can be spread across as many as three separate availability zones, each hosted in its own, separate data center. Why this matters: +With {{ech}}, your deployment can be spread across as many as three separate availability zones, each hosted in its own, separate data center. Why this matters: * Data centers can have issues with availability. Internet outages, earthquakes, floods, or other events could affect the availability of a single data center. With a single availability zone, you have a single point of failure that can bring down your deployment. * Multiple availability zones help your deployment remain available. This includes your {{es}} cluster, provided that your cluster is sized so that it can sustain your workload on the remaining data centers and that your indices are configured to have at least one replica. @@ -46,10 +46,10 @@ Clusters that only have one master node are not highly available and are at risk Knowing how to scale your deployment is critical, especially when unexpected workloads hits. Don’t forget to [check your performance metrics](../../../deploy-manage/monitor/monitoring-data/ec-saas-metrics-accessing.md) to make sure your deployments are healthy and can cope with your workloads. -Scaling with Elasticsearch Service is easy: +Scaling with {{ech}} is easy: -* Turn on [deployment autoscaling](../../../deploy-manage/autoscaling.md) to let Elasticsearch Service manage your deployments by adjusting their available resources automatically. -* Or, if you prefer manual control, log in to the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body), select your deployment, select **Edit deployment** from the **Actions** dropdown, and either increase the number of zones or the size per zone. 
+* Turn on [deployment autoscaling](../../../deploy-manage/autoscaling.md) to let {{ecloud}} manage your deployments by adjusting their available resources automatically. +* Or, if you prefer manual control, log in to the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body), select your deployment, select **Edit deployment** from the **Actions** dropdown, and either increase the number of zones or the size per zone. ::::{warning} Increasing the number of zones should not be used to add more resources. The concept of zones is meant for High Availability (2 zones) and Fault Tolerance (3 zones), but neither will work if the cluster relies on the resources from those zones to be operational. The recommendation is to scale up the resources within a single zone until the cluster can take the full load (add some buffer to be prepared for a peak of requests), then scale out by adding additional zones depending on your requirements: 2 zones for High Availability, 3 zones for Fault Tolerance. diff --git a/raw-migrated-files/cloud/cloud/ec-prepare-production.md b/raw-migrated-files/cloud/cloud/ec-prepare-production.md deleted file mode 100644 index 00c9d4226..000000000 --- a/raw-migrated-files/cloud/cloud/ec-prepare-production.md +++ /dev/null @@ -1,12 +0,0 @@ -# Preparing a deployment for production [ec-prepare-production] - -To make sure you’re all set for production, consider the following actions: - -* [Plan for your expected workloads](../../../deploy-manage/production-guidance/plan-for-production-elastic-cloud.md) and consider how many availability zones you’ll need. -* [Create a deployment](../../../deploy-manage/deploy/elastic-cloud/create-an-elastic-cloud-hosted-deployment.md) on the region you need and with a hardware profile that matches your use case. -* [Change your configuration](../../../deploy-manage/deploy/elastic-cloud/ec-customize-deployment.md) by turning on autoscaling, adding high availability, or adjusting components of the Elastic Stack. -* [Add extensions and plugins](../../../deploy-manage/deploy/elastic-cloud/add-plugins-extensions.md) to use Elastic supported extensions or add your own custom dictionaries and scripts. -* [Edit settings and defaults](../../../deploy-manage/deploy/elastic-cloud/edit-stack-settings.md) to fine tune the performance of specific features. -* [Manage your deployment](../../../deploy-manage/deploy/elastic-cloud/manage-deployments.md) as a whole to restart, upgrade, stop routing, or delete. -* [Set up monitoring](../../../deploy-manage/monitor/stack-monitoring/elastic-cloud-stack-monitoring.md) to learn how to configure your deployments for observability, which includes metric and log collection, troubleshooting views, and cluster alerts to automate performance monitoring. - diff --git a/raw-migrated-files/cloud/cloud/ec-regional-deployment-aliases.md b/raw-migrated-files/cloud/cloud/ec-regional-deployment-aliases.md index a09e8bfd6..70405a3db 100644 --- a/raw-migrated-files/cloud/cloud/ec-regional-deployment-aliases.md +++ b/raw-migrated-files/cloud/cloud/ec-regional-deployment-aliases.md @@ -1,6 +1,6 @@ # Custom endpoint aliases [ec-regional-deployment-aliases] -Custom aliases for your deployment endpoints on Elasticsearch Service allow you to have predictable, human-readable URLs that can be shared easily. An alias is unique to only one deployment within a region. +Custom aliases for your deployment endpoints on {{ech}} allow you to have predictable, human-readable URLs that can be shared easily. 
An alias is unique to only one deployment within a region. ## Create a custom endpoint alias for a deployment [ec-create-regional-deployment-alias] diff --git a/raw-migrated-files/cloud/cloud/ec-restoring-snapshots.md b/raw-migrated-files/cloud/cloud/ec-restoring-snapshots.md index 8470e4bae..3f8594eb3 100644 --- a/raw-migrated-files/cloud/cloud/ec-restoring-snapshots.md +++ b/raw-migrated-files/cloud/cloud/ec-restoring-snapshots.md @@ -2,9 +2,9 @@ Snapshots provide a way to restore your Elasticsearch indices. They can be used to copy indices for testing, to recover from failures or accidental deletions, or to migrate data to other deployments. -By default, Elasticsearch Service takes a snapshot of all the indices in your Elasticsearch cluster every 30 minutes. You can set a different snapshot interval, if needed for your environment. You can also take snapshots on demand, without having to wait for the next interval. Taking a snapshot on demand does not affect the retention schedule for existing snapshots, it just adds an additional snapshot to the repository. This might be helpful if you are about to make a deployment change and you don’t have a current snapshot. +By default, {{ech}} takes a snapshot of all the indices in your Elasticsearch cluster every 30 minutes. You can set a different snapshot interval, if needed for your environment. You can also take snapshots on demand, without having to wait for the next interval. Taking a snapshot on demand does not affect the retention schedule for existing snapshots, it just adds an additional snapshot to the repository. This might be helpful if you are about to make a deployment change and you don’t have a current snapshot. -Use Kibana to manage your snapshots. In Kibana, you can set up additional repositories where the snapshots are stored, other than the one currently managed by Elasticsearch Service. You can view and delete snapshots, and configure a snapshot lifecycle management (SLM) policy to automate when snapshots are created and deleted. To learn more, check the [Snapshot and Restore](../../../deploy-manage/tools/snapshot-and-restore/create-snapshots.md) documentation. +Use Kibana to manage your snapshots. In Kibana, you can set up additional repositories where the snapshots are stored, other than the one currently managed by {{ech}}. You can view and delete snapshots, and configure a snapshot lifecycle management (SLM) policy to automate when snapshots are created and deleted. To learn more, check the [Snapshot and Restore](../../../deploy-manage/tools/snapshot-and-restore/create-snapshots.md) documentation. ::::{important} Snapshots back up only open indices. If you close an index, it is not included in snapshots and you will not be able to restore the data. @@ -16,5 +16,5 @@ A snapshot taken using the default `found-snapshots` repository can only be rest :::: -From within Elasticsearch Service, you can [restore a snapshot](../../../deploy-manage/tools/snapshot-and-restore/restore-snapshot.md) from a different deployment in the same region. +From within {{ech}}, you can [restore a snapshot](../../../deploy-manage/tools/snapshot-and-restore/restore-snapshot.md) from a different deployment in the same region. 
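If you prefer to script this rather than click through Kibana, an SLM policy can also be created from **Dev Tools** with a request along these lines. The policy name, schedule, and retention values are only an example; `found-snapshots` is the default repository that {{ech}} manages for you:

```sh
PUT _slm/policy/nightly-snapshots
{
  "schedule": "0 30 1 * * ?",
  "name": "<nightly-snap-{now/d}>",
  "repository": "found-snapshots",
  "config": {
    "indices": ["*"],
    "include_global_state": true
  },
  "retention": {
    "expire_after": "30d",
    "min_count": 5,
    "max_count": 50
  }
}
```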
diff --git a/raw-migrated-files/cloud/cloud/ec-saas-metrics-accessing.md b/raw-migrated-files/cloud/cloud/ec-saas-metrics-accessing.md index b6a2562c5..dd1b72703 100644 --- a/raw-migrated-files/cloud/cloud/ec-saas-metrics-accessing.md +++ b/raw-migrated-files/cloud/cloud/ec-saas-metrics-accessing.md @@ -1,15 +1,15 @@ # Access performance metrics [ec-saas-metrics-accessing] -Cluster performance metrics are available directly in the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body). The graphs on this page include a subset of Elasticsearch Service-specific performance metrics. +Cluster performance metrics are available directly in the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). The graphs on this page include a subset of {{ech}}-specific performance metrics. For advanced views or production monitoring, [enable logging and monitoring](../../../deploy-manage/monitor/stack-monitoring/elastic-cloud-stack-monitoring.md). The monitoring application provides more advanced views for Elasticsearch and JVM metrics, and includes a configurable retention period. To access cluster performance metrics: -1. Log in to the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body). -2. Find your deployment on the home page in the Elasticsearch Service card and select **Manage** to access it directly. Or, select **Hosted deployments** to go to the deployments page to view all of your deployments. +1. Log in to the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). +2. Find your deployment on the home page in the **Hosted deployments** card and select **Manage** to access it directly. Or, select **Hosted deployments** to go to the **Deployments** page to view all of your deployments. - On the deployments page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. For example, you might want to select **Is unhealthy** and **Has master problems** to get a short list of deployments that need attention. + On the **Deployments** page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. For example, you might want to select **Is unhealthy** and **Has master problems** to get a short list of deployments that need attention. 3. From your deployment menu, go to the **Performance** page. @@ -22,7 +22,7 @@ The following metrics are available: :alt: Graph showing CPU usage ::: -Shows the maximum usage of the CPU resources assigned to your Elasticsearch cluster, as a percentage. CPU resources are relative to the size of your cluster, so that a cluster with 32GB of RAM gets assigned twice as many CPU resources as a cluster with 16GB of RAM. All clusters are guaranteed their share of CPU resources, as Elasticsearch Service infrastructure does not overcommit any resources. CPU credits permit boosting the performance of smaller clusters temporarily, so that CPU usage can exceed 100%. +Shows the maximum usage of the CPU resources assigned to your Elasticsearch cluster, as a percentage. CPU resources are relative to the size of your cluster, so that a cluster with 32GB of RAM gets assigned twice as many CPU resources as a cluster with 16GB of RAM. 
All clusters are guaranteed their share of CPU resources, as {{ech}} infrastructure does not overcommit any resources. CPU credits permit boosting the performance of smaller clusters temporarily, so that CPU usage can exceed 100%. ::::{tip} This chart reports the maximum CPU values over the sampling period. [Logs and Metrics](../../../deploy-manage/monitor/stack-monitoring/elastic-cloud-stack-monitoring.md) ingested into [Stack Monitoring](../../../deploy-manage/monitor/monitoring-data/visualizing-monitoring-data.md)'s "CPU Usage" instead reflects the average CPU over the sampling period. Therefore, you should not expect the two graphs to look exactly the same. When investigating [CPU-related performance issues](../../../troubleshoot/monitoring/performance.md), you should default to [Stack Monitoring](../../../deploy-manage/monitor/monitoring-data/visualizing-monitoring-data.md). @@ -88,7 +88,7 @@ Indicates the overhead involved in JVM garbage collection to reclaim memory. Performance correlates directly with resources assigned to your cluster, and many of these metrics will show some sort of correlation with each other when you are trying to determine the cause of a performance issue. Take a look at some of the scenarios included in this section to learn how you can determine the cause of performance issues. -It is not uncommon for performance issues on Elasticsearch Service to be caused by an undersized cluster that cannot cope with the workload it is being asked to handle. If your cluster performance metrics often shows high CPU usage or excessive memory pressure, consider increasing the size of your cluster soon to improve performance. This is especially true for clusters that regularly reach 100% of CPU usage or that suffer out-of-memory failures; it is better to resize your cluster early when it is not yet maxed out than to have to resize a cluster that is already overwhelmed. [Changing the configuration of your cluster](../../../deploy-manage/deploy/elastic-cloud/ec-customize-deployment.md) may add some overhead if data needs to be migrated to the new nodes, which can increase the load on a cluster further and delay configuration changes. +It is not uncommon for performance issues on {{ech}} to be caused by an undersized cluster that cannot cope with the workload it is being asked to handle. If your cluster performance metrics often show high CPU usage or excessive memory pressure, consider increasing the size of your cluster soon to improve performance. This is especially true for clusters that regularly reach 100% of CPU usage or that suffer out-of-memory failures; it is better to resize your cluster early when it is not yet maxed out than to have to resize a cluster that is already overwhelmed. [Changing the configuration of your cluster](../../../deploy-manage/deploy/elastic-cloud/configure.md) may add some overhead if data needs to be migrated to the new nodes, which can increase the load on a cluster further and delay configuration changes. To help diagnose high CPU usage you can also use the Elasticsearch [nodes hot threads API](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-nodes-hot-threads), which identifies the threads on each node that have the highest CPU usage or that have been executing for a longer than normal period of time.
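As a quick illustration, the hot threads API is a simple GET request; the query parameters are optional and the values shown here are just an example:

```sh
GET _nodes/hot_threads?threads=3&interval=500ms
```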
@@ -119,7 +119,7 @@ Cluster performance metrics are shown per node and are color-coded to indicate w ## Cluster restarts after out-of-memory failures [ec_cluster_restarts_after_out_of_memory_failures] -For clusters that suffer out-of-memory failures, it can be difficult to determine whether the clusters are in a completely healthy state afterwards. For this reason, Elasticsearch Service automatically reboots clusters that suffer out-of-memory failures. +For clusters that suffer out-of-memory failures, it can be difficult to determine whether the clusters are in a completely healthy state afterwards. For this reason, {{ech}} automatically reboots clusters that suffer out-of-memory failures. You will receive an email notification to let you know that a restart occurred. For repeated alerts, the emails are aggregated so that you do not receive an excessive number of notifications. Either [resizing your cluster to reduce memory pressure](../../../deploy-manage/deploy/elastic-cloud/ec-customize-deployment-components.md#ec-cluster-size) or reducing the workload that a cluster is being asked to handle can help avoid these cluster restarts. diff --git a/raw-migrated-files/cloud/cloud/ec-secure-clusters-kerberos.md b/raw-migrated-files/cloud/cloud/ec-secure-clusters-kerberos.md deleted file mode 100644 index bd78c4073..000000000 --- a/raw-migrated-files/cloud/cloud/ec-secure-clusters-kerberos.md +++ /dev/null @@ -1,57 +0,0 @@ -# Secure your clusters with Kerberos [ec-secure-clusters-kerberos] - -You can secure your Elasticsearch clusters and Kibana instances in a deployment by using the Kerberos-5 protocol to authenticate users. - - -## Before you begin [ec_before_you_begin_13] - -The steps in this section require an understanding of Kerberos. To learn more about Kerberos, check our documentation on [configuring Elasticsearch for Kerberos authentication](/deploy-manage/users-roles/cluster-or-deployment-auth/kerberos.md). - - -## Configure the cluster to use Kerberos [ec-configure-kerberos-settings] - -With a custom bundle containing the Kerberos files and changes to the cluster configuration, you can enforce user authentication through the Kerberos protocol. - -1. Create or use an existing deployment that includes a Kibana instance. -2. Create a [custom bundle](../../../deploy-manage/deploy/elastic-cloud/upload-custom-plugins-bundles.md) that contains your `krb5.conf` and `keytab` files, and add it to your cluster. - - ::::{tip} - You should use these exact filenames for Elasticsearch Service to recognize the file in the bundle. - :::: - -3. Edit your cluster configuration, sometimes also referred to as the deployment plan, to define Kerberos settings as described in [Elasticsearch documentation](/deploy-manage/users-roles/cluster-or-deployment-auth/kerberos.md). - - ```sh - xpack.security.authc.realms.kerberos.cloud-krb: - order: 2 - keytab.path: es.keytab - remove_realm_name: false - ``` - - ::::{important} - The name of the realm must be `cloud-krb`, and the order must be 2: `xpack.security.authc.realms.kerberos.cloud-krb.order: 2` - :::: - -4. Update Kibana in the [user settings configuration](../../../deploy-manage/deploy/elastic-cloud/edit-stack-settings.md) to use Kerberos as the authentication provider: - - ```sh - xpack.security.authc.providers: - kerberos.kerberos1: - order: 0 - ``` - - This configuration disables all other realms and only allows users to authenticate with Kerberos. 
If you wish to allow your native realm users to authenticate, you need to also enable the `basic` `provider` like this: - - ```sh - xpack.security.authc.providers: - kerberos.kerberos1: - order: 0 - description: "Log in with Kerberos" <1> - basic.basic1: - order: 1 - ``` - - 1. This arbitrary string defines how Kerberos login is titled in the Login Selector UI that is shown when you enable multiple authentication providers in Kibana. You can also configure the optional `icon` and `hint` settings for any authentication provider. - -5. Use the Kibana endpoint URL to log in. - diff --git a/raw-migrated-files/cloud/cloud/ec-secure-clusters-oidc.md b/raw-migrated-files/cloud/cloud/ec-secure-clusters-oidc.md deleted file mode 100644 index ab05eab7b..000000000 --- a/raw-migrated-files/cloud/cloud/ec-secure-clusters-oidc.md +++ /dev/null @@ -1,235 +0,0 @@ -# Secure your clusters with OpenID Connect [ec-secure-clusters-oidc] - -You can secure your deployment using OpenID Connect for single sign-on. OpenID Connect is an identity layer on top of the OAuth 2.0 protocol. The end user identity gets verified by an authorization server and basic profile information is sent back to the client. - -For a detailed description of how to implement OpenID Connect with various OpenID Connect Providers (OPs), check [Set up OpenID Connect with Azure, Google, or Okta](../../../deploy-manage/users-roles/cluster-or-deployment-auth/openid-connect.md). - - -## Before you begin [ec_before_you_begin_12] - -To prepare for using OpenID Connect for authentication for deployments: - -* Create or use an existing deployment. Make note of the Kibana endpoint URL, it will be referenced as `` in the following steps. -* The steps in this section required a moderate understanding of [OpenID Connect](https://openid.net/specs/openid-connect-core-1_0.md#Authentication) in general and the Authorization Code Grant flow specifically. For more information about OpenID Connect and how it works with the Elastic Stack check: - - * Our [configuration guide for Elasticsearch](/deploy-manage/users-roles/cluster-or-deployment-auth/openid-connect.md#oidc-elasticsearch-authentication). - - - -## Configure the OpenID Connect Provider [ec-configure-oidc-provider] - -The OpenID *Connect Provider* (OP) is the entity in OpenID Connect that is responsible for authenticating the user and for granting the necessary tokens with the authentication and user information to be consumed by the *Relying Parties* (RP). - -In order for Elasticsearch Service (acting as an RP) to be able use your OpenID Connect Provider for authentication, a trust relationship needs to be established between the OP and the RP. In the OpenID Connect Provider, this means registering the RP as a client. - -The process for registering the Elasticsearch Service RP will be different from OP to OP and following the provider’s relevant documentation is prudent. The information for the RP that you commonly need to provide for registration are the following: - -`Relying Party Name` -: An arbitrary identifier for the relying party. Neither the specification nor our implementation impose any constraints on this value. - -`Redirect URI` -: This is the URI where the OP will redirect the user’s browser after authentication. The appropriate value for this is `/api/security/oidc/callback`. This can also be called the `Callback URI`. - -At the end of the registration process, the OP assigns a Client Identifier and a Client Secret for the RP (Elasticsearch Service) to use. 
Note these two values as they are used in the cluster configuration. - - -## Configure your cluster to use OpenID Connect [ec-secure-deployment-oidc] - -You’ll need to [add the client secret](../../../deploy-manage/users-roles/cluster-or-deployment-auth/openid-connect.md#ec-oidc-client-secret) to the keystore and then [update the Elasticsearch user settings](../../../deploy-manage/users-roles/cluster-or-deployment-auth/openid-connect.md#ec-oidc-user-settings) to refer to that secret and use the OpenID Connect realm. - - -### Configure the Client Secret [ec-oidc-client-secret] - -Configure the Client Secret that was assigned to the PR by the OP during registration to the Elasticsearch keystore. - -This is a sensitive setting, it won’t be stored in plaintext in the cluster configuration but rather as a secure setting. In order to do so, follow these steps: - -1. Find your deployment on the home page in the Elasticsearch Service card and select **Manage** to access it directly. Or, select **Hosted deployments** to go to the deployments page to view all of your deployments. - - On the deployments page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. - -2. From your deployment menu, select **Security**. -3. Under the **Elasticsearch keystore** section, select **Add settings**. -4. On the **Create setting** window, select the secret **Type** to be `Single string`. -5. Set the **Setting name**` to `xpack.security.authc.realms.oidc..rp.client_secret` and add the Client Secret you received from the OP during registration in the `Secret` field. - - ::::{note} - `` refers to the name of the OpenID Connect Realm. You can select any name that contains alphanumeric characters, underscores and hyphens. Replace `` with the realm name you selected. - :::: - - - ::::{note} - After you configure the Client Secret, any attempt to restart the deployment will fail until you complete the rest of the configuration steps. If you wish to rollback the OpenID Connect related configuration effort, you need to remove the `xpack.security.authc.realms.oidc..rp.client_secret` that was just added by using the "remove" button by the setting name under `Security keys`. - :::: - -6. You must also edit your cluster configuration, sometimes also referred to as the deployment plan, in order to add the appropriate settings. - - -### Configure the user settings [ec-oidc-user-settings] - -The Elasticsearch cluster needs to be configured to use the OpenID Connect realm for user authentication and to map the applicable roles to the users. If you are using machine learning or a deployment with hot-warm architecture, you must include this OpenID Connect related configuration in the user settings section for each node type. - -1. [Update your Elasticsearch user settings](../../../deploy-manage/deploy/elastic-cloud/edit-stack-settings.md) for the `oidc` realm and specify the relevant configuration: - - ```sh - xpack: - security: - authc: - realms: - oidc: - oidc-realm-name: <1> - order: 2 <2> - rp.client_id: "client-id" <3> - rp.response_type: "code" - rp.redirect_uri: "/api/security/oidc/callback" <4> - op.issuer: "" <5> - op.authorization_endpoint: "" <6> - op.token_endpoint: "" <7> - op.userinfo_endpoint: "" <8> - op.jwkset_path: "" <9> - claims.principal: sub <10> - claims.groups: "http://example.info/claims/groups" <11> - ``` - - 1. Defines the OpenID Connect realm name. 
The realm name can only contain alphanumeric characters, underscores, and hyphens - 2. The order of the OpenID Connect realm in your authentication chain. Allowed values are between `2` and `100`. Set to `2` unless you plan on configuring multiple SSO realms for this cluster. - 3. This, usually opaque, arbitrary string, is the Client Identifier that was assigned to the Elasticsearch Service RP by the OP upon registration. - 4. Replace `` with the value noted in the previous step - 5. A url, used as a unique identifier for the OP. The value for this setting should be provided by your OpenID Connect Provider. - 6. The URL for the Authorization Endpoint in the OP. This is where the user’s browser will be redirected to start the authentication process. The value for this setting should be provided by your OpenID Connect Provider. - 7. The URL for the Token Endpoint in the OpenID Connect Provider. This is the endpoint where Elasticsearch Service will send a request to exchange the code for an ID Token, as part of the Authorization Code flow. The value for this setting should be provided by your OpenID Connect Provider. - 8. (Optional) The URL for the UserInfo Endpoint in the OpenID Connect Provider. This is the endpoint of the OP that can be queried to get further user information, if required. The value for this setting should be provided by your OpenID Connect Provider. - 9. The path to a file or an HTTPS URL pointing to a JSON Web Key Set with the key material that the OpenID Connect Provider uses for signing tokens and claims responses. Your OpenID Connect Provider should provide you with this file. - 10. Defines the OpenID Connect claim that is going to be mapped to the principal (username) of the authenticated user in Kibana. In this example, we map the value of the `sub` claim, but this is not a requirement, other claims can be used too. See [the claims mapping documentation](/deploy-manage/users-roles/cluster-or-deployment-auth/openid-connect.md#oidc-claims-mapping) for details and available options. - 11. Defines the OpenID Connect claim that is going to be used for role mapping. Note that the value `"http://example.info/claims/groups"` that is used here, is an arbitrary example. Check [the claims mapping documentation](/deploy-manage/users-roles/cluster-or-deployment-auth/openid-connect.md#oidc-claims-mapping) for a very detailed description of how the claim mapping works and how can these be used for role mapping. The name of this claim should be determined by the configuration of your OpenID Connect Provider. NOTE: According to the OpenID Connect specification, the OP should also make their configuration available at a well known URL, which is the concatenation of their `Issuer` value with the `.well-known/openid-configuration` string. To configure the OpenID Connect realm, refer to the `https://op.org.com/.well-known/openid-configuration` documentation. - -2. By default, users authenticating through OpenID Connect have no roles assigned to them. For example, if you want all your users authenticating with OpenID Connect to get access to Kibana, issue the following request to Elasticsearch: - - ```sh - POST /_security/role_mapping/CLOUD_OIDC_TO_KIBANA_ADMIN <1> - { - "enabled": true, - "roles": [ "kibana_admin" ], <2> - "rules": { <3> - "field": { "realm.name": "oidc-realm-name" } <4> - }, - "metadata": { "version": 1 } - } - ``` - - 1. The name of the new role mapping. - 2. The role mapped to the users. - 3. The fields to match against. - 4. The name of the OpenID Connect realm. 
This needs to be the same value as the one used in the cluster configuration. - -3. Update Kibana in the [user settings configuration](../../../deploy-manage/deploy/elastic-cloud/edit-stack-settings.md) to use OpenID Connect as the authentication provider: - - ```sh - xpack.security.authc.providers: - oidc.oidc1: - order: 0 - realm: oidc-realm-name <1> - ``` - - 1. The name of the OpenID Connect realm. This needs to be the same value as the one used in the cluster configuration. - - - This configuration disables all other realms and only allows users to authenticate with OpenID Connect. If you wish to allow your native realm users to authenticate, you need to also enable the `basic` `provider` like this: - - ```sh - xpack.security.authc.providers: - oidc.oidc1: - order: 0 - realm: oidc-realm-name - description: "Log in with my OpenID Connect" <1> - basic.basic1: - order: 1 - ``` - - 1. This arbitrary string defines how OpenID Connect login is titled in the Login Selector UI that is shown when you enable multiple authentication providers in Kibana. If you have a Kibana instance, you can also configure the optional `icon` and `hint` settings for any authentication provider. - -4. Optional: If your OpenID Connect Provider doesn’t publish its JWKS at an https URL, or if you want to use a local copy, you can upload the JWKS as a file. - - 1. Prepare a ZIP file with a [custom bundle](../../../deploy-manage/deploy/elastic-cloud/upload-custom-plugins-bundles.md) that contains your OpenID Connect Provider’s JWKS file (`op_jwks.json`) inside of an `oidc` folder. - - This bundle allows all Elasticsearch containers to access the metadata file. - - 2. Update your Elasticsearch cluster on the [deployments page](../../../deploy-manage/deploy/elastic-cloud/add-plugins-extensions.md) to use the bundle you prepared in the previous step. - - - Custom bundles are unzipped under the path `/app/config/BUNDLE_DIRECTORY_STRUCTURE`, where `BUNDLE_DIRECTORY_STRUCTURE` is the directory structure in the ZIP file. Make sure to save the file location where custom bundles get unzipped, as you will need it in the next step. - - In our example, the OpenID Connect Provider JWK set file will be located in the path `/app/config/oidc/op_jwks.json`: - - ```sh - $ tree . - . - └── oidc - └── op_jwks.json - ``` - - 3. Adjust your `oidc` realm configuration accordingly: - - ```sh - xpack: - security: - authc: - realms: - oidc: - oidc-realm-name: - ... - op.jwks_path: /app/config/oidc/op_jwks.json <1> - ``` - - 1. The path to the JWKS file that was uploaded - - - -## Configure SSL [ec-oidc-ssl-configuration] - -OpenID Connect depends on TLS to provider security properties such as encryption in transit and endpoint authentication. The RP is required to establish back-channel communication with the OP in order to exchange the code for an ID Token during the Authorization code grant flow and in order to get additional user information from the UserInfo endpoint. As such, it is important that Elasticsearch Service can validate and trust the server certificate that the OP uses for TLS. Since the system truststore is used for the client context of outgoing https connections, if your OP is using a certificate from a trusted CA, no additional configuration is needed. - -However, if your OP uses a certificate that is issued for instance, by a CA used only in your Organization, you must configure Elasticsearch Service to trust that CA. - -1. 
Prepare a ZIP file with a [custom bundle](../../../deploy-manage/deploy/elastic-cloud/upload-custom-plugins-bundles.md) that contains the CA certificate (`company-ca.pem`) that signed the certificate your OpenID Connect Provider uses for TLS inside of an `oidc-tls` folder -2. Update your Elasticsearch cluster on the [deployments page](../../../deploy-manage/deploy/elastic-cloud/add-plugins-extensions.md) to use the bundle you prepared in the previous step. - - - Custom bundles are unzipped under the path `/app/config/BUNDLE_DIRECTORY_STRUCTURE`, where `BUNDLE_DIRECTORY_STRUCTURE` is the directory structure in the ZIP file. Make sure to save the file location where custom bundles get unzipped, as you will need it in the next step. - - In our example, the CA certificate file will be located in the path `/app/config/oidc-tls/company-ca.pem`: - - ```sh - $ tree . - . - └── oidc-tls - └── company-ca.pem - ``` - -3. Adjust your `oidc` realm configuration accordingly: - - ```sh - xpack: - security: - authc: - realms: - oidc: - oidc-realm-name: - ... - ssl.certificate_authorities: ["/app/config/oidc-tls/company-ca.pem"] <1> - ``` - - 1. The path where the CA Certificate file was uploaded - - - -## Optional Settings [ec-oidc-optional-settings] - -The following optional oidc realm settings are supported and can be set if needed: - -* `op.endsession_endpoint` The URL to the End Session Endpoint in the OpenID Connect Provider. This is the endpoint where the user’s browser will be redirected after local logout, if the realm is configured for RP initiated Single Logout and the OP supports it. The value for this setting should be provided by your OpenID Connect Provider. -* `rp.post_logout_redirect_uri` The Redirect URL where the OpenID Connect Provider should redirect the user after a successful Single Logout. This should be set to a value that will not trigger a new OpenID Connect Authentication, `/security/logged_out` is a good choice for this parameter. -* `rp.signature_algorithm` The signature algorithm that will be used by {{es}} in order to verify the signature of the ID tokens it will receive from the OpenID Connect Provider. Defaults to `RSA256`. -* `rp.requested_scopes` The scope values that will be requested by the OpenID Connect Provider as part of the Authentication Request. Defaults to `openid`, which is the only required scope for authentication. If your use case requires that you receive additional claims, you might need to request additional scopes, one of `profile`, `email`, `address`, `phone`. Note that `openid` should always be included in the list of requested scopes. - - diff --git a/raw-migrated-files/cloud/cloud/ec-securing-clusters-JWT.md b/raw-migrated-files/cloud/cloud/ec-securing-clusters-JWT.md deleted file mode 100644 index 3455d0008..000000000 --- a/raw-migrated-files/cloud/cloud/ec-securing-clusters-JWT.md +++ /dev/null @@ -1,103 +0,0 @@ -# Secure your clusters with JWT [ec-securing-clusters-JWT] - -These steps show how you can secure your Elasticsearch clusters in a deployment by using a JSON Web Token (JWT) realm for authentication. - - -## Before you begin [ec_before_you_begin_14] - -Elasticsearch Service supports JWT of ID Token format with Elastic Stack version 8.2 and later. Support for JWT of certain access token format is available since 8.7. 
- - -## Configure your 8.2 or above cluster to use JWT of ID Token format [ec_configure_your_8_2_or_above_cluster_to_use_jwt_of_id_token_format] - -```sh -xpack: - security: - authc: - realms: - jwt: <1> - jwt-realm-name: <2> - order: 2 <3> - client_authentication.type: "shared_secret" <4> - allowed_signature_algorithms: "HS256,HS384,HS512,RS256,RS384,RS512,ES256,ES384,ES512,PS256,PS384,PS512" <5> - allowed_issuer: "issuer1" <6> - allowed_audiences: "elasticsearch1,elasticsearch2" <7> - claims.principal: "sub" <8> - claims.groups: "groups" <9> -``` - -1. Specifies the authentication realm service. -2. Defines the JWT realm name. -3. The order of the JWT realm in your authentication chain. Allowed values are between `2` and `100`, inclusive. -4. Defines the client authenticate type. -5. Defines the JWT `alg` header values allowed by the realm. -6. Defines the JWT `iss` claim value allowed by the realm. -7. Defines the JWT `aud` claim values allowed by the realm. -8. Defines the JWT claim name used for the principal (username). No default. -9. Defines the JWT claim name used for the groups. No default. - - -By default, users authenticating through JWT have no roles assigned to them. If you want all users in the group `elasticadmins` in your identity provider to be assigned the `superuser` role in your Elasticsearch cluster, issue the following request to Elasticsearch: - -```sh -POST /_security/role_mapping/CLOUD_JWT_ELASTICADMIN_TO_SUPERUSER <1> -{ - "enabled": true, - "roles": [ "superuser" ], <2> - "rules": { "all" : [ <3> - { "field": { "realm.name": "jwt-realm-name" } }, <4> - { "field": { "groups": "elasticadmins" } } - ]}, - "metadata": { "version": 1 } -} -``` - -1. The mapping name. -2. The Elastic Stack role to map to. -3. A rule specifying the JWT role to map from. -4. `realm.name` can be any string containing only alphanumeric characters, underscores, and hyphens. - - -::::{note} -In order to use the field `groups` in the mapping rule, you need to have mapped the JWT Attribute that conveys the group membership to `claims.groups` in the previous step. -:::: - - - -## Configure your 8.7 or above cluster to use JWT of access token format [ec_configure_your_8_7_or_above_cluster_to_use_jwt_of_access_token_format] - -```sh -xpack: - security: - authc: - realms: - jwt: - jwt-realm-name: - order: 2 - token_type: "access_token" <1> - client_authentication.type: "shared_secret" - allowed_signature_algorithms: [ "RS256", "HS256" ] - allowed_subjects: [ "123456-compute@developer.example.com" ] <2> - allowed_issuer: "issuer1" - allowed_audiences: [ "elasticsearch1", "elasticsearch2" ] - required_claims: <3> - token_use: "access" - fallback_claims.sub: "client_id" <4> - fallback_claims.aud: "scope" <5> - claims.principal: "sub" <6> - claims.groups: "groups" -``` - -1. Specifies token type accepted by this JWT realm -2. Specifies subjects allowed by the realm. This setting is mandatory for `access_token` JWT realms. -3. Additional claims required for successful authentication. The claim name can be any valid variable names and the claim values must be either string or array of strings. -4. The name of the JWT claim to extract the subject information if the `sub` claim does not exist. This setting is only available for `access_token` JWT realms. -5. The name of the JWT claim to extract the audiences information if the `aud` claim does not exist. This setting is only available for `access_token` JWT realms. -6. 
Since the fallback claim for `sub` is defined as `client_id`, the principal will also be extracted from `client_id` if the `sub` claim does not exist - - -::::{note} -Refer to [JWT authentication documentation](/deploy-manage/users-roles/cluster-or-deployment-auth/jwt.md) for more details and examples. -:::: - - diff --git a/raw-migrated-files/cloud/cloud/ec-securing-clusters-SAML.md b/raw-migrated-files/cloud/cloud/ec-securing-clusters-SAML.md deleted file mode 100644 index 2d713dd69..000000000 --- a/raw-migrated-files/cloud/cloud/ec-securing-clusters-SAML.md +++ /dev/null @@ -1,172 +0,0 @@ -# Secure your clusters with SAML [ec-securing-clusters-SAML] - -These steps show how you can secure your Elasticsearch clusters and Kibana instances in a deployment by using a Security Assertion Markup Language (SAML) identity provider (IdP) for cross-domain, single sign-on authentication. - -For a detailed walk-through of how to implement SAML authentication for Kibana with Azure AD as an identity provider, refer to our guide [Set up SAML with Microsoft Entra ID](../../../deploy-manage/users-roles/cluster-or-deployment-auth/saml.md). - - -## Configure your 8.0 or above cluster to use SAML [ec_configure_your_8_0_or_above_cluster_to_use_saml] - -You must edit your cluster configuration, sometimes also referred to as the deployment plan, to point to the SAML IdP before you can complete the configuration in Kibana. If you are using machine learning or a deployment with hot-warm architecture, you must include this SAML IdP configuration in the user settings section for each node type. - -1. Create or use an existing deployment that includes a Kibana instance. -2. Copy the Kibana endpoint URL. -3. $$$step-3$$$[Update your Elasticsearch user settings](../../../deploy-manage/deploy/elastic-cloud/edit-stack-settings.md) for the `saml` realm and specify your IdP provider configuration: - - ```sh - xpack: - security: - authc: - realms: - saml: <1> - saml-realm-name: <2> - order: 2 <3> - attributes.principal: "nameid:persistent" <4> - attributes.groups: "groups" <5> - idp.metadata.path: "" <6> - idp.entity_id: "" <7> - sp.entity_id: "KIBANA_ENDPOINT_URL/" <8> - sp.acs: "KIBANA_ENDPOINT_URL/api/security/saml/callback" - sp.logout: "KIBANA_ENDPOINT_URL/logout" - ``` - - 1. Specifies the authentication realm service. - 2. Defines the SAML realm name. The SAML realm name can only contain alphanumeric characters, underscores, and hyphens. - 3. The order of the SAML realm in your authentication chain. Allowed values are between `2` and `100`. Set to `2` unless you plan on configuring multiple SSO realms for this cluster. - 4. Defines the SAML attribute that is going to be mapped to the principal (username) of the authenticated user in Kibana. In this non-normative example, `nameid:persistent` maps the `NameID` with the `urn:oasis:names:tc:SAML:2.0:nameid-format:persistent` format from the Subject of the SAML Assertion. You can use any SAML attribute that carries the necessary value for your use case in this setting, such as `uid` or `mail`. Refer to [the attribute mapping documentation](/deploy-manage/users-roles/cluster-or-deployment-auth/saml.md#saml-attributes-mapping) for details and available options. - 5. Defines the SAML attribute used for role mapping when configured in Kibana. Common choices are `groups` or `roles`. The values for both `attributes.principal` and `attributes.groups` depend on the IdP provider, so be sure to review their documentation. 
Refer to [the attribute mapping documentation](/deploy-manage/users-roles/cluster-or-deployment-auth/saml.md#saml-attributes-mapping) for details and available options. - 6. The file path or the HTTPS URL where your IdP metadata is available, such as `https://idpurl.com/sso/saml/metadata`. If you configure a URL you need to make ensure that your Elasticsearch cluster can access it. - 7. The SAML EntityID of your IdP. This can be read from the configuration page of the IdP, or its SAML metadata, such as `https://idpurl.com/entity_id`. - 8. Replace `KIBANA_ENDPOINT_URL` with the one noted in the previous step, such as `sp.entity_id: https://eddac6b924f5450c91e6ecc6d247b514.us-east-1.aws.found.io:443/` including the slash at the end. - -4. By default, users authenticating through SAML have no roles assigned to them. For example, if you want all your users authenticating with SAML to get access to Kibana, issue the following request to Elasticsearch: - - ```sh - POST /_security/role_mapping/CLOUD_SAML_TO_KIBANA_ADMIN <1> - { - "enabled": true, - "roles": [ "kibana_admin" ], <2> - "rules": { <3> - "field": { "realm.name": "saml-realm-name" } <4> - }, - "metadata": { "version": 1 } - } - ``` - - 1. The mapping name. - 2. The Elastic Stack role to map to. - 3. A rule specifying the SAML role to map from. - 4. `realm.name` can be any string containing only alphanumeric characters, underscores, and hyphens. - -5. Alternatively, if you want the users that belong to the group `elasticadmins` in your identity provider to be assigned the `superuser` role in your Elasticsearch cluster, issue the following request to Elasticsearch: - - ```sh - POST /_security/role_mapping/CLOUD_SAML_ELASTICADMIN_TO_SUPERUSER <1> - { - "enabled": true, - "roles": [ "superuser" ], <2> - "rules": { "all" : [ <3> - { "field": { "realm.name": "saml-realm-name" } }, <4> - { "field": { "groups": "elasticadmins" } } - ]}, - "metadata": { "version": 1 } - } - ``` - - 1. The mapping name. - 2. The Elastic Stack role to map to. - 3. A rule specifying the SAML role to map from. - 4. `realm.name` can be any string containing only alphanumeric characters, underscores, and hyphens. - - - ::::{note} - In order to use the field `groups` in the mapping rule, you need to have mapped the SAML Attribute that conveys the group membership to `attributes.groups` in the previous step. - :::: - -6. Update Kibana in the [user settings configuration](../../../deploy-manage/deploy/elastic-cloud/edit-stack-settings.md) to use SAML as the authentication provider: - - ```sh - xpack.security.authc.providers: - saml.saml1: - order: 0 - realm: saml-realm-name <1> - ``` - - 1. The name of the SAML realm that you have configured earlier, for instance `saml-realm-name`. The SAML realm name can only contain alphanumeric characters, underscores, and hyphens. - - - This configuration disables all other realms and only allows users to authenticate with SAML. If you wish to allow your native realm users to authenticate, you need to also enable the `basic` `provider` like this: - - ```sh - xpack.security.authc.providers: - saml.saml1: - order: 0 - realm: saml-realm-name - description: "Log in with my SAML" <1> - basic.basic1: - order: 1 - ``` - - 1. This arbitrary string defines how SAML login is titled in the Login Selector UI that is shown when you enable multiple authentication providers in Kibana. You can also configure the optional `icon` and `hint` settings for any authentication provider. - -7. Optional: Generate SAML metadata for the Service Provider. 
- - The SAML 2.0 specification provides a mechanism for Service Providers to describe their capabilities and configuration using a metadata file. If your SAML Identity Provider requires or allows you to configure it to trust the Elastic Stack Service Provider through the use of a metadata file, you can generate the SAML metadata by issuing the following request to Elasticsearch: - - ```console - GET /_security/saml/metadata/realm_name <1> - ``` - - 1. The name of the SAML realm in Elasticsearch. - - - You can generate the SAML metadata by issuing the API request to Elasticsearch and storing metadata as an XML file using tools like `jq`. - - The following command, for example, generates the metadata for the SAML realm `saml1` and saves it to `metadata.xml` file: - - ```console - curl -X GET -H "Content-Type: application/json" -u user_name:password https://:443/_security/saml/metadata/saml1 <1> - |jq -r '.[]' > metadata.xml - ``` - - 1. The elasticsearch endpoint for the given deployment where the `saml1` realm is configured. - -8. Optional: If your Identity Provider doesn’t publish its SAML metadata at an HTTP URL, or if your Elasticsearch cluster cannot reach that URL, you can upload the SAML metadata as a file. - - 1. Prepare a ZIP file with a [custom bundle](../../../deploy-manage/deploy/elastic-cloud/upload-custom-plugins-bundles.md) that contains your Identity Provider’s metadata (`metadata.xml`) inside of a `saml` folder. - - This bundle allows all Elasticsearch containers to access the metadata file. - - 2. Update your Elasticsearch cluster on the [deployments page](../../../deploy-manage/deploy/elastic-cloud/add-plugins-extensions.md) to use the bundle you prepared in the previous step. - - - Custom bundles are unzipped under the path `/app/config/BUNDLE_DIRECTORY_STRUCTURE`, where `BUNDLE_DIRECTORY_STRUCTURE` is the directory structure in the ZIP file. Make sure to save the file location where custom bundles get unzipped, as you will need it in the next step. - - In our example, the SAML metadata file will be located in the path `/app/config/saml/metadata.xml`: - - ```sh - $ tree . - . - └── saml - └── metadata.xml - ``` - - 3. Adjust your `saml` realm configuration accordingly: - - ```sh - idp.metadata.path: /app/config/saml/metadata.xml <1> - ``` - - 1. The path to the SAML metadata file that was uploaded. - -9. Use the Kibana endpoint URL to log in. - - -## Configure your 7.x cluster to use SAML [ec-7x-saml] - -For 7.x deployments, the instructions are similar to those for 8.x, but your Elasticsearch request should use `POST /_security/role_mapping/CLOUD_SAML_TO_KIBANA_ADMIN` (for Step 4) or `POST /_security/role_mapping/CLOUD_SAML_ELASTICADMIN_TO_SUPERUSER` (for Step 5). - -All of the other steps are the same. 
- - - diff --git a/raw-migrated-files/cloud/cloud/ec-securing-clusters-oidc-op.md b/raw-migrated-files/cloud/cloud/ec-securing-clusters-oidc-op.md deleted file mode 100644 index c3bfc6652..000000000 --- a/raw-migrated-files/cloud/cloud/ec-securing-clusters-oidc-op.md +++ /dev/null @@ -1,416 +0,0 @@ -# Set up OpenID Connect with Azure, Google, or Okta [ec-securing-clusters-oidc-op] - -This page explains how to implement OIDC, from the OAuth client credentials generation to the realm configuration for Elasticsearch and Kibana, with the following OpenID Connect Providers (OPs): - -* [Azure](../../../deploy-manage/users-roles/cluster-or-deployment-auth/openid-connect.md#ec-securing-oidc-azure) -* [Google](../../../deploy-manage/users-roles/cluster-or-deployment-auth/openid-connect.md#ec-securing-oidc-google) -* [Okta](../../../deploy-manage/users-roles/cluster-or-deployment-auth/openid-connect.md#ec-securing-oidc-okta) - -For further detail about configuring OIDC, check our [list of references](../../../deploy-manage/users-roles/cluster-or-deployment-auth/openid-connect.md#ec-summary-and-references) at the end of this article. - - -## Setting up OpenID Connect with Azure [ec-securing-oidc-azure] - -Follow these steps to configure OpenID Connect single sign-on on Elasticsearch Service with an Azure OP: - -1. Configure the OAuth client ID: - - 1. Create a new application: - - 1. Sign into the [Azure Portal](https://portal.azure.com/) and go to **Entra** (formerly Azure Active Directory). From there, select **App registrations** > **New registration** to register a new application. - - :::{image} ../../../images/cloud-ec-oidc-new-app-azure.png - :alt: A screenshot of the Azure Owned Applications tab on the New Registration page - ::: - - 2. Enter a **Name** for your application, for example `ec-oauth2`. - 3. Select a **Supported Account Type** according to your preferences. - 4. Set the **Redirect URI** as `KIBANA_ENDPOINT_URL/api/security/oidc/callback`. You can retrieve your `KIBANA_ENDPOINT_URL` by opening the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body) and selecting the Kibana **Copy endpoint** link in your deployment details. - 5. Select **Register**. - 6. Confirm that your new **Application (client) ID** appears in the app details. - - 2. Create a client ID and secret: - - 1. From the application that you created, go to **Certificates & secrets** and create a new secret under **Client secrets** > **New client secret**. - - :::{image} ../../../images/cloud-ec-oidc-oauth-create-credentials-azure.png - :alt: A screenshot of the Azure Add a Client Secret dialog - ::: - - 2. Provide a **Description**, for example `Kibana`. - 3. Select an expiration for the secret. - 4. Select **Add** and copy your newly created client secret for later use. - -2. Add your client secret to the Elasticsearch keystore: - - 1. Follow the steps described in our security settings documentation to [Add a secret value](../../../deploy-manage/security/secure-settings.md#ec-add-secret-values) to the keystore: - - 1. Set the **Setting name** as `xpack.security.authc.realms.oidc.oidc1.rp.client_secret`. - - For OIDC, the client secret setting name in the keystore should be of the form: `xpack.security.authc.realms.oidc..rp.client_secret`. - - 2. For **Type**, select `Single string`. - 3. Paste your client secret into the **Secret** field. - 4. Select **Save**. - -3. 
Configure Elasticsearch with the OIDC realm: - - To learn more about the available endpoints provided by Microsoft Azure, refer to the Endpoints details in the application that you configured. - - :::{image} ../../../images/cloud-ec-oidc-endpoints-azure.png - :alt: A screenshot of the Azure Endpoints dialog with fields for Diplay Name - ::: - - To configure Elasticsearch for OIDC: - - 1. Log in to the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body). - 2. [Update your Elasticsearch user settings](../../../deploy-manage/deploy/elastic-cloud/edit-stack-settings.md) with the following configuration: - - ```sh - xpack: - security: - authc: - realms: - oidc: - oidc1: - order: 2 - rp.client_id: "" - rp.response_type: "code" - rp.requested_scopes: ["openid", "email"] - rp.redirect_uri: "KIBANA_ENDPOINT_URL/api/security/oidc/callback" - op.issuer: "https://login.microsoftonline.com//v2.0" - op.authorization_endpoint: "https://login.microsoftonline.com//oauth2/v2.0/authorize" - op.token_endpoint: "https://login.microsoftonline.com//oauth2/v2.0/token" - op.userinfo_endpoint: "https://graph.microsoft.com/oidc/userinfo" - op.endsession_endpoint: "https://login.microsoftonline.com//oauth2/v2.0/logout" - rp.post_logout_redirect_uri: "KIBANA_ENDPOINT_URL/logged_out" - op.jwkset_path: "https://login.microsoftonline.com//discovery/v2.0/keys" - claims.principal: email - claim_patterns.principal: "^([^@]+)@YOUR_DOMAIN\\.TLD$" - ``` - - Where: - - * `` is your Client ID, available in the application details on Azure. - * `` is your Directory ID, available in the application details on Azure. - * `KIBANA_ENDPOINT_URL` is your Kibana endpoint, available from the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body). - * `YOUR_DOMAIN` and `TLD` in the `claim_patterns.principal` regular expression are your organization email domain and top level domain. - - - Remember to add this configuration for each node type in the [User settings](../../../deploy-manage/deploy/elastic-cloud/edit-stack-settings.md) if you use several node types based on your deployment architecture (Dedicated Master, High IO, and/or High Storage). - -4. Create a role mapping: - - The following role mapping for OIDC restricts access to a specific user `(firstname.lastname)` based on the `claim_patterns.principal` email address. This prevents other users on the same domain from having access to your deployment. You can remove the rule or adjust it at your convenience. - - More details are available in our [Configuring role mappings documentation](/deploy-manage/users-roles/cluster-or-deployment-auth/openid-connect.md#oidc-role-mappings). - - ```json - POST /_security/role_mapping/oidc_kibana - { - "enabled": true, - "roles": [ "superuser" ], - "rules" : { - "all" : [ - { - "field" : { - "realm.name" : "oidc1" - } - }, - { - "field" : { - "username" : [ - "" - ] - } - } - ] - }, - "metadata": { "version": 1 } - } - ``` - - If you use an email in the `claim_patterns.principal`, you won’t need to add the domain in the role_mapping (for example, `firstname.lastname@your_domain.tld` should be `firstname.lastname`). - -5. Configure Kibana with the OIDC realm: - - The next step is to configure Kibana, in order to initiate the OpenID authentication: - - 1. Log in to the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body). - 2. 
[Update your Kibana user settings](../../../deploy-manage/deploy/elastic-cloud/edit-stack-settings.md) with the following configuration: - - ```sh - xpack.security.authc.providers: - oidc.oidc1: - order: 0 - realm: oidc1 - description: "Log in with Azure" - basic.basic1: - order: 1 - ``` - - - -## Setting up OpenID Connect with Google [ec-securing-oidc-google] - -Follow these steps to configure OpenID Connect single sign-on on Elasticsearch Service with a Google OP: - -1. Configure the OAuth client ID: - - 1. Create a new project: - - 1. Sign in to the Google Cloud and open the [New Project page](https://console.cloud.google.com/projectcreate). Create a new project. - - 2. Create a client ID and secret: - - 1. Navigate to the **APIs & Services** and open the [Credentials](https://console.cloud.google.com/apis/credentials) tab to create your OAuth client ID. - - :::{image} ../../../images/cloud-ec-oidc-oauth-create-credentials-google.png - :alt: A screenshot of the Google Cloud console Create Credentials dialog with the OAuth client ID field highlighted - ::: - - 2. For **Application Type** choose `Web application`. - 3. Choose a **Name** for your OAuth 2 client, for example `ec-oauth2`. - 4. Add an **Authorized redirect URI**. The URI should be defined as `KIBANA_ENDPOINT_URL/api/security/oidc/callback`. You can retrieve your `KIBANA_ENDPOINT_URL` by opening the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body) and selecting the Kibana **Copy endpoint** link in your deployment details. - 5. Select **Create** and copy your client ID and your client secret for later use. - -2. Add your client secret to the Elasticsearch keystore: - - 1. Follow the steps described in our security settings documentation to [Add a secret value](../../../deploy-manage/security/secure-settings.md#ec-add-secret-values) to the keystore: - - 1. Set the **Setting name** as `xpack.security.authc.realms.oidc.oidc1.rp.client_secret`. - - For OIDC, the client secret setting name in the keystore should be of the form: `xpack.security.authc.realms.oidc..rp.client_secret`. - - 2. For **Type**, select `Single string`. - 3. Paste your client secret into the **Secret** field. - 4. Select **Save**. - -3. Configure Elasticsearch with the OIDC realm: - - To learn more about the endpoints provided by Google, refer to this [OpenID configuration](https://accounts.google.com/.well-known/openid-configuration). - - To configure Elasticsearch for OIDC: - - 1. Log in to the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body). - 2. [Update your Elasticsearch user settings](../../../deploy-manage/deploy/elastic-cloud/edit-stack-settings.md) with the following configuration: - - ```sh - xpack: - security: - authc: - realms: - oidc: - oidc1: - order: 2 - rp.client_id: "YOUR_CLIENT_ID" - rp.response_type: "code" - rp.requested_scopes: ["openid", "email"] - rp.redirect_uri: "KIBANA_ENDPOINT_URL/api/security/oidc/callback" - op.issuer: "https://accounts.google.com" - op.authorization_endpoint: "https://accounts.google.com/o/oauth2/v2/auth" - op.token_endpoint: "https://oauth2.googleapis.com/token" - op.userinfo_endpoint: "https://openidconnect.googleapis.com/v1/userinfo" - op.jwkset_path: "https://www.googleapis.com/oauth2/v3/certs" - claims.principal: email - claim_patterns.principal: "^([^@]+)@YOUR_DOMAIN\\.TLD$" - ``` - - Where: - - * `YOUR_CLIENT_ID` is your Client ID. 
- * `KIBANA_ENDPOINT_URL` is your Kibana endpoint, available from the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body). - * `YOUR_DOMAIN` and `TLD` in the `claim_patterns.principal` regular expression are your organization email domain and top level domain. - - - Remember to add this configuration for each node type in the [User settings](../../../deploy-manage/deploy/elastic-cloud/edit-stack-settings.md) if you use several node types based on your deployment architecture (Dedicated Master, High IO, and/or High Storage). - -4. Create a role mapping: - - The following role mapping for OIDC restricts access to a specific user `(firstname.lastname)` based on the `claim_patterns.principal` email address. This prevents other users on the same domain from having access to your deployment. You can remove the rule or adjust it at your convenience. - - More details are available in our [Configuring role mappings documentation](/deploy-manage/users-roles/cluster-or-deployment-auth/openid-connect.md#oidc-role-mappings). - - ```json - POST /_security/role_mapping/oidc_kibana - { - "enabled": true, - "roles": [ "superuser" ], - "rules" : { - "all" : [ - { - "field" : { - "realm.name" : "oidc1" - } - }, - { - "field" : { - "username" : [ - "" - ] - } - } - ] - }, - "metadata": { "version": 1 } - } - ``` - - If you use an email in the `claim_patterns.principal`, you won’t need to add the domain in the role_mapping (for example, `firstname.lastname@your_domain.tld` should be `firstname.lastname`). - -5. Configure Kibana with the OIDC realm: - - The next step is to configure Kibana, in order to initiate the OpenID authentication: - - 1. Log in to the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body). - 2. [Update your Kibana user settings](../../../deploy-manage/deploy/elastic-cloud/edit-stack-settings.md) with the following configuration: - - ```sh - xpack.security.authc.providers: - oidc.oidc1: - order: 0 - realm: oidc1 - description: "Log in with Google" - basic.basic1: - order: 1 - ``` - - - -## Setting up OpenID Connect with Okta [ec-securing-oidc-okta] - -Follow these steps to configure OpenID Connect single sign-on on Elasticsearch Service with an Okta OP: - -1. Configure the OAuth client ID: - - 1. Create a new application: - - 1. Go to **Applications** > **Add Application**. - - :::{image} ../../../images/cloud-ec-oidc-new-app-okta.png - :alt: A screenshot of the Get Started tab on the Okta Create A New Application page - ::: - - 2. For the **Platform** page settings, select **Web** then **Next**. - 3. In the **Application settings** choose a **Name** for your application, for example `Kibana OIDC`. - 4. Set the **Base URI** to `KIBANA_ENDPOINT_URL`. You can retrieve your `KIBANA_ENDPOINT_URL` by opening the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body) and selecting the Kibana **Copy endpoint** link in your deployment details. - 5. Set the **Login redirect URI** as `KIBANA_ENDPOINT_URL/api/security/oidc/callback`. - 6. Set the **Logout redirect URI** as `KIBANA_ENDPOINT_URL/logged_out`. - 7. Choose **Done** and copy your client ID and client secret values for later use. - -2. Add your client secret to the Elasticsearch keystore: - - 1. Follow the steps described in our security settings documentation to [Add a secret value](../../../deploy-manage/security/secure-settings.md#ec-add-secret-values) to the keystore: - - 1. 
Set the **Setting name** as `xpack.security.authc.realms.oidc.oidc1.rp.client_secret`. - - For OIDC, the client secret setting name in the keystore should be of the form: `xpack.security.authc.realms.oidc..rp.client_secret`. - - 2. For **Type**, select `Single string`. - 3. Paste your client secret into the **Secret** field. - 4. Select **Save**. - -3. Configure Elasticsearch with the OIDC realm: - - To learn more about the available endpoints provided by Okta, refer to the following OpenID configuration: `https://{{yourOktadomain}}/.well-known/openid-configuration` - - To configure Elasticsearch for OIDC: - - 1. Log in to the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body). - 2. [Update your Elasticsearch user settings](../../../deploy-manage/deploy/elastic-cloud/edit-stack-settings.md) with the following configuration: - - ```sh - xpack: - security: - authc: - realms: - oidc: - oidc1: - order: 2 - rp.client_id: "YOUR_CLIENT_ID" - rp.response_type: "code" - rp.requested_scopes: ["openid", "email"] - rp.redirect_uri: "KIBANA_ENDPOINT_URL/api/security/oidc/callback" - op.issuer: "https://YOUR_OKTA_DOMAIN" - op.authorization_endpoint: "https://YOUR_OKTA_DOMAIN/oauth2/v1/authorize" - op.token_endpoint: "https://YOUR_OKTA_DOMAIN/oauth2/v1/token" - op.userinfo_endpoint: "https://YOUR_OKTA_DOMAIN/oauth2/v1/userinfo" - op.endsession_endpoint: "https://YOUR_OKTA_DOMAIN/oauth2/v1/logout" - op.jwkset_path: "https://YOUR_OKTA_DOMAIN/oauth2/v1/keys" - claims.principal: email - claim_patterns.principal: "^([^@]+)@YOUR_DOMAIN\\.TLD$" - ``` - - Where: - - * `YOUR_CLIENT_ID` is the Client ID that you set up in the previous steps. - * `KIBANA_ENDPOINT_URL` is your Kibana endpoint, available from the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body). - * `YOUR_OKTA_DOMAIN` is the URL of your Okta domain shown on your Okta dashboard. - * `YOUR_DOMAIN` and `TLD` in the `claim_patterns.principal` regular expression are your organization email domain and top level domain. - - - Remember to add this configuration for each node type in the [User settings](../../../deploy-manage/deploy/elastic-cloud/edit-stack-settings.md) if you use several node types based on your deployment architecture (Dedicated Master, High IO, and/or High Storage). - -4. Create a role mapping: - - The following role mapping for OIDC restricts access to a specific user `(firstname.lastname)` based on the `claim_patterns.principal` email address. This prevents other users on the same domain from having access to your deployment. You can remove the rule or adjust it at your convenience. - - More details are available in our [Configuring role mappings documentation](/deploy-manage/users-roles/cluster-or-deployment-auth/openid-connect.md#oidc-role-mappings). - - ```json - POST /_security/role_mapping/oidc_kibana - { - "enabled": true, - "roles": [ "superuser" ], - "rules" : { - "all" : [ - { - "field" : { - "realm.name" : "oidc1" - } - }, - { - "field" : { - "username" : [ - "" - ] - } - } - ] - }, - "metadata": { "version": 1 } - } - ``` - - If you use an email in the `claim_patterns.principal`, you won’t need to add the domain in the role_mapping (for example, `firstname.lastname@your_domain.tld` should be `firstname.lastname`). - -5. Configure Kibana with the OIDC realm: - - The next step is to configure Kibana, in order to initiate the OpenID authentication: - - 1. 
Log in to the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body). - 2. [Update your Kibana user settings](../../../deploy-manage/deploy/elastic-cloud/edit-stack-settings.md) with the following configuration: - - ```sh - xpack.security.authc.providers: - oidc.oidc1: - order: 0 - realm: oidc1 - description: "Log in with Okta" - basic.basic1: - order: 1 - ``` - - - -## Summary and References [ec-summary-and-references] - -This topic covered how to authenticate users in Kibana using OpenID Connect and different providers: Azure, Google, and Okta. If you are looking for other authentication methods, Elasticsearch Service also supports [SAML](../../../deploy-manage/users-roles/cluster-or-deployment-auth/saml.md) and [Kerberos](../../../deploy-manage/users-roles/cluster-or-deployment-auth/kerberos.md). Please note that OpenID Connect support is only available for Platinum and Enterprise subscriptions. New to Elasticsearch Service? [Sign Up for a Trial](../../../deploy-manage/deploy/elastic-cloud/create-an-organization.md) to try it out. - -To learn more about OIDC configuration consult the following references: - -* [OpenID Foundation](https://openid.net/connect/) -* [Azure OAuth 2.0 and OpenID documentation](https://docs.microsoft.com/en-us/azure/active-directory/develop/active-directory-v2-protocols) -* [Google OpenID Connect documentation](https://developers.google.com/identity/protocols/oauth2/openid-connect) -* [Okta OAuth 2.0 documentation](https://developer.okta.com/docs/guides/implement-oauth-for-okta/create-oauth-app/) - diff --git a/raw-migrated-files/cloud/cloud/ec-securing-clusters-saml-azure.md b/raw-migrated-files/cloud/cloud/ec-securing-clusters-saml-azure.md deleted file mode 100644 index 40bbf4eed..000000000 --- a/raw-migrated-files/cloud/cloud/ec-securing-clusters-saml-azure.md +++ /dev/null @@ -1,141 +0,0 @@ -# Set up SAML with Microsoft Entra ID [ec-securing-clusters-saml-azure] - -This guide provides a walk-through of how to configure Microsoft Entra ID (formerly Azure Active Directory) as an identity provider for SAML single sign-on (SSO) authentication, used for accessing Kibana in Elasticsearch Service. - -Use the following steps to configure SAML access to Kibana: - -* [Configure SAML with Azure AD to access Kibana](../../../deploy-manage/users-roles/cluster-or-deployment-auth/saml.md#ec-securing-clusters-saml-azure-kibana) - -For more information about SAML configuration, you can also refer to: - -* [Secure your clusters with SAML](../../../deploy-manage/users-roles/cluster-or-deployment-auth/saml.md) -* [Single Sign-On SAML protocol](https://docs.microsoft.com/en-us/azure/active-directory/develop/single-sign-on-saml-protocol) - - -## Configure SAML with Azure AD to access Kibana [ec-securing-clusters-saml-azure-kibana] - -Follow these steps to configure SAML with Microsoft Entra ID as an identity provider to access Kibana. - -1. Configure the Azure Identity Provider: - - 1. Log in to the [Azure Portal](https://portal.azure.com/) and navigate to **Entra** (formerly Azure Active Directory). - 2. Click **Enterprise applications** and then **New application** to register a new application. - 3. Click **Create your own application**, provide a name, and select the **Integrate any other application you don’t find in the gallery** option. - - :::{image} ../../../images/cloud-ec-saml-azuread-create-app.png - :alt: The Azure Create your own application flyout - ::: - - 4. 
Navigate to the new application, click **Users and groups**, and add all necessary users and groups. Only the users and groups that you add here will have SSO access to the Elastic stack. - - :::{image} ../../../images/cloud-ec-saml-azuread-users-and-groups.png - :alt: The Azure User and groups page - ::: - - 5. Navigate to **Single sign-on** and edit the basic SAML configuration, adding the following information: - - 1. `Identifier (Entity ID)` - a string that uniquely identifies a SAML service provider. We recommend using your Kibana URL, but you can use any identifier. - - For example, `https://saml-azure.kb.northeurope.azure.elastic-cloud.com:443`. - - 2. `Reply URL` - This is the Kibana URL with `/api/security/saml/callback` appended. - - For example, `https://saml-azure.kb.northeurope.azure.elastic-cloud.com:443/api/security/saml/callback`. - - 3. `Logout URL` - This is the Kibana URL with `/logout` appended. - - For example, `https://saml-azure.kb.northeurope.azure.elastic-cloud.com:443/logout`. - - :::{image} ../../../images/cloud-ec-saml-azuread-kibana-config.png - :alt: The Azure SAML configuration page with Kibana settings - ::: - - 6. Navigate to **SAML-based Single sign-on**, open the **User Attributes & Claims** configuration, and update the fields to suit your needs. These settings control what information from Azure AD will be made available to the Elastic stack during SSO. This information can be used to identify a user in the Elastic stack and/or to assign different roles to users in the Elastic stack. We suggest that you configure a proper value for the `Unique User Identifier (Name ID)` claim that identifies the user uniquely and is not prone to changes. - - :::{image} ../../../images/cloud-ec-saml-azuread-user-attributes.png - :alt: The Azure User Attributes & Claims page - ::: - - 7. From the SAML configuration page in Azure, make a note of the `App Federation Metadata URL`. - -2. Configure Elasticsearch and Kibana for SAML: - - 1. Log in to the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body). - 2. [Update your Elasticsearch user settings](../../../deploy-manage/deploy/elastic-cloud/edit-stack-settings.md) with the following configuration: - - ```sh - xpack.security.authc.realms.saml.kibana-realm: - order: 2 - attributes.principal: nameid - attributes.groups: "http://schemas.microsoft.com/ws/2008/06/identity/claims/groups" - idp.metadata.path: "https://login.microsoftonline.com//federationmetadata/2007-06/federationmetadata.xml?appid=" - idp.entity_id: "https://sts.windows.net//" - sp.entity_id: "" - sp.acs: "/api/security/saml/callback" - sp.logout: "/logout" - ``` - - Where: - - * ``` is your Application ID, available in the application details in Azure. - * ``` is your Tenant ID, available in the tenant overview page in Azure. - * `` is your Kibana endpoint, available from the Elasticsearch Service console. Ensure this is the same value that you set for `Identifier (Entity ID)` in the earlier Azure AD configuration step. - - Note that for `idp.metadata.path` we’ve shown the format to construct the URL, but this should be identical to the `App Federation Metadata URL` setting that you made a note of in the previous step. - - Remember to add this configuration for each node type in your [user settings](../../../deploy-manage/deploy/elastic-cloud/edit-stack-settings.md) if you use several node types based on your deployment architecture (Dedicated Master, High IO, and/or High Storage). - - 3. 
Next, configure Kibana in order to enable SAML authentication: - - 1. Log in to the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body). - 2. [Update your Kibana user settings](../../../deploy-manage/deploy/elastic-cloud/edit-stack-settings.md) with the following configuration: - - ```yaml - xpack.security.authc.providers: - saml.kibana-realm: - order: 0 - realm: kibana-realm - description: "Log in with Azure AD" - ``` - - The configuration values used in the example above are: - - `xpack.security.authc.providers` - : Add `saml` provider to instruct {{kib}} to use SAML SSO as the authentication method. - - `xpack.security.authc.providers.saml..realm` - : Set this to the name of the SAML realm that you have used in your [Elasticsearch realm configuration](../../../deploy-manage/users-roles/cluster-or-deployment-auth/saml.md#saml-create-realm). For this example, use the realm name that you configured in the previous step: `kibana-realm`. - - 3. Create a role mapping. - - The following role mapping for SAML SSO restricts access to a specific user `(email)` based on the `attributes.principal` email address. This prevents other users on the same domain from having access to your deployment. You can remove the rule or adjust it at your convenience. - - ```json - POST /_security/role_mapping/SAML_kibana - { - "enabled": true, - "roles": [ "superuser" ], - "rules" : { - "all" : [ - { - "field" : { - "realm.name" : "kibana-realm" - } - }, - { - "field" : { - "username" : [ - "" - ] - } - } - ] - }, - "metadata": { "version": 1 } - } - ``` - - For more information, refer to [Configure role mapping](../../../deploy-manage/users-roles/cluster-or-deployment-auth/saml.md#saml-role-mapping) in the Elasticsearch SAML documentation. - - -You should now have successfully configured SSO access to Kibana with Azure AD as the identity provider. diff --git a/raw-migrated-files/cloud/cloud/ec-security.md b/raw-migrated-files/cloud/cloud/ec-security.md index fdd9f099e..1e2dbecb2 100644 --- a/raw-migrated-files/cloud/cloud/ec-security.md +++ b/raw-migrated-files/cloud/cloud/ec-security.md @@ -1,20 +1,20 @@ # Securing your deployment [ec-security] -The security of Elasticsearch Service is described on the [{{ecloud}} security](https://www.elastic.co/cloud/security) page. In addition to the security provided by {{ecloud}}, you can take the following steps to secure your deployments: +The security of {{ech}} is described on the [{{ecloud}} security](https://www.elastic.co/cloud/security) page. In addition to the security provided by {{ecloud}}, you can take the following steps to secure your deployments: * Prevent unauthorized access with password protection and role-based access control: * Reset the [`elastic` user password](../../../deploy-manage/users-roles/cluster-or-deployment-auth/built-in-users.md). * Use third-party authentication providers and services like [SAML](../../../deploy-manage/users-roles/cluster-or-deployment-auth/saml.md), [OpenID Connect](../../../deploy-manage/users-roles/cluster-or-deployment-auth/openid-connect.md), or [Kerberos](../../../deploy-manage/users-roles/cluster-or-deployment-auth/kerberos.md) to provide dynamic [role mappings](../../../deploy-manage/users-roles/cluster-or-deployment-auth/mapping-users-groups-to-roles.md) for role based or attribute based access control. * Use {{kib}} Spaces and roles to [secure access to {{kib}}](../../../deploy-manage/users-roles/cluster-or-deployment-auth/quickstart.md). 
- * Authorize and authenticate service accounts for {{beats}} by [granting access using API keys](asciidocalypse://docs/beats/docs/reference/ingestion-tools/beats-filebeat/beats-api-keys.md). + * Authorize and authenticate service accounts for {{beats}} by [granting access using API keys](asciidocalypse://docs/beats/docs/reference/filebeat/beats-api-keys.md). * Roles can provide full, or read only, access to your data and can be created in Kibana or directly in Elasticsearch. Check [defining roles](../../../deploy-manage/users-roles/cluster-or-deployment-auth/defining-roles.md) for full details. * Block unwanted traffic with [traffic filter](../../../deploy-manage/security/traffic-filtering.md). * Secure your settings with the Elasticsearch [keystore](../../../deploy-manage/security/secure-settings.md). -In addition, we also enable encryption at rest (EAR) by default. Elasticsearch Service supports EAR for both the data stored in your clusters and the snapshots we take for backup, on all cloud platforms and across all regions. +In addition, we also enable encryption at rest (EAR) by default. {{ech}} supports EAR for both the data stored in your clusters and the snapshots we take for backup, on all cloud platforms and across all regions. ## Should I use organization-level or deployment-level SSO? [ec_should_i_use_organization_level_or_deployment_level_sso] diff --git a/raw-migrated-files/cloud/cloud/ec-select-subscription-level.md b/raw-migrated-files/cloud/cloud/ec-select-subscription-level.md index 32a2591c4..78677c415 100644 --- a/raw-migrated-files/cloud/cloud/ec-select-subscription-level.md +++ b/raw-migrated-files/cloud/cloud/ec-select-subscription-level.md @@ -6,7 +6,7 @@ If, at any time during your monthly subscription with Elastic Cloud, you decide To change your subscription level: -1. Log in to the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body). +1. Log in to the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). 2. Select the user icon on the header bar and select **Billing** from the menu. 3. On the **Overview** page, select **Update subscription**. 4. Choose a new subscription level. diff --git a/raw-migrated-files/cloud/cloud/ec-service-status.md b/raw-migrated-files/cloud/cloud/ec-service-status.md index c87b03694..2a769018f 100644 --- a/raw-migrated-files/cloud/cloud/ec-service-status.md +++ b/raw-migrated-files/cloud/cloud/ec-service-status.md @@ -1,6 +1,6 @@ # Service status [ec-service-status] -Elasticsearch Service is a hosted service for the Elastic Stack that runs on different cloud platforms, such as Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure. Like any service, it might undergo availability changes from time to time. When availability changes, Elastic makes sure to provide you with a current service status. +{{ech}} is a hosted service for the Elastic Stack that runs on different cloud platforms, such as Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure. Like any service, it might undergo availability changes from time to time. When availability changes, Elastic makes sure to provide you with a current service status. To check current and past service availability, go to [Cloud Status](https://cloud-status.elastic.co/) page. 
diff --git a/raw-migrated-files/cloud/cloud/ec-sign-outgoing-saml-message.md b/raw-migrated-files/cloud/cloud/ec-sign-outgoing-saml-message.md deleted file mode 100644 index 0a49621eb..000000000 --- a/raw-migrated-files/cloud/cloud/ec-sign-outgoing-saml-message.md +++ /dev/null @@ -1,64 +0,0 @@ -# Sign outgoing SAML messages [ec-sign-outgoing-saml-message] - -If configured, Elastic Stack will sign outgoing SAML messages. - -As a prerequisite, you need to generate a signing key and a self-signed certificate. You need to share this certificate with your SAML Identity Provider so that it can verify the received messages. The key needs to be unencrypted. The exact procedure is system dependent, you can use for example `openssl`: - -```sh -openssl req -new -x509 -days 3650 -nodes -sha256 -out saml-sign.crt -keyout saml-sign.key -``` - -Place the files under the `saml` folder and add them to the existing SAML bundle, or [create a new one](../../../deploy-manage/deploy/elastic-cloud/upload-custom-plugins-bundles.md). - -In our example, the certificate and the key will be located in the path `/app/config/saml/saml-sign.{crt,key}`: - -```sh -$ tree . -. -└── saml - ├── saml-sign.crt - └── saml-sign.key -``` - -Make sure that the bundle is included with your deployment. - -Adjust your realm configuration accordingly: - -```sh - signing.certificate: /app/config/saml/saml-sign.crt <1> - signing.key: /app/config/saml/saml-sign.key <2> -``` - -1. The path to the SAML signing certificate that was uploaded. -2. The path to the SAML signing key that was uploaded. - - -When configured with a signing key and certificate, Elastic Stack will sign all outgoing messages (SAML Authentication Requests, SAML Logout Requests, SAML Logout Responses) by default. This behavior can be altered by configuring `signing.saml_messages` appropriately with the comma separated list of messages to sign. Supported values are `AuthnRequest`, `LogoutRequest` and `LogoutResponse` and the default value is `*`. - -For example: - -```sh -xpack: - security: - authc: - realms: - saml-realm-name: - order: 2 - ... - signing.saml_messages: AuthnRequest <1> - ... -``` - -1. This configuration ensures that only SAML authentication requests will be sent signed to the Identity Provider. - - -## Optional settings [ec_optional_settings] - -The following optional realm settings are supported: - -* `force_authn` Specifies whether to set the `ForceAuthn` attribute when requesting that the IdP authenticate the current user. If set to `true`, the IdP is required to verify the user’s identity, irrespective of any existing sessions they might have. Defaults to `false`. -* `idp.use_single_logout` Indicates whether to utilise the Identity Provider’s `` (if one exists in the IdP metadata file). Defaults to `true`. - -After completing these steps, you can log in to Kibana by authenticating against your SAML IdP. If you encounter any issues with the configuration, refer to the [SAML troubleshooting page](/troubleshoot/elasticsearch/security/trb-security-saml.md) which contains information about common issues and suggestions for their resolution. 
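For reference, the two optional settings above can be added to the same realm configuration shown earlier. The following is only a sketch — the realm name and values are illustrative, not a recommended configuration:

```sh
xpack.security.authc.realms.saml.saml-realm-name:
  order: 2
  # ... existing realm settings ...
  force_authn: true             # ask the IdP to re-verify the user's identity on every authentication request
  idp.use_single_logout: false  # do not use the IdP's single logout service, even if the metadata defines one
```

Both settings are optional; if they are omitted, the defaults described above (`false` and `true`, respectively) apply.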
- - diff --git a/raw-migrated-files/cloud/cloud/ec-snapshot-restore.md b/raw-migrated-files/cloud/cloud/ec-snapshot-restore.md index d7245ca76..ebc1664c0 100644 --- a/raw-migrated-files/cloud/cloud/ec-snapshot-restore.md +++ b/raw-migrated-files/cloud/cloud/ec-snapshot-restore.md @@ -2,9 +2,9 @@ Snapshots are an efficient way to ensure that your Elasticsearch indices can be recovered in the event of an accidental deletion, or to migrate data across deployments. -The information here is specific to managing repositories and snapshots in Elasticsearch Service. We also support the Elasticsearch snapshot and restore API to back up your data. For details, consult the [Snapshot and Restore documentation](../../../deploy-manage/tools/snapshot-and-restore.md). +The information here is specific to managing repositories and snapshots in {{ech}}. We also support the Elasticsearch snapshot and restore API to back up your data. For details, consult the [Snapshot and Restore documentation](../../../deploy-manage/tools/snapshot-and-restore.md). -When you create a cluster in Elasticsearch Service, a default repository called `found-snapshots` is automatically added to the cluster. This repository is specific to that cluster: the deployment ID is part of the repository’s `base_path`, i.e., `/snapshots/[cluster-id]`. +When you create a cluster in {{ech}}, a default repository called `found-snapshots` is automatically added to the cluster. This repository is specific to that cluster: the deployment ID is part of the repository’s `base_path`, i.e., `/snapshots/[cluster-id]`. ::::{important} Do not disable or delete the default `cloud-snapshot-policy` SLM policy, and do not change the default `found-snapshots` repository defined in that policy. These actions are not supported. diff --git a/raw-migrated-files/cloud/cloud/ec-traffic-filtering-deployment-configuration.md b/raw-migrated-files/cloud/cloud/ec-traffic-filtering-deployment-configuration.md index d464d1bf1..94ad8ce3e 100644 --- a/raw-migrated-files/cloud/cloud/ec-traffic-filtering-deployment-configuration.md +++ b/raw-migrated-files/cloud/cloud/ec-traffic-filtering-deployment-configuration.md @@ -1,8 +1,8 @@ # Traffic Filtering [ec-traffic-filtering-deployment-configuration] -Traffic filtering is one of the security layers available in Elasticsearch Service. It allows you to limit how your deployments can be accessed. Add another layer of security to your installation and deployments by restricting inbound traffic to *only* the sources that you trust. +Traffic filtering is one of the security layers available in {{ecloud}}. It allows you to limit how your deployments can be accessed. Add another layer of security to your installation and deployments by restricting inbound traffic to *only* the sources that you trust. -Elasticsearch Service supports the following traffic sources: +{{ecloud}} supports the following traffic sources: * [IP addresses and Classless Inter-Domain Routing (CIDR) masks](../../../deploy-manage/security/ip-traffic-filtering.md), e.g. `82.102.25.74` or `199.226.244.0/24`. * [AWS Virtual Private Clouds (VPCs) over AWS PrivateLink](../../../deploy-manage/security/aws-privatelink-traffic-filters.md), supported only in AWS regions. @@ -44,7 +44,7 @@ By default, all your deployments are accessible over the public internet. They a Once you associate at least one traffic filter with a deployment, traffic that does not match any rules (for this deployment) is denied. ::::{note} -This only applies to external traffic. 
Internal traffic is managed by Elasticsearch Service. For example, Kibana can connect to Elasticsearch, as well as internal services which manage the deployment. Other deployments can’t connect to deployments protected by traffic filters. +This only applies to external traffic. Internal traffic is managed by {{ecloud}}. For example, Kibana can connect to Elasticsearch, as well as internal services which manage the deployment. Other deployments can’t connect to deployments protected by traffic filters. :::: @@ -83,7 +83,7 @@ Jane creates a deployment. At this point the deployment is accessible over inter Jane wants to restrict access to the deployment so that only the traffic originating from Jane’s VPC is allowed. -* They create a Traffic Filter *Private Link Endpoint* rule set, thus registering their VPC with Elasticsearch Service. +* They create a Traffic Filter *Private Link Endpoint* rule set, thus registering their VPC with {{ecloud}}. * They associate this rule set with the deployment. * At this point, their deployment is only accessible over PrivateLink from Jane’s VPC. This does not affect other security layers, so Jane’s users need to authenticate with username+password. * The deployment is no longer accessible over the public internet endpoint. @@ -120,10 +120,10 @@ This section offers suggestions on how to troubleshoot your traffic filters. Bef ### Review the rule sets associated with a deployment [ec-review-rule-sets] -1. Log in to the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body). -2. Find your deployment on the home page in the Elasticsearch Service card and select **Manage** to access it directly. Or, select **Hosted deployments** to go to the deployments page to view all of your deployments. +1. Log in to the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). +2. Find your deployment on the home page in the **Hosted deployments** card and select **Manage** to access it directly. Or, select **Hosted deployments** to go to the **Deployments** page to view all of your deployments. - On the deployments page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. + On the **Deployments** page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. 3. Select the **Security** tab on the left-hand side menu bar. 4. Traffic filter rule sets are listed under **Traffic filters**. @@ -135,8 +135,8 @@ On this screen you can view and remove existing filters and attach new filters. To identify which rule sets are automatically applied to new deployments in your account: -1. Log in to the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body). -2. Find your deployment on the home page in the Elasticsearch Service card and select **Manage** to access it directly. Or, select **Hosted deployments** to go to the deployments page to view all of your deployments. +1. Log in to the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). +2. Find your deployment on the home page in the **Hosted deployments** card and select **Manage** to access it directly. Or, select **Hosted deployments** to go to the **Deployments** page to view all of your deployments. 3. Under the **Features** tab, open the **Traffic filters** page. 4. 
You can find the list of traffic filter rule sets. 5. Select each of the rule sets — **Include by default** is checked when this rule set is automatically applied to all new deployments in its region. diff --git a/raw-migrated-files/cloud/cloud/ec-traffic-filtering-ip.md b/raw-migrated-files/cloud/cloud/ec-traffic-filtering-ip.md index 5b79ad013..e6e40fab6 100644 --- a/raw-migrated-files/cloud/cloud/ec-traffic-filtering-ip.md +++ b/raw-migrated-files/cloud/cloud/ec-traffic-filtering-ip.md @@ -1,13 +1,13 @@ # IP traffic filters [ec-traffic-filtering-ip] -Traffic filtering, by IP address or CIDR block, is one of the security layers available in Elasticsearch Service. It allows you to limit how your deployments can be accessed. We have two types of filters available for filtering by IP address or CIDR block: Ingress/Inbound and Egress/Outbound (Beta, API only). +Traffic filtering, by IP address or CIDR block, is one of the security layers available in {{ecloud}}. It allows you to limit how your deployments can be accessed. We have two types of filters available for filtering by IP address or CIDR block: Ingress/Inbound and Egress/Outbound (Beta, API only). -* **Ingress or inbound IP filters** - These restrict access to your deployments from a set of IP addresses or CIDR blocks. These filters are available through the Elasticsearch Service console. +* **Ingress or inbound IP filters** - These restrict access to your deployments from a set of IP addresses or CIDR blocks. These filters are available through the {{ecloud}} Console. * **Egress or outbound IP filters** - These restrict the set of IP addresses or CIDR blocks accessible from your deployment. These might be used to restrict access to a certain region or service. This feature is in beta and is currently only available through the [Traffic Filtering API](../../../deploy-manage/security/manage-traffic-filtering-through-api.md). -Read more about [Traffic Filtering](../../../deploy-manage/security/traffic-filtering.md) for the general concepts behind traffic filtering in Elasticsearch Service. +Read more about [Traffic Filtering](../../../deploy-manage/security/traffic-filtering.md) for the general concepts behind traffic filtering in {{ecloud}}. -Follow the step described here to set up ingress or inbound IP filters through the Elasticsearch Service console. +Follow the step described here to set up ingress or inbound IP filters through the {{ecloud}} Console. ## Create an IP filter rule set [ec-create-traffic-filter-ip-rule-set] @@ -16,8 +16,8 @@ You can combine any rules into a set, so we recommend that you group rules accor To create a rule set: -1. Log in to the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body). -2. Find your deployment on the home page in the Elasticsearch Service card and select **Manage** to access it directly. Or, select **Hosted deployments** to go to the deployments page to view all of your deployments. +1. Log in to the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). +2. Find your deployment on the home page in the **Hosted deployments** card and select **Manage** to access it directly. Or, select **Hosted deployments** to go to the **Deployments** page to view all of your deployments. 3. Under the **Features** tab, open the **Traffic filters** page. 4. Select **Create filter**. 5. Select **IP filtering rule set**. 
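If you would rather script this step than use the Console, the same kind of ingress IP rule set can be created with the traffic filtering API covered later on this page. The following is a minimal sketch — the API key, rule set name, region, and source addresses are placeholders:

```sh
# Create an ingress IP traffic filter rule set (illustrative values)
curl -XPOST \
  -H "Authorization: ApiKey $EC_API_KEY" \
  -H "Content-Type: application/json" \
  "https://api.elastic-cloud.com/api/v1/deployments/traffic-filter/rulesets" \
  -d '
{
  "name": "Office network",
  "region": "us-east-1",
  "type": "ip",
  "include_by_default": false,
  "rules": [
    { "source": "82.102.25.74" },
    { "source": "199.226.244.0/24" }
  ]
}
'
```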
@@ -60,8 +60,8 @@ If you want to remove any traffic restrictions from a deployment or delete a rul You can edit a rule set name or change the allowed traffic sources using IPv4, or a range of addresses with CIDR. -1. Log in to the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body). -2. Find your deployment on the home page in the Elasticsearch Service card and select **Manage** to access it directly. Or, select **Hosted deployments** to go to the deployments page to view all of your deployments. +1. Log in to the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). +2. Find your deployment on the home page in the **Hosted deployments** card and select **Manage** to access it directly. Or, select **Hosted deployments** to go to the **Deployments** page to view all of your deployments. 3. Under the **Features** tab, open the **Traffic filters** page. 4. Find the rule set you want to edit. 5. Select the **Edit** icon. diff --git a/raw-migrated-files/cloud/cloud/ec-traffic-filtering-psc.md b/raw-migrated-files/cloud/cloud/ec-traffic-filtering-psc.md index 16eb03ba3..eb6d7e051 100644 --- a/raw-migrated-files/cloud/cloud/ec-traffic-filtering-psc.md +++ b/raw-migrated-files/cloud/cloud/ec-traffic-filtering-psc.md @@ -1,8 +1,8 @@ # GCP Private Service Connect traffic filters [ec-traffic-filtering-psc] -Traffic filtering, to allow only Private Service Connect connections, is one of the security layers available in Elasticsearch Service. It allows you to limit how your deployments can be accessed. +Traffic filtering, to allow only Private Service Connect connections, is one of the security layers available in {{ecloud}}. It allows you to limit how your deployments can be accessed. -Read more about [Traffic Filtering](../../../deploy-manage/security/traffic-filtering.md) for the general concepts behind traffic filtering in Elasticsearch Service. +Read more about [Traffic Filtering](../../../deploy-manage/security/traffic-filtering.md) for the general concepts behind traffic filtering in {{ecloud}}. ::::{note} Private Service Connect filtering is supported only for Google Cloud regions. diff --git a/raw-migrated-files/cloud/cloud/ec-traffic-filtering-through-the-api.md b/raw-migrated-files/cloud/cloud/ec-traffic-filtering-through-the-api.md index 174063b7e..f1fd7effd 100644 --- a/raw-migrated-files/cloud/cloud/ec-traffic-filtering-through-the-api.md +++ b/raw-migrated-files/cloud/cloud/ec-traffic-filtering-through-the-api.md @@ -1,6 +1,6 @@ # Manage traffic filtering through the API [ec-traffic-filtering-through-the-api] -This example demonstrates how to use the Elasticsearch Service RESTful API to manage different types of traffic filters. We cover the following examples: +This example demonstrates how to use the {{ecloud}} RESTful API to manage different types of traffic filters. 
We cover the following examples: * [Create a traffic filter rule set](../../../deploy-manage/security/manage-traffic-filtering-through-api.md#ec-create-a-traffic-filter-rule-set) @@ -15,7 +15,7 @@ This example demonstrates how to use the Elasticsearch Service RESTful API to ma * [Delete a rule set association with a deployment](../../../deploy-manage/security/manage-traffic-filtering-through-api.md#ec-delete-rule-set-association-with-a-deployment) * [Delete a traffic filter rule set](../../../deploy-manage/security/manage-traffic-filtering-through-api.md#ec-delete-a-rule-set) -Read through the main [Traffic Filtering](../../../deploy-manage/security/traffic-filtering.md) page to learn about the general concepts behind filtering access to your Elasticsearch Service deployments. +Read through the main [Traffic Filtering](../../../deploy-manage/security/traffic-filtering.md) page to learn about the general concepts behind filtering access to your {{ech}} deployments. ## Create a traffic filter rule set [ec-create-a-traffic-filter-rule-set] @@ -52,7 +52,7 @@ https://api.elastic-cloud.com/api/v1/deployments/traffic-filter/rulesets \ ``` `region` -: The region is always the same region as the deployment you want to associate with a traffic filter rule set. For details, check the [list of available regions](asciidocalypse://docs/cloud/docs/reference/cloud/cloud-hosted/ec-regions-templates-instances.md). +: The region is always the same region as the deployment you want to associate with a traffic filter rule set. For details, check the [list of available regions](asciidocalypse://docs/cloud/docs/reference/cloud-hosted/ec-regions-templates-instances.md). `type` : The type of the rule set. In the JSON object, we use `ip` for the ingress IP traffic filter. Currently, we support `ip`, `egress_firewall`, `vpce` (AWS Private Link), `azure_private_endpoint` and `gcp_private_service_connect_endpoint`. These are described in further detail below. diff --git a/raw-migrated-files/cloud/cloud/ec-traffic-filtering-vnet.md b/raw-migrated-files/cloud/cloud/ec-traffic-filtering-vnet.md index 055438aa9..b2c967468 100644 --- a/raw-migrated-files/cloud/cloud/ec-traffic-filtering-vnet.md +++ b/raw-migrated-files/cloud/cloud/ec-traffic-filtering-vnet.md @@ -1,8 +1,8 @@ # Azure Private Link traffic filters [ec-traffic-filtering-vnet] -Traffic filtering, to allow only Azure Private Link connections, is one of the security layers available in Elasticsearch Service. It allows you to limit how your deployments can be accessed. +Traffic filtering, to allow only Azure Private Link connections, is one of the security layers available in {{ecloud}}. It allows you to limit how your deployments can be accessed. -Read more about [Traffic Filtering](../../../deploy-manage/security/traffic-filtering.md) for the general concepts behind traffic filtering in Elasticsearch Service. +Read more about [Traffic Filtering](../../../deploy-manage/security/traffic-filtering.md) for the general concepts behind traffic filtering in {{ecloud}}. ::::{note} Azure Private Link filtering is supported only for Azure regions. @@ -282,5 +282,5 @@ This means your deployment on Elastic Cloud can be in a different region than th 1. Create your Private Endpoint using the service alias for region 2 in the region 1 VNET (let’s call this VNET1). 2. 
Create a Private Hosted Zone for region 2, and associate it with VNET1 similar to the step [Create a Private Link endpoint and DNS](../../../deploy-manage/security/azure-private-link-traffic-filters.md#ec-private-link-azure-dns). Note that you are creating these resources in region 1, VNET1. -2. [Create a traffic filter rule set](../../../deploy-manage/security/azure-private-link-traffic-filters.md#ec-azure-create-traffic-filter-private-link-rule-set) and [Associate the rule set](../../../deploy-manage/security/aws-privatelink-traffic-filters.md#ec-associate-traffic-filter-private-link-rule-set) through the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body), just as you would for any deployment. +2. [Create a traffic filter rule set](../../../deploy-manage/security/azure-private-link-traffic-filters.md#ec-azure-create-traffic-filter-private-link-rule-set) and [Associate the rule set](../../../deploy-manage/security/aws-privatelink-traffic-filters.md#ec-associate-traffic-filter-private-link-rule-set) through the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body), just as you would for any deployment. 3. [Test the connection](../../../deploy-manage/security/azure-private-link-traffic-filters.md#ec-azure-access-the-deployment-over-private-link) from a VM or client in region 1 to your Private Link endpoint, and it should be able to connect to your Elasticsearch cluster hosted in region 2. diff --git a/raw-migrated-files/cloud/cloud/ec-traffic-filtering-vpc.md b/raw-migrated-files/cloud/cloud/ec-traffic-filtering-vpc.md index b0a30fd7a..417eb32ca 100644 --- a/raw-migrated-files/cloud/cloud/ec-traffic-filtering-vpc.md +++ b/raw-migrated-files/cloud/cloud/ec-traffic-filtering-vpc.md @@ -1,8 +1,8 @@ # AWS PrivateLink traffic filters [ec-traffic-filtering-vpc] -Traffic filtering, to only AWS PrivateLink connections, is one of the security layers available in Elasticsearch Service. It allows you to limit how your deployments can be accessed. +Traffic filtering, to only AWS PrivateLink connections, is one of the security layers available in {{ecloud}}. It allows you to limit how your deployments can be accessed. -Read more about [Traffic Filtering](../../../deploy-manage/security/traffic-filtering.md) for the general concepts behind traffic filtering in Elasticsearch Service. +Read more about [Traffic Filtering](../../../deploy-manage/security/traffic-filtering.md) for the general concepts behind traffic filtering in {{ecloud}}. ::::{note} PrivateLink filtering is supported only for AWS regions. AWS does not support cross-region PrivateLink connections. Your PrivateLink endpoint needs to be in the same region as your target deployments. Additional details can be found in the [AWS VPCE Documentation](https://docs.aws.amazon.com/vpc/latest/userguide/vpce-interface.md#vpce-interface-limitations). AWS interface VPC endpoints get created in availability zones (AZ). In some regions, our VPC endpoint *service* is not present in all the possible AZs that a region offers. You can only choose AZs that are common on both sides. As the *names* of AZs (for example `us-east-1a`) differ between AWS accounts, the following list of AWS regions shows the *ID* (e.g. `use1-az4`) of each available AZ for the service. Check [interface endpoint availability zone considerations](https://docs.aws.amazon.com/vpc/latest/userguide/vpce-interface.md#vpce-interface-availability-zones) for more details. @@ -187,8 +187,8 @@ Having trouble finding your VPC endpoint ID? 
You can find it in the AWS console. Once you know your VPC endpoint ID you can create a private link traffic filter rule set. -1. Log in to the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body). -2. Find your deployment on the home page in the Elasticsearch Service card and select **Manage** to access it directly. Or, select **Hosted deployments** to go to the deployments page to view all of your deployments. +1. Log in to the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). +2. Find your deployment on the home page in the **Hosted deployments** card and select **Manage** to access it directly. Or, select **Hosted deployments** to go to the **Deployments** page to view all of your deployments. 3. Under the **Features** tab, open the **Traffic filters** page. 4. Select **Create filter**. 5. Select **Private link endpoint**. @@ -248,8 +248,8 @@ The settings `xpack.fleet.agents.fleet_server.hosts` and `xpack.fleet.outputs` t You can edit a rule set name or to change the VPC endpoint ID. -1. Log in to the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body). -2. Find your deployment on the home page in the Elasticsearch Service card and select **Manage** to access it directly. Or, select **Hosted deployments** to go to the deployments page to view all of your deployments. +1. Log in to the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). +2. Find your deployment on the home page in the **Hosted deployments** card and select **Manage** to access it directly. Or, select **Hosted deployments** to go to the **Deployments** page to view all of your deployments. 3. Under the **Features** tab, open the **Traffic filters** page. 4. Find the rule set you want to edit. 5. Select the **Edit** icon. diff --git a/raw-migrated-files/cloud/cloud/ec-upgrade-deployment.md b/raw-migrated-files/cloud/cloud/ec-upgrade-deployment.md index 926a360a6..7c6102fa0 100644 --- a/raw-migrated-files/cloud/cloud/ec-upgrade-deployment.md +++ b/raw-migrated-files/cloud/cloud/ec-upgrade-deployment.md @@ -1,7 +1,7 @@ # Upgrade versions [ec-upgrade-deployment] ::::{important} -Beginning with Elastic Stack version 8.0, instructions for upgrading your Elasticsearch Service stack version can be found in [Upgrading on Elastic Cloud](../../../deploy-manage/upgrade/deployment-or-cluster.md). The following instructions apply for upgrading to Elastic Stack versions 7.x and previous. +Beginning with Elastic Stack version 8.0, instructions for upgrading {{ech}} deployments can be found in [Upgrading on Elastic Cloud](../../../deploy-manage/upgrade/deployment-or-cluster.md). The following instructions apply for upgrading to Elastic Stack versions 7.x and previous. :::: @@ -39,12 +39,12 @@ To successfully replace and override a plugin which is being upgraded, the `name ## Perform the upgrade [ec_perform_the_upgrade] -To upgrade a cluster in Elasticsearch Service: +To upgrade a cluster in {{ech}}: -1. Log in to the [Elasticsearch Service Console](https://cloud.elastic.co?page=docs&placement=docs-body). -2. Find your deployment on the home page in the Elasticsearch Service card and select **Manage** to access it directly. Or, select **Hosted deployments** to go to the deployments page to view all of your deployments. +1. Log in to the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). +2. 
Find your deployment on the home page in the **Hosted deployments** card and select **Manage** to access it directly. Or, select **Hosted deployments** to go to the **Deployments** page to view all of your deployments. - On the deployments page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. + On the **Deployments** page you can narrow your deployments by name, ID, or choose from several other filters. To customize your view, use a combination of filters, or change the format from a grid to a list. 3. In the **Deployment version** section, select **Upgrade**. 4. Select a new version. @@ -56,7 +56,7 @@ To upgrade a cluster in Elasticsearch Service: 7. If you are upgrading to version 6.6 and earlier, major upgrades require a full cluster restart to complete the upgrade process. 8. If you had Kibana enabled, the UI will prompt you to also upgrade Kibana. The Kibana upgrade takes place separately from the Elasticsearch version upgrade and needs to be triggered manually: - 1. Find your deployment on the home page in the Elasticsearch Service card and select **Manage** to access it directly. Or, select **Hosted deployments** to go to the deployments page to view all of your deployments. + 1. Find your deployment on the home page in the **Hosted deployments** card and select **Manage** to access it directly. Or, select **Hosted deployments** to go to the **Deployments** page to view all of your deployments. 2. From your deployment menu, select **Kibana**. 3. If the button is available, select **Upgrade Kibana**. If the button is not available, Kibana does not need to be upgraded further. 4. Confirm the upgrade. diff --git a/raw-migrated-files/docs-content/serverless/elasticsearch-dev-tools.md b/raw-migrated-files/docs-content/serverless/elasticsearch-dev-tools.md deleted file mode 100644 index f7fa379f5..000000000 --- a/raw-migrated-files/docs-content/serverless/elasticsearch-dev-tools.md +++ /dev/null @@ -1,7 +0,0 @@ -# Developer tools [elasticsearch-dev-tools] - -A number of developer tools are available in your project’s UI under the **Dev Tools** section. - -* [Console](https://www.elastic.co/guide/en/serverless/current/devtools-run-api-requests-in-the-console.html): Make API calls to your {{es}} instance using the Query DSL and view the responses. -* [Search Profiler](https://www.elastic.co/guide/en/serverless/current/devtools-profile-queries-and-aggregations.html): Inspect and analyze your search queries to identify performance bottlenecks. -* [Grok Debugger](https://www.elastic.co/guide/en/serverless/current/devtools-debug-grok-expressions.html): Build and debug grok patterns before you use them in your data processing pipelines. 
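As a quick illustration of the kind of Query DSL request you can run from Console, expressed here as an equivalent `curl` call — the endpoint, API key, index name, and query are placeholders:

```sh
# Run a simple match query against an index (illustrative values)
curl -XPOST "${ES_URL}/my-index/_search" \
  -H "Authorization: ApiKey ${API_KEY}" \
  -H "Content-Type: application/json" \
  -d '
{
  "query": {
    "match": { "title": "elasticsearch" }
  }
}
'
```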
diff --git a/raw-migrated-files/docs-content/serverless/elasticsearch-differences.md b/raw-migrated-files/docs-content/serverless/elasticsearch-differences.md index 20480732c..25035f5cb 100644 --- a/raw-migrated-files/docs-content/serverless/elasticsearch-differences.md +++ b/raw-migrated-files/docs-content/serverless/elasticsearch-differences.md @@ -147,7 +147,7 @@ The following features are planned for future support in all {{serverless-full}} The following features are not available in {{es-serverless}} and are not planned for future support: * [Custom plugins and bundles](/deploy-manage/deploy/elastic-cloud/upload-custom-plugins-bundles.md) -* [{{es}} for Apache Hadoop](asciidocalypse://docs/elasticsearch-hadoop/docs/reference/ingestion-tools/elasticsearch-hadoop/elasticsearch-for-apache-hadoop.md) +* [{{es}} for Apache Hadoop](asciidocalypse://docs/elasticsearch-hadoop/docs/reference/elasticsearch-for-apache-hadoop.md) * [Scripted metric aggregations](asciidocalypse://docs/elasticsearch/docs/reference/data-analysis/aggregations/search-aggregations-metrics-scripted-metric-aggregation.md) * Managed web crawler: You can use the [self-managed web crawler](https://github.com/elastic/crawler) instead. * Managed Search connectors: You can use [self-managed Search connectors](asciidocalypse://docs/elasticsearch/docs/reference/ingestion-tools/search-connectors/self-managed-connectors.md) instead. diff --git a/raw-migrated-files/docs-content/serverless/elasticsearch-ingest-data-through-api.md b/raw-migrated-files/docs-content/serverless/elasticsearch-ingest-data-through-api.md index 4e7ec0ace..be1fb2220 100644 --- a/raw-migrated-files/docs-content/serverless/elasticsearch-ingest-data-through-api.md +++ b/raw-migrated-files/docs-content/serverless/elasticsearch-ingest-data-through-api.md @@ -1,6 +1,6 @@ # Ingest data through API [elasticsearch-ingest-data-through-api] -The {{es}} APIs enable you to ingest data through code. You can use the APIs of one of the [language clients](../../../solutions/search/site-or-app/clients.md) or the {{es}} HTTP APIs. The examples on this page use the HTTP APIs to demonstrate how ingesting works in {{es}} through APIs. If you want to ingest timestamped data or have a more complex ingestion use case, check out [Beats](asciidocalypse://docs/beats/docs/reference/ingestion-tools/index.md) or [Logstash](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/index.md). +The {{es}} APIs enable you to ingest data through code. You can use the APIs of one of the [language clients](../../../solutions/search/site-or-app/clients.md) or the {{es}} HTTP APIs. The examples on this page use the HTTP APIs to demonstrate how ingesting works in {{es}} through APIs. If you want to ingest timestamped data or have a more complex ingestion use case, check out [Beats](asciidocalypse://docs/beats/docs/reference/index.md) or [Logstash](asciidocalypse://docs/logstash/docs/reference/index.md). 
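As a starting point before moving on to the bulk API section below, here is a minimal sketch of indexing a single document over the HTTP API — the endpoint, API key, index name, and document fields are placeholders:

```sh
# Index one document into an example index (illustrative values)
curl -XPOST "${ES_URL}/books/_doc" \
  -H "Authorization: ApiKey ${API_KEY}" \
  -H "Content-Type: application/json" \
  -d '
{
  "name": "Snow Crash",
  "author": "Neal Stephenson",
  "release_date": "1992-06-01"
}
'
```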
## Using the bulk API [elasticsearch-ingest-data-through-api-using-the-bulk-api] diff --git a/raw-migrated-files/docs-content/serverless/elasticsearch-manage-project.md b/raw-migrated-files/docs-content/serverless/elasticsearch-manage-project.md deleted file mode 100644 index 87e7aa231..000000000 --- a/raw-migrated-files/docs-content/serverless/elasticsearch-manage-project.md +++ /dev/null @@ -1,84 +0,0 @@ ---- -navigation_title: "Performance and general settings" ---- - -# Manage performance and general settings [elasticsearch-manage-project] - - -{{serverless-full}} projects are fully managed and automatically scaled by Elastic. You have the option of {{es}}, {{observability}}, or {{elastic-sec}} for your project. - -Your project’s performance and general data retention are controlled by the **Search AI Lake settings**. To manage these settings: - -1. Navigate to [cloud.elastic.co](https://cloud.elastic.co/). -2. Log in to your Elastic Cloud account. -3. Select your project from the **Serverless projects** panel and click **Manage**. - - -## Search AI Lake settings [elasticsearch-manage-project-search-ai-lake-settings] - -Once ingested, your data is stored in cost-efficient, general storage. A cache layer is available on top of the general storage for recent and frequently queried data that provides faster search speed. Data in this cache layer is considered **search-ready**. - -Together, these data storage layers form your project’s **Search AI Lake**. - -The total volume of search-ready data is the sum of the following: - -1. The volume of non-time series project data -2. The volume of time series project data included in the Search Boost Window - -::::{note} -Time series data refers to any document in standard indices or data streams that includes the `@timestamp` field. This field must be present for data to be subject to the Search Boost Window setting. - -:::: - - -Each project type offers different settings that let you adjust the performance and volume of search-ready data, as well as the features available in your projects. - -$$$elasticsearch-manage-project-search-power-settings$$$ - -| Setting | Description | Available in | -| --- | --- | --- | -| **Search Power** | Search Power controls the speed of searches against your data. With Search Power, you can improve search performance by adding more resources for querying, or you can reduce provisioned resources to cut costs. Choose from three Search Power settings:

**On-demand:** Autoscales based on data and search load, with a lower minimum baseline for resource use. This flexibility results in more variable query latency and reduced maximum throughput.

**Performant:** Delivers consistently low latency and autoscales to accommodate moderately high query throughput.

**High-throughput:** Optimized for high-throughput scenarios, autoscaling to maintain query latency even at very high query volumes.
| [![Elasticsearch](../../../images/serverless-es-badge.svg "")](../../../solutions/search.md) | -| **Search Boost Window** | Non-time series data is always considered search-ready. The **Search Boost Window** determines the volume of time series project data that will be considered search-ready.

Increasing the window results in a bigger portion of time series project data included in the total search-ready data volume.
| [![Elasticsearch](../../../images/serverless-es-badge.svg "")](../../../solutions/search.md) | -| **Data Retention** | Data retention policies determine how long your project data is retained.

You can specify different retention periods for specific data streams in your project.
| [![Elasticsearch](../../../images/serverless-es-badge.svg "")](../../../solutions/search.md)[![Observability](../../../images/serverless-obs-badge.svg "")](../../../solutions/observability.md)[![Security](../../../images/serverless-sec-badge.svg "")](../../../solutions/security/elastic-security-serverless.md) | -| | **Maximum data retention period**

When enabled, this setting determines the maximum length of time that data can be retained in any data streams of this project.

Editing this setting replaces the data retention set for all data streams of the project that have a longer data retention defined. Data older than the new maximum retention period that you set is permanently deleted.
| [![Security](../../../images/serverless-sec-badge.svg "")](../../../solutions/security/elastic-security-serverless.md) | -| | **Default data retention period**

When enabled, this setting determines the default retention period that is automatically applied to all data streams in your project that do not have a custom retention period already set.
| [![Security](../../../images/serverless-sec-badge.svg "")](../../../solutions/security/elastic-security-serverless.md) | -| **Project features** | Controls [feature tiers and add-on options](../../../deploy-manage/deploy/elastic-cloud/project-settings.md#project-features-add-ons) for your {{elastic-sec}} project. | [![Security](../../../images/serverless-sec-badge.svg "")](../../../solutions/security/elastic-security-serverless.md) | - - -## Project features and add-ons [project-features-add-ons] - -[![Security](../../../images/serverless-sec-badge.svg "")](../../../solutions/security/elastic-security-serverless.md) For {{elastic-sec}} projects, edit the **Project features** to select a feature tier and enable add-on options for specific use cases. - -| Feature tier | Description and add-ons | -| --- | --- | -| **Security Analytics Essentials** | Standard security analytics, detections, investigations, and collaborations. Allows these add-ons:

* **Endpoint Protection Essentials**: Endpoint protections with {{elastic-defend}}.
* **Cloud Protection Essentials**: Cloud native security features.
| -| **Security Analytics Complete** | Everything in **Security Analytics Essentials** plus advanced features such as entity analytics, threat intelligence, and more. Allows these add-ons:

* **Endpoint Protection Complete**: Everything in **Endpoint Protection Essentials** plus advanced endpoint detection and response features.
* **Cloud Protection Complete**: Everything in **Cloud Protection Essentials** plus advanced cloud security features.
| - - -### Downgrading the feature tier [elasticsearch-manage-project-downgrading-the-feature-tier] - -When you downgrade your Security project features selection from **Security Analytics Complete** to **Security Analytics Essentials**, the following features become unavailable: - -* All Entity Analytics features -* The ability to use certain entity analytics-related integration packages, such as: - - * Data Exfiltration detection - * Lateral Movement detection - * Living off the Land Attack detection - -* Intelligence Indicators page -* External rule action connectors -* Case connectors -* Endpoint response actions history -* Endpoint host isolation exceptions -* AI Assistant -* Attack discovery - -And, the following data may be permanently deleted: - -* AI Assistant conversation history -* AI Assistant settings -* Entity Analytics user and host risk scores -* Entity Analytics asset criticality information -* Detection rule external connector settings -* Detection rule response action settings diff --git a/raw-migrated-files/docs-content/serverless/general-billing-stop-project.md b/raw-migrated-files/docs-content/serverless/general-billing-stop-project.md index 69e79e7fa..f6a3ac6c1 100644 --- a/raw-migrated-files/docs-content/serverless/general-billing-stop-project.md +++ b/raw-migrated-files/docs-content/serverless/general-billing-stop-project.md @@ -9,6 +9,6 @@ All data is lost. Billing for usage is by the hour and any outstanding charges f To stop being charged for a project: -1. Log in to the [{{ess-console-name}}](https://cloud.elastic.co?page=docs&placement=docs-body). +1. Log in to the [{{ecloud}} Console](https://cloud.elastic.co?page=docs&placement=docs-body). 2. Find your project on the home page in the **Serverless Projects** card and select **Manage** to access it directly. Or, select **Serverless Projects** to go to the projects page to view all of your projects. 3. Select **Actions**, then select **Delete project** and confirm the deletion. diff --git a/raw-migrated-files/docs-content/serverless/general-serverless-status.md b/raw-migrated-files/docs-content/serverless/general-serverless-status.md deleted file mode 100644 index 69de3b0a7..000000000 --- a/raw-migrated-files/docs-content/serverless/general-serverless-status.md +++ /dev/null @@ -1,23 +0,0 @@ -# Monitor serverless status [general-serverless-status] - -Serverless projects run on cloud platforms, which may undergo changes in availability. When availability changes, Elastic makes sure to provide you with a current service status. - -To check current and past service availability, go to the Elastic [service status](https://status.elastic.co/?section=serverless) page. - - -## Subscribe to updates [general-serverless-status-subscribe-to-updates] - -You can be notified about changes to the service status automatically. - -To receive service status updates: - -1. Go to the Elastic [service status](https://status.elastic.co/?section=serverless) page. -2. Select **SUBSCRIBE TO UPDATES**. -3. You can be notified in the following ways: - - * Email - * Slack - * Atom or RSS feeds - - -After you subscribe, you’ll be notified whenever a service status update is posted. 
diff --git a/raw-migrated-files/docs-content/serverless/general-sign-up-trial.md b/raw-migrated-files/docs-content/serverless/general-sign-up-trial.md index be29442a6..f0dd973ee 100644 --- a/raw-migrated-files/docs-content/serverless/general-sign-up-trial.md +++ b/raw-migrated-files/docs-content/serverless/general-sign-up-trial.md @@ -1,6 +1,6 @@ # Sign up for Elastic Cloud [general-sign-up-trial] -The following page provides information on how to sign up for an Elastic Cloud Serverless account, for information on how to sign up for hosted deployments, see [Elasticsearch Service - How do i sign up?](/deploy-manage/deploy/elastic-cloud/cloud-hosted.md). +The following page provides information on how to sign up for an Elastic Cloud Serverless account, for information on how to sign up for hosted deployments, see [{{ech}} - How do i sign up?](/deploy-manage/deploy/elastic-cloud/cloud-hosted.md). ## Trial features [general-sign-up-trial-what-is-included-in-my-trial] @@ -11,7 +11,7 @@ Your free 14-day trial includes: A deployment lets you explore Elastic solutions for Search, Observability, and Security. Trial deployments run on the latest version of the Elastic Stack. They includes 8 GB of RAM spread out over two availability zones, and enough storage space to get you started. If you’re looking to evaluate a smaller workload, you can scale down your trial deployment. Each deployment includes Elastic features such as Maps, SIEM, machine learning, advanced security, and much more. You have some sample data sets to play with and tutorials that describe how to add your own data. -To learn more about Elastic Cloud Hosted, check our [Elasticsearch Service documentation](/deploy-manage/deploy/elastic-cloud/cloud-hosted.md). +To learn more about Elastic Cloud Hosted, check our [{{ech}} documentation](/deploy-manage/deploy/elastic-cloud/cloud-hosted.md). **One serverless project** @@ -41,7 +41,7 @@ During the free 14 day trial, Elastic provides access to one hosted deployment a * Machine learning nodes are available up to 4GB RAM * Custom {{es}} plugins are not enabled -To learn more about Elastic Cloud Hosted, check our [Elasticsearch Service documentation](/deploy-manage/deploy/elastic-cloud/cloud-hosted.md). +To learn more about Elastic Cloud Hosted, check our [{{ech}} documentation](/deploy-manage/deploy/elastic-cloud/cloud-hosted.md). **Serverless projects** diff --git a/raw-migrated-files/docs-content/serverless/intro.md b/raw-migrated-files/docs-content/serverless/intro.md index 64f49f4e5..25f770079 100644 --- a/raw-migrated-files/docs-content/serverless/intro.md +++ b/raw-migrated-files/docs-content/serverless/intro.md @@ -1,35 +1,5 @@ # Elastic Cloud Serverless [intro] -{{serverless-full}} is a fully managed solution that allows you to deploy and use Elastic for your use cases without managing the underlying infrastructure. It represents a shift in how you interact with {{es}} - instead of managing clusters, nodes, data tiers, and scaling, you create **serverless projects** that are fully managed and automatically scaled by Elastic. This abstraction of infrastructure decisions allows you to focus solely on gaining value and insight from your data. - -{{serverless-full}} automatically provisions, manages, and scales your {{es}} resources based on your actual usage. Unlike traditional deployments where you need to predict and provision resources in advance, serverless adapts to your workload in real-time, ensuring optimal performance while eliminating the need for manual capacity planning. 
- -Serverless projects use the core components of the {{stack}}, such as {{es}} and {{kib}}, and are based on an architecture that decouples compute and storage. Search and indexing operations are separated, which offers high flexibility for scaling your workloads while ensuring a high level of performance. - -Elastic provides three serverless solutions available on {{ecloud}}: - -* **/solutions/search.md[{{es-serverless}}]**: Build powerful applications and search experiences using a rich ecosystem of vector search capabilities, APIs, and libraries. -* **/solutions/observability.md[{{obs-serverless}}]**: Monitor your own platforms and services using powerful machine learning and analytics tools with your logs, metrics, traces, and APM data. -* **/solutions/security/elastic-security-serverless.md[{{sec-serverless}}]**: Detect, investigate, and respond to threats with SIEM, endpoint protection, and AI-powered analytics capabilities. - -[Learn more about {{serverless-full}} in our blog](https://www.elastic.co/blog/elastic-cloud-serverless). - - -## Benefits of serverless projects [_benefits_of_serverless_projects] - -**Management free.** Elastic manages the underlying Elastic cluster, so you can focus on your data. With serverless projects, Elastic is responsible for automatic upgrades, data backups, and business continuity. - -**Autoscaled.** To meet your performance requirements, the system automatically adjusts to your workloads. For example, when you have a short time spike on the data you ingest, more resources are allocated for that period of time. When the spike is over, the system uses less resources, without any action on your end. - -**Optimized data storage.** Your data is stored in cost-efficient, general storage. A cache layer is available on top of the general storage for recent and frequently queried data that provides faster search speed. The size of the cache layer and the volume of data it holds depend on [settings](../../../deploy-manage/deploy/elastic-cloud/project-settings.md) that you can configure for each project. - -**Dedicated experiences.** All serverless solutions are built on the Elastic Search Platform and include the core capabilities of the Elastic Stack. They also each offer a distinct experience and specific capabilities that help you focus on your data, goals, and use cases. - -**Pay per usage.** Each serverless project type includes product-specific and usage-based pricing. - -**Data and performance control**. Control your project data and query performance against your project data. * Data. Choose the data you want to ingest and the method to ingest it. By default, data is stored indefinitely in your project, and you define the retention settings for your data streams. * Performance. For granular control over costs and query performance against your project data, serverless projects come with a set of predefined settings you can edit. - - ## Differences between serverless projects and hosted deployments on {{ecloud}} [general-what-is-serverless-elastic-differences-between-serverless-projects-and-hosted-deployments-on-ecloud] You can run [hosted deployments](/deploy-manage/deploy/elastic-cloud/cloud-hosted.md) of the {{stack}} on {{ecloud}}. These hosted deployments provide more provisioning and advanced configuration options. @@ -48,39 +18,3 @@ You can run [hosted deployments](/deploy-manage/deploy/elastic-cloud/cloud-hoste | **Backups** | Projects automatically backed up by Elastic. | Your responsibility with Snapshot & Restore. 
| | **Data retention** | Editable on data streams. | Index Lifecycle Management. | - -## Answers to common serverless questions [general-what-is-serverless-elastic-answers-to-common-serverless-questions] - -**Is there migration support between hosted deployments and serverless projects?** - -Migration paths between hosted deployments and serverless projects are currently unsupported. - -**How can I move data to or from serverless projects?** - -We are working on data migration tools! In the interim, [use Logstash](asciidocalypse://docs/logstash/docs/reference/ingestion-tools/logstash/index.md) with Elasticsearch input and output plugins to move data to and from serverless projects. - -**How does serverless ensure compatibility between software versions?** - -Connections and configurations are unaffected by upgrades. To ensure compatibility between software versions, quality testing and API versioning are used. - -**Can I convert a serverless project into a hosted deployment, or a hosted deployment into a serverless project?** - -Projects and deployments are based on different architectures, and you are unable to convert. - -**Can I convert a serverless project into a project of a different type?** - -You are unable to convert projects into different project types, but you can create as many projects as you’d like. You will be charged only for your usage. - -**How can I create serverless service accounts?** - -Create API keys for service accounts in your serverless projects. Options to automate the creation of API keys with tools such as Terraform will be available in the future. - -To raise a Support case with Elastic, raise a case for your subscription the same way you do today. In the body of the case, make sure to mention you are working in serverless to ensure we can provide the appropriate support. - -**Where can I learn about pricing for serverless?** - -See serverless pricing information for [Search](https://www.elastic.co/pricing/serverless-search), [Observability](https://www.elastic.co/pricing/serverless-observability), and [Security](https://www.elastic.co/pricing/serverless-security). - -**Can I request backups or restores for my projects?** - -It is not currently possible to request backups or restores for projects, but we are working on data migration tools to better support this. diff --git a/raw-migrated-files/docs-content/serverless/observability-apm-get-started.md b/raw-migrated-files/docs-content/serverless/observability-apm-get-started.md index 49377f9fa..dae217945 100644 --- a/raw-migrated-files/docs-content/serverless/observability-apm-get-started.md +++ b/raw-migrated-files/docs-content/serverless/observability-apm-get-started.md @@ -81,14 +81,14 @@ To send APM data to Elastic, you must install an APM agent and configure it to s Instrumentation is the process of extending your application’s code to report trace data to Elastic APM. Go applications must be instrumented manually at the source code level. To instrument your applications, use one of the following approaches: - * [Built-in instrumentation modules](asciidocalypse://docs/apm-agent-go/docs/reference/ingestion-tools/apm-agent-go/builtin-modules.md). - * [Custom instrumentation](asciidocalypse://docs/apm-agent-go/docs/reference/ingestion-tools/apm-agent-go/custom-instrumentation.md) and context propagation with the Go Agent API. + * [Built-in instrumentation modules](asciidocalypse://docs/apm-agent-go/docs/reference/builtin-modules.md). 
+ * [Custom instrumentation](asciidocalypse://docs/apm-agent-go/docs/reference/custom-instrumentation.md) and context propagation with the Go Agent API. **Learn more in the {{apm-agent}} reference** - * [Supported technologies](asciidocalypse://docs/apm-agent-go/docs/reference/ingestion-tools/apm-agent-go/supported-technologies.md) - * [Advanced configuration](asciidocalypse://docs/apm-agent-go/docs/reference/ingestion-tools/apm-agent-go/configuration.md) - * [Detailed guide to instrumenting Go source code](asciidocalypse://docs/apm-agent-go/docs/reference/ingestion-tools/apm-agent-go/set-up-apm-go-agent.md) + * [Supported technologies](asciidocalypse://docs/apm-agent-go/docs/reference/supported-technologies.md) + * [Advanced configuration](asciidocalypse://docs/apm-agent-go/docs/reference/configuration.md) + * [Detailed guide to instrumenting Go source code](asciidocalypse://docs/apm-agent-go/docs/reference/set-up-apm-go-agent.md)
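As a hedged sketch of the configuration side, the Go agent — like the other APM agents — reads its connection settings from environment variables at startup; the server URL, token, and service name below are placeholders:

```sh
# Placeholders - use the APM server URL and secret token from your own setup
export ELASTIC_APM_SERVER_URL="https://my-deployment.apm.us-east-1.aws.cloud.es.io:443"
export ELASTIC_APM_SECRET_TOKEN="your-secret-token"
export ELASTIC_APM_SERVICE_NAME="my-go-service"

# Start the instrumented application; the agent picks these up automatically
./my-go-service
```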