diff --git a/develop-docs/application-architecture/dynamic-sampling/fidelity-and-biases.mdx b/develop-docs/application-architecture/dynamic-sampling/fidelity-and-biases.mdx
index 703795d0f905c..f2d450eebda67 100644
--- a/develop-docs/application-architecture/dynamic-sampling/fidelity-and-biases.mdx
+++ b/develop-docs/application-architecture/dynamic-sampling/fidelity-and-biases.mdx
@@ -3,7 +3,7 @@ title: Fidelity and Biases
 sidebar_order: 2
 ---
-Dynamic Sampling is a feature that allows Sentry to automatically adjust the amount of data retained based on the value of the data. This is technically achieved by applying a **sample rate** to every event, which is determined by a **set of rules** that are evaluated for each event.
+Dynamic Sampling allows Sentry to automatically adjust the amount of data retained based on how valuable the data is to the user. This is technically achieved by applying a **sample rate** to every event, which is determined by a **set of rules** that are evaluated for each event.
@@ -13,19 +13,29 @@ A sample rate is a number in the interval `[0.0, 1.0]` that will determine the l
 ## The Concept of Fidelity
-At the core of Dynamic Sampling there is the concept of **fidelity**, which translates to an overall **target sample rate** that should be applied across all transactions of an organization.
+At the core of Dynamic Sampling lies the concept of **fidelity**, which translates to an overall **target sample rate** that should be applied across all spans and transactions of an organization.
-The **determination** of the target sample rate is done dynamically by analyzing the volume of data received by Sentry in a specific time window (configurable [here](https://github.com/getsentry/sentry/blob/f3a2220ccd3a2118a1255a4c96a9ec2010dab0d8/src/sentry/options/defaults.py#L690)) and then calling the `get_sampling_tier_for_volume` function (defined [here](https://github.com/getsentry/sentry/blob/f3a2220ccd3a2118a1255a4c96a9ec2010dab0d8/src/sentry/quotas/base.py#L481)) which takes as input the volume in the time window and returns a sampling tier in the form of (`volume`, `sample_rate`).
+### Dynamic Sampling Modes
+There are two available modes that govern the target sample rates for Dynamic Sampling. Both the mode and the target sample rates are configured using the organization options `sentry:sampling_mode` and `sentry:target_sample_rate`, as well as the project option `sentry:target_sample_rate`.
-_The `get_sampling_tier_for_volume`, like the `get_blend_sample_rate` function (defined [here](https://github.com/getsentry/sentry/blob/f3a2220ccd3a2118a1255a4c96a9ec2010dab0d8/src/sentry/quotas/base.py#L466)), is a function that must be overridden by the user to customize the behavior of Dynamic Sampling._
+- **Automatic Mode** dynamically manages the target sample rate for each project based on the target sample rate for the organization, prioritizing lower-volume projects to increase visibility. Automatic Mode is active if the organization option `sentry:sampling_mode` is set to `organization`. The target sample rate for the organization is stored in the **organization** option `sentry:target_sample_rate`, and project target sample rates are calculated based on the organization target sample rate.
+- **Manual Mode** allows the user to set static target sample rates on a per-project basis that serve as the baseline sample rate before applying the dynamic biases outlined below. Target sample rates are not adjusted by the system. Manual Mode is active if the organization option `sentry:sampling_mode` is set to `project`. The target sample rates for projects are stored in the **project** option `sentry:target_sample_rate`.
-Within this target sample rate, Dynamic Sampling can create a **bias toward more meaningful data**. This is achieved by constantly updating and communicating special rules to Relay, via a project configuration, which then applies targeted sampling to every event.
+All functionality defaults to Automatic Mode if the option `sentry:sampling_mode` is not set, and all target sample rates default to `1.0` (100%) if the option `sentry:target_sample_rate` is not set.
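+
+As a minimal sketch (not Sentry's actual API; the helper name and option-store shapes are hypothetical), the two options could resolve to an effective target sample rate like this:
+
+```python
+# Hypothetical helper for illustration only; option names, values, and
+# defaults follow the description above, everything else is made up.
+DEFAULT_TARGET_SAMPLE_RATE = 1.0
+
+def resolve_target_sample_rate(org_options: dict, project_options: dict) -> float:
+    mode = org_options.get("sentry:sampling_mode", "organization")  # defaults to Automatic Mode
+    if mode == "project":
+        # Manual Mode: a static per-project rate that the system never adjusts.
+        return project_options.get("sentry:target_sample_rate", DEFAULT_TARGET_SAMPLE_RATE)
+    # Automatic Mode: start from the organization-wide target; per-project
+    # rates are then derived from it (see "Prioritize Low Volume Projects").
+    return org_options.get("sentry:target_sample_rate", DEFAULT_TARGET_SAMPLE_RATE)
+```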
+
+When the user switches between modes, target sample rates are carried over unless explicitly changed. For example, if the user switches from Automatic Mode to Manual Mode, the sample rates calculated during Automatic Mode are persisted in the project option `sentry:target_sample_rate`. Conversely, if the user switches from Manual Mode to Automatic Mode, the project target sample rates are recalculated based on the overall organization target sample rate.
+
+The [sample rates are periodically recalibrated](https://github.com/getsentry/sentry/blob/9b98be6b97323a78809a829e06dcbef26a16365c/src/sentry/dynamic_sampling/rules/biases/recalibration_bias.py#L11-L44) to ensure that the overall target sample rate is met. This recalibration is done on a project level or organization level, depending on the dynamic sampling mode. Within the target sample rate, Dynamic Sampling **biases towards more meaningful data**. This is achieved by constantly updating and communicating special rules to Relay, via a project configuration, which then applies targeted sampling to every event.
 ![Concept of Fidelity](./images/fidelityAndPriorities.png)
+
+For organizations under AM2, Dynamic Sampling uses a [sliding window function](https://github.com/getsentry/sentry/blob/cc8cc38c8a108719d068e5622b24a8d0c744e84c/src/sentry/dynamic_sampling/tasks/sliding_window_org.py#L37-L61) over the incoming volume to calculate the target sample rate.
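+
+Conceptually, the sliding-window calculation maps the volume observed in the window to a sampling tier. A rough sketch with made-up tier thresholds (the real mapping lives behind `get_sampling_tier_for_volume`, referenced later on this page):
+
+```python
+# Illustrative tiers only: (maximum volume in the window, target sample rate).
+SAMPLING_TIERS = [
+    (100_000, 1.0),
+    (1_000_000, 0.5),
+    (10_000_000, 0.1),
+]
+
+def target_rate_for_window(volume_in_window: int) -> float:
+    for max_volume, rate in SAMPLING_TIERS:
+        if volume_in_window <= max_volume:
+            return rate
+    return 0.01  # illustrative floor for very high volumes
+```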
+
+
 ### Approximate Fidelity
-It is important to note that fidelity only determines an **approximate target sample rate**, so there is flexibility in creating exact sample rates. The ingestion pipeline, composed on [Relay](https://docs.sentry.io/product/relay/) and other components, does not have the infrastructure to track volume, so it cannot create an actual weighted distribution within the target sample rate.
+It is important to note that fidelity only determines an **approximate target sample rate**, so there is flexibility in the exact sample rates that are applied. The ingestion pipeline, composed of [Relay](https://docs.sentry.io/product/relay/) and other components, does not have the infrastructure to track volume, so it cannot create an actual weighted distribution within the target sample rate.
 Instead, the Sentry backend **computes a set of rules** whose goal is to cooperatively achieve the target sample rate. Determining when and how to set these rules is part of the Dynamic Sampling infrastructure.
@@ -41,11 +51,11 @@ Sentry supports **two fundamentally different types of sampling**. While this is
 ### Trace Sampling
-A trace is a **collection of transactions that are related to each other**. For example a trace could contain transactions started from your frontend that are then generating transactions in your backend.
+A trace is a **collection of events that are related to each other**. For example, a trace could contain events started from your frontend that then generate events in your backend.
-Trace sampling ensures that **either all transactions of a trace are sampled, or none**. That is, these rules **always yield the same sampling decision** for every transaction in the same trace. This requires the cooperation of SDKs and thus allows sampling only by `project`, `release`, `environment`, and `transaction` name.
+Trace sampling ensures that **either all events of a trace are sampled, or none**. That is, these rules **always yield the same sampling decision** for every event in the same trace. This requires the cooperation of SDKs and thus allows sampling only by `project`, `release`, `environment`, and `transaction` name.
-To achieve trace sampling, SDKs pass all fields that can be sampled by [Dynamic Sampling Context (DSC)](/sdk/performance/dynamic-sampling-context/) (defined [here](https://getsentry.github.io/relay/relay_sampling/dsc/struct.DynamicSamplingContext.html)) as they propagate traces. _This ensures that every transaction from the same trace comes with the same DSC._
+To achieve trace sampling, SDKs pass all fields that can be sampled by [Dynamic Sampling Context (DSC)](/sdk/performance/dynamic-sampling-context/) (defined [here](https://getsentry.github.io/relay/relay_sampling/dsc/struct.DynamicSamplingContext.html)) as they propagate traces. _This ensures that every event from the same trace comes with the same DSC._
 ![Trace Sampling](./images/traceSampling.png)
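+
+Because every event in a trace carries the same DSC, a trace-consistent verdict can be derived deterministically from the trace ID, so each event independently reaches the same decision. A minimal sketch of that idea (illustrative, not Relay's exact algorithm):
+
+```python
+import hashlib
+
+def keep_trace(trace_id: str, sample_rate: float) -> bool:
+    # Hash the trace ID into a deterministic number in [0.0, 1.0); all events
+    # of the same trace map to the same number, hence the same decision.
+    digest = hashlib.sha256(trace_id.encode("utf-8")).digest()
+    unit = int.from_bytes(digest[:8], "big") / 2**64
+    return unit < sample_rate
+```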
@@ -61,7 +71,7 @@ Transaction Sampling **does not guarantee complete traces** and instead **applie
 ## Biases for Sampling
-A bias is a set of one or more rules that are evaluated for each event. More specifically, when we define a bias, we want to achieve a specific objective, which **can be expressed as a set of rules**. To learn more about rules, check out the architecture page [here](/dynamic-sampling/architecture/).
+A bias is a set of one or more rules that are evaluated for each event. More specifically, when we define a bias, we want to achieve a specific objective, which **can be expressed as a set of rules**. You can learn more about rules on the architecture page [here](/dynamic-sampling/architecture/).
 Sentry has already defined a set of biases that are available to all customers. These biases have different goals, but they can be combined to express more complex semantics.
@@ -71,30 +81,19 @@ An example of how the UI looks is shown in the following screenshot (the content
 ![Biases in the UI](./images/biasesUI.png)
-### Deprioritize Health Checks
-This bias is used to de-prioritize transactions that are classified as health checks. The goal is to reduce the amount of data retained for health checks, since they are not very useful for debugging.
-In order to mark a transaction as a health check, we leverage a list of known health check endpoints, which is maintained by Sentry and updated regularly.
+### Prioritize New Releases
-```python
-HEALTH_CHECK_GLOBS = [
-    "*healthcheck*",
-    "*healthy*",
-    "*live*",
-    "*ready*",
-    "*heartbeat*",
-    "*/health",
-    "*/healthz",
-    # ...
-]
-```
+This bias is used to prioritize traces coming from a new release. The goal is to increase the sample rate in the time window that occurs between the creation of a release and its adoption by users. _The identification of a new release is done in the `event_manager` defined [here](https://github.com/getsentry/sentry/blob/43d7c41aee2b22ca9f51916afac40f6cbdcd2b15/src/sentry/event_manager.py#L739-L773)._
-The list of health check endpoints is available [here](https://github.com/getsentry/sentry/blob/4cb0d863de1ef8e3440153cb440eaca8025dee0d/src/sentry/dynamic_sampling/rules/biases/ignore_health_checks_bias.py#L14).
+Since the adoption of a release is not constant, we created a system of _decaying_ rules which can interpolate between two sample rates in a given time window with a given function (e.g. `linear`). The idea is that we want to reduce the sample rate over time, since the number of samples will increase as the release gets adopted by users.
-For deprioritizing health checks, we compute a new sample rate by dividing the base sample rate of the project by a factor, which is defined [here](https://github.com/getsentry/sentry/blob/master/src/sentry/dynamic_sampling/rules/utils.py#L13-L13).
+![Sample Rate and Adoption](./images/sampleRateAndAdoption.png)
-### Boost Dev Environments
+The latest release bias uses a decaying rule to interpolate between a starting sample rate and an ending sample rate over a time window that is statically defined for each platform (the list of times to adoption is defined [here](https://github.com/getsentry/sentry/blob/9b98be6b97323a78809a829e06dcbef26a16365c/src/sentry/dynamic_sampling/rules/helpers/time_to_adoptions.py#L25)). For example, Android has a larger time window than JavaScript because, on average, Android apps take more time to get adopted by users.
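+
+A minimal sketch of such a decaying rule, assuming linear interpolation over the platform's adoption window (names and signature are illustrative):
+
+```python
+from datetime import datetime, timedelta
+
+def decayed_sample_rate(
+    start_rate: float,           # boosted rate right after the release is created
+    end_rate: float,             # baseline rate once the release is adopted
+    release_created: datetime,
+    adoption_window: timedelta,  # statically defined per platform
+    now: datetime,
+) -> float:
+    # Linear decay: progress moves from 0.0 (release created) to 1.0 (window over).
+    progress = (now - release_created) / adoption_window
+    progress = min(max(progress, 0.0), 1.0)
+    return start_rate + (end_rate - start_rate) * progress
+```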
+
+### Prioritize Dev Environments
 This bias is used to prioritize traces coming from a development environment in order to increase the amount of data retained for such environments, since they are more likely to be useful for debugging.
@@ -115,34 +114,52 @@ The list of development environments is available [here](https://github.com/gets
 For prioritizing dev environments, we use a sample rate of `1.0` (100%), which results in all traces being sampled.
-### Boost New Releases
+### Prioritize Low Volume Projects
+
+This bias is only active in Automatic Mode (and not in Manual Mode). It applies to any incoming trace and is defined on a per-project basis.
+
+The algorithm used in this bias computes a new sample rate with the goal of prioritizing low-volume projects, which can be drowned out by high-volume projects. The mechanism used for prioritizing is similar to the low-volume transactions bias: given the sample rate of the organization and the counts of each project, it computes a new sample rate for each project, assuming an ideal distribution of the counts. The sample rate of the boost low volume projects bias is computed using an algorithm that leverages a dynamic sample rate, known as the target fidelity rate, obtained by measuring the incoming volume of transactions in a sliding time window. This rate is obtained by calling, at fixed intervals, the `get_sampling_tier_for_volume` function (defined [here](https://github.com/getsentry/sentry/blob/f3a2220ccd3a2118a1255a4c96a9ec2010dab0d8/src/sentry/quotas/base.py#L481)).
-This bias is used to prioritize traces that are coming from a new release. The goal is to increase the sample rate in the time window that occurs between the creation of a release and its adoption by users. _The identification of a new release is done in the `event_manager` defined [here](https://github.com/getsentry/sentry/blob/master/src/sentry/event_manager.py#L937-L937)._
-Since the adoption of a release is not constant, we created a system of _decaying_ rules which can interpolate between two sample rates in a given time window with a given function (e.g. `linear`). The idea being that we want to reduce the sample rate since the amount of samples will increase as the release gets adopted by users.
-![Sample Rate and Adoption](./images/sampleRateAndAdoption.png)
+### Prioritize Low Volume Transactions
+This bias is used to prioritize low-volume transactions that can be drowned out by high-volume transactions. The goal is to rebalance sample rates of the individual transactions so that low-volume transactions are more likely to have representative samples. The bias is of type trace, which means that the transaction considered for rebalancing will be the root transaction of the trace.
-The latest release bias uses a decaying rule to interpolate between a starting sample rate and an ending sample rate over a time window that is statically defined for each platform (the list of time to adoptions is define [here](https://github.com/getsentry/sentry/blob/master/src/sentry/dynamic_sampling/rules/helpers/time_to_adoptions.py#L26-L26). For example, Android has a bigger time window than Javascript because on average Android apps take more time to get adopted by users.
+Prioritization of low-volume transactions works slightly differently depending on the dynamic sampling mode:
+- In **Automatic Mode** (`sentry:sampling_mode` == `organization`), the output of the [boost_low_volume_projects](https://github.com/getsentry/sentry/blob/dee539472e999bf590cfc4e99b9b12981963defb/src/sentry/dynamic_sampling/tasks/boost_low_volume_transactions.py#L183) task is used as the base sample rate for the balancing algorithm.
+- In **Manual Mode** (`sentry:sampling_mode` == `project`), the project target sample rate is used as the base sample rate for the balancing algorithm.
-### Boost Low Volume Transactions
+In order to rebalance transactions, the system retrieves the counts of the transactions for each project and calculates a new sample rate for each transaction, as sketched below.
-This bias is used to prioritize low-volume transactions that can be drowned out by high-volume transactions. The goal is to rebalance sample rates of the individual transactions so that low-volume transactions are more likely to have representative samples. The bias is of type trace, which means that the transaction considered for rebalancing will be the root transaction of the trace.
+
-In order to rebalance transactions, the system computes the counts of the transactions for each project and runs an algorithm that, given the sample rate of the organization and the counts of each transaction, computes a new sample rate for each transaction assuming an ideal distribution of the counts.
+The algorithms for boosting low volume events are run periodically (with cron jobs) with a sliding window to account for changes in the incoming volume.
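+
+A sketch of the rebalancing idea under these assumptions: the base rate defines an overall budget of kept events, and each transaction is pushed toward an equal (ideal) share of that budget. The function below is illustrative; the real algorithm differs in details such as how leftover budget is redistributed:
+
+```python
+def rebalance_transaction_rates(base_rate: float, counts: dict[str, int]) -> dict[str, float]:
+    # Overall budget of events we may keep under the base sample rate.
+    total = sum(counts.values())
+    budget = base_rate * total
+    # Ideal distribution: every transaction gets an equal share of the budget.
+    share = budget / len(counts)
+    # Low-volume transactions are boosted toward keeping `share` events each;
+    # rates are capped at 1.0 (redistributing the leftover budget is omitted).
+    return {name: min(1.0, share / count) for name, count in counts.items()}
+
+# Example: with a 10% base rate, a rare transaction is kept far more often.
+rates = rebalance_transaction_rates(0.1, {"checkout": 100, "home": 9_900})
+# -> {"checkout": 1.0, "home": ~0.05}
+```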
-### Boost Low Volume Projects
+
-This bias is the simplest one that can be defined. It applies to any incoming trace and is defined on a per-project basis.
+
+### Deprioritize Health Checks
-_The sample rate of the boost low volume projects bias is computed using an algorithm that leverages a dynamic sample rate obtained by measuring the incoming volume of transactions in a sliding time window, known as the target fidelity rate. This rate is obtained by calling, at fixed intervals, the `get_sampling_tier_for_volume` function (defined [here](https://github.com/getsentry/sentry/blob/f3a2220ccd3a2118a1255a4c96a9ec2010dab0d8/src/sentry/quotas/base.py#L481)), which given the volume in a time window, will determine the appropriate target fidelity rate for the entire organization._
+This bias is used to de-prioritize transactions that are classified as health checks. The goal is to reduce the amount of data retained for health checks, since they are not very useful for debugging.
-The algorithm used in this bias, computes a new sample rate with the goal of prioritizing low-volume projects, which can be drowned out by high-volume projects. The mechanism used for prioritizing is similar to the low-volume transactions bias in which given the sample rate of the organization and the counts of each project, it computes a new sample rate for each project, assuming an ideal distribution of the counts.
+In order to mark a transaction as a health check, we leverage a list of known health check endpoints, which is maintained by Sentry and updated regularly.
-
+```python
+HEALTH_CHECK_GLOBS = [
+    "*healthcheck*",
+    "*healthy*",
+    "*live*",
+    "*ready*",
+    "*heartbeat*",
+    "*/health",
+    "*/healthz",
+    # ...
+]
+```
+
+The list of health check endpoints is available [here](https://github.com/getsentry/sentry/blob/4cb0d863de1ef8e3440153cb440eaca8025dee0d/src/sentry/dynamic_sampling/rules/biases/ignore_health_checks_bias.py#L14).
-The algorithms for boosting low volume transactions and projects are run periodically (with cron jobs) with a sliding window to account for changes in the incoming volume.
+For deprioritizing health checks, we compute a new sample rate by dividing the base sample rate of the project by a factor, which is defined [here](https://github.com/getsentry/sentry/blob/master/src/sentry/dynamic_sampling/rules/utils.py#L13-L13).
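+
+Putting the two pieces together, a simplified sketch of this rule might look as follows (the factor value is made up here; `HEALTH_CHECK_GLOBS` is the list shown above, and the real factor is defined in the linked `utils.py`):
+
+```python
+from fnmatch import fnmatch
+
+HEALTH_CHECK_FACTOR = 5  # illustrative value only
+
+def health_check_sample_rate(transaction_name: str, base_rate: float) -> float:
+    # De-prioritize: divide the project's base sample rate by the factor
+    # whenever the transaction name matches a known health check pattern.
+    if any(fnmatch(transaction_name.lower(), glob) for glob in HEALTH_CHECK_GLOBS):
+        return base_rate / HEALTH_CHECK_FACTOR
+    return base_rate
+```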
-
 If you want to learn more about the architecture behind Dynamic Sampling, continue to the [next page](/dynamic-sampling/architecture/).
diff --git a/develop-docs/application-architecture/dynamic-sampling/the-big-picture.mdx b/develop-docs/application-architecture/dynamic-sampling/the-big-picture.mdx
index 6ddfaca1a38fd..00ba3b1203546 100644
--- a/develop-docs/application-architecture/dynamic-sampling/the-big-picture.mdx
+++ b/develop-docs/application-architecture/dynamic-sampling/the-big-picture.mdx
@@ -6,29 +6,38 @@ sidebar_order: 1
 ![Sequencing](./images/sequencing.png)
+
+
+
+Dynamic Sampling currently operates on either spans or transactions to measure data throughput. This is controlled by the feature flag `organizations:dynamic-sampling-spans` and is usually set to whatever the organization's subscription is metered by. In development, this currently defaults to transactions.
+The logic between the two data categories is identical, so most of this documentation is kept at a generic level, with important differences pointed out explicitly.
+
+
+
+
 ## Sequencing
 Dynamic Sampling occurs at the edge of our ingestion pipeline, precisely in [Relay](https://github.com/getsentry/relay).
-When transaction events arrive, in a simplified model, they go through the following steps (some of which won't apply if you self-host Sentry):
+When events arrive, in a simplified model, they go through the following steps:
-1. **Inbound data filters**: every transaction runs through inbound data filters as configured in project settings, such as legacy browsers or denied releases. Transactions dropped here do not count for quota and are not included in “total transactions” data.
-2. **Quota enforcement**: Sentry charges for all further transactions sent in, before events are passed on to dynamic sampling.
-3. **Metrics extraction**: after passing quotas, Sentry extracts metrics from the total incoming transactions. These metrics provide granular numbers for the performance and frequency of every application transaction.
-4. **Dynamic Sampling**: based on an internal set of rules, Relay determines a sample rate for every incoming transaction event. A random number generator finally decides whether this payload should be kept or dropped.
-5. **Rate limiting**: transactions that are sampled by Dynamic Sampling will be stored and indexed. To protect the infrastructure, internal rate limits apply at this point. Under normal operation, this **rate limit is never reached** since dynamic sampling already reduces the volume of stored events.
+1. **Inbound data filters**: every event runs through inbound data filters as configured in project settings, such as legacy browsers or denied releases. Events dropped here are not counted towards quota and are not included in "total events" data.
+2. **Quota enforcement**: Sentry charges for all further events sent in, before they are passed on to dynamic sampling.
+3. **Metrics extraction**: after passing quotas, Sentry extracts metrics from the total incoming events. These metrics provide granular numbers for the performance and frequency of every event.
+4. **Dynamic Sampling**: based on an internal set of rules, Relay determines a sample rate for every incoming event. A random number generator finally decides whether a payload should be kept or dropped.
+5. **Rate limiting**: events that are sampled by Dynamic Sampling will be stored and indexed. To protect the infrastructure, internal rate limits apply at this point. Under normal operation, this **rate limit is never reached** since dynamic sampling already reduces the volume of events stored.
-A client is sending 1000 transactions per second to Sentry:
-1. 100 transactions per second are from old browsers and get dropped through an inbound data filter.
-2. The remaining 900 transactions per second show up as total transactions in Sentry.
-3. Their current overall sample rate is at 20%, which statistically samples 180 transactions per second.
-4. Since this is above the 100/s limit, about 80 transactions per second are randomly dropped, and the rest is stored.
+A client is sending 1000 events per second to Sentry:
+1. 100 events per second are from old browsers and get dropped through an inbound data filter.
+2. The remaining 900 events per second show up as total events in Sentry.
+3. Their current overall sample rate is at 20%, which statistically samples 180 events per second.
+4. Since this is above the example's internal rate limit of 100 events per second, about 80 events per second are randomly dropped, and the rest is stored.
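+
+The same arithmetic as a tiny, self-contained sketch (numbers taken from the example above; the rate limit value is illustrative):
+
+```python
+def expected_stored_per_second(incoming: float, filtered: float,
+                               sample_rate: float, rate_limit: float) -> float:
+    total = incoming - filtered      # events counted as "total" (1000 - 100 = 900)
+    sampled = total * sample_rate    # kept by Dynamic Sampling on average (900 * 0.2 = 180)
+    return min(sampled, rate_limit)  # the internal rate limit caps storage (min(180, 100) = 100)
+
+assert expected_stored_per_second(1000, 100, 0.20, 100) == 100  # ~80/s randomly dropped
+```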
-## Rate Limiting and Total Transactions
+## Rate Limiting and Total Events
 The ingestion pipeline has two kinds of rate limits that behave differently compared to organizations without dynamic sampling:
@@ -37,49 +46,49 @@ The ingestion pipeline has two kinds of rate limits that behave differently com
-There is a dedicated rate limit for stored transactions after inbound filters and dynamic sampling. However, it does not affect total transactions since the fidelity decreases with higher total transaction volumes and this rate limit is not expected to trigger since Dynamic Sampling already reduces the stored transaction throughput.
+There is a dedicated rate limit for stored events after inbound filters and dynamic sampling. However, it does not affect total events: fidelity decreases with higher total event volumes, and this rate limit is not expected to trigger because Dynamic Sampling already reduces the stored event throughput.
 ## Rate Limiting and Trace Completeness
-Dynamic sampling ensures complete traces by retaining all transactions associated with a trace if the head transaction is preserved.
+Dynamic Sampling ensures complete traces by retaining all events associated with a trace if the head event is preserved.
-Despite dynamic sampling providing trace completeness, transactions or other items (errors, replays, ...) may still be missing from a trace when rate limiting drops one or more transactions. Rate limiting drops items without regard for the trace, making each decision independently and potentially resulting in broken traces.
+Despite dynamic sampling providing trace completeness, events or other items (errors, replays, ...) may still be missing from a trace when rate limiting drops one or more of them. Rate limiting drops items without regard for the trace, making each decision independently and potentially resulting in broken traces.
-For example, if there is a trace from `Project A` to `Project B` and `Project B` is subject to rate limiting or quota enforcement, transactions of `Project B` from the trace initiated by `Project A` are lost.
+For example, if there is a trace from `Project A` to `Project B` and `Project B` is subject to rate limiting or quota enforcement, events of `Project B` from the trace initiated by `Project A` are lost.
 ## Client Side Sampling and Dynamic Sampling
-Clients have their own [traces sample rate](https://docs.sentry.io/platforms/javascript/performance/#configure-the-sample-rate). The client sample rate is a number in the range `[0.0, 1.0]` (from 0% to 100%) that controls **how many transactions arrive at Sentry**. While documentation will generally suggest a sample rate of `1.0`, for some use cases it might be better to reduce it.
+Clients have their own [traces sample rate](https://docs.sentry.io/platforms/javascript/tracing/#configure). The client sample rate is a number in the range `[0.0, 1.0]` (from 0% to 100%) that controls **how many events arrive at Sentry**. While documentation will generally suggest a sample rate of `1.0`, for some use cases it might be better to reduce it.
-Dynamic Sampling further reduces how many transactions get stored internally. **While many-to-most graphs and numbers in Sentry are based on total transactions**, accessing spans and tags requires stored transactions. The sample rates apply on top of each other.
+Dynamic Sampling further reduces how many events get stored internally. **While most graphs and numbers in Sentry are based on metrics**, accessing spans and tags requires stored events. The sample rates apply on top of each other.
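+
+Because the rates multiply, a quick sketch of the compounding (the particular client/dynamic split is illustrative; only the product matters):
+
+```python
+client_sample_rate = 0.50   # configured in the SDK (illustrative value)
+dynamic_sample_rate = 0.30  # applied by Sentry on top (illustrative value)
+
+effective_rate = client_sample_rate * dynamic_sample_rate  # 0.15
+stored_events = int(100_000 * effective_rate)              # 100k sent -> 15k stored
+```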
-An example of client side sampling and Dynamic Sampling starting from 100k transactions which results in 15k stored transactions is shown below:
+An example of client-side sampling and Dynamic Sampling, starting from 100k events and resulting in 15k stored events, is shown below:
 ![Client and Dynamic Sampling](./images/clientAndDynamicSampling.png)
 ## Total Transactions
-To collect unsampled information for “total” transactions in Performance, Alerts, and Dashboards, Relay extracts [metrics](https://getsentry.github.io/relay/relay_metrics/index.html) from transactions. In short, these metrics comprise:
+To collect unsampled information for “total” transactions in Performance, Alerts, and Dashboards, Relay extracts [metrics](https://getsentry.github.io/relay/relay_metrics/index.html) from spans and transactions. In short, these metrics comprise:
-- Counts and durations for all transactions.
+- Counts and durations for all events.
 - A distribution (histogram) for all measurements, most notably the web vitals.
 - The number of unique users (set).
 Each of these metrics can be filtered and grouped by a number of predefined tags, [implemented in Relay](https://github.com/getsentry/relay/blob/master/relay-server/src/metrics_extraction/transactions/types.rs#L142-L157).
-For more granular queries, **stored transaction events are needed**. _The purpose of dynamic sampling here is to ensure that enough representatives are always available._
+For more granular queries, **stored events are needed**. _The purpose of dynamic sampling here is to ensure that there are always sufficient representative sample events._
-If Sentry applies a 1% dynamic sample rate, you can still receive accurate TPM (transactions per minute) and web vital quantiles through total transaction data backed by metrics. There is also a listing of each of these numbers by the transaction.
+If Sentry applies a 1% dynamic sample rate, you can still receive accurate events per minute (SPM or TPM, depending on event type) and web vital quantiles through total event data backed by metrics. There is also a listing of each of these numbers by transaction.
-When you go into transaction summary or Discover, you might want to now split the data by a custom tag you’ve added to your transactions. This granularity is not offered by metrics, so **these queries need to use stored transactions**.
+When you go into the trace explorer or Discover, you might now want to split the data by a custom tag you’ve added to your events. This granularity is not offered by metrics, so **these queries need to use stored events**.
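+
+To make the three metric kinds above concrete, here is an illustrative sketch of extracting them from a single event (field names are hypothetical, not Relay's schema):
+
+```python
+from collections import Counter, defaultdict
+
+counters: Counter = Counter()      # counts per transaction
+distributions = defaultdict(list)  # histograms, e.g. durations and web vitals
+unique_users = defaultdict(set)    # set metric backing unique-user counts
+
+def extract_metrics(event: dict) -> None:
+    name = event["transaction"]
+    counters[name] += 1
+    distributions[name].append(event["duration_ms"])
+    unique_users[name].add(event["user_id"])
+```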