Major refactoring to support tokio/rs-tracing (#8)
* Major refactoring to support tokio/rs-tracing

Removed manual activation of tracing; Rust tracing events now instrument
the code, and a subscriber sends the Datadog tracing events.

Also reimplemented log with support for trace-id and span-id.

Use thread-local storage to track the current trace and span IDs so
that new spans get the appropriate parents.
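A minimal std-only sketch of this thread-local bookkeeping (the `get_thread_trace_id` name is from this commit; `enter_span` and `current_parent` are illustrative, not the crate's actual API):

```rust
use std::cell::Cell;

// Each thread remembers the trace/span it is currently inside, so a
// newly created span can pick up the correct parent.
thread_local! {
    static CURRENT_TRACE_ID: Cell<Option<u64>> = Cell::new(None);
    static CURRENT_SPAN_ID: Cell<Option<u64>> = Cell::new(None);
}

pub fn enter_span(trace_id: u64, span_id: u64) {
    CURRENT_TRACE_ID.with(|t| t.set(Some(trace_id)));
    CURRENT_SPAN_ID.with(|s| s.set(Some(span_id)));
}

/// Parent for a new span: the span this thread is currently inside, if any.
pub fn current_parent() -> Option<u64> {
    CURRENT_SPAN_ID.with(|s| s.get())
}

pub fn get_thread_trace_id() -> Option<u64> {
    CURRENT_TRACE_ID.with(|t| t.get())
}
```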

* Added events for errors, http, and tags

Added handling of event!() macros.  To attach error metadata, the
event!() must contain the keys "error_msg", "error_stack", and/or
"error_type".  If any are present, error will be set to true for
the span, and error metadata will be added for the values of those keys
(missing keys will have "" values added).

Likewise, events containing "http_url", "http_status_code", and/or
"http_method" will have HTTP metadata attached (and again, missing keys
will use "" values).

Any tags not for http or error will be considered "custom tags" and
added to the metadata as the key/value pair given.

Events can be sent as one large event, or split into multiple.  The last
event takes precedence if any key/values overwrite previous key/values.
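The key-classification rule above can be sketched independently of the tracing plumbing, roughly as follows (the function name and shape are assumptions for illustration, not the crate's API):

```rust
use std::collections::HashMap;

const ERROR_KEYS: [&str; 3] = ["error_msg", "error_stack", "error_type"];
const HTTP_KEYS: [&str; 3] = ["http_url", "http_status_code", "http_method"];

/// If any error_* key is present, mark the span as errored and fill all
/// three error keys (missing ones get ""); likewise for http_*;
/// everything else becomes a custom tag.
pub fn classify(fields: &HashMap<String, String>) -> (bool, HashMap<String, String>) {
    let mut meta = HashMap::new();
    let has_error = ERROR_KEYS.iter().any(|k| fields.contains_key(*k));
    if has_error {
        for k in &ERROR_KEYS {
            meta.insert(k.to_string(), fields.get(*k).cloned().unwrap_or_default());
        }
    }
    if HTTP_KEYS.iter().any(|k| fields.contains_key(*k)) {
        for k in &HTTP_KEYS {
            meta.insert(k.to_string(), fields.get(*k).cloned().unwrap_or_default());
        }
    }
    for (k, v) in fields {
        if !ERROR_KEYS.contains(&k.as_str()) && !HTTP_KEYS.contains(&k.as_str()) {
            meta.insert(k.clone(), v.clone()); // custom tag
        }
    }
    (has_error, meta)
}
```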

Lastly, cleaned up warnings, and formatted code.

* Add "get_thread_trace_id" to get thread-local trace ID

* Change sending pattern to require event

The user must send a "send_trace" event with a truthy value (true, "true", 1,
"1", "TRUE", etc.) to cause the trace to be sent to Datadog.

This allows more flexibility about WHEN the trace gets sent.
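The truthiness check implied above can be sketched like this (the exact accepted set is an assumption based on the examples given):

```rust
/// Accepts true, "true", 1, "1", "TRUE", etc., case-insensitively.
pub fn is_send_trace(value: &str) -> bool {
    matches!(value.trim().to_ascii_lowercase().as_str(), "true" | "1")
}
```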

* Add test to make sure event outside of span still sends

* Make sure highest level for traces is "INFO"

This ensures that even when logging is filtered to Error and Warn, we can
still process "INFO" traces, as this is the level the send_trace event will
likely go out as.

* Update README

* Don't panic if a global subscriber is already set; just warn and return
kitsuneninetails authored Jun 4, 2020
1 parent 6df48aa commit 79defa8
Showing 6 changed files with 573 additions and 206 deletions.
6 changes: 5 additions & 1 deletion Cargo.toml
@@ -1,6 +1,6 @@
[package]
name = "datadog-apm-sync"
version = "0.1.2"
version = "0.2.0"
authors = ["Michael Micucci <9975355+kitsuneninetails@users.noreply.github.com>", "Fernando Gonçalves <fernando.goncalves@pipefy.com> (original base code)"]
edition = "2018"
license = "MIT"
@@ -15,6 +15,10 @@ filter-logger = "*"
log = "0.4"
serde = { version = "1", features = ["derive"] }
serde_json = "1"
tracing = "0.1"
tracing-subscriber = "0.2.4"
tracing-futures = "*"

[dev-dependencies]
rand = "0.3"
tokio = { version = "0.2", features = ["rt-core", "sync", "rt-threaded", "macros"] }
56 changes: 44 additions & 12 deletions README.md
@@ -1,27 +1,59 @@
# Datadog apm (sync-based) for Rust (fork from datadog-apm)
# Datadog apm (sync-based) for Rust (original fork from datadog-apm)

[![MIT licensed](https://img.shields.io/badge/license-MIT-blue.svg)](./LICENSE)
![CI](https://github.com/kitsuneninetails/datadog-apm-rust-sync/workflows/CI/badge.svg)

Based on a fork from <https://github.com/pipefy/datadog-apm-rust>.
Credits
-------
Originally based on a fork from <https://github.com/pipefy/datadog-apm-rust>.
Original code written by Fernando Gonçalves (github: <https://github.com/fhsgoncalves>).

Credit and my gratitude go to the original author for the original code. This repo only builds on top
Credit and my gratitude go to the original author for the original code. This repo builds on top
of his hard work and research.

Changes
-------

As this was a fairly big change to the design and implementation, I decided to make a new repo rather
than a fork. A PR against the original would be ill-advised, as it changes some basic assumptions and design
decisions that I don't feel it fair to impose upon the original author (his vision should guide the path the
original repo takes; this repo is just my take on the original idea he came up with).
Usage
------

Add to your `Cargo.toml`:
```toml
tracing = "0.1"
tracing-futures = "*"
datadog-apm-sync = {version = "0.2", git = "http://github.com/kitsuneninetails/datadog-apm-rust-sync"}
```

In your Rust code, instantiate a DatadogTracing:

```text
# {
let config = Config {
    service: "service_name".into(),
    logging_config: Some(LoggingConfig {
        level: Level::Debug,
        ..LoggingConfig::default()
    }),
    enable_tracing: true,
    ..Default::default()
};
let _client = DatadogTracing::new(config);
# }
```

Then just use the tracing library normally (with #[instrument] attributes, or span! + span.enter() code) and your
logs will include the trace-id/span-id where applicable (due to the `logging_config` being passed in; pass in `None`
to disable the DatadogTracing as a logger), and Datadog traces will be prepared via the tracing::Subscriber calls
(due to `enable_tracing` being set; set it to `false` to disable the DatadogTracing as a tracing subscriber).
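For example, an instrumented function might look like the following (illustrative only; assumes tracing's default `attributes` feature for `#[tracing::instrument]`):

```text
use tracing::{event, span, Level};

#[tracing::instrument]
fn handle_request(user_id: u64) {
    // Logs here are printed with the current trace-id/span-id.
    let db_span = span!(Level::INFO, "db_query");
    let _guard = db_span.enter();
    event!(Level::INFO, send_trace = true); // flush the trace to Datadog
}
```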

This tracer has also been extended to act as a Logger, allowing logs to be printed with span and trace IDs. This
also brings it closer to compatibility with rust-tracing and open-telemetry APIs.
Use events (the event! macro) to send error information (keys: error_msg, error_stack, and error_type) or
HTTP metadata (keys: http_url, http_method, and http_status_code). Also, to actually force the
send to Datadog, send an event:

```text
event!(tracing::Level::INFO, send_trace=true);
```

Other Changes from Original
------
Modifications made to use Hyper 0.10 and remove all Tokio/Async+Await functionality:
* Removed tokio crate.
* Removed all mention of async/await.
* Changed MPSC to std::sync version rather than tokio::sync version.
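The std::sync swap in the last bullet can be sketched like this (names are illustrative; the real sender would POST the drained spans to the Datadog agent rather than collect them):

```rust
use std::sync::mpsc;
use std::thread;

/// A background thread drains finished spans over a std::sync::mpsc
/// channel; a blocking recv loop replaces tokio's async channel + task.
pub fn spawn_sender() -> (mpsc::Sender<String>, thread::JoinHandle<Vec<String>>) {
    let (tx, rx) = mpsc::channel::<String>();
    let handle = thread::spawn(move || {
        // iter() blocks until the channel is closed (all senders dropped).
        rx.iter().collect::<Vec<String>>()
    });
    (tx, handle)
}
```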
23 changes: 8 additions & 15 deletions src/api.rs
@@ -1,9 +1,6 @@
use crate::model::Span;
use serde::Serialize;
use std::{
collections::HashMap,
time::{Duration, UNIX_EPOCH},
};
use std::collections::HashMap;

fn fill_meta(span: &Span, env: Option<String>) -> HashMap<String, String> {
let mut meta = HashMap::new();
@@ -38,10 +35,6 @@ fn fill_metrics() -> HashMap<String, f64> {
metrics
}

fn duration_to_nanos(duration: Duration) -> u64 {
duration.as_secs() * 1_000_000_000 + duration.subsec_nanos() as u64
}
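For reference, the deleted helper's arithmetic (whole seconds to nanos plus the sub-second remainder) matches std's `Duration::as_nanos`, which is the same quantity the chrono `timestamp_nanos`/`num_nanoseconds` calls now compute:

```rust
use std::time::Duration;

// The helper removed in this commit, reproduced from the diff above.
fn duration_to_nanos(duration: Duration) -> u64 {
    duration.as_secs() * 1_000_000_000 + duration.subsec_nanos() as u64
}
```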

#[derive(Debug, Serialize, Clone, PartialEq)]
pub struct RawSpan {
service: String,
@@ -67,8 +60,8 @@ impl RawSpan {
name: span.name.clone(),
resource: span.resource.clone(),
parent_id: span.parent_id,
start: duration_to_nanos(span.start.duration_since(UNIX_EPOCH).unwrap()),
duration: duration_to_nanos(span.duration),
start: span.start.timestamp_nanos() as u64,
duration: span.duration.num_nanoseconds().unwrap_or(0) as u64,
error: if span.error.is_some() { 1 } else { 0 },
r#type: "custom".to_string(),
meta: fill_meta(&span, env.clone()),
@@ -83,7 +76,7 @@ mod tests {

use super::*;
use crate::model::HttpInfo;
use std::time::SystemTime;
use chrono::{Duration, Utc};

use rand::Rng;

@@ -100,8 +93,8 @@
trace_id: rng.gen::<u64>(),
name: String::from("request"),
resource: String::from("/home/v3"),
start: SystemTime::now(),
duration: Duration::from_secs(2),
start: Utc::now(),
duration: Duration::seconds(2),
parent_id: None,
http: Some(HttpInfo {
url: String::from("/home/v3/2?trace=true"),
@@ -132,8 +125,8 @@
resource: span.resource.clone(),
service: config.service.clone(),
r#type: "custom".into(),
start: duration_to_nanos(span.start.duration_since(UNIX_EPOCH).unwrap()),
duration: duration_to_nanos(span.duration),
start: span.start.timestamp_nanos() as u64,
duration: span.duration.num_nanoseconds().unwrap_or(0) as u64,
error: 0,
meta,
metrics,