Commit dbffb8f

Merge pull request #92 from ocean-tracking-network/ticket74-tweaks

Additional tweaks for ticket 74.

CaitlinBate authored Jan 3, 2024
2 parents 3ca09c5 + f182722, commit dbffb8f
Showing 2 changed files with 5 additions and 5 deletions.
8 changes: 4 additions & 4 deletions _episodes/04-r-telemetry-report-import.md
@@ -66,7 +66,7 @@ View(proj58_tag)

## FACT Node

-## Importing all the datasets
+### Importing all the datasets
Now that we have an idea of what an exploratory workflow might look like with Tidyverse libraries like `dplyr` and `ggplot2`, let's look at how we might implement a common telemetry workflow using these tools.

We are going to use OTN-style detection extracts for this lesson. If you're unfamiliar with detection extract formats from OTN-style database nodes, see the documentation [here](https://members.oceantrack.org/data/otn-detection-extract-documentation-matched-to-animals).
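As a minimal sketch of that import step (the filename `proj58_matched_detections_2016.csv` is a hypothetical stand-in; substitute the detection extract you received from your node):

~~~
library(tidyverse)  # provides readr, dplyr, and ggplot2

# Hypothetical filename: substitute your own detection extract
proj58_matched_2016 <- read_csv("proj58_matched_detections_2016.csv")

glimpse(proj58_matched_2016)  # quick look at columns and their types
~~~
{: .language-r}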
@@ -121,7 +121,7 @@ View(tqcs_tag)

## GLATOS Network

-## Importing all the datasets
+### Importing all the datasets
Now that we have an idea of what an exploratory workflow might look like with Tidyverse libraries like `dplyr` and `ggplot2`, let's look at how we might implement a common telemetry workflow using these tools.

For the GLATOS Network you will receive Detection Extracts, which include all the tag matches for your animals. These can be used to create many meaningful summary reports.
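One such summary report might be sketched as follows (the data frame name `glatos_dets` and the column names `animal_id` and `station` are assumptions based on typical GLATOS detection extracts; check your own file's columns):

~~~
library(dplyr)

# Assuming 'glatos_dets' holds an imported GLATOS detection extract,
# count detections per animal at each station
glatos_dets %>%
  group_by(animal_id, station) %>%
  summarise(num_detections = n(), .groups = "drop") %>%
  arrange(desc(num_detections))
~~~
{: .language-r}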
@@ -198,7 +198,7 @@ View(glatos_receivers)

## MigraMar Node

-## Importing all the datasets
+### Importing all the datasets
Now that we have an idea of what an exploratory workflow might look like with Tidyverse libraries like `dplyr` and `ggplot2`, let's look at how we might implement a common telemetry workflow using these tools.

We are going to use OTN-style detection extracts for this lesson. If you're unfamiliar with detection extract formats from OTN-style database nodes, see the documentation [here](https://members.oceantrack.org/data/otn-detection-extract-documentation-matched-to-animals).
@@ -254,7 +254,7 @@ view(gmr_tag)
## OTN Node


-## Importing all the datasets
+### Importing all the datasets
Let's look at how we might implement a common telemetry workflow using Tidyverse libraries like `dplyr` and `ggplot2`.

We are going to use OTN-style detection extracts for this lesson. If you're unfamiliar with detection extract formats from OTN-style database nodes, see the documentation [here](https://members.oceantrack.org/data/otn-detection-extract-documentation-matched-to-animals).
2 changes: 1 addition & 1 deletion _episodes/07-introduction-to-glatos.md
@@ -470,7 +470,7 @@ library(lubridate)
~~~
{: .language-r}

-If you are following along with the workshop in the workshop repository, there should be a folder in 'data/' containing data corresponding to your node (at time of writing, FACT, ACT, or GLATOS). `glatos` can function with both GLATOS and OTN Node-formatted data, but the functions are different for each. Both, however, provide a marked performance boost over base R, and both ensure that the resulting data set will be compatible with the rest of the `glatos` framework.
+If you are following along with the workshop in the workshop repository, there should be a folder in 'data/' containing data corresponding to your node (at time of writing, FACT, ACT, GLATOS, or MigraMar). `glatos` can function with both GLATOS and OTN Node-formatted data, but the functions are different for each. Both, however, provide a marked performance boost over base R, and both ensure that the resulting data set will be compatible with the rest of the `glatos` framework.

We'll start by importing one of the `glatos` package's built-in datasets. `glatos` comes with a few datasets that are useful for testing code on data known to work with the package's functions. For this workshop, we'll continue using the walleye data that we've been working with in previous lessons. First, we'll use the `system.file` function to build the filepath to the walleye data. This saves us from having to track down the file in the `glatos` package's file structure; R can find it for us automatically.
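That step might be sketched like this (`read_glatos_detections` and the bundled `walleye_detections.csv` reflect the `glatos` package's example data as commonly documented; verify the path against your installed version):

~~~
library(glatos)

# Build the path to the walleye example data bundled with glatos
det_file <- system.file("extdata", "walleye_detections.csv",
                        package = "glatos")

# Read it with the GLATOS-format reader
walleye_detections <- read_glatos_detections(det_file)
head(walleye_detections)
~~~
{: .language-r}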

