**What is it?**
SLIM is a web application that aims to make state-of-the-art bioinformatics tools accessible to non-specialists and command-line-reluctant users for the processing of raw amplicon sequencing data, i.e. DNA metabarcoding, from Illumina paired-end or Nanopore FASTQ files to an annotated ASV/OTU matrix.
SLIM is built on the node.js framework and provides a Graphical User Interface (GUI) to interact with bioinformatics software. It simplifies the creation and deployment of a processing pipeline and is accessible through an internet browser. It is maintained by [Adrià Antich](mailto:a.antich@ceab.csic.es) and [Tristan Cordier](mailto:tristan.cordier@gmail.com).
The application is embedded in a [podman](https://podman.io/) container.
The full documentation is available [here](https://github.com/adriantich/SLIM/blob/master/man/README.md#tutorials).
# Install and deploy the web app
First of all, podman needs to be installed on the machine. You can find instructions here:
* [podman for Ubuntu](https://podman.io/docs/installation#ubuntu)
* [podman for Debian](https://podman.io/docs/installation#debian)
* [podman for macOS](https://podman.io/docs/installation#macos)
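Once installed, you can quickly check that podman is available from the shell (a small sketch; `PODMAN_STATUS` is just an illustrative variable name):

```shell
# Check whether podman is installed and report its version
if command -v podman >/dev/null 2>&1; then
    PODMAN_STATUS="$(podman --version)"
else
    PODMAN_STATUS="podman not found - install it before deploying SLIM"
fi
echo "$PODMAN_STATUS"
```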
To install SLIM, get the latest stable release [here](https://github.com/trtcrd/SLIM/archive/v1.0.0.tar.gz) or, using the terminal:
<!-- Before deploying SLIM, you need to configure the mailing account that will be used for mailing service.
``` -->
Once podman is installed and running and the SLIM archive has been downloaded, SLIM can be deployed using the two scripts `get_dependencies_slim_v1.0.0.sh` and `start_slim_v1.0.0.sh`.
* `get_dependencies_slim_v1.0.0.sh` fetches all the bioinformatics tools needed from their respective repositories.
* `start_slim_v1.0.0.sh` destroys the current running webserver to replace it with a new one. **/!\\** All the files previously uploaded and the results of analyses will be destroyed during the process.
```bash
bash get_dependencies_slim_v1.0.0.sh
bash start_slim_v1.0.0.sh
```
The server is configured to use up to 8 CPU cores per job. The number of available cores determines how many jobs can be executed in parallel (1-8 cores -> 1 job, 16 -> 2 jobs, etc.). The number of cores per job is defined in the [scheduler.js](https://github.com/adriantich/SLIM/blob/master/server/scheduler.js) script in the line:
```
const CORES_BY_RUN = 8;
```
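The cores-to-jobs relationship above boils down to integer division, which can be sketched as follows (`jobs_for_cores` is a hypothetical helper for illustration, not part of SLIM):

```shell
# Cores reserved per job, as in scheduler.js
CORES_BY_RUN=8

jobs_for_cores() {
    local total=$1
    local jobs=$(( total / CORES_BY_RUN ))
    # A machine with fewer than 8 cores still runs one job at a time
    [ "$jobs" -lt 1 ] && jobs=1
    echo "$jobs"
}

jobs_for_cores 16   # prints 2
```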
# Accessing the webserver
The execution of the `start_slim_v1.0.0.sh` script deploys and starts the webserver.
By default, the webserver is accessible on port 8080, but this can be modified using the -P option:
```
> bash start_slim_v1.0.0.sh -h
start_slim_v1.0.0.sh destroys the current running webserver to replace it with a new one.
/!\ All the files previously uploaded and the results of analyses will be destroyed during the process.
```
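The -P behaviour can be pictured with a standard `getopts` loop (a hypothetical sketch of the option handling described above, not the script's actual code):

```shell
# Default port, overridden by -P <port>
PORT=8080
while getopts "P:h" opt; do
  case "$opt" in
    P) PORT="$OPTARG" ;;
    h) echo "usage: start_slim_v1.0.0.sh [-P port]" ;;
  esac
done
echo "webserver will listen on port $PORT"
```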
# Prepare and upload your data
You may check the files and their required format by yourself:
- download an Illumina [toy dataset](https://github.com/trtcrd/SLIM/blob/gh-pages/assets/tuto/exemple_tuto.zip).
- download a Nanopore [toy dataset](https://github.com/trtcrd/SLIM/blob/gh-pages/assets/tuto/nanopore_tuto.zip).
The "file uploader" section allows you to upload all the required files. Usually it consists of:
- one (or multiple) pair(s) of FASTQ files corresponding to the multiplexed library(ies) (can be zipped)
- a CSV (comma-separated values) file containing the correspondence between libraries, tagged-primer pairs and samples (the so-called tag-to-sample file, see below for an example)
- alternatively, a list of FASTQ files, each corresponding to a sample (Nanopore or Illumina)
- a FASTA file containing the tagged-primer sequences and names (see below for an example)
- a FASTA file containing the reference sequence database (see below for an example)
## Metabarcoding
A typical metabarcoding workflow would usually include:
1. Demultiplexing the libraries (if each file corresponds to a single sample, use the wildcard-creator module, and proceed to the joining step)
2. Joining the paired-end reads
3. Chimera removal
4. ASVs inference / OTUs clustering
**The use of wildcard '*' for file pointing**
The chaining between modules is made through the file names used as input / output. To avoid having to manually select all the samples to be included in an analysis, wildcards '*' (meaning 'all') are generated during demultiplexing (or by using the wildcard-creator module, see below) and used by the application.
Such wildcards are generated from the compressed library FASTQ files (tar.gz) and from the tag-to-sample file.
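For intuition, such a wildcard behaves like a shell glob over the demultiplexed per-sample files (a sketch with hypothetical sample names):

```shell
# Create dummy demultiplexed files (hypothetical sample names)
mkdir -p wildcard_demo && cd wildcard_demo
touch tag_to_sample_sampleA_fwd.fastq tag_to_sample_sampleB_fwd.fastq
touch tag_to_sample_sampleA_rev.fastq tag_to_sample_sampleB_rev.fastq

# 'tag_to_sample*_fwd.fastq' selects every forward-read file at once
ls tag_to_sample*_fwd.fastq
```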
**Users cannot type wildcards into the file names of modules on their own**. Instead, the application has an autocompletion feature and will make wildcard suggestions for the user to select within the GUI.
However, when uploading demultiplexed libraries (each FASTQ file corresponding to a single sample), the demultiplexing step is not needed; instead, the wildcard pattern must be created to proceed through the different processing steps. To do so, we have created the [wildcard-creator](https://github.com/adriantich/SLIM/blob/master/man/sections/wildcard_creator.md) module.
To point to a set of samples (all samples from the tag-to-sample file, or all the samples from library_1 for instance), there will be a '*', and the application adds the processing step as a suffix incrementally:
- all samples from the tag-to-sample file that have been demultiplexed: 'tag_to_sample*_fwd.fastq' and 'tag_to_sample*_rev.fastq'
</p>
<!--Once your workflow is set, please fill the email field and click on the start button.
Your job will automatically be scheduled on the server.
You will receive an email when your job starts, if your job aborts, and when your job is over.
This email contains a direct link to your job so that the internet browser tab can be closed once the execution started. -->
Once your workflow is set, click on the start button and bookmark the URL so you can return to your job later.
When the job is over, small download icons will appear on the right of each output field.
All the uploaded, intermediate and result files are available to download.
Your files will remain available on the server for 24 hours, after which they will be removed for storage optimisation.
Each module's status is displayed beside its name:
- waiting: the execution started; the module is waiting for its input files.
# Version history
### v1.0.0
Moved to podman container by default (docker kept as an option)

Added modules for processing nanopore amplicon data (CHOPPER, MSI, ASHURE, OPTICS)

Added module to create wildcard grouping of files

Added SWARM3 module

Emailing service hidden until a viable option is identified

Documentation moved from the wiki to the tutos folder

Various interface bug fixes
### v0.6.2
Dockerfile: updated systeminformation and docker recipe