nf-core/rnaseq
RNA sequencing analysis pipeline using STAR, RSEM, HISAT2 or Salmon with gene/isoform counts and extensive quality control.
Note: this documentation describes version 2.0 of the pipeline. The latest stable release is 3.17.0.
Samplesheet input
You will need to create a samplesheet file with information about the samples in your experiment before running the pipeline. Use this parameter to specify its location. It must be a comma-separated file with 5 columns and a header row, as shown in the examples below.
Multiple replicates
The group identifier is the same when you have multiple replicates from the same experimental group; just increment the replicate identifier appropriately. The first replicate value for any given experimental group must be 1. Below is an example for a single experimental group in triplicate:
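(Illustrative only; the FastQ file names below are placeholders.)

```csv
group,replicate,fastq_1,fastq_2,strandedness
control,1,AEG588A1_S1_L002_R1_001.fastq.gz,AEG588A1_S1_L002_R2_001.fastq.gz,forward
control,2,AEG588A2_S2_L002_R1_001.fastq.gz,AEG588A2_S2_L002_R2_001.fastq.gz,forward
control,3,AEG588A3_S3_L002_R1_001.fastq.gz,AEG588A3_S3_L002_R2_001.fastq.gz,forward
```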
Multiple runs of the same library
The group
and replicate
identifiers are the same when you have re-sequenced the same sample more than once (e.g. to increase sequencing depth). The pipeline will concatenate the raw reads before alignment. Below is an example for two samples sequenced across multiple lanes:
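(Illustrative only; the FastQ file names below are placeholders for two samples, each sequenced on two lanes.)

```csv
group,replicate,fastq_1,fastq_2,strandedness
control,1,AEG588A1_S1_L002_R1_001.fastq.gz,AEG588A1_S1_L002_R2_001.fastq.gz,forward
control,1,AEG588A1_S1_L003_R1_001.fastq.gz,AEG588A1_S1_L003_R2_001.fastq.gz,forward
treatment,1,AEG588A4_S4_L002_R1_001.fastq.gz,AEG588A4_S4_L002_R2_001.fastq.gz,forward
treatment,1,AEG588A4_S4_L003_R1_001.fastq.gz,AEG588A4_S4_L003_R2_001.fastq.gz,forward
```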
Full design
A final design file consisting of both single- and paired-end data may look something like the one below. This is for two experimental groups in triplicate, where the last replicate of the treatment
group has been sequenced twice.
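(Illustrative only; file names and strandedness values are placeholders. The control samples are paired-end, the treatment samples are single-end, and the third treatment replicate appears twice because it was sequenced twice.)

```csv
group,replicate,fastq_1,fastq_2,strandedness
control,1,AEG588A1_S1_L002_R1_001.fastq.gz,AEG588A1_S1_L002_R2_001.fastq.gz,forward
control,2,AEG588A2_S2_L002_R1_001.fastq.gz,AEG588A2_S2_L002_R2_001.fastq.gz,forward
control,3,AEG588A3_S3_L002_R1_001.fastq.gz,AEG588A3_S3_L002_R2_001.fastq.gz,forward
treatment,1,AEG588A4_S4_L003_R1_001.fastq.gz,,reverse
treatment,2,AEG588A5_S5_L003_R1_001.fastq.gz,,reverse
treatment,3,AEG588A6_S6_L003_R1_001.fastq.gz,,reverse
treatment,3,AEG588A6_S6_L004_R1_001.fastq.gz,,reverse
```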
Column | Description |
---|---|
group | Group identifier for sample. This will be identical for replicate samples from the same experimental group. |
replicate | Integer representing replicate number. Must start from 1..<number of replicates> . |
fastq_1 | Full path to FastQ file for read 1. File has to be zipped and have the extension “.fastq.gz” or “.fq.gz”. |
fastq_2 | Full path to FastQ file for read 2. File has to be zipped and have the extension “.fastq.gz” or “.fq.gz”. |
strandedness | Sample strand-specificity. Must be one of unstranded , forward or reverse . |
An example samplesheet has been provided with the pipeline.
Direct download of public repository data
NB: This is an experimental feature but should work beautifully when it does! :)
The pipeline has been set up to automatically download and process the raw FastQ files from public repositories. Identifiers can be provided in a file, one per line, via the --public_data_ids parameter. Currently, the following identifiers are supported:
SRA | ENA | GEO |
---|---|---|
SRR11605097 | ERR4007730 | GSM4432381 |
SRX8171613 | ERX4009132 | GSE147507 |
SRS6531847 | ERS4399630 | |
SAMN14689442 | SAMEA6638373 | |
SRP256957 | ERP120836 | |
SRA1068758 | ERA2420837 | |
PRJNA625551 | PRJEB37513 | |
If SRR/ERR run identifiers are provided, these will be resolved back to their corresponding SRX/ERX identifiers so that multiple runs from the same experiment can be merged. This is conceptually the same as merging multiple libraries sequenced from the same sample.
The final sample information for all identifiers is obtained from the ENA, which provides direct download links for FastQ files as well as their associated md5 sums. If download links exist, the files will be downloaded in parallel by FTP; otherwise they will NOT be downloaded. This is intentional, because tools such as parallel-fastq-dump, fasterq-dump, prefetch etc. require pre-existing configuration files in the user's home directory, which makes automation tricky across different platforms and inside containers.
As a bonus, the pipeline will also generate a valid samplesheet with paths to the downloaded data that can be used with the --input parameter; however, it is highly recommended that you double-check that all of the identifiers you defined using --public_data_ids are represented in the samplesheet. Also, public databases don't reliably hold information such as experimental group, replicate identifiers or strandedness, so you may need to amend these entries too. All of the sample metadata obtained from the ENA has been appended as additional columns to help you manually curate the samplesheet before you run the pipeline.
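For illustration, a plain-text file of identifiers (one per line, here taken from the table above) might be passed to the pipeline as follows; the file name is a placeholder:

```console
$ cat public_ids.txt
SRR11605097
GSE147507
PRJNA625551

$ nextflow run nf-core/rnaseq --public_data_ids public_ids.txt -profile docker
```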
Alignment options
By default, the pipeline uses STAR (i.e. --aligner star) to align the raw FastQ reads to the reference genome. STAR is fast but requires a lot of memory to run, typically around 38GB for the human GRCh37 reference genome. Since the RSEM (i.e. --aligner star_rsem) workflow in the pipeline also uses STAR, you should use the HISAT2 aligner (i.e. --aligner hisat2) if you have memory limitations.
You also have the option to pseudo-align and quantify your data with Salmon by providing the --pseudo_aligner salmon
parameter. Salmon will then be run in addition to the standard alignment workflow defined by --aligner
, mainly because it allows you to obtain QC metrics with respect to the genomic alignments. However, you can provide the --skip_alignment
parameter if you would like to run Salmon in isolation. By default, the pipeline will use the genome FASTA and GTF files to generate the transcripts FASTA file and then build the Salmon index. You can override this behaviour using the --transcript_fasta and --salmon_index parameters, respectively.
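The sketches below illustrate how these aligner options might be combined on the command line; the samplesheet path and genome key are placeholders.

```bash
# Align with HISAT2 instead of STAR (lower memory footprint)
nextflow run nf-core/rnaseq --input samplesheet.csv --genome GRCh37 --aligner hisat2 -profile docker

# Run Salmon in addition to the default STAR alignment workflow
nextflow run nf-core/rnaseq --input samplesheet.csv --genome GRCh37 --pseudo_aligner salmon -profile docker

# Run Salmon in isolation, skipping genome alignment
nextflow run nf-core/rnaseq --input samplesheet.csv --genome GRCh37 --pseudo_aligner salmon --skip_alignment -profile docker
```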
Reference genome files
The minimum reference genome requirements are a FASTA and GTF file; all other files required to run the pipeline can be generated from these. However, it is more storage- and compute-friendly if you are able to re-use reference genome files as efficiently as possible. It is recommended to use the --save_reference parameter if you are using the pipeline to build new indices (e.g. those unavailable on AWS iGenomes) so that you can save them somewhere locally: index building can be quite time-consuming, and saving the indices allows them to be re-used in future runs of the pipeline instead of being rebuilt. You can then provide the appropriate reference genome files on the command line via the appropriate parameters (e.g. --star_index '/path/to/STAR/index/') or via a custom config file, as sketched below.
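For example, a custom config file passed with -c could point the pipeline at pre-built references; the paths below are hypothetical placeholders.

```nextflow
// custom_refs.config -- hypothetical paths; pass to the pipeline with `-c custom_refs.config`
params {
    fasta      = '/path/to/genome.fa'
    gtf        = '/path/to/genome.gtf'
    star_index = '/path/to/STAR/index/'
}
```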
- If --genome is provided then the FASTA and GTF files (and existing indices) will be automatically obtained from AWS iGenomes unless these have already been downloaded locally to the path specified by --igenomes_base.
- If --gff is provided as input then it will be converted to a GTF file; if both are provided, the GTF file will be used.
- If --gene_bed is not provided then it will be generated from the GTF file.
- If --additional_fasta is provided then the features in this file (e.g. ERCC spike-ins) will be automatically concatenated onto both the reference FASTA file and the GTF annotation before building the appropriate indices.
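Putting a few of these options together, a command for building and saving references with spike-ins appended might look like the following sketch; the samplesheet and spike-in FASTA file names are placeholders.

```bash
# Build indices with ERCC spike-ins appended to the reference, and save them for re-use
nextflow run nf-core/rnaseq \
    --input samplesheet.csv \
    --genome GRCh38 \
    --additional_fasta ERCC92.fa \
    --save_reference \
    -profile docker
```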
NB: Compressed reference files are also supported by the pipeline, i.e. standard files with the .gz extension and index folders with the .tar.gz extension.
If you are using GENCODE reference genome files, please specify the --gencode parameter because the format of these files is slightly different from Ensembl genome files:
- The --fc_group_features_type parameter will automatically be set to gene_type as opposed to gene_biotype.
- If you are running Salmon, the --gencode flag will also be passed to the index-building step to overcome parsing issues resulting from the transcript IDs in GENCODE FASTA files being separated by vertical pipes (|) instead of spaces (see this issue).
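A hedged example of a GENCODE-based run is shown below; the FASTA and GTF file names are placeholders for whichever GENCODE release you have downloaded.

```bash
# Use GENCODE reference files (note the --gencode flag)
nextflow run nf-core/rnaseq \
    --input samplesheet.csv \
    --fasta GRCh38.primary_assembly.genome.fa.gz \
    --gtf gencode.v38.primary_assembly.annotation.gtf.gz \
    --gencode \
    -profile docker
```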
Running the pipeline
The typical command for running the pipeline is as follows:
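(A minimal sketch; the samplesheet path and genome key are placeholders for your own input.)

```bash
nextflow run nf-core/rnaseq --input samplesheet.csv --genome GRCh37 -profile docker
```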
This will launch the pipeline with the docker
configuration profile. See below for more information about profiles.
Note that the pipeline will create the following files in your working directory:
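(Typically something like the following; exact names depend on your configuration.)

```console
work            # Directory containing the Nextflow working files
results         # Finished results (the default output directory)
.nextflow.log   # Log file from Nextflow
# ...plus other hidden Nextflow files, e.g. the history of previous runs and old logs
```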
Updating the pipeline
When you run the above command, Nextflow automatically pulls the pipeline code from GitHub and stores it as a cached version. When running the pipeline after this, it will always use the cached version if available - even if the pipeline has been updated since. To make sure that you’re running the latest version of the pipeline, make sure that you regularly update the cached version of the pipeline:
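For example:

```bash
nextflow pull nf-core/rnaseq
```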
Reproducibility
It’s a good idea to specify a pipeline version when running the pipeline on your data. This ensures that a specific version of the pipeline code and software are used when you run your pipeline. If you keep using the same tag, you’ll be running the same version of the pipeline, even if there have been changes to the code since.
First, go to the nf-core/rnaseq releases page and find the latest version number - numeric only (e.g. 1.3.1). Then specify this when running the pipeline with -r (one hyphen), e.g. -r 1.3.1.
This version number will be logged in reports when you run the pipeline, so that you’ll know what you used when you look back in the future.
Core Nextflow arguments
NB: These options are part of Nextflow and use a single hyphen (pipeline parameters use a double-hyphen).
-profile
Use this parameter to choose a configuration profile. Profiles can give configuration presets for different compute environments.
Several generic profiles are bundled with the pipeline which instruct the pipeline to use software packaged using different methods (Docker, Singularity, Podman, Conda) - see below.
We highly recommend the use of Docker or Singularity containers for full pipeline reproducibility, however when this is not possible, Conda is also supported.
The pipeline also dynamically loads configurations from https://github.com/nf-core/configs when it runs, making multiple config profiles for various institutional clusters available at run time. For more information and to see if your system is available in these configs please see the nf-core/configs documentation.
Note that multiple profiles can be loaded, for example: -profile test,docker
- the order of arguments is important!
They are loaded in sequence, so later profiles can overwrite earlier profiles.
If -profile
is not specified, the pipeline will run locally and expect all software to be installed and available on the PATH
. This is not recommended.
- docker
  - A generic configuration profile to be used with Docker
  - Pulls software from Docker Hub: nfcore/rnaseq
- singularity
  - A generic configuration profile to be used with Singularity
  - Pulls software from Docker Hub: nfcore/rnaseq
- podman
  - A generic configuration profile to be used with Podman
  - Pulls software from Docker Hub: nfcore/rnaseq
- conda
  - A generic configuration profile to be used with Conda
- test
  - A profile with a complete configuration for automated testing
  - Includes links to test data so needs no other parameters
-resume
Specify this when restarting a pipeline. Nextflow will use cached results from any pipeline steps where the inputs are the same, continuing from where it got to previously.
You can also supply a run name to resume a specific run: -resume [run-name]
. Use the nextflow log
command to show previous run names.
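For example (the run name below is a placeholder; use one reported by nextflow log):

```bash
# List previous runs and their names
nextflow log

# Resume a specific run by name
nextflow run nf-core/rnaseq -profile docker -resume <run-name>
```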
-c
Specify the path to a specific config file (this is a core Nextflow command). See the nf-core website documentation for more information.
Custom resource requests
Each step in the pipeline has a default set of requirements for number of CPUs, memory and time. For most of the steps in the pipeline, if the job exits with an error code of 143 (exceeded requested resources) it will automatically be resubmitted with higher requests (2 x original, then 3 x original). If it still fails after the third attempt then the pipeline run is stopped.
Whilst these default requirements will hopefully work for most people with most data, you may find that you want to customise the compute resources that the pipeline requests. You can do this by creating a custom config file. For example, to give the workflow process star
32GB of memory, you could use the following config:
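A minimal sketch, assuming the STAR alignment process is named star as mentioned above (check your version of the pipeline if the process selector does not match):

```nextflow
// custom_resources.config -- pass to the pipeline with `-c custom_resources.config`
process {
    withName: star {
        memory = 32.GB
    }
}
```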
See the main Nextflow documentation for more information.
If you are likely to be running nf-core
pipelines regularly it may be a good idea to request that your custom config file is uploaded to the nf-core/configs
git repository. Before you do this, please test that the config file works with your pipeline of choice using the -c
parameter (see definition above). You can then create a pull request to the nf-core/configs
repository with the addition of your config file, associated documentation file (see examples in nf-core/configs/docs
), and amending nfcore_custom.config
to include your custom profile.
If you have any questions or issues please send us a message on Slack on the #configs
channel.
Running in the background
Nextflow handles job submissions and supervises the running jobs. The Nextflow process must run until the pipeline is finished.
The Nextflow -bg
flag launches Nextflow in the background, detached from your terminal so that the workflow does not stop if you log out of your session. The logs are saved to a file.
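A hedged example; the samplesheet path, genome key and log file name are placeholders:

```bash
# Launch Nextflow in the background and capture the console output in a file
nextflow run nf-core/rnaseq --input samplesheet.csv --genome GRCh37 -profile docker -bg > pipeline.log 2>&1
```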
Alternatively, you can use screen
/ tmux
or a similar tool to create a detached session which you can log back into at a later time. Some HPC setups also allow you to run Nextflow within a cluster job submitted to your job scheduler (from where it submits more jobs).
Nextflow memory requirements
In some cases, the Nextflow Java virtual machines can start to request a large amount of memory.
We recommend adding the following line to your environment to limit this (typically in ~/.bashrc
or ~/.bash_profile
):
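For example (these values are a common starting point; adjust them to suit your system):

```bash
# Limit the memory used by the Nextflow Java virtual machine
export NXF_OPTS='-Xms1g -Xmx4g'
```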