Introduction

This document describes the output produced by the pipeline.

The directories listed below will be created in the results directory after the pipeline has finished. All paths are relative to the top-level results directory.

Pipeline overview

The pipeline is built using Nextflow, and the results are organized as follows:

Module output

Preprocessing

FastQC

FastQC gives general quality metrics about your sequenced reads. It provides information about the quality score distribution across your reads, per base sequence content (%A/T/G/C), adapter contamination and overrepresented sequences. For further reading and documentation see the FastQC help pages.

Output files
  • trimgalore/fastqc/
    • *_fastqc.html: FastQC report containing quality metrics for your untrimmed raw fastq files.
    • *_fastqc.zip: Zipped FastQC data.
  • summary_tables/: see the Summary tables section below for collated statistics.
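
If you want the underlying numbers rather than the HTML report, each *_fastqc.zip archive contains a plain-text fastqc_data.txt file. A minimal Python sketch (the archive layout is a FastQC convention; the sample path is hypothetical):

```python
import zipfile

# Hypothetical example path; substitute one of your *_fastqc.zip files.
archive = "trimgalore/fastqc/sample1_fastqc.zip"

with zipfile.ZipFile(archive) as zf:
    # FastQC zips contain a single <name>_fastqc/ directory holding fastqc_data.txt.
    data_name = next(n for n in zf.namelist() if n.endswith("fastqc_data.txt"))
    text = zf.read(data_name).decode()

# Modules are delimited by ">>Module name<tab>pass/warn/fail" ... ">>END_MODULE".
for line in text.splitlines():
    if line.startswith(">>") and not line.startswith(">>END_MODULE"):
        print(line.lstrip(">"))  # module name and its pass/warn/fail status
```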

Trim Galore!

Trim Galore! is a fastq preprocessor for read/adapter trimming and quality control. In this pipeline it is used to trim adapter sequences and discard low-quality reads. Its output is in the results folder and part of the MultiQC report.

Output files
  • trimgalore/: directory containing log files with retained reads, trimming percentage, etc. for each sample.
    • *trimming_report.txt: report of read numbers that pass Trim Galore!.
  • summary_tables/: see the Summary tables section below for collated statistics.

MultiQC

MultiQC is a visualization tool that generates a single HTML report summarising all samples in your project. Most of the pipeline QC results are visualised in the report and further statistics are available in the report data directory.

Results generated by MultiQC collate pipeline QC from supported tools e.g. FastQC. The pipeline has special steps which also allow the software versions to be reported in the MultiQC output for future traceability. For more information about how to use MultiQC reports, see http://multiqc.info.

Output files
  • multiqc/
    • multiqc_report.html: a standalone HTML file that can be viewed in your web browser.
    • multiqc_data/: directory containing parsed statistics from the different tools used in the pipeline.
    • multiqc_plots/: directory containing static images from the report in various formats.
  • summary_tables/: see the Summary tables section below for collated statistics.
Note

The FastQC plots displayed in the MultiQC report show untrimmed reads. They may contain adapter sequence and potentially regions with low quality.
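
The tables in multiqc_data/ are plain tab-separated files that can be loaded directly for downstream work. A minimal sketch, assuming MultiQC's standard file name multiqc_general_stats.txt (the exact columns depend on which tools ran):

```python
import pandas as pd

# MultiQC writes its parsed statistics as TSV files inside multiqc_data/.
stats = pd.read_csv("multiqc/multiqc_data/multiqc_general_stats.txt", sep="\t")
print(stats.head())  # one row per sample, one column per collected metric
```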

BBduk

BBduk is a filtering tool that removes specific sequences from the samples using a reference fasta file. It is a built-in tool from the BBMap suite.

Output files
  • bbmap/
    • bbduk/*.fastq.gz: fastq files after removal of sequences specified with --sequence_filter. Only saved if you set --save_bbduk_fastq.
    • bbduk/*.bbduk.log: text files with the results from BBduk analysis. The number of filtered reads can be found here.
  • summary_tables/: see the Summary tables section below for collated statistics.
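
To extract the filtered-read counts from the logs programmatically, something like the sketch below may help. The Input/Contaminants/Result labels reflect BBduk's usual summary lines and the sample path is hypothetical; verify both against your own logs:

```python
import re
from pathlib import Path

# Hypothetical example path; BBduk writes one log per sample.
log = Path("bbmap/bbduk/sample1.bbduk.log").read_text()

# BBduk's summary typically reports "Input:", "Contaminants:" and "Result:"
# lines followed by read counts; adjust the pattern if your log differs.
for label in ("Input:", "Contaminants:", "Result:"):
    m = re.search(rf"^{label}\s+(\d+) reads", log, re.MULTILINE)
    if m:
        print(label, m.group(1), "reads")
```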

BBnorm

BBnorm is a tool from the BBMap suite that reduces the coverage of highly abundant k-mers and removes sequences representing k-mers below a threshold. This can be useful when a data set is too large to assemble, and may also improve the assembly. N.B. the digital normalization is done only for the assembly; the non-normalized sequences are used for quantification.

Output files
  • bbmap/bbnorm/
    • all_samples.bbnorm.log: Log file for the run.
    • all_samples.fastq.gz: Sequences kept after normalization. Only saved if you set --save_bbnorm_fastq.
  • summary_tables/: see the Summary tables section below for collated statistics.

Assembly step

Megahit

Megahit is used to assemble the cleaned and trimmed FastQ reads into contigs.

Output files
  • megahit/megahit_out/
    • *.log: log file of Megahit run.
    • megahit_assembly.contigs.fa.gz: contigs created by Megahit.

SPAdes

SPAdes is an alternative assembler that can be used instead of Megahit to assemble reads into contigs.

Output files
  • spades/
    • spades.assembly.gfa.gz: gfa file output from SPAdes.
    • spades.spades.log: log file output from SPAdes.
    • spades.transcripts.fa.gz: contigs created by SPAdes.
    • spades.yaml: configuration file used by SPAdes.

ORF caller step

Prodigal

By default, Prodigal is used to identify ORFs in the assembly.

Output files
  • prodigal/
    • <assembly_name>.prodigal_all.txt.gz: ORF summary
    • <assembly_name>.prodigal.fna.gz: ORFs in nucleotide fasta format
    • <assembly_name>.prodigal.faa.gz: ORFs in amino acid fasta format
    • <assembly_name>.prodigal.gff.gz: ORFs in genome feature file format
    • <assembly_name>.prodigal_format.gff.gz: ORFs in an alternative genome feature file format
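
Because the GFF output is the standard nine-column, tab-separated format, simple summaries are easy to compute, e.g. the number of called ORFs per contig. A minimal sketch (the assembly name in the path is hypothetical):

```python
import gzip
from collections import Counter

# GFF is tab-separated with nine columns; column 1 is the contig ID.
orfs_per_contig = Counter()
with gzip.open("prodigal/my_assembly.prodigal.gff.gz", "rt") as fh:
    for line in fh:
        if not line.startswith("#"):  # skip GFF comment/header lines
            orfs_per_contig[line.split("\t")[0]] += 1

print(orfs_per_contig.most_common(5))  # contigs with the most called ORFs
```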

Prokka

As one alternative, you can use Prokka to identify ORFs in the assembly. In addition to calling ORFs (done internally with Prodigal), Prokka filters the ORFs to retain only high-quality ones and functionally annotates them. N.B. Prodigal and Prokka are recommended for prokaryotic samples.

Output files
  • prokka/
    • *.ffn.gz: ORFs in nucleotide fasta format
    • *.faa.gz: ORFs in amino acid fasta format
    • *.gff.gz: all features in genome feature file format
  • summary_tables/
    • <assembly_name>.<orfcaller_name>.prokka-annotations.tsv.gz: functional annotation of ORFs parsed from the .gff.gz file

TransDecoder

Another alternative is TransDecoder, which finds ORFs in the assembly. N.B. TransDecoder is recommended for eukaryotic samples.

Output files
  • transdecoder/
    • *.bed.gz: ORFs in bed format
    • *.cds.gz: ORFs in nucleotide fasta format
    • *.pep.gz: ORFs in amino acid fasta format
    • *.gff3.gz: ORFs in genome feature format

Quantification

BBMap

Reads are aligned back to the contigs with BBMap.

Output files
  • bbmap/bbmap/
    • *.bam: alignments in bam format if enabled with --save_bam.
    • *.bbmap.log: log files

FeatureCounts

CDS features are quantified with featureCounts from the Subread package.

Output files
  • featurecounts/
    • *.featureCounts.tsv: tab separated file with count data
    • *.featureCounts.tsv.summary: summary statistics
  • summary_tables/
    • <assembly_name>.<orfcaller_name>.counts.tsv.gz: reformatted count data
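
Assuming the *.featureCounts.tsv files keep featureCounts' standard layout (a leading '#' comment line, then Geneid/Chr/Start/End/Strand/Length followed by one count column per sample), they can be loaded like this (the file name is hypothetical):

```python
import pandas as pd

# featureCounts output: a '#' comment line, then Geneid, Chr, Start, End,
# Strand, Length and one count column per sample.
df = pd.read_csv("featurecounts/my_assembly.featureCounts.tsv",
                 sep="\t", comment="#")

counts = df.set_index("Geneid").iloc[:, 5:]  # drop the five annotation columns
print(counts.sum())                          # total assigned reads per sample
```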

Functional and taxonomical annotation

EggNOG

EggNOG-mapper will perform an analysis to assign functions to the ORFs.

Output files
  • eggnog/
    • <assembly_name>.<orfcaller_name>.emapper.annotations.gz: a file with the results from the annotation phase, see the EggNOG-mapper documentation.
    • <assembly_name>.<orfcaller_name>.emapper.hits.gz: a file with the results from the search phase, from HMMER, Diamond or MMseqs2.
    • <assembly_name>.<orfcaller_name>.emapper.seed_orthologs.gz: a file with the results from parsing the hits. Each row links a query with a seed ortholog. This file has the same format independently of which searcher was used, except that it can be in short format (4 fields), or full.
  • summary_tables/
    • <assembly_name>.<orfcaller_name>.emapper.tsv.gz: reformatted EggNOG-mapper output; the *.emapper.annotations.gz file, slightly tidied up
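
A sketch for loading the raw annotations into a data frame. It assumes emapper's usual layout ('##' comment lines plus a header line starting with '#query'), that your emapper version writes a KEGG_ko column, and a hypothetical file name:

```python
import gzip

import pandas as pd

# Strip emapper's '##' comment lines, keep the '#query ...' header line.
path = "eggnog/my_assembly.prodigal.emapper.annotations.gz"
with gzip.open(path, "rt") as fh:
    rows = [line.rstrip("\n").split("\t") for line in fh if not line.startswith("##")]

header = rows[0]
header[0] = header[0].lstrip("#")  # '#query' -> 'query'
ann = pd.DataFrame(rows[1:], columns=header)
print(ann[["query", "KEGG_ko"]].head())  # adjust if your emapper version differs
```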

KOfamScan

KOfamScan will perform an analysis to assign KEGG orthologs to ORFs.

Output files
  • kofamscan/
    • <assembly_name>.<orfcaller_name>.kofamscan_output.tsv.gz: kofamscan output.
  • summary_tables/
    • <assembly_name>.<orfcaller_name>.kofamscan.tsv.gz: reformatted kofamscan output
    • <assembly_name>.<orfcaller_name>.kofamscan-uniq.tsv.gz: reformatted kofamscan output subset to the best hit for each ORF
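
The -uniq table can be reproduced, or adapted, from the full table with a few lines of code. The column names used here (orf, ko, score) are illustrative, as is the file name; check the header of your own table:

```python
import pandas as pd

# Hypothetical file and column names; inspect your table's header first.
hits = pd.read_csv("summary_tables/my_assembly.prodigal.kofamscan.tsv.gz", sep="\t")

# Keep the single best-scoring KO per ORF, mirroring the *-uniq table.
best = hits.sort_values("score", ascending=False).drop_duplicates("orf")
print(best.head())
```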

EUKulele

EUKulele will perform an analysis to assign taxonomy to the ORFs. A number of databases are supported: MMETSP, PhyloDB and GTDB. GTDB currently only works as a user-provided database, i.e. the data must be downloaded before running nf-core/metatdenovo.

Output files
  • eukulele/<assembly_name>.<orfcaller_name>_<database>
    • <assembly_name>.<orfcaller_name>/mets_full/diamond/proteins.diamond.out.gz: Diamond output
    • <assembly_name>.<orfcaller_name>/taxonomy_counts/*.csv.gz: counts for different ranks, see EUKulele documentation
    • <assembly_name>.<orfcaller_name>/taxonomy_estimation/proteins-estimated-taxonomy.out.gz: EUKulele taxonomy assignment
  • summary_tables/
    • <assembly_name>.<orfcaller_name>.<database>.eukulele.tsv.gz: reformatted EUKulele output

Diamond taxonomy

Diamond is a fast protein sequence aligner that can also assign taxonomy based on a “Last Common Ancestor” (LCA) algorithm. At the time of writing, users of the pipeline need to craft their own databases, see the usage documentation.

Output files
  • diamond_taxonomy/
    • <assembly_name>.<orfcaller_name>.<database>.tsv.gz: Output directly from the Diamond aligner
    • <assembly_name>.<orfcaller_name>.<database>.lineage.tsv.gz: Output after taxonkit lineage added the full taxonomic lineage to the above
  • summary_tables/
    • <assembly_name>.<orfcaller_name>.<database>.diamond.taxonomy.tsv.gz: Cleaned up output, including addition of a header row and, if a list of ranks was given, the taxonomy parsed into individual taxa
    • <assembly_name>.<orfcaller_name>.<database>.taxonomy-taxdump.tsv.gz: Only if parse_with_taxdump was set in the input file; like the above, but with the taxonomy parsed using the taxonomy dump files
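
To work with individual ranks yourself, the semicolon-separated lineage added by taxonkit can be split apart. A sketch, with a hypothetical file name, assuming the lineage sits in the last column and the file has no header row:

```python
import pandas as pd

# pandas decompresses .gz transparently based on the file extension.
df = pd.read_csv("diamond_taxonomy/my_assembly.prodigal.mydb.lineage.tsv.gz",
                 sep="\t", header=None)

# Split "Bacteria;Proteobacteria;..." into one column per rank.
ranks = df.iloc[:, -1].str.split(";", expand=True)
print(ranks.head())
```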

Hmmsearch

You can run hmmsearch on ORFs using a set of HMM profiles provided to the pipeline (see the --hmmdir, --hmmpattern and --hmmfiles parameters).

Output files
  • hmmer/
    • *.tbl.gz: results from individual HMMER runs in tabular format.
    • hits/*.faa.gz: Sequences of the best ranked hits to the different HMMER profiles.
  • summary_tables/
    • <assembly_name>.<orfcaller_name>.hmmrank.tsv.gz: ranked hmmsearch hits

After the search, hits for each ORF and HMM will be summarised and ranked based on scores for the hits (see output in summary tables).
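
If you want to redo or tweak that ranking, the *.tbl.gz files follow HMMER's --tblout format: '#' comment lines, then whitespace-separated columns (target, accession, profile, accession, E-value, score, ...). A minimal sketch keeping the best-scoring profile per ORF (the file name is hypothetical):

```python
import gzip
from collections import defaultdict

best = defaultdict(lambda: ("", float("-inf")))  # ORF -> (profile, best score)
with gzip.open("hmmer/my_profile.tbl.gz", "rt") as fh:
    for line in fh:
        if line.startswith("#"):
            continue
        fields = line.split()
        orf, profile, score = fields[0], fields[2], float(fields[5])
        if score > best[orf][1]:
            best[orf] = (profile, score)

for orf, (profile, score) in list(best.items())[:5]:
    print(orf, profile, score)
```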

Metatdenovo output

Summary tables

Consistently named and formatted output tables in tsv format, ready for further analysis. Filenames start with the assembly program and the ORF caller, so that reruns of the pipeline with different parameter settings do not overwrite earlier output files.

Output files
  • summary_tables/
    • <assembly_name>.<orfcaller_name>.overall_stats.tsv.gz: overall statistics from the pipeline, e.g. number of reads, number of called ORFs, number of reads mapping back to contigs/ORFs etc.
    • <assembly_name>.<orfcaller_name>.counts.tsv.gz: read counts per ORF and sample.
    • <assembly_name>.<orfcaller_name>.emapper.tsv.gz: reformatted output from EggNOG-mapper.
    • <assembly_name>.<orfcaller_name>.kofamscan.tsv.gz: reformatted output from Kofamscan.
    • <assembly_name>.<orfcaller_name>.kofamscan-uniq.tsv.gz: reformatted output from Kofamscan with a single row per ORF in contrast to the above.
    • <assembly_name>.<orfcaller_name>.<database>.eukulele.tsv.gz: taxonomic annotation per ORF for a specific database.
    • <assembly_name>.<orfcaller_name>.prokka-annotations.tsv.gz: reformatted annotation output from Prokka.
    • <assembly_name>.<orfcaller_name>.<database>.diamond.taxonomy.tsv.gz: Diamond taxonomy parsed into individual taxa.
    • <assembly_name>.<orfcaller_name>.<database>.taxonomy-taxdump.tsv.gz: Diamond taxonomy parsed with the taxdump data; only present if parse_with_taxdump was set in the input file.
    • <assembly_name>.<orfcaller_name>.hmmrank.tsv.gz: ranked summary table from HMMER results.
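
Because the tables share an ORF identifier, they are straightforward to join for downstream analysis. A sketch with hypothetical file names, assuming the shared key column is called orf (check the headers of your own tables):

```python
import pandas as pd

counts = pd.read_csv("summary_tables/my_assembly.prodigal.counts.tsv.gz", sep="\t")
taxon = pd.read_csv("summary_tables/my_assembly.prodigal.gtdb.eukulele.tsv.gz", sep="\t")
func = pd.read_csv("summary_tables/my_assembly.prodigal.emapper.tsv.gz", sep="\t")

# One table with counts, taxonomy and function per ORF.
merged = (counts.merge(taxon, on="orf", how="left")
                .merge(func, on="orf", how="left"))
print(merged.head())
```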

Pipeline information

Output files
  • pipeline_info/
    • Reports generated by Nextflow: execution_report.html, execution_timeline.html, execution_trace.txt and pipeline_dag.dot/pipeline_dag.svg.
    • Reports generated by the pipeline: pipeline_report.html, pipeline_report.txt and software_versions.yml. The pipeline_report* files will only be present if the --email / --email_on_fail parameters are used when running the pipeline.
    • Reformatted samplesheet files used as input to the pipeline: samplesheet.valid.csv.
    • Parameters used by the pipeline run: params.json.