# nf-core/configs: BinAC 2 Configuration
All nf-core pipelines have been successfully configured for use on the BinAC 2 cluster at the University of Tübingen.

To use, run the pipeline with `-profile binac2`. This will download and launch the `binac2.config` which has been pre-configured with a setup suitable for the BinAC 2 cluster. Using this profile, a Docker image containing all of the required software will be downloaded and converted to an Apptainer image before execution of the pipeline.
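A typical launch command then looks like this (a sketch; `<pipeline>` and `<OUTDIR>` are placeholders for the nf-core pipeline and output directory you actually want to use):

```bash
# Run any nf-core pipeline with the BinAC 2 profile
# (<pipeline> and <OUTDIR> are placeholders)
nextflow run nf-core/<pipeline> -profile binac2 --outdir <OUTDIR>
```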
Below is non-mandatory information, e.g. on which modules to load before running a pipeline.
Before running the pipeline you will need to install Nextflow on BinAC 2. You can do this by issuing the commands below:
```bash
## Load Miniforge and create a Conda environment with Nextflow installed
module purge
module load devel/miniforge
conda create --name nextflow nextflow
conda activate nextflow
```
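Once the environment is activated, you can optionally confirm that Nextflow is on your `PATH`:

```bash
# Print the Nextflow version to verify the installation
nextflow -version
```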
Apptainer is installed on every login and compute node.
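Since Apptainer is available system-wide, you should be able to call it directly; for example, to check which version is installed:

```bash
# Apptainer is provided on every login and compute node; no module load needed
apptainer version
```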
You can specify which compute project to use for resource accounting with the `--project` parameter. If omitted, the default compute project will be used.

```bash
nextflow run nf-core/rnaseq -profile binac2 --project bw16f003
```
Below is non-mandatory information on iGenomes-specific configuration.

A local copy of the iGenomes resource has been made available on BinAC 2, so you should be able to run the pipeline against any reference available in the `igenomes.config` specific to the nf-core pipeline. You can do this by simply using the `--genome <GENOME_ID>` parameter.
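For example (a sketch: `GRCh38` stands in for whichever genome ID the pipeline's `igenomes.config` provides, and `samplesheet.csv` is a placeholder for your input sheet):

```bash
# Run against the local iGenomes copy on BinAC 2 via --genome
nextflow run nf-core/rnaseq -profile binac2 --project bw16f003 \
    --input samplesheet.csv --genome GRCh38 --outdir results
```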
You will need an account on BinAC 2 in order to run the pipeline. If in doubt, contact IT.

Nextflow will need to submit the jobs via the SLURM job scheduler to the HPC cluster, and as such the commands above will have to be executed on one of the login nodes. If in doubt, contact IT.
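Because the Nextflow head job has to stay alive for the duration of the run, it can be convenient to launch it in the background on a login node and keep its log in a file. A minimal sketch using Nextflow's `-bg` option (pipeline and project are just the examples from above):

```bash
# Launch the Nextflow head job in the background on a login node;
# the actual work is submitted to SLURM by Nextflow via the binac2 profile.
nextflow -log nextflow.log run nf-core/rnaseq -bg \
    -profile binac2 --project bw16f003
```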
## Config file
```nextflow
profiles {
    binac2 {
        params {
            config_profile_name          = 'binac2'
            config_profile_description   = 'BinAC 2 cluster profile provided by nf-core/configs.'
            config_profile_contact       = 'Felix Bartusch (@fbartusch)'
            config_profile_url           = 'https://wiki.bwhpc.de/e/BinAC2'
            igenomes_base                = '/pfs/10/project/db/igenomes'
            project                      = null // project for SLURM accounting
            max_memory                   = 2000.GB
            max_cpus                     = 128
            max_time                     = 336.h
            schema_ignore_params         = "max_memory,max_cpus,max_time,project"
            validationSchemaIgnoreParams = "max_memory,max_cpus,max_time,project,schema_ignore_params"
        }
        validation {
            ignoreParams = ["schema_ignore_params", "validationSchemaIgnoreParams", "project", "max_memory", "max_cpus", "max_time"]
        }
        apptainer {
            enabled      = true
            autoMounts   = true
            pullTimeout  = '120m'
            cacheDir     = "/pfs/10/project/apptainer_cache/${USER}"
            envWhitelist = 'CUDA_VISIBLE_DEVICES'
        }
        process {
            resourceLimits = [
                memory: 2000.GB,
                cpus: 128,
                time: 336.h
            ]
            executor       = 'slurm'
            clusterOptions = { params.project ? "-A ${params.project}" : "" }
            queue          = { task.accelerator ? 'gpu' : 'compute' }
            maxRetries     = 2
            scratch        = '$TMPDIR'
        }
        executor {
            queueSize         = 200
            submitRateLimit   = '50/2min'
            pollInterval      = '10sec'
            queueStatInterval = '1min'
        }
    }
}
```
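If you need to adjust individual settings, for example to point the Apptainer image cache at a directory of your own, you can layer a small personal config on top of this profile with Nextflow's `-c` option. A minimal sketch (the cache path is a placeholder, not an existing BinAC 2 directory):

```bash
# Hypothetical personal override, layered on top of the binac2 profile via -c
cat > my_binac2.config <<'EOF'
// Redirect the Apptainer image cache (placeholder path, choose your own)
apptainer.cacheDir = '/path/to/my_apptainer_cache'
EOF

nextflow run nf-core/rnaseq -profile binac2 --project bw16f003 -c my_binac2.config
```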