McCleary Configuration
All nf-core pipelines have been successfully configured for use on the Yale University McCleary cluster. To use, run the pipeline with `-profile mccleary`.
NB: You will need an account on the McCleary HPC cluster in order to run the pipeline. If in doubt, contact IT. To use nf-core pipelines on McCleary:
- Install Nextflow for your user. Move the Nextflow executable to a folder in your `$PATH` variable (e.g. `~/bin`); a sketch of this step is shown after the commands below.

```bash
module load Java/17.0.4
curl -s https://get.nextflow.io | bash
```
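
For completeness, a minimal sketch of moving the freshly downloaded executable onto your `$PATH` (assuming `~/bin` as the target folder and a bash shell; adjust for your setup):

```bash
# Assumes the curl command above left a `nextflow` executable in the current directory
mkdir -p ~/bin
mv nextflow ~/bin/

# Add ~/bin to PATH if it is not already there (bash example)
echo 'export PATH="$HOME/bin:$PATH"' >> ~/.bashrc
source ~/.bashrc

# Verify the installation
nextflow -version
```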
- Submit your pipeline script via `sbatch script.sh`, using a script like the one below. Update `--job-name`, `--time`, and `--partition` as needed for your head job. 2 CPUs and 5 GB of memory are usually sufficient for the Nextflow head job, but these can also be updated as needed. A short sketch for submitting and monitoring the job follows the script.
```bash
#!/bin/bash
#SBATCH --job-name=nf-core
#SBATCH --out="slurm-%j.out"
#SBATCH --time=07-00:00:00
#SBATCH --cpus-per-task=2
#SBATCH --mem=5G
#SBATCH --mail-type=ALL
#SBATCH --partition=week

module load Java/17.0.4
export NXF_WRAPPER_STAGE_FILE_THRESHOLD='40000'

nextflow pull nf-core/<pipeline> -r <release>
nextflow run nf-core/<pipeline> -r <release> \
    -profile mccleary \
    --outdir "results" \
    ...
```
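
As a usage sketch (standard SLURM commands, not specific to this profile), the head job can then be submitted and monitored like this:

```bash
# Submit the head job script above; sbatch prints the job ID
sbatch script.sh

# Check the status of your jobs (head job plus the tasks Nextflow submits)
squeue -u $USER

# Follow the head job log; the file name comes from the --out directive above
# (<jobid> is the ID printed by sbatch)
tail -f slurm-<jobid>.out
```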
Pipeline-specific profiles
No pipeline-specific profiles have been added for this cluster at the moment.
Config file
```nextflow
// Profile config names for nf-core/configs
params {
    config_profile_description = 'McCleary Cluster at Yale'
    config_profile_contact     = 'Gisela Gabernet'
    config_profile_email       = 'gisela.gabernet@yale.edu'
    config_profile_github      = '@ggabernet'
    config_profile_url         = 'https://docs.ycrc.yale.edu/clusters/mccleary/'
}

singularity {
    enabled = true
}

env {
    NXF_WRAPPER_STAGE_FILE_THRESHOLD = '40000'
}

executor {
    name            = 'slurm'
    queueSize       = 50
    submitRateLimit = '190/60min'
}

process {
    resourceLimits = [
        memory: 983.GB,
        cpus: 64
    ]
    queue    = { task.time > 24.h ? 'week' : 'day' }
    scratch  = 'true'
    executor = 'slurm'
}

params {
    max_memory = 983.GB
    max_cpus   = 64
}
```
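
As a usage note (not part of the shipped profile), any of the settings above can be overridden for a single run by passing an additional config file with `-c`, which takes precedence over the institutional profile. For example, a hypothetical `custom.config` that bypasses the dynamic day/week queue selection:

```nextflow
// custom.config (hypothetical) -- used as:
//   nextflow run nf-core/<pipeline> -r <release> -profile mccleary -c custom.config ...
process {
    // Always submit tasks to the 'week' partition instead of routing by task time
    queue = 'week'
}
```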