
Changelog

All notable changes to Orfeo infrastructure will be documented here


2025-04-24

Changed

  • Moved all compute nodes without accelerators from F40 to F41; the following core packages were affected:

| module       | previous version | new version |
|--------------|------------------|-------------|
| Python       | 3.12             | 3.13        |
| Linux kernel | <=6.12           | 6.13        |
| glibc        | 2.39             | 2.40        |
  • Cluster-wide Slurm update from 24.05 to 24.11. The number of feature changes is considerable; please consult the official news file for more details. For more on the server deployment, visit this link.

  • Modules update, among the most relevant:

| module      | old version          | new version |
|-------------|----------------------|-------------|
| openmpi     | 4.1.6                | 5.0.5       |
| openblas    | 0.3.26               | 0.3.29      |
| cuda        | 12.6                 | 12.8        |
| cutadapt    | 4.2                  | 5.0         |
| R           | 4.3.3                | 4.5.0       |
| hwloc       | 2.10.0               | 2.12.0      |
| picard      | 3.2.0                | 3.4.0       |
| foldseek    | 8-ef4e960, 9-427df8a | 10-941cd33  |
| singularity | 3.11.5               | 4.3.1       |
| trim_galore |                      | 0.6.10      |
| bcftools    | 1.17                 | 1.21        |
| bcl2fastq2  |                      | 2.20.0      |
| bedtools2   |                      | 2.31.1      |
| bwa-mem2    |                      | 2.2.1       |
| igv         | 2.16.2               | 2.18.0      |
| samtools    | 1.17                 | 1.21        |
| sambamba    | 1.0                  | 1.0.1       |
| fastp       | 0.23.4               | 0.24.1      |
| fastqc      |                      | 0.12.1      |
| gromacs     | 2022.6               | 2025.1      |
| guppy-cpu   | 6.2.1                | 6.5.7       |
| guppy-gpu   | 6.2.1                | 6.5.7       |
| plink       |                      | 1.90        |
| star        | 2.7.9a               | 2.7.11b     |

If you are interested in a particular module and how it was compiled, please visit the following link. We will open-source the repositories as they reach a stable state.

  • GPU nodes are equipped with the following CUDA drivers:

| Card | CUDA driver          |
|------|----------------------|
| H100 | 12.7                 |
| A100 | 12.2 (as per DGX OS) |
| V100 | >= 12.3              |

We expect the CUDA runtime 12.2 to work everywhere on the cluster, but compatibility with other runtimes may vary.
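
If you want to check what a given node's driver actually supports before picking a runtime, you can query the CUDA driver API directly. A minimal sketch in Python, assuming a standard Linux driver installation that exposes libcuda.so.1:

```python
import ctypes

# Query the CUDA driver version through the driver API (libcuda).
# cuDriverGetVersion encodes the version as 1000*major + 10*minor,
# e.g. 12020 for CUDA 12.2; it can be called without cuInit.
libcuda = ctypes.CDLL("libcuda.so.1")  # assumed standard library name
version = ctypes.c_int()
if libcuda.cuDriverGetVersion(ctypes.byref(version)) != 0:
    raise RuntimeError("cuDriverGetVersion failed")
major = version.value // 1000
minor = (version.value % 1000) // 10
print(f"this node's driver supports CUDA runtimes up to {major}.{minor}")
```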

Tested

  • Essential Slurm/MPI features and performance; results are in line with previous measurements (a smoke-test sketch follows below)
  • I/O performance; results are in line with previous measurements
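
If you want to run your own sanity check after the OpenMPI 4.1.6 to 5.0.5 transition, here is a minimal smoke test, assuming the mpi4py package is installed in your environment:

```python
from mpi4py import MPI

# Each rank reports its identity and host; a clean run across several
# tasks confirms the Slurm/OpenMPI stack is wired up correctly.
comm = MPI.COMM_WORLD
print(f"rank {comm.Get_rank()} of {comm.Get_size()} "
      f"on {MPI.Get_processor_name()}")
```

Launch it with, for example, srun -n 4 python mpi_check.py (the script name is a placeholder).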

User Impact

  • Code you compiled against older library versions will need to be recompiled (this may include some R modules)
  • Python virtual environments that rely on the OS Python version will need to be recreated (see the sketch below)
  • Some older codes may stop working because of deprecated dependencies; please use newer versions whenever possible, or transition to a containerized approach
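
A minimal sketch of the virtual-environment rebuild, run with the new OS Python; the environment path and requirements file are hypothetical placeholders to adapt to your project:

```python
import subprocess
import sys
import venv

# Report which interpreter the fresh environment will be built on
# (3.13 after the F41 migration).
print(f"rebuilding on Python {sys.version_info.major}.{sys.version_info.minor}")

env_dir = "/path/to/myproject-env"  # hypothetical location
venv.create(env_dir, clear=True, with_pip=True)  # wipe and recreate

# Reinstall pinned dependencies so compiled wheels match the new Python.
subprocess.run([f"{env_dir}/bin/pip", "install", "-r", "requirements.txt"],
               check=True)
```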

2025-04-17

Changed

  • Service Migration:
    • HOME storage migrated from HDD to SSD
    • FAST storage migrated from SSD to NVMe

User Impact

  • Reduced storage latency and increased throughput

2025-04-15

Added

  • Ceph Metadata Server (MDS) Update:
    • Dedicated node for Ceph Metadata Server (MDS)
    • Increased RAM allocation for MDS to cache more metadata

User Impact

  • Faster metadata operations
  • More reliable file system access during heavy usage
  • Resolved bug where commands like find or ls would hang in certain directories

2025-01-20

Added

  • DGX H100 Production Deployment:
    • 8 NVIDIA H100 GPUs per node (Hopper architecture)
    • Internal NVLink and InfiniBand connectivity to the cluster
    • Up to 32 PFLOPS FP8 performance per node

User Impact

  • Enhanced throughput for large-scale model workloads

2025-01-10

Added

  • “Genoa” Servers Production Deployment:
    • AMD EPYC 9654 “Genoa” CPUs with 96 cores each
    • 768 GB of high-speed DDR5 RAM per node