Research Support Handbook
HPC Updates
This page tracks user-facing ADA platform updates. For stable usage instructions, see Quick Start, Slurm, and Open OnDemand.
March 27, 2026
- Slurm memory: memory is now scheduled explicitly. Request `--mem` or `--mem-per-cpu` in your jobs and interactive sessions. If your sbatch script does not request memory, you are allocated a default amount of memory per CPU.
- Partitions: for most ADA users, a few partitions will be added, following the format `<name>-<type>`. For example, there is a `defq` partition covering all nodes within the current partition, a `defq-fat` partition for high-memory nodes, and a `defq-gpu` partition for GPU nodes. Using the specific high-memory and GPU partitions is encouraged through priority allocation; this discourages, for example, users with CPU-only jobs from occupying a GPU node. Check the current layout with `/ada-software/ada-info.sh`.
- Open OnDemand: Open OnDemand is now officially supported. Open OnDemand jobs also have specific limits on jobs, CPU, memory, and walltime through the `ood` QOS. See Open OnDemand.
- Temporary working directory: `$TMPDIR` now points to `/tmp`, local disk space on the node. Use it for temporary data and I/O-heavy computation instead of `/home`, which is NFS-mounted and slower. `$TMPDIR` is private per user and is cleaned up automatically after jobs finish.
- First login: the bug where new users did not get the password change prompt has been fixed.
- Software, kernel, and CUDA updates: RHEL kernel 5.14.0-611.30.1.el9_7.x86_64 with CUDA 13.1 (driver 590.48.01); general software updates.
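Taken together, a minimal sbatch sketch incorporating these changes might look like the following. The job name, input file, and program are hypothetical placeholders; adjust the partition and resource values to your workload:

```shell
#!/bin/bash
#SBATCH --job-name=example           # hypothetical job name
#SBATCH --partition=defq             # general partition in the new layout
#SBATCH --cpus-per-task=4
#SBATCH --mem-per-cpu=2G             # explicit memory request (or --mem for a per-node total)
#SBATCH --time=01:00:00

# Run in fast node-local scratch instead of NFS-mounted /home
cd "$TMPDIR"
cp "$SLURM_SUBMIT_DIR"/input.dat .   # hypothetical input file
./my_program input.dat > output.dat  # hypothetical program

# $TMPDIR is cleaned up after the job, so copy results back first
cp output.dat "$SLURM_SUBMIT_DIR"/
```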
Before Submitting Your Next Job Script
- Add an explicit memory request (`--mem` or `--mem-per-cpu`) to older job scripts.
- Check that your scripts still use the right partition.
- Use `$TMPDIR` for temporary files and active computation, then copy important output back to persistent storage.
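The explicit memory request also applies to interactive sessions. As a sketch, an interactive shell on the general partition could be requested like this (partition name and resource values are illustrative):

```shell
# Request an interactive shell with explicit CPU, memory, and walltime limits
srun --partition=defq --cpus-per-task=2 --mem=4G --time=01:00:00 --pty bash
```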