pub:foundry [2021/02/24 15:38] blspcy [Ansys]
pub:foundry [2023/08/22 15:15] (current) lantzer
Dell C6525: 4-node chassis with each node containing dual 32-core AMD EPYC Rome 7452 CPUs with 256 GB DDR4 RAM and six 480 GB SSD drives in RAID 0.

As of 06/17/21 we have over 11,000 cores of compute capacity on the Foundry.
===GPU nodes===
The newly added GPU nodes are Dell C4140s configured as follows.

Dell C4140: 1-node chassis with 4 Nvidia V100 GPUs connected via NVLink, interconnected with other nodes via HDR-100 InfiniBand. Each node has dual 20-core Intel processors and 192 GB of DDR4 RAM.

As of 06/17/21 we have 24 V100 GPUs available for use.
===Storage===
Missouri S&T users can mount their web volumes and S Drives with the <

You can unmount your user directories with the <
=== Windows ===
===== Applications =====

** The applications portion of this wiki is a work in progress. Not all applications are documented here, nor will they ever be, as the set of applications we support continually grows. **

==== Abaqus ====

  * Default Version = 2022
  * Other versions available: 2020

=== Using Abaqus ===

Abaqus should not be run on the login node at all.
\\
Be sure you are connected to the Foundry with X forwarding enabled, and running inside an interactive job started with the command
  sinteractive
before you attempt to run Abaqus. Running sinteractive without any switches will give you 1 CPU for 10 minutes; if you need more time or resources, you may request them. See [[pub:
\\
Once inside an interactive job, you need to load the Abaqus module.
  module load abaqus
Now you may run Abaqus.
  ABQLauncher cae -mesa
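If you would rather not sit in an interactive session, the same steps can be wrapped in a batch script. This is only a sketch: the job name, core count, and input file my_model.inp are hypothetical, and the abaqus command-line form (job=, input=, cpus=) is the standard Abaqus solver invocation rather than anything Foundry-specific.

<file bash abaqus.sub>
#!/bin/bash
#SBATCH -J Abaqus_job
#SBATCH --ntasks=4
#SBATCH --time=4:00:00

module load abaqus

# my_model.inp is a hypothetical input file; "interactive" keeps the solver in the foreground
abaqus job=my_job input=my_model.inp cpus=4 interactive
</file>

Submit it with sbatch abaqus.sub from the directory containing your .inp file.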
====Anaconda====
If you would like to install python modules via conda, you may load the anaconda module to get access to conda for this purpose. After loading the module you will need to initialize conda to work with your shell.
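Put together, the sequence just described might look like the following; the environment name myenv and the numpy package are only examples.

<code>
module load anaconda
conda init bash               # one-time: writes conda start-up code into your ~/.bashrc
source ~/.bashrc              # pick up the changes in the current shell
conda create -n myenv numpy   # hypothetical environment with an example package
conda activate myenv
</code>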
Once inside the interactive job you will need to load the ansys module. <

==== Comsol ====

Comsol Multiphysics is available for general usage through a comsol module. Below is an example batch submission script.

<file bash comsol.sub>
#!/bin/bash
#SBATCH -J Comsol_job
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=64
#SBATCH --mem=0
#SBATCH --time=1-00:00:00
#SBATCH --export=ALL

module load comsol
ulimit -s unlimited
ulimit -c unlimited

comsol batch -mpibootstrap slurm -inputfile input.mph -outputfile out.mph
</file>
==== Cuda ====
Matlab is available to run in batch form or interactively on the cluster.

  * Default version = 2021a
  * Other installed version(s): 2019b, 2020a, 2020b

=== Interactive Matlab ===

To get started, load the matlab module and launch Matlab from inside an interactive job:
<code>
module load matlab
matlab
</code>

Please note that by default
=== Batch Submit ===
Another thing to note is the flexibility of Singularity: it can run containers from its own library, docker, dockerhub, singularityhub,
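As one illustration of that flexibility, an image can be pulled and run straight from Docker Hub; the hello-world image below is just an example, not something provided on the Foundry.

<code>
singularity run docker://hello-world
</code>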

====StarCCM+====

Engineering Simulation Software\\

Default version

Other working versions:
  * 2020.1
  * 12.02.010

Job Submission Information

Copy your .sim file from the workstation to your cluster home directory.\\
Once copied, create your job file.

Example job file:
<file bash starccm.sub>
#!/bin/bash
#SBATCH --job-name=starccm_test
#SBATCH --nodes=1
#SBATCH --ntasks=12
#SBATCH --mem=40000
#SBATCH --partition=requeue
#SBATCH --time=12:00:00
#SBATCH --mail-type=BEGIN
#SBATCH --mail-type=FAIL
#SBATCH --mail-type=END
#SBATCH --mail-user=username@mst.edu

module load starccm

time starccm+ -batch -np 12 /
</file>

** It is preferred that you keep the ntasks and -np set to the same processor count. **\\

Breakdown of the script:\\
This job will use **1** node, asking for **12** processors and **40,000 MB** of memory, for a total wall time of **12 hours**, and will email you when the job starts, finishes, or fails.

The StarCCM commands:\\

|-batch| tells Star to run in batch mode, without the GUI|
|-np| number of processors to allocate|
|/ |

====TensorFlow with GPU support====

https://

We have been able to get TensorFlow to work with GPU support if we install it within an anaconda environment. Other methods do not seem to work as smoothly (if they even work at all).

First use [[#
  conda install tensorflow-gpu
At this point you should be able to activate that anaconda environment and run TensorFlow with GPU support.
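A quick way to confirm that the environment actually sees a GPU (run this inside the activated environment, in a job on a GPU node; the check itself is our suggestion, not part of the original instructions):

<code>
python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
</code>

An empty list means TensorFlow found no GPU, which usually indicates the job was not allocated one.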

Job Submission Information

Copy your python script to the cluster. Once copied, create your job file.

Example job file:
<file bash tensorflow-gpu.sub>
#!/bin/bash
#SBATCH --job-name=tensorflow_gpu_test
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --partition=cuda
#SBATCH --time=01:00:00
#SBATCH --gres=gpu:1
#SBATCH --mail-type=BEGIN
#SBATCH --mail-type=FAIL
#SBATCH --mail-type=END
#SBATCH --mail-user=username@mst.edu

module load anaconda
conda activate tensorflow
python tensorflow_script_name.py
</file>

==== Thermo-Calc ====

  * Default Version = 2021a
  * Other versions available: none yet

=== Accessing Thermo-Calc ===

Thermo-Calc is restricted software. If you need access, please email nic-cluster-admins@mst.edu for more info.

=== Using Thermo-Calc ===

Thermo-Calc will not operate on the login node at all.
\\
Be sure you are connected to the Foundry with X forwarding enabled, and running inside an interactive job started with the command
  sinteractive
before you attempt to run Thermo-Calc. Running sinteractive without any switches will give you 1 CPU for 10 minutes; if you need more time or resources, you may request them. See [[pub:
\\
Once inside an interactive job, you need to load the Thermo-Calc module.
  module load thermo-calc
Now you may run Thermo-Calc.
  Thermo-Calc.sh

====Vasp====

To use our site installation of Vasp you must first prove that you have a license to use it by emailing your vasp license confirmation to <

Once you have been granted access to using vasp, you may load the vasp module <

and create a vasp job file, in the directory containing your input files, that will look similar to the one below.
<file bash vasp.sub>
#!/bin/bash

#SBATCH -J Vasp
#SBATCH -o Foundry-%j.out
#SBATCH --time=1:00:00
#SBATCH --ntasks=8

module load vasp
module load libfabric

srun vasp
</file>

This example will run the standard vasp build on 8 CPUs for 1 hour. \\

If you need the gamma-point-only version of vasp, use <

If you need the non-collinear version of vasp, use <

It might work to launch vasp with "

There are some globally available pseudopotentials; the module sets the environment variable $POTENDIR to the global directory.
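As a sketch of how $POTENDIR can be used: VASP expects a single POTCAR built by concatenating one pseudopotential per species, in the same order as the species appear in your POSCAR. The per-element directory layout below is an assumption, not documented behavior; list the directory first to see how it is actually organized.

<code>
ls $POTENDIR    # see what is actually provided

# hypothetical layout: one directory per element, each containing a POTCAR
cat $POTENDIR/Si/POTCAR $POTENDIR/O/POTCAR > POTCAR
</code>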