MATLAB

What versions of MATLAB are available?

Run one of the commands below, which will list available versions and the corresponding module files:

On Grace:

grace$ module avail matlab

On Omega:

omega$ modulefind matlab

How do I select a version?

Load the appropriate module file. For example, to run version R2014a:

$ module load Apps/Matlab/R2014a
$ matlab

The module load command sets up your environment, including the PATH to find the proper version of the MATLAB program.

How do I run MATLAB?

Be sure to run MATLAB on a compute node, not a login node. You can use up to all the cores on a single node without any special Matlab setup (though your qsub or sbatch command will need to specify the number of cores you intend to use).
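
For example, once Matlab is running you can do a quick sanity check of how many cores it sees; feature('numcores') is the same undocumented-but-common call used further down this page:

% Quick check inside Matlab: number of physical cores detected and the
% current limit on computational threads.
feature('numcores')
maxNumCompThreads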

To run MATLAB interactively:

First, create an interactive session on a compute node.

On Omega, you could start an interactive session using 8 cores on 1 node using something like

omega$ qsub -I -X -l nodes=1:ppn=8,mem=34gb,walltime=24:00:00 -q fas_normal

On Grace, you could start an interactive session using 4 cores on 1 node using something like

grace$ srun --pty --x11 -c 4 -t 4:00:00 /bin/bash

Once your interactive session starts, you can load the appropriate module file and start Matlab as described above.

To run MATLAB in batch mode (without a GUI):

Create a batch script containing both instructions to the scheduler (either Torque on Omega, or Slurm on Grace) and shell instructions to set up directories and start Matlab. At the point you wish to start Matlab, use a command like:

$ matlab -nodisplay -nosplash -r YourFunction < /dev/null > MatlabRun.out 2> MatlabRun.err

This command will run the contents of YourFunction.m, sending the console output to MatlabRun.out and error messages to MatlabRun.err.

Note: Your batch submission script must cd to the directory containing YourFunction.m for this to work, and with the command shown above, the output and error files will be written to that same directory. If you prefer that these files go somewhere else, change the redirections (“>” and “2>”) to use full paths, for example:

> /my/output/directory/MatlabRun.out
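
For reference, YourFunction.m is just an ordinary Matlab function; a minimal sketch (the name and computation here are hypothetical) might look like:

function YourFunction()
% Hypothetical example of a function run via "matlab -r YourFunction".
% Wrapping the work in try/catch prints a readable error report instead
% of leaving the job hanging if something goes wrong.
try
    A = rand(1000);                              % placeholder computation
    fprintf('largest singular value: %g\n', max(svd(A)));
catch err
    disp(getReport(err));
end
end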

Below is a sample batch script to run Matlab in batch mode on Omega. If the name of the script is runit.sh, you would submit it using

omega$ qsub runit.sh

Here's the script for Omega:

#!/bin/bash
#PBS -N myjob
#PBS -l nodes=1:ppn=8,walltime=24:00:00,mem=34gb
#PBS -q fas_normal

# Move to the directory from which the job was submitted
cd $PBS_O_WORKDIR

module load Apps/Matlab/R2014a
matlab -nodisplay -nosplash -r YourFunction < /dev/null > MatlabRun.out 2> MatlabRun.err

Below is a sample batch script to run Matlab in batch mode on Grace. If the name of the script is runit.sh, you would submit it using

grace$ sbatch runit.sh

Here's the script for Grace:

#!/bin/bash
#SBATCH -J myjob
# Request 5 tasks (cores) on a single node in the "day" partition for 24 hours
#SBATCH -n 5
#SBATCH -t 24:00:00
#SBATCH -N 1
#SBATCH -p day

module load Apps/Matlab/R2014a
matlab -nodisplay -nosplash -r YourFunction < /dev/null > MatlabRun.out 2> MatlabRun.err

Using More than 12 Cores with Matlab

In Matlab, the default parallel configuration limits you to 12 workers, a poorly documented holdover from earlier releases. You can override the limit by explicitly opening a parpool of the desired size before calling parfor or other parallel functions:

parpool(feature('NumCores'));
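
As an example, a complete parfor block that uses the full node might look like the following sketch (the loop body is just a placeholder):

% Open a pool sized to the physical cores on the node (overriding the
% 12-worker default), run a parfor loop on it, then shut the pool down.
parpool(feature('NumCores'));
results = zeros(1, 100);
parfor i = 1:100
    results(i) = sum(rand(1e6, 1));   % placeholder computation
end
delete(gcp('nocreate'));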

What is MDCE?

The Matlab Distributed Computing Engine (MDCE) allows users to run parallel Matlab computations over multiple cluster compute nodes. To run parallel Matlab computations on any number of cores of a single compute node, please use ordinary Matlab (not MDCE) as described above to avoid tying up our limited number of licenses for MDCE.

MDCE is installed on all the HPC clusters, and we provide scripts to make it easy to use. Currently, our license for MDCE is restricted to a total of 32 concurrent labs per cluster (aggregated over all jobs using MDCE on the cluster), plus an additional 128 licenses that float and are available on any of the clusters when the cluster-specific licenses are already in use.

How do I get started using MDCE?

The first step required for use of MDCE is to develop a parallel Matlab program using the Parallel Computing Toolbox (PCT). The PCT allows you to run parallel computations either on a single node or across multiple nodes. In most cases, before running on multiple nodes, you should develop and test your algorithm on a single node using multiple cores.

For the single-node case, you simply run ordinary Matlab as described above (either interactively or in batch mode) and make use of the PCT commands using the "local" cluster configuration. This capability is enabled for any Matlab invocation on any of the clusters, and there are no limitations on the number of concurrent PCT users in this case. If you intend to run on a single node, therefore, please do not use MDCE, since that would consume some of our limited quantity of multi-node MDCE licenses.
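
As an illustration, a single-node PCT run using the "local" configuration might look like the following sketch; the worker count of 8 is just an example and should match the number of cores you requested from the scheduler:

% Build a pool from the built-in "local" profile, sized to match the
% cores requested in your qsub/sbatch command (8 in this example).
c = parcluster('local');
ppool = parpool(c, 8);
nw = ppool.NumWorkers;
parfor i = 1:nw
    fprintf('running iteration %d of %d\n', i, nw);
end
delete(ppool);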

For the multi-node case, you must run an MDCE server that is private to your job. We provide a script (yale_mdce_start.sh) that starts the server and the Matlab workers (known as "labs") for you. The yale_mdce_start.sh script has parameters that allow you to control the number of labs on each node, subject to license availability. (For details, see the comments in the file runit.sh shown later on this page.) The MDCE server will be terminated automatically when your cluster job ends, though we also provide a script (yale_mdce_stop.sh) to terminate it earlier if you wish. The yale_mdce scripts will be in your PATH once you have loaded a Matlab module file (e.g., Apps/Matlab/R2015a). To use the yale_mdce_start.sh script, you need to load module files for both Matlab and OpenMPI (see the runit.sh script below for an example).

We have also developed a template batch script (runit.sh) that you can submit to the job scheduler (either Torque or Slurm) to run your parallel Matlab program. The script loads module files, invokes yale_mdce_start.sh, and then runs Matlab in batch mode. You can copy the template and customize it to meet your needs. If you prefer to run interactively, you can start an ordinary multi-node interactive session (similar to what's shown above) and run the setup commands in runit.sh by hand.

runit.sh (for Slurm)

#!/bin/bash

#SBATCH -J MDCE_JOB
#SBATCH --ntasks=25
#SBATCH --time=24:00:00
#SBATCH --partition=day

# Load Matlab and MPI module files
module load Apps/Matlab/R2015a MPI/OpenMPI

# Invoke yale_mdce_start.sh to set up a job manager and MDCE server
# Note: yale_mdce_start.sh and runscript.m are in the MDCE_SCRIPTS subdirectory of the root Matlab directory (e.g., /home/apps/fas/Apps/Matlab on Omega). 
#       The MDCE_SCRIPTS directory is added to your PATH by the Matlab module file loaded above.

# Options for yale_mdce_start.sh:

# -jmworkers: number of labs to run on the same node as the job manager.
# "-jmworkers NN" runs NN labs on the job manager node
# "-jmworkers -1" runs one fewer lab than the number of cores allocated on that node
# Default: -1

# -nodeworkers: number of labs to run on nodes other than the job manager node.
# "-nodeworkers NN" runs NN labs on each node
# "-nodeworkers -1" runs the same number of labs as the number of cores allocated on each node
# Default: -1

yale_mdce_start.sh -jmworkers -1 -nodeworkers -1

export MDCE_JM_NAME=`cat MDCE_JM_NAME`

# invoke either runscript.m or your own M-file
# (You need to modify runscript.m first to run your computations!!)

# runscript.m uses the parallel cluster created by yale_mdce_start.sh. 
#
matlab -nodisplay < runscript.m

runit.sh (for Omega)

#!/bin/bash
#PBS -l nodes=4:ppn=8,mem=170gb,walltime=24:00:00
#PBS -N MDCE_JOB
#PBS -r n
#PBS -j oe
#PBS -q fas_normal

cd $PBS_O_WORKDIR

# Load Matlab and MPI module files
module load Apps/Matlab/R2015a MPI/OpenMPI
module list

yale_mdce_start.sh -jmworkers -1 -nodeworkers -1

export MDCE_JM_NAME=`cat MDCE_JM_NAME`

# invoke either runscript.m or your own M-file
matlab -nodisplay < runscript.m

runscript.m

clear

% CD TO PROPER DIRECTORY HERE, IF NECESSARY

% FOLLOWING ASSUMES USE OF STANDARD YALE MDCE STARTUP SCRIPT
p=parallel.cluster.MJS('Name',getenv('MDCE_JM_NAME'))
nw = p.NumIdleWorkers
ppool=p.parpool(nw)
ppool.NumWorkers

% INVOKE YOUR OWN SCRIPT HERE

ppool.delete
exit
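
As an illustration, the "INVOKE YOUR OWN SCRIPT HERE" section of runscript.m could contain any PCT code that uses the pool just created; for example, a sketch like this (the computation is a placeholder):

% Example body for the "INVOKE YOUR OWN SCRIPT HERE" section: a parfor
% loop distributed across all of the MDCE labs in the pool (nw is the
% worker count computed earlier in runscript.m).
results = zeros(1, nw);
parfor i = 1:nw
    results(i) = sum(rand(1e6, 1));   % replace with your own computation
end
disp(results)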