6. Running ParaView in Client/Server Mode via Slurm

In this example, we run the ParaView server on a compute node with 4 cores and basic memory. You may need to adjust the salloc command flags as necessary for your job.

For example: salloc -N 1 -n 4 --mem=64GB --partition=[partition] --time=2:00:00

  1. Open 2 terminals on your local computer. Leave both of these terminals running for the duration of your work with ParaView!

  2. In Terminal 1 (an SSH session on a head node), start the ParaView server by running the following commands in order:

    • salloc -N 1 -n 4 --constraint=centos7 --partition=sched_mit_hill --time=2:00:00
    • module add paraview/5.10.1_headless_server
    • hostname (note which node you are running on; you will need it in the next step)
    • mpiexec -np 4 pvserver --mpi --force-offscreen-rendering
      We run pvserver as a 4-process MPI job to take advantage of parallel processing, which is why the salloc command above has -n 4; change this as you see fit. Once started, pvserver should report that it is waiting for a client to connect.

  3. In Terminal 2, connect to eofe7 with -L 11111:[node name]:11111 added to your ssh command. Replace “[node name]” with the name of the node you have the server running on (hostname command output from previous step). Your ssh command should be similar to: ssh -L 11111:[nodename]:11111 [username]@eofe7.mit.edu

  4. Now you can start ParaView on your local machine and go to File > Connect… Then click "Add Server" and set the options like so:

Name: eofe cluster
Server Type: Client / Server
Host: localhost
Port: 11111
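
Depending on your ParaView version, you may also be able to skip the Connect dialog and pass the server URL on the command line. Assuming the tunnel from step 3 is open, something like the following should connect directly (check paraview --help on your version for the exact flag):

    paraview --server-url=cs://localhost:11111

This connects to the same local port that the SSH tunnel forwards to the compute node.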

Note: Any loss in internet connection may break the tunnel between your local paraview client and the cluster.
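
Since any loss of connectivity can break the tunnel, it may help to have ssh send periodic keep-alive probes. For example, the standard ServerAliveInterval and ServerAliveCountMax options keep an idle tunnel from being silently dropped (the values below are illustrative):

    ssh -o ServerAliveInterval=60 -o ServerAliveCountMax=3 -L 11111:[nodename]:11111 [username]@eofe7.mit.edu

If the tunnel does break, repeat step 3 and reconnect from the ParaView client; the pvserver job on the compute node keeps running until its Slurm allocation ends.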