How to run Scale

The first step is to ensure that you have permission to use the current version of Scale from RSICC. Once you have permission, let me know so I can add you to the scale61 (or other version) group on the cluster. You can then load the Scale module with module load scale.
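For reference, a typical session after being added to the group might look like this (the commands below are standard Environment Modules usage; the exact module names available on the cluster may differ):

user@node:~/$ module load scale      # load the default Scale version
user@node:~/$ module list            # confirm the scale module is now loaded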

Running Scale (6.1)

Make sure you do the following either in a PBS script (for Torque), or in an interactive job. Never run a job on the head node.
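For example, one way to start an interactive job under Torque is with qsub -I; the resource request below is a placeholder to adjust for your own needs:

user@node:~/$ qsub -I -l nodes=1:ppn=1,walltime=02:00:00

Once the interactive job starts, you get a shell on a compute node where you can load the Scale module and run your input.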

Basic Run

Running Scale in its most basic form involves calling the Scale 6.1 driver script with a few arguments. For example, you can run Scale 6.1 on an input file called myInput.inp as follows:

user@node:~/$ batch6.1 myInput.inp

This will run your input on the node you are currently logged into (again, never the head node!).
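If you would rather submit a batch job than work interactively, a minimal PBS script along these lines should work (the job name, resource requests, and file names are placeholders to adjust for your own case):

#!/bin/bash
#PBS -N myScaleJob
#PBS -l nodes=1:ppn=1
#PBS -l walltime=04:00:00

cd $PBS_O_WORKDIR       # start in the directory the job was submitted from
module load scale       # load the Scale module on the compute node
batch6.1 myInput.inp    # same invocation as the interactive example above

Submit the script with qsub from the directory containing it and your input file.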

Running Scale (dev)

Basic Run

You can run the Scale development version the same way as version 6.1 after loading the scale/dev module:

module load scale/dev
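For example, a full session might look like the following (this assumes the development module provides the same batch6.1-style driver as 6.1; the script name may differ for the development build):

user@node:~/$ module unload scale        # unload the 6.1 module if it is loaded
user@node:~/$ module load scale/dev
user@node:~/$ batch6.1 myInput.inp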

KENO MPI

Currently, KENO MPI does not work with the version of OpenMPI provided by Fedora. For users adventurous enough to compile Scale on their own, I have provided a module containing a version of OpenMPI (v1.4.3) that does work with KENO MPI. The following system third-party libraries (TPLs) will continue to work with the old version of OpenMPI, since they don't require MPI:

BLAS, LAPACK, Qt, GSL, Zlib

The following TPLs, as currently compiled, will *not* work with the old version of OpenMPI:

HDF5, SILO

The adventurous user can check out Scale, load the openmpi/1.4.3 module (after unloading the system OpenMPI implementation), and use the configuration script /opt/scale_dev/script/necluster-gnu-cmake-mpi to compile Scale with KENO MPI support.
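A hedged sketch of that sequence (the name of the system OpenMPI module and the build steps after configuration are assumptions; adjust them to match your checkout):

user@node:~/$ module unload openmpi                              # drop the system OpenMPI module
user@node:~/$ module load openmpi/1.4.3                          # load the OpenMPI build that works with KENO MPI
user@node:~/$ /opt/scale_dev/script/necluster-gnu-cmake-mpi      # configure from your Scale build directory
user@node:~/$ make                                               # then build Scale with KENO MPI support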