Gaussian is a versatile program for electronic structure modelling.
Homepage: www.gaussian.com
NSC can help you with how to run your Gaussian jobs and, to some extent, with how to set up your Gaussian jobs. We can, for example, help with setting up scripts for running your jobs and help with troubleshooting if you experience issues with running your jobs. If you suspect that you have found a bug in Gaussian, please contact NSC, so we can investigate and submit a bug report to Gaussian, Inc.
Please contact NSC Support (support at nsc dot liu dot se) if you have any questions or problems.
These are the most important differences regarding Gaussian compared with Triolith:

- A new utility (`sgausbatch`) for easy generation and submission of Gaussian run scripts

See the sections below for more details.
Use the `module avail gaussian` command to find available Gaussian installations.
Gaussian run scripts from Triolith should also work on Tetralith after the `module load` line has been updated with a new module name.
We use the following default settings for the installations on Tetralith/Sigma:
| Setting | Value |
|---|---|
| `-M-` (`%Mem`) | 1GB |
| `-P-` (`%NProcShared`) | 1 |
These settings are suitable for small serial (i.e. 1 core) Gaussian jobs. For parallel jobs, you need to explicitly specify `%Mem` and `%NProcShared` (or `%CPU`), as well as `%LindaWorkers` if you are running large multi-node Gaussian jobs.
Normal Tetralith compute nodes have 32 cores and 96 GiB of RAM; memory fat Tetralith compute nodes have 32 cores and 384 GiB of RAM.
Example 1 - memory specification for a job using 16 cores on a normal Tetralith compute node. Since the job uses half of the node's cores, it can use up to half of the node's available memory. Hence, we get the following Link 0 command settings:
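As a sketch, assuming that roughly 88 GB of a normal node's memory can be given to Gaussian (the exact usable amount may differ), a 16-core job gets half of that:

```
%NProcShared=16
%Mem=44GB
```

Remember to match the sbatch allocation (e.g. `#SBATCH -n 16`) to these settings.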
Example 2 - memory specification for a job using one whole normal Tetralith compute node (i.e. 32 cores). Link 0 command settings:
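A sketch, under the same assumption that roughly 88 GB of a normal node's memory is usable by the job:

```
%NProcShared=32
%Mem=88GB
```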
Example 3 - memory specification for a job using one whole memory fat Tetralith compute node (i.e. 32 cores). Link 0 command settings:
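A sketch, assuming a memory fat node can give roughly 360 GB to Gaussian (an illustrative value; leave some headroom for the operating system):

```
%NProcShared=32
%Mem=360GB
```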
To start Linda parallel jobs, you should now use the `%LindaWorkers` Link 0 command. This command has the following syntax:
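Sketched with placeholder node names and worker counts:

```
%LindaWorkers=node1:n1,node2:n2,...
```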
Here node1, node2, etc. are the names of the compute nodes that the Linda workers should run on, and n1, n2, etc. are the number of workers to start on the respective nodes. However, as you cannot know the real node names when you set up and submit the job, NSC has a run time wrapper that translates a dummy list into a corresponding list with real node names.
Example Link 0 command settings for a Linda parallel job running on two compute nodes:
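For example, one Linda worker on each of two dummy-named nodes, with 32 shared-memory threads per worker (the `%Mem` value is illustrative):

```
%LindaWorkers=node1,node2
%NProcShared=32
%Mem=88GB
```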
It doesn't really matter what you call the nodes in the node list, so just use simple dummy names like node1, node2, etc. What matters is the number of nodes in the list and the number of workers to start for each node. The default is to start one worker per node!
Note the number of workers you specify in the `%LindaWorkers` list! NSC recommends using one worker per node, but do your own benchmarking to see what works well for your jobs. Adjust `%NProcShared` if you run more than one worker per node: the number of Linda workers on a node multiplied by the `%NProcShared` setting should not be higher than 32!

`%NProcShared` vs. `%CPU`

For Gaussian 16, you can also use the new `%CPU` Link 0 command, which binds processes to explicit cores. It accepts explicit core lists as well as ranges, optionally with a stride (e.g. using every second core).
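As a sketch for a 32-core node (the stride syntax is assumed to use `/`, and the explicit lists are abbreviated with `...` here):

```
%CPU=0-31
%CPU=0,1,2,...,31
%CPU=0-30/2
%CPU=0,2,4,...,30
```

The first two forms are equivalent (all 32 cores), as are the last two (every second core).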
If your job allocation is less than one whole compute node, the NSC run time wrapper will change the `%CPU` core list to the list of actually allocated cores. For example, if you specify `%CPU=0,1,2,3,4,5` and allocate six cores for the job, the wrapper will change this list at run time to exactly the cores that Slurm allocated for your job.
We have so far not observed any significant performance benefits from using `%CPU` compared with `%NProcShared`; however, do your own benchmarking to see if there are any benefits for your jobs.
You can start Gaussian jobs in several ways:

- with the `sgausbatch` utility
- from GaussView
- with your own batch scripts
When using your own batch scripts, please take extra care to always match the sbatch options (e.g. `#SBATCH --ntasks`) with the Gaussian Link 0 commands (e.g. `%NProcShared`). One of the most common issues we observe for Gaussian jobs is that either too many or too few cores or compute nodes are allocated compared with the Link 0 commands given to Gaussian. For example, a batch script that allocates four whole Tetralith compute nodes with `#SBATCH --nodes=4`, but then runs a Gaussian job that only sets `%NProcShared=32` and no `%LindaWorkers` command, leads to a job that uses only one of the four allocated compute nodes. As these types of mistakes are quite easy to make, we recommend generating your Gaussian batch scripts with the `sgausbatch` utility, which is developed to help you avoid such mismatches.
To start GaussView, you should first load a Gaussian module:
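For example (the module name is a placeholder; list the actual versions with `module avail gaussian`):

```
module load Gaussian/<version>
```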
To start jobs from GaussView, you first need to make sure that the 'Job Setup' preference is set to 'Execute indirectly through script using default command line'. Open the 'GaussView Preferences' window (File▸Preferences…) then click on 'Job Setup', choose 'Execute indirectly through script using default command line' and click 'Ok'.
Build a structure or read one in from a file and then open the 'Gaussian Calculation Setup' window (Calculate▸Gaussian Calculation Setup…). Once you have chosen the desired specifications and parameters for the job, simply click the 'Submit…' button. If you have not already done so, GaussView will ask you to save the input file and then open the 'Submit Job' window. Click 'Yes' to submit the job.
The job then gets submitted to the queue with the `sgausbatch` utility (see below). When `sgausbatch` is called from GaussView, it launches a small GUI for setting the wall-time for the job as well as the project to charge the core-hours to. Choose the wall-time and project you want for your job and click 'OK' to submit the job.
Please see the sections below regarding `sgausbatch` configuration for instructions on how to control other settings that you might also like to specify for your jobs. For most use cases this is not needed, though.
You can check whether the job is queued or running from the 'Jobs Log' window, which is opened from the 'Job Manager' window, in turn opened from the 'Calculate' menu (Calculate▸Current Jobs…). Unfortunately, the 'Jobs Log' window doesn't update automatically, so to check for a change in job status you have to close and re-open it. You can, of course, also check the status of the job from the command line in a terminal using the `squeue -u $USER` command.
Once the job has started, you can stream the output from within GaussView (Results▸Stream Output File). However, it is probably more convenient to follow the progress of a calculation from a terminal using various command line tools.
For geometry optimization jobs, you can read (File▸Open…) the intermediate (or finished) output into GaussView with the option 'Read Intermediate Geometries' checked and use that to inspect the progress of the calculation.
If the job is still running, GaussView will give a warning, but just click 'OK' to open the file. You can inspect the progress of the geometry optimization by opening the 'Optimization Plot' window (Results▸Optimization…).
Using `sgausbatch` to submit Gaussian jobs

`sgausbatch` is an NSC-developed command line utility for easy generation and submission of Gaussian run scripts. It takes a Gaussian input file and generates a batch script with an appropriate Slurm allocation based on the Link 0 commands in the input, then automatically submits the job script to the scheduler. If not all required parameters are specified (on the command line, in the config file, or via sbatch environment variables), `sgausbatch` will interactively ask for them.
`sgausbatch` is also used to submit jobs directly from GaussView.
The fundamentals of `sgausbatch` were developed by Lukas Tallund during a summer internship at NSC in 2014. It is currently developed and maintained by Rickard Armiento and Torben Rasmussen.
For help with using `sgausbatch`, please contact NSC Support (support at nsc dot liu dot se). Feedback and feature requests are welcome, and please also report all problems, issues, and bugs to NSC Support.
To get access to `sgausbatch`, you first need to load a Gaussian module:
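For instance (placeholder module name; check `module avail gaussian` for the installed versions):

```
module load Gaussian/<version>
```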
Then, to submit a Gaussian job with the default wall-time limit, simply do:
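As a sketch, with the input filename used as an example throughout this section:

```
sgausbatch gaussian_input.com
```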
Note that you will be prompted for the project (SLURM account) to use if multiple such options are available to you.
The above example will submit the `gaussian_input.com` Gaussian job to the queue with a wall-time limit of 1 hour.
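A template-based run might be invoked like this (the `--template` option name is an assumption; check `sgausbatch -h` for the exact flag):

```
sgausbatch --template myTemplate.sh gaussian_input.com
```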
This example will submit the Gaussian job and generate the job script based on the template `myTemplate.sh`, which must be supplied in the same folder as the `gaussian_input.com` file. Please see below for more information about the template file.
To see all available command line options, use the `-h` option:
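That is:

```
sgausbatch -h
```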
Values for many options can also be set in a configuration file. This optional user configuration file should be placed in your home directory and named `sgausbatch_user.cfg`. For example, if you always use the same wall time, you can add that to the configuration file. See more information about the configuration file below.
The following format is used in the config file:
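A sketch with placeholder values (the available parameter names are documented below):

```
time=01:00:00
account=my_project
```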
`sgausbatch` parses the Gaussian input file for Link 0 commands and completes a batch script template with appropriate sbatch options. Required options that are not specified on the command line or in the configuration file will be prompted for. `sgausbatch` uses the following Link 0 commands in your Gaussian input file to define an appropriate allocation:
- `%Mem`
- `%NProcShared` (or `%CPU`)
- `%LindaWorkers` (or `%NProcLinda`)

`sgausbatch` also uses the `%Chk` command to handle the checkpoint file. With the default template file, defined checkpoint files that exist in the submit directory are copied from the submit directory to the run directory before the run, and back from the run directory to the submit directory afterwards.
If a Link 0 command is not set, `sgausbatch` checks its default value in the Default.Route file from the chosen Gaussian module.
When using `sgausbatch`, a number of options can be set, and many of them in several different ways, so that you can use the type of setup you prefer. These are the ways to set `sgausbatch` options:

1. Command line options
2. The user configuration file
3. sbatch environment variables
Options that are set in several of these places will always be used with the priority listed above. For example, if time is set both on command line and in the configuration file, the command line option will have priority.
The following parameters can be set in the configuration file:
`account=account_name`
Specifies which SLURM account (i.e. SNIC or LiU project) should be used for the job.

`exclusive=False|True`
If set to `True`, `sgausbatch` will always send the `--exclusive` flag to sbatch. Generally not recommended!
Default: `False`

`fatmem=False|True`
If set to `True`, `sgausbatch` will auto-accept using fat-memory nodes.
Default: `False`

`jobname=name_of_job`
Name that will be used for the job. By default, the filename of the input file (without suffix) is used.

`nosub=False|True`
If set to `True`, `sgausbatch` will only generate the batch script, but not submit it to the scheduler.
Default: `False`

`overwrite=False|True`
If set to `True`, `sgausbatch` will overwrite the script file if it already exists; otherwise `sgausbatch` will generate a script named `filename_1.sh`.
Default: `False`

`notest=False|True`
If set to `True`, `sgausbatch` will not check the Gaussian input file with testrt.
Default: `False`

`template=full_path_to_script_template`
Specifies the template file that will be used to generate the batch script. If not specified, the default template file is used.

`time=HH:MM:SS`
Wall time limit for the job.

`output_mode=verbose|silent|batch`
verbose: Verbose output.
silent: No output, except questions if needed.
batch: No interaction; options not set will be ignored. Intended for use in batch scripts.

`sbatchoptions=blank-separated list of sbatch options`
Extra sbatch options to include in the job script. Generally not needed unless you want to use a specific reservation.
Set a default for job wall time and project to use for the job allocation:
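For example, in `sgausbatch_user.cfg` (placeholder values):

```
time=12:00:00
account=my_project
```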
Note that setting the account to use for a job allocation is only needed if you are included in several projects.
Use your own batch script template:
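For example (the path is a placeholder):

```
template=/home/myuser/myTemplate.sh
```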
Add a few more sbatch options to the job batch script:
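For example (the reservation name is a placeholder; both flags are standard sbatch options):

```
sbatchoptions=--reservation=my_reservation --mail-type=ALL
```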
Note that sbatch options can also be added by editing the batch script template.
`sgausbatch` will check and use these sbatch environment variables, but they have lower priority than command line options and options set in the user configuration file:

- `SBATCH_ACCOUNT`
- `SBATCH_EXCLUSIVE`
- `SBATCH_JOBNAME`
- `SBATCH_TIMELIMIT`
Default batch script template file used by sgausbatch:
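The template shipped with the module is the authoritative version; the following is only a rough sketch of what a Mako-based template can look like (all variable names here are assumptions, not the actual names used by `sgausbatch`):

```
#!/bin/bash
#SBATCH --job-name=${jobname}
#SBATCH --time=${time}
#SBATCH --account=${account}
#SBATCH --ntasks=${ntasks}

module load ${gaussian_module}

# Run Gaussian; the default template also copies checkpoint files
# between the submit and run directories (see above).
g16 < ${inputfile} > ${outputfile}
```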
The script template is based on Mako Templates for Python, so consult the Mako documentation if you want to make substantial changes.