
VASP

General information

Vienna Ab-initio Simulation Package

Version 5.4.4 of the ab-initio DFT program. It uses a plane-wave basis and pseudopotentials (ultrasoft and the PAW projector augmented-wave method). The VTST tools have been included.

A license is needed.

How to use

To use VASP in parallel it is enough to execute:

/software/bin/vasp
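For reference, a minimal Torque script could look like the sketch below. The resource requests are examples only, and it is assumed that the /software/bin/vasp wrapper handles the parallel launch itself:

#!/bin/bash
#PBS -l nodes=1:ppn=8
#PBS -l walltime=24:00:00
#PBS -l mem=8gb
# Run from the directory that contains INCAR, POSCAR, KPOINTS and POTCAR
cd $PBS_O_WORKDIR
/software/bin/vasp > vasp.log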

[intlink id=”1353″ type=”post”]p4vasp[/intlink] and [intlink id=”5514″ type=”post”]XCrySDen[/intlink] are installed.

v2xsf is also installed.

Job monitoring

The convergence of a running job can be monitored with:

remote_vi JOB_ID

which will open the OSZICAR and OUTCAR files in addition to plotting the energy and the energy variation. It is necessary to use ssh -X or X2GO to open the graphic windows.
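The convergence can also be checked by hand from the job's working directory; for example (a sketch, assuming a standard VASP run):

# total energy (E0) of each ionic step
grep E0 OSZICAR
# total free energy (TOTEN) lines written to the OUTCAR
grep TOTEN OUTCAR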

More information

VASP home page and manuals.

VTST tools.

SCIPION

General information

Scipion is an image processing framework for obtaining 3D models of macromolecular complexes using Electron Microscopy. The May 2016 version from GitHub is installed.

How to use

To execute SCIPION use:

/software/bin/scipion

More information

SCIPION web page

AMBER

General information

Version 14 of AMBER (Assisted Model Building with Energy Refinement) and AmberTools 15. A program based on empirical potentials for molecular dynamics and energy minimization, especially oriented to the simulation of biological systems.

How to use

The serial and parallel versions have been compiled and can be found in the directory

/software/bin/amber/
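As an illustration, a serial sander run could be launched directly from that directory as in the sketch below; the file names are placeholders and it is assumed that the serial sander binary is located there:

# -O overwrite outputs, -i input, -p topology, -c coordinates, -o output, -r restart
/software/bin/amber/sander -O -i min.in -p system.prmtop -c system.inpcrd -o min.out -r min.rst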

send_amber

To send jobs to the queue system you can use the send_amber command:

send_amber "Sander_options" Nodes Procs_Per_Node[property] Time [or Queue] [Mem]  ["Other_queue_options"]

Sander_options: the options you want to use in the calculation, inside quotes.
Nodes: the number of nodes.
Procs_Per_Node: the number of processors per node (you may include the node type).
Time or Queue: the walltime (in hh:mm:ss format) or the queue name for the calculation.
Mem: the PBS memory (in GB).
[Mem] and ["Other_queue_options"] are optional.

For “Other queue options” see examples below:

send_amber "sander.MPI -O -i in.md -c crd.md.23 -o file.out" job1 1 8 p_slow
send_amber "sander.MPI -O -i in.md -c crd.md.23 -o file.out" 2 8:xeon vfast 16 "-W depend=afterany:1234"
send_amber "sander.MPI -O -i in.md -c crd.md.23 -o file.out" 4 8 24:00:00 32 "-m be -M mi.email@ehu.es"

More information

Amber home page.

On-line manual.

Tutorials.

 

SPAdes

General information

SPAdes 3.6.0 – St. Petersburg genome assembler – is intended for both standard isolates and single-cell MDA bacteria assemblies. It works with Illumina or IonTorrent reads and is capable of providing hybrid assemblies using PacBio, Oxford Nanopore and Sanger reads. You can also provide additional contigs that will be used as long reads. It supports paired-end reads, mate-pairs and unpaired reads. SPAdes can take as input several paired-end and mate-pair libraries simultaneously. Note that SPAdes was initially designed for small genomes. It was tested on single-cell and standard bacterial and fungal data sets.

How to use

To send jobs to the queue you can use the

send_spades

command, which asks a few questions to configure the job.

Performance

We have not measured any performance improvement or time reduction when using several cores in a standard calculation like:

spades.py --pe1-1 file1 --pe1-2 file2 -o outdir

We recommend using 1 core, unless you know that you can obtain better performance with several cores.
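If you still want to try several cores, the number of threads can be set explicitly with the -t option of spades.py (the file names below are placeholders):

spades.py --pe1-1 file1.fastq --pe1-2 file2.fastq -t 4 -o outdir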

More information

Web page of SPAdes.

How to submit siesta jobs

There are three ways:

  • Using the send_siesta command.
  • Using qsub in an interactive way.
  • With a script for the qsub command.

send_siesta

We have written the send_siesta command to submit Siesta jobs. Execute send_siesta in Arina and its usage will be shown. In the following lines we describe its syntax:

send_siesta JOBNAME NODES PROCS_PER_NODE[property] TIME MEM ["Other queue options"]

JOBNAME: input file name without the extension.
NODES: number of nodes to be used.
PROCS_PER_NODE: cores per node.
TIME: walltime in hh:mm:ss format.
MEM: memory in GB.
["Other queue options"]: other instructions for the queue system, inside quotes.

Examples
To submit the job1.fdf input to one itaniumb node and four cores:

send_siesta job1 1 4:itaniumb 04:00:00 1

To submit the job2.fdf input to 2 nodes and four cores in each node, with 192 hours and 8 GB of RAM. In addition, the job will start only after the job with identifier 1234 finishes:

send_siesta job2 2 4 192:00:00 8 "-W depend=afterany:1234"

To submit the job2.fdf input file to 4 nodes and 8 cores per node, with 200 hours and 15 GB of RAM. In addition, an email will be sent when the job starts and finishes:

send_siesta job2 4 8 200:00:00 15 "-m be -M nire.emaila@ehu.es"

The send_siesta command will use the local /scratch directory or the global file system /gscratch depending on the number of nodes used.

 

Interactive qsub

Execute

qsub

without arguments and answer the questions.

Regular qsub

Build a script for qsub ([intlink id=”237″ type=”post”]here there are examples[/intlink]) and use the following line to execute Siesta:

/software/bin/siesta/siesta_mpi < input.fdf > log.out
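A minimal script could look like the sketch below. The resource requests are examples only, and depending on the MPI installation an mpirun prefix may be needed in front of the siesta_mpi line:

#!/bin/bash
#PBS -l nodes=1:ppn=4
#PBS -l walltime=04:00:00
#PBS -l mem=1gb
# run from the directory that contains the .fdf input
cd $PBS_O_WORKDIR
/software/bin/siesta/siesta_mpi < input.fdf > log.out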

Job monitoring

If you used send_siesta or interactive qsub to submit a job you can monitor it with the following commands:

remote_vi: opens the *.out file with gvim.
remote_xmakemol: opens the *.ANI file with xmakemol.
remote_qmde: plots energy vs. time in molecular dynamics simulations with xmgrace.

Examples for monitoring the job with identifier 3465:

remote_vi 3465
remote_xmakemol 3465
remote_qmde 3465

Siesta

General information

Spanish Initiative for Electronic Simulations with Thousands of Atoms. A DFT-based simulation program for solids and molecules. It can be used for molecular dynamics and relaxations. It uses localized orbitals, which allow calculations with a large number of atoms. The academic license is free of charge, but it is necessary to request it. Version 3.0rc1 is installed on the x86_64 nodes and version 2.0.1 on the Itanium nodes.

How to send siesta

[intlink id=”7224″ type=”post”]Follow this link[/intlink].

More information

Siesta home page.

Siesta online manual.

OOMMF

General information

Version 1.2 of the OOMMF micromagnetic simulation program. It has not been compiled with the parallel version of Tcl.

How to use

By executing

oommf.tcl

a GUI will appear to prepare and analyze calculations. To submit OOMMF to the queue system you can execute

send_oommf

which will build a correct script and submit it.
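For reference, calculations can also be run without the GUI using the Boxsi batch interface of OOMMF (a sketch; problem.mif is a placeholder for your MIF problem file):

# run the batch (non-GUI) Oxs solver on a MIF problem file
oommf.tcl boxsi problem.mif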

Benchmark

A small benchmark has been made with version 1.2. OOMMF scales quite well up to 4 cores on the xeon nodes, where the best results are obtained.

Node type    xeon20    xeon12    xeon8
Time         776       905       1224

More information

OOMMF home page.

OOMMF documentation.

How to send Gaussian

send_gauss command

The send_gauss command submits G09 jobs.

We recommend using the send_gauss command. This command will prepare the Torque script and submit it to the queue. The .log file will remain in the /scratch of the node, but it can be visualized with the remote_vi and remote_molden tools (see below).

send_gauss is used as follows:

send_gauss input_file queue_or_walltime core_number [mem] [torque options]

where:

  • input_file: the Gaussian input file without the .com extension.
  • queue_or_walltime: the walltime in hh:mm:ss format or, alternatively, the queue name.
  • core_number: the number of cores; it has to be less than 8 or a multiple of 8. It is possible to add node properties, for example 8:itaniumb.
  • mem: the memory in GB.
  • [torque options]: advanced options for Torque.

Examples

send_gauss h2o p_slow 8

Will submit the h2o.com job to 8 cores through the p_slow queue; the memory will be set automatically to nproc*900 MB, i.e., 7200 MB.

send_gauss h2o p_slow 16 20

Will submit the h2o.com job to 2 nodes and 16 cores through the p_slow queue and 20 GB of RAM.

send_gauss h2o 23:00:00 16:xeon 4 "-m be -M niri@ehu.es -W depend=afterany:4827"

Will submit the h2o.com job to 2 nodes and 16 xeon cores with 23 hours of walltime and 4 GB of RAM. The job will send an email when it starts and when it finishes. In addition, it will not start until job 4827 finishes.

qsub interactive command

If qsub is executed without arguments:

qsub

it will ask some questions and submit the job.

Traditional qsub

We can build our own script for Torque. [intlink id=”237″ type=”post” target=”_blank”]Examples[/intlink].
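A minimal script could look like the sketch below. The resource requests are examples only, and send_gauss handles these details automatically:

#!/bin/bash
#PBS -l nodes=1:ppn=8
#PBS -l walltime=23:00:00
#PBS -l mem=8gb
cd $PBS_O_WORKDIR
# g09 reads the .com input and writes the .log output
# (assumes the Gaussian environment is already set up on the node)
g09 < h2o.com > h2o.log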

Job monitoring

The remote_vi and remote_molden tools allow you to watch the .log file and plot it with Molden. For this, the job has to be submitted with send_gauss or interactive qsub. They are used as follows:

remote_vi 2341
remote_molden 2341

or

remote_vi 2341.arina
remote_molden 2341.arina

where 2341(.arina) is the job identifier in the queue system.