The local computing facilities include two IBM Regatta systems, a 24-node Athlon64 cluster with InfiniBand, a 25-node PowerPC Mac OS X cluster with gigabit interconnect, and more (not yet documented; ask Peter or Andy).

All systems are managed by the Torque queueing system; jobs are submitted to it with qsub (see the example below).
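
A minimal sketch of the usual Torque workflow, assuming the standard Torque client tools (qsub, qstat, qdel) are in your PATH; the resource request and the script name are placeholders only:

    # submit a job script and note the job ID that qsub prints
    qsub -l nodes=1:ppn=2,walltime=01:00:00 myjob.sh

    # check the state of your own jobs and of the queues
    qstat -u $USER
    qstat -q

    # remove a job you no longer need (use the ID printed by qsub)
    qdel 12345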

File system

  • Other hosts can be accessed via /home/HOST/; this includes some of the hspc and hslxws machines.
  • /home/DATA/ is intended as the location for phoenix-specific files. Double-check this, though!
  • /home/scratch/ or /scratch/ is intended as scratch space. This does not entirely hold on the two clusters, where you will find special logic in your .profile. Don't change that part of your .profile; simply double-check it.
  • /home/saladin/ is available on every computer (i.e. including every node) and gives you storage space for your results. It is currently not backed up, so consider moving (NOT copying) your results to your desktop or another safe storage and working area (see the sketch after this list).
  • /home/galactica/ is available on most systems (ask Andy if not). It is meant as an archival-type storage area; don't waste it on personal backups.
  • On your hspc or hslxws you can access saladin, nathan and seneca via /data/saladin/, /data/nathan/ and /data/seneca/, just as on any other hspc. These directories are mounted on demand by the automounter.
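
Because /home/saladin/ is not backed up, here is a minimal sketch of moving finished results from saladin to your workstation. It assumes your workstation is an hspc/hslxws that reaches saladin through the automounted /data/saladin/ path; the directory layout under saladin and the target directory are placeholders:

    # on your hspc/hslxws: pull the results over the automounted path
    mkdir -p ~/results/run42
    rsync -av /data/saladin/$USER/run42/ ~/results/run42/

    # only after checking that the copy is complete, free the space on saladin
    rm -rf /data/saladin/$USER/run42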

Accounts

  • For most systems, the accounts are managed via NIS, i.e. you have one password for all systems.
  • It is safest to change your password on nathan. You will have to wait about 5 minutes until the change propagates to all systems (see the example after this list).
  • Although accounts are managed with NIS, your home directories are typically different for different systems.
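
A minimal sketch of a password change, assuming the standard NIS client tools are installed on nathan (if plain passwd does not update the NIS maps there, yppasswd usually does):

    # log in to nathan and change the password there
    ssh nathan
    passwd          # or: yppasswd
    # then allow roughly 5 minutes for the change to propagate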

MPI

  • 'mpistart' is a shell script, available on every system and (hopefully) for every MPI implementation, which takes the PBS-assigned nodes and starts the job in the way the selected MPI implementation expects. 'mpistart -e' starts only one process per node, even if multiple CPUs per node are assigned to the job (useful if you either want all the RAM for one process or do node-internal/SMP parallelization with e.g. OpenMP); see the example job script below.
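
A minimal Torque job script using mpistart. The resource request, job name and program names are placeholders, passing the executable as the argument to mpistart is an assumption about its calling convention, and OMP_NUM_THREADS only matters for the hybrid OpenMP case:

    #!/bin/sh
    #PBS -N example-job
    #PBS -l nodes=4:ppn=2,walltime=02:00:00

    # run from the directory the job was submitted from
    cd $PBS_O_WORKDIR

    # plain MPI run: one MPI process per assigned CPU
    mpistart ./my_mpi_program

    # hybrid MPI+OpenMP run: one MPI process per node,
    # using the node-local CPUs as OpenMP threads
    export OMP_NUM_THREADS=2
    mpistart -e ./my_hybrid_program

Submit the script with qsub as usual; mpistart then picks up the PBS-assigned nodes from within the running job.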