ARC CLUSTER

New ARC users can access the Italian ARC node computing facilities by requesting a face-to-face visit (ALMA users only, through the ALMA Helpdesk) or by visiting the ARC node in Bologna (for any data-reduction-related issue to be solved in collaboration with the ARC staff). In both cases, users are asked to send an e-mail to help-desk@alma.inaf.it stating the reason for the request.

Please note that requesting a new account implies that the requesting user (and/or his/her collaborators) visits the ARC for an induction on the use of the ARC facilities and on issues related to data reduction with CASA, whether for ALMA or any other telescope. If the request is approved, the visit details will be arranged via e-mail.

The account grants use of the facilities, and support, for 6 months.

Once the account expires, access to the data will be suspended and, after 1 month of quarantine, ALL DATA WILL BE REMOVED. Only one gentle reminder will be sent on account expiration. Extensions of the account duration can be considered on request (via e-mail). No visit is needed for an account renewal.

Support from the ARC staff is guaranteed for any ALMA-related issue. For data-reduction issues that do not involve ALMA, support (other than technical support in the use of the ARC computing facilities) is limited by the knowledge, experience, and availability of the ARC staff.

The same rules also apply to IRA staff members. IRA collaborators with temporary positions (e.g. students) can have an account for the entire duration of their position.

To ensure a well-balanced load on the cluster nodes, please follow the instructions below on accessing the computer cluster.

Queries can be issued via e-mail to help-desk@alma.inaf.it.

Users will be automatically added to the arc-cluster-users@ira.inaf.it mailing list, which will be used for any communication from our side.

Once you have obtained an ARC account at IRA, you can access the computer cluster nodes from anywhere through the host scheduler.ira.inaf.it. Using graphical applications on the cluster is possible through remote X access. The accessible working nodes are listed in the table below. Never submit workloads to arcserv (control node) or arcnas1 (storage), as this can slow down the entire cluster. You can enter a node for interactive work by typing:

ssh -tX scheduler.ira.inaf.it

You need to change directory to access your home on the cluster:

 cd /iranet/homesarc/username

Useful tip: type 'hostname' to find out which node you are on.

Access goes through a Torque/Maui scheduler that redirects your session to the least-loaded node.

Jobs on the cluster are limited to a duration of 168 hours.

Statistics about resource consumption on the arcblXX nodes are also available.

You can execute programs in two ways:

- in interactive mode: your command is immediately executed on the least-loaded node, and standard input, output, and error are attached to your terminal;
- by scheduling a PBS job: submit a job file (here is a guide); a minimal example follows.
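As an illustrative sketch (not site-specific documentation), a Torque/PBS job file might look like the following. The job name, resource request, and script body are placeholders to adapt to your own workload, and the walltime must stay within the 168-hour limit mentioned above.

#!/bin/bash
#PBS -N myjob                 # job name (placeholder)
#PBS -l nodes=1:ppn=4         # request one node with 4 cores
#PBS -l walltime=24:00:00     # wall-clock limit; must not exceed 168:00:00
#PBS -j oe                    # merge standard output and standard error

cd /iranet/homesarc/username  # move to your ARC home (see above)
./my_script.sh                # the actual workload (placeholder)

Submit the file with qsub and check its state with qstat:

qsub myjob.pbs
qstat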

Data can be copied to and from the cluster storage with scp:

# from storage to your local machine
scp user@arcbl01.ira.inaf.it:/remote/path /local/path

# from your local machine to storage
scp /local/path user@arcbl01.ira.inaf.it:/remote/path

On IRA workstations, the ARC home filesystem can be accessed at /iranet/homesarc.

On your laptop, the ARC filesystems can be seamlessly accessed with fuse-sshfs:

As root, install the sshfs package:

# on RedHat/CentOS/Scientific Linux
yum install fuse-sshfs

# on Debian/Ubuntu
apt-get install sshfs

then, as a regular user:

sshfs arcbl01.ira.inaf.it:/iranet/homesarc/yourhome /your/local/mount/point/

By omitting /remote/path you can mount your home directory, i.e.:

sshfs arcbl01.ira.inaf.it: /your/local/mount/point/
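When you are done, the mount can be released with fusermount, which ships with FUSE on Linux (on other systems a plain umount of the mount point should work):

fusermount -u /your/local/mount/point/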

Be aware that this method is suboptimal for heavy input/output loads: running disk-intensive applications directly on the ARC cluster gives file access speeds 10-50 times faster.

Software packages available

The software available on the ARC cluster can be listed by typing the command setup-help.
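For example, to prepare and start CASA (setup and launch commands as listed in the table below):

setup-help      # list the available packages and their setup commands
casapy-setup    # configure the environment for CASA
casapy          # launch CASA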

Software package | setup command | launch command | notes
CASA | casapy-setup | casapy | data reduction package, http://casa.nrao.edu/
Miriad | miriad-setup | miriad | data reduction package, http://www.atnf.csiro.au/computing/software/miriad/
AIPS | aips-setup | | http://www.aips.nrao.edu/index.shtml
analysis utils | analysisUtils-setup | |
analytic infall | analytic_infall-setup | |
astron | astron-setup | |
Coyote library | coyote-setup | |
FITS Viewer | fv-setup | |
GCC compiler | gcc-setup | |
Gildas | gildas-setup | | http://www.iram.fr/IRAMFR/GILDAS/
Healpix | healpix-setup | | http://healpix.jpl.nasa.gov/
IDL | idl-setup | | http://www.exelisvis.com/ProductsServices/IDL.aspx
HEAsoft | heasoft-setup | | http://heasarc.nasa.gov/lheasoft/
QA2 | qa2-setup | |
Ratran | ratran-setup | | http://www.sron.rug.nl/~vdtak/ratran/frames.html
Starlink | starlink-setup | | http://starlink.jach.hawaii.edu/starlink

Cluster nodes

Name | RAM | CPU | Cores | Clock (MHz) | Data Net | Work Disk | Scratch Disk | Scheduler | Groups | Notes
arcbl01 | 32GB | Intel Xeon E5-2637 v3 | 2 x 4/8 | 3500 | 10GbE | 1TB | 1TB | N | * |
arcbl02 | 8GB | AMD Opteron 2352 | 8 | 2100 | 10GbE | 34GB | | Y | * |
arcbl03 | 8GB | AMD Opteron 2352 | 8 | 2100 | 10GbE | 34GB | | Y | * |
arcbl04 | 8GB | AMD Opteron 2352 | 8 | 2100 | 10GbE | 34GB | | Y | * |
arcbl05 | 8GB | AMD Opteron 2352 | 8 | 2100 | 10GbE | 1TB | | Y | * |
arcbl06 | 8GB | AMD Opteron 2352 | 8 | 2100 | 10GbE | 34GB | | Y | * |
arcbl07 | 8GB | AMD Opteron 2352 | 8 | 2100 | 10GbE | 34GB | | Y | * |
arcbl08 | 8GB | AMD Opteron 2352 | 8 | 2100 | 10GbE | 34GB | | Y | * |
arcbl09 | 8GB | AMD Opteron 2352 | 8 | 2100 | 10GbE | 34GB | | Y | * |
arcbl10 | 32GB | Intel Xeon E5-2637 v3 | 2 x 4/8 | 3500 | 10GbE | 1TB | 1TB | N | arc-staff, arc-vlbi |
arcbl11 | 8GB | AMD Opteron 2352 | 8 | 2100 | 10GbE | 34GB | | N | arc-staff, arc-vlbi |
arcbl12 | 16GB | AMD Opteron 2352 | 8 | 2100 | 10GbE | 34GB | | N | arc-staff, arc-vlbi |
arcbl13 | 16GB | AMD Opteron 2387 | 4 | 2800 | 10GbE | 136GB | | N | arc-staff, arc-vlbi |
arcbl17 | 64GB | AMD Ryzen 7 1800X | 8/16 | 3600 | 1GbE | 3.5TB | | N | arc-staff, arc-vlbi |
arcbl18 | 64GB | Intel Xeon E3-1275 v6 | 4/8 | 3500 | 1GbE | 22TB | 57GB | N | arc-staff, arc-vlbi |