Commit 899a6f1d authored by Lukáš Krupčík

->

parent d018d740
Showing 55 additions and 55 deletions
@@ -101,10 +101,10 @@ Check status of the job array by the qstat command.
$ qstat -a 12345[].dm2
dm2:
                                                            Req'd  Req'd   Elap
Job ID          Username Queue    Jobname    SessID NDS TSK Memory Time  S Time
--------------- -------- -------- ---------- ------ --- --- ------ ----- - -----
12345[].dm2     user2    qprod    xx          13516   1  16    --  00:50 B 00:02
```
The status B means that some subjobs are already running.
@@ -114,16 +114,16 @@ Check status of the first 100 subjobs by the qstat command.
$ qstat -a 12345[1-100].dm2
dm2:
                                                            Req'd  Req'd   Elap
Job ID          Username Queue    Jobname    SessID NDS TSK Memory Time  S Time
--------------- -------- -------- ---------- ------ --- --- ------ ----- - -----
12345[1].dm2    user2    qprod    xx          13516   1  16    --  00:50 R 00:02
12345[2].dm2    user2    qprod    xx          13516   1  16    --  00:50 R 00:02
12345[3].dm2    user2    qprod    xx          13516   1  16    --  00:50 R 00:01
12345[4].dm2    user2    qprod    xx          13516   1  16    --  00:50 Q   --
     .             .        .      .            .     .   .    .     .    .   .
     .             .        .      .            .     .   .    .     .    .   .
12345[100].dm2  user2    qprod    xx          13516   1  16    --  00:50 Q   --
```
Delete the entire job array. Running subjobs will be killed, queued subjobs will be deleted.
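Using the array ID from the examples above, the whole array can be removed with a single qdel (a minimal sketch; 12345[].dm2 is the illustrative ID used throughout this page):

```bash
$ qdel 12345[].dm2
```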
@@ -152,7 +152,7 @@ Read more on job arrays in the [PBSPro Users guide](../../pbspro-documentation/)
!!! note
    Use GNU parallel to run many single core tasks on one node.
GNU parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. GNU parallel is most useful for running single core jobs via the queue system on Anselm.
For more information and examples, see the parallel man page:
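A hedged sketch of getting to it on a login node (assuming GNU parallel is provided as a module named parallel):

```bash
$ module add parallel
$ man parallel
```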
@@ -197,7 +197,7 @@ TASK=$1
cp $PBS_O_WORKDIR/$TASK input
# execute the calculation
cat input > output
# copy output file to submit directory
cp output $PBS_O_WORKDIR/$TASK.out
@@ -214,7 +214,7 @@ $ qsub -N JOBNAME jobscript
12345.dm2
```
In this example, we submit a job of 101 tasks. 16 input files will be processed in parallel. The 101 tasks on 16 cores are assumed to complete in less than 2 hours.
!!! hint
    Use #PBS directives at the beginning of the jobscript file, and don't forget to set your valid PROJECT_ID and desired queue.
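A minimal sketch of such a header (the project ID, queue, and walltime are illustrative placeholders):

```bash
#PBS -A PROJECT_ID
#PBS -q qprod
#PBS -l select=1:ncpus=16,walltime=02:00:00
```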
@@ -279,10 +279,10 @@ cat input > output
cp output $PBS_O_WORKDIR/$TASK.out
```
In this example, the jobscript executes in multiple instances in parallel, on all cores of a computing node. The variable $TASK expands to one of the input filenames from tasklist. We copy the input file to local scratch, execute myprog.x, and copy the output file back to the submit directory under the name $TASK.out. The numtasks file controls how many tasks will be run per subjob. Once a task is finished, a new task starts, until the number of tasks in the numtasks file is reached.
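One hypothetical way these pieces can fit together inside a subjob (a sketch only, not the guide's actual jobscript; process_one.sh stands in for the copy-compute-copy steps shown above):

```bash
# how many tasks each subjob should consume (from the numtasks file)
NUMTASKS=$(cat numtasks)
# hypothetical slicing: subjob N takes its own NUMTASKS-line window of tasklist
FIRST=$(( (PBS_ARRAY_INDEX - 1) * NUMTASKS + 1 ))
LAST=$(( PBS_ARRAY_INDEX * NUMTASKS ))
# run the window through GNU parallel, 16 tasks at a time (one per core);
# as one task finishes the next starts, until the window is exhausted
sed -n "${FIRST},${LAST}p" tasklist | parallel -j 16 ./process_one.sh {}
```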
!!! note
    Select subjob walltime and number of tasks per subjob carefully.
When deciding on these values, think about the following guiding rules:
@@ -28,7 +28,7 @@ fi
### Application Modules
In order to configure your shell for running a particular application on Anselm, we use the Module package interface.
!!! note
    The modules set up the application paths, library paths and environment variables for running a particular application.
@@ -43,7 +43,7 @@ To check available modules use
$ module avail
```
To load a module, for example the octave module, use:
```bash
$ module load octave
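# a hedged aside, not from the original page: related module commands
$ module list          # show currently loaded modules
$ module unload octave # drop the module when no longer needed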
# Hardware Overview
The Anselm cluster consists of 209 computational nodes named cn[1-209], of which 180 are regular compute nodes, 23 are GPU Kepler K20 accelerated nodes, 4 are MIC Xeon Phi 5110P accelerated nodes, and 2 are fat nodes. Each node is a powerful x86-64 computer, equipped with 16 cores (two eight-core Intel Sandy Bridge processors), at least 64 GB RAM, and a local hard drive. User access to the Anselm cluster is provided by two login nodes, login[1,2]. The nodes are interlinked by high speed InfiniBand and Ethernet networks. All nodes share 320 TB /home disk storage for user files. The 146 TB shared /scratch storage is available for scratch data.
The fat nodes are equipped with a large amount (512 GB) of memory. The virtualization infrastructure provides resources to run long-term servers and services in virtual mode. Fat nodes and virtual servers may access 45 TB of dedicated block storage. Accelerated nodes, fat nodes, and the virtualization infrastructure are available [upon request](https://support.it4i.cz/rt) made by a PI.
@@ -2,7 +2,7 @@
Welcome to the Anselm supercomputer cluster. The Anselm cluster consists of 209 compute nodes, totaling 3344 compute cores with 15 TB RAM, giving over 94 TFLOP/s theoretical peak performance. Each node is a powerful x86-64 computer, equipped with 16 cores, at least 64 GB RAM, and a 500 GB hard disk drive. Nodes are interconnected by a fully non-blocking fat-tree InfiniBand network and equipped with Intel Sandy Bridge processors. A few nodes are also equipped with NVIDIA Kepler GPU or Intel Xeon Phi MIC accelerators. Read more in [Hardware Overview](hardware-overview/).
The cluster runs an [operating system](software/operating-system/) compatible with the RedHat [Linux family](http://upload.wikimedia.org/wikipedia/commons/1/1b/Linux_Distribution_Timeline.svg). We have installed a wide range of software packages targeted at different scientific domains. These packages are accessible via the [modules environment](environment-and-modules/).
A user data shared file-system (HOME, 320 TB) and a job data shared file-system (SCRATCH, 146 TB) are available to users.
@@ -40,13 +40,13 @@ In this example, we allocate 4 nodes, 16 cores per node, for 1 hour. We allocate
$ qsub -A OPEN-0-0 -q qnvidia -l select=10:ncpus=16 ./myjob
```
In this example, we allocate 10 NVIDIA accelerated nodes, 16 cores per node, for 24 hours. We allocate these resources via the qnvidia queue. The jobscript myjob will be executed on the first node in the allocation.
```bash
$ qsub -A OPEN-0-0 -q qfree -l select=10:ncpus=16 ./myjob
```
In this example, we allocate 10 nodes, 16 cores per node, for 12 hours. We allocate these resources via the qfree queue. It is not required that the project OPEN-0-0 has any available resources left. Consumed resources are still accounted for. The jobscript myjob will be executed on the first node in the allocation.
All qsub options may be [saved directly into the jobscript](job-submission-and-execution/#PBSsaved). In such a case, no options to qsub are needed.
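For instance, the qfree example above could be rewritten with the options saved in the jobscript itself (a sketch; OPEN-0-0 and myjob are the illustrative names used on this page):

```bash
#!/bin/bash
#PBS -A OPEN-0-0
#PBS -q qfree
#PBS -l select=10:ncpus=16

# the computation itself follows here
```

The job is then submitted simply as `$ qsub ./myjob`.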
@@ -126,7 +126,7 @@ In the following example, we select an allocation for benchmarking a very specia
    -N Benchmark ./mybenchmark
```
The MPI processes will be distributed differently on the nodes connected to the two switches. On the isw10 nodes, we will run 1 MPI process per node with 16 threads per process; on the isw20 nodes, we will run 16 plain MPI processes.
Although this example is somewhat artificial, it demonstrates the flexibility of the qsub command options.
@@ -148,12 +148,12 @@ Example:
$ qstat -a
srv11:
                                                            Req'd  Req'd   Elap
Job ID          Username Queue    Jobname    SessID NDS TSK Memory Time  S Time
--------------- -------- -------- ---------- ------ --- --- ------ ----- - -----
16287.srv11     user1    qlong    job1        6183   4  64    --  144:0 R 38:25
16468.srv11     user1    qlong    job2        8060   4  64    --  144:0 R 17:44
16547.srv11     user2    qprod    job3x      13516   2  32    --  48:00 R 00:58
```
In this example, user1 and user2 are running jobs named job1, job2 and job3x. The jobs job1 and job2 each use 4 nodes, 16 cores per node. The job1 has already run for 38 hours and 25 minutes, job2 for 17 hours 44 minutes. The job1 has already consumed `64 x 38.41 = 2458.6` core hours. The job3x has already consumed `0.96 x 32 = 30.93` core hours. These consumed core hours will be accounted on the respective project accounts, regardless of whether the allocated cores were actually used for computations.
@@ -251,10 +251,10 @@ $ qsub -q qexp -l select=4:ncpus=16 -N Name0 ./myjob
$ qstat -n -u username
srv11:
                                                            Req'd  Req'd   Elap
Job ID          Username Queue    Jobname    SessID NDS TSK Memory Time  S Time
--------------- -------- -------- ---------- ------ --- --- ------ ----- - -----
15209.srv11     username qexp     Name0       5530   4  64    --  01:00 R 00:00
   cn17/0*16+cn108/0*16+cn109/0*16+cn110/0*16
```
@@ -22,10 +22,10 @@ The compute nodes may be accessed via the regular Gigabit Ethernet network inter
```bash
$ qsub -q qexp -l select=4:ncpus=16 -N Name0 ./myjob
$ qstat -n -u username
                                                            Req'd  Req'd   Elap
Job ID          Username Queue    Jobname    SessID NDS TSK Memory Time  S Time
--------------- -------- -------- ---------- ------ --- --- ------ ----- - -----
15209.srv11     username qexp     Name0       5530   4  64    --  01:00 R 00:00
   cn17/0*16+cn108/0*16+cn109/0*16+cn110/0*16
$ ssh 10.2.1.110
@@ -137,7 +137,7 @@ Copy files **to** Anselm by running the following commands on your local machine
$ globus-url-copy file://_LOCAL_PATH_TO_YOUR_FILE_ gsiftp://gridftp-prace.anselm.it4i.cz:2812/home/prace/_YOUR_ACCOUNT_ON_ANSELM_/_PATH_TO_YOUR_FILE_
```
Or by using the prace_service script:
```bash
$ globus-url-copy file://_LOCAL_PATH_TO_YOUR_FILE_ gsiftp://`prace_service -i -f anselm`/home/prace/_YOUR_ACCOUNT_ON_ANSELM_/_PATH_TO_YOUR_FILE_
@@ -149,7 +149,7 @@ Copy files **from** Anselm:
$ globus-url-copy gsiftp://gridftp-prace.anselm.it4i.cz:2812/home/prace/_YOUR_ACCOUNT_ON_ANSELM_/_PATH_TO_YOUR_FILE_ file://_LOCAL_PATH_TO_YOUR_FILE_
```
Or by using the prace_service script:
```bash
$ globus-url-copy gsiftp://`prace_service -i -f anselm`/home/prace/_YOUR_ACCOUNT_ON_ANSELM_/_PATH_TO_YOUR_FILE_ file://_LOCAL_PATH_TO_YOUR_FILE_
@@ -170,7 +170,7 @@ Copy files **to** Anselm by running the following commands on your local machine
$ globus-url-copy file://_LOCAL_PATH_TO_YOUR_FILE_ gsiftp://gridftp.anselm.it4i.cz:2812/home/prace/_YOUR_ACCOUNT_ON_ANSELM_/_PATH_TO_YOUR_FILE_
```
Or by using the prace_service script:
```bash
$ globus-url-copy file://_LOCAL_PATH_TO_YOUR_FILE_ gsiftp://`prace_service -e -f anselm`/home/prace/_YOUR_ACCOUNT_ON_ANSELM_/_PATH_TO_YOUR_FILE_
@@ -182,7 +182,7 @@ Copy files **from** Anselm:
$ globus-url-copy gsiftp://gridftp.anselm.it4i.cz:2812/home/prace/_YOUR_ACCOUNT_ON_ANSELM_/_PATH_TO_YOUR_FILE_ file://_LOCAL_PATH_TO_YOUR_FILE_
```
Or by using the prace_service script:
```bash
$ globus-url-copy gsiftp://`prace_service -e -f anselm`/home/prace/_YOUR_ACCOUNT_ON_ANSELM_/_PATH_TO_YOUR_FILE_ file://_LOCAL_PATH_TO_YOUR_FILE_
@@ -98,7 +98,7 @@ $ ssh login2.anselm.it4i.cz -L 5901:localhost:5901
```
x-window-system/
If you use Windows and PuTTY, please refer to the port forwarding setup in the documentation:
[x-window-and-vnc#section-12](../get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system/)
#### 7. If You Don't Have Turbo VNC Installed on Your Workstation
@@ -59,9 +59,9 @@ Options:
  --get-node-ncpu-chart
                        Print chart of allocated ncpus per node
  --summary             Print summary
  --get-server-details  Print server
  --get-queues          Print queues
  --get-queues-details  Print queues details
  --get-reservations    Print reservations
  --get-reservations-details
                        Print reservations details
@@ -92,7 +92,7 @@ Options:
  --get-user-ncpus      Print number of allocated ncpus per user
  --get-qlist-nodes     Print qlist nodes
  --get-qlist-nodeset   Print qlist nodeset
  --get-ibswitch-nodes  Print ibswitch nodes
  --get-ibswitch-nodeset
                        Print ibswitch nodeset
  --state=STATE         Only for given job state
@@ -47,7 +47,7 @@ After logging in, you will see the command prompt:
http://www.it4i.cz/?lang=en
Last login: Tue Jul 9 15:57:38 2013 from your-host.example.com
[username@login2.anselm ~]$
```
@@ -194,7 +194,7 @@ Once the proxy server is running, establish ssh port forwarding from Anselm to t
local $ ssh -R 6000:localhost:1080 anselm.it4i.cz
```
Now, configure your application's proxy settings to **localhost:6000**. Use port forwarding to access the [proxy server from compute nodes](#port-forwarding-from-compute-nodes) as well.
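A hedged sketch of that second hop, run from a compute node (assuming the forwarded port lives on login node login1):

```bash
$ ssh -TN -f -L 6000:localhost:6000 login1
```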
## Graphical User Interface
@@ -62,11 +62,11 @@ The appropriate dimension of the problem has to be set by parameter (2d/3d).
fluent solver_version [FLUENT_options] -i journal_file -pbs
```
This syntax will start the ANSYS FLUENT job under PBS Professional using the qsub command in a batch manner. When resources are available, PBS Professional will start the job and return a job ID, usually in the form of _job_ID.hostname_. This job ID can then be used to query, control, or stop the job using standard PBS Professional commands, such as qstat or qdel. The job will be run out of the current working directory, and all output will be written to the file fluent.o _job_ID_.
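For example (a hypothetical invocation; 3d selects the solver version and journal.jou is an illustrative journal file name):

```bash
fluent 3d -i journal.jou -pbs
```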
## Running Fluent via User's Config File
The sample script uses a configuration file called pbs_fluent.conf if no command line arguments are present. This configuration file should be present in the directory from which the jobs are submitted (which is also the directory in which the jobs are executed). The following is an example of what the content of pbs_fluent.conf can be:
```bash
input="example_small.flin"
# ANSYS LS-DYNA
**[ANSYS LS-DYNA](http://www.ansys.com/products/structures/ansys-ls-dyna)** software provides convenient and easy-to-use access to the technology-rich, time-tested explicit solver without the need to contend with the complex input requirements of this sophisticated program. Introduced in 1996, ANSYS LS-DYNA capabilities have helped customers in numerous industries to resolve highly intricate design issues. ANSYS Mechanical users have long been able to take advantage of complex explicit solutions utilizing the traditional ANSYS Parametric Design Language (APDL) environment. These explicit capabilities are available to ANSYS Workbench users as well. The Workbench platform is a powerful, comprehensive, easy-to-use environment for engineering simulation. CAD import from all sources, geometry cleanup, automatic meshing, solution, parametric optimization, result visualization and comprehensive report generation are all available within a single fully interactive modern graphical user environment.
To run ANSYS LS-DYNA in batch mode, you can utilize/modify the default ansysdyna.pbs script and execute it via the qsub command.
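A minimal sketch, using the default script named above:

```bash
$ qsub ansysdyna.pbs
```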
@@ -2,7 +2,7 @@
**[SVS FEM](http://www.svsfem.cz/)**, the **[ANSYS Channel partner](http://www.ansys.com/)** for the Czech Republic, provided all ANSYS licenses for the ANSELM cluster and supports all ANSYS products (Multiphysics, Mechanical, MAPDL, CFX, Fluent, Maxwell, LS-DYNA...) for IT staff and ANSYS users. If you run into a problem with ANSYS functionality, please contact [hotline@svsfem.cz](mailto:hotline@svsfem.cz?subject=Ostrava%20-%20ANSELM).
Anselm provides commercial as well as academic variants. Academic variants are distinguished by the word "**Academic...**" in the license name or by the two-letter prefix "**aa\_**" in the license feature name. The license is selected on the command line or directly in the user's PBS file (see the individual products). [More about licensing here](ansys/licensing/)
To load the latest version of any ANSYS product (Mechanical, Fluent, CFX, MAPDL,...) load the module:
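A sketch of that step (assuming the module is simply named ansys; check module avail for the exact name):

```bash
$ module load ansys
```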
@@ -33,7 +33,7 @@ Compilation parameters are default:
Molpro is compiled for parallel execution using MPI and OpenMP. By default, Molpro reads the number of allocated nodes from PBS and launches a data server on one node. On the remaining allocated nodes, compute processes are launched, one process per node, each with 16 threads. You can modify this behavior by using the -n, -t and helper-server options. Please refer to the [Molpro documentation](http://www.molpro.net/info/2010.1/doc/manual/node9.html) for more details.
!!! note
    The OpenMP parallelization in Molpro is limited and has been observed to produce limited scaling. We therefore recommend using MPI parallelization only. This can be achieved by passing the option mpiprocs=16:ompthreads=1 to PBS.
You are advised to use the -d option to point to a directory in the [SCRATCH file system](../../storage/storage/). Molpro can produce a large amount of temporary data during its run, and it is important that these are placed in the fast scratch file system.
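A hedged example of such an invocation (the scratch path and input file name are illustrative):

```bash
$ molpro -d /scratch/$USER/$PBS_JOBID input.com
```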
@@ -18,7 +18,7 @@ For information about the usage of Intel Compilers and other Intel products, ple
## GNU C/C++ and Fortran Compilers
For compatibility reasons, the original (old 4.4.6-4) versions of the GNU compilers are still available as part of the OS. These are accessible in the search path by default.
It is strongly recommended to use the up-to-date version (4.8.1), which comes with the gcc module:
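A sketch of switching to it (the module name gcc comes from the text above):

```bash
$ module load gcc
$ gcc --version
```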
@@ -49,7 +49,7 @@ To run COMSOL in batch mode, without the COMSOL Desktop GUI environment, user ca
#PBS -l select=3:ncpus=16
#PBS -q qprod
#PBS -N JOB_NAME
#PBS -A PROJECT_ID
cd /scratch/$USER/ || exit
@@ -95,7 +95,7 @@ To run LiveLink for MATLAB in batch mode with (comsol_matlab.pbs) job script you
#PBS -l select=3:ncpus=16
#PBS -q qprod
#PBS -N JOB_NAME
#PBS -A PROJECT_ID
cd /scratch/$USER || exit
@@ -53,7 +53,7 @@ Before debugging, you need to compile your code with these flags:
## Starting a Job With DDT
Be sure to log in with X window forwarding enabled. This could mean using the -X option with ssh:
```bash
$ ssh -X username@anselm.it4i.cz
@@ -30,7 +30,7 @@ CUBE is a graphical application. Refer to Graphical User Interface documentation
!!! note
    Analyzing large data sets can consume a large amount of CPU and RAM. Do not perform large analyses on login nodes.
After loading the appropriate module, simply launch the cube command, or alternatively use the scalasca -examine command to launch the GUI. Note that for Scalasca datasets, if you do not analyze the data with scalasca -examine before opening them with CUBE, not all performance data will be available.
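A hedged sketch of that pre-analysis step (the experiment directory name is illustrative; Score-P experiments conventionally contain a profile.cubex):

```bash
$ scalasca -examine scorep_myrun_sum
$ cube scorep_myrun_sum/profile.cubex
```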
References
1\. <http://www.scalasca.org/software/cube-4.x/download.html>
@@ -57,7 +57,7 @@ Sample output:
### Pcm-Msr
Command pcm-msr.x can be used to read/write model specific registers of the CPU.
### Pcm-Numa
@@ -56,7 +56,7 @@ Application: ssh
Application parameters: mic0 source ~/.profile && /path/to/your/bin
Note that we include source ~/.profile in the command to set up the environment paths [as described here](../intel-xeon-phi/).
!!! note
    If the analysis is interrupted or aborted, further analysis on the card might be impossible and you will get errors like "ERROR connecting to MIC card". In this case please contact our support to reboot the MIC card.