diff --git a/docs.it4i/anselm-cluster-documentation/capacity-computing.md b/docs.it4i/anselm-cluster-documentation/capacity-computing.md index 59d58906020f9d73c15c259916c421a5e47b048b..0130ed8ad75028ad10f09fbe7f6113df8519281c 100644 --- a/docs.it4i/anselm-cluster-documentation/capacity-computing.md +++ b/docs.it4i/anselm-cluster-documentation/capacity-computing.md @@ -101,10 +101,10 @@ Check status of the job array by the qstat command. $ qstat -a 12345[].dm2 dm2: - Req'd Req'd Elap -Job ID Username Queue Jobname SessID NDS TSK Memory Time S Time + Req'd Req'd Elap +Job ID Username Queue Jobname SessID NDS TSK Memory Time S Time --------------- -------- -- |---|---| ------ --- --- ------ ----- - ----- -12345[].dm2 user2 qprod xx 13516 1 16 -- 00:50 B 00:02 +12345[].dm2 user2 qprod xx 13516 1 16 -- 00:50 B 00:02 ``` The status B means that some subjobs are already running. @@ -114,16 +114,16 @@ Check status of the first 100 subjobs by the qstat command. $ qstat -a 12345[1-100].dm2 dm2: - Req'd Req'd Elap -Job ID Username Queue Jobname SessID NDS TSK Memory Time S Time + Req'd Req'd Elap +Job ID Username Queue Jobname SessID NDS TSK Memory Time S Time --------------- -------- -- |---|---| ------ --- --- ------ ----- - ----- -12345[1].dm2 user2 qprod xx 13516 1 16 -- 00:50 R 00:02 -12345[2].dm2 user2 qprod xx 13516 1 16 -- 00:50 R 00:02 -12345[3].dm2 user2 qprod xx 13516 1 16 -- 00:50 R 00:01 -12345[4].dm2 user2 qprod xx 13516 1 16 -- 00:50 Q -- +12345[1].dm2 user2 qprod xx 13516 1 16 -- 00:50 R 00:02 +12345[2].dm2 user2 qprod xx 13516 1 16 -- 00:50 R 00:02 +12345[3].dm2 user2 qprod xx 13516 1 16 -- 00:50 R 00:01 +12345[4].dm2 user2 qprod xx 13516 1 16 -- 00:50 Q -- . . . . . . . . . . . , . . . . . . . . . . -12345[100].dm2 user2 qprod xx 13516 1 16 -- 00:50 Q -- +12345[100].dm2 user2 qprod xx 13516 1 16 -- 00:50 Q -- ``` Delete the entire job array. Running subjobs will be killed, queued subjobs will be deleted. @@ -152,7 +152,7 @@ Read more on job arrays in the [PBSPro Users guide](../../pbspro-documentation/) !!! note Use GNU parallel to run many single core tasks on one node. -GNU parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. GNU parallel is most useful in running single core jobs via the queue system on Anselm. +GNU parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. GNU parallel is most useful in running single core jobs via the queue system on Anselm. For more information and examples see the parallel man page: @@ -197,7 +197,7 @@ TASK=$1 cp $PBS_O_WORKDIR/$TASK input # execute the calculation -cat input > output +cat input > output # copy output file to submit directory cp output $PBS_O_WORKDIR/$TASK.out @@ -214,7 +214,7 @@ $ qsub -N JOBNAME jobscript 12345.dm2 ``` -In this example, we submit a job of 101 tasks. 16 input files will be processed in parallel. The 101 tasks on 16 cores are assumed to complete in less than 2 hours. +In this example, we submit a job of 101 tasks. 16 input files will be processed in parallel. The 101 tasks on 16 cores are assumed to complete in less than 2 hours. !!! hint Use #PBS directives at the beginning of the jobscript file; don't forget to set your valid PROJECT_ID and desired queue.
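To make the hint concrete, below is a minimal job array submission sketch; the jobscript name, the 1-900 subjob range, and the walltime are illustrative assumptions rather than values taken from this guide:

```bash
# create a tasklist, one task (input file) per line -- names are hypothetical
$ find . -name '*.input' > tasklist

# submit a job array of 900 subjobs, each on one full node for up to 2 hours
$ qsub -N JOBNAME -J 1-900 -q qprod -l select=1:ncpus=16,walltime=02:00:00 jobscript
```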
@@ -279,10 +279,10 @@ cat input > output cp output $PBS_O_WORKDIR/$TASK.out ``` -In this example, the jobscript executes in multiple instances in parallel, on all cores of a computing node. Variable $TASK expands to one of the input filenames from tasklist. We copy the input file to local scratch, execute the myprog.x and copy the output file back to the submit directory, under the $TASK.out name. The numtasks file controls how many tasks will be run per subjob. Once an task is finished, new task starts, until the number of tasks in numtasks file is reached. +In this example, the jobscript executes in multiple instances in parallel, on all cores of a computing node. Variable $TASK expands to one of the input filenames from tasklist. We copy the input file to local scratch, execute the myprog.x and copy the output file back to the submit directory, under the $TASK.out name. The numtasks file controls how many tasks will be run per subjob. Once a task is finished, a new task starts, until the number of tasks in the numtasks file is reached. !!! note - Select subjob walltime and number of tasks per subjob carefully + Select subjob walltime and number of tasks per subjob carefully When deciding these values, think about the following guiding rules: diff --git a/docs.it4i/anselm-cluster-documentation/environment-and-modules.md b/docs.it4i/anselm-cluster-documentation/environment-and-modules.md index 1439c6733c35f3440df643da8e83e1c6308726c7..2de4cf7f30e8b665f2d4f8b685dfe67f96221aa9 100644 --- a/docs.it4i/anselm-cluster-documentation/environment-and-modules.md +++ b/docs.it4i/anselm-cluster-documentation/environment-and-modules.md @@ -28,7 +28,7 @@ fi ### Application Modules -In order to configure your shell for running particular application on Anselm we use Module package interface. +In order to configure your shell for running a particular application on Anselm, we use the Module package interface. !!! note The modules set up the application paths, library paths and environment variables for running a particular application. @@ -43,7 +43,7 @@ To check available modules use $ module avail ``` -To load a module, for example the octave module use +To load a module, for example the octave module, use ```bash $ module load octave diff --git a/docs.it4i/anselm-cluster-documentation/hardware-overview.md b/docs.it4i/anselm-cluster-documentation/hardware-overview.md index b477688da52a57619221a9bcc22397e0e7769191..94130cf1737b06d64fff0a5b7a7574a2cd301da9 100644 --- a/docs.it4i/anselm-cluster-documentation/hardware-overview.md +++ b/docs.it4i/anselm-cluster-documentation/hardware-overview.md @@ -1,6 +1,6 @@ # Hardware Overview -The Anselm cluster consists of 209 computational nodes named cn[1-209] of which 180 are regular compute nodes, 23 GPU Kepler K20 accelerated nodes, 4 MIC Xeon Phi 5110P accelerated nodes and 2 fat nodes. Each node is a powerful x86-64 computer, equipped with 16 cores (two eight-core Intel Sandy Bridge processors), at least 64 GB RAM, and local hard drive. The user access to the Anselm cluster is provided by two login nodes login[1,2]. The nodes are interlinked by high speed InfiniBand and Ethernet networks. All nodes share 320 TB /home disk storage to store the user files. The 146 TB shared /scratch storage is available for the scratch data. +The Anselm cluster consists of 209 computational nodes named cn[1-209], of which 180 are regular compute nodes, 23 GPU Kepler K20 accelerated nodes, 4 MIC Xeon Phi 5110P accelerated nodes and 2 fat nodes.
Each node is a powerful x86-64 computer, equipped with 16 cores (two eight-core Intel Sandy Bridge processors), at least 64 GB RAM, and a local hard drive. User access to the Anselm cluster is provided by two login nodes, login[1,2]. The nodes are interlinked by high speed InfiniBand and Ethernet networks. All nodes share 320 TB /home disk storage to store the user files. The 146 TB shared /scratch storage is available for the scratch data. The Fat nodes are equipped with a large amount (512 GB) of memory. Virtualization infrastructure provides resources to run long term servers and services in virtual mode. Fat nodes and virtual servers may access 45 TB of dedicated block storage. Accelerated nodes, fat nodes, and virtualization infrastructure are available [upon request](https://support.it4i.cz/rt) made by a PI. diff --git a/docs.it4i/anselm-cluster-documentation/introduction.md b/docs.it4i/anselm-cluster-documentation/introduction.md index 6cf377ecf0fe651ee18040170eeb29a5578f4907..73e8631574b0f2200b2efe6db084a6c0073e4d07 100644 --- a/docs.it4i/anselm-cluster-documentation/introduction.md +++ b/docs.it4i/anselm-cluster-documentation/introduction.md @@ -2,7 +2,7 @@ Welcome to Anselm supercomputer cluster. The Anselm cluster consists of 209 compute nodes, totaling 3344 compute cores with 15 TB RAM and giving over 94 TFLOP/s theoretical peak performance. Each node is a powerful x86-64 computer, equipped with 16 cores, at least 64 GB RAM, and 500 GB hard disk drive. Nodes are interconnected by fully non-blocking fat-tree InfiniBand network and equipped with Intel Sandy Bridge processors. A few nodes are also equipped with NVIDIA Kepler GPU or Intel Xeon Phi MIC accelerators. Read more in [Hardware Overview](hardware-overview/). -The cluster runs [operating system](software/operating-system/), which is compatible with the RedHat [Linux family.](http://upload.wikimedia.org/wikipedia/commons/1/1b/Linux_Distribution_Timeline.svg) We have installed a wide range of software packages targeted at different scientific domains. These packages are accessible via the [modules environment](environment-and-modules/). +The cluster runs an [operating system](software/operating-system/) which is compatible with the RedHat [Linux family](http://upload.wikimedia.org/wikipedia/commons/1/1b/Linux_Distribution_Timeline.svg). We have installed a wide range of software packages targeted at different scientific domains. These packages are accessible via the [modules environment](environment-and-modules/). User data shared file-system (HOME, 320 TB) and job data shared file-system (SCRATCH, 146 TB) are available to users. diff --git a/docs.it4i/anselm-cluster-documentation/job-submission-and-execution.md b/docs.it4i/anselm-cluster-documentation/job-submission-and-execution.md index 2f76f9280b0751802d69d11587aee54f41f611b7..5a3d528746f3ae1319a4447c84f638586db2eaad 100644 --- a/docs.it4i/anselm-cluster-documentation/job-submission-and-execution.md +++ b/docs.it4i/anselm-cluster-documentation/job-submission-and-execution.md @@ -40,13 +40,13 @@ In this example, we allocate 4 nodes, 16 cores per node, for 1 hour. We allocate $ qsub -A OPEN-0-0 -q qnvidia -l select=10:ncpus=16 ./myjob ``` -In this example, we allocate 10 nvidia accelerated nodes, 16 cores per node, for 24 hours. We allocate these resources via the qnvidia queue. Jobscript myjob will be executed on the first node in the allocation. +In this example, we allocate 10 nvidia accelerated nodes, 16 cores per node, for 24 hours.
We allocate these resources via the qnvidia queue. Jobscript myjob will be executed on the first node in the allocation. ```bash $ qsub -A OPEN-0-0 -q qfree -l select=10:ncpus=16 ./myjob ``` -In this example, we allocate 10 nodes, 16 cores per node, for 12 hours. We allocate these resources via the qfree queue. It is not required that the project OPEN-0-0 has any available resources left. Consumed resources are still accounted for. Jobscript myjob will be executed on the first node in the allocation. +In this example, we allocate 10 nodes, 16 cores per node, for 12 hours. We allocate these resources via the qfree queue. It is not required that the project OPEN-0-0 has any available resources left. Consumed resources are still accounted for. Jobscript myjob will be executed on the first node in the allocation. All qsub options may be [saved directly into the jobscript](job-submission-and-execution/#PBSsaved). In such a case, no options to qsub are needed. @@ -126,7 +126,7 @@ In the following example, we select an allocation for benchmarking a very specia -N Benchmark ./mybenchmark ``` -The MPI processes will be distributed differently on the nodes connected to the two switches. On the isw10 nodes, we will run 1 MPI process per node 16 threads per process, on isw20 nodes we will run 16 plain MPI processes. +The MPI processes will be distributed differently on the nodes connected to the two switches. On the isw10 nodes, we will run 1 MPI process per node with 16 threads per process; on the isw20 nodes, we will run 16 plain MPI processes. Although this example is somewhat artificial, it demonstrates the flexibility of the qsub command options. @@ -148,12 +148,12 @@ Example: $ qstat -a srv11: - Req'd Req'd Elap -Job ID Username Queue Jobname SessID NDS TSK Memory Time S Time + Req'd Req'd Elap +Job ID Username Queue Jobname SessID NDS TSK Memory Time S Time --------------- -------- -- |---|---| ------ --- --- ------ ----- - ----- -16287.srv11 user1 qlong job1 6183 4 64 -- 144:0 R 38:25 -16468.srv11 user1 qlong job2 8060 4 64 -- 144:0 R 17:44 -16547.srv11 user2 qprod job3x 13516 2 32 -- 48:00 R 00:58 +16287.srv11 user1 qlong job1 6183 4 64 -- 144:0 R 38:25 +16468.srv11 user1 qlong job2 8060 4 64 -- 144:0 R 17:44 +16547.srv11 user2 qprod job3x 13516 2 32 -- 48:00 R 00:58 ``` In this example, user1 and user2 are running jobs named job1, job2 and job3x. The jobs job1 and job2 are using 4 nodes, 16 cores per node each. The job1 already runs for 38 hours and 25 minutes, job2 for 17 hours 44 minutes. The job1 already consumed `64 x 38.41 = 2458.6` core hours. The job3x already consumed `(58/60) x 32 = 30.93` core hours. These consumed core hours will be accounted on the respective project accounts, regardless of whether the allocated cores were actually used for computations.
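The accounting arithmetic above is easy to reproduce; a quick sanity-check sketch with bc, using the job1 values from the qstat listing:

```bash
# core-hours = allocated cores x elapsed wall time
# job1 above: 64 cores, elapsed 38:25
$ echo "scale=4; 64 * (38 + 25/60)" | bc
2458.6624
```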
@@ -251,10 +251,10 @@ $ qsub -q qexp -l select=4:ncpus=16 -N Name0 ./myjob $ qstat -n -u username srv11: - Req'd Req'd Elap -Job ID Username Queue Jobname SessID NDS TSK Memory Time S Time + Req'd Req'd Elap +Job ID Username Queue Jobname SessID NDS TSK Memory Time S Time --------------- -------- -- |---|---| ------ --- --- ------ ----- - ----- -15209.srv11 username qexp Name0 5530 4 64 -- 01:00 R 00:00 +15209.srv11 username qexp Name0 5530 4 64 -- 01:00 R 00:00 cn17/0*16+cn108/0*16+cn109/0*16+cn110/0*16 ``` diff --git a/docs.it4i/anselm-cluster-documentation/network.md b/docs.it4i/anselm-cluster-documentation/network.md index a682f44ff119881d0f0e1ad1695afdd8046b5ec0..a2af06f97a85472d327eeffc4a743d5eb70d6bb1 100644 --- a/docs.it4i/anselm-cluster-documentation/network.md +++ b/docs.it4i/anselm-cluster-documentation/network.md @@ -22,10 +22,10 @@ The compute nodes may be accessed via the regular Gigabit Ethernet network inter ```bash $ qsub -q qexp -l select=4:ncpus=16 -N Name0 ./myjob $ qstat -n -u username - Req'd Req'd Elap -Job ID Username Queue Jobname SessID NDS TSK Memory Time S Time + Req'd Req'd Elap +Job ID Username Queue Jobname SessID NDS TSK Memory Time S Time --------------- -------- -- |---|---| ------ --- --- ------ ----- - ----- -15209.srv11 username qexp Name0 5530 4 64 -- 01:00 R 00:00 +15209.srv11 username qexp Name0 5530 4 64 -- 01:00 R 00:00 cn17/0*16+cn108/0*16+cn109/0*16+cn110/0*16 $ ssh 10.2.1.110 diff --git a/docs.it4i/anselm-cluster-documentation/prace.md b/docs.it4i/anselm-cluster-documentation/prace.md index 1754d8e28202c6c553e9607846f4ab664a600bbd..84200c820f2ec9956c712539c52917fd80106b2d 100644 --- a/docs.it4i/anselm-cluster-documentation/prace.md +++ b/docs.it4i/anselm-cluster-documentation/prace.md @@ -137,7 +137,7 @@ Copy files **to** Anselm by running the following commands on your local machine $ globus-url-copy file://_LOCAL_PATH_TO_YOUR_FILE_ gsiftp://gridftp-prace.anselm.it4i.cz:2812/home/prace/_YOUR_ACCOUNT_ON_ANSELM_/_PATH_TO_YOUR_FILE_ ``` -Or by using prace_service script: +Or by using the prace_service script: ```bash $ globus-url-copy file://_LOCAL_PATH_TO_YOUR_FILE_ gsiftp://`prace_service -i -f anselm`/home/prace/_YOUR_ACCOUNT_ON_ANSELM_/_PATH_TO_YOUR_FILE_ @@ -149,7 +149,7 @@ Copy files **from** Anselm: $ globus-url-copy gsiftp://gridftp-prace.anselm.it4i.cz:2812/home/prace/_YOUR_ACCOUNT_ON_ANSELM_/_PATH_TO_YOUR_FILE_ file://_LOCAL_PATH_TO_YOUR_FILE_ ``` -Or by using prace_service script: +Or by using the prace_service script: ```bash $ globus-url-copy gsiftp://`prace_service -i -f anselm`/home/prace/_YOUR_ACCOUNT_ON_ANSELM_/_PATH_TO_YOUR_FILE_ file://_LOCAL_PATH_TO_YOUR_FILE_ @@ -170,7 +170,7 @@ Copy files **to** Anselm by running the following commands on your local machine $ globus-url-copy file://_LOCAL_PATH_TO_YOUR_FILE_ gsiftp://gridftp.anselm.it4i.cz:2812/home/prace/_YOUR_ACCOUNT_ON_ANSELM_/_PATH_TO_YOUR_FILE_ ``` -Or by using prace_service script: +Or by using the prace_service script: ```bash $ globus-url-copy file://_LOCAL_PATH_TO_YOUR_FILE_ gsiftp://`prace_service -e -f anselm`/home/prace/_YOUR_ACCOUNT_ON_ANSELM_/_PATH_TO_YOUR_FILE_ @@ -182,7 +182,7 @@ Copy files **from** Anselm: $ globus-url-copy gsiftp://gridftp.anselm.it4i.cz:2812/home/prace/_YOUR_ACCOUNT_ON_ANSELM_/_PATH_TO_YOUR_FILE_ file://_LOCAL_PATH_TO_YOUR_FILE_ ``` -Or by using prace_service script: +Or by using the prace_service script: ```bash $ globus-url-copy gsiftp://`prace_service -e -f anselm`/home/prace/_YOUR_ACCOUNT_ON_ANSELM_/_PATH_TO_YOUR_FILE_ file://_LOCAL_PATH_TO_YOUR_FILE_
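# A hypothetical variant, not taken from the guide above: transfer a whole
# directory in one go. -cd creates missing destination directories and -r
# recurses; both are standard globus-url-copy options.
$ globus-url-copy -r -cd file://$HOME/results/ gsiftp://`prace_service -e -f anselm`/home/prace/_YOUR_ACCOUNT_ON_ANSELM_/results/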
diff --git a/docs.it4i/anselm-cluster-documentation/remote-visualization.md b/docs.it4i/anselm-cluster-documentation/remote-visualization.md index fc01d18790668c5a4eee13dbc0c8090f8cd3d89c..171c8e003216d4a6d3efe573e7bd9ef15d88070d 100644 --- a/docs.it4i/anselm-cluster-documentation/remote-visualization.md +++ b/docs.it4i/anselm-cluster-documentation/remote-visualization.md @@ -98,7 +98,7 @@ $ ssh login2.anselm.it4i.cz -L 5901:localhost:5901 ``` x-window-system/ -If you use Windows and Putty, please refer to port forwarding setup in the documentation: +If you use Windows and PuTTY, please refer to the port forwarding setup in the documentation: [x-window-and-vnc#section-12](../get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system/) #### 7. If You Don't Have Turbo VNC Installed on Your Workstation diff --git a/docs.it4i/anselm-cluster-documentation/resources-allocation-policy.md b/docs.it4i/anselm-cluster-documentation/resources-allocation-policy.md index ba4dde0614f2159082eaf6983a867af9b66d4ab1..5668df86761b633ce7f67b73a9eba724351037e6 100644 --- a/docs.it4i/anselm-cluster-documentation/resources-allocation-policy.md +++ b/docs.it4i/anselm-cluster-documentation/resources-allocation-policy.md @@ -59,9 +59,9 @@ Options: --get-node-ncpu-chart Print chart of allocated ncpus per node --summary Print summary - --get-server-details Print server + --get-server-details Print server --get-queues Print queues - --get-queues-details Print queues details + --get-queues-details Print queues details --get-reservations Print reservations --get-reservations-details Print reservations details @@ -92,7 +92,7 @@ Options: --get-user-ncpus Print number of allocated ncpus per user --get-qlist-nodes Print qlist nodes --get-qlist-nodeset Print qlist nodeset - --get-ibswitch-nodes Print ibswitch nodes + --get-ibswitch-nodes Print ibswitch nodes --get-ibswitch-nodeset Print ibswitch nodeset --state=STATE Only for given job state diff --git a/docs.it4i/anselm-cluster-documentation/shell-and-data-access.md b/docs.it4i/anselm-cluster-documentation/shell-and-data-access.md index 38fbda64678ccf7c7a406163ce8f27be753ac67a..bffc71a5010d2b5ec60645b3e3a1a3ce91d298cd 100644 --- a/docs.it4i/anselm-cluster-documentation/shell-and-data-access.md +++ b/docs.it4i/anselm-cluster-documentation/shell-and-data-access.md @@ -47,7 +47,7 @@ After logging in, you will see the command prompt: http://www.it4i.cz/?lang=en -Last login: Tue Jul 9 15:57:38 2013 from your-host.example.com +Last login: Tue Jul 9 15:57:38 2013 from your-host.example.com [username@login2.anselm ~]$ ``` @@ -194,7 +194,7 @@ Once the proxy server is running, establish ssh port forwarding from Anselm to t local $ ssh -R 6000:localhost:1080 anselm.it4i.cz ``` -Now, configure the applications proxy settings to **localhost:6000**. Use port forwarding to access the [proxy server from compute nodes](#port-forwarding-from-compute-nodes) as well. +Now, configure the application's proxy settings to **localhost:6000**. Use port forwarding to access the [proxy server from compute nodes](#port-forwarding-from-compute-nodes) as well.
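Port forwarding from a compute node follows the same pattern; a sketch, assuming the proxy from the paragraph above is already reachable on port 6000 of the login node (the node name login1 is illustrative):

```bash
# run on the compute node: make localhost:6000 on the node reach the
# forwarded proxy port on the login node
$ ssh -TN -f -L 6000:localhost:6000 login1
```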
## Graphical User Interface diff --git a/docs.it4i/anselm-cluster-documentation/software/ansys/ansys-fluent.md b/docs.it4i/anselm-cluster-documentation/software/ansys/ansys-fluent.md index 89fedb100bd003835dd0f8efa8f21431790c5f88..c44130cdc11864932e02ab093d15bafbee40903e 100644 --- a/docs.it4i/anselm-cluster-documentation/software/ansys/ansys-fluent.md +++ b/docs.it4i/anselm-cluster-documentation/software/ansys/ansys-fluent.md @@ -62,11 +62,11 @@ The appropriate dimension of the problem has to be set by parameter (2d/3d). fluent solver_version [FLUENT_options] -i journal_file -pbs ``` -This syntax will start the ANSYS FLUENT job under PBS Professional using the qsub command in a batch manner. When resources are available, PBS Professional will start the job and return a job ID, usually in the form of _job_ID.hostname_. This job ID can then be used to query, control, or stop the job using standard PBS Professional commands, such as qstat or qdel. The job will be run out of the current working directory, and all output will be written to the file fluent.o _job_ID_. +This syntax will start the ANSYS FLUENT job under PBS Professional using the qsub command in a batch manner. When resources are available, PBS Professional will start the job and return a job ID, usually in the form of _job_ID.hostname_. This job ID can then be used to query, control, or stop the job using standard PBS Professional commands, such as qstat or qdel. The job will be run out of the current working directory, and all output will be written to the file fluent.o _job_ID_. ## Running Fluent via User's Config File -The sample script uses a configuration file called pbs_fluent.conf if no command line arguments are present. This configuration file should be present in the directory from which the jobs are submitted (which is also the directory in which the jobs are executed). The following is an example of what the content of pbs_fluent.conf can be: +The sample script uses a configuration file called pbs_fluent.conf if no command line arguments are present. This configuration file should be present in the directory from which the jobs are submitted (which is also the directory in which the jobs are executed). The following is an example of what the content of pbs_fluent.conf can be: ```bash input="example_small.flin" diff --git a/docs.it4i/anselm-cluster-documentation/software/ansys/ansys-ls-dyna.md b/docs.it4i/anselm-cluster-documentation/software/ansys/ansys-ls-dyna.md index e033709fd65858003bbfcb8bd931b74e455ee499..73c18686e3005610838f3a974723680839fa5cdf 100644 --- a/docs.it4i/anselm-cluster-documentation/software/ansys/ansys-ls-dyna.md +++ b/docs.it4i/anselm-cluster-documentation/software/ansys/ansys-ls-dyna.md @@ -1,6 +1,6 @@ # ANSYS LS-DYNA -**[ANSYSLS-DYNA](http://www.ansys.com/products/structures/ansys-ls-dyna)** software provides convenient and easy-to-use access to the technology-rich, time-tested explicit solver without the need to contend with the complex input requirements of this sophisticated program. Introduced in 1996, ANSYS LS-DYNA capabilities have helped customers in numerous industries to resolve highly intricate design issues. ANSYS Mechanical users have been able take advantage of complex explicit solutions for a long time utilizing the traditional ANSYS Parametric Design Language (APDL) environment. These explicit capabilities are available to ANSYS Workbench users as well. The Workbench platform is a powerful, comprehensive, easy-to-use environment for engineering simulation. 
CAD import from all sources, geometry cleanup, automatic meshing, solution, parametric optimization, result visualization and comprehensive report generation are all available within a single fully interactive modern graphical user environment. +**[ANSYS LS-DYNA](http://www.ansys.com/products/structures/ansys-ls-dyna)** software provides convenient and easy-to-use access to the technology-rich, time-tested explicit solver without the need to contend with the complex input requirements of this sophisticated program. Introduced in 1996, ANSYS LS-DYNA capabilities have helped customers in numerous industries to resolve highly intricate design issues. ANSYS Mechanical users have been able to take advantage of complex explicit solutions for a long time utilizing the traditional ANSYS Parametric Design Language (APDL) environment. These explicit capabilities are available to ANSYS Workbench users as well. The Workbench platform is a powerful, comprehensive, easy-to-use environment for engineering simulation. CAD import from all sources, geometry cleanup, automatic meshing, solution, parametric optimization, result visualization and comprehensive report generation are all available within a single fully interactive modern graphical user environment. To run ANSYS LS-DYNA in batch mode you can utilize/modify the default ansysdyna.pbs script and execute it via the qsub command. diff --git a/docs.it4i/anselm-cluster-documentation/software/ansys/ansys.md b/docs.it4i/anselm-cluster-documentation/software/ansys/ansys.md index 1ab4477c2f3c477695b072bfec6340c371d37f12..1bdb1def09ff581f6bf5987edba13a8285308aaf 100644 --- a/docs.it4i/anselm-cluster-documentation/software/ansys/ansys.md +++ b/docs.it4i/anselm-cluster-documentation/software/ansys/ansys.md @@ -2,7 +2,7 @@ **[SVS FEM](http://www.svsfem.cz/)** as **[ANSYS Channel partner](http://www.ansys.com/)** for the Czech Republic provided all ANSYS licenses for the ANSELM cluster and supports all ANSYS Products (Multiphysics, Mechanical, MAPDL, CFX, Fluent, Maxwell, LS-DYNA...) to IT staff and ANSYS users. If you encounter a problem with ANSYS functionality, please contact [hotline@svsfem.cz](mailto:hotline@svsfem.cz?subject=Ostrava%20-%20ANSELM) -Anselm provides commercial as well as academic variants. Academic variants are distinguished by "**Academic...**" word in the name of license or by two letter preposition "**aa\_**" in the license feature name. Change of license is realized on command line respectively directly in user's PBS file (see individual products). [ More about licensing here](ansys/licensing/) +Anselm provides commercial as well as academic variants. Academic variants are distinguished by the word "**Academic...**" in the license name or by the two-letter prefix "**aa\_**" in the license feature name. The license can be changed on the command line or directly in the user's PBS file (see individual products). [More about licensing here](ansys/licensing/) To load the latest version of any ANSYS product (Mechanical, Fluent, CFX, MAPDL,...)
load the module: diff --git a/docs.it4i/anselm-cluster-documentation/software/chemistry/molpro.md b/docs.it4i/anselm-cluster-documentation/software/chemistry/molpro.md index e8827e17c777055dbf9d916d5fd55174be129bee..9b08cb6ec8d2137e936f391eae4af97789d4f229 100644 --- a/docs.it4i/anselm-cluster-documentation/software/chemistry/molpro.md +++ b/docs.it4i/anselm-cluster-documentation/software/chemistry/molpro.md @@ -33,7 +33,7 @@ Compilation parameters are default: Molpro is compiled for parallel execution using MPI and OpenMP. By default, Molpro reads the number of allocated nodes from PBS and launches a data server on one node. On the remaining allocated nodes, compute processes are launched, one process per node, each with 16 threads. You can modify this behavior by using -n, -t and helper-server options. Please refer to the [Molpro documentation](http://www.molpro.net/info/2010.1/doc/manual/node9.html) for more details. !!! note - The OpenMP parallelization in Molpro is limited and has been observed to produce limited scaling. We therefore recommend to use MPI parallelization only. This can be achieved by passing option mpiprocs=16:ompthreads=1 to PBS. + The OpenMP parallelization in Molpro is limited and has been observed to produce limited scaling. We therefore recommend using MPI parallelization only. This can be achieved by passing the option mpiprocs=16:ompthreads=1 to PBS. You are advised to use the -d option to point to a directory in [SCRATCH file system](../../storage/storage/). Molpro can produce a large amount of temporary data during its run, and it is important that these are placed in the fast scratch file system. diff --git a/docs.it4i/anselm-cluster-documentation/software/compilers.md b/docs.it4i/anselm-cluster-documentation/software/compilers.md index deb6c1122ee897e1eeb2a7df4a44158ac53ace58..d680a4b536a39ec6c6246dc2dbaab3e7d8e097e2 100644 --- a/docs.it4i/anselm-cluster-documentation/software/compilers.md +++ b/docs.it4i/anselm-cluster-documentation/software/compilers.md @@ -18,7 +18,7 @@ For information about the usage of Intel Compilers and other Intel products, ple ## GNU C/C++ and Fortran Compilers -For compatibility reasons there are still available the original (old 4.4.6-4) versions of GNU compilers as part of the OS. These are accessible in the search path by default. +For compatibility reasons, the original (old 4.4.6-4) versions of the GNU compilers are still available as part of the OS. These are accessible in the search path by default.
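A quick way to confirm which compiler is active before and after loading the module recommended in the next sentence (the version strings are illustrative):

```bash
$ gcc --version    # prints the OS default, 4.4.6
$ module load gcc
$ gcc --version    # now prints the module version, 4.8.1
```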
It is strongly recommended to use the up-to-date version (4.8.1), which comes with the module gcc: diff --git a/docs.it4i/anselm-cluster-documentation/software/comsol-multiphysics.md b/docs.it4i/anselm-cluster-documentation/software/comsol-multiphysics.md index 5dd2b59ef6413cd27c2b47f5740c9affbadf995b..dcb5a8f6760d2a3064da640ca16f9f543c95963b 100644 --- a/docs.it4i/anselm-cluster-documentation/software/comsol-multiphysics.md +++ b/docs.it4i/anselm-cluster-documentation/software/comsol-multiphysics.md @@ -49,7 +49,7 @@ To run COMSOL in batch mode, without the COMSOL Desktop GUI environment, user ca #PBS -l select=3:ncpus=16 #PBS -q qprod #PBS -N JOB_NAME -#PBS -A PROJECT_ID +#PBS -A PROJECT_ID cd /scratch/$USER/ || exit @@ -95,7 +95,7 @@ To run LiveLink for MATLAB in batch mode with (comsol_matlab.pbs) job script you #PBS -l select=3:ncpus=16 #PBS -q qprod #PBS -N JOB_NAME -#PBS -A PROJECT_ID +#PBS -A PROJECT_ID cd /scratch/$USER || exit diff --git a/docs.it4i/anselm-cluster-documentation/software/debuggers/allinea-ddt.md b/docs.it4i/anselm-cluster-documentation/software/debuggers/allinea-ddt.md index bd5ece9bd34415d42838ce75defb300695738338..7c581fd14d872d0b81694de8d05b663c26a678a8 100644 --- a/docs.it4i/anselm-cluster-documentation/software/debuggers/allinea-ddt.md +++ b/docs.it4i/anselm-cluster-documentation/software/debuggers/allinea-ddt.md @@ -53,7 +53,7 @@ Before debugging, you need to compile your code with these flags: ## Starting a Job With DDT -Be sure to log in with an X window forwarding enabled. This could mean using the -X in the ssh: +Be sure to log in with X window forwarding enabled. This could mean using the -X option in ssh: ```bash $ ssh -X username@anselm.it4i.cz diff --git a/docs.it4i/anselm-cluster-documentation/software/debuggers/cube.md b/docs.it4i/anselm-cluster-documentation/software/debuggers/cube.md index 799b10bad52639bf995ed19ae609c1ef2e42c503..94646c3ea0e5a6f07f478b6fd926c971a2b0a0e6 100644 --- a/docs.it4i/anselm-cluster-documentation/software/debuggers/cube.md +++ b/docs.it4i/anselm-cluster-documentation/software/debuggers/cube.md @@ -30,7 +30,7 @@ CUBE is a graphical application. Refer to Graphical User Interface documentation !!! note Analyzing large data sets can consume a large amount of CPU and RAM. Do not perform large analysis on login nodes. -After loading the appropriate module, simply launch cube command, or alternatively you can use scalasca -examine command to launch the GUI. Note that for Scalasca datasets, if you do not analyze the data with scalasca -examine before to opening them with CUBE, not all performance data will be available. +After loading the appropriate module, simply launch the cube command, or alternatively you can use the scalasca -examine command to launch the GUI. Note that for Scalasca datasets, if you do not analyze the data with scalasca -examine before opening them with CUBE, not all performance data will be available. References 1\.
<http://www.scalasca.org/software/cube-4.x/download.html> diff --git a/docs.it4i/anselm-cluster-documentation/software/debuggers/intel-performance-counter-monitor.md b/docs.it4i/anselm-cluster-documentation/software/debuggers/intel-performance-counter-monitor.md index d9b878254aab451e1ec5b8c9ffe7efb1a3dca363..c6be6a384588a3ab86756ba827c97398ae57b8ce 100644 --- a/docs.it4i/anselm-cluster-documentation/software/debuggers/intel-performance-counter-monitor.md +++ b/docs.it4i/anselm-cluster-documentation/software/debuggers/intel-performance-counter-monitor.md @@ -57,7 +57,7 @@ Sample output: ### Pcm-Msr -Command pcm-msr.x can be used to read/write model specific registers of the CPU. +The pcm-msr.x command can be used to read/write model-specific registers of the CPU. ### Pcm-Numa diff --git a/docs.it4i/anselm-cluster-documentation/software/debuggers/intel-vtune-amplifier.md b/docs.it4i/anselm-cluster-documentation/software/debuggers/intel-vtune-amplifier.md index 3c3f1a8af340e402b2e8797a8fbb802ae2b5257d..f3cb6fd44547d26a4e7fd9e589e7b55449ceaa6b 100644 --- a/docs.it4i/anselm-cluster-documentation/software/debuggers/intel-vtune-amplifier.md +++ b/docs.it4i/anselm-cluster-documentation/software/debuggers/intel-vtune-amplifier.md @@ -56,7 +56,7 @@ Application: ssh Application parameters: mic0 source ~/.profile && /path/to/your/bin -Note that we include source ~/.profile in the command to setup environment paths [as described here](../intel-xeon-phi/). +Note that we include source ~/.profile in the command to set up environment paths [as described here](../intel-xeon-phi/). !!! note If the analysis is interrupted or aborted, further analysis on the card might be impossible and you will get errors like "ERROR connecting to MIC card". In this case please contact our support to reboot the MIC card. diff --git a/docs.it4i/anselm-cluster-documentation/software/debuggers/papi.md b/docs.it4i/anselm-cluster-documentation/software/debuggers/papi.md index ee7d63fe69c9440d50da4200bdca431687ab9cec..ec17791fa00aa231621847ae1c8a9ba815bd0a4c 100644 --- a/docs.it4i/anselm-cluster-documentation/software/debuggers/papi.md +++ b/docs.it4i/anselm-cluster-documentation/software/debuggers/papi.md @@ -20,7 +20,7 @@ This will load the default version. Execute module avail papi for a list of inst ## Utilities -The bin directory of PAPI (which is automatically added to $PATH upon loading the module) contains various utilites. +The bin directory of PAPI (which is automatically added to $PATH upon loading the module) contains various utilities. ### Papi_avail diff --git a/docs.it4i/anselm-cluster-documentation/software/debuggers/score-p.md b/docs.it4i/anselm-cluster-documentation/software/debuggers/score-p.md index f0d0c33b8e48afa24e51d6540d53705dfa1e477a..f66fa5329e3edae89cb989a219cdb33d9755c046 100644 --- a/docs.it4i/anselm-cluster-documentation/software/debuggers/score-p.md +++ b/docs.it4i/anselm-cluster-documentation/software/debuggers/score-p.md @@ -34,14 +34,14 @@ $ mpif90 -o myapp foo.o bar.o with: ```bash -$ scorep mpif90 -c foo.f90 -$ scorep mpif90 -c bar.f90 -$ scorep mpif90 -o myapp foo.o bar.o +$ scorep mpif90 -c foo.f90 +$ scorep mpif90 -c bar.f90 +$ scorep mpif90 -o myapp foo.o bar.o ``` -Usually your program is compiled using a Makefile or similar script, so it advisable to add the scorep command to your definition of variables CC, CXX, FCC etc. +Usually your program is compiled using a Makefile or similar script, so it is advisable to add the scorep command to your definition of the variables CC, CXX, FCC, etc.
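One way to apply that advice without editing the Makefile itself is to override the variables on the make command line; a sketch, assuming the Makefile uses the conventional CC/FC variables:

```bash
# prepend scorep to the compile and link commands for this build only
$ make CC="scorep mpicc" FC="scorep mpif90"
```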
-It is important that scorep is prepended also to the linking command, in order to link with Score-P instrumentation libraries. +It is important that scorep is also prepended to the linking command, in order to link with the Score-P instrumentation libraries. ### Manual Instrumentation Using API Calls @@ -78,7 +78,7 @@ Please refer to the [documentation for description of the API](https://silc.zih. ### Manual Instrumentation Using Directives -This method uses POMP2 directives to mark regions to be instrumented. To use this method, use command scorep --pomp. +This method uses POMP2 directives to mark regions to be instrumented. To use this method, use the command scorep --pomp. Example directives in C/C++ : diff --git a/docs.it4i/anselm-cluster-documentation/software/debuggers/total-view.md b/docs.it4i/anselm-cluster-documentation/software/debuggers/total-view.md index 2265a89b6e4b51024f36fcabb0b537426931ca60..635b51bd1e45f73a7d100011b1af88c027215a4a 100644 --- a/docs.it4i/anselm-cluster-documentation/software/debuggers/total-view.md +++ b/docs.it4i/anselm-cluster-documentation/software/debuggers/total-view.md @@ -20,7 +20,7 @@ You can check the status of the licenses here: # totalview # ------------------------------------------------- - # FEATURE TOTAL USED AVAIL + # FEATURE TOTAL USED AVAIL # ------------------------------------------------- TotalView_Team 64 0 64 Replay 64 0 64 diff --git a/docs.it4i/anselm-cluster-documentation/software/debuggers/valgrind.md b/docs.it4i/anselm-cluster-documentation/software/debuggers/valgrind.md index bfcfc9a86aeb60b88cf8a06ce45fd741bd34768d..494ec69a53ab6c72975bfd0f7f4403489ae07526 100644 --- a/docs.it4i/anselm-cluster-documentation/software/debuggers/valgrind.md +++ b/docs.it4i/anselm-cluster-documentation/software/debuggers/valgrind.md @@ -156,7 +156,7 @@ The default version without MPI support will however report a large number of fa ==30166== by 0x4008BD: main (valgrind-example-mpi.c:18) ``` -so it is better to use the MPI-enabled valgrind from module. The MPI version requires library /apps/tools/valgrind/3.9.0/impi/lib/valgrind/libmpiwrap-amd64-linux.so, which must be included in the LD_PRELOAD environment variable. +so it is better to use the MPI-enabled valgrind from the module. The MPI version requires the library /apps/tools/valgrind/3.9.0/impi/lib/valgrind/libmpiwrap-amd64-linux.so, which must be included in the LD_PRELOAD environment variable. Let's look at this MPI example: diff --git a/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-mkl.md b/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-mkl.md index dcd4f6c7ea441ca6fed50a0da94ba9dfc974b1ae..51aef698b0d7422a151978a31934645fff3883d6 100644 --- a/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-mkl.md +++ b/docs.it4i/anselm-cluster-documentation/software/intel-suite/intel-mkl.md @@ -11,7 +11,7 @@ Intel Math Kernel Library (Intel MKL) is a library of math kernel subroutines, e * Vector Math Library (VML) routines for optimized mathematical operations on vectors. * Vector Statistical Library (VSL) routines, which offer high-performance vectorized random number generators (RNG) for several probability distributions, convolution and correlation routines, and summary statistics functions. * Data Fitting Library, which provides capabilities for spline-based approximation of functions, derivatives and integrals of functions, and search. -* Extended Eigensolver, a shared memory version of an eigensolver based on the Feast Eigenvalue Solver.
+* Extended Eigensolver, a shared memory version of an eigensolver based on the Feast Eigenvalue Solver. For details see the [Intel MKL Reference Manual](http://software.intel.com/sites/products/documentation/doclib/mkl_sa/11/mklman/index.htm). @@ -39,7 +39,7 @@ The MKL library provides a number of interfaces. The fundamental ones are the LP64 Linking MKL libraries may be complex. Intel [mkl link line advisor](http://software.intel.com/en-us/articles/intel-mkl-link-line-advisor) helps. See also [examples](intel-mkl/#examples) below. -You will need the mkl module loaded to run the mkl enabled executable. This may be avoided, by compiling library search paths into the executable. Include rpath on the compile line: +You will need the mkl module loaded to run the mkl-enabled executable. This may be avoided by compiling library search paths into the executable. Include rpath on the compile line: ```bash $ icc .... -Wl,-rpath=$LIBRARY_PATH ... @@ -74,7 +74,7 @@ Number of examples, demonstrating use of the MKL library and its linking is avai $ make sointel64 function=cblas_dgemm ``` -In this example, we compile, link and run the cblas_dgemm example, demonstrating use of MKL example suite installed on Anselm. +In this example, we compile, link and run the cblas_dgemm example, demonstrating use of the MKL example suite installed on Anselm. ### Example: MKL and Intel Compiler @@ -88,14 +88,14 @@ In this example, we compile, link and run the cblas_dgemm example, demonstratin $ ./cblas_dgemmx.x data/cblas_dgemmx.d ``` -In this example, we compile, link and run the cblas_dgemm example, demonstrating use of MKL with icc -mkl option. Using the -mkl option is equivalent to: +In this example, we compile, link and run the cblas_dgemm example, demonstrating use of MKL with the icc -mkl option. Using the -mkl option is equivalent to: ```bash $ icc -w source/cblas_dgemmx.c source/common_func.c -o cblas_dgemmx.x -I$MKL_INC_DIR -L$MKL_LIB_DIR -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -liomp5 ``` -In this example, we compile and link the cblas_dgemm example, using LP64 interface to threaded MKL and Intel OMP threads implementation. +In this example, we compile and link the cblas_dgemm example, using the LP64 interface to threaded MKL and the Intel OMP threads implementation. ### Example: MKL and GNU Compiler @@ -111,7 +111,7 @@ In this example, we compile and link the cblas_dgemm example, using LP64 interf $ ./cblas_dgemmx.x data/cblas_dgemmx.d ``` -In this example, we compile, link and run the cblas_dgemm example, using LP64 interface to threaded MKL and gnu OMP threads implementation. +In this example, we compile, link and run the cblas_dgemm example, using the LP64 interface to threaded MKL and the gnu OMP threads implementation.
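When running any of the threaded variants above, the thread count can be pinned explicitly; OMP_NUM_THREADS and MKL_NUM_THREADS are the standard OpenMP/Intel MKL environment variables (16 matches one full Anselm node):

```bash
# limit both the OpenMP runtime and MKL to 16 threads, then run the example
$ export OMP_NUM_THREADS=16
$ export MKL_NUM_THREADS=16
$ ./cblas_dgemmx.x data/cblas_dgemmx.d
```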
## MKL and MIC Accelerators diff --git a/docs.it4i/anselm-cluster-documentation/software/isv_licenses.md b/docs.it4i/anselm-cluster-documentation/software/isv_licenses.md index d2512ae42bd0466f4daface9d4449d2fd0177d5c..bd5ba63afebaa85904bdd7a798047a4e434870bf 100644 --- a/docs.it4i/anselm-cluster-documentation/software/isv_licenses.md +++ b/docs.it4i/anselm-cluster-documentation/software/isv_licenses.md @@ -38,7 +38,7 @@ Example of the Commercial Matlab license state: $ cat /apps/user/licenses/matlab_features_state.txt # matlab # ------------------------------------------------- - # FEATURE TOTAL USED AVAIL + # FEATURE TOTAL USED AVAIL # ------------------------------------------------- MATLAB 1 1 0 SIMULINK 1 0 1 diff --git a/docs.it4i/anselm-cluster-documentation/software/kvirtualization.md b/docs.it4i/anselm-cluster-documentation/software/kvirtualization.md index 8ecfd71d5e62637b80eebd54f1cc32dedb818f5e..400bc45a0808285816ac94e3e084e3a3ec81efc1 100644 --- a/docs.it4i/anselm-cluster-documentation/software/kvirtualization.md +++ b/docs.it4i/anselm-cluster-documentation/software/kvirtualization.md @@ -198,7 +198,7 @@ Example run script (run.bat) for Windows virtual machine: call application.bat z:data z:output ``` -Run script runs application from shared job directory (mapped as drive z:), process input data (z:data) from job directory and store output to job directory (z:output). +The run script runs the application from the shared job directory (mapped as drive z:), processes input data (z:data) from the job directory, and stores output to the job directory (z:output). ### Run Jobs diff --git a/docs.it4i/anselm-cluster-documentation/software/mpi/mpi.md b/docs.it4i/anselm-cluster-documentation/software/mpi/mpi.md index f164792863fdfb0b5fd83b41f5a8efd9328b301a..6c391d7b1b08027c619239b3f7b2cb504589a972 100644 --- a/docs.it4i/anselm-cluster-documentation/software/mpi/mpi.md +++ b/docs.it4i/anselm-cluster-documentation/software/mpi/mpi.md @@ -19,7 +19,7 @@ Look up section modulefiles/mpi in module avail ```bash $ module avail ------------------------- /opt/modules/modulefiles/mpi ------------------------- - bullxmpi/bullxmpi-1.2.4.1 mvapich2/1.9-icc + bullxmpi/bullxmpi-1.2.4.1 mvapich2/1.9-icc impi/4.0.3.008 openmpi/1.6.5-gcc(default) impi/4.1.0.024 openmpi/1.6.5-gcc46 impi/4.1.0.030 openmpi/1.6.5-icc diff --git a/docs.it4i/anselm-cluster-documentation/software/mpi/running-mpich2.md b/docs.it4i/anselm-cluster-documentation/software/mpi/running-mpich2.md index 1a8972c390a62ba9e29cb67e3993b7b8c1ea412f..64d3c620fddf82b25339d535fb984067924ef29a 100644 --- a/docs.it4i/anselm-cluster-documentation/software/mpi/running-mpich2.md +++ b/docs.it4i/anselm-cluster-documentation/software/mpi/running-mpich2.md @@ -41,7 +41,7 @@ You need to preload the executable, if running on the local scratch /lscratch fi Hello world! from rank 3 of 4 on host cn110 ``` -In this example, we assume the executable helloworld_mpi.x is present on shared home directory. We run the cp command via mpirun, copying the executable from shared home to local scratch . Second mpirun will execute the binary in the /lscratch/15210.srv11 directory on nodes cn17, cn108, cn109 and cn110, one process per node. +In this example, we assume the executable helloworld_mpi.x is present in the shared home directory. We run the cp command via mpirun, copying the executable from shared home to local scratch. The second mpirun will execute the binary in the /lscratch/15210.srv11 directory on nodes cn17, cn108, cn109 and cn110, one process per node. !!!
note MPI process mapping may be controlled by PBS parameters. diff --git a/docs.it4i/anselm-cluster-documentation/software/numerical-languages/matlab.md b/docs.it4i/anselm-cluster-documentation/software/numerical-languages/matlab.md index d693a1872e3cf23badce337d4715ee679b2f00e8..992caf8f55f3279277c752110c585664782cbff2 100644 --- a/docs.it4i/anselm-cluster-documentation/software/numerical-languages/matlab.md +++ b/docs.it4i/anselm-cluster-documentation/software/numerical-languages/matlab.md @@ -274,7 +274,7 @@ You can use MATLAB on UV2000 in two parallel modes: ### Threaded Mode -Since this is a SMP machine, you can completely avoid using Parallel Toolbox and use only MATLAB's threading. MATLAB will automatically detect the number of cores you have allocated and will set maxNumCompThreads accordingly and certain operations, such as fft, , eig, svd, etc. will be automatically run in threads. The advantage of this mode is that you don't need to modify your existing sequential codes. +Since this is an SMP machine, you can completely avoid using the Parallel Toolbox and use only MATLAB's threading. MATLAB will automatically detect the number of cores you have allocated and will set maxNumCompThreads accordingly, and certain operations, such as fft, eig, svd, etc., will be automatically run in threads. The advantage of this mode is that you don't need to modify your existing sequential codes. ### Local Cluster Mode diff --git a/docs.it4i/anselm-cluster-documentation/software/numerical-languages/matlab_1314.md b/docs.it4i/anselm-cluster-documentation/software/numerical-languages/matlab_1314.md index f9cf95feb5013a3843458fb22fd2f8eaa6e9f5e9..312a1f09490e1937c4fb369c713c9e29d8cda16f 100644 --- a/docs.it4i/anselm-cluster-documentation/software/numerical-languages/matlab_1314.md +++ b/docs.it4i/anselm-cluster-documentation/software/numerical-languages/matlab_1314.md @@ -72,7 +72,7 @@ extras = {}; System MPI library allows Matlab to communicate through 40 Gbit/s InfiniBand QDR interconnect instead of slower 1 Gbit Ethernet network. !!! note - The path to MPI library in "mpiLibConf.m" has to match with version of loaded Intel MPI module. In this example the version 4.1.1.036 of Intel MPI is used by Matlab and therefore module impi/4.1.1.036 has to be loaded prior to starting Matlab. + The path to the MPI library in "mpiLibConf.m" has to match the version of the loaded Intel MPI module. In this example, version 4.1.1.036 of Intel MPI is used by Matlab, and therefore module impi/4.1.1.036 has to be loaded prior to starting Matlab. ### Parallel Matlab Interactive Session diff --git a/docs.it4i/anselm-cluster-documentation/software/numerical-languages/r.md b/docs.it4i/anselm-cluster-documentation/software/numerical-languages/r.md index 56426eb06591ed490d317fa14a65ddfd2bc4290f..f62cad83d6f5e29a8310cef81d10eef8df6fcb60 100644 --- a/docs.it4i/anselm-cluster-documentation/software/numerical-languages/r.md +++ b/docs.it4i/anselm-cluster-documentation/software/numerical-languages/r.md @@ -70,7 +70,7 @@ This script may be submitted directly to the PBS workload manager via the qsub c ## Parallel R -Parallel execution of R may be achieved in many ways. One approach is the implied parallelization due to linked libraries or specially enabled functions, as [described above](r/#interactive-execution). In the following sections, we focus on explicit parallelization, where parallel constructs are directly stated within the R script. +Parallel execution of R may be achieved in many ways.
One approach is the implied parallelization due to linked libraries or specially enabled functions, as [described above](r/#interactive-execution). In the following sections, we focus on explicit parallelization, where parallel constructs are directly stated within the R script. ## Package Parallel @@ -375,7 +375,7 @@ Example jobscript for [static Rmpi](r/#static-rmpi) parallel R execution, runnin #PBS -N Rjob #PBS -l select=100:ncpus=16:mpiprocs=16:ompthreads=1 - # change to scratch directory + # change to scratch directory SCRDIR=/scratch/$USER/myjob cd $SCRDIR || exit diff --git a/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/fftw.md b/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/fftw.md index 67c3fdf090a195dc3cd88d66dd920e0cd6163648..038e1223a44cde79a37f2f7fe59fab9f7e5a8e8e 100644 --- a/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/fftw.md +++ b/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/fftw.md @@ -2,7 +2,7 @@ The discrete Fourier transform in one or more dimensions, MPI parallel -FFTW is a C subroutine library for computing the discrete Fourier transform in one or more dimensions, of arbitrary input size, and of both real and complex data (as well as of even/odd data, e.g. the discrete cosine/sine transforms or DCT/DST). The FFTW library allows for MPI parallel, in-place discrete Fourier transform, with data distributed over number of nodes. +FFTW is a C subroutine library for computing the discrete Fourier transform in one or more dimensions, of arbitrary input size, and of both real and complex data (as well as of even/odd data, e.g. the discrete cosine/sine transforms or DCT/DST). The FFTW library allows for MPI parallel, in-place discrete Fourier transform, with data distributed over a number of nodes. Two versions, **3.3.3** and **2.1.5**, of FFTW are available on Anselm, each compiled for **Intel MPI** and **OpenMPI** using **intel** and **gnu** compilers.
These are available via modules: diff --git a/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/magma-for-intel-xeon-phi.md b/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/magma-for-intel-xeon-phi.md index c4b1c262b007a7b34eac140fa2e31a65d9513512..d0fccfedeaba22e3bad9e666af7bae33c34dce6e 100644 --- a/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/magma-for-intel-xeon-phi.md +++ b/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/magma-for-intel-xeon-phi.md @@ -23,7 +23,7 @@ Compilation example: ```bash $ icc -mkl -O3 -DHAVE_MIC -DADD_ -Wall $MAGMA_INC -c testing_dgetrf_mic.cpp -o testing_dgetrf_mic.o - $ icc -mkl -O3 -DHAVE_MIC -DADD_ -Wall -fPIC -Xlinker -zmuldefs -Wall -DNOCHANGE -DHOST testing_dgetrf_mic.o -o testing_dgetrf_mic $MAGMA_LIBS + $ icc -mkl -O3 -DHAVE_MIC -DADD_ -Wall -fPIC -Xlinker -zmuldefs -Wall -DNOCHANGE -DHOST testing_dgetrf_mic.o -o testing_dgetrf_mic $MAGMA_LIBS ``` ### Running MAGMA Code @@ -54,15 +54,15 @@ To test if the MAGMA server runs properly we can run one of examples that are pa M N CPU GFlop/s (sec) MAGMA GFlop/s (sec) ||PA-LU||/(||A||*N) ========================================================================= - 1088 1088 --- ( --- ) 13.93 ( 0.06) --- - 2112 2112 --- ( --- ) 77.85 ( 0.08) --- - 3136 3136 --- ( --- ) 183.21 ( 0.11) --- - 4160 4160 --- ( --- ) 227.52 ( 0.21) --- - 5184 5184 --- ( --- ) 258.61 ( 0.36) --- - 6208 6208 --- ( --- ) 333.12 ( 0.48) --- - 7232 7232 --- ( --- ) 416.52 ( 0.61) --- - 8256 8256 --- ( --- ) 446.97 ( 0.84) --- - 9280 9280 --- ( --- ) 461.15 ( 1.16) --- + 1088 1088 --- ( --- ) 13.93 ( 0.06) --- + 2112 2112 --- ( --- ) 77.85 ( 0.08) --- + 3136 3136 --- ( --- ) 183.21 ( 0.11) --- + 4160 4160 --- ( --- ) 227.52 ( 0.21) --- + 5184 5184 --- ( --- ) 258.61 ( 0.36) --- + 6208 6208 --- ( --- ) 333.12 ( 0.48) --- + 7232 7232 --- ( --- ) 416.52 ( 0.61) --- + 8256 8256 --- ( --- ) 446.97 ( 0.84) --- + 9280 9280 --- ( --- ) 461.15 ( 1.16) --- 10304 10304 --- ( --- ) 500.70 ( 1.46) --- ``` diff --git a/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/petsc.md b/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/petsc.md index 6d0b8fb58fae24e98cf4fe1f682e119890a12d67..c44e9122429366fd989c63be1736475bba53ca8e 100644 --- a/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/petsc.md +++ b/docs.it4i/anselm-cluster-documentation/software/numerical-libraries/petsc.md @@ -10,7 +10,7 @@ PETSc (Portable, Extensible Toolkit for Scientific Computation) is a suite of bu * [project webpage](http://www.mcs.anl.gov/petsc/) * [documentation](http://www.mcs.anl.gov/petsc/documentation/) - * [PETSc Users Manual (PDF)](http://www.mcs.anl.gov/petsc/petsc-current/docs/manual.pdf) + * [PETSc Users Manual (PDF)](http://www.mcs.anl.gov/petsc/petsc-current/docs/manual.pdf) * [index of all manual pages](http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/singleindex.html) * PRACE Video Tutorial [part1](http://www.youtube.com/watch?v=asVaFg1NDqY), [part2](http://www.youtube.com/watch?v=ubp_cSibb9I), [part3](http://www.youtube.com/watch?v=vJAAAQv-aaw), [part4](http://www.youtube.com/watch?v=BKVlqWNh8jY), [part5](http://www.youtube.com/watch?v=iXkbLEBFjlM) diff --git a/docs.it4i/anselm-cluster-documentation/software/omics-master/overview.md b/docs.it4i/anselm-cluster-documentation/software/omics-master/overview.md index c968fb56351fc78cfbcfb0576ccc0ef8063a898c..7ef93b8fe0ba11c1fda11f21b2ed390869b7fc40 100644 --- 
a/docs.it4i/anselm-cluster-documentation/software/omics-master/overview.md +++ b/docs.it4i/anselm-cluster-documentation/software/omics-master/overview.md @@ -68,7 +68,7 @@ corresponding information is unavailable. | 1 | QNAME | Query NAME of the read or the read pair | | 2 | FLAG | Bitwise FLAG (pairing,strand,mate strand,etc.) | | 3 | RNAME | <p>Reference sequence NAME | -| 4 | POS | <p>1-Based leftmost POSition of clipped alignment | +| 4 | POS | <p>1-Based leftmost POSition of clipped alignment | | 5 | MAPQ | <p>MAPping Quality (Phred-scaled) | | 6 | CIGAR | <p>Extended CIGAR string (operations:MIDNSHP) | | 7 | MRNM | <p>Mate REference NaMe ('=' if same RNAME) | @@ -121,7 +121,7 @@ Identification of single nucleotide variants and indels on the alignments is per VCF (3) is a standardized format for storing the most prevalent types of sequence variation, including SNPs, indels and larger structural variants, together with rich annotations. The format was developed with the primary intention to represent human genetic variation, but its use is not restricted to diploid genomes and can be used in different contexts as well. Its flexibility and user extensibility allows representation of a wide variety of genomic variation with respect to a single reference sequence. -A VCF file consists of a header section and a data section. The header contains an arbitrary number of metainformation lines, each starting with characters ‘##’, and a TAB delimited field definition line, starting with a single ‘#’ character. The meta-information header lines provide a standardized description of tags and annotations used in the data section. The use of meta-information allows the information stored within a VCF file to be tailored to the dataset in question. It can be also used to provide information about the means of file creation, date of creation, version of the reference sequence, software used and any other information relevant to the history of the file. The field definition line names eight mandatory columns, corresponding to data columns representing the chromosome (CHROM), a 1-based position of the start of the variant (POS), unique identifiers of the variant (ID), the reference allele (REF), a comma separated list of alternate non-reference alleles (ALT), a phred-scaled quality score (QUAL), site filtering information (FILTER) and a semicolon separated list of additional, user extensible annotation (INFO). In addition, if samples are present in the file, the mandatory header columns are followed by a FORMAT column and an arbitrary number of sample IDs that define the samples included in the VCF file. The FORMAT column is used to define the information contained within each subsequent genotype column, which consists of a colon separated list of fields. For example, the FORMAT field GT:GQ:DP in the fourth data entry of Figure 1a indicates that the subsequent entries contain information regarding the genotype, genotype quality and read depth for each sample. All data lines are TAB delimited and the number of fields in each data line must match the number of fields in the header line. It is strongly recommended that all annotation tags used are declared in the VCF header section. +A VCF file consists of a header section and a data section. The header contains an arbitrary number of meta-information lines, each starting with characters ‘##’, and a TAB delimited field definition line, starting with a single ‘#’ character.
The meta-information header lines provide a standardized description of tags and annotations used in the data section. The use of meta-information allows the information stored within a VCF file to be tailored to the dataset in question. It can be also used to provide information about the means of file creation, date of creation, version of the reference sequence, software used and any other information relevant to the history of the file. The field definition line names eight mandatory columns, corresponding to data columns representing the chromosome (CHROM), a 1-based position of the start of the variant (POS), unique identifiers of the variant (ID), the reference allele (REF), a comma separated list of alternate non-reference alleles (ALT), a phred-scaled quality score (QUAL), site filtering information (FILTER) and a semicolon separated list of additional, user extensible annotation (INFO). In addition, if samples are present in the file, the mandatory header columns are followed by a FORMAT column and an arbitrary number of sample IDs that define the samples included in the VCF file. The FORMAT column is used to define the information contained within each subsequent genotype column, which consists of a colon separated list of fields. For example, the FORMAT field GT:GQ:DP in the fourth data entry of Figure 1a indicates that the subsequent entries contain information regarding the genotype, genotype quality and read depth for each sample. All data lines are TAB delimited and the number of fields in each data line must match the number of fields in the header line. It is strongly recommended that all annotation tags used are declared in the VCF header section. , the quota may be lifted upon request. !!! note - The Scratch filesystem is intended for temporary scratch data generated during the calculation as well as for high performance access to input and output files. All I/O intensive jobs must use the SCRATCH filesystem as their working directory. + The Scratch filesystem is intended for temporary scratch data generated during the calculation as well as for high performance access to input and output files. All I/O intensive jobs must use the SCRATCH filesystem as their working directory. >Users are advised to save the necessary data from the SCRATCH filesystem to HOME filesystem after the calculations and clean up the scratch files. @@ -166,11 +166,11 @@ Example for Lustre HOME directory: ```bash $ lfs quota /home Disk quotas for user user001 (uid 1234): - Filesystem kbytes quota limit grace files quota limit grace - /home 300096 0 250000000 - 2102 0 500000 - + Filesystem kbytes quota limit grace files quota limit grace + /home 300096 0 250000000 - 2102 0 500000 - Disk quotas for group user001 (gid 1234): - Filesystem kbytes quota limit grace files quota limit grace - /home 300096 0 0 - 2102 0 0 - + Filesystem kbytes quota limit grace files quota limit grace + /home 300096 0 0 - 2102 0 0 - ``` In this example, we view current quota size limit of 250GB and 300MB currently used by user001. @@ -180,7 +180,7 @@ Example for Lustre SCRATCH directory: ```bash $ lfs quota /scratch Disk quotas for user user001 (uid 1234): - Filesystem kbytes quota limit grace files quota limit grace + Filesystem kbytes quota limit grace files quota limit grace /scratch 8 0 100000000000 - 3 0 0 - Disk quotas for group user001 (gid 1234): Filesystem kbytes quota limit grace files quota limit grace @@ -229,7 +229,7 @@ ACLs on a Lustre file system work exactly like ACLs on any Linux file system. 
[vop999@login1.anselm ~]$ umask 027 [vop999@login1.anselm ~]$ mkdir test [vop999@login1.anselm ~]$ ls -ld test -drwxr-x--- 2 vop999 vop999 4096 Nov 5 14:17 test +drwxr-x--- 2 vop999 vop999 4096 Nov 5 14:17 test [vop999@login1.anselm ~]$ getfacl test # file: test # owner: vop999 @@ -240,7 +240,7 @@ other::--- [vop999@login1.anselm ~]$ setfacl -m user:johnsm:rwx test [vop999@login1.anselm ~]$ ls -ld test -drwxrwx---+ 2 vop999 vop999 4096 Nov 5 14:17 test +drwxrwx---+ 2 vop999 vop999 4096 Nov 5 14:17 test [vop999@login1.anselm ~]$ getfacl test # file: test # owner: vop999 @@ -267,7 +267,7 @@ Use local scratch in case you need to access large amount of small files during The local scratch disk is mounted as /lscratch and is accessible to the user at the /lscratch/$PBS_JOBID directory. -The local scratch filesystem is intended for temporary scratch data generated during the calculation as well as for high performance access to input and output files. All I/O intensive jobs that access large number of small files within the calculation must use the local scratch filesystem as their working directory. This is required for performance reasons, as frequent access to number of small files may overload the metadata servers (MDS) of the Lustre filesystem. +The local scratch filesystem is intended for temporary scratch data generated during the calculation as well as for high performance access to input and output files. All I/O intensive jobs that access a large number of small files within the calculation must use the local scratch filesystem as their working directory. This is required for performance reasons, as frequent access to a large number of small files may overload the metadata servers (MDS) of the Lustre filesystem. !!! note The local scratch directory /lscratch/$PBS_JOBID will be deleted immediately after the calculation ends. Users should take care to save the output data from within the jobscript. @@ -349,7 +349,7 @@ Once registered for CESNET Storage, you may [access the storage](https://du.cesn !!! note SSHFS: The storage will be mounted like a local hard drive -The SSHFS provides a very convenient way to access the CESNET Storage. The storage will be mounted onto a local directory, exposing the vast CESNET Storage as if it was a local removable hard drive. Files can be than copied in and out in a usual fashion. +The SSHFS provides a very convenient way to access the CESNET Storage. The storage will be mounted onto a local directory, exposing the vast CESNET Storage as if it was a local removable hard drive. Files can then be copied in and out in the usual fashion. First, create the mount point diff --git a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/vnc.md b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/vnc.md index 7d243fc01535188dfc754a950c09ed78204146b9..ae60d1e99674ebc8970023c5cd128536c60fa2bc 100644 --- a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/vnc.md +++ b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/vnc.md @@ -18,7 +18,7 @@ Verify: ## Start Vncserver !!! note - To access VNC a local vncserver must be started first and also a tunnel using SSH port forwarding must be established. + To access VNC a local vncserver must be started first and also a tunnel using SSH port forwarding must be established. [See below](vnc.md#linux-example-of-creating-a-tunnel) for the details on SSH tunnels. In this example we use port 61.
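A sketch of the two steps from the note above (display number 61 follows this section's example, so VNC listens on TCP port 5900 + 61 = 5961; the hostname and geometry here are illustrative, not verified values):

```bash
# On the login node: start a vncserver on display :61
[username@login2 ~]$ vncserver :61 -geometry 1600x900 -depth 16

# On your local machine: forward a local port to the login node's VNC port 5961
local $ ssh -L 5961:localhost:5961 username@cluster-name.it4i.cz
```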
@@ -26,8 +26,8 @@ You can find ports which are already occupied. Here you can see that ports " /us ```bash [username@login2 ~]$ ps aux | grep Xvnc -username 5971 0.0 0.0 201072 92564 ? SN Sep22 4:19 /usr/bin/Xvnc :79 -desktop login2:79 (username) -auth /home/gre196/.Xauthority -geometry 1024x768 -rfbwait 30000 -rfbauth /home/username/.vnc/passwd -rfbport 5979 -fp catalogue:/etc/X11/fontpath.d -pn -username 10296 0.0 0.0 131772 21076 pts/29 SN 13:01 0:01 /usr/bin/Xvnc :60 -desktop login2:61 (username) -auth /home/username/.Xauthority -geometry 1600x900 -depth 16 -rfbwait 30000 -rfbauth /home/jir13/.vnc/passwd -rfbport 5960 -fp catalogue:/etc/X11/fontpath.d -pn +username 5971 0.0 0.0 201072 92564 ? SN Sep22 4:19 /usr/bin/Xvnc :79 -desktop login2:79 (username) -auth /home/gre196/.Xauthority -geometry 1024x768 -rfbwait 30000 -rfbauth /home/username/.vnc/passwd -rfbport 5979 -fp catalogue:/etc/X11/fontpath.d -pn +username 10296 0.0 0.0 131772 21076 pts/29 SN 13:01 0:01 /usr/bin/Xvnc :60 -desktop login2:61 (username) -auth /home/username/.Xauthority -geometry 1600x900 -depth 16 -rfbwait 30000 -rfbauth /home/jir13/.vnc/passwd -rfbport 5960 -fp catalogue:/etc/X11/fontpath.d -pn ..... ``` @@ -58,7 +58,7 @@ Another command: ```bash [username@login2 .vnc]$ ps aux | grep Xvnc -username 10296 0.0 0.0 131772 21076 pts/29 SN 13:01 0:01 /usr/bin/Xvnc :61 -desktop login2:61 (username) -auth /home/jir13/.Xauthority -geometry 1600x900 -depth 16 -rfbwait 30000 -rfbauth /home/username/.vnc/passwd -rfbport 5961 -fp catalogue:/etc/X11/fontpath.d -pn +username 10296 0.0 0.0 131772 21076 pts/29 SN 13:01 0:01 /usr/bin/Xvnc :61 -desktop login2:61 (username) -auth /home/jir13/.Xauthority -geometry 1600x900 -depth 16 -rfbwait 30000 -rfbauth /home/username/.vnc/passwd -rfbport 5961 -fp catalogue:/etc/X11/fontpath.d -pn ``` To access the VNC server you have to create a tunnel between the login node using TCP **port 5961** and your machine using a free TCP port (for simplicity the very same, in this case). @@ -162,8 +162,8 @@ If the screen gets locked you have to kill the screensaver. Do not to forget to ```bash [username@login2 .vnc]$ ps aux | grep screen -username 1503 0.0 0.0 103244 892 pts/4 S+ 14:37 0:00 grep screen -username 24316 0.0 0.0 270564 3528 ? Ss 14:12 0:00 gnome-screensaver +username 1503 0.0 0.0 103244 892 pts/4 S+ 14:37 0:00 grep screen +username 24316 0.0 0.0 270564 3528 ? Ss 14:12 0:00 gnome-screensaver [username@login2 .vnc]$ kill 24316 ``` diff --git a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system.md b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system.md index 9c1d75b807e8c1e7fa62da076875749d695ef045..e46cd0833520b9639bfb698592ba43285d084097 100644 --- a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system.md +++ b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system.md @@ -99,7 +99,7 @@ xserver-xephyr, on OS X it is part of [XQuartz](http://xquartz.macosforge.org/la local $ Xephyr -ac -screen 1024x768 -br -reset -terminate :1 & ``` -This will open a new X window with size 1024 x 768 at DISPLAY :1. Next, ssh to the cluster with DISPLAY environment variable set and launch gnome-session +This will open a new X window with size 1024 x 768 at DISPLAY :1. 
Next, ssh to the cluster with DISPLAY environment variable set and launch gnome-session ```bash local $ DISPLAY=:1.0 ssh -XC yourname@cluster-name.it4i.cz -i ~/.ssh/path_to_your_key diff --git a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys.md b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys.md index 85ef36b73ec669306fcc3753509d7a612a619813..5e9bbe54c8bc438358542c265a3b04995324ebfb 100644 --- a/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys.md +++ b/docs.it4i/get-started-with-it4innovations/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys.md @@ -10,10 +10,10 @@ After logging in, you can see .ssh/ directory with SSH keys and authorized_keys total 24 drwx------ 2 username username 4096 May 13 15:12 . drwxr-x---22 username username 4096 May 13 07:22 .. - -rw-r--r-- 1 username username 392 May 21 2014 authorized_keys - -rw------- 1 username username 1675 May 21 2014 id_rsa - -rw------- 1 username username 1460 May 21 2014 id_rsa.ppk - -rw-r--r-- 1 username username 392 May 21 2014 id_rsa.pub + -rw-r--r-- 1 username username 392 May 21 2014 authorized_keys + -rw------- 1 username username 1675 May 21 2014 id_rsa + -rw------- 1 username username 1460 May 21 2014 id_rsa.ppk + -rw-r--r-- 1 username username 392 May 21 2014 id_rsa.pub ``` !!! hint @@ -76,7 +76,7 @@ An example of private key format: ## Public Key -Public key file in "\*.pub" format is used to verify a digital signature. Public key is present on the remote side and allows access to the owner of the matching private key. +Public key file in "\*.pub" format is used to verify a digital signature. Public key is present on the remote side and allows access to the owner of the matching private key. An example of public key format: diff --git a/docs.it4i/get-started-with-it4innovations/obtaining-login-credentials/obtaining-login-credentials.md b/docs.it4i/get-started-with-it4innovations/obtaining-login-credentials/obtaining-login-credentials.md index 7213ec31c1fc03b7cd770cb35c4cd77070ecdbb8..34a348495b2fe6e27c9e31d6f2086621d89aebcb 100644 --- a/docs.it4i/get-started-with-it4innovations/obtaining-login-credentials/obtaining-login-credentials.md +++ b/docs.it4i/get-started-with-it4innovations/obtaining-login-credentials/obtaining-login-credentials.md @@ -2,7 +2,7 @@ ## Obtaining Authorization -The computational resources of IT4I are allocated by the Allocation Committee to a [Project](/), investigated by a Primary Investigator. By allocating the computational resources, the Allocation Committee is authorizing the PI to access and use the clusters. The PI may decide to authorize a number of her/his Collaborators to access and use the clusters, to consume the resources allocated to her/his Project. These collaborators will be associated to the Project. The Figure below is depicting the authorization chain: +The computational resources of IT4I are allocated by the Allocation Committee to a [Project](/), investigated by a Primary Investigator. By allocating the computational resources, the Allocation Committee is authorizing the PI to access and use the clusters. The PI may decide to authorize a number of her/his Collaborators to access and use the clusters, to consume the resources allocated to her/his Project. These collaborators will be associated to the Project. 
The figure below depicts the authorization chain: diff --git a/docs.it4i/salomon/capacity-computing.md b/docs.it4i/salomon/capacity-computing.md index c5ae6b385bbe260340d5e69257f0d3d0854ee40a..d6811bf4bb09e6ec39f7e6198430ebdf5cc7cf52 100644 --- a/docs.it4i/salomon/capacity-computing.md +++ b/docs.it4i/salomon/capacity-computing.md @@ -101,10 +101,10 @@ Check status of the job array by the qstat command. $ qstat -a 12345[].isrv5 isrv5: - Req'd Req'd Elap -Job ID Username Queue Jobname SessID NDS TSK Memory Time S Time + Req'd Req'd Elap +Job ID Username Queue Jobname SessID NDS TSK Memory Time S Time --------------- -------- -- |---|---| ------ --- --- ------ ----- - ----- -12345[].dm2 user2 qprod xx 13516 1 24 -- 00:50 B 00:02 +12345[].isrv5 user2 qprod xx 13516 1 24 -- 00:50 B 00:02 ``` The status B means that some subjobs are already running. @@ -115,16 +115,16 @@ Check status of the first 100 subjobs by the qstat command. $ qstat -a 12345[1-100].isrv5 isrv5: - Req'd Req'd Elap -Job ID Username Queue Jobname SessID NDS TSK Memory Time S Time + Req'd Req'd Elap +Job ID Username Queue Jobname SessID NDS TSK Memory Time S Time --------------- -------- -- |---|---| ------ --- --- ------ ----- - ----- -12345[1].isrv5 user2 qprod xx 13516 1 24 -- 00:50 R 00:02 -12345[2].isrv5 user2 qprod xx 13516 1 24 -- 00:50 R 00:02 -12345[3].isrv5 user2 qprod xx 13516 1 24 -- 00:50 R 00:01 -12345[4].isrv5 user2 qprod xx 13516 1 24 -- 00:50 Q -- +12345[1].isrv5 user2 qprod xx 13516 1 24 -- 00:50 R 00:02 +12345[2].isrv5 user2 qprod xx 13516 1 24 -- 00:50 R 00:02 +12345[3].isrv5 user2 qprod xx 13516 1 24 -- 00:50 R 00:01 +12345[4].isrv5 user2 qprod xx 13516 1 24 -- 00:50 Q -- . . . . . . . . . . . . . . . . . . . . . . -12345[100].isrv5 user2 qprod xx 13516 1 24 -- 00:50 Q -- +12345[100].isrv5 user2 qprod xx 13516 1 24 -- 00:50 Q -- ``` Delete the entire job array. Running subjobs will be killed, queued subjobs will be deleted. @@ -154,7 +154,7 @@ Read more on job arrays in the [PBSPro Users guide](../../pbspro-documentation/) !!! note Use GNU parallel to run many single core tasks on one node. -GNU parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. GNU parallel is most useful in running single core jobs via the queue system on Anselm. +GNU parallel is a shell tool for executing jobs in parallel using one or more computers. A job can be a single command or a small script that has to be run for each of the lines in the input. GNU parallel is most useful in running single core jobs via the queue system on Salomon. For more information and examples see the parallel man page: @@ -199,7 +199,7 @@ TASK=$1 cp $PBS_O_WORKDIR/$TASK input # execute the calculation -cat input > output +cat input > output # copy output file to submit directory cp output $PBS_O_WORKDIR/$TASK.out @@ -216,7 +216,7 @@ $ qsub -N JOBNAME jobscript 12345.dm2 ``` -In this example, we submit a job of 101 tasks. 24 input files will be processed in parallel. The 101 tasks on 24 cores are assumed to complete in less than 2 hours. +In this example, we submit a job of 101 tasks. 24 input files will be processed in parallel. The 101 tasks on 24 cores are assumed to complete in less than 2 hours. !!! note Use #PBS directives at the beginning of the jobscript file, and don't forget to set your valid PROJECT_ID and desired queue.
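Following the note above, a jobscript might begin with a header like this sketch (PROJECT_ID, the queue and the walltime are placeholders to be replaced with your own values):

```bash
#!/bin/bash
#PBS -A PROJECT_ID                           # your valid project ID
#PBS -q qprod                                # desired queue
#PBS -N JOBNAME
#PBS -l select=1:ncpus=24,walltime=02:00:00  # one full node for up to 2 hours
```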
@@ -281,10 +281,10 @@ cat input > output cp output $PBS_O_WORKDIR/$TASK.out ``` -In this example, the jobscript executes in multiple instances in parallel, on all cores of a computing node. Variable $TASK expands to one of the input filenames from tasklist. We copy the input file to local scratch, execute the myprog.x and copy the output file back to the submit directory, under the $TASK.out name. The numtasks file controls how many tasks will be run per subjob. Once an task is finished, new task starts, until the number of tasks in numtasks file is reached. +In this example, the jobscript executes in multiple instances in parallel, on all cores of a computing node. Variable $TASK expands to one of the input filenames from tasklist. We copy the input file to local scratch, execute myprog.x and copy the output file back to the submit directory, under the $TASK.out name. The numtasks file controls how many tasks will be run per subjob. Once a task is finished, a new task starts, until the number of tasks in the numtasks file is reached. !!! note - Select subjob walltime and number of tasks per subjob carefully + Select subjob walltime and number of tasks per subjob carefully When deciding these values, consider the following guiding rules: diff --git a/docs.it4i/salomon/environment-and-modules.md b/docs.it4i/salomon/environment-and-modules.md index a9a6def4dfeb499e8daf6ad3cd8fc8bd707d7d91..06be4665c5f1c800c42586fb11eb9dd4a027605f 100644 --- a/docs.it4i/salomon/environment-and-modules.md +++ b/docs.it4i/salomon/environment-and-modules.md @@ -24,11 +24,11 @@ fi ``` !!! note - Do not run commands outputting to standard output (echo, module list, etc) in .bashrc for non-interactive SSH sessions. It breaks fundamental functionality (scp, PBS) of your account! Take care for SSH session interactivity for such commands as stated in the previous example. + Do not run commands outputting to standard output (echo, module list, etc) in .bashrc for non-interactive SSH sessions. It breaks fundamental functionality (scp, PBS) of your account! Guard such commands with a test for SSH session interactivity, as shown in the previous example. ### Application Modules -In order to configure your shell for running particular application on Salomon we use Module package interface. +In order to configure your shell for running a particular application on Salomon, we use the Module package interface. Application modules on Salomon cluster are built using [EasyBuild](http://hpcugent.github.io/easybuild/ "EasyBuild"). The modules are divided into the following structure: @@ -67,7 +67,7 @@ To check available modules use $ module avail ``` -To load a module, for example the Open MPI module use +To load a module, for example the Open MPI module, use ```bash $ module load OpenMPI diff --git a/docs.it4i/salomon/hardware-overview.md b/docs.it4i/salomon/hardware-overview.md index c20234030c70a4a6dc9e8ba36f86b1097437313d..d84bd6293e02f4db183bd9ac32dc77597f70ecfc 100644 --- a/docs.it4i/salomon/hardware-overview.md +++ b/docs.it4i/salomon/hardware-overview.md @@ -2,7 +2,7 @@ ## Introduction -The Salomon cluster consists of 1008 computational nodes of which 576 are regular compute nodes and 432 accelerated nodes. Each node is a powerful x86-64 computer, equipped with 24 cores (two twelve-core Intel Xeon processors) and 128 GB RAM. The nodes are interlinked by high speed InfiniBand and Ethernet networks. All nodes share 0.5 PB /home NFS disk storage to store the user files.
Users may use a DDN Lustre shared storage with capacity of 1.69 PB which is available for the scratch project data. The user access to the Salomon cluster is provided by four login nodes. +The Salomon cluster consists of 1008 computational nodes of which 576 are regular compute nodes and 432 accelerated nodes. Each node is a powerful x86-64 computer, equipped with 24 cores (two twelve-core Intel Xeon processors) and 128 GB RAM. The nodes are interlinked by high speed InfiniBand and Ethernet networks. All nodes share 0.5 PB /home NFS disk storage to store the user files. Users may use a DDN Lustre shared storage with capacity of 1.69 PB which is available for the scratch project data. The user access to the Salomon cluster is provided by four login nodes. [More about schematic representation of the Salomon cluster compute nodes IB topology](ib-single-plane-topology/). diff --git a/docs.it4i/salomon/introduction.md b/docs.it4i/salomon/introduction.md index 1563518906866ae72937039dd6dc2ddb29126c15..3ddeb8bfd888ee021aa097750dedc1c53bddbe82 100644 --- a/docs.it4i/salomon/introduction.md +++ b/docs.it4i/salomon/introduction.md @@ -2,7 +2,7 @@ Welcome to Salomon supercomputer cluster. The Salomon cluster consists of 1008 compute nodes, totaling 24192 compute cores with 129 TB RAM and giving over 2 Pflop/s theoretical peak performance. Each node is a powerful x86-64 computer, equipped with 24 cores, at least 128 GB RAM. Nodes are interconnected by 7D Enhanced hypercube InfiniBand network and equipped with Intel Xeon E5-2680v3 processors. The Salomon cluster consists of 576 nodes without accelerators and 432 nodes equipped with Intel Xeon Phi MIC accelerators. Read more in [Hardware Overview](hardware-overview/). -The cluster runs [CentOS Linux](http://www.bull.com/bullx-logiciels/systeme-exploitation.html) operating system, which is compatible with the RedHat [ Linux family.](http://upload.wikimedia.org/wikipedia/commons/1/1b/Linux_Distribution_Timeline.svg) +The cluster runs [CentOS Linux](http://www.bull.com/bullx-logiciels/systeme-exploitation.html) operating system, which is compatible with the RedHat [ Linux family.](http://upload.wikimedia.org/wikipedia/commons/1/1b/Linux_Distribution_Timeline.svg) **Water-cooled Compute Nodes With MIC Accelerator** diff --git a/docs.it4i/salomon/job-submission-and-execution.md b/docs.it4i/salomon/job-submission-and-execution.md index 0865e9c21b44c7b755e810d9a81da902de452183..d1ed9278a2d02f68886dccce41a30095f8205331 100644 --- a/docs.it4i/salomon/job-submission-and-execution.md +++ b/docs.it4i/salomon/job-submission-and-execution.md @@ -43,13 +43,13 @@ In this example, we allocate 4 nodes, 24 cores per node, for 1 hour. We allocate $ qsub -A OPEN-0-0 -q qlong -l select=10:ncpus=24 ./myjob ``` -In this example, we allocate 10 nodes, 24 cores per node, for 72 hours. We allocate these resources via the qlong queue. Jobscript myjob will be executed on the first node in the allocation. +In this example, we allocate 10 nodes, 24 cores per node, for 72 hours. We allocate these resources via the qlong queue. Jobscript myjob will be executed on the first node in the allocation. ```bash $ qsub -A OPEN-0-0 -q qfree -l select=10:ncpus=24 ./myjob ``` -In this example, we allocate 10 nodes, 24 cores per node, for 12 hours. We allocate these resources via the qfree queue. It is not required that the project OPEN-0-0 has any available resources left. Consumed resources are still accounted for. Jobscript myjob will be executed on the first node in the allocation. 
+In this example, we allocate 10 nodes, 24 cores per node, for 12 hours. We allocate these resources via the qfree queue. It is not required that the project OPEN-0-0 has any available resources left. Consumed resources are still accounted for. Jobscript myjob will be executed on the first node in the allocation. ### Intel Xeon Phi Co-Processors @@ -76,13 +76,13 @@ In this example, we allocate 4 nodes, with 24 cores per node (totalling 96 cores Per NUMA node allocation. Jobs are isolated by cpusets. -The UV2000 (node uv1) offers 3328GB of RAM and 112 cores, distributed in 14 NUMA nodes. A NUMA node packs 8 cores and approx. 236GB RAM. In the PBS the UV2000 provides 14 chunks, a chunk per NUMA node (see [Resource allocation policy](resources-allocation-policy/)). The jobs on UV2000 are isolated from each other by cpusets, so that a job by one user may not utilize CPU or memory allocated to a job by other user. Always, full chunks are allocated, a job may only use resources of the NUMA nodes allocated to itself. +The UV2000 (node uv1) offers 3328GB of RAM and 112 cores, distributed in 14 NUMA nodes. A NUMA node packs 8 cores and approx. 236GB RAM. In the PBS the UV2000 provides 14 chunks, a chunk per NUMA node (see [Resource allocation policy](resources-allocation-policy/)). The jobs on UV2000 are isolated from each other by cpusets, so that a job by one user may not utilize CPU or memory allocated to a job by other user. Always, full chunks are allocated, a job may only use resources of the NUMA nodes allocated to itself. ```bash $ qsub -A OPEN-0-0 -q qfat -l select=14 ./myjob ``` -In this example, we allocate all 14 NUMA nodes (corresponds to 14 chunks), 112 cores of the SGI UV2000 node for 72 hours. Jobscript myjob will be executed on the node uv1. +In this example, we allocate all 14 NUMA nodes (corresponds to 14 chunks), 112 cores of the SGI UV2000 node for 72 hours. Jobscript myjob will be executed on the node uv1. ```bash $ qsub -A OPEN-0-0 -q qfat -l select=1:mem=2000GB ./myjob @@ -249,12 +249,12 @@ Example: $ qstat -a srv11: - Req'd Req'd Elap -Job ID Username Queue Jobname SessID NDS TSK Memory Time S Time + Req'd Req'd Elap +Job ID Username Queue Jobname SessID NDS TSK Memory Time S Time --------------- -------- -- |---|---| ------ --- --- ------ ----- - ----- -16287.isrv5 user1 qlong job1 6183 4 64 -- 144:0 R 38:25 -16468.isrv5 user1 qlong job2 8060 4 64 -- 144:0 R 17:44 -16547.isrv5 user2 qprod job3x 13516 2 32 -- 48:00 R 00:58 +16287.isrv5 user1 qlong job1 6183 4 64 -- 144:0 R 38:25 +16468.isrv5 user1 qlong job2 8060 4 64 -- 144:0 R 17:44 +16547.isrv5 user2 qprod job3x 13516 2 32 -- 48:00 R 00:58 ``` In this example user1 and user2 are running jobs named job1, job2 and job3x. The jobs job1 and job2 are using 4 nodes, 16 cores per node each. The job1 already runs for 38 hours and 25 minutes, job2 for 17 hours 44 minutes. The job1 already consumed 64 x 38.41 = 2458.6 core hours. The job3x already consumed 0.96 x 32 = 30.93 core hours. These consumed core hours will be accounted on the respective project accounts, regardless of whether the allocated cores were actually used for computations. 
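The accounting rule used above is simply allocated cores multiplied by elapsed wall-clock hours; as an illustration for job1 (4 nodes x 16 cores = 64 cores, elapsed 38:25), the arithmetic can be checked with bc:

```bash
# consumed core-hours = cores x elapsed hours; 38:25 is 38 + 25/60 hours
$ echo "scale=4; 64 * (38 + 25/60)" | bc
2458.6624
```

This matches the roughly 2458.6 core-hours quoted for job1 above.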
@@ -350,10 +350,10 @@ $ qsub -q qexp -l select=4:ncpus=24 -N Name0 ./myjob $ qstat -n -u username isrv5: - Req'd Req'd Elap -Job ID Username Queue Jobname SessID NDS TSK Memory Time S Time + Req'd Req'd Elap +Job ID Username Queue Jobname SessID NDS TSK Memory Time S Time --------------- -------- -- |---|---| ------ --- --- ------ ----- - ----- -15209.isrv5 username qexp Name0 5530 4 96 -- 01:00 R 00:00 +15209.isrv5 username qexp Name0 5530 4 96 -- 01:00 R 00:00 r21u01n577/0*24+r21u02n578/0*24+r21u03n579/0*24+r21u04n580/0*24 ``` diff --git a/docs.it4i/salomon/network.md b/docs.it4i/salomon/network.md index 005e6d6600147d5b7ff344017c377368338d295f..2f3f8a09f474c12ffe961781c39ea6fbea260a46 100644 --- a/docs.it4i/salomon/network.md +++ b/docs.it4i/salomon/network.md @@ -19,10 +19,10 @@ The network provides **2170MB/s** transfer rates via the TCP connection (single ```bash $ qsub -q qexp -l select=4:ncpus=16 -N Name0 ./myjob $ qstat -n -u username - Req'd Req'd Elap -Job ID Username Queue Jobname SessID NDS TSK Memory Time S Time + Req'd Req'd Elap +Job ID Username Queue Jobname SessID NDS TSK Memory Time S Time --------------- -------- -- |---|---| ------ --- --- ------ ----- - ----- -15209.isrv5 username qexp Name0 5530 4 96 -- 01:00 R 00:00 +15209.isrv5 username qexp Name0 5530 4 96 -- 01:00 R 00:00 r4i1n0/0*24+r4i1n1/0*24+r4i1n2/0*24+r4i1n3/0*24 ``` @@ -32,7 +32,7 @@ In this example, we access the node r4i1n0 by Infiniband network via the ib0 int $ ssh 10.17.35.19 ``` -In this example, we get +In this example, we get information of the Infiniband network. ```bash diff --git a/docs.it4i/salomon/prace.md b/docs.it4i/salomon/prace.md index 76b9186711bd7477e714a2621d60eb46d4ef290d..1cade6cf91cccac5351cb598e7d0aa9218d05954 100644 --- a/docs.it4i/salomon/prace.md +++ b/docs.it4i/salomon/prace.md @@ -142,7 +142,7 @@ Copy files **to** Salomon by running the following commands on your local machin $ globus-url-copy file://_LOCAL_PATH_TO_YOUR_FILE_ gsiftp://gridftp-prace.salomon.it4i.cz:2812/home/prace/_YOUR_ACCOUNT_ON_SALOMON_/_PATH_TO_YOUR_FILE_ ``` -Or by using prace_service script: +Or by using prace_service script: ```bash $ globus-url-copy file://_LOCAL_PATH_TO_YOUR_FILE_ gsiftp://`prace_service -i -f salomon`/home/prace/_YOUR_ACCOUNT_ON_SALOMON_/_PATH_TO_YOUR_FILE_ @@ -154,7 +154,7 @@ Copy files **from** Salomon: $ globus-url-copy gsiftp://gridftp-prace.salomon.it4i.cz:2812/home/prace/_YOUR_ACCOUNT_ON_SALOMON_/_PATH_TO_YOUR_FILE_ file://_LOCAL_PATH_TO_YOUR_FILE_ ``` -Or by using prace_service script: +Or by using prace_service script: ```bash $ globus-url-copy gsiftp://`prace_service -i -f salomon`/home/prace/_YOUR_ACCOUNT_ON_SALOMON_/_PATH_TO_YOUR_FILE_ file://_LOCAL_PATH_TO_YOUR_FILE_ @@ -175,7 +175,7 @@ Copy files **to** Salomon by running the following commands on your local machin $ globus-url-copy file://_LOCAL_PATH_TO_YOUR_FILE_ gsiftp://gridftp.salomon.it4i.cz:2812/home/prace/_YOUR_ACCOUNT_ON_SALOMON_/_PATH_TO_YOUR_FILE_ ``` -Or by using prace_service script: +Or by using prace_service script: ```bash $ globus-url-copy file://_LOCAL_PATH_TO_YOUR_FILE_ gsiftp://`prace_service -e -f salomon`/home/prace/_YOUR_ACCOUNT_ON_SALOMON_/_PATH_TO_YOUR_FILE_ @@ -187,7 +187,7 @@ Copy files **from** Salomon: $ globus-url-copy gsiftp://gridftp.salomon.it4i.cz:2812/home/prace/_YOUR_ACCOUNT_ON_SALOMON_/_PATH_TO_YOUR_FILE_ file://_LOCAL_PATH_TO_YOUR_FILE_ ``` -Or by using prace_service script: +Or by using prace_service script: ```bash $ globus-url-copy gsiftp://`prace_service -e -f 
salomon`/home/prace/_YOUR_ACCOUNT_ON_SALOMON_/_PATH_TO_YOUR_FILE_ file://_LOCAL_PATH_TO_YOUR_FILE_ diff --git a/docs.it4i/salomon/resources-allocation-policy.md b/docs.it4i/salomon/resources-allocation-policy.md index ab5f32a4f3a6327d7cb262ac06deff646783a8db..4122d7097ac0b360d5bcdbaac1b6c5eace6a4050 100644 --- a/docs.it4i/salomon/resources-allocation-policy.md +++ b/docs.it4i/salomon/resources-allocation-policy.md @@ -61,9 +61,9 @@ Usage: rspbs [options] Options: --version show program's version number and exit -h, --help show this help message and exit - --get-server-details Print server + --get-server-details Print server --get-queues Print queues - --get-queues-details Print queues details + --get-queues-details Print queues details --get-reservations Print reservations --get-reservations-details Print reservations details @@ -94,7 +94,7 @@ Options: --get-user-ncpus Print number of allocated ncpus per user --get-qlist-nodes Print qlist nodes --get-qlist-nodeset Print qlist nodeset - --get-ibswitch-nodes Print ibswitch nodes + --get-ibswitch-nodes Print ibswitch nodes --get-ibswitch-nodeset Print ibswitch nodeset --summary Print summary diff --git a/docs.it4i/salomon/shell-and-data-access.md b/docs.it4i/salomon/shell-and-data-access.md index fef1df3bca4bf1da985055af4bb386667c7a076e..04bdc3a78fcf8eb0d0889719a27c45dd14121833 100644 --- a/docs.it4i/salomon/shell-and-data-access.md +++ b/docs.it4i/salomon/shell-and-data-access.md @@ -52,7 +52,7 @@ After logging in, you will see the command prompt: http://www.it4i.cz/?lang=en -Last login: Tue Jul 9 15:57:38 2013 from your-host.example.com +Last login: Tue Jul 9 15:57:38 2013 from your-host.example.com [username@login2.salomon ~]$ ``` @@ -140,7 +140,7 @@ Pick some unused port on Salomon login node (for example 6000) and establish th local $ ssh -R 6000:remote.host.com:1234 salomon.it4i.cz ``` -In this example, we establish port forwarding between port 6000 on Salomon and port 1234 on the remote.host.com. By accessing localhost:6000 on Salomon, an application will see response of remote.host.com:1234. The traffic will run via users local workstation. +In this example, we establish port forwarding between port 6000 on Salomon and port 1234 on the remote.host.com. By accessing localhost:6000 on Salomon, an application will see response of remote.host.com:1234. The traffic will run via users local workstation. Port forwarding may be done **using PuTTY** as well. On the PuTTY Configuration screen, load your Salomon configuration first. Then go to Connection->SSH->Tunnels to set up the port forwarding. Click Remote radio button. Insert 6000 to Source port textbox. Insert remote.host.com:1234. Click Add button, then Open. @@ -187,7 +187,7 @@ Once the proxy server is running, establish ssh port forwarding from Salomon to local $ ssh -R 6000:localhost:1080 salomon.it4i.cz ``` -Now, configure the applications proxy settings to **localhost:6000**. Use port forwarding to access the [proxy server from compute nodes](#port-forwarding-from-compute-nodes) as well. +Now, configure the applications proxy settings to **localhost:6000**. Use port forwarding to access the [proxy server from compute nodes](#port-forwarding-from-compute-nodes) as well. 
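To reach the same proxy from a compute node, one additional forwarding hop through the login node may be used; the following is a sketch only (the login node name is illustrative), not a verified recipe:

```bash
# On the compute node: forward its local port 6000 to port 6000 on the login node,
# where the remote forwarding to the SOCKS proxy (set up above) terminates
$ ssh -TN -f -L 6000:localhost:6000 login1
```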
## Graphical User Interface diff --git a/docs.it4i/salomon/software/ansys/ansys-fluent.md b/docs.it4i/salomon/software/ansys/ansys-fluent.md index aefcfbf77da974a6ec14110ef571e7dcf1f8ba36..73ada568a1b202abf130544bc4399e27f740a015 100644 --- a/docs.it4i/salomon/software/ansys/ansys-fluent.md +++ b/docs.it4i/salomon/software/ansys/ansys-fluent.md @@ -62,11 +62,11 @@ The appropriate dimension of the problem has to be set by parameter (2d/3d). fluent solver_version [FLUENT_options] -i journal_file -pbs ``` -This syntax will start the ANSYS FLUENT job under PBS Professional using the qsub command in a batch manner. When resources are available, PBS Professional will start the job and return a job ID, usually in the form of _job_ID.hostname_. This job ID can then be used to query, control, or stop the job using standard PBS Professional commands, such as qstat or qdel. The job will be run out of the current working directory, and all output will be written to the file fluent.o _job_ID_. +This syntax will start the ANSYS FLUENT job under PBS Professional using the qsub command in a batch manner. When resources are available, PBS Professional will start the job and return a job ID, usually in the form of _job_ID.hostname_. This job ID can then be used to query, control, or stop the job using standard PBS Professional commands, such as qstat or qdel. The job will be run out of the current working directory, and all output will be written to the file fluent.o _job_ID_. 3. Running Fluent via user's config file -The sample script uses a configuration file called pbs_fluent.conf if no command line arguments are present. This configuration file should be present in the directory from which the jobs are submitted (which is also the directory in which the jobs are executed). The following is an example of what the content of pbs_fluent.conf can be: +The sample script uses a configuration file called pbs_fluent.conf if no command line arguments are present. This configuration file should be present in the directory from which the jobs are submitted (which is also the directory in which the jobs are executed). The following is an example of what the content of pbs_fluent.conf can be: ```bash input="example_small.flin" diff --git a/docs.it4i/salomon/software/ansys/ansys-ls-dyna.md b/docs.it4i/salomon/software/ansys/ansys-ls-dyna.md index 65423033146af8c27fe47a513aad0e51543128c3..c5322976b98a22ca884f19a7872643f81c9c1386 100644 --- a/docs.it4i/salomon/software/ansys/ansys-ls-dyna.md +++ b/docs.it4i/salomon/software/ansys/ansys-ls-dyna.md @@ -1,6 +1,6 @@ # ANSYS LS-DYNA -**[ANSYSLS-DYNA](http://www.ansys.com/products/structures/ansys-ls-dyna)** software provides convenient and easy-to-use access to the technology-rich, time-tested explicit solver without the need to contend with the complex input requirements of this sophisticated program. Introduced in 1996, ANSYS LS-DYNA capabilities have helped customers in numerous industries to resolve highly intricate design issues. ANSYS Mechanical users have been able take advantage of complex explicit solutions for a long time utilizing the traditional ANSYS Parametric Design Language (APDL) environment. These explicit capabilities are available to ANSYS Workbench users as well. The Workbench platform is a powerful, comprehensive, easy-to-use environment for engineering simulation. 
CAD import from all sources, geometry cleanup, automatic meshing, solution, parametric optimization, result visualization and comprehensive report generation are all available within a single fully interactive modern graphical user environment. +**[ANSYS LS-DYNA](http://www.ansys.com/products/structures/ansys-ls-dyna)** software provides convenient and easy-to-use access to the technology-rich, time-tested explicit solver without the need to contend with the complex input requirements of this sophisticated program. Introduced in 1996, ANSYS LS-DYNA capabilities have helped customers in numerous industries to resolve highly intricate design issues. ANSYS Mechanical users have been able to take advantage of complex explicit solutions for a long time utilizing the traditional ANSYS Parametric Design Language (APDL) environment. These explicit capabilities are available to ANSYS Workbench users as well. The Workbench platform is a powerful, comprehensive, easy-to-use environment for engineering simulation. CAD import from all sources, geometry cleanup, automatic meshing, solution, parametric optimization, result visualization and comprehensive report generation are all available within a single fully interactive modern graphical user environment. To run ANSYS LS-DYNA in batch mode you can utilize/modify the default ansysdyna.pbs script and execute it via the qsub command. diff --git a/docs.it4i/salomon/software/ansys/ansys.md b/docs.it4i/salomon/software/ansys/ansys.md index c72e964629ed098f6413c5ccc491ef5920220363..fc43e50a634f8e8558bfa3f02e4f551ec35f4404 100644 --- a/docs.it4i/salomon/software/ansys/ansys.md +++ b/docs.it4i/salomon/software/ansys/ansys.md @@ -2,7 +2,7 @@ **[SVS FEM](http://www.svsfem.cz/)**, as the **[ANSYS Channel partner](http://www.ansys.com/)** for the Czech Republic, provided all ANSYS licenses for the ANSELM cluster and supports all ANSYS Products (Multiphysics, Mechanical, MAPDL, CFX, Fluent, Maxwell, LS-DYNA...) for IT staff and ANSYS users. If you encounter a problem with ANSYS functionality, please contact [hotline@svsfem.cz](mailto:hotline@svsfem.cz?subject=Ostrava%20-%20ANSELM) -Anselm provides as commercial as academic variants. Academic variants are distinguished by "**Academic...**" word in the name of license or by two letter preposition "**aa\_**" in the license feature name. Change of license is realized on command line respectively directly in user's pbs file (see individual products). [ More about licensing here](licensing/) +Anselm provides both commercial and academic variants. Academic variants are distinguished by the word "**Academic...**" in the license name or by the two-letter prefix "**aa\_**" in the license feature name. The license can be changed on the command line or directly in the user's pbs file (see individual products). [More about licensing here](licensing/) To load the latest version of any ANSYS product (Mechanical, Fluent, CFX, MAPDL,...) load the module: diff --git a/docs.it4i/salomon/software/ansys/setting-license-preferences.md b/docs.it4i/salomon/software/ansys/setting-license-preferences.md index f3dfad8e2daa5212d4620b5b17155e60b5aadad7..fe14541d46b1fe4cab38eb7b883c58e40e03dd32 100644 --- a/docs.it4i/salomon/software/ansys/setting-license-preferences.md +++ b/docs.it4i/salomon/software/ansys/setting-license-preferences.md @@ -2,7 +2,7 @@ Some ANSYS tools allow you to explicitly specify usage of academic or commercial licenses in the command line (e.g. ansys161 -p aa_r to select the Academic Research license).
However, we have observed that not all tools obey this option and choose commercial license. -Thus you need to configure preferred license order with ANSLIC_ADMIN. Please follow these steps and move Academic Research license to the top or bottom of the list accordingly. +Thus you need to configure preferred license order with ANSLIC_ADMIN. Please follow these steps and move Academic Research license to the top or bottom of the list accordingly. Launch the ANSLIC_ADMIN utility in a graphical environment: diff --git a/docs.it4i/salomon/software/chemistry/molpro.md b/docs.it4i/salomon/software/chemistry/molpro.md index eb0ffb2db199699a64c6aa853417068fc0d773d9..ab53760cda8c5efa186e93d7ab9d4b4032979f53 100644 --- a/docs.it4i/salomon/software/chemistry/molpro.md +++ b/docs.it4i/salomon/software/chemistry/molpro.md @@ -33,7 +33,7 @@ Compilation parameters are default: Molpro is compiled for parallel execution using MPI and OpenMP. By default, Molpro reads the number of allocated nodes from PBS and launches a data server on one node. On the remaining allocated nodes, compute processes are launched, one process per node, each with 16 threads. You can modify this behavior by using -n, -t and helper-server options. Please refer to the [Molpro documentation](http://www.molpro.net/info/2010.1/doc/manual/node9.html) for more details. !!! note - The OpenMP parallelization in Molpro is limited and has been observed to produce limited scaling. We therefore recommend to use MPI parallelization only. This can be achieved by passing option mpiprocs=16:ompthreads=1 to PBS. + The OpenMP parallelization in Molpro is limited and has been observed to produce limited scaling. We therefore recommend to use MPI parallelization only. This can be achieved by passing option mpiprocs=16:ompthreads=1 to PBS. You are advised to use the -d option to point to a directory in [SCRATCH filesystem](../../storage/storage/). Molpro can produce a large amount of temporary data during its run, and it is important that these are placed in the fast scratch filesystem. 
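A sketch of this advice inside a jobscript; the scratch path layout and the input file name are assumptions, not a verified recipe:

```bash
# Create a job-specific directory on the SCRATCH filesystem and point Molpro's
# temporary files there via the -d option
SCRDIR=/scratch/work/user/$USER/$PBS_JOBID
mkdir -p $SCRDIR
molpro -d $SCRDIR input.com
```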
diff --git a/docs.it4i/salomon/software/chemistry/phono3py.md b/docs.it4i/salomon/software/chemistry/phono3py.md index b1f575f6758847682894d66598b48a04d725a237..3f747d23bc9775f80137c0d6e4f1b4821d97439b 100644 --- a/docs.it4i/salomon/software/chemistry/phono3py.md +++ b/docs.it4i/salomon/software/chemistry/phono3py.md @@ -27,14 +27,14 @@ $ cat POSCAR Si 8 Direct - 0.8750000000000000 0.8750000000000000 0.8750000000000000 - 0.8750000000000000 0.3750000000000000 0.3750000000000000 - 0.3750000000000000 0.8750000000000000 0.3750000000000000 - 0.3750000000000000 0.3750000000000000 0.8750000000000000 - 0.1250000000000000 0.1250000000000000 0.1250000000000000 - 0.1250000000000000 0.6250000000000000 0.6250000000000000 - 0.6250000000000000 0.1250000000000000 0.6250000000000000 - 0.6250000000000000 0.6250000000000000 0.1250000000000000 + 0.8750000000000000 0.8750000000000000 0.8750000000000000 + 0.8750000000000000 0.3750000000000000 0.3750000000000000 + 0.3750000000000000 0.8750000000000000 0.3750000000000000 + 0.3750000000000000 0.3750000000000000 0.8750000000000000 + 0.1250000000000000 0.1250000000000000 0.1250000000000000 + 0.1250000000000000 0.6250000000000000 0.6250000000000000 + 0.6250000000000000 0.1250000000000000 0.6250000000000000 + 0.6250000000000000 0.6250000000000000 0.1250000000000000 ``` ### Generating Displacement Using 2 by 2 by 2 Supercell for Both Second and Third Order Force Constants @@ -47,15 +47,15 @@ $ phono3py -d --dim="2 2 2" -c POSCAR disp_fc3.yaml, and the structure input files with this displacements are POSCAR-00XXX, where the XXX=111. ```bash -disp_fc3.yaml POSCAR-00008 POSCAR-00017 POSCAR-00026 POSCAR-00035 POSCAR-00044 POSCAR-00053 POSCAR-00062 POSCAR-00071 POSCAR-00080 POSCAR-00089 POSCAR-00098 POSCAR-00107 -POSCAR POSCAR-00009 POSCAR-00018 POSCAR-00027 POSCAR-00036 POSCAR-00045 POSCAR-00054 POSCAR-00063 POSCAR-00072 POSCAR-00081 POSCAR-00090 POSCAR-00099 POSCAR-00108 -POSCAR-00001 POSCAR-00010 POSCAR-00019 POSCAR-00028 POSCAR-00037 POSCAR-00046 POSCAR-00055 POSCAR-00064 POSCAR-00073 POSCAR-00082 POSCAR-00091 POSCAR-00100 POSCAR-00109 -POSCAR-00002 POSCAR-00011 POSCAR-00020 POSCAR-00029 POSCAR-00038 POSCAR-00047 POSCAR-00056 POSCAR-00065 POSCAR-00074 POSCAR-00083 POSCAR-00092 POSCAR-00101 POSCAR-00110 -POSCAR-00003 POSCAR-00012 POSCAR-00021 POSCAR-00030 POSCAR-00039 POSCAR-00048 POSCAR-00057 POSCAR-00066 POSCAR-00075 POSCAR-00084 POSCAR-00093 POSCAR-00102 POSCAR-00111 -POSCAR-00004 POSCAR-00013 POSCAR-00022 POSCAR-00031 POSCAR-00040 POSCAR-00049 POSCAR-00058 POSCAR-00067 POSCAR-00076 POSCAR-00085 POSCAR-00094 POSCAR-00103 -POSCAR-00005 POSCAR-00014 POSCAR-00023 POSCAR-00032 POSCAR-00041 POSCAR-00050 POSCAR-00059 POSCAR-00068 POSCAR-00077 POSCAR-00086 POSCAR-00095 POSCAR-00104 -POSCAR-00006 POSCAR-00015 POSCAR-00024 POSCAR-00033 POSCAR-00042 POSCAR-00051 POSCAR-00060 POSCAR-00069 POSCAR-00078 POSCAR-00087 POSCAR-00096 POSCAR-00105 -POSCAR-00007 POSCAR-00016 POSCAR-00025 POSCAR-00034 POSCAR-00043 POSCAR-00052 POSCAR-00061 POSCAR-00070 POSCAR-00079 POSCAR-00088 POSCAR-00097 POSCAR-00106 +disp_fc3.yaml POSCAR-00008 POSCAR-00017 POSCAR-00026 POSCAR-00035 POSCAR-00044 POSCAR-00053 POSCAR-00062 POSCAR-00071 POSCAR-00080 POSCAR-00089 POSCAR-00098 POSCAR-00107 +POSCAR POSCAR-00009 POSCAR-00018 POSCAR-00027 POSCAR-00036 POSCAR-00045 POSCAR-00054 POSCAR-00063 POSCAR-00072 POSCAR-00081 POSCAR-00090 POSCAR-00099 POSCAR-00108 +POSCAR-00001 POSCAR-00010 POSCAR-00019 POSCAR-00028 POSCAR-00037 POSCAR-00046 POSCAR-00055 POSCAR-00064 POSCAR-00073 POSCAR-00082 POSCAR-00091 POSCAR-00100 
POSCAR-00109 +POSCAR-00002 POSCAR-00011 POSCAR-00020 POSCAR-00029 POSCAR-00038 POSCAR-00047 POSCAR-00056 POSCAR-00065 POSCAR-00074 POSCAR-00083 POSCAR-00092 POSCAR-00101 POSCAR-00110 +POSCAR-00003 POSCAR-00012 POSCAR-00021 POSCAR-00030 POSCAR-00039 POSCAR-00048 POSCAR-00057 POSCAR-00066 POSCAR-00075 POSCAR-00084 POSCAR-00093 POSCAR-00102 POSCAR-00111 +POSCAR-00004 POSCAR-00013 POSCAR-00022 POSCAR-00031 POSCAR-00040 POSCAR-00049 POSCAR-00058 POSCAR-00067 POSCAR-00076 POSCAR-00085 POSCAR-00094 POSCAR-00103 +POSCAR-00005 POSCAR-00014 POSCAR-00023 POSCAR-00032 POSCAR-00041 POSCAR-00050 POSCAR-00059 POSCAR-00068 POSCAR-00077 POSCAR-00086 POSCAR-00095 POSCAR-00104 +POSCAR-00006 POSCAR-00015 POSCAR-00024 POSCAR-00033 POSCAR-00042 POSCAR-00051 POSCAR-00060 POSCAR-00069 POSCAR-00078 POSCAR-00087 POSCAR-00096 POSCAR-00105 +POSCAR-00007 POSCAR-00016 POSCAR-00025 POSCAR-00034 POSCAR-00043 POSCAR-00052 POSCAR-00061 POSCAR-00070 POSCAR-00079 POSCAR-00088 POSCAR-00097 POSCAR-00106 ``` For each displacement the forces needs to be calculated, i.e. in form of the output file of VASP (vasprun.xml). For a single VASP calculations one needs [KPOINTS](KPOINTS), [POTCAR](POTCAR), [INCAR](INCAR) in your case directory (where you have POSCARS) and those 111 displacements calculations can be generated by [prepare.sh](prepare.sh) script. Then each of the single 111 calculations is submitted [run.sh](run.sh) by [submit.sh](submit.sh). @@ -63,14 +63,14 @@ For each displacement the forces needs to be calculated, i.e. in form of the out ```bash $./prepare.sh $ls -disp-00001 disp-00009 disp-00017 disp-00025 disp-00033 disp-00041 disp-00049 disp-00057 disp-00065 disp-00073 disp-00081 disp-00089 disp-00097 disp-00105 INCAR -disp-00002 disp-00010 disp-00018 disp-00026 disp-00034 disp-00042 disp-00050 disp-00058 disp-00066 disp-00074 disp-00082 disp-00090 disp-00098 disp-00106 KPOINTS -disp-00003 disp-00011 disp-00019 disp-00027 disp-00035 disp-00043 disp-00051 disp-00059 disp-00067 disp-00075 disp-00083 disp-00091 disp-00099 disp-00107 POSCAR -disp-00004 disp-00012 disp-00020 disp-00028 disp-00036 disp-00044 disp-00052 disp-00060 disp-00068 disp-00076 disp-00084 disp-00092 disp-00100 disp-00108 POTCAR -disp-00005 disp-00013 disp-00021 disp-00029 disp-00037 disp-00045 disp-00053 disp-00061 disp-00069 disp-00077 disp-00085 disp-00093 disp-00101 disp-00109 prepare.sh -disp-00006 disp-00014 disp-00022 disp-00030 disp-00038 disp-00046 disp-00054 disp-00062 disp-00070 disp-00078 disp-00086 disp-00094 disp-00102 disp-00110 run.sh -disp-00007 disp-00015 disp-00023 disp-00031 disp-00039 disp-00047 disp-00055 disp-00063 disp-00071 disp-00079 disp-00087 disp-00095 disp-00103 disp-00111 submit.sh -disp-00008 disp-00016 disp-00024 disp-00032 disp-00040 disp-00048 disp-00056 disp-00064 disp-00072 disp-00080 disp-00088 disp-00096 disp-00104 disp_fc3.yaml +disp-00001 disp-00009 disp-00017 disp-00025 disp-00033 disp-00041 disp-00049 disp-00057 disp-00065 disp-00073 disp-00081 disp-00089 disp-00097 disp-00105 INCAR +disp-00002 disp-00010 disp-00018 disp-00026 disp-00034 disp-00042 disp-00050 disp-00058 disp-00066 disp-00074 disp-00082 disp-00090 disp-00098 disp-00106 KPOINTS +disp-00003 disp-00011 disp-00019 disp-00027 disp-00035 disp-00043 disp-00051 disp-00059 disp-00067 disp-00075 disp-00083 disp-00091 disp-00099 disp-00107 POSCAR +disp-00004 disp-00012 disp-00020 disp-00028 disp-00036 disp-00044 disp-00052 disp-00060 disp-00068 disp-00076 disp-00084 disp-00092 disp-00100 disp-00108 POTCAR +disp-00005 disp-00013 disp-00021 disp-00029 
disp-00037 disp-00045 disp-00053 disp-00061 disp-00069 disp-00077 disp-00085 disp-00093 disp-00101 disp-00109 prepare.sh +disp-00006 disp-00014 disp-00022 disp-00030 disp-00038 disp-00046 disp-00054 disp-00062 disp-00070 disp-00078 disp-00086 disp-00094 disp-00102 disp-00110 run.sh +disp-00007 disp-00015 disp-00023 disp-00031 disp-00039 disp-00047 disp-00055 disp-00063 disp-00071 disp-00079 disp-00087 disp-00095 disp-00103 disp-00111 submit.sh +disp-00008 disp-00016 disp-00024 disp-00032 disp-00040 disp-00048 disp-00056 disp-00064 disp-00072 disp-00080 disp-00088 disp-00096 disp-00104 disp_fc3.yaml ``` Tailor your run.sh script to fit your project and other needs, and submit all 111 calculations using the submit.sh script diff --git a/docs.it4i/salomon/software/comsol/comsol-multiphysics.md b/docs.it4i/salomon/software/comsol/comsol-multiphysics.md index fd40c1e4aefe6acfc79aff06425ebf5ee7594fe5..a03c4495d9dad72aea2748ddff332697a8370bd9 100644 --- a/docs.it4i/salomon/software/comsol/comsol-multiphysics.md +++ b/docs.it4i/salomon/software/comsol/comsol-multiphysics.md @@ -18,7 +18,7 @@ On the clusters COMSOL is available in the latest stable version. There are two * **Non commercial** or so-called **EDU variant**, which can be used for research and educational purposes. -* **Commercial** or so called **COM variant**, which can used also for commercial activities. **COM variant** has only subset of features compared to the **EDU variant** available. More about licensing will be posted here soon. +* **Commercial** or so-called **COM variant**, which can also be used for commercial activities. The **COM variant** has only a subset of the features available in the **EDU variant**. More about licensing will be posted here soon. To load COMSOL, load the module diff --git a/docs.it4i/salomon/software/comsol/licensing-and-available-versions.md b/docs.it4i/salomon/software/comsol/licensing-and-available-versions.md index 6e9d290c73257580869df79364c7cca8d6ae72e5..1f1ad09bd9d88edba248c317b83db26f2593159f 100644 --- a/docs.it4i/salomon/software/comsol/licensing-and-available-versions.md +++ b/docs.it4i/salomon/software/comsol/licensing-and-available-versions.md @@ -12,7 +12,7 @@ The licence intended to be used for science and research, publications, students ## Comsol COM Network Licence -The licence intended to be used for science and research, publications, students’ projects, commercial research with no commercial use restrictions. Enables the solution of at least one job by one user in one program start. +The licence is intended to be used for science and research, publications, students’ projects, and commercial research with no commercial use restrictions. It enables the solution of at least one job by one user in one program start. ## Available Versions diff --git a/docs.it4i/salomon/software/debuggers/allinea-ddt.md b/docs.it4i/salomon/software/debuggers/allinea-ddt.md index 3315d6deecb54892b0dfa059ca86f949a7385ca5..b1710f44fde7a369d1beecc49a575fdce9c0c548 100644 --- a/docs.it4i/salomon/software/debuggers/allinea-ddt.md +++ b/docs.it4i/salomon/software/debuggers/allinea-ddt.md @@ -54,7 +54,7 @@ Before debugging, you need to compile your code with theses flags: ## Starting a Job With DDT -Be sure to log in with an X window forwarding enabled.
This could mean using the -X in the ssh: ```bash $ ssh -X username@anselm.it4i.cz diff --git a/docs.it4i/salomon/software/debuggers/allinea-performance-reports.md b/docs.it4i/salomon/software/debuggers/allinea-performance-reports.md index 526f28cc89d5fd41c2ef5cd220a3475d8bc58f54..1efe89278cd9a9f7d6174fe62bc59b8baf55172a 100644 --- a/docs.it4i/salomon/software/debuggers/allinea-performance-reports.md +++ b/docs.it4i/salomon/software/debuggers/allinea-performance-reports.md @@ -28,7 +28,7 @@ Instead of [running your MPI program the usual way](../mpi/mpi/), use the the pe $ perf-report mpirun ./mympiprog.x ``` -The mpi program will run as usual. The perf-report creates two additional files, in \*.txt and \*.html format, containing the performance report. Note that demanding MPI codes should be run within [ the queue system](../../resource-allocation-and-job-execution/job-submission-and-execution/). +The mpi program will run as usual. The perf-report creates two additional files, in \*.txt and \*.html format, containing the performance report. Note that demanding MPI codes should be run within [ the queue system](../../resource-allocation-and-job-execution/job-submission-and-execution/). ## Example diff --git a/docs.it4i/salomon/software/debuggers/valgrind.md b/docs.it4i/salomon/software/debuggers/valgrind.md index af97d2b617e4af5f9b1db30fbfbcad4650575289..9759f2c09e902feee23c0c762959355805b3f73e 100644 --- a/docs.it4i/salomon/software/debuggers/valgrind.md +++ b/docs.it4i/salomon/software/debuggers/valgrind.md @@ -159,7 +159,7 @@ so it is better to use the MPI-enabled valgrind from module. The MPI versions re $EBROOTVALGRIND/lib/valgrind/libmpiwrap-amd64-linux.so -which must be included in the LD_PRELOAD environment variable. +which must be included in the LD_PRELOAD environment variable. Lets look at this MPI example: diff --git a/docs.it4i/salomon/software/intel-suite/intel-debugger.md b/docs.it4i/salomon/software/intel-suite/intel-debugger.md index 1fbb569d179f6f95aa9918748075c67b92458bf1..c4f25ac328d0712cc9a136fd6f96c3eddc27778e 100644 --- a/docs.it4i/salomon/software/intel-suite/intel-debugger.md +++ b/docs.it4i/salomon/software/intel-suite/intel-debugger.md @@ -4,7 +4,7 @@ IDB is no longer available since Intel Parallel Studio 2015 ## Debugging Serial Applications -The intel debugger version 13.0 is available, via module intel. The debugger works for applications compiled with C and C++ compiler and the ifort fortran 77/90/95 compiler. The debugger provides java GUI environment. Use [X display](../../../get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system/) for running the GUI. +The intel debugger version 13.0 is available, via module intel. The debugger works for applications compiled with C and C++ compiler and the ifort fortran 77/90/95 compiler. The debugger provides java GUI environment. Use [X display](../../../get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system/) for running the GUI. ```bash $ module load intel/2014.06 diff --git a/docs.it4i/salomon/software/intel-suite/intel-inspector.md b/docs.it4i/salomon/software/intel-suite/intel-inspector.md index 3ff7762b131f56dff0fa6385f90003e5f78d8812..0f90f4d106f17f4d5b0c615afec491b7eca12f80 100644 --- a/docs.it4i/salomon/software/intel-suite/intel-inspector.md +++ b/docs.it4i/salomon/software/intel-suite/intel-inspector.md @@ -28,7 +28,7 @@ In the main pane, you can start a predefined analysis type or define your own. 
Let's look at this MPI example:

diff --git a/docs.it4i/salomon/software/intel-suite/intel-debugger.md b/docs.it4i/salomon/software/intel-suite/intel-debugger.md
index 1fbb569d179f6f95aa9918748075c67b92458bf1..c4f25ac328d0712cc9a136fd6f96c3eddc27778e 100644
--- a/docs.it4i/salomon/software/intel-suite/intel-debugger.md
+++ b/docs.it4i/salomon/software/intel-suite/intel-debugger.md
@@ -4,7 +4,7 @@ IDB is no longer available since Intel Parallel Studio 2015

## Debugging Serial Applications

-The intel debugger version 13.0 is available, via module intel. The debugger works for applications compiled with C and C++ compiler and the ifort fortran 77/90/95 compiler. The debugger provides java GUI environment. Use [X display](../../../get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system/) for running the GUI.
+The Intel debugger version 13.0 is available via the intel module. The debugger works for applications compiled with the C and C++ compilers and the ifort Fortran 77/90/95 compiler. The debugger provides a Java GUI environment. Use [X display](../../../get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system/) for running the GUI.

```bash
$ module load intel/2014.06

diff --git a/docs.it4i/salomon/software/intel-suite/intel-inspector.md b/docs.it4i/salomon/software/intel-suite/intel-inspector.md
index 3ff7762b131f56dff0fa6385f90003e5f78d8812..0f90f4d106f17f4d5b0c615afec491b7eca12f80 100644
--- a/docs.it4i/salomon/software/intel-suite/intel-inspector.md
+++ b/docs.it4i/salomon/software/intel-suite/intel-inspector.md
@@ -28,7 +28,7 @@ In the main pane, you can start a predefined analysis type or define your own. C

### Batch Mode

-Analysis can be also run from command line in batch mode. Batch mode analysis is run with command inspxe-cl. To obtain the required parameters, either consult the documentation or you can configure the analysis in the GUI and then click "Command Line" button in the lower right corner to the respective command line.
+Analysis can also be run from the command line in batch mode. Batch mode analysis is run with the inspxe-cl command. To obtain the required parameters, either consult the documentation or configure the analysis in the GUI and then click the "Command Line" button in the lower right corner to obtain the respective command line.
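A batch run might look like the following (a sketch, not taken from the original page; mi2 is one of Inspector's predefined memory-error analysis levels, and the program name and result directory are placeholders):

```bash
$ inspxe-cl -collect mi2 -result-dir ./inspector-results -- ./myprog
```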
Results obtained from batch mode can then be viewed in the GUI by selecting File -> Open -> Result...

diff --git a/docs.it4i/salomon/software/intel-suite/intel-mkl.md b/docs.it4i/salomon/software/intel-suite/intel-mkl.md
index ca0cdcc619554869f74f13d9d25895b28530c330..ce284b0ef4ca5096df31282b6ecdbf6c4eada38b 100644
--- a/docs.it4i/salomon/software/intel-suite/intel-mkl.md
+++ b/docs.it4i/salomon/software/intel-suite/intel-mkl.md
@@ -11,7 +11,7 @@ Intel Math Kernel Library (Intel MKL) is a library of math kernel subroutines, e
* Vector Math Library (VML) routines for optimized mathematical operations on vectors.
* Vector Statistical Library (VSL) routines, which offer high-performance vectorized random number generators (RNG) for several probability distributions, convolution and correlation routines, and summary statistics functions.
* Data Fitting Library, which provides capabilities for spline-based approximation of functions, derivatives and integrals of functions, and search.
-* Extended Eigensolver, a shared memory version of an eigensolver based on the Feast Eigenvalue Solver.
+* Extended Eigensolver, a shared memory version of an eigensolver based on the FEAST Eigenvalue Solver.

For details see the [Intel MKL Reference Manual](http://software.intel.com/sites/products/documentation/doclib/mkl_sa/11/mklman/index.htm).

@@ -38,7 +38,7 @@ Intel MKL library provides number of interfaces. The fundamental once are the LP
Linking Intel MKL libraries may be complex. Intel [mkl link line advisor](http://software.intel.com/en-us/articles/intel-mkl-link-line-advisor) helps. See also [examples](intel-mkl/#examples) below.

-You will need the mkl module loaded to run the mkl enabled executable. This may be avoided, by compiling library search paths into the executable. Include rpath on the compile line:
+You will need the mkl module loaded to run an MKL-enabled executable. This may be avoided by compiling the library search paths into the executable. Include rpath on the compile line:

```bash
$ icc .... -Wl,-rpath=$LIBRARY_PATH ...
@@ -72,7 +72,7 @@ Number of examples, demonstrating use of the Intel MKL library and its linking i
$ make sointel64 function=cblas_dgemm
```

-In this example, we compile, link and run the cblas_dgemm example, demonstrating use of MKL example suite installed on clusters.
+In this example, we compile, link, and run the cblas_dgemm example, demonstrating use of the MKL example suite installed on the clusters.

### Example: MKL and Intel Compiler

@@ -86,14 +86,14 @@ In this example, we compile, link and run the cblas_dgemm example, demonstratin
$ ./cblas_dgemmx.x data/cblas_dgemmx.d
```

-In this example, we compile, link and run the cblas_dgemm example, demonstrating use of MKL with icc -mkl option. Using the -mkl option is equivalent to:
+In this example, we compile, link, and run the cblas_dgemm example, demonstrating use of MKL with the icc -mkl option. Using the -mkl option is equivalent to:

```bash
$ icc -w source/cblas_dgemmx.c source/common_func.c -o cblas_dgemmx.x -I$MKL_INC_DIR -L$MKL_LIB_DIR -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -liomp5
```

-In this example, we compile and link the cblas_dgemm example, using LP64 interface to threaded MKL and Intel OMP threads implementation.
+In this example, we compile and link the cblas_dgemm example, using the LP64 interface to threaded MKL and the Intel OMP threads implementation.

### Example: Intel MKL and GNU Compiler

@@ -109,7 +109,7 @@ In this example, we compile and link the cblas_dgemm example, using LP64 interf
$ ./cblas_dgemmx.x data/cblas_dgemmx.d
```

-In this example, we compile, link and run the cblas_dgemm example, using LP64 interface to threaded MKL and gnu OMP threads implementation.
+In this example, we compile, link, and run the cblas_dgemm example, using the LP64 interface to threaded MKL and the GNU OMP threads implementation.

## MKL and MIC Accelerators

diff --git a/docs.it4i/salomon/software/intel-suite/intel-trace-analyzer-and-collector.md b/docs.it4i/salomon/software/intel-suite/intel-trace-analyzer-and-collector.md
index cf170231eec28314236eb262ae6893abf71852a9..b2191d8d2b0941eda701a37a7a3b66db21950f12 100644
--- a/docs.it4i/salomon/software/intel-suite/intel-trace-analyzer-and-collector.md
+++ b/docs.it4i/salomon/software/intel-suite/intel-trace-analyzer-and-collector.md
@@ -6,7 +6,7 @@ ITAC is a offline analysis tool - first you run your application to collect a tr

## Installed Version

-Currently on Salomon is version 9.1.2.024 available as module itac/9.1.2.024
+Currently, version 9.1.2.024 is available on Salomon as the module itac/9.1.2.024

## Collecting Traces

diff --git a/docs.it4i/salomon/software/mpi/Running_OpenMPI.md b/docs.it4i/salomon/software/mpi/Running_OpenMPI.md
index 0af557ecf054b258b83ff3b0f1046a2e7d932e54..9aa54f09aa07ccde2daa1bfc5c6ff4daeab2b78b 100644
--- a/docs.it4i/salomon/software/mpi/Running_OpenMPI.md
+++ b/docs.it4i/salomon/software/mpi/Running_OpenMPI.md
@@ -29,7 +29,7 @@ Example:

Please be aware, that in this example, the directive **-pernode** is used to run only **one task per node**, which is normally an unwanted behaviour (unless you want to run hybrid code with just one MPI and 24 OpenMP tasks per node). In normal MPI programs **omit the -pernode directive** to run up to 24 MPI tasks per each node. In this example, we allocate 4 nodes via the express queue interactively. We set up the openmpi environment and interactively run the helloworld_mpi.x program.

-Note that the executable helloworld_mpi.x must be available within the same path on all nodes. This is automatically fulfilled on the /home and /scratch filesystem.
+Note that the executable helloworld_mpi.x must be available within the same path on all nodes. This is automatically fulfilled on the /home and /scratch filesystems.

You need to preload the executable if running on the local ramdisk /tmp filesystem.
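Preloading can be done by mpirun itself (a sketch, assuming Open MPI's --preload-binary option; the executable name follows the example above):

```bash
$ mpirun -np 4 --preload-binary ./helloworld_mpi.x
```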
diff --git a/docs.it4i/salomon/software/numerical-languages/matlab.md b/docs.it4i/salomon/software/numerical-languages/matlab.md
index 8cfbdf31afc0155eee6b84a64f43eb2bf2f35fef..ea4e6181f7934099e005042e1c177adde7654615 100644
--- a/docs.it4i/salomon/software/numerical-languages/matlab.md
+++ b/docs.it4i/salomon/software/numerical-languages/matlab.md
@@ -272,7 +272,7 @@ You can use MATLAB on UV2000 in two parallel modes:

### Threaded Mode

-Since this is a SMP machine, you can completely avoid using Parallel Toolbox and use only MATLAB's threading. MATLAB will automatically detect the number of cores you have allocated and will set maxNumCompThreads accordingly and certain operations, such as fft, , eig, svd, etc. will be automatically run in threads. The advantage of this mode is that you don't need to modify your existing sequential codes.
+Since this is an SMP machine, you can completely avoid using the Parallel Toolbox and use only MATLAB's threading. MATLAB will automatically detect the number of cores you have allocated and set maxNumCompThreads accordingly, and certain operations, such as fft, eig, svd, etc., will automatically run in threads. The advantage of this mode is that you don't need to modify your existing sequential codes.

### Local Cluster Mode

diff --git a/docs.it4i/salomon/software/numerical-languages/r.md b/docs.it4i/salomon/software/numerical-languages/r.md
index 138e4da07151f4e9e802ef447c8ad7bdad7ec190..a8829cb43e48f3d91cd82d312f98a0a5e0687c9f 100644
--- a/docs.it4i/salomon/software/numerical-languages/r.md
+++ b/docs.it4i/salomon/software/numerical-languages/r.md
@@ -70,7 +70,7 @@ This script may be submitted directly to the PBS workload manager via the qsub c

## Parallel R

-Parallel execution of R may be achieved in many ways. One approach is the implied parallelization due to linked libraries or specially enabled functions, as [described above](r/#interactive-execution). In the following sections, we focus on explicit parallelization, where parallel constructs are directly stated within the R script.
+Parallel execution of R may be achieved in many ways. One approach is the implied parallelization due to linked libraries or specially enabled functions, as [described above](r/#interactive-execution). In the following sections, we focus on explicit parallelization, where parallel constructs are stated directly within the R script.

## Package Parallel

@@ -371,7 +371,7 @@ Example jobscript for [static Rmpi](r/#static-rmpi) parallel R execution, runnin
#PBS -N Rjob
#PBS -l select=100:ncpus=24:mpiprocs=24:ompthreads=1

- # change to scratch directory
+ # change to scratch directory
SCRDIR=/scratch/work/user/$USER/myjob
cd $SCRDIR || exit

diff --git a/docs.it4i/salomon/storage.md b/docs.it4i/salomon/storage.md
index 0edbcd5db799a8517bbe8ffff128f9c552d70832..afae50969ec849ad6ab236f6f138949aee4a0287 100644
--- a/docs.it4i/salomon/storage.md
+++ b/docs.it4i/salomon/storage.md
@@ -30,7 +30,7 @@ The HOME file system is realized as a Tiered file system, exported via NFS. The

### SCRATCH File System

-The architecture of Lustre on Salomon is composed of two metadata servers (MDS) and six data/object storage servers (OSS). Accessible capacity is 1.69 PB, shared among all users. The SCRATCH file system hosts the [WORK and TEMP workspaces](#shared-workspaces).
+The architecture of Lustre on Salomon is composed of two metadata servers (MDS) and six data/object storage servers (OSS). Accessible capacity is 1.69 PB, shared among all users. The SCRATCH file system hosts the [WORK and TEMP workspaces](#shared-workspaces).
Configuration of the SCRATCH Lustre storage

@@ -50,14 +50,14 @@ Configuration of the SCRATCH Lustre storage

A user file on the Lustre file system can be divided into multiple chunks (stripes) and stored across a subset of the object storage targets (OSTs) (disks). The stripes are distributed among the OSTs in a round-robin fashion to ensure load balancing.

-When a client (a compute node from your job) needs to create or access a file, the client queries the metadata server ( MDS) and the metadata target ( MDT) for the layout and location of the [file's stripes](http://www.nas.nasa.gov/hecc/support/kb/Lustre_Basics_224.html#striping). Once the file is opened and the client obtains the striping information, the MDS is no longer involved in the file I/O process. The client interacts directly with the object storage servers (OSSes) and OSTs to perform I/O operations such as locking, disk allocation, storage, and retrieval.
+When a client (a compute node from your job) needs to create or access a file, the client queries the metadata server (MDS) and the metadata target (MDT) for the layout and location of the [file's stripes](http://www.nas.nasa.gov/hecc/support/kb/Lustre_Basics_224.html#striping). Once the file is opened and the client obtains the striping information, the MDS is no longer involved in the file I/O process. The client interacts directly with the object storage servers (OSSes) and OSTs to perform I/O operations such as locking, disk allocation, storage, and retrieval.

If multiple clients try to read and write the same part of a file at the same time, the Lustre distributed lock manager enforces coherency so that all clients see consistent results.

There is a default stripe configuration for Salomon Lustre file systems. However, users can set the following stripe parameters for their own directories or files to get optimum I/O performance (a usage sketch follows the list):

1. stripe_size: the size of the chunk in bytes; specify with k, m, or g to use units of KB, MB, or GB, respectively; the size must be an even multiple of 65,536 bytes; default is 1MB for all Salomon Lustre file systems
-2. stripe_count the number of OSTs to stripe across; default is 1 for Salomon Lustre file systems one can specify -1 to use all OSTs in the file system.
+2. stripe_count: the number of OSTs to stripe across; default is 1 for Salomon Lustre file systems; one can specify -1 to use all OSTs in the file system.
3. stripe_offset: the index of the OST where the first stripe is to be placed; default is -1, which results in random selection; using a non-default value is NOT recommended.
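Putting the parameters together might look like this (a sketch; the directory is a placeholder, and the classic short options -s, -c, and -o correspond to stripe_size, stripe_count, and stripe_offset):

```bash
$ lfs getstripe /scratch/work/user/$USER/mydir
$ lfs setstripe -s 4m -c 8 /scratch/work/user/$USER/mydir
```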
!!! note

@@ -121,7 +121,7 @@ Example for Lustre SCRATCH directory:
```bash
$ lfs quota /scratch
Disk quotas for user user001 (uid 1234):
-     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
+     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
       /scratch       8       0 100000000000       *       3       0       0       -
Disk quotas for group user001 (gid 1234):
     Filesystem  kbytes   quota   limit   grace   files   quota   limit   grace
@@ -141,9 +141,9 @@ Example output:
```bash
$ quota
Disk quotas for user vop999 (uid 1025):
-     Filesystem  blocks   quota   limit   grace   files   quota   limit   grace
+     Filesystem  blocks   quota   limit   grace   files   quota   limit   grace
home-nfs-ib.salomon.it4i.cz:/home
-         28       0 250000000              10       0  500000
+         28       0 250000000              10       0  500000
```

To have a better understanding of where exactly the space is used, you can use the following command to find out.

@@ -186,7 +186,7 @@ ACLs on a Lustre file system work exactly like ACLs on any Linux file system. Th
[vop999@login1.salomon ~]$ umask 027
[vop999@login1.salomon ~]$ mkdir test
[vop999@login1.salomon ~]$ ls -ld test
-drwxr-x--- 2 vop999 vop999 4096 Nov 5 14:17 test
+drwxr-x--- 2 vop999 vop999 4096 Nov 5 14:17 test
[vop999@login1.salomon ~]$ getfacl test
# file: test
# owner: vop999
@@ -197,7 +197,7 @@ other::---

[vop999@login1.salomon ~]$ setfacl -m user:johnsm:rwx test
[vop999@login1.salomon ~]$ ls -ld test
-drwxrwx---+ 2 vop999 vop999 4096 Nov 5 14:17 test
+drwxrwx---+ 2 vop999 vop999 4096 Nov 5 14:17 test
[vop999@login1.salomon ~]$ getfacl test
# file: test
# owner: vop999
@@ -222,7 +222,7 @@ Users home directories /home/username reside on HOME file system. Accessible cap
!!! note
    The HOME file system is intended for preparation, evaluation, processing and storage of data generated by active Projects.

-The HOME should not be used to archive data of past Projects or other unrelated data.
+The HOME should not be used to archive data of past Projects or other unrelated data.

The files on HOME will not be deleted until the end of the [users lifecycle](../get-started-with-it4innovations/obtaining-login-credentials/obtaining-login-credentials/).

@@ -241,7 +241,7 @@ The workspace is backed up, such that it can be restored in case of catasthropic

The WORK workspace resides on SCRATCH file system. Users may create subdirectories and files in directories **/scratch/work/user/username** and **/scratch/work/project/projectid**. The /scratch/work/user/username is private to the user, much like the home directory. The /scratch/work/project/projectid is accessible to all users involved in project projectid.

!!! note
-    The WORK workspace is intended to store users project data as well as for high performance access to input and output files. All project data should be removed once the project is finished. The data on the WORK workspace are not backed up.
+    The WORK workspace is intended to store users' project data as well as for high performance access to input and output files. All project data should be removed once the project is finished. The data on the WORK workspace are not backed up.

Files on the WORK file system are **persistent** (not automatically deleted) throughout the duration of the project.

@@ -266,7 +266,7 @@ The WORK workspace is hosted on SCRATCH file system. The SCRATCH is realized as

The TEMP workspace resides on SCRATCH file system. The TEMP workspace access point is /scratch/temp. Users may freely create subdirectories and files on the workspace. Accessible capacity is 1.6 PB, shared among all users on TEMP and WORK. Individual users are restricted by file system usage quotas, set to 100 TB per user. The purpose of this quota is to prevent runaway programs from filling the entire file system and denying service to other users. If 100 TB should prove insufficient for a particular user, please contact [support](https://support.it4i.cz/rt); the quota may be lifted upon request.

!!! note
-    The TEMP workspace is intended for temporary scratch data generated during the calculation as well as for high performance access to input and output files. All I/O intensive jobs must use the TEMP workspace as their working directory.
+    The TEMP workspace is intended for temporary scratch data generated during the calculation as well as for high performance access to input and output files. All I/O intensive jobs must use the TEMP workspace as their working directory.

    Users are advised to save the necessary data from the TEMP workspace to HOME or WORK after the calculations and clean up the scratch files.
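A jobscript following these rules might stage data through TEMP like this (a sketch; the program and file names are placeholders):

```bash
#!/bin/bash
# hypothetical sketch: run an I/O intensive job from the TEMP workspace
SCRDIR=/scratch/temp/$USER/$PBS_JOBID
mkdir -p $SCRDIR && cd $SCRDIR || exit

cp $PBS_O_WORKDIR/input .                # stage input in
$PBS_O_WORKDIR/myprog.x input > output   # compute on fast scratch
cp output $PBS_O_WORKDIR/                # save results to the submit directory
cd $PBS_O_WORKDIR && rm -rf $SCRDIR      # clean up the scratch files
```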
@@ -352,7 +352,7 @@ Once registered for CESNET Storage, you may [access the storage](https://du.cesn

!!! note
    SSHFS: The storage will be mounted like a local hard drive

-The SSHFS provides a very convenient way to access the CESNET Storage. The storage will be mounted onto a local directory, exposing the vast CESNET Storage as if it was a local removable hard drive. Files can be than copied in and out in a usual fashion.
+The SSHFS provides a very convenient way to access the CESNET Storage. The storage will be mounted onto a local directory, exposing the vast CESNET Storage as if it were a local removable hard drive. Files can then be copied in and out in the usual fashion.

First, create the mount point

diff --git a/docs.it4i/software/lmod.md b/docs.it4i/software/lmod.md
index 00e70819ce8c8bac76f0d14d42b39c10afcd0a67..66ed012edbd0fe1bffd0ccbb1dcab4c2783f4ae7 100644
--- a/docs.it4i/software/lmod.md
+++ b/docs.it4i/software/lmod.md
@@ -156,7 +156,7 @@ Lmod does a partial match on the module name, so sometimes you need to use / to
$ ml av GCC/

------------------------------------------ /apps/modules/compiler -------------------------------------------
-GCC/4.4.7-system GCC/4.8.3 GCC/4.9.2 GCC/4.9.3 GCC/5.1.0-binutils-2.25 GCC/5.3.0-binutils-2.25 GCC/5.3.0-2.26 GCC/5.4.0-2.26 GCC/4.7.4 GCC/4.9.2-binutils-2.25 GCC/4.9.3-binutils-2.25 GCC/4.9.3-2.25 GCC/5.2.0 GCC/5.3.0-2.25 GCC/6.2.0-2.27 (D)
+GCC/4.4.7-system GCC/4.8.3 GCC/4.9.2 GCC/4.9.3 GCC/5.1.0-binutils-2.25 GCC/5.3.0-binutils-2.25 GCC/5.3.0-2.26 GCC/5.4.0-2.26 GCC/4.7.4 GCC/4.9.2-binutils-2.25 GCC/4.9.3-binutils-2.25 GCC/4.9.3-2.25 GCC/5.2.0 GCC/5.3.0-2.25 GCC/6.2.0-2.27 (D)

Where:
   D:  Default Module
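Loading without a version then picks the default marked (D). A quick sketch based on the listing above:

```bash
$ ml GCC              # loads the default version, here GCC/6.2.0-2.27
$ ml GCC/5.4.0-2.26   # or request a specific version
```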