diff --git a/docs.it4i/anselm/storage.md b/docs.it4i/anselm/storage.md
index 3b0e0ae28ed89efa99f06d25be3d9347b7125553..dc1b4ae53d87d3dca6cd7359230d3dbd45fa699d 100644
--- a/docs.it4i/anselm/storage.md
+++ b/docs.it4i/anselm/storage.md
@@ -120,7 +120,8 @@ Default stripe size is 1MB, stripe count is 1. There are 22 OSTs dedicated for t
| Mountpoint | /home |
| Capacity | 320 TB |
| Throughput | 2 GB/s |
-| User quota | 250 GB |
+| User space quota | 250 GB |
+| User inodes quota | 500 k |
| Default stripe size | 1 MB |
| Default stripe count | 1 |
| Number of OSTs | 22 |
@@ -145,10 +146,11 @@ The SCRATCH filesystem is realized as Lustre parallel filesystem and is availabl
| SCRATCH filesystem | |
| -------------------- | -------- |
| Mountpoint | /scratch |
-| Capacity | 146TB |
-| Throughput | 6GB/s |
-| User quota | 100TB |
-| Default stripe size | 1MB |
+| Capacity | 146 TB |
+| Throughput | 6 GB/s |
+| User space quota | 100 TB |
+| User inodes quota | 10 M |
+| Default stripe size | 1 MB |
| Default stripe count | 1 |
| Number of OSTs | 10 |
@@ -178,7 +180,7 @@ Filesystem: /scratch
Space used: 0
Space limit: 93T
Entries: 0
-Entries limit: 0
+Entries limit: 10m
```

In this example, we view the current size limits and space occupied on the /home and /scratch filesystems for the user executing the command.
@@ -269,8 +271,8 @@ The local scratch filesystem is intended for temporary scratch data generated du
| ------------------------ | -------------------- |
| Mountpoint | /lscratch |
| Accesspoint | /lscratch/$PBS_JOBID |
-| Capacity | 330GB |
-| Throughput | 100MB/s |
+| Capacity | 330 GB |
+| Throughput | 100 MB/s |
| User quota | none |

### RAM Disk

@@ -287,13 +289,13 @@ The local RAM disk filesystem is intended for temporary scratch data generated d
!!! note
    The local RAM disk directory /ramdisk/$PBS_JOBID will be deleted immediately after the calculation ends. Users should take care to save the output data from within the jobscript.

-| RAM disk | |
-| ----------- | ------------------------------------------------------------------------------------------------------- |
-| Mountpoint | /ramdisk |
-| Accesspoint | /ramdisk/$PBS_JOBID |
-| Capacity | 60GB at compute nodes without accelerator, 90GB at compute nodes with accelerator, 500GB at fat nodes |
-| Throughput | over 1.5 GB/s write, over 5 GB/s read, single thread, over 10 GB/s write, over 50 GB/s read, 16 threads |
-| User quota | none |
+| RAM disk | |
+| ----------- | -------------------------------------------------------------------------------------------------------- |
+| Mountpoint | /ramdisk |
+| Accesspoint | /ramdisk/$PBS_JOBID |
+| Capacity | 60 GB at compute nodes without accelerator, 90 GB at compute nodes with accelerator, 500 GB at fat nodes |
+| Throughput | over 1.5 GB/s write, over 5 GB/s read, single thread, over 10 GB/s write, over 50 GB/s read, 16 threads |
+| User quota | none |

### Tmp

@@ -301,13 +303,13 @@ Each node is equipped with local /tmp directory of few GB capacity. The /tmp dir
## Summary

-| Mountpoint | Usage | Protocol | Net Capacity | Throughput | Limitations | Access | Services | |
-| ---------- | ------------------------- | -------- | -------------- | ---------- | ----------- | ----------------------- | --------------------------- | ------ |
-| /home | home directory | Lustre | 320 TiB | 2 GB/s | Quota 250GB | Compute and login nodes | backed up | |
-| /scratch | cluster shared jobs' data | Lustre | 146 TiB | 6 GB/s | Quota 100TB | Compute and login nodes | files older 90 days removed | |
-| /lscratch | node local jobs' data | local | 330 GB | 100 MB/s | none | Compute nodes | purged after job ends | |
-| /ramdisk | node local jobs' data | local | 60, 90, 500 GB | 5-50 GB/s | none | Compute nodes | purged after job ends | |
-| /tmp | local temporary files | local | 9.5 GB | 100 MB/s | none | Compute and login nodes | auto | purged |
+| Mountpoint | Usage | Protocol | Net Capacity | Throughput | Space/Inodes quota | Access | Services |
+| ---------- | ------------------------- | -------- | -------------- | ---------- | ------------------ | ----------------------- | -------------------------------- |
+| /home | home directory | Lustre | 320 TiB | 2 GB/s | 250 GB / 500 k | Compute and login nodes | backed up |
+| /scratch | cluster shared jobs' data | Lustre | 146 TiB | 6 GB/s | 100 TB / 10 M | Compute and login nodes | files older than 90 days removed |
+| /lscratch | node local jobs' data | local | 330 GB | 100 MB/s | none / none | Compute nodes | purged after job ends |
+| /ramdisk | node local jobs' data | local | 60, 90, 500 GB | 5-50 GB/s | none / none | Compute nodes | purged after job ends |
+| /tmp | local temporary files | local | 9.5 GB | 100 MB/s | none / none | Compute and login nodes | auto purged |

## CESNET Data Storage

diff --git a/docs.it4i/salomon-upgrade.md b/docs.it4i/salomon-upgrade.md
index 424ef68a0b11e3f4d2fe5025beb7174f5e8aeb8b..eb109aea86370855148909507e70594f7214e39e 100644
--- a/docs.it4i/salomon-upgrade.md
+++ b/docs.it4i/salomon-upgrade.md
@@ -12,7 +12,7 @@ Salomon operating system has been upgraded to the latest CentOS 7.6 on 2018-12-0
* glibc upgraded to 2.17 (2.12 now)
* software modules/binaries should be recompiled or deleted

-## Discontinued Modules
+# Discontinued Modules

A new tag has been introduced. Modules tagged with **C6** might be malfunctioning. These modules might be recompiled during the transition period. Keep support@it4i.cz informed about malfunctioning modules.

diff --git a/docs.it4i/salomon/shell-and-data-access.md b/docs.it4i/salomon/shell-and-data-access.md
index 1ee44fcd85d845f24effd829c883fa208f28521c..941b539eaad9d1af8640c078b27a3779378cb623 100644
--- a/docs.it4i/salomon/shell-and-data-access.md
+++ b/docs.it4i/salomon/shell-and-data-access.md
@@ -22,7 +22,7 @@ The authentication is by the [private key][1] only.
md5:

-    f6:28:98:e4:f9:b2:a6:8f:f2:f4:2d:0a:09:67:69:80 (DSA)
+    f6:28:98:e4:f9:b2:a6:8f:f2:f4:2d:0a:09:67:69:80 (DSA)<br />
    70:01:c9:9a:5d:88:91:c7:1b:c0:84:d1:fa:4e:83:5c (RSA)

sha256:
diff --git a/docs.it4i/salomon/storage.md b/docs.it4i/salomon/storage.md
index f043973cb93002b4c74dc36840ada5c00da5b139..b6ed820f07ee1983b0de4a40d142fbed055bafff 100644
--- a/docs.it4i/salomon/storage.md
+++ b/docs.it4i/salomon/storage.md
@@ -133,7 +133,7 @@ Filesystem: /scratch
Space used: 377G
Space limit: 93T
Entries: 14k
-Entries limit: 0
+Entries limit: 10m
# based on Lustre quota

Filesystem: /scratch
@@ -144,6 +144,7 @@ Entries: 14k

Filesystem: /scratch/work
Space used: 377G
-Entries: 14k
+Entries: 40k
+Entries limit: 1.0m
# based on Robinhood

Filesystem: /scratch/temp
@@ -235,42 +237,36 @@ The files on HOME will not be deleted until end of the [users lifecycle][10].

The workspace is backed up, such that it can be restored in case of catastrophic failure resulting in significant data loss. This backup, however, is not intended to restore old versions of user data or to restore (accidentally) deleted files.

-| HOME workspace | |
-| -------------- | -------------- |
-| Accesspoint | /home/username |
-| Capacity | 0.5 PB |
-| Throughput | 6 GB/s |
-| User quota | 250 GB |
-| Protocol | NFS, 2-Tier |
+| HOME workspace | |
+| ----------------- | -------------- |
+| Accesspoint | /home/username |
+| Capacity | 0.5 PB |
+| Throughput | 6 GB/s |
+| User space quota | 250 GB |
+| User inodes quota | 500 k |
+| Protocol | NFS, 2-Tier |

-### Work
+### Scratch

-The WORK workspace resides on SCRATCH file system. Users may create subdirectories and files in directories **/scratch/work/user/username** and **/scratch/work/project/projectid. **The /scratch/work/user/username is private to user, much like the home directory. The /scratch/work/project/projectid is accessible to all users involved in project projectid.
+The SCRATCH is realized as a Lustre parallel file system and is available from all login and computational nodes. The default stripe size is 1 MB, the stripe count is 1. There are 54 OSTs dedicated for the SCRATCH file system.

 !!! note
-    The WORK workspace is intended to store users project data as well as for high performance access to input and output files. All project data should be removed once the project is finished. The data on the WORK workspace are not backed up.
+    Setting stripe size and stripe count correctly for your needs may significantly impact the I/O performance you experience.

-    Files on the WORK file system are **persistent** (not automatically deleted) throughout duration of the project.
+Accessible capacity is 1.6 PB, shared among all users on TEMP and WORK. Individual users are restricted by file system usage quotas, set to 10 M inodes and 100 TB per user. The purpose of these quotas is to prevent runaway programs from filling the entire file system and denying service to other users. If 100 TB of space or 10 M inodes prove insufficient for a particular user, contact [support][d]; the quota may be lifted upon request.
+
+#### Work

-The WORK workspace is hosted on SCRATCH file system. The SCRATCH is realized as Lustre parallel file system and is available from all login and computational nodes. Default stripe size is 1 MB, stripe count is 1. There are 54 OSTs dedicated for the SCRATCH file system.
+The WORK workspace resides on the SCRATCH file system. Users may create subdirectories and files in the directories **/scratch/work/user/username** and **/scratch/work/project/projectid**. The /scratch/work/user/username directory is private to the user, much like the home directory. The /scratch/work/project/projectid directory is accessible to all users involved in project projectid.

 !!! note
-    Setting stripe size and stripe count correctly for your needs may significantly impact the I/O performance you experience.
+    The WORK workspace is intended to store users' project data as well as for high performance access to input and output files. All project data should be removed once the project is finished. The data on the WORK workspace are not backed up.

-| WORK workspace | |
-| -------------------- | --------------------------------------------------------- |
-| Accesspoints | /scratch/work/user/username, /scratch/work/user/projectid |
-| Capacity | 1.6 PB |
-| Throughput | 30 GB/s |
-| User quota | 100 TB |
-| Default stripe size | 1 MB |
-| Default stripe count | 1 |
-| Number of OSTs | 54 |
-| Protocol | Lustre |
+    Files on the WORK file system are **persistent** (not automatically deleted) throughout the duration of the project.

-### Temp
+#### Temp

-The TEMP workspace resides on SCRATCH file system. The TEMP workspace accesspoint is /scratch/temp. Users may freely create subdirectories and files on the workspace. Accessible capacity is 1.6 PB, shared among all users on TEMP and WORK. Individual users are restricted by file system usage quotas, set to 100 TB per user. The purpose of this quota is to prevent runaway programs from filling the entire file system and deny service to other users. If 100 TB should prove as insufficient for particular user, contact [support][d], the quota may be lifted upon request.
+The TEMP workspace resides on the SCRATCH file system. The TEMP workspace accesspoint is /scratch/temp. Users may freely create subdirectories and files on the workspace. Accessible capacity is 1.6 PB, shared among all users on TEMP and WORK.

 !!! note
     The TEMP workspace is intended for temporary scratch data generated during the calculation as well as for high performance access to input and output files. All I/O intensive jobs must use the TEMP workspace as their working directory.

@@ -280,21 +276,50 @@ The TEMP workspace resides on SCRATCH file system. The TEMP workspace accesspoin
 !!! warning
     Files on the TEMP file system that are **not accessed for more than 90 days** will be automatically **deleted**.

-The TEMP workspace is hosted on SCRATCH file system. The SCRATCH is realized as Lustre parallel file system and is available from all login and computational nodes. Default stripe size is 1 MB, stripe count is 1. There are 54 OSTs dedicated for the SCRATCH file system.
-
-!!! note
-    Setting stripe size and stripe count correctly for your needs may significantly impact the I/O performance you experience.
-
-| TEMP workspace | |
-| -------------------- | ------------- |
-| Accesspoint | /scratch/temp |
-| Capacity | 1.6 PB |
-| Throughput | 30 GB/s |
-| User quota | 100 TB |
-| Default stripe size | 1 MB |
-| Default stripe count | 1 |
-| Number of OSTs | 54 |
-| Protocol | Lustre |
+<table>
+  <tr>
+    <td style="background-color: rgba(0, 0, 0, 0.54); color: white;"></td>
+    <td style="background-color: rgba(0, 0, 0, 0.54); color: white;">WORK workspace</td>
+    <td style="background-color: rgba(0, 0, 0, 0.54); color: white;">TEMP workspace</td>
+  </tr>
+  <tr>
+    <td style="vertical-align : middle">Accesspoints</td>
+    <td>/scratch/work/user/username,<br />/scratch/work/project/projectid</td>
+    <td>/scratch/temp</td>
+  </tr>
+  <tr>
+    <td>Capacity</td>
+    <td colspan="2" style="vertical-align : middle;text-align:center;">1.6 PB</td>
+  </tr>
+  <tr>
+    <td>Throughput</td>
+    <td colspan="2" style="vertical-align : middle;text-align:center;">30 GB/s</td>
+  </tr>
+  <tr>
+    <td>User space quota</td>
+    <td colspan="2" style="vertical-align : middle;text-align:center;">100 TB</td>
+  </tr>
+  <tr>
+    <td>User inodes quota</td>
+    <td colspan="2" style="vertical-align : middle;text-align:center;">10 M</td>
+  </tr>
+  <tr>
+    <td>Default stripe size</td>
+    <td colspan="2" style="vertical-align : middle;text-align:center;">1 MB</td>
+  </tr>
+  <tr>
+    <td>Default stripe count</td>
+    <td colspan="2" style="vertical-align : middle;text-align:center;">1</td>
+  </tr>
+  <tr>
+    <td>Number of OSTs</td>
+    <td colspan="2" style="vertical-align : middle;text-align:center;">54</td>
+  </tr>
+  <tr>
+    <td>Protocol</td>
+    <td colspan="2" style="vertical-align : middle;text-align:center;">Lustre</td>
+  </tr>
+</table>

## RAM Disk

@@ -368,21 +393,72 @@ within a job.
| ------------------ | --------------------------------------------------------------------------|
| Mountpoint | /mnt/global_ramdisk |
| Accesspoint | /mnt/global_ramdisk |
-| Capacity | N*110 GB |
-| Throughput | 3*(N+1) GB/s, 2GB/s single POSIX thread |
+| Capacity | (N*110) GB |
+| Throughput | 3*(N+1) GB/s, 2 GB/s single POSIX thread |
| User quota | none |

N = number of compute nodes in the job.
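+
+For illustration, the global RAM disk is requested at job submission time. The following is a minimal sketch: the `global_ramdisk=true` PBS resource, the queue name, and the jobscript name are assumptions to verify against the job submission documentation:
+
+```console
+$ qsub -q qprod -A PROJECT-ID -l select=4,global_ramdisk=true ./jobscript
+```
+
+Such a job would see a single /mnt/global_ramdisk shared by all 4 allocated nodes, with roughly (4*110) GB of capacity.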
## Summary

-| Mountpoint | Usage | Protocol | Net Capacity| Throughput | Limitations | Access | Service |
-| ------------------- | ------------------------------ | ----------- | ------------| -------------- | ------------ | --------------------------- | --------------------------- |
-| /home | home directory | NFS, 2-Tier | 0.5 PB | 6 GB/s | Quota 250GB | Compute and login nodes | backed up |
-| /scratch/work | large project files | Lustre | 1.69 PB | 30 GB/s | Quota | Compute and login nodes | none |
-| /scratch/temp | job temporary data | Lustre | 1.69 PB | 30 GB/s | Quota 100 TB | Compute and login nodes | files older 90 days removed |
-| /ramdisk | job temporary data, node local | tmpfs | 110GB | 90 GB/s | none | Compute nodes, node local | purged after job ends |
-| /mnt/global_ramdisk | job temporary data | BeeGFS | N*110GB | 3*(N+1) GB/s | none | Compute nodes, job shared | purged after job ends |
+<table>
+  <tr>
+    <td style="background-color: rgba(0, 0, 0, 0.54); color: white;">Mountpoint</td>
+    <td style="background-color: rgba(0, 0, 0, 0.54); color: white;">Usage</td>
+    <td style="background-color: rgba(0, 0, 0, 0.54); color: white;">Protocol</td>
+    <td style="background-color: rgba(0, 0, 0, 0.54); color: white;">Net Capacity</td>
+    <td style="background-color: rgba(0, 0, 0, 0.54); color: white;">Throughput</td>
+    <td style="background-color: rgba(0, 0, 0, 0.54); color: white;">Space/Inodes quota</td>
+    <td style="background-color: rgba(0, 0, 0, 0.54); color: white;">Access</td>
+    <td style="background-color: rgba(0, 0, 0, 0.54); color: white;">Service</td>
+  </tr>
+  <tr>
+    <td>/home</td>
+    <td>home directory</td>
+    <td>NFS, 2-Tier</td>
+    <td>0.5 PB</td>
+    <td>6 GB/s</td>
+    <td>250 GB / 500 k</td>
+    <td>Compute and login nodes</td>
+    <td>backed up</td>
+  </tr>
+  <tr>
+    <td style="background-color: #D3D3D3;">/scratch/work</td>
+    <td style="background-color: #D3D3D3;">large project files</td>
+    <td rowspan="2" style="background-color: #D3D3D3; vertical-align : middle;text-align:center;">Lustre</td>
+    <td rowspan="2" style="background-color: #D3D3D3; vertical-align : middle;text-align:center;">1.69 PB</td>
+    <td rowspan="2" style="background-color: #D3D3D3; vertical-align : middle;text-align:center;">30 GB/s</td>
+    <td rowspan="2" style="background-color: #D3D3D3; vertical-align : middle;text-align:center;">100 TB / 10 M</td>
+    <td style="background-color: #D3D3D3;">Compute and login nodes</td>
+    <td style="background-color: #D3D3D3;">none</td>
+  </tr>
+  <tr>
+    <td style="background-color: #D3D3D3;">/scratch/temp</td>
+    <td style="background-color: #D3D3D3;">job temporary data</td>
+    <td style="background-color: #D3D3D3;">Compute and login nodes</td>
+    <td style="background-color: #D3D3D3;">files older than 90 days removed</td>
+  </tr>
+  <tr>
+    <td>/ramdisk</td>
+    <td>job temporary data, node local</td>
+    <td>tmpfs</td>
+    <td>110 GB</td>
+    <td>90 GB/s</td>
+    <td>none / none</td>
+    <td>Compute nodes, node local</td>
+    <td>purged after job ends</td>
+  </tr>
+  <tr>
+    <td style="background-color: #D3D3D3;">/mnt/global_ramdisk</td>
+    <td style="background-color: #D3D3D3;">job temporary data</td>
+    <td style="background-color: #D3D3D3;">BeeGFS</td>
+    <td style="background-color: #D3D3D3;">(N*110) GB</td>
+    <td style="background-color: #D3D3D3;">3*(N+1) GB/s</td>
+    <td style="background-color: #D3D3D3;">none / none</td>
+    <td style="background-color: #D3D3D3;">Compute nodes, job shared</td>
+    <td style="background-color: #D3D3D3;">purged after job ends</td>
+  </tr>
+</table>

N = number of compute nodes in the job.
diff --git a/docs.it4i/software/chemistry/molpro.md b/docs.it4i/software/chemistry/molpro.md
index 73675a5cd322d67a87e08365f1aaa5fc47e83887..d5876eb083044d6ed3cd39b965f16ff64b33ed3d 100644
--- a/docs.it4i/software/chemistry/molpro.md
+++ b/docs.it4i/software/chemistry/molpro.md
@@ -61,6 +61,7 @@ molpro -d /scratch/$USER/$PBS_JOBID caffeine_opt_diis.com
# delete scratch directory
rm -rf /scratch/$USER/$PBS_JOBID
```
+
[1]: ../../salomon/storage.md

[a]: http://www.molpro.net/
diff --git a/docs.it4i/software/chemistry/phono3py.md b/docs.it4i/software/chemistry/phono3py.md
index 1444ae916ee595fe67b7963bc3f83f5a5e3cea1e..9277cbea77b5c2c7d9630ea0eeb28dcf859630d8 100644
--- a/docs.it4i/software/chemistry/phono3py.md
+++ b/docs.it4i/software/chemistry/phono3py.md
@@ -166,6 +166,7 @@ Finally the thermal conductivity result is produced by grouping single conductiv
```console
$ phono3py --fc3 --fc2 --dim="2 2 2" --mesh="9 9 9" --br --read_gamma
```
+
[1]: ./POSCAR.txt
[2]: ./KPOINTS.txt
[3]: ./POTCAR.txt
diff --git a/docs.it4i/software/isv_licenses.md b/docs.it4i/software/isv_licenses.md
index b97859add79c61ab065077169435ed2cf0c0b17b..7a231eed56dbfe801cc0de1889b310001761cf55 100644
--- a/docs.it4i/software/isv_licenses.md
+++ b/docs.it4i/software/isv_licenses.md
@@ -18,6 +18,7 @@ If an ISV application was purchased for educational (research) purposes and also
For each license there is a [table][a], which provides the information about the name, number of available (purchased/licensed), number of used and number of free license features.

### Text Interface
+
(Anselm only, obsolete)

For each license there is a unique text file, which provides the information about the name, number of available (purchased/licensed), number of used and number of free license features. The text files are accessible from the Anselm command prompt.
@@ -54,6 +55,7 @@ $ cat /apps/user/licenses/matlab_features_state.txt
```

## License Aware Job Scheduling
+
Anselm cluster and Salomon cluster provide license aware job scheduling. Selected licenses are accounted and checked by the PBS Pro scheduler. If you ask for certain licenses, the scheduler won't start the job until the requested licenses are free (available). This prevents batch jobs from crashing due to unavailability of the needed licenses.
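+
+For illustration, licenses are requested as PBS resources at submission time. The following is a minimal sketch: the `feature__matlab__MATLAB=1` resource name is an assumption based on the `feature__APP__FEATURE` naming pattern; verify the exact names in the [table][a] before use:
+
+```console
+$ qsub -A PROJECT-ID -q qprod -l select=2 -l feature__matlab__MATLAB=1 ./jobscript
+```
+
+With such a request, the scheduler holds the job until one MATLAB license feature becomes free, rather than letting the job start and crash.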
diff --git a/docs.it4i/software/lang/julialang.md b/docs.it4i/software/lang/julialang.md index 9f7b87731b9facdcd07c14ab6be113dbf19aa9c7..53dc141464f927ca477ef08a4315f6725588afd2 100644 --- a/docs.it4i/software/lang/julialang.md +++ b/docs.it4i/software/lang/julialang.md @@ -46,7 +46,7 @@ end vol = sphere_vol(3) # @printf allows number formatting but does not automatically append the \n to statements, see below -@printf "volume = %0.3f\n" vol +@printf "volume = %0.3f\n" vol #> volume = 113.097 quad1, quad2 = quadratic2(2.0, -2.0, -12.0) @@ -156,7 +156,7 @@ println("e_str1 == e_str2: $(e_str1 == e_str2)") #> e_str1 == e_str2: true # available number format characters are [f, e, g, c, s, p, d](https://github.com/JuliaLang/julia/blob/master/base/printf.jl#L15): -# (pi is a predefined constant; however, since its type is +# (pi is a predefined constant; however, since its type is # "MathConst" it has to be converted to a float to be formatted) @printf "fix trailing precision: %0.3f\n" float(pi) #> fix trailing precision: 3.142 @@ -226,9 +226,9 @@ println(r) r = eachmatch(r"[\w]{4,}", s1) for i in r print("\"$(i.match)\" ") end println() -#> "quick" "brown" "jumps" "over" "lazy" +#> "quick" "brown" "jumps" "over" "lazy" -# a string can be repeated using the [repeat](http://julia.readthedocs.org/en/latest/manual/strings/#common-operations) function, +# a string can be repeated using the [repeat](http://julia.readthedocs.org/en/latest/manual/strings/#common-operations) function, # or more succinctly with the [^ syntax](http://julia.readthedocs.org/en/latest/stdlib/base/#Base.^): r = "hello "^3 show(r); println() #> "hello hello hello " @@ -487,7 +487,7 @@ println(1 in keys(a1)) #> true # where [keys](http://docs.julialang.org/en/latest/stdlib/base/#Base.keys) creates an iterator over the keys of the dictionary # similar to keys, [values](http://docs.julialang.org/en/latest/stdlib/base/#Base.values) get iterators over the dict's values: -printsum(values(a1)) +printsum(values(a1)) #> Base.ValueIterator for a Dict{Int64,String} with 3 entries: String["two","three","one"] # use [collect](http://docs.julialang.org/en/latest/stdlib/base/#Base.collect) to get an array: diff --git a/docs.it4i/software/mpi/ompi-examples.md b/docs.it4i/software/mpi/ompi-examples.md index 1a2be74bc4fc2cac87f38d20f24b831e64b82a03..7b7e8ef7d4dfc5f046a76f8f2b0650d32de6f4e4 100644 --- a/docs.it4i/software/mpi/ompi-examples.md +++ b/docs.it4i/software/mpi/ompi-examples.md @@ -8,7 +8,7 @@ There are two MPI examples, each using one of six different MPI interfaces: ### Hello World -``` c tab="C" +```c tab="C" /* * Copyright (c) 2004-2006 The Trustees of Indiana University and Indiana * University Research and Technology @@ -18,6 +18,7 @@ There are two MPI examples, each using one of six different MPI interfaces: * Sample MPI "hello world" application in C */ +<!-- markdownlint-disable MD018 MD025 --> #include <stdio.h> #include "mpi.h" @@ -38,7 +39,7 @@ int main(int argc, char* argv[]) } ``` -``` c++ tab="C++" +```c++ tab="C++" // // Copyright (c) 2004-2006 The Trustees of Indiana University and Indiana // University Research and Technology @@ -55,7 +56,9 @@ int main(int argc, char* argv[]) // bindings. 
//
+<!-- markdownlint-disable MD018 MD025 -->
#include "mpi.h"
+<!-- markdownlint-disable MD022 -->
#include <iostream>

int main(int argc, char **argv)
@@ -75,7 +78,7 @@ int main(int argc, char **argv)
}
```

-``` fortran tab="F mpi.h"
+```fortran tab="F mpi.h"
C
C Copyright (c) 2004-2006 The Trustees of Indiana University and Indiana
C University Research and Technology
@@ -105,7 +108,7 @@ C
end
```

-``` fortran tab="F use mpi"
+```fortran tab="F use mpi"
!
! Copyright (c) 2004-2006 The Trustees of Indiana University and Indiana
! University Research and Technology
@@ -136,7 +139,7 @@ program main
end
```

-``` java tab="Java"
+```java tab="Java"
/*
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
@@ -166,7 +169,6 @@ import mpi.*;
class Hello {
static public void main(String[] args) throws MPIException {
-
MPI.Init(args);
int myrank = MPI.COMM_WORLD.getRank();
@@ -187,6 +189,7 @@ class Hello {
* C shmem.h: [hello_oshmem_c.c](../../src/ompi/hello_oshmem_c.c)
* Fortran shmem.fh: [hello_oshmemfh.f90](../../src/ompi/hello_oshmemfh.f90)

+<!-- markdownlint-disable MD001 -->
### Send a Trivial Message Around in a Ring

* C: [ring_c.c](../../src/ompi/ring_c.c)
@@ -200,6 +203,7 @@ class Hello {

Additionally, there's one further example application, but this one only uses the MPI C bindings:

+<!-- markdownlint-disable MD001 -->
### Test the Connectivity Between All Processes

* C: [connectivity_c.c](../../src/ompi/connectivity_c.c)
diff --git a/docs.it4i/software/numerical-languages/octave.md b/docs.it4i/software/numerical-languages/octave.md
index 6049537bce2a3672fc1f2d02330f088700dbc712..6788bff11884fe6a88c5d50bd97fea27ffac5c35 100644
--- a/docs.it4i/software/numerical-languages/octave.md
+++ b/docs.it4i/software/numerical-languages/octave.md
@@ -106,6 +106,7 @@ $ ssh mic0 # login to the MIC card
$ source /apps/tools/octave/3.8.2-mic/bin/octave-env.sh # set up environment variables
$ octave -q /apps/tools/octave/3.8.2-mic/example/test0.m # run an example
```
+
[1]: ../../salomon/job-submission-and-execution.md
[2]: ../intel/intel-xeon-phi-salomon.md
[3]: ../../salomon/compute-nodes.md
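+
+As a usage note, the same native run can also be launched from the host in a single step. A minimal sketch reusing the paths from the example above:
+
+```console
+$ ssh mic0 '. /apps/tools/octave/3.8.2-mic/bin/octave-env.sh && octave -q /apps/tools/octave/3.8.2-mic/example/test0.m'
+```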