diff --git a/docs.it4i/anselm/introduction.md b/docs.it4i/anselm/introduction.md
index b0fcc5dee2262a48b31d6a9cd8cfbed9622a3f84..c2eb4c71a54cca5e3398cfd669bae9f60d64630b 100644
--- a/docs.it4i/anselm/introduction.md
+++ b/docs.it4i/anselm/introduction.md
@@ -2,7 +2,7 @@
Welcome to Anselm supercomputer cluster. The Anselm cluster consists of 209 compute nodes, totaling 3344 compute cores with 15 TB RAM and giving over 94 TFLOP/s theoretical peak performance. Each node is a powerful x86-64 computer, equipped with 16 cores, at least 64 GB RAM, and 500 GB hard disk drive. Nodes are interconnected by fully non-blocking fat-tree InfiniBand network and equipped with Intel Sandy Bridge processors. A few nodes are also equipped with NVIDIA Kepler GPU or Intel Xeon Phi MIC accelerators. Read more in [Hardware Overview](hardware-overview/).

-The cluster runs [operating system](software/operating-system/), which is compatible with the RedHat [Linux family.](http://upload.wikimedia.org/wikipedia/commons/1/1b/Linux_Distribution_Timeline.svg) We have installed a wide range of software packages targeted at different scientific domains. These packages are accessible via the [modules environment](environment-and-modules/).
+The cluster runs an operating system which is compatible with the RedHat [Linux family](http://upload.wikimedia.org/wikipedia/commons/1/1b/Linux_Distribution_Timeline.svg). We have installed a wide range of software packages targeted at different scientific domains. These packages are accessible via the [modules environment](../environment-and-modules/).

User data shared file-system (HOME, 320 TB) and job data shared file-system (SCRATCH, 146 TB) are available to users.
diff --git a/docs.it4i/anselm/job-submission-and-execution.md b/docs.it4i/anselm/job-submission-and-execution.md
index bb9500785456cede34396dd250b3255c5f601f5b..09e09620d11fbe8a02199c05811bb27cc071f301 100644
--- a/docs.it4i/anselm/job-submission-and-execution.md
+++ b/docs.it4i/anselm/job-submission-and-execution.md
@@ -373,9 +373,6 @@ exit
In this example, input and executable files are assumed preloaded manually in /scratch/$USER/myjob directory. Note the **mpiprocs** and **ompthreads** qsub options, controlling behavior of the MPI execution. The mympiprog.x is executed as one process per node, on all 100 allocated nodes. If mympiprog.x implements OpenMP threads, it will run 16 threads per node.

-More information is found in the [Running OpenMPI](software/mpi/Running_OpenMPI/) and [Running MPICH2](software/mpi/running-mpich2/)
-sections.
-
### Example Jobscript for Single Node Calculation

!!! note
diff --git a/docs.it4i/general/obtaining-login-credentials/certificates-faq.md b/docs.it4i/general/obtaining-login-credentials/certificates-faq.md
index eb1c58907a95484170b49d0c3c743fbe0d22acfc..4d2315dd2e8306e7d8cac5dafa5523c5b2dcbdb1 100644
--- a/docs.it4i/general/obtaining-login-credentials/certificates-faq.md
+++ b/docs.it4i/general/obtaining-login-credentials/certificates-faq.md
@@ -134,7 +134,7 @@ Most grid services require the use of your certificate; however, the format of y
If employing the PRACE version of GSISSH-term (also a Java Web Start Application), you may use either the PEM or p12 formats. Note that this service automatically installs up-to-date PRACE CA certificates.

-If the grid service is UNICORE, then you bind your certificate, in either the p12 format or JKS, to UNICORE during the installation of the client on your local machine.
For more information, please visit [UNICORE6 in PRACE](www.prace-ri.eu/UNICORE6-in-PRACE)
+If the grid service is UNICORE, then you bind your certificate, in either the p12 format or JKS, to UNICORE during the installation of the client on your local machine. For more information, please visit [UNICORE6 in PRACE](http://www.prace-ri.eu/UNICORE6-in-PRACE).

If the grid service is part of Globus, such as GSI-SSH, GriFTP or GRAM5, then the certificates can be in either p12 or PEM format and must reside in the "$HOME/.globus" directory for Linux and Mac users or %HOMEPATH%.globus for Windows users. (Windows users will have to use the DOS command ’cmd’ to create a directory which starts with a ’.’). Further, user certificates should be named either "usercred.p12" or "usercert.pem" and "userkey.pem", and the CA certificates must be kept in a pre-specified directory as follows. For Linux and Mac users, this directory is either $HOME/.globus/certificates or /etc/grid-security/certificates. For Windows users, this directory is %HOMEPATH%.globuscertificates. (If you are using GSISSH-Term from prace-ri.eu then you do not have to create the .globus directory nor install CA certificates to use this tool alone).
diff --git a/docs.it4i/job-features.md b/docs.it4i/job-features.md
index 679294ff729bd0554bf5fa55bdb9eff8692f9693..fc7362fd80df3d888916cca844273ca9521a6d14 100644
--- a/docs.it4i/job-features.md
+++ b/docs.it4i/job-features.md
@@ -44,7 +44,7 @@ Configure network for virtualization, create interconnect for fast communication
$ qsub ... -l virt_network=true
```

-[See Tap Interconnect](/anselm/software/virtualization/#tap-interconnect)
+[See Tap Interconnect](software/tools/virtualization/#tap-interconnect)

## x86 Adapt Support

@@ -98,4 +98,4 @@ To enable Intel Hyper Threading on allocated nodes CPUs
```console
$ qsub ... -l cpu_hyper_threading=true
-```
\ No newline at end of file
+```
diff --git a/docs.it4i/software/debuggers/Introduction.md b/docs.it4i/software/debuggers/Introduction.md
index d5541e2f81ce812a8278cc54e35a3880be9e2cb1..87f642fdaaacb3ddada10be3d296198deee4d010 100644
--- a/docs.it4i/software/debuggers/Introduction.md
+++ b/docs.it4i/software/debuggers/Introduction.md
@@ -15,7 +15,7 @@
$ ml intel
$ idb
```

-Read more at the [Intel Debugger](../intel-suite/intel-debugger/) page.
+Read more at the [Intel Debugger](../intel/intel-suite/intel-debugger/) page.

## Allinea Forge (DDT/MAP)
diff --git a/docs.it4i/software/debuggers/allinea-performance-reports.md b/docs.it4i/software/debuggers/allinea-performance-reports.md
index 530c00c5b028786e8833c72a24ae8ceae1db1c77..cea9649a2be0125611719d0339a09c019992491a 100644
--- a/docs.it4i/software/debuggers/allinea-performance-reports.md
+++ b/docs.it4i/software/debuggers/allinea-performance-reports.md
@@ -28,7 +28,7 @@ Instead of [running your MPI program the usual way](../mpi/mpi/), use the the pe
$ perf-report mpirun ./mympiprog.x
```

-The MPI program will run as usual. The perf-report creates two additional files, in \*.txt and \*.html format, containing the performance report. Note that demanding MPI codes should be run within [the queue system](../../job-submission-and-execution/).
+The MPI program will run as usual. The perf-report creates two additional files, in \*.txt and \*.html format, containing the performance report. Note that demanding MPI codes should be run within [the queue system](../../salomon/job-submission-and-execution/).
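+
+As a minimal sketch of such a queued run (the module version below is an assumption, not taken from this page; check `ml av` for the exact name), the wrapper may be used inside a jobscript:
+
+```bash
+#!/bin/bash
+# assumed module name/version; verify with `ml av` on the cluster
+ml PerformanceReports/6.0
+# wrap the usual mpirun invocation to produce the .txt and .html reports
+perf-report mpirun ./mympiprog.x
+```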
## Example

diff --git a/docs.it4i/software/debuggers/papi.md b/docs.it4i/software/debuggers/papi.md
index a879f016497719ceb01c8b8decf4b2117add8a4b..c8c6619479d1c5ae9b3d37ee2e78b1e1b987bebe 100644
--- a/docs.it4i/software/debuggers/papi.md
+++ b/docs.it4i/software/debuggers/papi.md
@@ -193,7 +193,7 @@ $ ./matrix
!!! note
    PAPI currently supports only a subset of counters on the Intel Xeon Phi processor compared to Intel Xeon, for example the floating point operations counter is missing.

-To use PAPI in [Intel Xeon Phi](../intel-xeon-phi/) native applications, you need to load module with " -mic" suffix, for example " papi/5.3.2-mic" :
+To use PAPI in [Intel Xeon Phi](../intel/intel-xeon-phi-salomon/) native applications, you need to load a module with the "-mic" suffix, for example "papi/5.3.2-mic":

```console
$ ml papi/5.3.2-mic
diff --git a/docs.it4i/software/intel/intel-suite/intel-debugger.md b/docs.it4i/software/intel/intel-suite/intel-debugger.md
index 9f1b21d6ab4c82dd3a5009f1a2de36646a1fc0dd..3317f814dc51cea8aff2e33fe9d72ae92ccc9c9a 100644
--- a/docs.it4i/software/intel/intel-suite/intel-debugger.md
+++ b/docs.it4i/software/intel/intel-suite/intel-debugger.md
@@ -4,7 +4,7 @@
IDB is no longer available since Intel Parallel Studio 2015

## Debugging Serial Applications

-The intel debugger version is available, via module intel/13.5.192. The debugger works for applications compiled with C and C++ compiler and the ifort fortran 77/90/95 compiler. The debugger provides java GUI environment. Use [X display](../../general/accessing-the-clusters/graphical-user-interface/x-window-system/) for running the GUI.
+The Intel debugger is available via the intel/13.5.192 module. The debugger works for applications compiled with the C and C++ compilers and the ifort Fortran 77/90/95 compiler. The debugger provides a Java GUI environment. Use [X display](../../../general/accessing-the-clusters/graphical-user-interface/x-window-system/) for running the GUI.

```console
$ ml intel/13.5.192
@@ -18,7 +18,7 @@ The debugger may run in text mode. To debug in text mode, use
$ idbc
```

-To debug on the compute nodes, module intel must be loaded. The GUI on compute nodes may be accessed using the same way as in [the GUI section](../../general/accessing-the-clusters/graphical-user-interface/x-window-system/)
+To debug on the compute nodes, module intel must be loaded. The GUI on compute nodes may be accessed in the same way as described in [the GUI section](../../../general/accessing-the-clusters/graphical-user-interface/x-window-system/).

Example:

@@ -40,7 +40,7 @@ In this example, we allocate 1 full compute node, compile program myprog.c with
### Small Number of MPI Ranks

-For debugging small number of MPI ranks, you may execute and debug each rank in separate xterm terminal (do not forget the [X display](../../general/accessing-the-clusters/graphical-user-interface/x-window-system/)). Using Intel MPI, this may be done in following way:
+For debugging a small number of MPI ranks, you may execute and debug each rank in a separate xterm terminal (do not forget the [X display](../../../general/accessing-the-clusters/graphical-user-interface/x-window-system/)). Using Intel MPI, this may be done in the following way:

```console
$ qsub -q qexp -l select=2:ncpus=24 -X -I
@@ -70,4 +70,4 @@ Run the idb debugger in GUI mode. The menu Parallel contains number of tools for

## Further Information

-Exhaustive manual on IDB features and usage is published at Intel website, <https://software.intel.com/sites/products/documentation/doclib/iss/2013/compiler/cpp-lin/>
+An exhaustive manual on IDB features and usage is published at the [Intel website](https://software.intel.com/sites/products/documentation/doclib/iss/2013/compiler/cpp-lin/).
diff --git a/docs.it4i/software/intel/intel-suite/intel-mkl.md b/docs.it4i/software/intel/intel-suite/intel-mkl.md
index 2053e958b2673acb4fc79e4e552bea5cf016d85e..d520f68cddda0add14682b0df088e26a41b6ac8e 100644
--- a/docs.it4i/software/intel/intel-suite/intel-mkl.md
+++ b/docs.it4i/software/intel/intel-suite/intel-mkl.md
@@ -109,7 +109,7 @@ In this example, we compile, link and run the cblas_dgemm example, using LP64 in
## MKL and MIC Accelerators

-The Intel MKL is capable to automatically offload the computations o the MIC accelerator. See section [Intel Xeon Phi](../intel-xeon-phi/) for details.
+The Intel MKL can automatically offload computations to the MIC accelerator. See the [Intel Xeon Phi](../intel-xeon-phi-salomon/) section for details.

## LAPACKE C Interface
diff --git a/docs.it4i/software/intel/intel-suite/intel-tbb.md b/docs.it4i/software/intel/intel-suite/intel-tbb.md
index 59976aa7ef31d2e97e9799ced80578be11a2d8ab..e0de0d980b8ace53c07866b3766581fd3b4618f8 100644
--- a/docs.it4i/software/intel/intel-suite/intel-tbb.md
+++ b/docs.it4i/software/intel/intel-suite/intel-tbb.md
@@ -2,7 +2,7 @@
## Intel Threading Building Blocks

-Intel Threading Building Blocks (Intel TBB) is a library that supports scalable parallel programming using standard ISO C++ code. It does not require special languages or compilers. To use the library, you specify tasks, not threads, and let the library map tasks onto threads in an efficient manner. The tasks are executed by a runtime scheduler and may be offloaded to [MIC accelerator](../intel-xeon-phi/).
+Intel Threading Building Blocks (Intel TBB) is a library that supports scalable parallel programming using standard ISO C++ code. It does not require special languages or compilers. To use the library, you specify tasks, not threads, and let the library map tasks onto threads in an efficient manner. The tasks are executed by a runtime scheduler and may be offloaded to the [MIC accelerator](../intel-xeon-phi-salomon/).

Intel is available on the cluster.
diff --git a/docs.it4i/software/intel/intel-suite/intel-trace-analyzer-and-collector.md b/docs.it4i/software/intel/intel-suite/intel-trace-analyzer-and-collector.md
index b735d9e58b65cfb179a65c1119746f8a2d2cde44..57e1518990a3be6f4ad7b5e86a91ba4c821eb6dc 100644
--- a/docs.it4i/software/intel/intel-suite/intel-trace-analyzer-and-collector.md
+++ b/docs.it4i/software/intel/intel-suite/intel-trace-analyzer-and-collector.md
@@ -21,7 +21,7 @@ The trace will be saved in file myapp.stf in the current directory.
## Viewing Traces

-To view and analyze the trace, open ITAC GUI in a [graphical environment](../../general/accessing-the-clusters/graphical-user-interface/x-window-system/):
+To view and analyze the trace, open the ITAC GUI in a [graphical environment](../../../general/accessing-the-clusters/graphical-user-interface/x-window-system/):

```console
$ ml itac/9.1.2.024
diff --git a/docs.it4i/software/machine-learning/introduction.md b/docs.it4i/software/machine-learning/introduction.md
index 7d9e790f00afe03ac2ad3568e871e497bd8e9656..e329ef64ca4d4258a38bed228479127fa367024d 100644
--- a/docs.it4i/software/machine-learning/introduction.md
+++ b/docs.it4i/software/machine-learning/introduction.md
@@ -20,8 +20,8 @@ Read more about available versions at the [TensorFlow page](tensorflow/).
## Theano

-Read more about available versions at the [Theano page](theano/).
+Read more about [available versions](../../modules-matrix/).

## Keras

-Read more about available versions at the [Keras page](keras/).
+Read more about [available versions](../../modules-matrix/).
diff --git a/docs.it4i/software/mpi/mpi.md b/docs.it4i/software/mpi/mpi.md
index 271280bed3bf2ed6414a2e9fbc57c7d2d05de87c..ab774158938c0be8ba197f80fdd53c76ebf0e04b 100644
--- a/docs.it4i/software/mpi/mpi.md
+++ b/docs.it4i/software/mpi/mpi.md
@@ -138,4 +138,4 @@ In the previous two cases with one or two MPI processes per node, the operating
The [**OpenMPI 1.8.6**](http://www.open-mpi.org/) is based on OpenMPI. Read more on [how to run OpenMPI](Running_OpenMPI/) based MPI.

-The Intel MPI may run on the[Intel Xeon Ph](../intel-xeon-phi/)i accelerators as well. Read more on [how to run Intel MPI on accelerators](../intel-xeon-phi/).
+The Intel MPI may run on the [Intel Xeon Phi](../intel/intel-xeon-phi-salomon/) accelerators as well. Read more on [how to run Intel MPI on accelerators](../intel/intel-xeon-phi-salomon/).
diff --git a/docs.it4i/software/mpi/running-mpich2.md b/docs.it4i/software/mpi/running-mpich2.md
index 7b37a811802ffe6aa142cad5773cfc20e842b6fd..30679021d7b1b4105ecbd2d5b7e8786d5888d179 100644
--- a/docs.it4i/software/mpi/running-mpich2.md
+++ b/docs.it4i/software/mpi/running-mpich2.md
@@ -152,4 +152,4 @@ $ mpirun -bindto numa echo $OMP_NUM_THREADS
## Intel MPI on Xeon Phi

-The[MPI section of Intel Xeon Phi chapter](../intel-xeon-phi/) provides details on how to run Intel MPI code on Xeon Phi architecture.
+The [MPI section of the Intel Xeon Phi chapter](../intel/intel-xeon-phi-salomon/) provides details on how to run Intel MPI code on the Xeon Phi architecture.
diff --git a/docs.it4i/software/numerical-languages/octave.md b/docs.it4i/software/numerical-languages/octave.md
index ca785e75dca4e83cccbdf25b68800363f33a841b..8a3eb55ce0b653414fe09cba1bf6f8b07c00cf42 100644
--- a/docs.it4i/software/numerical-languages/octave.md
+++ b/docs.it4i/software/numerical-languages/octave.md
@@ -60,11 +60,11 @@ Octave may use MPI for interprocess communication This functionality is currentl
## Xeon Phi Support

-Octave may take advantage of the Xeon Phi accelerators. This will only work on the [Intel Xeon Phi](../intel-xeon-phi/) [accelerated nodes](../../salomon/compute-nodes/).
+Octave may take advantage of the Xeon Phi accelerators. This will only work on the [Intel Xeon Phi](../intel/intel-xeon-phi-salomon/) [accelerated nodes](../../salomon/compute-nodes/).
### Automatic Offload Support

-Octave can accelerate BLAS type operations (in particular the Matrix Matrix multiplications] on the Xeon Phi accelerator, via [Automatic Offload using the MKL library](../intel-xeon-phi/#section-3)
+Octave can accelerate BLAS type operations (in particular matrix-matrix multiplications) on the Xeon Phi accelerator, via [Automatic Offload using the MKL library](../intel/intel-xeon-phi-salomon/).

Example

@@ -88,7 +88,7 @@ In this example, the calculation was automatically divided among the CPU cores a
### Native Support

-A version of [native](../intel-xeon-phi/#section-4) Octave is compiled for Xeon Phi accelerators. Some limitations apply for this version:
+A version of [native](../intel/intel-xeon-phi-salomon/) Octave is compiled for Xeon Phi accelerators. Some limitations apply to this version:

* Only command line support. GUI, graph plotting etc. is not supported.
* Command history in interactive mode is not supported.
diff --git a/docs.it4i/software/numerical-languages/r.md b/docs.it4i/software/numerical-languages/r.md
index 3322a89acbf62cde753cfc57adf36a001d986148..c7f112c638786169a8a25a7150ed1b85abf3c893 100644
--- a/docs.it4i/software/numerical-languages/r.md
+++ b/docs.it4i/software/numerical-languages/r.md
@@ -91,8 +91,6 @@ More information and examples may be obtained directly by reading the documentat
> vignette("parallel")
```

-Download the package [parallell](package-parallel-vignette.pdf) vignette.
-
The forking is the most simple to use. Forking family of functions provide parallelized, drop in replacement for the serial apply() family of functions.

!!! warning
@@ -402,4 +400,4 @@ By leveraging MKL, R can accelerate certain computations, most notably linear al
$ export MKL_MIC_ENABLE=1
```

-[Read more about automatic offload](../intel-xeon-phi/)
+[Read more about automatic offload](../intel/intel-xeon-phi-salomon/)
diff --git a/docs.it4i/software/numerical-libraries/intel-numerical-libraries.md b/docs.it4i/software/numerical-libraries/intel-numerical-libraries.md
index 471f766f4d58c3e88d91d552f6040d36634c73c5..5fbe5086f1949240a20d05cab6a2fa00b57056d7 100644
--- a/docs.it4i/software/numerical-libraries/intel-numerical-libraries.md
+++ b/docs.it4i/software/numerical-libraries/intel-numerical-libraries.md
@@ -10,7 +10,7 @@ Intel Math Kernel Library (Intel MKL) is a library of math kernel subroutines, e
$ ml mkl **or** ml imkl
```

-Read more at the [Intel MKL](../intel-suite/intel-mkl/) page.
+Read more at the [Intel MKL](../intel/intel-suite/intel-mkl/) page.

## Intel Integrated Performance Primitives

@@ -20,7 +20,7 @@ Intel Integrated Performance Primitives, version 7.1.1, compiled for AVX is avai
$ ml ipp
```

-Read more at the [Intel IPP](../intel-suite/intel-integrated-performance-primitives/) page.
+Read more at the [Intel IPP](../intel/intel-suite/intel-integrated-performance-primitives/) page.

## Intel Threading Building Blocks

@@ -30,4 +30,4 @@ Intel Threading Building Blocks (Intel TBB) is a library that supports scalable
$ ml tbb
```

-Read more at the [Intel TBB](../intel-suite/intel-tbb/) page.
+Read more at the [Intel TBB](../intel/intel-suite/intel-tbb/) page.
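+
+As a minimal sketch (the source file name and compiler flags are illustrative assumptions, not taken from this page), a TBB program may be compiled with the Intel compiler and linked against the library after loading the modules:
+
+```console
+$ ml intel tbb
+$ icc -O2 -o primes.x primes.cpp -ltbb
+```
+
+The `-ltbb` flag links the program against the TBB shared library.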
diff --git a/docs.it4i/software/numerical-libraries/petsc.md b/docs.it4i/software/numerical-libraries/petsc.md
index 10e202e9309a215f86692fef3e4d7a0c2f291400..71070af4fd27b4b9ddbc787b5c12184d22b4cc6b 100644
--- a/docs.it4i/software/numerical-libraries/petsc.md
+++ b/docs.it4i/software/numerical-libraries/petsc.md
@@ -54,5 +54,4 @@ All these libraries can be used also alone, without PETSc. Their static or share
* [PT-Scotch](http://www.labri.fr/perso/pelegrin/scotch/)
* preconditioners & multigrid
* [Hypre](http://www.nersc.gov/users/software/programming-libraries/math-libraries/petsc/)
- * [Trilinos ML](http://trilinos.sandia.gov/packages/ml/)
* [SPAI - Sparse Approximate Inverse](https://bitbucket.org/petsc/pkg-spai)
diff --git a/docs.it4i/software/numerical-libraries/trilinos.md b/docs.it4i/software/numerical-libraries/trilinos.md
index c6c3fba1a8d1e4c765e7026ef6f77c4120e2da07..77aa53914807c9e0d04fb7bb4c520c16e1b8bb8b 100644
--- a/docs.it4i/software/numerical-libraries/trilinos.md
+++ b/docs.it4i/software/numerical-libraries/trilinos.md
@@ -32,7 +32,7 @@ First, load the appropriate module:
$ ml trilinos
```

-For the compilation of CMake-aware project, Trilinos provides the FIND_PACKAGE( Trilinos ) capability, which makes it easy to build against Trilinos, including linking against the correct list of libraries. For details, see <http://trilinos.sandia.gov/Finding_Trilinos.txt>
+For the compilation of a CMake-aware project, Trilinos provides the FIND_PACKAGE( Trilinos ) capability, which makes it easy to build against Trilinos, including linking against the correct list of libraries.

For compiling using simple Makefiles, Trilinos provides Makefile.export system, which allows users to include important Trilinos variables directly into their Makefiles. This can be done simply by inserting the following line into the Makefile:

@@ -46,4 +46,4 @@ or
include Makefile.export.<package>
```

-if you are interested only in a specific Trilinos package. This will give you access to the variables such as Trilinos_CXX_COMPILER, Trilinos_INCLUDE_DIRS, Trilinos_LIBRARY_DIRS etc. For the detailed description and example Makefile see <http://trilinos.sandia.gov/Export_Makefile.txt>.
+if you are interested only in a specific Trilinos package. This will give you access to variables such as Trilinos_CXX_COMPILER, Trilinos_INCLUDE_DIRS, Trilinos_LIBRARY_DIRS, etc.
diff --git a/docs.it4i/software/tools/ansys/ansys-cfx.md b/docs.it4i/software/tools/ansys/ansys-cfx.md
index 5650e4cca323b5313f993629fe343e596e6663ae..cb77fdf45713578ca348ba4acc0847e98fed51c2 100644
--- a/docs.it4i/software/tools/ansys/ansys-cfx.md
+++ b/docs.it4i/software/tools/ansys/ansys-cfx.md
@@ -47,7 +47,7 @@ echo Machines: $hl
/ansys_inc/v145/CFX/bin/cfx5solve -def input.def -size 4 -size-ni 4x -part-large -start-method "Platform MPI Distributed Parallel" -par-dist $hl -P aa_r
```

-Header of the PBS file (above) is common and description can be find on [this site](../../anselm/job-submission-and-execution/). SVS FEM recommends to utilize sources by keywords: nodes, ppn. These keywords allows to address directly the number of nodes (computers) and cores (ppn) which will be utilized in the job. Also the rest of code assumes such structure of allocated resources.
+The header of the PBS file (above) is common, and its description can be found on [this site](../../../anselm/job-submission-and-execution/). SVS FEM recommends utilizing resources via the keywords nodes and ppn. These keywords allow you to directly specify the number of nodes (computers) and cores (ppn) to be utilized in the job. The rest of the code also assumes this structure of allocated resources.

Working directory has to be created before sending PBS job into the queue. Input file should be in working directory or full path to input file has to be specified.
>Input file has to be defined by common CFX def file which is attached to the CFX solver via parameter -def
diff --git a/docs.it4i/software/tools/ansys/ansys-fluent.md b/docs.it4i/software/tools/ansys/ansys-fluent.md
index 74326f978d9a088b0ff523fda89d6d502b363dd0..ee8ac3a5e2a07cae0b7846b2f3fda08e53cebfd9 100644
--- a/docs.it4i/software/tools/ansys/ansys-fluent.md
+++ b/docs.it4i/software/tools/ansys/ansys-fluent.md
@@ -38,7 +38,7 @@ NCORES=`wc -l $PBS_NODEFILE |awk '{print $1}'`
/ansys_inc/v145/fluent/bin/fluent 3d -t$NCORES -cnf=$PBS_NODEFILE -g -i fluent.jou
```

-Header of the pbs file (above) is common and description can be find on [this site](../../anselm/resources-allocation-policy/). [SVS FEM](http://www.svsfem.cz) recommends to utilize sources by keywords: nodes, ppn. These keywords allows to address directly the number of nodes (computers) and cores (ppn) which will be utilized in the job. Also the rest of code assumes such structure of allocated resources.
+The header of the pbs file (above) is common, and its description can be found on [this site](../../../salomon/resources-allocation-policy/). [SVS FEM](http://www.svsfem.cz) recommends utilizing resources via the keywords nodes and ppn. These keywords allow you to directly specify the number of nodes (computers) and cores (ppn) to be utilized in the job. The rest of the code also assumes this structure of allocated resources.

Working directory has to be created before sending pbs job into the queue. Input file should be in working directory or full path to input file has to be specified. Input file has to be defined by common Fluent journal file which is attached to the Fluent solver via parameter -i fluent.jou
diff --git a/docs.it4i/software/tools/ansys/ansys-ls-dyna.md b/docs.it4i/software/tools/ansys/ansys-ls-dyna.md
index 46a8ed726fb4da82bb743a71a98aa5e4b9f88132..7bf643a9c40448012724d80ce02f5b7731d0c23a 100644
--- a/docs.it4i/software/tools/ansys/ansys-ls-dyna.md
+++ b/docs.it4i/software/tools/ansys/ansys-ls-dyna.md
@@ -50,6 +50,6 @@ echo Machines: $hl
/ansys_inc/v145/ansys/bin/ansys145 -dis -lsdynampp i=input.k -machines $hl
```

-Header of the PBS file (above) is common and description can be find on [this site](../../anselm/job-submission-and-execution/). [SVS FEM](http://www.svsfem.cz) recommends to utilize sources by keywords: nodes, ppn. These keywords allows to address directly the number of nodes (computers) and cores (ppn) which will be utilized in the job. Also the rest of code assumes such structure of allocated resources.
+The header of the PBS file (above) is common, and its description can be found on [this site](../../../anselm/job-submission-and-execution/). [SVS FEM](http://www.svsfem.cz) recommends utilizing resources via the keywords nodes and ppn. These keywords allow you to directly specify the number of nodes (computers) and cores (ppn) to be utilized in the job. The rest of the code also assumes this structure of allocated resources.

Working directory has to be created before sending PBS job into the queue. Input file should be in working directory or full path to input file has to be specified.
Input file has to be defined by common LS-DYNA .**k** file which is attached to the ANSYS solver via parameter i=
diff --git a/docs.it4i/software/tools/ansys/ansys-mechanical-apdl.md b/docs.it4i/software/tools/ansys/ansys-mechanical-apdl.md
index b33f77104100f5504e297484a586cb9a0a7e0201..116252df5aaa64a4e9c43f9443443db050791f4f 100644
--- a/docs.it4i/software/tools/ansys/ansys-mechanical-apdl.md
+++ b/docs.it4i/software/tools/ansys/ansys-mechanical-apdl.md
@@ -49,7 +49,7 @@ echo Machines: $hl
/ansys_inc/v145/ansys/bin/ansys145 -b -dis -p aa_r -i input.dat -o file.out -machines $hl -dir $WORK_DIR
```

-Header of the PBS file (above) is common and description can be found on [this site](../../anselm/resources-allocation-policy/). [SVS FEM](http://www.svsfem.cz) recommends to utilize sources by keywords: nodes, ppn. These keywords allow to address directly the number of nodes (computers) and cores (ppn) which will be utilized in the job. Also the rest of code assumes such structure of allocated resources.
+The header of the PBS file (above) is common, and its description can be found on [this site](../../../anselm/resources-allocation-policy/). [SVS FEM](http://www.svsfem.cz) recommends utilizing resources via the keywords nodes and ppn. These keywords allow you to directly specify the number of nodes (computers) and cores (ppn) to be utilized in the job. The rest of the code also assumes this structure of allocated resources.

Working directory has to be created before sending PBS job into the queue. Input file should be in working directory or full path to input file has to be specified. Input file has to be defined by common APDL file which is attached to the ANSYS solver via parameter -i
diff --git a/docs.it4i/software/tools/ansys/ls-dyna.md b/docs.it4i/software/tools/ansys/ls-dyna.md
index 3bd9deef62ba5ac1456c992f3a7ed74ddc034eff..dac7dbe9f066445cb21964aaeb031941cbd77346 100644
--- a/docs.it4i/software/tools/ansys/ls-dyna.md
+++ b/docs.it4i/software/tools/ansys/ls-dyna.md
@@ -30,6 +30,6 @@ module load lsdyna
/apps/engineering/lsdyna/lsdyna700s i=input.k
```

-Header of the PBS file (above) is common and description can be find on [this site](../../anselm/job-submission-and-execution/). [SVS FEM](http://www.svsfem.cz) recommends to utilize sources by keywords: nodes, ppn. These keywords allows to address directly the number of nodes (computers) and cores (ppn) which will be utilized in the job. Also the rest of code assumes such structure of allocated resources.
+The header of the PBS file (above) is common, and its description can be found on [this site](../../../anselm/job-submission-and-execution/). [SVS FEM](http://www.svsfem.cz) recommends utilizing resources via the keywords nodes and ppn. These keywords allow you to directly specify the number of nodes (computers) and cores (ppn) to be utilized in the job. The rest of the code also assumes this structure of allocated resources.

Working directory has to be created before sending PBS job into the queue. Input file should be in working directory or full path to input file has to be specified.
Input file has to be defined by common LS-DYNA **.k** file which is attached to the LS-DYNA solver via parameter i=
diff --git a/docs.it4i/software/viz/gpi2.md b/docs.it4i/software/viz/gpi2.md
index e3a158bae823a54359592ad16fc420eec2b49bfe..f1df4a9334888414bae5cb02fa3b0904ff620534 100644
--- a/docs.it4i/software/viz/gpi2.md
+++ b/docs.it4i/software/viz/gpi2.md
@@ -4,7 +4,7 @@
Programming Next Generation Supercomputers: GPI-2 is an API library for asynchronous interprocess, cross-node communication. It provides a flexible, scalable and fault tolerant interface for parallel applications.

-The GPI-2 library ([www.gpi-site.com/gpi2/](http://www.gpi-site.com/gpi2/)) implements the GASPI specification (Global Address Space Programming Interface, [www.gaspi.de](http://www.gaspi.de/en/project.html)). GASPI is a Partitioned Global Address Space (PGAS) API. It aims at scalable, flexible and failure tolerant computing in massively parallel environments.
+The GPI-2 library implements the GASPI specification (Global Address Space Programming Interface, [www.gaspi.de](http://www.gaspi.de/en/project.html)). GASPI is a Partitioned Global Address Space (PGAS) API. It aims at scalable, flexible and failure-tolerant computing in massively parallel environments.

## Modules
diff --git a/docs.it4i/software/viz/openfoam.md b/docs.it4i/software/viz/openfoam.md
index 27aefea264ca2414f8abde9cb734896ac1255faa..df6585429f6dc9d4fae6a52d6dcf713b534ab314 100644
--- a/docs.it4i/software/viz/openfoam.md
+++ b/docs.it4i/software/viz/openfoam.md
@@ -45,7 +45,7 @@ In /opt/modules/modulefiles/engineering you can see installed engineering softwa
lsdyna/7.x.x openfoam/2.2.1-gcc481-openmpi1.6.5-SP
```

-For information how to use modules please [look here](../anselm/environment-and-modules/).
+For information on how to use modules, please [look here](../../environment-and-modules/).

## Getting Started

@@ -111,7 +111,7 @@ Job submission (example for Anselm):
$ qsub -A OPEN-0-0 -q qprod -l select=1:ncpus=16,walltime=03:00:00 test.sh
```

-For information about job submission please [look here](../anselm/job-submission-and-execution/).
+For information about job submission, please [look here](../../anselm/job-submission-and-execution/).

## Running Applications in Parallel
diff --git a/docs.it4i/software/viz/paraview.md b/docs.it4i/software/viz/paraview.md
index 7e2bae9a95bc33c6f83756188a5c1c54e4037892..3ef96099c5d39b5463d4f950fa8e72f807d4a096 100644
--- a/docs.it4i/software/viz/paraview.md
+++ b/docs.it4i/software/viz/paraview.md
@@ -29,7 +29,7 @@ To launch the server, you must first allocate compute nodes, for example
$ qsub -I -q qprod -A OPEN-0-0 -l select=2
```

-to launch an interactive session on 2 nodes. Refer to [Resource Allocation and Job Execution](../salomon/job-submission-and-execution/) for details.
+to launch an interactive session on 2 nodes. Refer to [Resource Allocation and Job Execution](../../salomon/job-submission-and-execution/) for details.
After the interactive session is opened, load the ParaView module (following examples for Salomon, Anselm instructions in comments): diff --git a/mkdocs.yml b/mkdocs.yml index 18c7eec90cef0fd50bceeba3d4740e1d90f07dab..efbd57ff1e42340484c09663e01ff9fcfe5625e1 100644 --- a/mkdocs.yml +++ b/mkdocs.yml @@ -15,17 +15,17 @@ copyright: Copyright (c) 2013-2018 IT4Innovations__VERSION__ pages: - Home: index.md - - General: - - Applying for Resources: general/applying-for-resources.md + - Accessing the Clusters: - Obtaining Login Credentials: general/obtaining-login-credentials/obtaining-login-credentials.md + - Applying for Resources: general/applying-for-resources.md - Certificates FAQ: general/obtaining-login-credentials/certificates-faq.md - - Accessing the Clusters: + - Resource Allocation and Job Execution: general/resource_allocation_and_job_execution.md + - Connect to the Clusters: - OpenSSH Keys (UNIX): general/accessing-the-clusters/shell-access-and-data-transfer/ssh-keys.md - PuTTY (Windows): general/accessing-the-clusters/shell-access-and-data-transfer/putty.md - X Window System: general/accessing-the-clusters/graphical-user-interface/x-window-system.md - VNC: general/accessing-the-clusters/graphical-user-interface/vnc.md - VPN Access: general/accessing-the-clusters/vpn-access.md - - Resource Allocation and Job Execution: general/resource_allocation_and_job_execution.md - PRACE User Support: prace.md - Salomon Cluster: - Introduction: salomon/introduction.md