MPI
===
Setting up MPI Environment
--------------------------
The Salomon cluster provides several implementations of the MPI library:
|MPI Library|Thread support|
|---|---|
|**Intel MPI 4.1**|Full thread support up to MPI_THREAD_MULTIPLE|
|**Intel MPI 5.0**|Full thread support up to MPI_THREAD_MULTIPLE|
|OpenMPI 1.8.6|Full thread support up to MPI_THREAD_MULTIPLE, MPI-3.0 support|
|SGI MPT 2.12||
MPI libraries are activated via the environment modules.
Look up the modulefiles/mpi section in the module avail output:
```bash
$ module avail
------------------------------ /apps/modules/mpi -------------------------------
impi/4.1.1.036-iccifort-2013.5.192
impi/4.1.1.036-iccifort-2013.5.192-GCC-4.8.3
impi/5.0.3.048-iccifort-2015.3.187
impi/5.0.3.048-iccifort-2015.3.187-GNU-5.1.0-2.25
MPT/2.12
OpenMPI/1.8.6-GNU-5.1.0-2.25
```
There are default compilers associated with each MPI implementation. The defaults may be changed; the MPI libraries may be used in conjunction with any compiler. The defaults are selected via the modules in the following way:

|Module|MPI|Compiler suite|
|---|---|---|
|impi-5.0.3.048-iccifort-2015.3.187|Intel MPI 5.0.3||
|OpenMPI-1.8.6-GNU-5.1.0-2.25|OpenMPI 1.8.6||
Examples:
```bash
$ module load gompi/2015b
```
In this example, we activate the latest OpenMPI with the latest GNU compilers (OpenMPI 1.8.6 and GCC 5.1). Please see more information about toolchains in the [Environment and Modules](../../environment-and-modules/) section.
To use OpenMPI with the Intel compiler suite, use
```bash
$ module load iompi/2015.03
```
In this example, OpenMPI 1.8.6 compiled with the Intel compilers is activated, using the "iompi" toolchain.
Compiling MPI Programs
----------------------
After setting up your MPI environment, compile your program using one of the MPI wrappers:
```bash
$ mpicc -v
$ mpif77 -v
$ mpif90 -v
```
When using Intel MPI, use the following MPI wrappers:
```bash
$ mpicc
$ mpiifort
```
The wrappers mpif90 and mpif77 provided by Intel MPI are designed for gcc and gfortran. You might be able to compile MPI code with them even with Intel compilers, but you might run into problems (for example, native MIC compilation with -mmic does not work with mpif90).
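For instance, a Fortran MPI program may be compiled with the Intel compilers via the mpiifort wrapper (a sketch; helloworld_mpi.f90 is a hypothetical source file):

```bash
$ mpiifort helloworld_mpi.f90 -o helloworld_mpi.x
```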
Example program:
```cpp
// helloworld_mpi.c
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv) {
  int len;
  int rank, size;
  char node[MPI_MAX_PROCESSOR_NAME];

  // Initialize MPI
  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);

  // Get hostname and print
  MPI_Get_processor_name(node, &len);
  printf("Hello world! from rank %d of %d on host %s\n", rank, size, node);

  // Finalize and exit
  MPI_Finalize();

  return 0;
}
```
Compile the above example with
```bash
$ mpicc helloworld_mpi.c -o helloworld_mpi.x
```
Running MPI Programs
--------------------
The MPI program executable must be compatible with the loaded MPI module.
Always compile and execute using the very same MPI module.
It is strongly discouraged to mix MPI implementations. Linking an application with one MPI implementation and running mpirun/mpiexec from another implementation may result in unexpected errors.
The MPI program executable must be available within the same path on all nodes. This is automatically fulfilled on the /home and /scratch filesystems. You need to preload the executable if running on the local scratch /lscratch filesystem.
### Ways to run MPI programs
The optimal way to run an MPI program depends on its memory requirements, memory access pattern and communication pattern.
Consider these ways to run an MPI program:
1. One MPI process per node, 24 threads per process
2. Two MPI processes per node, 12 threads per process
3. 24 MPI processes per node, 1 thread per process.
**One MPI** process per node, using 24 threads, is most useful for memory demanding applications that make good use of processor cache memory and are not memory bound. This is also a preferred way for communication-intensive applications, as one process per node enjoys full bandwidth access to the network interface.
**Two MPI** processes per node, using 12 threads each, bound to a processor socket, is most useful for memory bandwidth bound applications such as BLAS1 or FFT, with scalable memory demand. However, note that the two processes will share access to the network interface. The 12 threads and socket binding should ensure maximum memory access bandwidth and minimize communication, migration and NUMA effect overheads.
!!! Note "Note"
    Important! Bind every OpenMP thread to a core!

In the previous two cases with one or two MPI processes per node, the operating system might still migrate OpenMP threads between cores. You want to avoid this by setting the KMP_AFFINITY or GOMP_CPU_AFFINITY environment variables.
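For illustration, one possible way to pin the threads before launching a hybrid MPI/OpenMP program (the values are illustrative only; consult your compiler's documentation for the right setting):

```bash
$ export KMP_AFFINITY=granularity=fine,compact   # Intel OpenMP runtime
$ export GOMP_CPU_AFFINITY="0-23"                # GNU OpenMP runtime
```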
**24 MPI** processes per node, using 1 thread each, bound to a processor core, is most suitable for highly scalable applications with low communication demand.
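For illustration, the three mappings above translate to PBS resource requests along these lines (a sketch, assuming 4 nodes and the select syntax used elsewhere in this documentation; ./jobscript is a hypothetical placeholder for your own jobscript):

```bash
$ qsub -q qprod -l select=4:ncpus=24:mpiprocs=1:ompthreads=24 ./jobscript   # 1 MPI process per node
$ qsub -q qprod -l select=4:ncpus=24:mpiprocs=2:ompthreads=12 ./jobscript   # 2 MPI processes per node
$ qsub -q qprod -l select=4:ncpus=24:mpiprocs=24:ompthreads=1 ./jobscript   # 24 MPI processes per node
```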
### Running OpenMPI
The [**OpenMPI 1.8.6**](http://www.open-mpi.org/) is the open-source MPI implementation available on the cluster. Read more on [how to run OpenMPI](Running_OpenMPI/)-based MPI programs.
The Intel MPI may run on the [Intel Xeon Phi](../intel-xeon-phi/) accelerators as well. Read more on [how to run Intel MPI on accelerators](../intel-xeon-phi/).
MPI4Py (MPI for Python)
=======================
OpenMPI interface to Python
Introduction
------------
MPI for Python provides bindings of the Message Passing Interface (MPI) standard for the Python programming language, allowing any Python program to exploit multiple processors.
This package is constructed on top of the MPI-1/2 specifications and provides an object oriented interface which closely follows the MPI-2 C++ bindings. It supports point-to-point (sends, receives) and collective (broadcasts, scatters, gathers) communications of any picklable Python object, as well as optimized communications of Python objects exposing the single-segment buffer interface (NumPy arrays, builtin bytes/string/array objects).
On Anselm MPI4Py is available in standard Python modules.
Modules
-------
MPI4Py is built for OpenMPI. Before you start with MPI4Py, you need to load the Python and OpenMPI modules. You can use a toolchain that loads Python and OpenMPI at once.
```bash
$ module load Python/2.7.9-foss-2015g
```
Execution
---------
You need to import MPI to your python program. Include the following line in the python script:
```python
from mpi4py import MPI
```
The MPI4Py enabled python programs [execute as any other OpenMPI](Running_OpenMPI/) code. The simplest way is to run
```bash
$ mpiexec python <script>.py
```
For example
```bash
$ mpiexec python hello_world.py
```
Examples
--------
### Hello world!
```python
from mpi4py import MPI
comm = MPI.COMM_WORLD
print "Hello! I'm rank %d from %d running in total..." % (comm.rank, comm.size)
comm.Barrier() # wait for everybody to synchronize
```
### Collective Communication with NumPy arrays
```python
from __future__ import division
from mpi4py import MPI
import numpy as np
comm = MPI.COMM_WORLD
print("-"*78)
print(" Running on %d cores" % comm.size)
print("-"*78)
comm.Barrier()
# Prepare a vector of N=5 elements to be broadcasted...
N = 5
if comm.rank == 0:
    A = np.arange(N, dtype=np.float64)    # rank 0 has proper data
else:
    A = np.empty(N, dtype=np.float64)     # all other just an empty array
# Broadcast A from rank 0 to everybody
comm.Bcast( [A, MPI.DOUBLE] )
# Everybody should now have the same...
print "[%02d] %s" % (comm.rank, A)
```
Execute the above code as:
```bash
$ qsub -q qexp -l select=4:ncpus=24:mpiprocs=24:ompthreads=1 -I
$ module load Python/2.7.9-foss-2015g
$ mpiexec --map-by core --bind-to core python hello_world.py
```
In this example, we run MPI4Py enabled code on 4 nodes, 24 cores per node (total of 96 processes), with each python process bound to a different core. More examples and documentation can be found on the [MPI for Python webpage](https://pythonhosted.org/mpi4py/usrman/index.md).
Numerical languages
===================
Interpreted languages for numerical computations and analysis
Introduction
------------
This section contains a collection of high-level interpreted languages, primarily intended for numerical computations.
Matlab
------
MATLAB® is a high-level language and interactive environment for numerical computation, visualization, and programming.
```bash
$ module load MATLAB
$ matlab
```
Read more at the [Matlab page](matlab/).
Octave
------
GNU Octave is a high-level interpreted language, primarily intended for numerical computations. The Octave language is quite similar to Matlab so that most programs are easily portable.
```bash
$ module load Octave
$ octave
```
Read more at the [Octave page](octave/).
R
---
R is an interpreted language and environment for statistical computing and graphics.
```bash
$ module load R
$ R
```
Read more at the [R page](r/).
Matlab
======
Introduction
------------
Matlab is available in versions R2015a and R2015b. There are always two variants of each release:
- Non-commercial or so-called EDU variant, which can be used for common research and educational purposes.
- Commercial or so-called COM variant, which can also be used for commercial activities. The licenses for the commercial variant are much more expensive, so the commercial variant usually offers only a subset of the features available in EDU.
To load the latest version of Matlab load the module
```bash
$ module load MATLAB
```
The EDU variant is loaded by default. If you need another version or variant, load the particular version. To obtain the list of available versions, use
```bash
$ module avail MATLAB
```
If you need to use the Matlab GUI to prepare your Matlab programs, you can use Matlab directly on the login nodes. But for all computations use Matlab on the compute nodes via PBS Pro scheduler.
If you require the Matlab GUI, please follow the general information about [running graphical applications](../../../get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system/).
Matlab GUI is quite slow using the X forwarding built in the PBS (qsub -X), so using X11 display redirection either via SSH or directly by xauth (please see the "GUI Applications on Compute Nodes over VNC" part [here](../../../get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system/)) is recommended.
To run Matlab with GUI, use
```bash
$ matlab
```
To run Matlab in text mode, without the Matlab Desktop GUI environment, use
```bash
$ matlab -nodesktop -nosplash
```
Plots, images, etc. will still be available.
Running parallel Matlab using Distributed Computing Toolbox / Engine
------------------------------------------------------------------------
The Distributed Computing Toolbox is available only for the EDU variant.
The MPIEXEC mode available in previous versions is no longer available in MATLAB 2015. Also, the programming interface has changed. Refer to [Release Notes](http://www.mathworks.com/help/distcomp/release-notes.html#buanp9e-1).
Delete the previously used mpiLibConf.m file; we have observed crashes when using Intel MPI.
To use Distributed Computing, you first need to set up a parallel profile. We have provided the profile for you; you can either import it on the MATLAB command line:
```bash
> parallel.importProfile('/apps/all/MATLAB/2015b-EDU/SalomonPBSPro.settings')
ans =
SalomonPBSPro
```
Or, in the GUI, go to tab HOME -> Parallel -> Manage Cluster Profiles..., click Import, and navigate to:
/apps/all/MATLAB/2015b-EDU/SalomonPBSPro.settings
With the new mode, MATLAB itself launches the workers via PBS, so you can either use interactive mode or a batch mode on one node, but the actual parallel processing will be done in a separate job started by MATLAB itself. Alternatively, you can use "local" mode to run parallel code on just a single node.
### Parallel Matlab interactive session
The following example shows how to start an interactive session with support for the Matlab GUI. For more information about GUI based applications on Anselm see [this page](../../../get-started-with-it4innovations/accessing-the-clusters/graphical-user-interface/x-window-system/).
```bash
$ xhost +
$ qsub -I -v DISPLAY=$(uname -n):$(echo $DISPLAY | cut -d ':' -f 2) -A NONE-0-0 -q qexp -l select=1 -l walltime=00:30:00 -l feature__matlab__MATLAB=1
```
This qsub command example shows how to run Matlab on a single node.
The second part of the command shows how to request all necessary licenses. In this case, 1 Matlab-EDU license and 48 Distributed Computing Engine licenses.
Once the access to compute nodes is granted by PBS, the user can load the following modules and start Matlab:
```bash
r1i0n17$ module load MATLAB/2015a-EDU
r1i0n17$ matlab &
```
### Parallel Matlab batch job in Local mode
To run Matlab in batch mode, write a Matlab script, then write a bash jobscript and execute it via the qsub command. By default, Matlab will execute one Matlab worker instance per allocated core.
```bash
#!/bin/bash
#PBS -A PROJECT ID
#PBS -q qprod
#PBS -l select=1:ncpus=24:mpiprocs=24:ompthreads=1
# change to shared scratch directory
SCR=/scratch/work/user/$USER/$PBS_JOBID
mkdir -p $SCR ; cd $SCR || exit
# copy input file to scratch
cp $PBS_O_WORKDIR/matlabcode.m .
# load modules
module load MATLAB/2015a-EDU
# execute the calculation
matlab -nodisplay -r matlabcode > output.out
# copy output file to home
cp output.out $PBS_O_WORKDIR/.
```
This script may be submitted directly to the PBS workload manager via the qsub command. The inputs and the Matlab script are in the matlabcode.m file, outputs in the output.out file. Note the missing .m extension in the matlab -r matlabcode call; **the .m must not be included**. Note that the **shared /scratch must be used**. Further, it is **important to include the quit** statement at the end of the matlabcode.m script.
Submit the jobscript using qsub
```bash
$ qsub ./jobscript
```
### Parallel Matlab Local mode program example
The last part of the configuration is done directly in the user Matlab script before Distributed Computing Toolbox is started.
```bash
cluster = parcluster('local')
```
This script creates scheduler object "cluster" of type "local" that starts workers locally.
Please note: every Matlab script that needs to initialize/use the parallel pool has to contain this line prior to calling the parpool(cluster, ...) function.
The last step is to start the parallel pool with the "cluster" object and the correct number of workers. We have 24 cores per node, so we start 24 workers.
```bash
parpool(cluster,24);
... parallel code ...
delete(gcp) % shut down the parallel pool
```
The complete example showing how to use Distributed Computing Toolbox in local mode is shown here.
```bash
cluster = parcluster('local');
cluster

parpool(cluster,24);

n=2000;
W = rand(n,n);
W = distributed(W);
x = (1:n)';
x = distributed(x);
spmd
    [~, name] = system('hostname')
    T = W*x; % Calculation performed on labs, in parallel.
    % T and W are both codistributed arrays here.
end
T;
whos        % T and W are both distributed arrays here.

delete(gcp) % shut down the parallel pool
quit
```
You can copy and paste the example in a .m file and execute. Note that the parpool size should correspond to **total number of cores** available on allocated nodes.
### Parallel Matlab Batch job using PBS mode (workers spawned in a separate job)
This mode uses the PBS scheduler to launch the parallel pool. It uses the SalomonPBSPro profile that needs to be imported to Cluster Manager, as mentioned before. This method uses MATLAB's PBS Scheduler interface - it spawns the workers in a separate job submitted by MATLAB using qsub.
This is an example of m-script using PBS mode:
```bash
cluster = parcluster('SalomonPBSPro');
set(cluster, 'SubmitArguments', '-A OPEN-0-0');
set(cluster, 'ResourceTemplate', '-q qprod -l select=10:ncpus=24');
set(cluster, 'NumWorkers', 240);
pool = parpool(cluster,240);
n=2000;
W = rand(n,n);
W = distributed(W);
x = (1:n)';
x = distributed(x);
spmd
    [~, name] = system('hostname')
    T = W*x; % Calculation performed on labs, in parallel.
    % T and W are both codistributed arrays here.
end
whos % T and W are both distributed arrays here.
% shut down parallel pool
delete(pool)
```
Note that we first construct a cluster object using the imported profile and then set some important options, namely: SubmitArguments, where you need to specify your accounting ID, and ResourceTemplate, where you need to specify the number of nodes for the job.
You can start this script in batch mode the same way as in the Local mode example.
### Parallel Matlab Batch with direct launch (workers spawned within the existing job)
This method is a "hack" invented by us to emulate the mpiexec functionality found in previous MATLAB versions. We leverage the MATLAB Generic Scheduler interface, but instead of submitting the workers to PBS, we launch the workers directly within the running job; thus we avoid the issues with the master script and workers running in separate jobs (license not available, waiting for the worker job to spawn, etc.).
Please note that this method is experimental.
For this method, you need to use the SalomonDirect profile; import it [the same way as SalomonPBSPro](matlab.md#running-parallel-matlab-using-distributed-computing-toolbox---engine).
This is an example of m-script using direct mode:
```bash
parallel.importProfile('/apps/all/MATLAB/2015b-EDU/SalomonDirect.settings')
cluster = parcluster('SalomonDirect');
set(cluster, 'NumWorkers', 48);
pool = parpool(cluster, 48);
n=2000;
W = rand(n,n);
W = distributed(W);
x = (1:n)';
x = distributed(x);
spmd
    [~, name] = system('hostname')
    T = W*x; % Calculation performed on labs, in parallel.
    % T and W are both codistributed arrays here.
end
whos % T and W are both distributed arrays here.
% shut down parallel pool
delete(pool)
```
### Non-interactive Session and Licenses
If you want to run batch jobs with Matlab, be sure to request the appropriate license features with the PBS Pro scheduler, at least "-l feature__matlab__MATLAB=1" for the EDU variant of Matlab. For more information about how to check the license feature states and how to request them with PBS Pro, please [look here](../../../anselm-cluster-documentation/software/isv_licenses/).
The licensing feature of PBS is currently disabled.
In case of a non-interactive session, please read the [following information](../../../anselm-cluster-documentation/software/isv_licenses/) on how to modify the qsub command to test for available licenses prior to getting the resource allocation.
### Matlab Distributed Computing Engines start up time
Starting Matlab workers is an expensive process that requires a certain amount of time. For reference, see the following table:
|compute nodes|number of workers|start-up time [s]|
|---|---|---|
|16|384|831|
|8|192|807|
|4|96|483|
|2|48|16|
MATLAB on UV2000
-----------------
The UV2000 machine, available in the "qfat" queue, can be used for MATLAB computations. This is an SMP NUMA machine with a large amount of RAM, which can be beneficial for certain types of MATLAB jobs. CPU cores are allocated in chunks of 8 on this machine.
You can use MATLAB on UV2000 in two parallel modes:
### Threaded mode
Since this is an SMP machine, you can completely avoid using Parallel Toolbox and use only MATLAB's threading. MATLAB will automatically detect the number of cores you have allocated and will set maxNumCompThreads accordingly, and certain operations, such as fft, eig, svd, etc., will automatically run in threads. The advantage of this mode is that you don't need to modify your existing sequential codes.
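For illustration, a possible way to run MATLAB in threaded mode on the UV2000 (a sketch; the 8-core allocation chunk and the qsub syntax are assumed from the examples elsewhere in this documentation):

```bash
$ qsub -q qfat -l select=1:ncpus=8 -I
$ module load MATLAB
$ matlab -nodesktop -nosplash
```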
### Local cluster mode
You can also use Parallel Toolbox on UV2000. Use [local cluster mode](matlab/#parallel-matlab-batch-job-in-local-mode); the "SalomonPBSPro" profile will not work.
Octave
======
GNU Octave is a high-level interpreted language, primarily intended for numerical computations. It provides capabilities for the numerical solution of linear and nonlinear problems, and for performing other numerical experiments. It also provides extensive graphics capabilities for data visualization and manipulation. Octave is normally used through its interactive command line interface, but it can also be used to write non-interactive programs. The Octave language is quite similar to Matlab so that most programs are easily portable. Read more on <http://www.gnu.org/software/octave/>
The following version of Octave is available on the cluster via module:

|Status|Version|module|
|---|---|---|
|**Stable**|Octave 3.8.2|Octave|
```bash
$ module load Octave
```
The octave on the cluster is linked to the highly optimized MKL mathematical library. This provides threaded parallelization to many octave kernels, notably the linear algebra subroutines. Octave runs these heavy calculation kernels without any penalty. By default, octave would parallelize to 24 threads. You may control the number of threads by setting the OMP_NUM_THREADS environment variable.
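For example, to limit the MKL kernels to 8 threads (an illustrative value) before starting octave:

```bash
$ export OMP_NUM_THREADS=8
$ octave
```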
To run octave interactively, log in with ssh -X parameter for X11 forwarding. Run octave:
```bash
$ octave
```
To run octave in batch mode, write an octave script, then write a bash jobscript and execute it via the qsub command. By default, octave will use 24 threads when running MKL kernels.
```bash
#!/bin/bash
# change to local scratch directory
mkdir -p /scratch/work/user/$USER/$PBS_JOBID
cd /scratch/work/user/$USER/$PBS_JOBID || exit
# copy input file to scratch
cp $PBS_O_WORKDIR/octcode.m .
# load octave module
module load Octave
# execute the calculation
octave -q --eval octcode > output.out
# copy output file to home
cp output.out $PBS_O_WORKDIR/.
#exit
exit
```
This script may be submitted directly to the PBS workload manager via the qsub command. The inputs are in octcode.m file, outputs in output.out file. See the single node jobscript example in the [Job execution section](../../resource-allocation-and-job-execution/).
The Octave compiler wrapper mkoctfile calls GNU gcc 4.8.1 for compiling native C code. This is very useful for running native C subroutines in the octave environment.
```bash
$ mkoctfile -v
```
Octave may use MPI for interprocess communication. This functionality is currently not supported on the cluster. In case you require the octave interface to MPI, please contact our [cluster support](https://support.it4i.cz/rt/).
R
===
Introduction
------------
R is a language and environment for statistical computing and graphics. R provides a wide variety of statistical (linear and nonlinear modelling, classical statistical tests, time-series analysis, classification, clustering, ...) and graphical techniques, and is highly extensible.
One of R's strengths is the ease with which well-designed publication-quality plots can be produced, including mathematical symbols and formulae where needed. Great care has been taken over the defaults for the minor design choices in graphics, but the user retains full control.
Another convenience is the ease with which the C code or third party libraries may be integrated within R.
Extensive support for parallel computing is available within R.
Read more on <http://www.r-project.org/>, <http://cran.r-project.org/doc/manuals/r-release/R-lang.html>
Modules
-------
**The R version 3.1.1 is available on the cluster, along with the GUI interface Rstudio**

|Application|Version|module|
|---|---|---|
|**R**|R 3.1.1|R/3.1.1-intel-2015b|
|**Rstudio**|Rstudio 0.97|Rstudio|
```bash
$ module load R
```
Execution
---------
The R on Anselm is linked to the highly optimized MKL mathematical library. This provides threaded parallelization to many R kernels, notably the linear algebra subroutines. R runs these heavy calculation kernels without any penalty. By default, R would parallelize to 24 threads. You may control the number of threads by setting the OMP_NUM_THREADS environment variable.
### Interactive execution
To run R interactively, using Rstudio GUI, log in with ssh -X parameter for X11 forwarding. Run rstudio:
```bash
$ module load Rstudio
$ rstudio
```
### Batch execution
To run R in batch mode, write an R script, then write a bash jobscript and execute via the qsub command. By default, R will use 24 threads when running MKL kernels.
Example jobscript:
```bash
#!/bin/bash
# change to local scratch directory
cd /lscratch/$PBS_JOBID || exit
# copy input file to scratch
cp $PBS_O_WORKDIR/rscript.R .
# load R module
module load R
# execute the calculation
R CMD BATCH rscript.R routput.out
# copy output file to home
cp routput.out $PBS_O_WORKDIR/.
#exit
exit
```
This script may be submitted directly to the PBS workload manager via the qsub command. The inputs are in rscript.R file, outputs in routput.out file. See the single node jobscript example in the [Job execution section](../../resource-allocation-and-job-execution/job-submission-and-execution/).
Parallel R
----------
Parallel execution of R may be achieved in many ways. One approach is the implied parallelization due to linked libraries or specially enabled functions, as [described above](r/#interactive-execution). In the following sections, we focus on explicit parallelization, where parallel constructs are directly stated within the R script.
Package parallel
--------------------
The package parallel provides support for parallel computation, including by forking (taken from package multicore), by sockets (taken from package snow) and random-number generation.
The package is activated this way:
```bash
$ R
> library(parallel)
```
More information and examples may be obtained directly by reading the documentation available in R
```bash
> ?parallel
> library(help = "parallel")
> vignette("parallel")
```
Download the package [parallel](package-parallel-vignette.pdf) vignette.
Forking is the simplest to use. The forking family of functions provides a parallelized, drop-in replacement for the serial apply() family of functions.
!!! Note "Note"
    Forking via package parallel provides functionality similar to the OpenMP construct omp parallel for.
    Only cores of a single node can be utilized this way!
Forking example:
```r
library(parallel)

#integrand function
f <- function(i,h) {
  x <- h*(i-0.5)
  return (4/(1 + x*x))
}

#initialize
size <- detectCores()

while (TRUE)
{
  #read number of intervals
  cat("Enter the number of intervals: (0 quits) ")
  fp<-file("stdin"); n<-scan(fp,nmax=1); close(fp)

  if(n<=0) break

  #run the calculation
  n <- max(n,size)
  h <- 1.0/n

  i <- seq(1,n);
  pi3 <- h*sum(simplify2array(mclapply(i,f,h,mc.cores=size)));

  #print results
  cat(sprintf("Value of PI %16.14f, diff= %16.14f\n",pi3,pi3-pi))
}
```
The above example is the classic parallel example for calculating the number π. Note the **detectCores()** and **mclapply()** functions. Execute the example as:
```bash
$ R --slave --no-save --no-restore -f pi3p.R
```
Every evaluation of the integrand function runs in parallel on a different process.
Package Rmpi
------------
The package Rmpi provides an interface (wrapper) to MPI APIs.
It also provides an interactive R slave environment. On the cluster, Rmpi provides an interface to [OpenMPI](../mpi/Running_OpenMPI/).
Read more on Rmpi at <http://cran.r-project.org/web/packages/Rmpi/>, reference manual is available at <http://cran.r-project.org/web/packages/Rmpi/Rmpi.pdf>
When using package Rmpi, both the OpenMPI and R modules must be loaded:
```bash
$ module load OpenMPI
$ module load R
```
Rmpi may be used in three basic ways. The static approach is identical to executing any other MPI program. In addition, there is the Rslaves dynamic MPI approach and the mpi.apply approach. In the following sections, we will use the number π integration example to illustrate all these concepts.
### static Rmpi
Static Rmpi programs are executed via mpiexec, as any other MPI programs. The number of processes is static, i.e. given at launch time.
Static Rmpi example:
```r
library(Rmpi)

#integrand function
f <- function(i,h) {
  x <- h*(i-0.5)
  return (4/(1 + x*x))
}

#initialize
invisible(mpi.comm.dup(0,1))
rank <- mpi.comm.rank()
size <- mpi.comm.size()
n<-0

while (TRUE)
{
  #read number of intervals
  if (rank==0) {
    cat("Enter the number of intervals: (0 quits) ")
    fp<-file("stdin"); n<-scan(fp,nmax=1); close(fp)
  }

  #broadcast the intervals
  n <- mpi.bcast(as.integer(n),type=1)

  if(n<=0) break

  #run the calculation
  n <- max(n,size)
  h <- 1.0/n

  i <- seq(rank+1,n,size);
  mypi <- h*sum(sapply(i,f,h));

  pi3 <- mpi.reduce(mypi)

  #print results
  if (rank==0) cat(sprintf("Value of PI %16.14f, diff= %16.14f\n",pi3,pi3-pi))
}
mpi.quit()
```
The above is the static MPI example for calculating the number π. Note the **library(Rmpi)** and **mpi.comm.dup()** function calls. Execute the example as:
```bash
$ mpirun R --slave --no-save --no-restore -f pi3.R
```
### dynamic Rmpi
Dynamic Rmpi programs are executed by calling R directly. The OpenMPI module must still be loaded. The R slave processes will be spawned by a function call within the Rmpi program.
Dynamic Rmpi example:
```r
#integrand function
f <- function(i,h) {
  x <- h*(i-0.5)
  return (4/(1 + x*x))
}

#the worker function
workerpi <- function()
{
  #initialize
  rank <- mpi.comm.rank()
  size <- mpi.comm.size()
  n<-0

  while (TRUE)
  {
    #read number of intervals
    if (rank==0) {
      cat("Enter the number of intervals: (0 quits) ")
      fp<-file("stdin"); n<-scan(fp,nmax=1); close(fp)
    }

    #broadcast the intervals
    n <- mpi.bcast(as.integer(n),type=1)

    if(n<=0) break

    #run the calculation
    n <- max(n,size)
    h <- 1.0/n

    i <- seq(rank+1,n,size);
    mypi <- h*sum(sapply(i,f,h));

    pi3 <- mpi.reduce(mypi)

    #print results
    if (rank==0) cat(sprintf("Value of PI %16.14f, diff= %16.14f\n",pi3,pi3-pi))
  }
}

#main
library(Rmpi)

cat("Enter the number of slaves: ")
fp<-file("stdin"); ns<-scan(fp,nmax=1); close(fp)

mpi.spawn.Rslaves(nslaves=ns)
mpi.bcast.Robj2slave(f)
mpi.bcast.Robj2slave(workerpi)

mpi.bcast.cmd(workerpi())
workerpi()

mpi.quit()
```
The above example is the dynamic MPI example for calculating the number π. Both master and slave processes carry out the calculation. Note the **mpi.spawn.Rslaves()**, **mpi.bcast.Robj2slave()** and **mpi.bcast.cmd()** function calls.
Execute the example as:
```bash
$ mpirun -np 1 R --slave --no-save --no-restore -f pi3Rslaves.R
```
Note that this method uses MPI_Comm_spawn (the dynamic process feature of MPI-2) to start the slave processes - the master process needs to be launched with MPI. In general, dynamic processes are not well supported among MPI implementations; some issues might arise. Also, environment variables are not propagated to spawned processes, so they will not see paths from modules.
### mpi.apply Rmpi
mpi.apply is a specific way of executing dynamic Rmpi programs.
The mpi.apply() family of functions provides an MPI-parallelized, drop-in replacement for the serial apply() family of functions.
Execution is identical to other dynamic Rmpi programs.
mpi.apply Rmpi example:
```r
#integrand function
f <- function(i,h) {
  x <- h*(i-0.5)
  return (4/(1 + x*x))
}

#the worker function
workerpi <- function(rank,size,n)
{
  #run the calculation
  n <- max(n,size)
  h <- 1.0/n

  i <- seq(rank,n,size);
  mypi <- h*sum(sapply(i,f,h));

  return(mypi)
}

#main
library(Rmpi)

cat("Enter the number of slaves: ")
fp<-file("stdin"); ns<-scan(fp,nmax=1); close(fp)

mpi.spawn.Rslaves(nslaves=ns)
mpi.bcast.Robj2slave(f)
mpi.bcast.Robj2slave(workerpi)

while (TRUE)
{
  #read number of intervals
  cat("Enter the number of intervals: (0 quits) ")
  fp<-file("stdin"); n<-scan(fp,nmax=1); close(fp)
  if(n<=0) break

  #run workerpi
  i=seq(1,2*ns)
  pi3=sum(mpi.parSapply(i,workerpi,2*ns,n))

  #print results
  cat(sprintf("Value of PI %16.14f, diff= %16.14f\n",pi3,pi3-pi))
}
mpi.quit()
```
The above is the mpi.apply MPI example for calculating the number π. Only the slave processes carry out the calculation. Note the **mpi.parSapply()** function call. The package parallel [example above](r/#package-parallel) may be trivially adapted (for much better performance) to this structure, using mclapply() in place of mpi.parSapply().
Execute the example as:
```bash
$ mpirun -np 1 R --slave --no-save --no-restore -f pi3parSapply.R
```
Combining parallel and Rmpi
---------------------------
Currently, the two packages can not be combined for hybrid calculations.
Parallel execution
------------------
The R parallel jobs are executed via the PBS queue system exactly as any other parallel jobs. The user must create an appropriate jobscript and submit it via **qsub**.
Example jobscript for [static Rmpi](r/#static-rmpi) parallel R execution, running 1 process per core:
```bash
#!/bin/bash
#PBS -q qprod
#PBS -N Rjob
#PBS -l select=100:ncpus=24:mpiprocs=24:ompthreads=1
# change to scratch directory
SCRDIR=/scratch/work/user/$USER/myjob
cd $SCRDIR || exit
# copy input file to scratch
cp $PBS_O_WORKDIR/rscript.R .
# load R and openmpi module
module load R
module load OpenMPI
# execute the calculation
mpiexec -bycore -bind-to-core R --slave --no-save --no-restore -f rscript.R
# copy output file to home
cp routput.out $PBS_O_WORKDIR/.
#exit
exit
```
For more information about jobscripts and MPI execution refer to the [Job submission](../../resource-allocation-and-job-execution/job-submission-and-execution/) and general [MPI](../mpi/mpi/) sections.
Xeon Phi Offload
----------------
By leveraging MKL, R can accelerate certain computations, most notably linear algebra operations on the Xeon Phi accelerator by using Automated Offload. To use MKL Automated Offload, you need to first set this environment variable before R execution:
```bash
$ export MKL_MIC_ENABLE=1
```
[Read more about automatic offload](../intel-xeon-phi/)
Operating System
================
The operating system on Salomon is Linux - **CentOS 6.x**.
The CentOS Linux distribution is a stable, predictable, manageable and reproducible platform derived from the sources of Red Hat Enterprise Linux (RHEL).
Storage
=======
Introduction
------------
There are two main shared file systems on the Salomon cluster, the [HOME](#home) and [SCRATCH](#shared-filesystems) filesystems.
All login and compute nodes may access the same data on shared filesystems. Compute nodes are also equipped with local (non-shared) scratch, ramdisk and tmp filesystems.
Policy (in a nutshell)
----------------------
!!! Note "Note"
    Use [HOME](#home) for your most valuable data and programs.
    Use [WORK](#work) for your large project files.
    Use [TEMP](#temp) for large scratch data.
    Do not use shared filesystems for [archiving](#archiving)!
Archiving
-------------
Please don't use shared filesystems as a backup for a large amount of data or as a long-term archiving means. The academic staff and students of research institutions in the Czech Republic can use the [CESNET storage service](#cesnet-data-storage), which is available via SSHFS.
Shared Filesystems
----------------------
The Salomon computer provides two main shared filesystems, the [HOME filesystem](#home-filesystem) and the [SCRATCH filesystem](#scratch-filesystem). The SCRATCH filesystem is partitioned into [WORK and TEMP workspaces](#shared-workspaces). The HOME filesystem is realized as a tiered NFS disk storage. The SCRATCH filesystem is realized as a parallel Lustre filesystem. Both shared file systems are accessible via the Infiniband network. Extended ACLs are provided on both HOME/SCRATCH filesystems for the purpose of sharing data with other users using fine-grained control.
### HOME filesystem
The HOME filesystem is realized as a tiered filesystem, exported via NFS. The first tier has a capacity of 100 TB, the second tier 400 TB. The filesystem is available on all login and computational nodes. The HOME filesystem hosts the [HOME workspace](#home).
### SCRATCH filesystem
The architecture of Lustre on Salomon is composed of two metadata servers (MDS) and six data/object storage servers (OSS). Accessible capacity is 1.69 PB, shared among all users. The SCRATCH filesystem hosts the [WORK and TEMP workspaces](#shared-workspaces).
Configuration of the SCRATCH Lustre storage:

- SCRATCH Lustre object storage
  - Disk array SFA12KX
  - 540 x 4 TB SAS 7.2krpm disks
  - 54 OSTs of 10 disks in RAID6 (8+2)
  - 15 hot-spare disks
  - 4 x 400 GB SSD cache
- SCRATCH Lustre metadata storage
  - Disk array EF3015
  - 12 x 600 GB SAS 15krpm disks
### Understanding the Lustre Filesystems
(source <http://www.nas.nasa.gov>)
A user file on the Lustre filesystem can be divided into multiple chunks (stripes) and stored across a subset of the object storage targets (OSTs) (disks). The stripes are distributed among the OSTs in a round-robin fashion to ensure load balancing.
When a client (a compute node from your job) needs to create or access a file, the client queries the metadata server (MDS) and the metadata target (MDT) for the layout and location of the [file's stripes](http://www.nas.nasa.gov/hecc/support/kb/Lustre_Basics_224.html#striping). Once the file is opened and the client obtains the striping information, the MDS is no longer involved in the file I/O process. The client interacts directly with the object storage servers (OSSes) and OSTs to perform I/O operations such as locking, disk allocation, storage, and retrieval.
If multiple clients try to read and write the same part of a file at the same time, the Lustre distributed lock manager enforces coherency so that all clients see consistent results.
There is a default stripe configuration for Salomon Lustre filesystems. However, users can set the following stripe parameters for their own directories or files to get optimum I/O performance:
1. stripe_size: the size of the chunk in bytes; specify with k, m, or g to use units of KB, MB, or GB, respectively; the size must be an even multiple of 65,536 bytes; default is 1 MB for all Salomon Lustre filesystems
2. stripe_count: the number of OSTs to stripe across; default is 1 for Salomon Lustre filesystems; one can specify -1 to use all OSTs in the filesystem
3. stripe_offset: the index of the OST where the first stripe is to be placed; default is -1, which results in random selection; using a non-default value is NOT recommended
!!! Note "Note"
    Setting stripe size and stripe count correctly for your needs may significantly impact the I/O performance you experience.
Use the lfs getstripe command to get the stripe parameters. Use the lfs setstripe command to set the stripe parameters and get optimal I/O performance. The correct stripe setting depends on your needs and file access patterns.
```bash
$ lfs getstripe dir|filename
$ lfs setstripe -s stripe_size -c stripe_count -o stripe_offset dir|filename
```
Example:
```bash
$ lfs getstripe /scratch/work/user/username
/scratch/work/user/username
stripe_count: 1 stripe_size: 1048576 stripe_offset: -1
$ lfs setstripe -c -1 /scratch/work/user/username/
$ lfs getstripe /scratch/work/user/username/
/scratch/work/user/username/
stripe_count: -1 stripe_size: 1048576 stripe_offset: -1
```
In this example, we view the current stripe setting of the /scratch/work/user/username directory. The stripe count is changed to all OSTs and verified. All files written to this directory will be striped over all (54) OSTs.
Use the lfs check osts command to see the number and status of active OSTs for each filesystem on Salomon. Learn more by reading the man page:
```bash
$ lfs check osts
$ man lfs
```
### Hints on Lustre Striping
!!! Note "Note"
    Increase the stripe_count for parallel I/O to the same file.
When multiple processes are writing blocks of data to the same file in parallel, the I/O performance for large files will improve when the stripe_count is set to a larger value. The stripe count sets the number of OSTs the file will be written to. By default, the stripe count is set to 1. While this default setting provides for efficient access of metadata (for example to support the ls -l command), large files should use stripe counts of greater than 1. This will increase the aggregate I/O bandwidth by using multiple OSTs in parallel instead of just one. A rule of thumb is to use a stripe count approximately equal to the number of gigabytes in the file.
Another good practice is to make the stripe count be an integral factor of the number of processes performing the write in parallel, so that you achieve load balance among the OSTs. For example, set the stripe count to 16 instead of 15 when you have 64 processes performing the writes.
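For instance, a sketch following this rule for 64 writer processes (the target file is hypothetical; note that the stripe setting must be applied before the file is written):

```bash
$ lfs setstripe -c 16 /scratch/temp/$USER/shared_output.dat
```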
!!! Note "Note"
    Using a large stripe size can improve performance when accessing very large files.
Large stripe size allows each client to have exclusive access to its own part of a file. However, it can be counterproductive in some cases if it does not match your I/O pattern. The choice of stripe size has no effect on a single-stripe file.
Read more on <http://wiki.lustre.org/manual/LustreManual20_HTML/ManagingStripingFreeSpace.html>
Disk usage and quota commands
------------------------------------------
User quotas on the Lustre file systems (SCRATCH) can be checked and reviewed using the following command:
```bash
$ lfs quota dir
```
Example for Lustre SCRATCH directory:
```bash
$ lfs quota /scratch
Disk quotas for user user001 (uid 1234):
Filesystem kbytes quota limit grace files quota limit grace
/scratch 8 0 100000000000 - 3 0 0 -
Disk quotas for group user001 (gid 1234):
Filesystem kbytes quota limit grace files quota limit grace
/scratch 8 0 0 - 3 0 0 -
```
In this example, we see that user001 has a quota limit of 100 TB, of which 8 KB is currently used.
HOME directory is mounted via NFS, so a different command must be used to obtain quota information:
```bash
$ quota
```
Example output:
```bash
$ quota
Disk quotas for user vop999 (uid 1025):
Filesystem blocks quota limit grace files quota limit grace
home-nfs-ib.salomon.it4i.cz:/home
28 0 250000000 10 0 500000
```
To have a better understanding of where exactly the space is used, you can use the following command:
```bash
$ du -hs dir
```
Example for your HOME directory:
```bash
$ cd /home
$ du -hs * .[a-zA-z0-9]* | grep -E "[0-9]*G|[0-9]*M" | sort -hr
258M cuda-samples
15M .cache
13M .mozilla
5,5M .eclipse
2,7M .idb_13.0_linux_intel64_app
```
This will list all directories that consume megabytes or gigabytes of space in your current (in this example HOME) directory. The list is sorted in descending order from largest to smallest files/directories.
To have a better understanding of the previous commands, you can read the manpages:
```bash
$ man lfs
$ man du
```
Extended Access Control List (ACL)
----------------------------------
Extended ACLs provide another security mechanism beside the standard POSIX ACLs which are defined by three entries (for owner/group/others). Extended ACLs have more than the three basic entries. In addition, they also contain a mask entry and may contain any number of named user and named group entries.
ACLs on a Lustre file system work exactly like ACLs on any Linux file system. They are manipulated with the standard tools in the standard manner. Below, we create a directory and allow a specific user access.
```bash
[vop999@login1.salomon ~]$ umask 027
[vop999@login1.salomon ~]$ mkdir test
[vop999@login1.salomon ~]$ ls -ld test
drwxr-x--- 2 vop999 vop999 4096 Nov 5 14:17 test
[vop999@login1.salomon ~]$ getfacl test
# file: test
# owner: vop999
# group: vop999
user::rwx
group::r-x
other::---
[vop999@login1.salomon ~]$ setfacl -m user:johnsm:rwx test
[vop999@login1.salomon ~]$ ls -ld test
drwxrwx---+ 2 vop999 vop999 4096 Nov 5 14:17 test
[vop999@login1.salomon ~]$ getfacl test
# file: test
# owner: vop999
# group: vop999
user::rwx
user:johnsm:rwx
group::r-x
mask::rwx
other::---
```
Default ACL mechanism can be used to replace setuid/setgid permissions on directories. Setting a default ACL on a directory (-d flag to setfacl) will cause the ACL permissions to be inherited by any newly created file or subdirectory within the directory. Refer to this page for more information on Linux ACL:
[http://www.vanemery.com/Linux/ACL/POSIX_ACL_on_Linux.html](http://www.vanemery.com/Linux/ACL/POSIX_ACL_on_Linux.html)
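For illustration, a minimal sketch of setting a default ACL on the test directory from the session above, so that newly created files and subdirectories inherit johnsm's access:

```bash
[vop999@login1.salomon ~]$ setfacl -d -m user:johnsm:rwx test
[vop999@login1.salomon ~]$ getfacl test
```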
Shared Workspaces
---------------------
### HOME
Users' home directories /home/username reside on the HOME filesystem. Accessible capacity is 0.5 PB, shared among all users. Individual users are restricted by filesystem usage quotas, set to 250 GB per user. If 250 GB should prove insufficient for a particular user, please contact [support](https://support.it4i.cz/rt); the quota may be lifted upon request.
!!! Note "Note"
    The HOME filesystem is intended for preparation, evaluation, processing and storage of data generated by active Projects.
    The HOME should not be used to archive data of past Projects or other unrelated data.

The files on HOME will not be deleted until the end of the [user's lifecycle](../../get-started-with-it4innovations/obtaining-login-credentials/obtaining-login-credentials/).
The workspace is backed up, such that it can be restored in case of a catastrophic failure resulting in significant data loss. This backup, however, is not intended to restore old versions of user data or to restore (accidentally) deleted files.
|HOME workspace||
|---|---|
|Accesspoint|/home/username|
|Capacity|0.5 PB|
|Throughput|6 GB/s|
|User quota|250 GB|
|Protocol|NFS, 2-Tier|
### WORK
The WORK workspace resides on the SCRATCH filesystem. Users may create subdirectories and files in the directories **/scratch/work/user/username** and **/scratch/work/project/projectid**. The /scratch/work/user/username directory is private to the user, much like the home directory. The /scratch/work/project/projectid directory is accessible to all users involved in project projectid.
!!! Note "Note"
    The WORK workspace is intended to store users' project data as well as for high performance access to input and output files. All project data should be removed once the project is finished. The data on the WORK workspace are not backed up.
    Files on the WORK filesystem are **persistent** (not automatically deleted) throughout the duration of the project.
The WORK workspace is hosted on SCRATCH filesystem. The SCRATCH is realized as Lustre parallel filesystem and is available from all login and computational nodes. Default stripe size is 1MB, stripe count is 1. There are 54 OSTs dedicated for the SCRATCH filesystem.
!!! Note "Note"
    Setting stripe size and stripe count correctly for your needs may significantly impact the I/O performance you experience.
|WORK workspace||
|---|---|
|Accesspoints|/scratch/work/user/username, /scratch/work/project/projectid|
|Capacity|1.6 PB|
|Throughput|30 GB/s|
|User quota|100 TB|
|Default stripe size|1 MB|
|Default stripe count|1|
|Number of OSTs|54|
|Protocol|Lustre|
### TEMP
The TEMP workspace resides on the SCRATCH filesystem. The TEMP workspace accesspoint is /scratch/temp. Users may freely create subdirectories and files on the workspace. Accessible capacity is 1.6 PB, shared among all users on TEMP and WORK. Individual users are restricted by filesystem usage quotas, set to 100 TB per user. The purpose of this quota is to prevent runaway programs from filling the entire filesystem and denying service to other users. If 100 TB should prove insufficient for a particular user, please contact [support](https://support.it4i.cz/rt); the quota may be lifted upon request.
!!! Note "Note"
    The TEMP workspace is intended for temporary scratch data generated during the calculation as well as for high performance access to input and output files. All I/O intensive jobs must use the TEMP workspace as their working directory.
    Users are advised to save the necessary data from the TEMP workspace to HOME or WORK after the calculations and clean up the scratch files.
    Files on the TEMP filesystem that are **not accessed for more than 90 days** will be automatically **deleted**.
The TEMP workspace is hosted on SCRATCH filesystem. The SCRATCH is realized as Lustre parallel filesystem and is available from all login and computational nodes. Default stripe size is 1MB, stripe count is 1. There are 54 OSTs dedicated for the SCRATCH filesystem.
!!! Note "Note"
    Setting stripe size and stripe count correctly for your needs may significantly impact the I/O performance you experience.
|TEMP workspace||
|---|---|
|Accesspoint|/scratch/temp|
|Capacity|1.6 PB|
|Throughput|30 GB/s|
|User quota|100 TB|
|Default stripe size|1 MB|
|Default stripe count|1|
|Number of OSTs|54|
|Protocol|Lustre|
RAM disk
--------
Every computational node is equipped with a filesystem realized in memory, the so-called RAM disk.
!!! Note "Note"
    Use the RAM disk in case you need really fast access to your data of limited size during your calculation. Be very careful, use of the RAM disk filesystem is at the expense of operational memory.
The local RAM disk is mounted as /ramdisk and is accessible to the user at the /ramdisk/$PBS_JOBID directory.
The local RAM disk filesystem is intended for temporary scratch data generated during the calculation as well as for high performance access to input and output files. The size of the RAM disk filesystem is limited; it is not recommended to allocate a large amount of memory and use a large amount of data in the RAM disk filesystem at the same time.
!!! Note "Note"
    The local RAM disk directory /ramdisk/$PBS_JOBID will be deleted immediately after the calculation ends. Users should take care to save the output data from within the jobscript.
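As an illustration, a jobscript sketch using the RAM disk, in the style of the scratch examples above (input.dat and mycalc.x are hypothetical):

```bash
#!/bin/bash
# change to the local RAM disk directory
cd /ramdisk/$PBS_JOBID || exit
# copy input file to the RAM disk
cp $PBS_O_WORKDIR/input.dat .
# execute the calculation
$PBS_O_WORKDIR/mycalc.x input.dat > output.out
# save the output before the job ends - the RAM disk is purged afterwards
cp output.out $PBS_O_WORKDIR/.
```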
|RAM disk||
|---|---|
|Mountpoint| /ramdisk|
|Accesspoint| /ramdisk/$PBS_JOBID|
|Capacity|120 GB|
|Throughput|over 1.5 GB/s write, over 5 GB/s read, single thread, over 10 GB/s write, over 50 GB/s read, 16 threads|
|User quota|none|
Summary
-------
|Mountpoint|Usage|Protocol|Net Capacity|Throughput|Limitations|Access|Services|
|---|---|---|---|---|---|---|---|
|/home|home directory|NFS, 2-Tier|0.5 PB|6 GB/s|Quota 250 GB|Compute and login nodes|backed up|
|/scratch/work|large project files|Lustre|1.69 PB|30 GB/s|Quota|Compute and login nodes|none|
|/scratch/temp|job temporary data|Lustre|1.69 PB|30 GB/s|Quota 100 TB|Compute and login nodes|files older than 90 days removed|
|/ramdisk|job temporary data, node local|local|120 GB|90 GB/s|none|Compute nodes|purged after job ends|
CESNET Data Storage
------------
Do not use shared filesystems at IT4Innovations as a backup for a large amount of data or for long-term archiving purposes.
!!! Note "Note"
    The IT4Innovations does not provide storage capacity for data archiving. Academic staff and students of research institutions in the Czech Republic can use the [CESNET Storage service](https://du.cesnet.cz/).
The CESNET Storage service can be used for research purposes, mainly by academic staff and students of research institutions in the Czech Republic.
Users of the CESNET data storage (DU) can be organizations or individuals who are either in a current employment relationship (employees) or a current study relationship (students) with a legal entity (organization) that meets the "Principles for access to CESNET Large infrastructure (Access Policy)".
Users may only use the CESNET data storage for data transfer and storage associated with activities in science, research, development, the spread of education, culture and prosperity. For details, see the "Acceptable Use Policy CESNET Large Infrastructure (Acceptable Use Policy, AUP)".
The service is documented at <https://du.cesnet.cz/wiki/doku.php/en/start>. For special requirements, please contact the CESNET Storage Department directly via e-mail [du-support(at)cesnet.cz](mailto:du-support@cesnet.cz).
The procedure to obtain the CESNET access is quick and trouble-free.
(source [https://du.cesnet.cz/](https://du.cesnet.cz/wiki/doku.php/en/start "CESNET Data Storage"))
CESNET storage access
---------------------
### Understanding CESNET storage
!!! Note "Note"
    It is very important to understand the CESNET storage before uploading data. Please read <https://du.cesnet.cz/en/navody/home-migrace-plzen/start> first.
Once registered for CESNET Storage, you may [access the storage](https://du.cesnet.cz/en/navody/faq/start) in a number of ways. We recommend the SSHFS and RSYNC methods.
### SSHFS Access
!!! Note "Note"
    SSHFS: The storage will be mounted like a local hard drive.

The SSHFS provides a very convenient way to access the CESNET Storage. The storage will be mounted onto a local directory, exposing the vast CESNET Storage as if it was a local removable hard drive. Files can then be copied in and out in the usual fashion.
First, create the mount point
```bash
$ mkdir cesnet
```
Mount the storage. Note that you can choose among ssh.du1.cesnet.cz (Plzen), ssh.du2.cesnet.cz (Jihlava) and ssh.du3.cesnet.cz (Brno). Mount tier1_home **(only 5120M!)**:
```bash
$ sshfs username@ssh.du1.cesnet.cz:. cesnet/
```
For easy future access from Anselm, install your public key
```bash
$ cp .ssh/id_rsa.pub cesnet/.ssh/authorized_keys
```
Mount tier1_cache_tape for the Storage VO:
```bash
$ sshfs username@ssh.du1.cesnet.cz:/cache_tape/VO_storage/home/username cesnet/
```
View the archive, copy the files and directories in and out
```bash
$ ls cesnet/
$ cp -a mydir cesnet/.
$ cp cesnet/myfile .
```
Once done, please remember to unmount the storage
```bash
$ fusermount -u cesnet
```
### Rsync access
!!! Note "Note"
    Rsync provides delta transfer for best performance and can resume interrupted transfers.
Rsync is a fast and extraordinarily versatile file copying tool. It is famous for its delta-transfer algorithm, which reduces the amount of data sent over the network by sending only the differences between the source files and the existing files in the destination. Rsync is widely used for backups and mirroring and as an improved copy command for everyday use.
Rsync finds files that need to be transferred using a "quick check" algorithm (by default) that looks for files that have changed in size or in last-modified time. Any changes in the other preserved attributes (as requested by options) are made on the destination file directly when the quick check indicates that the file's data does not need to be updated.
More about Rsync at <https://du.cesnet.cz/en/navody/rsync/start#pro_bezne_uzivatele>
Transfer large files to/from CESNET storage, assuming membership in the Storage VO
```bash
$ rsync --progress datafile username@ssh.du1.cesnet.cz:VO_storage-cache_tape/.
$ rsync --progress username@ssh.du1.cesnet.cz:VO_storage-cache_tape/datafile .
```
Transfer large directories to/from CESNET storage, assuming membership in the Storage VO
```bash
$ rsync --progress -av datafolder username@ssh.du1.cesnet.cz:VO_storage-cache_tape/.
$ rsync --progress -av username@ssh.du1.cesnet.cz:VO_storage-cache_tape/datafolder .
```
Transfer rates of about 28 MB/s can be expected.
File added