<img src="https://code.it4i.cz/ADAS/HEAppE/Middleware/uploads/d142c8ba0e360842c0974c84a0c45064/it4i-logo-new.png" alt="drawing" height="120" align="right"/>

<br/>

# HEAppE Middleware

*High-End Application Execution Middleware*, formerly *HPC as a Service Middleware*

## References

HEAppE Middleware has already been successfully used in several public and commercial projects:

* in the **H2020 project LEXIS** as part of the **LEXIS Platform**, providing the platform's job orchestrator with access to a number of HPC systems in several HPC centers; https://lexis-project.eu
* in the **crisis decision support system Floreon+** for a What-If analysis workflow utilizing HPC clusters; https://floreon.eu
* in the **Urban Thematic Exploitation Platform** (Urban-TEP), financed by ESA, as a middleware enabling **sandbox execution of user-defined Docker images** on the cluster; https://urban-tep.eo.esa.int
* in the **H2020 project ExCaPE** as part of a **Drug Discovery Platform** enabling execution of drug discovery scientific pipelines on a supercomputer; http://excape-h2020.eu
## Licence and Contact Information

HEAppE Middleware is licensed under the **GNU General Public License v3.0**. For commercial use, contact us at **support.heappe@it4i.cz** for proprietary licensing information.

## Next Release Information

The next major release will contain the new **multi-platform .NET Core version** of HEAppE Middleware, a **REST API**, **dockerized deployment and management** support, an updated **PBS** adapter and a new **SLURM** adapter, and various functional and security updates.

## IT4Innovations national supercomputing center

The IT4Innovations national supercomputing center operates four supercomputers: Anselm (94 TFlop/s, installed in 2013), Salomon (2 PFlop/s, installed in 2015), Barbora (826 TFlop/s, installed in 2019), and a special system for AI computation, DGX-2 (2 PFlop/s in AI, installed in 2019). A petascale EURO IT4I system will be installed at the center in 2020 as part of the EuroHPC project. The supercomputers are available to the academic community within the Czech Republic and Europe and to the industrial community worldwide via HEAppE Middleware.

### Salomon

The Salomon cluster consists of 1008 compute nodes, totaling 24192 compute cores with 129 TB RAM, giving over 2 PFlop/s theoretical peak performance. Each node is a powerful x86-64 computer equipped with 24 cores and at least 128 GB RAM. Nodes are interconnected by a 7D Enhanced Hypercube InfiniBand network and equipped with Intel Xeon E5-2680v3 processors. The cluster comprises 576 nodes without accelerators and 432 nodes equipped with Intel Xeon Phi MIC accelerators.

https://docs.it4i.cz/salomon/hardware-overview/
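
The quoted core count can be cross-checked with simple arithmetic. As a sketch only: the clock rate and double-precision FLOPs per cycle below are assumed values for the Xeon E5-2680v3 (AVX2 with FMA), not figures stated in this document, and the resulting CPU-only peak excludes the Xeon Phi accelerators that bring the total above 2 PFlop/s.

```python
# Rough CPU-side theoretical peak for Salomon.
# Assumed values (not from this document): 2.5 GHz base clock and
# 16 double-precision FLOPs/cycle for the Xeon E5-2680v3 (AVX2 + FMA).
nodes = 1008
cores_per_node = 24
clock_hz = 2.5e9
flops_per_cycle = 16

total_cores = nodes * cores_per_node                      # 24192, matches the text
cpu_peak_tflops = total_cores * clock_hz * flops_per_cycle / 1e12

print(total_cores)
print(cpu_peak_tflops)   # ~0.97 PFlop/s from CPUs alone
```
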

### Barbora

The Barbora cluster consists of 201 compute nodes, totaling 7232 compute cores with 44544 GB RAM, giving over 848 TFlop/s theoretical peak performance. Nodes are interconnected through a fully non-blocking fat-tree InfiniBand network and are equipped with Intel Cascade Lake processors. A few nodes are also equipped with NVIDIA Tesla V100-SXM2 accelerators.

https://docs.it4i.cz/barbora/hardware-overview/
### NVIDIA DGX-2
|
|
|
The DGX-2 is a very powerful computational node, featuring high end x86_64 processors and 16 NVIDIA V100-SXM3 GPUs. The DGX-2 introduces NVIDIA’s new NVSwitch, enabling 300 GB/s chip-to-chip communication at 12 times the speed of PCIe. With NVLink2, it enables 16x NVIDIA V100-SXM3 GPUs in a single system, for a total bandwidth going beyond 14 TB/s. Featuring pair of Xeon 8168 CPUs, 1.5 TB of memory, and 30 TB of NVMe storage, we get a system that consumes 10 kW, weighs 163.29 kg, but offers double precision performance in excess of 130TF.
|
|
|
|
|
|
https://docs.it4i.cz/dgx2/introduction/
|
|
|
|
|
|
### Anselm
|
|
|
The Anselm cluster consists of 209 compute nodes, totaling 3344 compute cores with 15 TB RAM and giving over 94 TFLOP/s theoretical peak performance. Each node is a powerful x86-64 computer, equipped with 16 cores, at least 64 GB RAM, and 500 GB hard disk drive. Nodes are interconnected by fully non-blocking fat-tree InfiniBand network and equipped with Intel Sandy Bridge processors. A few nodes are also equipped with NVIDIA Kepler GPU or Intel Xeon Phi MIC accelerators.
|
|
|
|

*HEAppE's* universally designed software architecture enables unified access to different HPC systems through a simple object-oriented client-server interface using standard web services. It thus provides HPC capabilities to users without the need to manage running jobs from the command-line interface of the HPC scheduler directly on the cluster.

## REST API

https://app.swaggerhub.com/apis/vsvaton/heappe/1.0.0
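
As a minimal sketch of what a client of the REST API might look like: the endpoint path, payload field names, and deployment URL below are illustrative assumptions, not taken from the Swagger definition, so check the published API description above before relying on them.

```python
import json

# Hypothetical deployment URL; a real HEAppE instance would publish its own.
BASE_URL = "https://heappe.example.org/heappe"

def auth_payload(username, password):
    """Build a JSON body for a username/password authentication request.

    The "Credentials" structure is an assumption for illustration only.
    """
    return {"Credentials": {"Username": username, "Password": password}}

def create_job_payload(session_code, job_name, project, template_id):
    """Build a JSON body for creating a job from a predefined command template.

    Field names here are assumptions modelled on a typical session-based
    job-submission API, not the authoritative HEAppE schema.
    """
    return {
        "SessionCode": session_code,
        "JobSpecification": {
            "Name": job_name,
            "Project": project,
            "Tasks": [{"CommandTemplateId": template_id}],
        },
    }

if __name__ == "__main__":
    # Serialize a sample authentication body as it would be POSTed.
    print(json.dumps(auth_payload("demo-user", "secret")))
```

Such a payload would then be sent with an HTTP client, e.g. `requests.post(f"{BASE_URL}/...", json=body)`, with the session code returned by authentication threaded through subsequent job-management calls.
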

## Web Services