# Compute Nodes

## Node Configuration

Anselm is a cluster of x86-64 Intel-based nodes built on the Bull Extreme Computing bullx technology. The cluster contains four types of compute nodes.

### Compute Nodes Without Accelerator

- 180 nodes
- 2880 cores in total
- two Intel Sandy Bridge E5-2665 8-core 2.4 GHz processors per node
- 64 GB of physical memory per node
- one 500 GB SATA 2.5" 7.2 krpm HDD per node
- bullx B510 blade servers
- cn[1-180]

### Compute Nodes With GPU Accelerator

- 23 nodes
- 368 cores in total
- two Intel Sandy Bridge E5-2470 8-core 2.3 GHz processors per node
- 96 GB of physical memory per node
- one 500 GB SATA 2.5" 7.2 krpm HDD per node
- GPU accelerator: 1× NVIDIA Tesla K20 (Kepler) per node
- bullx B515 blade servers
- cn[181-203]

### Compute Nodes With MIC Accelerator

- 4 nodes
- 64 cores in total
- two Intel Sandy Bridge E5-2470 8-core 2.3 GHz processors per node
- 96 GB of physical memory per node
- one 500 GB SATA 2.5" 7.2 krpm HDD per node
- MIC accelerator: 1× Intel Xeon Phi 5110P per node
- bullx B515 blade servers
- cn[204-207]

### Fat Compute Nodes

- 2 nodes
- 32 cores in total
- two Intel Sandy Bridge E5-2665 8-core 2.4 GHz processors per node
- 512 GB of physical memory per node
- two 300 GB SAS 3.5" 15 krpm HDDs (RAID 1) per node
- two 100 GB SLC SSDs per node
- bullx R423-E3 servers
- cn[208-209]

*Figure: Anselm bullx B510 servers*

## Compute Nodes Summary

| Node type                  | Count | Range       | Memory | Cores        | Access (queues)           |
| -------------------------- | ----- | ----------- | ------ | ------------ | ------------------------- |
| Nodes without accelerator  | 180   | cn[1-180]   | 64 GB  | 16 @ 2.4 GHz | qexp, qprod, qlong, qfree |
| Nodes with GPU accelerator | 23    | cn[181-203] | 96 GB  | 16 @ 2.3 GHz | qgpu, qprod               |
| Nodes with MIC accelerator | 4     | cn[204-207] | 96 GB  | 16 @ 2.3 GHz | qmic, qprod               |
| Fat compute nodes          | 2     | cn[208-209] | 512 GB | 16 @ 2.4 GHz | qfat, qprod               |
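
For example, an interactive job on one of the GPU-accelerated nodes can be requested through the qgpu queue. The command below is only a sketch, assuming that qgpu accepts interactive jobs and reusing the OPEN-0-0 project ID from the examples further down this page:

```console
$ qsub -A OPEN-0-0 -q qgpu -l select=1:ncpus=16 -I
```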

## Processor Architecture

Anselm is equipped with Intel Sandy Bridge processors: the Intel Xeon E5-2665 (nodes without an accelerator and fat nodes) and the Intel Xeon E5-2470 (nodes with an accelerator). The processors support the 256-bit Advanced Vector Extensions (AVX) instruction set.

### Intel Sandy Bridge E5-2665 Processor

- eight-core
- speed: 2.4 GHz, up to 3.1 GHz using Turbo Boost Technology
- peak performance: 19.2 Gflop/s per core (see the sketch after these lists)
- caches:
  - L2: 256 KB per core
  - L3: 20 MB per processor
- memory bandwidth at the level of the processor: 51.2 GB/s

### Intel Sandy Bridge E5-2470 Processor

- eight-core
- speed: 2.3 GHz, up to 3.1 GHz using Turbo Boost Technology
- peak performance: 18.4 Gflop/s per core
- caches:
  - L2: 256 KB per core
  - L3: 20 MB per processor
- memory bandwidth at the level of the processor: 38.4 GB/s
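
The per-core peak performance figures follow from the clock speed and the AVX width. As a rough sketch, assuming the usual Sandy Bridge throughput of one 256-bit add and one 256-bit multiply retired per cycle (8 double-precision flops per cycle):

$$
R_{\text{peak}} = f \times 8\ \tfrac{\text{flop}}{\text{cycle}}
\quad\Rightarrow\quad
2.4\ \text{GHz} \times 8 = 19.2\ \tfrac{\text{Gflop}}{\text{s}},
\qquad
2.3\ \text{GHz} \times 8 = 18.4\ \tfrac{\text{Gflop}}{\text{s}}
$$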

Nodes equipped with the Intel Xeon E5-2665 CPU have the PBS resource attribute cpu_freq = 24 set; nodes equipped with the Intel Xeon E5-2470 CPU have cpu_freq = 23 set.

```console
$ qsub -A OPEN-0-0 -q qprod -l select=4:ncpus=16:cpu_freq=24 -I
```

In this example, we allocate 4 nodes with 16 cores at 2.4 GHz per node.

Intel Turbo Boost Technology is used by default. You can disable it for all nodes of a job by using the cpu_turbo_boost resource attribute.

```console
$ qsub -A OPEN-0-0 -q qprod -l select=4:ncpus=16 -l cpu_turbo_boost=0 -I
```
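
The two attributes can also be combined. A hypothetical request for four 2.4 GHz (cpu_freq = 24) nodes with Turbo Boost disabled, composed from the same syntax as the examples above, might look like this:

```console
$ qsub -A OPEN-0-0 -q qprod -l select=4:ncpus=16:cpu_freq=24 -l cpu_turbo_boost=0 -I
```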

## Memory Architecture

### Compute Node Without Accelerator

- 2 sockets
- Memory controllers are integrated into the processors.
  - 8 DDR3 DIMMs per node
  - 4 DDR3 DIMMs per CPU
  - 1 DDR3 DIMM per channel
  - Data rate support: up to 1600 MT/s
- Populated memory: 8× 8 GB DDR3 DIMMs, 1600 MHz

### Compute Node With GPU or MIC Accelerator

- 2 sockets
- Memory controllers are integrated into the processors.
  - 6 DDR3 DIMMs per node
  - 3 DDR3 DIMMs per CPU
  - 1 DDR3 DIMM per channel
  - Data rate support: up to 1600 MT/s
- Populated memory: 6× 16 GB DDR3 DIMMs, 1600 MHz

### Fat Compute Node

- 2 sockets
- Memory controllers are integrated into the processors.
  - 16 DDR3 DIMMs per node
  - 8 DDR3 DIMMs per CPU
  - 2 DDR3 DIMMs per channel
  - Data rate support: up to 1600 MT/s
- Populated memory: 16× 32 GB DDR3 DIMMs, 1600 MHz
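
The memory bandwidth figures quoted in the Processor Architecture section follow from this channel layout. As a sketch, assuming each DDR3-1600 channel transfers 8 bytes per memory transaction (12.8 GB/s per channel), with four channels per E5-2665 and three per E5-2470:

$$
BW = n_{\text{channels}} \times 1600\ \tfrac{\text{MT}}{\text{s}} \times 8\ \text{B}
\quad\Rightarrow\quad
4 \times 12.8\ \tfrac{\text{GB}}{\text{s}} = 51.2\ \tfrac{\text{GB}}{\text{s}},
\qquad
3 \times 12.8\ \tfrac{\text{GB}}{\text{s}} = 38.4\ \tfrac{\text{GB}}{\text{s}}
$$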