The Aurora HPC 10-10 excels in density, energy efficiency and reliability: it can thermally manage the top of the Xeon E5 series, the E5-2687W at 3.1 GHz (150 W TDP). This means a standard rack of the Aurora HPC 10-10 can deliver 100 Tflops of pure CPU computational power at 110 kW of peak consumption, that is 1 Petaflop in 15 m2 (160 sqft). Such densities are possible thanks to a hot liquid cooling system (coolant at 50 °C and above) that improves on the previous AU 5600 supercomputer, extracting heat from the system even more efficiently.
The HPC 10-10 inherits the efficient power conversion of the AU 5600 and allows data center PUEs as low as 1.05. This means significant energy savings; together with a very respectable 900 Mflops/W, it makes the Aurora high performance computer one of the most efficient and greenest systems on the market.
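As a rough check of the efficiency figures above, both PUE and Mflops/W follow directly from their definitions. The sketch below (Python, using the rack-level numbers quoted in the text) reproduces the claimed values:

```python
def pue(total_facility_kw, it_kw):
    """Power Usage Effectiveness: total facility power over IT power."""
    return total_facility_kw / it_kw

def mflops_per_watt(tflops, kw):
    """Energy efficiency: sustained Tflops and kW converted to Mflops/W."""
    return (tflops * 1e6) / (kw * 1e3)

# Rack-level figures from the text: ~100 Tflops at ~110 kW peak.
print(round(mflops_per_watt(100, 110)))   # ~909, in line with the 900 Mflops/W claim
# At PUE 1.05, cooling and overhead add only 5% on top of IT power.
print(round(pue(110 * 1.05, 110), 2))     # 1.05
```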
Eurotech has leveraged its long experience in developing embedded systems to bring to the HPC 10-10 a high degree of RAS (reliability, availability and serviceability). The 10-10 has no moving parts, no hot spots and no noise. It has three independent sensor networks, soldered memory, and its nodes are very manageable 50 x 16 cm (19" x 6") blades, hot pluggable despite being liquid cooled. There is also limited cabling inside each rack, where the backplanes handle most of the I/O communication.
High computational power
Aurora uses the latest and fastest technology available: a high end solution, capable of delivering unparalleled computational power while maintaining all the flexibility and compatibility of a CPU based x86 solution.
High packaging density
Aurora systems are best in class in terms of computing density per rack. An Aurora mounting Sandy Bridge processors can host up to 4096 cores / 512 CPUs / 256 blades in a single 48U rack. In other words, this means over 66 Tflops per m2, or 2 Petaflops in a studio flat! A reduced floor occupation and easier installation.
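The per-rack figure can be sanity-checked from the core count. Assuming the usual 8 double-precision flops per cycle for an AVX (Sandy Bridge) core, an architectural figure not stated in the text, 4096 cores at 3.1 GHz give:

```python
# Peak double-precision throughput for a Sandy Bridge rack.
# flops_per_cycle = 8 (4-wide AVX add + 4-wide AVX multiply) is an
# architectural assumption, not a number from the brochure.
cores_per_rack = 4096
clock_ghz = 3.1            # E5-2687W clock, from the text
flops_per_cycle = 8
peak_tflops = cores_per_rack * clock_ghz * flops_per_cycle / 1000
print(round(peak_tflops, 1))   # 101.6, consistent with "over 100 Tflops/rack"
```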
Hot liquid cooling
Aurora removes component generated heat using hot liquid cooling (water at up to 50 °C), with no need for air conditioning in any climate zone. This allows reaching a data center PUE as low as 1.05.
Thermal Energy Reuse
Each Aurora computational node can produce a temperature gap of between 3 °C and 5 °C in the cooling liquid. By setting up the racks in a multistage heating configuration, it is possible to warm the coolant enough to be used for producing air conditioning, generating electricity or simply heating a building.
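The heat recoverable from the coolant follows from Q = m_dot * c_p * dT. A minimal sketch, assuming a water-like coolant; the flow rate used in the example is illustrative, as actual flow figures are not given in the text:

```python
def recoverable_heat_kw(flow_lpm, delta_t_c, cp_kj_kg_k=4.186, density_kg_l=1.0):
    """Heat carried by the coolant: Q = m_dot * c_p * dT (result in kW).

    flow_lpm: coolant flow in litres per minute (assumed water-like).
    delta_t_c: temperature rise across the system in °C.
    """
    m_dot_kg_s = flow_lpm * density_kg_l / 60.0   # mass flow in kg/s
    return m_dot_kg_s * cp_kj_kg_k * delta_t_c    # kJ/s == kW

# Illustrative: carrying ~90 kW of rack heat at a 5 °C rise would need
# roughly 258 L/min of water (hypothetical flow, for scale only).
print(round(recoverable_heat_kw(flow_lpm=258, delta_t_c=5), 1))   # ~90.0
```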
Cooling directly on the components
Direct on component liquid cooling, a feature that limits on board hot spots.
Reliability and availability
Quality, on component liquid cooling, redundancy of all critical components, vibration free operation, solid state storage, temperature control, monitoring networks (IPMI), ease of maintenance and, last but not least, Eurotech's HPC experience all contribute to high reliability and longer system availability.
No moving parts
Aurora doesn’t shake, rattle or make noise. It does not have moving fans or require a dedicated room for installation.
Unified network architecture
An Infiniband switched network coexists with an optional FPGA driven 3D Torus nearest neighbour network.
Synchronization networks
Three independent synchronization networks (system, subdomain and local) preserve efficiency at Petascale by guaranteeing that the communication and scheduling of all nodes are handled automatically.
Excellent design and ease of maintainability
While the Aurora supercomputers show an appealing design, they have been conceived to guarantee ease of access and operation, making maintenance easy.
- from 42 to over 100 Tflops/rack
- 340-390 W/node, 11.2 kW/chassis, 90-100 kW/rack typical
- Intel Xeon E5 and Xeon 5600 series
- from 3072 to 4096 cores/rack
- 8/16/32 GB or above soldered on board ECC DDR3 SDRAM per node (Xeon E5)
- 6/12/24 GB or above soldered on board ECC DDR3 SDRAM per node (Xeon 5600)
- Memory bandwidth: 40 GB per second per node
- 80 / 160 / 256 / 512 / 1024 GB 1.8” SATA disk
- 80 / 160 / 256 GB 1.8” SATA SSD
- QDR Infiniband port per node (BW: 40 Gbps, latency <2 µs)
- 20+20 QDR IB ports (QSFP connections) per chassis
- OPTIONAL: 1+1 3D Torus nearest neighbour switchless network per node (BW: 60+60 Gbps, latency ~1 µs)
- External AC/DC converter (85-300 VAC to 48 VDC), n+1 redundant, 97% efficiency
- In rack DC/DC trays (48 VDC to 10 VDC), 97% efficiency
- Entirely liquid cooled, ambient heat spillage <2%
Monitoring and Control
- IPMI: 960 measurement points per rack
- Dimensions (rack): H 2260 mm x W 1095 mm x D 1500 mm
- Weight (maximum): 1560 kg (3440 lbs) per fully populated rack
- Acoustical Noise Level: <20 dB at 1 m
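The power figures above can be cross-checked with a simple roll-up. The sketch assumes 32 blades per chassis and 8 chassis per rack (an inference from the 256 blades/rack figure, not stated directly in the specifications):

```python
# Roll-up of per-node power to chassis and rack level.
# blades_per_chassis and chassis_per_rack are inferred (256 blades/rack / 8),
# and 350 W is a representative value within the 340-390 W/node range.
node_w = 350
blades_per_chassis = 32
chassis_per_rack = 8

chassis_kw = node_w * blades_per_chassis / 1000
rack_kw = chassis_kw * chassis_per_rack
print(chassis_kw, round(rack_kw, 1))   # 11.2 kW/chassis, 89.6 kW/rack
```

Both results line up with the published 11.2 kW/chassis and 90-100 kW/rack typical figures.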
Adoption of Intel processors ensures compatibility with a vast range of applications, tools, operating systems and HPC-specific middleware. Being x86 based gives Aurora an almost unlimited choice of compilers, debuggers, libraries, applications, clustering and administration tools, open source or proprietary.
- Scientific Linux and others
- Intel Cluster Toolkit
- GNU toolchain
- Portland CDK
- Intel MPI
- Portland MVAPICH
Debuggers and performance tools
- Intel Trace Analyzer and Collector
- Intel VTune
Math libraries (compatibility is implementation specific)
- Intel MKL
- IMSL (with ICT requires adaptation)
Resource Management/ Deployment
- OpenPBS, SunGridEngine
- PBS Professional
- Bright Cluster Manager
- Platform LSF/Cluster Manager
- Rocks, Rocks+
- Torque, MOAB, xCAT
Distributed File System
- Lustre over QDR Infiniband, either via OFED or TCP.
- pNFS, panFS, GPFS under test
Maintenance and Management