Centers

LRZ

The Leibniz Supercomputing Centre of the Bavarian Academy of Sciences and Humanities (Leibniz-Rechenzentrum, LRZ) provides comprehensive services to scientific and academic communities by:

  • Delivering general IT services to more than 100,000 university customers in Munich and to the Bavarian Academy of Sciences and Humanities
  • Running and managing the powerful communication infrastructure of the Munich Scientific Network (MWN)
  • Acting as a competence centre for data communication networks
  • Serving as a centre for large-scale archiving and backup
  • Providing High Performance Computing resources, training and support at the local, regional, national and international levels.

Research in HPC is carried out in collaboration with the distributed, statewide Competence Network for Technical and Scientific High Performance Computing in Bavaria (KONWIHR).

Contact:

  • Leibniz Supercomputing Centre
  • Prof. Dr. Dieter Kranzlmüller
  • Boltzmannstr. 1
  • 85748 Garching near Munich
  • +49 89 35831-8000
  • www.lrz.de

Compute servers currently operated by LRZ

“SuperMUC Phase 1” (IBM System x)
  • IBM iDataPlex partition: 9,216 nodes, 147,456 cores, 288 TByte memory, FDR10 InfiniBand; peak performance 3,185 TFlop/s, Capability Computing
  • Westmere EX partition: 205 nodes, 8,200 cores, 52 TByte memory, QDR InfiniBand; peak performance 78 TFlop/s, Capability Computing
  • Knights Corner partition: 32 accelerated nodes, 76 GByte memory, FDR14 InfiniBand; peak performance 100 TFlop/s, Prototype system

“SuperMUC Phase 2” (Lenovo Nextscale)
  • 3,072 nodes, 86,016 cores (Haswell EP)
  • 197 TByte memory, FDR14 InfiniBand
  • Peak performance: 3,580 TFlop/s
  • Purpose: Capability Computing
  • User community: German universities and research institutes, PRACE (Tier-0 System)

“CooLMUC2” (Lenovo Nextscale)
  • 252 nodes, 7,056 cores (Haswell EP)
  • 16.1 TByte memory, FDR14 InfiniBand
  • Peak performance: 270 TFlop/s
  • Purpose: Capability Computing
  • User community: Bavarian Universities (Tier-2)

“CoolMUC3” (Megware Slide SX)
  • 148 nodes, 9,472 cores (Knights Landing)
  • 14.2 TByte memory, Omnipath interconnect
  • Peak performance: 383 TFlop/s
  • Purpose: Capability Computing
  • User community: Bavarian Universities (Tier-2)

Compute Cloud (Linux cluster)
  • 200 nodes, 2,700 cores
  • Peak performance: 18 TFlop/s
  • Purpose: Capability Computing
  • User community: Bavarian Universities, LCC Grid

A detailed description can be found on LRZ’s web pages: www.lrz.de/services/compute
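
The peak-performance figures above follow from core count, clock rate, and the number of floating-point operations each core can retire per cycle. As a rough cross-check of the SuperMUC Phase 2 entry, here is a minimal sketch in Python; the 2.6 GHz clock and 16 double-precision FLOPs per cycle (two 4-wide AVX2 FMA units per Haswell EP core) are assumed values, not figures taken from the table:

```python
# Back-of-the-envelope check of the "peak performance" figure for
# SuperMUC Phase 2. The core count is taken from the table; the clock
# rate and FLOPs-per-cycle values are assumptions for Haswell EP.
cores = 86_016          # from the table
clock_hz = 2.6e9        # assumed nominal clock
flops_per_cycle = 16    # assumed: 2 FMA units x 4-wide AVX2 x 2 ops

peak_tflops = cores * clock_hz * flops_per_cycle / 1e12
print(f"SuperMUC Phase 2 peak: {peak_tflops:,.0f} TFlop/s")  # ~3,578, vs. 3,580 listed
```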

Picture of the Petascale system SuperMUC at the Leibniz Supercomputing Centre.

HLRS

First German National Center

Based on a long tradition in supercomputing at the University of Stuttgart, HLRS (Höchstleistungsrechenzentrum Stuttgart) was founded in 1995 as the first German federal Centre for High Performance Computing. HLRS serves researchers at universities and research laboratories in Germany and Europe, as well as their external and industrial partners, with high-end computing power for engineering and scientific applications.

Service for Industry

Service provisioning for industry is done together with T-Systems, T-Systems sfr, and Porsche in the public-private joint venture hww (Höchstleistungsrechner für Wissenschaft und Wirtschaft). Through this cooperation, industry always has access to the most recent HPC technology.

Bundling Competencies

In order to bundle service resources in the state of Baden-Württemberg, HLRS has teamed up with the Steinbuch Centre for Computing of the Karlsruhe Institute of Technology. This collaboration has been implemented in the non-profit organization SICOS BW GmbH.

World Class Research

As one of the largest research centers for HPC, HLRS takes a leading role in research. Its participation in the German national Excellence Initiative makes HLRS an outstanding place in the field.

Contact:

  • Höchstleistungsrechenzentrum Stuttgart (HLRS),
    Universität Stuttgart
  • Prof. Dr.-Ing. Dr. h.c. Dr. h.c. Prof. E.h. Michael M. Resch
  • Nobelstraße 19
  • 70569 Stuttgart, Germany
  • +49 711 685-87269
  • www.hlrs.de

Compute servers currently operated by HLRS

Cray XC40 "Hazel Hen"
  • 7,712 nodes, 185,088 cores
  • 1 PByte memory
  • Peak performance: 7,420 TFlop/s
  • Purpose: Capability Computing
  • User community: German and European (PRACE) Research Organizations and Industry

NEC Cluster (Laki, Laki2), a heterogeneous computing platform of two independent clusters
  • 911 nodes, 9,988 cores
  • 23 TByte memory
  • Peak performance: approximately 170 TFlop/s in total (Laki: 120.5 TFlop/s, Laki2: 47.2 TFlop/s)
  • Purpose: Capacity Computing
  • User community: German Universities, Research Institutes and Industry

NEC SX-ACE
  • 64 compute nodes, 256 cores
  • 4 TByte memory
  • Peak performance: 16 TFlop/s
  • Purpose: Vector Computing
  • User community: German Universities, Research Institutes and Industry

A detailed description can be found on HLRS’s web pages: www.hlrs.de/systems
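
The memory figures can also be broken down per node and per core. A minimal sketch using only the Hazel Hen numbers above, with the single assumption that 1 PByte is read as 10^15 bytes:

```python
# Derive per-node and per-core memory for Hazel Hen from the table values.
# Assumption: 1 PByte is interpreted as 1e15 bytes (decimal units).
nodes, cores, mem_bytes = 7_712, 185_088, 1e15

print(f"Memory per node: {mem_bytes / nodes / 1e9:.0f} GByte")  # ~130 GByte
print(f"Memory per core: {mem_bytes / cores / 1e9:.1f} GByte")  # ~5.4 GByte
```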

View of the HLRS Cray XC40 "Hazel Hen"

JSC

The Jülich Supercomputing Centre (JSC) at Forschungszentrum Jülich is committed to enabling scientists and engineers to explore some of the most complex grand challenges facing science and society. Its research is performed through collaborative infrastructures, exploiting extreme-scale supercomputing and federated data services.

The centre's mission comprises:

  • Provision of supercomputer resources of the highest performance class for projects in science, research and industry in the fields of modeling and computer simulation, including their methods. Projects are selected through international peer-review procedures implemented by the John von Neumann Institute for Computing (NIC), GCS, and PRACE.
  • Supercomputer-oriented research and development in selected fields of physics and other natural sciences by research groups, and in technology, e.g. through co-design together with leading HPC companies.
  • Higher education for master's and doctoral students in close cooperation with neighbouring universities.
  • Implementation of strategic support infrastructures, including community-oriented simulation laboratories and cross-sectional teams, e.g. on mathematical methods and algorithms and on parallel performance tools, enabling the effective use of the supercomputer resources.

Contact:

  • Jülich Supercomputing Centre (JSC)
    Forschungszentrum Jülich
  • Prof. Dr. Dr. Thomas Lippert
  • 52425 Jülich, Germany
  • Phone: +49 2461 61-6402
  • www.fz-juelich.de/jsc

Compute servers currently operated by JSC

IBM Blue Gene/Q "JUQUEEN"
  • 28 racks, 28,672 nodes, 458,752 IBM PowerPC A2 cores
  • 448 TByte main memory
  • Peak performance: 5,872 TFlop/s
  • Purpose: Capability Computing
  • User community: European Universities and Research Institutes, PRACE

Intel Linux Cluster "JURECA"
  • 1,884 SMT nodes, each with two 12-core Intel Haswell processors at 2.5 GHz
  • 150 NVIDIA K80 graphics processors
  • 281 TByte memory
  • Peak performance: 2,245 TFlop/s
  • Purpose: Capacity and Capability Computing
  • User community: European Universities, Research Institutes and Industry

Intel Xeon Phi Cluster "QPACE"
  • 352 Intel Xeon Phi 7210 processors
  • 17 TByte memory
  • Peak performance: 937 TFlop/s
  • Purpose: Capability Computing (QCD applications)
  • User community: SFB TR55
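
For the hybrid JURECA system, the listed peak combines CPU and GPU contributions. The sketch below decomposes it under assumed values that are not stated in the table: a 2.5 GHz Haswell clock with 16 double-precision FLOPs per cycle per core, and roughly 2.91 TFlop/s of double-precision peak per NVIDIA K80 board:

```python
# Rough decomposition of JURECA's peak into CPU and GPU contributions.
# Node, core, and GPU counts are taken from the table; the clock rate,
# FLOPs per cycle, and per-K80 peak are assumptions.
nodes, cores_per_node = 1_884, 24       # 2 x 12-core Haswell per node
gpu_boards, tflops_per_gpu = 150, 2.91  # K80 count from the table; per-board peak assumed

cpu_tflops = nodes * cores_per_node * 2.5e9 * 16 / 1e12
gpu_tflops = gpu_boards * tflops_per_gpu
print(f"CPU: {cpu_tflops:,.0f} TFlop/s  GPU: {gpu_tflops:,.1f} TFlop/s  "
      f"total: {cpu_tflops + gpu_tflops:,.0f} TFlop/s")  # total ~2,245
```
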
JSC's supercomputer "JUQUEEN", an IBM Blue Gene/Q system.
