The Leibniz Supercomputing Centre of the Bavarian Academy of Sciences and Humanities (Leibniz-Rechenzentrum, LRZ) provides comprehensive services to scientific and academic communities by:

  • providing general IT services to more than 100,000 university customers in Munich and for the Bavarian Academy of Sciences
  • running and managing the powerful communication infrastructure of the Munich Scientific Network (MWN)
  • acting as a competence centre for data communication networks
  • being a centre for large-scale archiving and backup, and by
  • providing High Performance Computing resources, training and support on the local, regional, national and international levels.

Research in HPC is carried out in collaboration with the distributed, statewide Competence Network for Technical and Scientific High Performance Computing in Bavaria (KONWIHR).


  • Leibniz Supercomputing Centre
  • Prof. Dr. Arndt Bode
  • Boltzmannstr. 1
  • 85748 Garching near Munich
  • +49 89 35831-8000

Compute servers currently operated by LRZ

IBM/Lenovo System x Cluster “SuperMUC Phase 1”
  • 18 thin node islands (IBM iDataPlex), 512 nodes each, 147,456 cores in total
  • 288 TByte memory, FDR10 InfiniBand
  • Peak performance: 3,185 TFlop/s
  • Purpose: Capability Computing
  • User community: German universities and research institutes, PRACE (Tier-0 system)

“SuperMUC Phase 1” fat node island
  • 205 nodes (IBM BladeCenter HX5), Westmere EX
  • 52 TByte memory, QDR InfiniBand
  • Peak performance: 78 TFlop/s
  • Purpose: Capability Computing

“SuperMUC Phase 1” accelerated nodes
  • 32 nodes with Intel Ivy Bridge EP and Intel Xeon Phi
  • 76 GByte memory, dual-rail FDR14 InfiniBand
  • Peak performance: 100 TFlop/s
  • Purpose: Prototype system

Lenovo/IBM NeXtScale Cluster “SuperMUC Phase 2”
  • 6 medium node islands, 512 nodes each, 86,016 cores in total
  • Haswell EP
  • 197 TByte memory, FDR14 InfiniBand
  • Peak performance: 3,580 TFlop/s
  • Purpose: Capability Computing
  • User community: German universities and research institutes, PRACE (Tier-0 system)

Lenovo/IBM NeXtScale Cluster “CooLMUC2”
  • 252 nodes, 7,056 cores
  • Haswell EP
  • 16.1 TByte memory, FDR14 InfiniBand
  • Peak performance: 270 TFlop/s
  • Purpose: Capability Computing
  • User community: Bavarian universities

Megware IB-Cluster "CoolMUC"
  • 178 nodes, 2,848 cores
  • AMD Magny-Cours
  • 2.8 TByte memory, QDR InfiniBand
  • Peak performance: 22.7 TFlop/s
  • Purpose: Capability Computing, PRACE prototype
  • User community: Bavarian universities

SGI Altix UltraViolet
  • 2 partitions, 2,080 cores
  • Intel Westmere EX
  • 6.0 TByte main memory
  • Peak performance: 20.0 TFlop/s
  • Purpose: Capability Computing
  • User community: Bavarian universities

Heterogeneous cluster
  • 510 nodes, 2,030 cores (Intel Xeon EM64T and AMD Opteron; 2-, 4-, 8-, 16-, 32-way nodes)
  • 4.7 TByte main memory
  • Peak performance: 13.2 TFlop/s
  • Purpose: Capability Computing
  • User community: Bavarian and Munich universities, LCG Grid

A detailed description can be found on LRZ’s web pages.

Picture of the Petascale system SuperMUC at the Leibniz Supercomputing Centre.


First German National Center

Building on a long tradition of supercomputing at the University of Stuttgart, HLRS (Höchstleistungsrechenzentrum Stuttgart) was founded in 1995 as the first German federal centre for High Performance Computing. HLRS serves researchers at universities and research laboratories in Germany and Europe, as well as their external and industrial partners, with high-end computing power for engineering and scientific applications.

Service for Industry

Services for industry are provided together with T-Systems, T-Systems sfr, and Porsche in the public-private joint venture hww (Höchstleistungsrechner für Wissenschaft und Wirtschaft). Through this cooperation, industry always has access to the most recent HPC technology.

Bundling Competencies

In order to bundle service resources in the state of Baden-Württemberg, HLRS has teamed up with the Steinbuch Centre for Computing of the Karlsruhe Institute of Technology. This collaboration has been implemented in the non-profit organization SICOS BW GmbH.

World Class Research

As one of the largest research centers for HPC, HLRS takes a leading role in research. Its participation in the German Excellence Initiative makes HLRS an outstanding place in the field.


  • Höchstleistungsrechenzentrum Stuttgart (HLRS),
    Universität Stuttgart
  • Prof. Dr.-Ing. Dr. h.c. Dr. h.c. Michael M. Resch
  • Nobelstraße 19
  • 70569 Stuttgart, Germany
  • +49 711 685-87269

Compute servers currently operated by HLRS

Cray XC40 "Hazel Hen"
  • 7,712 nodes, 185,088 cores
  • 1 PB memory
  • Peak performance: 7,420 TFlop/s
  • Purpose: Capability Computing
  • User community: German and European (PRACE) research organizations and industry

NEC Cluster (Laki, Laki2), a heterogeneous computing platform of two independent clusters
  • 911 nodes, 9,988 cores
  • 23 TB memory
  • Peak performance: 170 TFlop/s
  • Purpose: Capacity Computing
  • User community: German universities, research institutes and industry

Vector system
  • 64 compute nodes, 256 cores
  • 4 TB memory
  • Peak performance: 16 TFlop/s
  • Purpose: Vector Computing
  • User community: German universities, research institutes and industry

A detailed description can be found on HLRS’s web pages.

View of the HLRS Cray XC40 "Hazel Hen"


The Jülich Supercomputing Centre (JSC) at Forschungszentrum Jülich is committed to enabling scientists and engineers to explore some of the most complex grand challenges facing science and society. Its research is performed through collaborative infrastructures exploiting extreme-scale supercomputing and federated data services.

JSC’s tasks include:

  • Provision of supercomputer resources of the highest performance class for projects in science, research and industry in the fields of modelling and computer simulation, including their methods. Projects are selected by international peer-review procedures implemented by the John von Neumann Institute for Computing (NIC), GCS, and PRACE.
  • Supercomputer-oriented research and development in selected fields of physics and other natural sciences by research groups, and in technology, e.g. through co-design together with leading HPC companies.
  • Higher education of master’s and doctoral students, in cooperation with, e.g., the German Research School for Simulation Sciences.
  • Implementation of strategic support infrastructures, including community-oriented simulation laboratories and cross-sectional teams, e.g. on mathematical methods and algorithms and parallel performance tools, enabling the effective use of the supercomputer resources.


  • Jülich Supercomputing Centre (JSC)
    Forschungszentrum Jülich
  • Prof. Dr. Dr. Thomas Lippert
  • 52425 Jülich, Germany
  • +49 2461 61-6402

Compute servers currently operated by JSC

IBM Blue Gene/Q "JUQUEEN"
  • 28 racks, 28,672 nodes
  • 458,752 cores (IBM PowerPC® A2)
  • 448 TByte main memory
  • Peak performance: 5,872 TFlop/s
  • Purpose: Capability Computing
  • User community: European universities and research institutes, PRACE

Intel Linux Cluster "JURECA"
  • 1,884 SMT nodes, each with two 12-core Intel Haswell processors at 2.5 GHz
  • 150 graphics processors (NVIDIA K80)
  • 281 TByte memory
  • Peak performance: 2,245 TFlop/s
  • Purpose: Capacity and Capability Computing
  • User community: European universities, research institutes and industry

IBM Cell System “QPACE”
  • 1,024 PowerXCell 8i processors
  • 4 TByte memory
  • Peak performance: 100 TFlop/s
  • Purpose: Capability Computing for QCD applications
  • User community: SFB TR55, PRACE
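The JURECA figure can likewise be reproduced from the node specification given above, under two assumptions not stated in the table: Haswell cores deliver 16 double-precision FLOPs per cycle (AVX2 FMA), and an NVIDIA K80 peaks at roughly 2.91 TFlop/s double precision at boost clocks. A minimal sketch:

```python
# Back-of-the-envelope check of JURECA's listed peak (2,245 TFlop/s).
# Assumed, not from the table: 16 DP FLOPs per Haswell core per cycle
# (AVX2 FMA) and ~2.91 TFlop/s DP per NVIDIA K80 at boost clocks.

nodes = 1_884                # from the table above
sockets_per_node = 2
cores_per_socket = 12
clock_ghz = 2.5
flops_per_cycle = 16         # assumption: AVX2 FMA peak per core

cpu_tflops = nodes * sockets_per_node * cores_per_socket * clock_ghz * flops_per_cycle / 1000
gpu_tflops = 150 * 2.91      # assumption: K80 boost-clock DP peak

total = cpu_tflops + gpu_tflops
print(f"CPU {cpu_tflops:.0f} + GPU {gpu_tflops:.0f} = {total:.0f} TFlop/s")
```

The CPU partition contributes about 1,809 TFlop/s and the GPUs about 437 TFlop/s, which together recover the listed 2,245 TFlop/s.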
JSC's supercomputer "JUQUEEN", an IBM Blue Gene/Q system.