
TETHYS - Tectonic High-Performance Simulator


Between 2006 and 2012 Munich Geophysics operated a 160-core AMD Opteron cluster with a theoretical peak performance of 0.8 TFLOPS. TETHYS served as a system for capacity simulation of geophysical models: it was intended to allow a high throughput of compute jobs that are large enough to require a high-performance system, but small enough to run on a departmental supercomputer without claiming the full resources of a tera- or peta-scale system at a computing centre. As such, TETHYS was mostly (around 80% of the time) used for parameter studies and, to a smaller extent, for pre-studies before running models on larger systems and for parallel code development. The cluster was designed for topical computing, i.e. its intended use was limited to a set of certain key applications. These included:

  • Simulation of mantle convection with our code TERRA
  • Global wave field simulation with SpecFEM3D GLOBE
  • Local wave field simulations with our in-house codes SeisSol and SES3D
  • Computation of filter operations on large data sets for image/movie generation and interactive 3D visualisation on our GeoWall

In our opinion, the main advantages of building a high-end cluster at the departmental level, compared, e.g., to parallel systems at regional computing centres, are three-fold:

  • Since the cluster was dedicated to a small set of key applications, we were able to tailor its hardware design optimally to these applications, in contrast to the multi-purpose systems available at computing centres, which must serve the needs of a broad set of very diverse programs. An overview of the key applications and a closer description of the design process and decisions was published in Oeser (2006).

  • The dedicated mode of operation also led to faster turn-around times, i.e. the time between submission and completion of a simulation run, allowing for real capacity computing.
  • Before porting a numerical model to a real supercomputer, such as the National Supercomputer HLRB II at the Leibniz Computing Center in Garching, preliminary studies to determine optimal parameters and algorithms are required. Due to scaling effects such pre-studies are only feasible on intermediate level systems, such as TETHYS.

During its six years of operation TETHYS was an indispensable tool for research in Computational Seismology and Geodynamics. From 2006 to 2012, 19 PhD theses were written in the Geodynamics and Seismology groups, all of which involved computer simulation and many of which profited from TETHYS. At the date of its decommissioning, the ISI Web of Knowledge listed 14 reviewed papers in renowned journals that cite Oeser (2006) to credit the fact that they include results from simulation runs on TETHYS.

TETHYS, the first generation of the Munich Tectonic High-Performance Simulator, was replaced in 2012 by TETHYS-2G, a new Intel Xeon cluster.

Technical Specification

total number of processors   160
processors per node          2
type of processors           AMD Opteron 250 (64-bit, single core)
clock speed                  2.4 GHz
L1 cache                     64/64 KB (data/instruction)
L2 cache                     1 MB (data + instruction)
local memory                 2 GB RAM (DDR1)
local storage                160 GB
network interface            1000BASE-T Gigabit Ethernet (2 ports)
commissioned                 February 2006
decommissioned               April 2012
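The quoted 0.8 TFLOPS figure can be checked against these specifications. A minimal sketch, assuming the commonly cited value of 2 double-precision floating-point operations per cycle for the single-core Opteron K8 generation (an assumption, not stated in the original):

```python
# Back-of-the-envelope check of the theoretical peak performance of TETHYS.
cores = 160            # total number of processors
clock_hz = 2.4e9       # clock speed, 2.4 GHz
flops_per_cycle = 2    # assumption: double-precision flops/cycle of the K8 core

peak_flops = cores * clock_hz * flops_per_cycle
print(f"theoretical peak: {peak_flops / 1e12:.2f} TFLOPS")
# -> theoretical peak: 0.77 TFLOPS, consistent with the quoted ~0.8 TFLOPS
```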

Topology & Interconnect

[Figure: Schematic of the TETHYS topology]

The compute nodes of TETHYS were arranged in four conceptual blocks interconnected via a hierarchical structure. Each block consisted of 20 nodes / 40 CPUs which could communicate directly via a 1 GBit cluster node switch (intra-block communication). The four 1 GBit cluster node switches were linked via a central 10 GBit cluster core switch for inter-block communication. Availability in the case of a failure of the central cluster core switch was ensured via a circular contingency connection between the cluster node switches.
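The two-level structure can be sketched as a simple routing classifier. The function names and node numbering below are illustrative, not part of the original cluster software; they only encode the 4 x 20-node layout described above:

```python
# Sketch of the two-level TETHYS topology: 4 blocks of 20 nodes (2 CPUs each).
NODES_PER_BLOCK = 20

def block_of(node_id: int) -> int:
    """Return the block (0-3) that a node with the given id belongs to."""
    return node_id // NODES_PER_BLOCK

def route(src: int, dst: int) -> str:
    """Classify the communication path between two compute nodes."""
    if block_of(src) == block_of(dst):
        return "intra-block via 1 GBit cluster node switch"
    return "inter-block via 10 GBit cluster core switch"

print(route(3, 17))   # nodes 3 and 17 share block 0
print(route(3, 42))   # node 42 sits in block 2, so traffic crosses the core switch
```

This mirrors the design trade-off of the cluster: traffic staying inside a block never touches the core switch, which is why moderate-communication applications run well on it.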

Due to the moderate communication requirements of our key applications, evaluation studies showed no need to invest in comparatively high-cost interconnect technologies such as InfiniBand or Myrinet. Instead, Gigabit Ethernet was chosen and the money saved was invested in additional compute nodes.


This first phase of TETHYS was financed jointly by the Free State of Bavaria and the German Ministry of Education and Research (BMBF) by means of the HBFG program. Some extra nodes were sponsored by the German Science Foundation (DFG) in the context of grant KA 2281/2-1.

We would also like to thank our industry partners Microstaxx GmbH and the High-Performance Group of Fujitsu-Siemens Computers for their support in building and maintaining the cluster.

by Marcus Mohr last modified 02. Apr 2012 10:49