JOINT WMO TECHNICAL PROGRESS REPORT ON THE GLOBAL DATA PROCESSING AND
FORECASTING SYSTEM AND NUMERICAL WEATHER PREDICTION RESEARCH ACTIVITIES FOR 2013
Country: Germany
Centre: NMC Offenbach
1. Summary of highlights
The operational deterministic modelling suite of DWD consists of three
models, namely the global icosahedral-hexagonal grid point model GME (grid
spacing 20 km, i.e. 1,474,562 grid points/layer, 60 layers), the non-
hydrostatic regional model COSMO-EU (COSMO model Europe, grid spacing 7 km,
665x657 grid points/layer, 40 layers), and finally the convection-resolving
model COSMO-DE, covering Germany and its surroundings with a grid spacing
of 2.8 km, 421x461 grid points/layer and 50 layers.

The probabilistic ensemble prediction system on the convective scale,
called COSMO-DE-EPS, became operational with 20 EPS members on 22 May 2012.
It is based on COSMO-DE with a grid spacing of 2.8 km, 421x461 grid
points/layer and 50 layers. Four global models, namely GME (DWD), IFS
(ECMWF), GFS (NOAA-NCEP) and GSM (JMA) provide lateral boundary conditions
to intermediate 7-km COSMO models which in turn provide lateral boundary
conditions to COSMO-DE-EPS. To sample the PDF and estimate forecast
uncertainty, variations of the initial state and physical parameterizations
are used to generate additional EPS members. The forecast range of COSMO-
DE-EPS is 27 h with new forecasts every three hours.

The COSMO model (http://cosmo-model.org/) is used operationally at the
national meteorological services of Germany, Greece, Italy, Poland,
Romania, Russia and Switzerland, and at the regional meteorological service
in Bologna (Italy). The military weather service of Germany operates a
relocatable version of the COSMO model for worldwide applications. Recently
the Meteorological Service of Israel (IMS) became an applicant member of
COSMO. Six national meteorological services, namely Botswana Department of
Meteorological Services, INMET (Brazil), DHN (Brazil), Namibia
Meteorological Service, DGMAN (Oman) and NCMS (United Arab Emirates) as
well as the regional meteorological service of Catalunya (Spain) use the
COSMO model in the framework of an operational licence agreement including
a licence fee. National meteorological services in developing countries (e.g. Egypt,
Indonesia, Kenya, Mozambique, Nigeria, Philippines, Rwanda, Tanzania,
Vietnam) use the COSMO model free of charge.

For lateral boundary conditions, GME data are sent via the internet to the
COSMO model users up to four times per day. Each user receives only data
from those GME grid points (at the grid spacing of 20 km for all 60 model
layers plus all 7 soil layers) which correspond to the regional COSMO model
domain. Currently DWD is sending GME data to more than 40 COSMO model
users.
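How such a domain-based extraction can work is sketched below; the bounding
box, array shapes and the use of NumPy are illustrative assumptions, not
DWD's actual dissemination software.

  # Illustrative sketch (not DWD's dissemination code): keep only the GME
  # grid points that fall inside one user's regional COSMO domain.
  # Coordinates, field values and the bounding box are made-up examples.
  import numpy as np

  n_points = 100_000   # example; the real GME grid has 1,474,562 points/layer
  lon = np.random.uniform(-180.0, 180.0, n_points)
  lat = np.random.uniform(-90.0, 90.0, n_points)
  field = np.random.rand(n_points, 60 + 7)  # 60 model layers + 7 soil layers

  # Example regional COSMO domain (roughly central Europe) as a lat/lon box.
  lon_min, lon_max, lat_min, lat_max = -5.0, 30.0, 35.0, 60.0
  mask = (lon >= lon_min) & (lon <= lon_max) & (lat >= lat_min) & (lat <= lat_max)

  subset = field[mask]   # only these grid points are sent to this user
  print(subset.shape)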
The main improvements of DWD's modelling suite included:

For GME:
14/02/2013: Replacement of RTTOV07 by RTTOV10 for the assimilation of
satellite radiances in the 3D-Var.
14/02/2013: Introduction of an online bias correction scheme for aircraft
temperature measurements (a simple sketch of such an adaptive correction
follows after this list).
24/04/2013: Assimilation of radiance data from the instrument HIRS
(6 channels on Metop-A/-B, NOAA-17/-19). Assimilation of Metop-B data
(AMSU-A radiances, AMV winds and radio occultation).
25/09/2013: Extension of the forecast range of the 06 and 18 UTC forecasts
from 48 to 78 hours.
09/10/2013: Assimilation of humidity measurements from aircraft over North
America. Assimilation of additional wind profiler stations in Canada.
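As an illustration of what an online (adaptive) bias correction can look
like, the sketch below keeps a running, exponentially weighted mean of the
innovations per aircraft and subtracts it from new reports; the weighting
factor, aircraft identifier and values are illustrative assumptions, not
the scheme implemented in the 3D-Var.

  # Minimal sketch of an adaptive bias correction for aircraft temperatures.
  # The running bias per aircraft is an exponentially weighted mean of the
  # innovations (observation minus model background); parameters are examples.
  bias = {}  # running bias estimate per aircraft identifier

  def correct_temperature(aircraft_id, t_obs, t_background, weight=0.02):
      b = bias.get(aircraft_id, 0.0)
      b = (1.0 - weight) * b + weight * (t_obs - t_background)
      bias[aircraft_id] = b
      return t_obs - b   # bias-corrected temperature

  print(correct_temperature("D-ABCD", t_obs=251.3, t_background=250.8))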
For COSMO-EU:
16/01/2013: Improved fast wave solver with higher accuracy and stability in
regions of steep terrain.
24/04/2013: Introduction of a new shortwave albedo based on MODIS satellite
data over land.
25/09/2013: Extension of the forecast range of the 06 and 18 UTC forecasts
from 48 to 78 hours.
09/10/2013: Correction of the water loading in the buoyancy term. The
quality control of surface pressure observations was extended by a check
against the fields which provide the lateral boundary conditions, i.e. the
interpolated GME or IFS fields. This mainly addresses rare cases of
analysis failures in which the other checks could not reliably detect
increasingly large observation errors from a single buoy whose data were
presented to the nudging scheme for continuous assimilation at high
frequency (a simple sketch of such a check follows after this list).
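A minimal sketch of such a boundary-field check is given below; the
rejection threshold and the example values are illustrative assumptions,
not the operational settings.

  # Illustrative sketch: a surface pressure observation is rejected when it
  # departs too far from the interpolated driving-model (GME or IFS) field,
  # even if it still agrees with a possibly contaminated nudging analysis.
  def passes_boundary_check(p_obs_hpa, p_boundary_hpa, threshold_hpa=15.0):
      return abs(p_obs_hpa - p_boundary_hpa) <= threshold_hpa

  print(passes_boundary_check(973.0, 1008.5))   # False: likely a faulty buoy
  print(passes_boundary_check(1006.0, 1008.5))  # True: plausible observation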
For COSMO-DE:
16/01/2013: Improved fast wave solver with higher accuracy and stability in
regions of steep terrain.
06/03/2013: Extension of the forecast range from 21 to 27 hours.
24/04/2013: Introduction of a new shortwave albedo based on MODIS satellite
data over land.
09/10/2013: Correction of the water loading in the buoyancy term (see the
sketch after this list). Improved quality control of surface pressure
observations.
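The effect of water loading can be illustrated with the density (virtual)
temperature commonly used in buoyancy calculations; the exact formulation
inside the COSMO dynamical core may differ, so the sketch below is only a
generic illustration.

  # Generic illustration of water loading: condensed water (cloud, rain, ice)
  # adds mass without contributing to pressure, so it reduces buoyancy. This
  # is often expressed through a density temperature that subtracts the
  # condensate mixing ratios from the moisture contribution.
  def density_temperature(t, q_vapour, q_condensate):
      return t * (1.0 + 0.608 * q_vapour - q_condensate)

  # Example: 1 g/kg of cloud water partly offsets the effect of 8 g/kg vapour.
  print(density_temperature(t=285.0, q_vapour=0.008, q_condensate=0.001))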
For COSMO-DE-EPS:
06/03/2013: Extension of the forecast range from 21 to 27 hours.
11/12/2013: Extension of the operational ensemble products (probabilities,
quantiles, mean, spread, min, max) by additional thresholds and variables
(a sketch deriving such products from the member forecasts follows after
this list).
29/01/2014: Enlargement of the setup of model physics perturbations by
variation of the minimum diffusion coefficient for heat and momentum.
29/01/2014: Introduction of initial soil moisture perturbations derived
from differences between COSMO-EU and COSMO-DE soil moisture analyses.
29/01/2014: Extension of the forecast range from 21 to 27 hours for all
ensemble products (probabilities, quantiles, mean, spread, min, max).
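How the listed ensemble products relate to the 20 member forecasts can be
sketched as follows; the random stand-in data and the 5 mm/h threshold are
illustrative assumptions, not operational values.

  # Illustrative derivation of ensemble products from 20 COSMO-DE-EPS members
  # for a single field on the 421x461 grid (random numbers stand in for, e.g.,
  # hourly precipitation); the 5 mm/h threshold is only an example.
  import numpy as np

  members = np.random.gamma(shape=0.5, scale=2.0, size=(20, 421, 461))

  products = {
      "mean":        members.mean(axis=0),
      "spread":      members.std(axis=0),
      "min":         members.min(axis=0),
      "max":         members.max(axis=0),
      "q90":         np.percentile(members, 90, axis=0),
      "prob_gt_5mm": (members > 5.0).mean(axis=0),   # exceedance probability
  }
  print({name: field.shape for name, field in products.items()})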
2. Equipment in use
2.1 Main computers
2.1.1 Two NEC SX-8R Clusters
Each cluster:
Operating System NEC Super-UX 20.1
7 NEC SX-8R nodes (8 processors per node, 2.2 GHz, 35.2 GFlops/s peak
processor performance, 281.6 GFlops/s peak node performance)
1.97 TFlops/s peak system performance
64 GiB physical memory per node, complete system 448 GiB physical
memory
NEC Internode crossbar switch IXS (bandwidth 16 GiB/s bidirectional)
FC SAN attached global disk space (NEC GFS), see 2.1.4
Both NEC SX-8R clusters are used for climate modelling and research; one
of them was decommissioned at the end of August 2013.
2.1.2 Two NEC SX-9 Clusters
Each cluster:
Operating System NEC Super-UX 20.1
30 NEC SX-9 nodes (16 processors per node, 3.2 GHz, 102.4 GFlops/s peak
processor performance, 1638.4 GFlops/s peak node performance)
49.15 TFlops/s peak system performance
512 GiB physical memory per node, complete system 15 TiB physical
memory
NEC Internode crossbar switch IXS (bandwidth 128 GiB/s bidirectional)
FC SAN attached global disk space (NEC GFS), see 2.1.4
One NEC SX-9 cluster is used to run the operational weather forecasts;
the second one serves as the research and development system.
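The quoted peak figures follow from multiplying per-processor performance,
processors per node and node count, as the small cross-check below shows.

  # Cross-check of the quoted peak performance figures:
  # GFlops/s per processor x processors per node x nodes.
  def peak_tflops(gflops_per_cpu, cpus_per_node, nodes):
      return gflops_per_cpu * cpus_per_node * nodes / 1000.0

  print(peak_tflops(35.2, 8, 7))     # NEC SX-8R cluster: ~1.97 TFlops/s
  print(peak_tflops(102.4, 16, 30))  # NEC SX-9 cluster:  ~49.15 TFlops/s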
2.1.3 Two SUN X4600 Clusters
Each cluster:
Operating System SuSE Linux SLES 10
15 SUN X4600 nodes (8 AMD Opteron quad core CPUs per node, 2.3 GHz,
36.8 GFlops/s
peak processor performance, 294.4 GFlops/s peak node performance)
4.4 TFlops/s peak system performance
128 GiB physical memory per node, complete system 1.875 TiB physical
memory
Voltaire Infiniband Interconnect for multinode applications (bandwidth
10 GBit/s bidirectional)
Network connectivity 10 Gbit Ethernet
FC SAN attached global disk space (NEC GFS), see 2.1.4
One SUN X4600 cluster is used to run operational tasks (pre-/post-processing,
special product applications), the other one research and development tasks.
2.1.4 NEC Global Disk Space
Three storage clusters: 51 TiB + 240 TiB + 360 TiB
SAN based on 4 GBit/s FC-AL technology
4 GiB/s sustained aggregate performance
Software: NEC global filesystem GFS-II
Hardware components: NEC NV7300G High redundancy metadata server, NEC
Storage D3-10
The three storage clusters are accessible from the systems in 2.1.1, 2.1.2
and 2.1.3.
2.1.5 Three SGI Altix 4700 systems
SGI Altix 4700 systems are used as data handling systems for meteorological
data.
Two redundancy clusters SGI_1/2, each consisting of 2 SGI Altix 4700
systems for operational tasks and research/development, each with:
Operating System SuSE Linux SLES 10
96 Intel Itanium dual core processors 1.6 GHz
1104 GiB physical memory
Network connectivity 10 Gbit Ethernet
680 TiB (SATA) and 30 TiB (SAS) disk space on redundancy cluster SGI_1
for meteorological data
Backup System SGI_B: one SGI Altix 4700 for operational tasks with
Operating System SuSE Linux SLES 10
24 Intel Itanium dual core processors 1.6 GHz
288 GiB physical memory
Network connectivity 10 Gbit Ethernet
70 TiB (SATA) and 10 TiB (SAS) disk space for meteorological data
2.1.6 IBM System x3650 Server
Operating System RedHat RHEL5
9 IBM System x3650 M2 (2 quadcore processors, 2.8 GHz)
24 GB of physical memory each
480 TB of disk space for HPSS archives
50 Archives (currently 14.7 PB)
connected to 2 StorageTek Tape Libraries via SAN
This highly available cluster is used for HSM-based archiving of
meteorological data and forecasts.
2.1.7 STK SL8500 Tape Library
Attached are 60 Oracle STK FC tape drives:
20 x T10000B (1 TB, 120 MB/s)
40 x T10000C (5 TB, 240 MB/s)
2.2 Networks
The main computers are interconnected via Gigabit Ethernet (Etherchannel)
and connected to the LAN via Fast Ethernet.
2.3 Special systems
2.3.1 RTH Offenbach Telecommunication systems
The Message Switching System (MSS) in Offenbach acts as RTH on the MTN
within the WMO GTS. It is called Weather Information System Offenbach
(WISO) and is based on