Météo-France
Proper scoring rules are an essential tool to assess the predictive performance of probabilistic forecasts. However, propriety alone does not ensure an informative characterization of predictive performance, and it is recommended to compare forecasts using multiple scoring rules. With that in mind, interpretable scoring rules providing complementary information are needed. We formalize a framework based on aggregation and transformation to build interpretable multivariate proper scoring rules. Aggregation-and-transformation-based scoring rules can target specific features of the probabilistic forecasts, which improves the characterization of predictive performance. This framework is illustrated through examples taken from the literature and studied in numerical experiments showcasing its benefits. In particular, it is shown that it can help bridge the gap between proper scoring rules and spatial verification tools.
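As a minimal sketch of the aggregation-and-transformation idea (the spatial-mean aggregation, the threshold transformation, and the ensemble CRPS estimator below are illustrative choices of ours, not the paper's):

```python
import numpy as np

def crps_ensemble(members, obs):
    """Ensemble CRPS estimator (energy form):
    mean |X - y| - 0.5 * mean |X - X'|."""
    members = np.asarray(members, dtype=float)
    term1 = np.mean(np.abs(members - obs))
    term2 = 0.5 * np.mean(np.abs(members[:, None] - members[None, :]))
    return term1 - term2

rng = np.random.default_rng(0)
ens = rng.normal(size=(20, 16, 16))   # 20 members on a 16x16 grid
obs = rng.normal(size=(16, 16))

# Aggregation: score the spatial mean, targeting the large-scale signal.
score_agg = crps_ensemble(ens.mean(axis=(1, 2)), obs.mean())

# Transformation: score the fraction of grid points exceeding a threshold,
# targeting occurrence of (here, arbitrary) high values.
thr = 1.0
score_thr = crps_ensemble((ens > thr).mean(axis=(1, 2)), (obs > thr).mean())
print(score_agg, score_thr)
```

Each choice of aggregation or transformation yields a proper scoring rule focused on one feature of the forecast, which is the complementarity the abstract refers to.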
Atmospheric pollution regulations have emerged as a dominant obstacle to prescribed burns. Thus, forecasting the pollution caused by wildland fires has acquired high importance. WRF coupled with SFIRE models wildland fire spread in a two-way interaction with the atmosphere. The surface heat flux from the fire causes strong updrafts, which in turn change the winds and affect the fire spread. Fire emissions, estimated from the burning organic matter, are inserted at every time step into WRF-Chem tracers at the lowest atmospheric layer. The buoyancy caused by the fire then naturally simulates plume dynamics, and the chemical transport in WRF-Chem provides a forecast of the pollution spread. We discuss the choice of wood-burning models and compatible chemical transport models in WRF-Chem, and demonstrate the results on case studies.
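Schematically, the emission-insertion step amounts to converting the fuel consumed per time step into a lowest-layer tracer increment; a simplified sketch (the emission factor, air mass, and grid values below are made-up illustration values, not WRF-Chem/SFIRE code):

```python
import numpy as np

emission_factor = 0.006  # kg tracer per kg of fuel burned (assumed value)
cell_air_mass = 2.0e6    # kg of air in a cell's lowest layer (assumed value)

def inject_fire_emissions(tracer_lowest, burned_fuel_rate, dt):
    """Add fire emissions to the lowest-layer tracer mixing ratio.

    tracer_lowest    : tracer mixing ratio (kg/kg) in the lowest layer
    burned_fuel_rate : fuel consumption per cell (kg/s) from the fire model
    dt               : model time step (s)
    """
    emitted_mass = emission_factor * burned_fuel_rate * dt  # kg per cell
    return tracer_lowest + emitted_mass / cell_air_mass     # updated kg/kg

tracer = np.zeros((10, 10))
fuel_rate = np.zeros((10, 10)); fuel_rate[4:6, 4:6] = 0.5   # burning cells
tracer = inject_fire_emissions(tracer, fuel_rate, dt=20.0)
```

The fire-induced buoyancy then lifts this near-surface tracer, so plume rise emerges from the resolved dynamics rather than from a separate plume-rise parameterization.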
DRAIN is a deep learning model developed by LATMOS-IPSL that retrieves rain rates from GPM passive microwave radiometer data by processing brightness temperatures as images using a U-Net architecture. It performs comparably to or better than the operational GPM GPROF algorithm in terms of Root Mean Square Error and False Alarm Ratio, while also providing 99 quantiles of precipitation for uncertainty estimation.
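Quantile outputs such as DRAIN's 99 precipitation quantiles are typically trained with the pinball (quantile) loss; a minimal sketch (the loss form is standard, but the training setup here is our assumption, not the DRAIN implementation):

```python
import numpy as np

def pinball_loss(y_true, y_pred, tau):
    """Pinball loss for quantile level tau in (0, 1)."""
    diff = y_true - y_pred
    return np.mean(np.maximum(tau * diff, (tau - 1.0) * diff))

# 99 quantile levels, matching DRAIN's probabilistic output.
taus = np.arange(0.01, 1.0, 0.01)
y_true = np.array([1.2, 0.0, 3.4])        # observed rain rates (toy values)
y_pred = np.full_like(y_true, 1.0)        # one predicted quantile (toy)
total = sum(pinball_loss(y_true, y_pred, t) for t in taus)
```

Minimizing this loss per quantile level drives each output toward the corresponding conditional quantile, which is what makes the 99-quantile output usable for uncertainty estimation.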
Data assimilation involves estimating the state of a system by combining observations from various sources with a background estimate of the state. The weights given to the observations and background state depend on their specified error covariance matrices. Observation errors are often assumed to be uncorrelated, even though this assumption is inaccurate for many modern datasets such as those from satellite observing systems. As methods allowing for a more realistic representation of observation-error correlations are emerging, our aim in this article is to provide insight into their expected impact in data assimilation. First, we use a simple idealised system to analyse the effect of observation-error correlations on the spectral characteristics of the solution. Next, we assess the relevance of these results in a more realistic setting in which simulated along-track (nadir) altimeter observations with correlated errors are assimilated in a global ocean model using a three-dimensional variational assimilation (3D-Var) method. Correlated observation errors are modelled in the 3D-Var system using a diffusion operator. When the correlation length scale of observation error is small compared to that of background error, inflating the observation-error variances can mitigate most of the negative effects from neglecting the observation-error correlations. Accounting for observation-error correlations in this situation still outperforms variance inflation, since it allows small-scale information in the observations to be more effectively extracted and does not affect the convergence of the minimization. Conversely, when the correlation length scale of observation error is large compared to that of background error, the effect of observation-error correlations cannot be properly approximated with variance inflation. However, the correlation model needs to be constructed carefully to ensure the minimization problem is adequately conditioned so that a robust solution can be obtained. Practical ways to achieve this are discussed.
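For reference, the standard 3D-Var cost function underlying this study, with the observation-error covariance split into variances and correlations (the factorization and notation below are the usual ones, not taken from the paper):

$$
J(\mathbf{x}) = \tfrac{1}{2}\,(\mathbf{x}-\mathbf{x}^{b})^{\mathrm{T}}\mathbf{B}^{-1}(\mathbf{x}-\mathbf{x}^{b})
+ \tfrac{1}{2}\,\bigl(\mathbf{y}-H(\mathbf{x})\bigr)^{\mathrm{T}}\mathbf{R}^{-1}\bigl(\mathbf{y}-H(\mathbf{x})\bigr),
\qquad
\mathbf{R} = \boldsymbol{\Sigma}\,\mathbf{C}\,\boldsymbol{\Sigma},
$$

where $\mathbf{B}$ is the background-error covariance, $\boldsymbol{\Sigma}$ is the diagonal matrix of observation-error standard deviations, and $\mathbf{C}$ is the observation-error correlation matrix, here applied through a diffusion operator. Variance inflation corresponds to keeping $\mathbf{C} = \mathbf{I}$ while enlarging $\boldsymbol{\Sigma}$.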
In this study, we improve a neural network (NN) parameterization of deep convection in the global atmosphere model ARP-GEM. To account for the sporadic nature of convection, we develop an NN parameterization that includes a triggering mechanism detecting whether deep convection is active within a grid cell. This new data-driven parameterization outperforms the existing NN parameterization in the present climate when replacing the original deep convection scheme of ARP-GEM. Online simulations with the NN parameterization run without stability issues. This NN parameterization is then evaluated online in a warmer climate. We confirm that using relative humidity instead of specific total humidity as input to the NN (trained with present-climate data) improves performance and generalization in a warmer climate. Finally, we train the NN parameterization with data from a warmer climate; this configuration gives similar results when used in simulations of present or warmer climates.
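A minimal sketch of a trigger-plus-regression architecture of this kind (the layer sizes, input/output dimensions, and hard-gating choice are our assumptions, not the ARP-GEM configuration):

```python
import torch
import torch.nn as nn

class TriggeredConvectionNet(nn.Module):
    """A classifier head decides whether deep convection is active in a
    column; a regression head predicts convective tendencies where it is."""
    def __init__(self, n_in, n_out, hidden=128):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(n_in, hidden), nn.ReLU(),
                                   nn.Linear(hidden, hidden), nn.ReLU())
        self.trigger = nn.Linear(hidden, 1)       # logit: convection active?
        self.tendency = nn.Linear(hidden, n_out)  # convective tendencies

    def forward(self, x):
        h = self.trunk(x)
        p_active = torch.sigmoid(self.trigger(h))
        # Hard gate at inference: zero tendencies in inactive columns
        # (training would use the probability and a classification loss).
        gate = (p_active > 0.5).float()
        return gate * self.tendency(h), p_active

model = TriggeredConvectionNet(n_in=60, n_out=40)   # toy column dimensions
tendencies, p = model(torch.randn(8, 60))
```

Gating the regression output this way lets the network represent the on/off character of convection explicitly instead of forcing the regressor to learn near-zero tendencies in quiescent columns.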
Cloud cover is crucial information for many applications, such as planning land-observation missions from space. It remains, however, a challenging variable to forecast, and Numerical Weather Prediction (NWP) models suffer from significant biases, justifying the use of statistical post-processing techniques. In this study, cloud cover from ARPEGE (the Météo-France global NWP model) is post-processed using a convolutional neural network (CNN). CNNs are the most popular machine learning tools for images and, in our case, allow the integration of the spatial information contained in NWP outputs. We use a gridded cloud cover product derived from satellite observations over Europe as ground truth, and the predictors are spatial fields of various variables produced by ARPEGE at the corresponding lead time. We show that a simple U-Net architecture produces significant improvements over Europe. Moreover, the U-Net outperforms more traditional machine learning methods used operationally, such as random forests and logistic quantile regression. We introduce a predictor-weighting layer prior to the standard U-Net architecture, which produces a ranking of predictors by importance and facilitates the interpretation of the results. With N predictors, only N additional weights are trained, which does not impact the computational time: a major advantage over traditional ranking methods (permutation importance, sequential selection, etc.).
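A sketch of the predictor-weighting idea: one learnable scalar per input channel, multiplied into the field before the U-Net, so the trained magnitudes rank the predictors (the implementation details below are our assumptions, not the paper's code):

```python
import torch
import torch.nn as nn

class PredictorWeighting(nn.Module):
    """One trainable weight per predictor channel, applied before the U-Net.
    After training, |weight| gives a ranking of predictor importance."""
    def __init__(self, n_predictors):
        super().__init__()
        self.weights = nn.Parameter(torch.ones(n_predictors))

    def forward(self, x):                 # x: (batch, n_predictors, H, W)
        return x * self.weights.view(1, -1, 1, 1)

weighting = PredictorWeighting(n_predictors=12)
x = torch.randn(4, 12, 64, 64)            # 12 hypothetical ARPEGE fields
weighted = weighting(x)                   # fed to the U-Net downstream
ranking = weighting.weights.detach().abs().argsort(descending=True)
```

Because the layer adds only N scalar parameters for N predictors, the ranking comes essentially for free within a single training run, unlike permutation importance or sequential selection, which require many retrainings or re-evaluations.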
This article describes the third law of thermodynamics. This law is often poorly understood and frequently decried, or even considered optional and irrelevant for describing weather and climate phenomena. This view, however, is inaccurate and contrary to scientific fact. A rather exhaustive historical study is proposed here in order to better understand, in a forthcoming article, why the third law can be of interest for the atmospheric sciences.
The available enthalpy is an early form of the modern thermodynamic concept of exergy, which is the generic name for the amount of work obtainable when some matter is brought to a state of equilibrium with its surroundings by means of reversible processes. It is shown in this paper that a study of the hydrodynamic properties of available enthalpy leads to a generalization of the global meteorological available energies previously introduced by Lorenz, Dutton, and Pearce. A local energy cycle is derived without approximation. Moreover, neither static instabilities nor topography prevent this theory from having practical applications. The concept of available enthalpy is also presented in terms of the potential change in total entropy. Using the hydrostatic assumption, limited-area energetics is then rigorously defined, including new boundary fluxes and new energy components. This innovative approach is especially suitable for the study of energy conversions between isobaric layers of an open, limited atmospheric domain. Numerical evaluations of various energy components are presented for a hemispheric field of zonal-average temperature. It is further shown that this new energetic scheme realizes a hierarchical partition of the components, such that the smallest of the available-enthalpy reservoirs are of nearly the same magnitude as the kinetic energy. This is in fact the fundamental property that led Margules to define the early concept of available kinetic energy in meteorology.
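For context, the available enthalpy of a parcel with specific enthalpy $h$ and entropy $s$, relative to a reference state $(h_r, T_r, s_r)$, takes the standard flow-exergy form (notation here follows the usual exergy literature, not necessarily the paper's):

$$
a_h \;=\; (h - h_r) \;-\; T_r\,(s - s_r),
$$

i.e. the work recoverable by reversibly bringing the parcel into equilibrium with the reference state, which is the quantity whose hydrodynamics the paper develops.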
This is the second part of a series of two articles focused on the development and evaluation of the ARP-GEM1 global atmosphere model. The first paper introduced the model's new physics and speedup improvements. In this second part, we evaluate ARP-GEM1 through a set of 30-year prescribed sea-surface-temperature simulations at 55, 25, 12.6, and 6.3 km resolutions. The model demonstrates reliability in representing global climate metrics, comparable to the top-performing CMIP6 models, while maintaining high computational efficiency at all resolutions. These simulations further demonstrate the feasibility of O(10)-km climate simulations, and highlight their added value, particularly for capturing phenomena such as cyclones. Ultimately, these exploratory simulations should be considered an intermediate step toward the development and tuning of even higher-resolution, convection-permitting kilometer-scale configurations.
Rainfall ensemble forecasts have to be skillful for both low precipitation and extreme events. We present statistical post-processing methods based on Quantile Regression Forests (QRF) and Gradient Forests (GF) with a parametric extension for heavy-tailed distributions. Our goal is to improve ensemble quality for all types of precipitation events, heavy-tailed ones included, subject to good overall performance. The proposed hybrid methods are applied to daily 51-h forecasts of 6-h accumulated precipitation from 2012 to 2015 over France, using the Météo-France ensemble prediction system PEARP. They provide calibrated predictive distributions and compete favourably with state-of-the-art methods such as the analog method and Ensemble Model Output Statistics. In particular, hybrid forest-based procedures appear to bring added value to the forecast of heavy rainfall.
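A minimal sketch of a parametric tail extension of this type: empirical (forest-style) quantiles below a threshold, a fitted generalized Pareto distribution (GPD) above it (the threshold level and blending rule are our assumptions, not the paper's method):

```python
import numpy as np
from scipy.stats import genpareto

def tail_extended_quantiles(samples, taus, tau_u=0.9):
    """Empirical quantiles up to level tau_u; GPD-based quantiles beyond."""
    samples = np.sort(np.asarray(samples, dtype=float))
    u = np.quantile(samples, tau_u)                   # tail threshold
    excesses = samples[samples > u] - u
    xi, _, sigma = genpareto.fit(excesses, floc=0.0)  # shape, loc, scale
    q = np.empty(len(taus))
    for i, tau in enumerate(taus):
        if tau <= tau_u:
            q[i] = np.quantile(samples, tau)          # empirical bulk
        else:
            p = (tau - tau_u) / (1.0 - tau_u)         # conditional level
            q[i] = u + genpareto.ppf(p, xi, loc=0.0, scale=sigma)
    return q

rng = np.random.default_rng(1)
rain = rng.gamma(0.6, 4.0, size=2000)                 # skewed toy "rainfall"
print(tail_extended_quantiles(rain, [0.5, 0.9, 0.99, 0.999]))
```

The design rationale is that forests estimate the bulk of the conditional distribution well but cannot extrapolate beyond the training sample, whereas a GPD tail, justified by extreme value theory, can.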
Verifying probabilistic forecasts for extreme events is a highly active research area because popular media and public opinion naturally focus on extreme events, and biased conclusions are readily drawn. In this context, classical verification methods tailored to extreme events, such as thresholded and weighted scoring rules, have undesirable properties that cannot be mitigated, and the well-known continuous ranked probability score (CRPS) is no exception. In this paper, we define a formal framework for assessing the behavior of forecast evaluation procedures with respect to extreme events, which we use to demonstrate that assessment based on the expectation of a proper score is not suitable for extremes. Instead, we propose studying the properties of the CRPS as a random variable, using extreme value theory, to address extreme-event verification. An index is introduced to compare calibrated forecasts, summarizing the ability of probabilistic forecasts to predict extremes. The strengths and limitations of this method are discussed using both theoretical arguments and simulations.
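For reference, the CRPS of a predictive distribution $F$ for an observation $y$ is

$$
\mathrm{CRPS}(F, y) \;=\; \int_{-\infty}^{+\infty} \bigl(F(x) - \mathbb{1}\{y \le x\}\bigr)^{2}\,\mathrm{d}x,
$$

and the paper's proposal amounts to studying the distribution of this quantity as $y$ varies, via extreme value theory, rather than summarizing it by its expectation alone.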
Accurate precipitation forecasts have a high socio-economic value due to their role in decision-making in various fields such as transport networks and farming. We propose a global statistical postprocessing method for grid-based precipitation ensemble forecasts. This U-Net-based distributional regression method predicts marginal distributions in the form of parametric distributions whose parameters are inferred by scoring-rule minimization. Distributional regression U-Nets are compared to state-of-the-art postprocessing methods for daily 21-h forecasts of 3-h accumulated precipitation over the South of France. Training data comes from the Météo-France weather model AROME-EPS and spans 3 years. A practical challenge appears when consistent data or reforecasts are not available. Distributional regression U-Nets compete favorably with the raw ensemble. In terms of continuous ranked probability score, they reach a performance comparable to quantile regression forests (QRF). However, they are unable to provide calibrated forecasts in areas associated with high climatological precipitation. In terms of predictive power for heavy precipitation events, they outperform both QRF and semi-parametric QRF with tail extensions.
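Scoring-rule minimization for a parametric output can be written directly as a differentiable training loss. As an illustration (the Gaussian family is our choice for simplicity; a parametric family suited to precipitation would differ), the closed-form CRPS of a normal distribution:

```python
import numpy as np
from scipy.stats import norm

def crps_gaussian(mu, sigma, y):
    """Closed-form CRPS for N(mu, sigma^2) against observation y."""
    z = (y - mu) / sigma
    return sigma * (z * (2.0 * norm.cdf(z) - 1.0)
                    + 2.0 * norm.pdf(z) - 1.0 / np.sqrt(np.pi))

# A U-Net head would output distribution parameters per grid point;
# the training loss is the mean CRPS over the batch.
loss = np.mean(crps_gaussian(np.array([1.0, 2.0]),   # predicted mu
                             np.array([0.5, 1.0]),   # predicted sigma
                             np.array([1.2, 0.4])))  # observations
```

Because the CRPS is proper, minimizing this loss encourages both calibration and sharpness of the predicted marginal distributions.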
Heat pumps (HPs) have emerged as a key technology for reducing energy use and greenhouse gas emissions. This study evaluates the potential switch to air-to-air HPs (AAHPs) in Toulouse, France, where conventional space heating is split between electric and gas sources. In this context, we find that AAHPs reduce heating energy consumption by 57% to 76%, with electric heating energy consumption decreasing by 6% to 47%, resulting in virtually no local heating-related CO2 emissions. We observe a slight reduction in near-surface air temperature of up to 0.5 °C during cold spells, attributable to the reduced sensible heat flux, which is unlikely to compromise the operational efficiency of AAHPs. While Toulouse's heating energy mix facilitates large energy savings, electric energy consumption may increase in cities where gas or other fossil fuel sources prevail. Furthermore, as AAHP efficiency varies with internal and external conditions, their impact on the electrical grid is more complex than that of conventional heating systems. The results underscore the importance of matching heating system transitions with sustainable electricity generation to maximize environmental benefits. The study highlights the intricate balance between technological advancements in heating and their broader environmental and policy implications, offering key insights for urban energy policy and sustainability efforts.
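The order of magnitude of these savings follows from the coefficient of performance (COP). Assuming, purely for illustration, a seasonal COP of about 3 (our assumption, not a figure from the study), replacing resistive electric heating that delivers the same heat $Q$ gives

$$
\frac{E_{\mathrm{AAHP}}}{E_{\mathrm{resistive}}} \;=\; \frac{Q/\mathrm{COP}}{Q} \;=\; \frac{1}{3} \;\approx\; 33\%,
$$

i.e. a reduction of roughly 67%, which sits within the reported 57% to 76% range; the spread reflects the heating mix and the dependence of the COP on indoor and outdoor conditions.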
This is the first part of a series of two articles describing the ARP-GEM global atmosphere model version 1 and its evaluation in simulations from 55 km to 6 km resolutions. This article provides a complete description of ARP-GEM1, focusing on its new physical parameterizations and acceleration factors aimed at improving computational efficiency. ARP-GEM1 is approximately 15 times faster than its base model version, ARPEGE-Climat v6.3, while maintaining accurate simulations and enhancing model performance, as shown in Part II. This significant acceleration results from a combination of optimizations, rather than a single factor, with each factor's contribution quantified. Additionally, a detailed decomposition of the model's speedup per component is presented. Streamlining the model reduces computational costs, enhances transparency and portability, and maximizes the benefits of future advanced computing technologies. The results presented here suggest that kilometer-scale global climate simulations should become feasible in the near future.
Recent research in data assimilation has led to the introduction of the parametric Kalman filter (PKF): an implementation of the Kalman filter in which the covariance matrices are approximated by a parameterized covariance model. In the PKF, the dynamics of the covariance during the forecast step relies on the prediction of the covariance parameters. The design of the parameter dynamics is therefore crucial, yet it can be tedious to derive by hand. This contribution introduces a Python package, SymPKF, able to compute the PKF dynamics for univariate statistics when the covariance model is parameterized by the variance and the local anisotropy of the correlations. The ability of SymPKF to produce the PKF dynamics is shown for a nonlinear diffusive advection (the Burgers equation) over a 1D domain and for linear advection over a 2D domain. The computation of the PKF dynamics is performed at a symbolic level, and an automatic code generator is also introduced to perform numerical simulations. A final multivariate example illustrates the potential of SymPKF to go beyond the univariate case.
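A toy illustration of the symbolic workflow (a sketch with sympy, not the actual SymPKF API): for linear advection, the forecast-error variance is simply transported, and the symbolic trend can be turned into numerical code automatically:

```python
import sympy as sp
import numpy as np

t, x, c = sp.symbols("t x c")
V = sp.Function("V")(t, x)   # forecast-error variance field

# If the error e obeys de/dt + c de/dx = 0 (constant c), then the variance
# V = E[e^2] obeys the same transport equation, a classical PKF result:
variance_trend = -c * sp.Derivative(V, x)
print(sp.Eq(sp.Derivative(V, t), variance_trend))

# "Code generation" in miniature: lambdify the symbolic trend so it can be
# fed to a numerical time scheme (SymPKF generates full simulation code).
dVdx = sp.Symbol("dVdx")
trend_fn = sp.lambdify((c, dVdx), -c * dVdx, "numpy")
print(trend_fn(1.0, np.array([0.1, -0.2, 0.0])))
```

For nonlinear models such as the Burgers equation, the symbolic derivation additionally produces terms coupling the variance and anisotropy parameters, which is precisely the tedious hand computation the package automates.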
In this study, we present the integration of a neural network-based parameterization into the global atmospheric model ARP-GEM1, leveraging the Python interface of the OASIS coupler. This approach facilitates the exchange of fields between the Fortran-based ARP-GEM1 model and a Python component responsible for neural network inference. As a proof of concept, we trained a neural network to emulate the deep convection parameterization of ARP-GEM1. Using the flexible Fortran/Python interface, we successfully replaced ARP-GEM1's deep convection scheme with the neural network emulator. To assess the performance of the neural network deep convection scheme, we ran a 5-year ARP-GEM1 simulation using the emulator. The evaluation of averaged fields showed good agreement with the output of an ARP-GEM1 simulation using the physics-based deep convection scheme. The Python component was deployed on a partition separate from the general circulation model, using GPUs to increase the inference speed of the neural network.
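Schematically, the Python side of such a coupling looks like the sketch below. It is modelled on the public pyOASIS (OASIS3-MCT Python) examples, but the exact class names and signatures are our best-guess assumptions, as are the grid size, time step, and stand-in network; the actual ARP-GEM1 setup is not shown. It also only runs inside a full OASIS coupling with a peer component and namcouple configuration.

```python
import numpy as np
import pyoasis                      # OASIS3-MCT Python bindings (assumed API)
from pyoasis import OASIS

n_points, n_steps, dt = 1024, 4, 900.0   # hypothetical partition/time values
neural_net = lambda cols: 0.0 * cols     # stand-in for the trained emulator

comp = pyoasis.Component("nn_inference")           # register this executable
part = pyoasis.SerialPartition(n_points)           # grid-point partition
field_in = pyoasis.Var("NN_INPUT", part, OASIS.IN)
field_out = pyoasis.Var("NN_OUTPUT", part, OASIS.OUT)
comp.enddef()                                      # close definition phase

for step in range(n_steps):
    state = np.zeros(n_points)
    field_in.get(int(step * dt), state)            # receive model columns
    field_out.put(int(step * dt), neural_net(state))  # return tendencies
del comp                                           # terminate the coupling
```

Running the inference component on its own (GPU) partition decouples the deep-learning stack from the Fortran build, at the cost of per-step field exchanges through the coupler.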
We introduce a theoretical framework that accurately describes resonances beyond the reach of both multiplet-based and density-functional-theory (DFT)-based codes. When a resonance acquires strong continuum character, multiplet approaches struggle with the exponential growth of the Hilbert space driven by the increasing number of relevant orbitals, if they extend beyond a few atomic and ligand orbitals. Conversely, existing ab initio codes, supplemented by diagrammatic techniques, remain largely confined to the Bethe-Salpeter equation, which tracks only two-particle excitations. However, many systems of interest, particularly those containing open d or f shells, require the propagation of an N-particle system, yielding a richer spectral landscape dominated by significant continuum effects. Here, we propose a theoretical framework, together with its numerical implementation, that bridges this gap and enables the rigorous exploration of such resonances.
The condensation probability function defined in the papers of X.R. Wang is criticized on several grounds. The modified latent heat and potential temperature are plotted and compared with the usual atmospheric formulations.
Saturn's moon Titan undergoes a long annual cycle of 29.45 Earth years. Titan's northern winter and spring were investigated in detail by the Cassini-Huygens spacecraft (2004-2017), but the northern summer season remains sparsely studied. Here we present new observations from the James Webb Space Telescope (JWST) and Keck II telescope made in 2022 and 2023 during Titan's late northern summer. Using JWST's mid-infrared instrument, we spectroscopically detected the methyl radical, the primary product of methane break-up and key to the formation of ethane and heavier molecules. Using the near-infrared spectrograph onboard JWST, we detected several non-local thermodynamic equilibrium CO and CO2 emission bands, which allowed us to measure these species over a wide altitude range. Lastly, using the near-infrared camera onboard JWST and Keck II, we imaged northern hemisphere tropospheric clouds evolving in altitude, which provided new insights and constraints on seasonal convection patterns. These observations pave the way for new observations and modelling of Titan's climate and meteorology as it progresses through the northern fall equinox, when its atmosphere is expected to show notable seasonal changes.