A Study on Future Projections of Precipitation Characteristics around Japan in Early Summer Combining GPM DPR Observation and CMIP5 Large-Scale Environments

2019 · Vol 32 (16) · pp. 5251-5274
Author(s): Chie Yokoyama, Yukari N. Takayabu, Osamu Arakawa, Tomoaki Ose

Abstract. This study estimates future changes in early summer precipitation characteristics around Japan from changes in the large-scale environment, by combining Global Precipitation Measurement (GPM) precipitation radar observations with large-scale projections from phase 5 of the Coupled Model Intercomparison Project (CMIP5) climate models. Analyzing satellite-based data, we first relate precipitation in three types of rain events (small, organized, and midlatitude), which are identified via their characteristics, to the large-scale environment. Two environmental fields are chosen to determine the large-scale conditions of the precipitation: the sea surface temperature and the midlevel large-scale vertical velocity. The former is related to lower-tropospheric thermal instability, while the latter affects precipitation via moistening/drying of the midtroposphere. Consequently, favorable conditions differ between the three types in terms of these two environmental fields. Using these precipitation–environment relationships, we then reconstruct the precipitation distributions for each type with reference to the two environmental indices in climate models for the present and future climates. Future changes in the reconstructed precipitation are found to vary widely between the three types in association with the large-scale environment. In more than 90% of models, the region affected by organized-type precipitation will expand northward, leading to a substantial increase in this type of precipitation near Japan along the Sea of Japan, and in northern and eastern Japan on the Pacific side, where its present amount is relatively small. This result suggests an elevated risk of heavy rainfall in those regions, because the maximum precipitation intensity is higher in organized-type precipitation than in the other two types.
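As a rough illustration of the reconstruction step described above, the sketch below bins observed precipitation by the two environmental indices (SST and midlevel vertical velocity) and then reads model fields through that lookup table. The function names, bin edges, and NumPy workflow are illustrative assumptions rather than the authors' code, and the paper's actual method treats the three rain-event types separately.

```python
import numpy as np

def build_lookup(sst_obs, omega_obs, precip_obs, sst_edges, omega_edges):
    """Mean observed precipitation in each (SST, omega500) bin."""
    table = np.full((len(sst_edges) - 1, len(omega_edges) - 1), np.nan)
    i = np.digitize(sst_obs, sst_edges) - 1
    j = np.digitize(omega_obs, omega_edges) - 1
    for a in range(table.shape[0]):
        for b in range(table.shape[1]):
            sel = (i == a) & (j == b)
            if sel.any():
                table[a, b] = precip_obs[sel].mean()
    return table

def reconstruct(table, sst_model, omega_model, sst_edges, omega_edges):
    """Read model SST / omega500 fields through the observed lookup table."""
    i = np.clip(np.digitize(sst_model, sst_edges) - 1, 0, table.shape[0] - 1)
    j = np.clip(np.digitize(omega_model, omega_edges) - 1, 0, table.shape[1] - 1)
    return table[i, j]
```

Applying `reconstruct` to present-day and future model SST and vertical-velocity fields and differencing the two results gives the kind of projected precipitation change discussed above.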

2020
Author(s): Danijel Belusic, Petter Lind, Oskar Landgren, Dominic Matte, Rasmus Anker Pedersen, ...

Current literature strongly indicates large benefits of convection-permitting models for subdaily summer precipitation extremes. There has been less insight into other variables, seasons, and weather conditions. We examine new climate simulations over the Nordic region, performed with the HCLIM38 regional climate model at both convection-permitting and coarser scales, searching for benefits of using convection-permitting resolutions. The Nordic climate is influenced by the North Atlantic storm track and characterised by large seasonal contrasts in temperature and precipitation. It is also changing rapidly, most notably in the winter season, when feedback processes involving retreating snow and ice lead to larger warming than in many other regions. This makes the area an ideal testbed for regional climate models. We explore the effects of higher resolution and better representation of convection on various aspects of the climate, such as snow in the mountains, coastal and other thermal circulations, convective storms, and precipitation, with a special focus on extreme events. We investigate how the benefits of convection-permitting models change with different variables and seasons, and also their sensitivity to different circulation regimes.


2019 · Vol 32 (12) · pp. 3707-3725
Author(s): C. Munday, R. Washington

Abstract Ninety-five percent of climate models contributing to phase 5 of the Coupled Model Intercomparison Project (CMIP5) project early summer [October–December (OND)] rainfall declines over subtropical southern Africa by the end of the century, under all emissions forcing pathways. The intermodel consensus underlies the Intergovernmental Panel on Climate Change (IPCC) assessment that rainfall declines are “likely” and implies that significant climate change adaptation is needed. However, model consensus is not necessarily a good indicator of confidence, especially given that there is an order of magnitude difference in the scale of rainfall decline among models in OND (from <10 mm season⁻¹ to ~100 mm season⁻¹), and that the CMIP5 ensemble systematically overestimates present-day OND precipitation over subtropical southern Africa (in some models by a factor of 2). In this paper we investigate the uncertainty in the OND drying signal by evaluating the climate mechanisms that underlie the diversity in model rainfall projections. Models projecting the highest-magnitude drying simulate the largest increases in tropospheric stability over subtropical southern Africa associated with anomalous upper-level subsidence, reduced evaporation, and amplified surface temperature change. Intermodel differences in rainfall projections are in turn related to the large-scale adjustment of the tropical atmosphere to emissions forcing: models with the strongest relative warming of the northern tropical sea surface temperatures compared to the tropical mean warming simulate the largest rainfall declines. The models with extreme rainfall declines also tend to simulate large present-day biases in rainfall and in atmospheric stability, leading the authors to suggest that projections of high-magnitude drying require further critical attention.
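A hedged sketch of the intermodel relationship described above: regress each model's OND rainfall change over subtropical southern Africa on its northern-tropical SST warming relative to the tropical-mean warming. The arrays below are synthetic stand-ins for illustration, not CMIP5 values.

```python
import numpy as np

def intermodel_relationship(d_rain, d_sst_north, d_sst_trop_mean):
    """Relate OND rainfall change to relative northern-tropical warming
    across an ensemble (one value per model)."""
    rel_warming = d_sst_north - d_sst_trop_mean
    slope, intercept = np.polyfit(rel_warming, d_rain, 1)
    r = np.corrcoef(rel_warming, d_rain)[0, 1]
    return slope, intercept, r

# synthetic stand-in values for a 30-member ensemble (not real CMIP5 numbers)
rng = np.random.default_rng(0)
dsst_north = rng.normal(2.0, 0.4, 30)                              # K
dsst_trop = rng.normal(1.8, 0.3, 30)                               # K
drain = -40.0 * (dsst_north - dsst_trop) + rng.normal(0, 10, 30)   # mm per season
print(intermodel_relationship(drain, dsst_north, dsst_trop))
```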


2017 · Vol 10 (9) · pp. 3567-3589
Author(s): Simon F. B. Tett, Kuniko Yamazaki, Michael J. Mineter, Coralia Cartis, Nathan Eizenberg

Abstract. Optimisation methods were successfully used to calibrate parameters in an atmospheric component of a climate model using two variants of the Gauss–Newton line-search algorithm: (1) a standard Gauss–Newton algorithm in which, in each iteration, all parameters were perturbed and (2) a randomised block-coordinate variant in which, in each iteration, a random sub-set of parameters was perturbed. The cost function to be minimised used multiple large-scale multi-annual average observations and was constrained to produce net radiative fluxes close to those observed. These algorithms were used to calibrate the HadAM3 (third Hadley Centre Atmospheric Model) model at N48 resolution and the HadAM3P model at N96 resolution. For the HadAM3 model, cases with 7 and 14 parameters were tried. All ten 7-parameter cases using HadAM3 converged to cost function values similar to that of the standard configuration. For the 14-parameter cases several failed to converge, with the random variant in which 6 parameters were perturbed being most successful. Multiple sets of parameter values were found that produced multiple models very similar to the standard configuration. HadAM3 cases that converged were coupled to an ocean model and run for 20 years starting from a pre-industrial HadCM3 (third Hadley Centre Coupled Model) state resulting in several models whose global-average temperatures were consistent with pre-industrial estimates. For the 7-parameter cases the Gauss–Newton algorithm converged in about 70 evaluations. For the 14-parameter algorithm, with 6 parameters being randomly perturbed, about 80 evaluations were needed for convergence. However, when 8 parameters were randomly perturbed, algorithm performance was poor. Our results suggest the computational cost for the Gauss–Newton algorithm scales between P and P², where P is the number of parameters being calibrated. For the HadAM3P model three algorithms were tested. Algorithms in which seven parameters were perturbed and three out of seven parameters randomly perturbed produced final configurations comparable to the standard hand-tuned configuration. An algorithm in which 6 out of 13 parameters were randomly perturbed failed to converge. These results suggest that automatic parameter calibration using atmospheric models is feasible and that the resulting coupled models are stable. Thus, automatic calibration could replace human-driven trial and error. However, convergence and costs are likely sensitive to details of the algorithm.
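The randomised block-coordinate variant can be sketched as follows: estimate a finite-difference Jacobian over a random subset of parameters, take a Gauss–Newton step in that subspace, and accept it via a simple line search. This is a minimal NumPy illustration under assumed step sizes and perturbation sizes, not the authors' implementation.

```python
import numpy as np

def randomized_block_gauss_newton(residual, p0, n_block, n_iter=20,
                                  step_sizes=(1.0, 0.7, 0.3), fd_eps=1e-3,
                                  seed=None):
    """Sketch of a randomised block-coordinate Gauss-Newton line search.

    residual : callable p -> vector of (simulated - observed) misfits
    p0       : initial parameter vector
    n_block  : number of parameters perturbed per iteration
    """
    rng = np.random.default_rng(seed)
    p = np.asarray(p0, dtype=float)
    for _ in range(n_iter):
        r0 = residual(p)
        cost0 = 0.5 * r0 @ r0
        block = rng.choice(p.size, size=n_block, replace=False)
        # finite-difference Jacobian restricted to the randomly chosen block
        J = np.empty((r0.size, n_block))
        for k, idx in enumerate(block):
            dp = np.zeros_like(p)
            dp[idx] = fd_eps
            J[:, k] = (residual(p + dp) - r0) / fd_eps
        # Gauss-Newton direction in the block subspace (least-squares solve)
        d_block, *_ = np.linalg.lstsq(J, -r0, rcond=None)
        # crude line search over a fixed set of step lengths
        for s in step_sizes:
            trial = p.copy()
            trial[block] += s * d_block
            r_trial = residual(trial)
            if 0.5 * r_trial @ r_trial < cost0:
                p = trial
                break
    return p
```

In the study above, each residual evaluation corresponds to a multi-year atmospheric model run, which is why the number of evaluations (roughly 70-80 in the converged cases) is the relevant cost measure.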



2015 · Vol 28 (17) · pp. 6903-6919
Author(s): Martin Hanel, T. Adri Buishand

Abstract A linear mixed-effects (LME) model is developed to discriminate the sources of variation in the changes of several precipitation characteristics over the Rhine basin as projected by an ensemble of 191 global climate model (GCM) simulations from phase 5 of the Coupled Model Intercomparison Project (CMIP5). The uncertainty in climate change projections originates from natural internal variability, imperfect climate models, and the unpredictability of future greenhouse gas forcing. The LME model allows for the quantification of the contribution of these sources of uncertainty as well as the interaction between greenhouse gas forcing and climate model. In addition, dependence between climate models can be accounted for by using a two-level LME model in which the GCMs are grouped according to their atmospheric circulation model. Statistical models of varied complexity are assessed by the Akaike information criterion. More than 60% of the variance of the changes in mean summer precipitation and various quantiles of 5-day summer precipitation at the end of the twenty-first century can be explained by the climate model. Differences between climate models are also the main source of uncertainty for the changes in three drought characteristics in the summer half-year. In winter, the differences between GCMs are smaller, and natural variability explains a large proportion of the variance of the changes. Natural variability is also the main source of uncertainty for the changes in two indices of extreme precipitation. The contribution of the forcing scenario to the variance of the changes is generally less than 25%.
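For readers who want to reproduce this style of variance partitioning, the sketch below fits a linear mixed-effects model with a random climate-model intercept and a model-by-scenario variance component using statsmodels, with the residual variance standing in for internal variability. The data frame is synthetic and the single-level structure is a simplified assumption, not the paper's exact two-level LME.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# synthetic ensemble: 10 hypothetical GCMs x 3 scenarios x 4 initial-condition runs
rng = np.random.default_rng(1)
rows = []
for m in [f"gcm{i:02d}" for i in range(10)]:
    model_effect = rng.normal(0, 5)                  # between-model differences
    for s in ["rcp26", "rcp45", "rcp85"]:
        interaction = rng.normal(0, 2)               # model x scenario interaction
        for _ in range(4):
            noise = rng.normal(0, 3)                 # internal variability
            rows.append({"model": m, "scenario": s,
                         "change": 10 + model_effect + interaction + noise})
df = pd.DataFrame(rows)

# fixed scenario effect, random model intercept, and a scenario variance
# component nested within model (the model x scenario interaction);
# the residual variance plays the role of internal variability
lme = smf.mixedlm("change ~ scenario", df, groups=df["model"],
                  vc_formula={"scenario": "0 + C(scenario)"}).fit()
print(lme.summary())
```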


2006 · Vol 19 (17) · pp. 4344-4359
Author(s): Markus Stowasser, Kevin Hamilton

Abstract The relations between local monthly mean shortwave cloud radiative forcing (SWCRF) and aspects of the resolved-scale meteorological fields are investigated in hindcast simulations performed with 12 of the global coupled models included in the model intercomparison conducted as part of the preparation for the Intergovernmental Panel on Climate Change (IPCC) Fourth Assessment Report (AR4). In particular, the connection of the cloud forcing over tropical and subtropical ocean areas with resolved midtropospheric vertical velocity and with lower-level relative humidity is investigated and compared among the models. The model results are also compared with observational determinations of the same relationships using satellite data for the cloud forcing and global reanalysis products for the vertical velocity and humidity fields. In the analysis, the geographical variability in the long-term mean among all grid points and the interannual variability of the monthly mean at each grid point are considered separately. The SWCRF plays a crucial role in determining the predicted response to large-scale climate forcing (such as from increased greenhouse gas concentrations), and it is thus important to test how the cloud representations in current climate models respond to unforced variability. Overall there is considerable variation among the results for the various models, and all models show some substantial differences from the comparable observed results. The most notable deficiency is a weak representation of the cloud radiative response to variations in vertical velocity in cases of strong ascending or strong descending motions. While the models generally perform better in regimes with only modest upward or downward motions, even in these regimes there is considerable variation among the models in the dependence of SWCRF on vertical velocity. The largest differences between models and observations when SWCRF values are stratified by relative humidity are found in either very moist or very dry regimes. Thus, the largest errors in the model simulations of cloud forcing are likely to be found in the western Pacific warm pool area, which is characterized by very moist, strongly ascending air, and in the rather dry regions where the flow is dominated by descending mean motions.
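A minimal sketch of the compositing described above, assuming monthly mean fields have already been collocated: stratify shortwave cloud radiative forcing by the mid-tropospheric vertical velocity regime. The bin edges, units, and synthetic test data are assumptions for illustration.

```python
import numpy as np

def bin_by_regime(swcrf, omega500, edges):
    """Composite SWCRF by omega500 regime (hPa/day; negative = ascent)."""
    idx = np.digitize(omega500, edges)
    means = np.full(len(edges) + 1, np.nan)
    for k in range(len(edges) + 1):
        sel = idx == k
        if sel.any():
            means[k] = swcrf[sel].mean()
    return means

# regimes from strong ascent to strong descent (hPa/day)
edges = np.array([-80.0, -40.0, -10.0, 10.0, 40.0, 80.0])

# synthetic monthly means, purely for demonstration
rng = np.random.default_rng(0)
omega = rng.normal(0.0, 50.0, 1000)                         # hPa/day
swcrf = -60.0 + 0.4 * omega + rng.normal(0.0, 10.0, 1000)   # W m-2
print(bin_by_regime(swcrf, omega, edges))
```

Applying the same function to reanalysis-collocated satellite data and to each model's output gives the regime-dependent curves whose differences are discussed above.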


2008 · Vol 21 (22) · pp. 6052-6059
Author(s): B. Timbal, P. Hope, S. Charles

Abstract The consistency between rainfall projections obtained from direct climate model output and from statistical downscaling is evaluated. Results are averaged across an area large enough to overcome the difference in spatial scale between these two types of projections and thus make the comparison meaningful. Undertaking the comparison using a suite of state-of-the-art coupled climate models for two forcing scenarios presents a unique opportunity to test whether statistical linkages established between large-scale predictors and local rainfall under the current climate remain valid in future climatic conditions. The study focuses on the southwest corner of Western Australia, a region that has experienced recent winter rainfall declines and for which climate models project, with great consistency, further winter rainfall reductions due to global warming. Results show that, as a first approximation, the magnitude of the modeled rainfall decline in this region is linearly related to the model global warming (a reduction of about 9% per degree), thus linking future rainfall declines to future emission paths. Two statistical downscaling techniques are used to investigate the influence of the choice of technique on projection consistency. In addition, one of the techniques was assessed using different large-scale forcings, to investigate the impact of large-scale predictor selection. Downscaled and direct model projections are consistent across the large number of models and the two scenarios considered; that is, there is no tendency for either to be biased, and there is only a small hint that large rainfall declines are reduced in downscaled projections. Of the two techniques, a nonhomogeneous hidden Markov model provides greater consistency with climate models than an analog approach. Differences were due to the choice of the optimal combination of predictors. Thus, statistically downscaled projections require a careful choice of large-scale predictors in order to be consistent with physically based rainfall projections. In particular, it was noted that a relative humidity moisture predictor, rather than specific humidity, was needed for downscaled projections to be consistent with direct model output projections.
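As a worked example of the approximate scaling quoted above (a winter rainfall decline of about 9% per degree of global warming), the snippet below converts a few global-mean warming levels into first-order rainfall declines; the linearity is only the first approximation stated in the abstract, and the chosen warming levels are arbitrary illustrations.

```python
# ~9% regional winter rainfall decline per kelvin of global-mean warming
decline_per_degree = 0.09

for warming_k in (1.5, 2.0, 3.0):   # illustrative global warming levels (K)
    decline = 100.0 * decline_per_degree * warming_k
    print(f"{warming_k:.1f} K global warming -> roughly {decline:.0f}% rainfall decline")
```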


2021 · Vol 17 (4) · pp. 1665-1684
Author(s): Leonore Jungandreas, Cathy Hohenegger, Martin Claussen

Abstract. Global climate models experience difficulties in simulating the northward extension of the monsoonal precipitation over north Africa during the mid-Holocene as revealed by proxy data. A common feature of these models is that they usually operate on grids that are too coarse to explicitly resolve convection, yet convection is the most essential mechanism leading to precipitation in the West African Monsoon region. Here, we investigate how the representation of tropical deep convection in the ICOsahedral Nonhydrostatic (ICON) climate model affects the meridional distribution of monsoonal precipitation during the mid-Holocene by comparing regional simulations of the summer monsoon season (July to September; JAS) with parameterized and explicitly resolved convection. In the explicitly resolved convection simulation, the more localized nature of precipitation and the absence of permanent light precipitation, as compared to the parameterized convection simulation, are closer to expectations. However, in the JAS mean, the parameterized convection simulation produces more precipitation and extends further north than the explicitly resolved convection simulation, especially between 12 and 17° N. The higher precipitation rates in the parameterized convection simulation are consistent with a stronger monsoonal circulation over land. Furthermore, the atmosphere in the parameterized convection simulation is less stably stratified and notably moister. The differences in atmospheric water vapor are the result of substantial differences in the probability distribution function of precipitation and its resulting interactions with the land surface. The parametrization of convection produces light and large-scale precipitation, keeping the soils moist and supporting the development of convection. In contrast, less frequent but locally intense precipitation events lead to high amounts of runoff in the explicitly resolved convection simulations. The stronger runoff inhibits the moistening of the soil during the monsoon season and limits the amount of water available for evaporation in the explicitly resolved convection simulation.


2020 · Vol 13 (5) · pp. 2355-2377
Author(s): Vijay S. Mahadevan, Iulian Grindeanu, Robert Jacob, Jason Sarich

Abstract. One of the fundamental factors contributing to the spatiotemporal inaccuracy in climate modeling is the mapping of solution field data between the different discretizations and numerical grids used in the coupled component models. The typical climate computational workflow involves evaluation and serialization of the remapping weights during the preprocessing step, which are then consumed by the coupled driver infrastructure during simulation to compute field projections. Tools like the Earth System Modeling Framework (ESMF) (Hill et al., 2004) and TempestRemap (Ullrich et al., 2013) offer the capability to generate conservative remapping weights, while the Model Coupling Toolkit (MCT) (Larson et al., 2001), which is utilized in many production climate models, exposes functionality to make use of these operators to solve the coupled problem. However, such multistep processes present several hurdles in terms of the scientific workflow and impede research productivity. In order to overcome these limitations, we present a fully integrated infrastructure based on the Mesh Oriented datABase (MOAB) (Tautges et al., 2004; Mahadevan et al., 2015) library, which allows for a complete description of the numerical grids and solution data used in each submodel. Through a scalable advancing-front intersection algorithm, the supermesh of the source and target grids is computed, which is then used to assemble the high-order, conservative, and monotonicity-preserving remapping weights between discretization specifications. The Fortran-compatible interfaces in MOAB are utilized to directly link the submodels in the Energy Exascale Earth System Model (E3SM) to enable online remapping strategies in order to simplify the coupled workflow process. We demonstrate the superior computational efficiency of the remapping algorithms in comparison with other state-of-the-science tools and present strong scaling results on large-scale machines for computing remapping weights between the spectral element atmosphere and finite volume discretizations on the polygonal ocean grids.
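Once the remapping weights have been generated offline (for example with TempestRemap) or online via the MOAB-based workflow described above, applying them to a field reduces to a sparse matrix-vector product plus a conservation check. The sketch below is a hedged, SciPy-based illustration of that step; the array names and layout are assumptions, not MOAB's or E3SM's API.

```python
import numpy as np
from scipy.sparse import csr_matrix

def apply_remap(weights, field_src, area_src, area_tgt):
    """Apply an (n_tgt x n_src) sparse weight matrix to a source field and
    report the relative error in the global area-weighted integral."""
    field_tgt = weights @ field_src
    src_integral = np.sum(field_src * area_src)
    tgt_integral = np.sum(field_tgt * area_tgt)
    rel_err = abs(tgt_integral - src_integral) / abs(src_integral)
    return field_tgt, rel_err

# toy 1-D example: two target cells, each averaging two source cells
W = csr_matrix(np.array([[0.5, 0.5, 0.0, 0.0],
                         [0.0, 0.0, 0.5, 0.5]]))
f_src = np.array([1.0, 3.0, 2.0, 4.0])
a_src = np.full(4, 0.5)   # source cell areas
a_tgt = np.full(2, 1.0)   # target cell areas
print(apply_remap(W, f_src, a_src, a_tgt))   # conservative: rel_err == 0
```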


2021
Author(s): Antoine Doury, Samuel Somot, Sébastien Gadat, Aurélien Ribes, Lola Corre

Abstract Providing reliable information on climate change at the local scale remains a challenge of primary importance for impact studies and policymakers. Here, we propose a novel hybrid downscaling method combining the strengths of both empirical statistical downscaling methods and Regional Climate Models (RCMs). The aim of this tool is to enlarge the size of high-resolution RCM simulation ensembles at low cost. We build a statistical RCM-emulator by estimating the downscaling function included in the RCM. This framework allows us to learn the relationship between large-scale predictors and a local surface variable of interest over the RCM domain in present and future climate. Furthermore, the emulator relies on a neural network architecture, which grants computational efficiency. The RCM-emulator developed in this study is trained to produce daily maps of the near-surface temperature at the RCM resolution (12 km). The emulator demonstrates an excellent ability to reproduce the complex spatial structure and daily variability simulated by the RCM, and in particular the way the RCM refines the low-resolution climate patterns locally. Training in future climate appears to be a key feature of our emulator. Moreover, there is a huge computational benefit in running the emulator rather than the RCM, since training the emulator takes about 2 hours on a GPU and the prediction is nearly instantaneous. However, further work is needed to improve the way the RCM-emulator reproduces some of the temperature extremes and the intensity of climate change, and to extend the proposed methodology to different regions, GCMs, RCMs, and variables of interest.
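A toy stand-in for such an RCM-emulator is sketched below: a small convolutional network in PyTorch that maps coarse predictor fields to a finer-resolution temperature map and is trained with a mean-squared-error loss. The architecture, number of predictors, upscaling factor, and synthetic data are placeholders and differ from the emulator described above.

```python
import torch
import torch.nn as nn

class ToyRCMEmulator(nn.Module):
    """Map coarse predictor fields to a higher-resolution surface temperature map."""
    def __init__(self, n_predictors=5, upscale=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_predictors, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=upscale, mode="bilinear", align_corners=False),
            nn.Conv2d(64, 1, kernel_size=3, padding=1),
        )

    def forward(self, x):        # x: (batch, n_predictors, ny_coarse, nx_coarse)
        return self.net(x)       # (batch, 1, ny_coarse * upscale, nx_coarse * upscale)

# one training step on synthetic tensors, purely illustrative
model = ToyRCMEmulator()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(4, 5, 16, 16)      # coarse-scale predictors
y = torch.randn(4, 1, 128, 128)    # "RCM" target at 8x finer resolution
optimizer.zero_grad()
loss = nn.functional.mse_loss(model(x), y)
loss.backward()
optimizer.step()
```

Once trained on existing RCM simulations, such an emulator could be driven by additional GCM members to enlarge the downscaled ensemble, which is the cost argument made above.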

