Is Weather Chaotic? Coexisting Attractors, Multistability, and Predictability

Author(s):  
Bo-Wen Shen ◽  
Roger A. Pielke ◽  
Xubin Zeng ◽  
Sara Faghih-Naini ◽  
Jialin Cui ◽  
...  

Abstract Since Lorenz’s 1963 study and 1972 presentation, the statement “weather is chaotic” has been well accepted. Such a view turns our attention from the regularity associated with Laplace’s view of determinism to the irregularity associated with chaos. In contrast to single-type chaotic solutions, recent studies using a generalized Lorenz model (Shen 2019a, b; Shen et al. 2019) have focused on the coexistence of chaotic and regular solutions that appear within the same model, using the same modeling configurations but different initial conditions. The results suggest that the entirety of weather possesses a dual nature of chaos and order with distinct predictability. Furthermore, Shen et al. (2021a, b) illustrated the following two mechanisms that may enable or modulate attractor coexistence: (1) the aggregated negative feedback of small-scale convective processes that enables the appearance of stable, steady-state solutions and their coexistence with chaotic or nonlinear limit cycle solutions; and (2) the modulation of large-scale time-varying forcing (heating). Recently, the physical relevance of findings within Lorenz models for real-world problems has been reiterated by establishing mathematical universality between the Lorenz simple weather and Pedlosky simple ocean models, as well as among the non-dissipative Lorenz model and the Duffing, nonlinear Schrödinger, and Korteweg–de Vries equations (Shen 2020, 2021). We additionally compared the Lorenz 1963 and 1969 models: the former is a limited-scale, nonlinear, chaotic model, while the latter is a closure-based, physically multiscale, mathematically linear model with ill-conditioning. To support and illustrate the revised view, this short article elaborates on additional details of monostability and multistability, using skiing and kayaking as an analogy, and provides a list of non-chaotic weather systems. We additionally address the influence of the revised view on real-world model predictions and analyses, using hurricane track predictions as an illustration, and provide a brief summary of recently deployed methods for multiscale analyses and the classification of chaotic and non-chaotic solutions.
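The coexistence of chaotic and regular solutions can be illustrated even with the classical three-variable Lorenz 1963 model, which, for σ = 10 and β = 8/3, has a window of Rayleigh parameters (roughly 24.06 < r < 24.74) where a chaotic attractor coexists with two stable steady states. The Python sketch below is a minimal illustration, not the authors' generalized Lorenz model; the parameter value r = 24.4, the integration length, and the classification threshold are illustrative assumptions, and which basin a given initial condition falls into depends on the intertwined basin geometry.

```python
# A minimal sketch (not the authors' generalized Lorenz model): the classical
# Lorenz 1963 system in the coexistence window 24.06 < r < 24.74, where some
# initial conditions settle onto a stable steady state while others keep
# wandering chaotically under the same model configuration.
import numpy as np
from scipy.integrate import solve_ivp

SIGMA, BETA, R = 10.0, 8.0 / 3.0, 24.4  # illustrative parameter choices

def lorenz63(t, state):
    x, y, z = state
    return [SIGMA * (y - x), x * (R - z) - y, x * y - BETA * z]

# One initial condition near the steady state C+, one far from it.
c_plus = [np.sqrt(BETA * (R - 1)), np.sqrt(BETA * (R - 1)), R - 1]
initial_conditions = {"near C+": np.add(c_plus, 1e-3), "generic": [0.0, 1.0, 0.0]}

for label, ic in initial_conditions.items():
    sol = solve_ivp(lorenz63, (0, 200), ic, max_step=0.01)
    tail = sol.y[:, sol.t > 150]          # discard the transient
    spread = tail.std(axis=1).max()       # small spread => steady state
    kind = "steady state" if spread < 1e-3 else "chaotic/oscillatory"
    print(f"{label:8s}: late-time spread = {spread:.2e}  ->  {kind}")
```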

Geosciences ◽  
2019 ◽  
Vol 9 (7) ◽  
pp. 281 ◽  
Author(s):  
Shen

Recent advances in computational and global modeling technology have provided the potential to improve weather predictions at extended-range scales. In earlier studies by the author and his coauthors, realistic 30-day simulations of multiple African easterly waves (AEWs) and an averaged African easterly jet (AEJ) were obtained. The formation of Hurricane Helene (2006) was also realistically simulated from Day 22 to Day 30. In this study, such extended predictability was further analyzed based on recent understandings of chaos and instability within Lorenz models and the generalized Lorenz model. The analysis suggested that the statement of a theoretical two-week predictability limit is not universal. New insight into chaotic and non-chaotic processes revealed by the generalized Lorenz model (GLM) indicated the potential for extending prediction lead times. Two major features within the GLM include: (1) three types of attractors (which also appear in the original Lorenz model) and (2) two kinds of attractor coexistence. These features suggest a refined view on the nature of weather, as follows: the entirety of weather is a superset that consists of chaotic and non-chaotic processes. Better predictability can be obtained for stable, steady-state solutions and nonlinear periodic solutions that occur at small and large Rayleigh parameters, respectively. By comparison, chaotic solutions appear only at moderate Rayleigh parameters. Errors associated with dissipative small-scale processes do not necessarily contaminate the simulations of large-scale processes. Based on the nonlinear periodic solutions (also known as limit cycle solutions), we propose a hypothetical mechanism for the recurrence (or periodicity) of successive AEWs. The insensitivity of limit cycles to initial conditions implies that AEW simulations with strong heating and balanced nonlinearity could be more predictable. Based on this hypothetical mechanism, the possibility of extending prediction lead times at extended-range scales is discussed. Future work will include refining the model to better examine the validity of the mechanism in explaining the recurrence of multiple AEWs.
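The contrast between moderate-Rayleigh-parameter chaos and large-Rayleigh-parameter limit cycles, and hence the insensitivity of the latter to initial conditions, can be sketched with the classical Lorenz 1963 equations rather than the GLM. In the hedged Python sketch below, r = 350 is used as a commonly cited periodic regime for σ = 10 and β = 8/3; the perturbation size and integration time are illustrative assumptions.

```python
# A hedged sketch of the "better predictability at large Rayleigh parameters"
# argument using the classical Lorenz 1963 equations (not the GLM itself):
# at a moderate Rayleigh parameter (r = 28) nearby trajectories separate
# rapidly, while at a large value (r = 350, a commonly cited periodic regime
# for sigma = 10, beta = 8/3) they stay close on the same limit cycle.
import numpy as np
from scipy.integrate import solve_ivp

def lorenz63(t, s, sigma=10.0, r=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return [sigma * (y - x), x * (r - z) - y, x * y - beta * z]

def separation(r, t_end=60.0, eps=1e-6):
    """Distance at t_end between two trajectories started eps apart."""
    t_eval = np.linspace(0.0, t_end, 6001)
    a = solve_ivp(lorenz63, (0, t_end), [1.0, 1.0, 1.0],
                  args=(10.0, r, 8.0 / 3.0), t_eval=t_eval, max_step=0.01)
    b = solve_ivp(lorenz63, (0, t_end), [1.0 + eps, 1.0, 1.0],
                  args=(10.0, r, 8.0 / 3.0), t_eval=t_eval, max_step=0.01)
    return np.linalg.norm(a.y[:, -1] - b.y[:, -1])

for r in (28.0, 350.0):
    print(f"r = {r:5.1f}: final separation of nearby trajectories = {separation(r):.3e}")
```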


Electronics ◽  
2021 ◽  
Vol 10 (22) ◽  
pp. 2868
Author(s):  
Wenxuan Zhao ◽  
Yaqin Zhao ◽  
Liqi Feng ◽  
Jiaxi Tang

The purpose of image dehazing is to reduce the image degradation caused by suspended particles in order to support high-level visual tasks. Besides the atmospheric scattering model, convolutional neural networks (CNNs) have been used for image dehazing. However, existing image dehazing algorithms are limited in the face of the unevenly distributed haze and dense haze found in real-world scenes. In this paper, we propose a novel end-to-end convolutional neural network called the attention-enhanced serial Unet++ dehazing network (AESUnet) for single image dehazing. We build a serial Unet++ structure that adopts a serial strategy of two pruned Unet++ blocks based on residual connections. Compared with a simple encoder–decoder structure, the serial Unet++ module makes better use of the features extracted by the encoders and promotes contextual information fusion across different resolutions. In addition, we apply several improvements to the Unet++ module, such as pruning, introducing a convolutional module with a ResNet structure, and a residual learning strategy. Thus, the serial Unet++ module can generate more realistic images with less color distortion. Furthermore, following the serial Unet++ blocks, an attention mechanism is introduced to pay different attention to haze regions of different concentrations by learning weights in the spatial and channel domains. Experiments are conducted on two representative datasets: the large-scale synthetic dataset RESIDE and the small-scale real-world datasets I-HAZY and O-HAZY. The experimental results show that the proposed dehazing network is not only comparable to state-of-the-art methods on the RESIDE synthetic dataset, but also surpasses them by a very large margin on the I-HAZY and O-HAZY real-world datasets.
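As a rough illustration of the attention mechanism described above, the PyTorch sketch below combines a channel-attention branch with a spatial-attention branch so that haze regions of different concentrations can be weighted differently. The exact AESUnet layer configuration is not reproduced here; the reduction ratio, kernel sizes, and feature dimensions are illustrative assumptions in the spirit of CBAM-style attention.

```python
# A minimal sketch of a combined channel- and spatial-attention block of the
# general kind the abstract describes; kernel sizes, reduction ratio, and
# feature dimensions are illustrative assumptions, not the AESUnet layout.
import torch
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        # Channel attention: squeeze spatial dims, learn per-channel weights.
        self.channel_mlp = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial attention: learn a per-pixel weight map from pooled features.
        self.spatial_conv = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x * self.channel_mlp(x)                      # reweight channels
        pooled = torch.cat([x.mean(dim=1, keepdim=True),
                            x.max(dim=1, keepdim=True).values], dim=1)
        return x * self.spatial_conv(pooled)             # reweight pixels

features = torch.randn(1, 64, 128, 128)                  # e.g. a decoder output
attended = ChannelSpatialAttention(64)(features)
print(attended.shape)                                    # torch.Size([1, 64, 128, 128])
```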


2019 ◽  
Vol 24 (2) ◽  
pp. 44 ◽  
Author(s):  
Gilberto M. Nakamura ◽  
Ana Carolina P. Monteiro ◽  
George C. Cardoso ◽  
Alexandre S. Martinez

Predictive analysis of epidemics often depends on the initial conditions of the outbreak, the structure of the afflicted population, and population size. However, disease outbreaks are subject to fluctuations that may shape the spreading process. Agent-based epidemic models mitigate the issue by using a transition matrix that replicates the stochastic effects observed in real epidemics. They have met with considerable numerical success in simulating small-scale epidemics. However, the dimension of the transition matrix grows exponentially with population size, reducing the usability of agent-based models for large-scale epidemics. Here, we present an algorithm that exploits permutation symmetries to enhance the computational performance of agent-based epidemic models. Our findings bound the stochastic process to a single eigenvalue sector, scaling the dimension of the transition matrix down to O(N²).
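To make the exponential scaling concrete, the hedged Python sketch below builds the full master-equation transition (rate) matrix for a susceptible-infected-susceptible (SIS) process on a small, fully mixed population: the matrix acts on all 2^N agent configurations, which is exactly the growth the permutation-symmetry reduction addresses. The infection and recovery rates and the complete-graph contact pattern are illustrative assumptions, not the authors' specific model.

```python
# A hedged sketch of why agent-based (master-equation) epidemic models scale
# exponentially: for an SIS process on N agents the transition (rate) matrix
# acts on all 2**N configurations. Rates and the fully mixed contact pattern
# below are illustrative assumptions.
import itertools
import numpy as np

def sis_generator(n_agents, beta=0.3, gamma=0.1):
    """Dense master-equation generator for SIS on a complete graph."""
    states = list(itertools.product([0, 1], repeat=n_agents))   # 0 = S, 1 = I
    index = {s: k for k, s in enumerate(states)}
    q = np.zeros((len(states), len(states)))
    for s in states:
        i = index[s]
        infected = sum(s)
        for agent, status in enumerate(s):
            flipped = s[:agent] + (1 - status,) + s[agent + 1:]
            j = index[flipped]
            rate = gamma if status == 1 else beta * infected     # recovery / infection
            q[j, i] += rate
            q[i, i] -= rate
    return q

for n in (4, 8, 12):
    print(f"N = {n:2d}: transition matrix is {2**n} x {2**n}")
# Exploiting permutation symmetry (as in the paper) collapses this to a space
# whose dimension grows only polynomially, of order N**2.
q = sis_generator(4)
print("column sums ~ 0 (generator property):", np.allclose(q.sum(axis=0), 0))
```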


1984 ◽  
Vol 142 ◽  
pp. 217-231 ◽  
Author(s):  
Hakuro Oguchi ◽  
Osamu Inoue

This paper aims to elucidate the structure of turbulent mixing layers, especially its dependence on initial disturbances. The mixing layers are produced by setting a woven-wire screen perpendicular to the freestream in the test section of a wind tunnel to obstruct part of the flow. Three kinds of model geometry are treated; these model screens produce mixing layers which may be regarded as the equivalents of the plane mixing layer and of two-dimensional and axisymmetric wakes issuing into ambient streams of higher velocity. The initial disturbances are imposed by installing thin rods of various sizes along the edge of the screen or at the origin of the mixing layer. Flow features are visualized by the smoke-wire method. Statistical quantities are measured by a laser-Doppler velocimeter. In all cases large-scale transverse vortices seem to persist, although comparatively small-scale vortices are superimposed on the flow field in the mixing layer. The mixing layers are in a self-preserving state at least up to third-order moments, but the self-preserving state differs in each case. The growth rates of the mixing layer are shown to depend strongly on the initial disturbance imposed.


2002 ◽  
Vol 456 ◽  
pp. 219-237 ◽  
Author(s):  
FAUSTO CATTANEO ◽  
DAVID W. HUGHES ◽  
JEAN-CLAUDE THELEN

By considering an idealized model of helically forced flow in an extended domain that allows scale separation, we have investigated the interaction between dynamo action on different spatial scales. The evolution of the magnetic field is studied numerically, from an initial state of weak magnetization, through the kinematic and into the dynamic regime. We show how the choice of initial conditions is a crucial factor in determining the structure of the magnetic field at subsequent times. For a simulation with initial conditions chosen to favour the growth of the small-scale field, the evolution of the large-scale magnetic field can be described in terms of the α-effect of mean field magnetohydrodynamics. We have investigated this feature further by a series of related numerical simulations in smaller domains. Of particular significance is that the results are consistent with the existence of a nonlinearly driven α-effect that becomes saturated at very small amplitudes of the mean magnetic field.


2019 ◽  
Vol 34 (4) ◽  
pp. 908-913
Author(s):  
Itay Marienberg-Milikowsky

Abstract This short article discusses whether it matters if non-computational colleagues fail to understand our (i.e., the digital humanists') work. The case study of the article is Hebrew literature and its community of scholars: surprisingly, although the initial conditions are promising, the digital humanities do not appear to have found easy access to departments of Hebrew literature or the journals dedicated to it. The article examines the reasons for this and describes a possible remedy, one where a conceptual rather than a technical foundation would provide the basis for a fruitful and critical dialogue between computational researchers and the rest. Such an approach is necessary not just for the research of small-scale literatures but also for the development of (computational) literary studies in general.


2020 ◽  
Vol 495 (4) ◽  
pp. 4227-4236 ◽  
Author(s):  
Doogesh Kodi Ramanah ◽  
Tom Charnock ◽  
Francisco Villaescusa-Navarro ◽  
Benjamin D Wandelt

ABSTRACT We present an extension of our recently developed Wasserstein-optimized model to emulate accurate high-resolution (HR) features from computationally cheaper low-resolution (LR) cosmological simulations. Our deep physical modelling technique relies on restricted neural networks to perform a mapping of the distribution of the LR cosmic density field to the space of the HR small-scale structures. We constrain our network using a single triplet of HR initial conditions and the corresponding LR and HR evolved dark matter simulations from the Quijote suite of simulations. We exploit the information content of the HR initial conditions as a well-constructed prior distribution from which the network emulates the small-scale structures. Once fitted, our physical model yields emulated HR simulations at low computational cost, while also providing some insights about how the large-scale modes affect the small-scale structure in real space.
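As a generic, hedged sketch of the LR-to-HR mapping idea (not the paper's Wasserstein-optimized architecture or training procedure), the PyTorch snippet below upsamples a toy 3D density cube by a factor of two with a small convolutional network; all layer widths, the upsampling factor, and the input size are illustrative assumptions.

```python
# A hedged, generic sketch of an LR -> HR density-field emulator of the broad
# kind described; the paper's actual network and Wasserstein training setup
# are not reproduced here, and all sizes are illustrative assumptions.
import torch
import torch.nn as nn

class DensityUpsampler(nn.Module):
    """Map a low-resolution 3D density cube to a 2x higher-resolution cube."""
    def __init__(self, channels: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, channels, kernel_size=3, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv3d(channels, channels, kernel_size=3, padding=1),
            nn.LeakyReLU(0.2),
            nn.ConvTranspose3d(channels, channels, kernel_size=4, stride=2, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv3d(channels, 1, kernel_size=3, padding=1),
        )

    def forward(self, lr_cube: torch.Tensor) -> torch.Tensor:
        return self.net(lr_cube)

lr_field = torch.randn(1, 1, 32, 32, 32)       # toy low-resolution density cube
hr_field = DensityUpsampler()(lr_field)
print(hr_field.shape)                           # torch.Size([1, 1, 64, 64, 64])
```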


2015 ◽  
Vol 786 ◽  
pp. 1-4 ◽  
Author(s):  
Paul K. Newton

The paper by Dritschel et al. (J. Fluid Mech., vol. 783, 2015, pp. 1–22) describes the long-time behaviour of inviscid two-dimensional fluid dynamics on the surface of a sphere. At issue is whether the flow settles down to an equilibrium or whether, for generic (random) initial conditions, the long-time solution is periodic, quasi-periodic or chaotic. While it might be surprising that this issue is not settled in the literature, it is important to keep in mind that the Euler equations form a dissipationless Hamiltonian system, hence the set of equations only redistributes the initial vorticity, generating smaller and smaller scales, while keeping kinetic energy, angular impulse and an infinite family of vorticity moments (Casimirs) intact. While special solutions that never settle down to an equilibrium state can be constructed using point vortices, vortex patches and other distributions, the fate of random initial conditions is a trickier problem. Previous statistical theories indicate that the long-time state should be a stationary large-scale distribution of vorticity. By carrying out careful numerical simulations using two different methods, the authors make a compelling case that the generic long-time state resembles a large-scale oscillating quadrupolar vorticity field, surrounded by persistent small-scale vortices. While numerical simulations can never conclusively settle this issue, the results might help guide future theories that seek to prove the existence of such an interesting dynamical long-time state.


Author(s):  
Xiao Huang ◽  
Qingquan Song ◽  
Fan Yang ◽  
Xia Hu

Feature embedding aims to learn a low-dimensional vector representation for each instance that preserves the information in its features. These representations can benefit various off-the-shelf learning algorithms. While embedding models for a single type of feature have been well studied, real-world instances often contain multiple types of correlated features, or even information from a different modality such as networks. Existing studies such as multiview learning show that it is promising to learn unified vector representations from all sources. However, the high computational cost of incorporating heterogeneous information limits the applications of existing algorithms, since the number of instances and the dimensionality of features in practice are often large. To bridge the gap, we propose a scalable framework, FeatWalk, which can model and incorporate instance similarities in terms of different types of features into a unified embedding representation. To enable scalability, FeatWalk does not directly calculate any similarity measure; instead, it simulates similarity-based random walks among instances to extract the local instance proximity and preserve it in a set of instance index sequences. These sequences are homogeneous with each other, and a scalable word-embedding algorithm is applied to them to learn a joint embedding representation of the instances. Experiments on four real-world datasets demonstrate the efficiency and effectiveness of FeatWalk.
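One plausible way to simulate similarity-based walks without ever forming an instance-similarity matrix is to alternate instance-to-feature and feature-to-instance sampling steps, so that instances sharing heavily weighted features tend to co-occur in the same index sequence; the sequences are then passed to an off-the-shelf word-embedding model. The Python sketch below follows that simplified scheme and is not the authors' exact FeatWalk sampling procedure; the toy data, walk length, and embedding hyperparameters are assumptions.

```python
# A hedged sketch of the random-walk idea: sample instance -> feature ->
# instance steps (no similarity matrix is ever computed), then feed the
# resulting index sequences to an off-the-shelf word-embedding model.
import numpy as np
from gensim.models import Word2Vec

rng = np.random.default_rng(0)
X = rng.random((200, 50))                      # toy instance-by-feature matrix

def feature_walk(X, start, length=20):
    walk = [start]
    for _ in range(length - 1):
        row = X[walk[-1]]
        feat = rng.choice(len(row), p=row / row.sum())        # pick a feature
        col = X[:, feat]
        nxt = rng.choice(len(col), p=col / col.sum())         # pick an instance
        walk.append(nxt)
    return [str(node) for node in walk]

walks = [feature_walk(X, i) for i in range(X.shape[0]) for _ in range(10)]
model = Word2Vec(walks, vector_size=64, window=5, min_count=1, sg=1, epochs=5)
embedding = np.vstack([model.wv[str(i)] for i in range(X.shape[0])])
print(embedding.shape)                          # (200, 64)
```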


2017 ◽  
Vol 145 (11) ◽  
pp. 4593-4603
Author(s):  
Yanfeng Zhao ◽  
Donghai Wang ◽  
Jianjun Xu

A combined forecasting methodology incorporating spectral nudging, lateral boundary filtering, and initial-condition updating was employed in the regional Weather Research and Forecasting (WRF) Model. The intent was to investigate the potential for improving prediction capability for the rainy season in China by combining the merits of the global model, which has better predictability for the large-scale circulation, with those of the regional model, which better represents small-scale features. The combined methodology was found to be successful in improving the prediction of the regional atmospheric circulation and precipitation. It performed best for larger-magnitude precipitation, relative humidity above 800 hPa, and wind fields below 300 hPa. Furthermore, the larger the magnitude and the longer the lead time, the more obvious the improvement in the accumulated rainfall of persistent severe rainfall events.
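As a one-field illustration of the spectral-nudging component, the hedged Python sketch below relaxes only the low-wavenumber part of a regional field toward a driving (global) field, leaving the small scales produced by the regional model untouched. The cutoff wavenumber and relaxation timescale are illustrative assumptions, not the WRF configuration used in the study.

```python
# A hedged, one-field sketch of the spectral-nudging idea: only the
# low-wavenumber part of the regional field is relaxed toward the driving
# analysis. Cutoff wavenumber and relaxation time are illustrative.
import numpy as np

def spectral_nudging_tendency(regional, driving, n_keep=3, tau=3600.0):
    """Return the nudging tendency -(1/tau) * lowpass(regional - driving)."""
    diff_hat = np.fft.fft2(regional - driving)
    ny, nx = regional.shape
    ky = np.fft.fftfreq(ny) * ny                 # integer wavenumbers
    kx = np.fft.fftfreq(nx) * nx
    mask = (np.abs(ky)[:, None] <= n_keep) & (np.abs(kx)[None, :] <= n_keep)
    large_scale_diff = np.real(np.fft.ifft2(diff_hat * mask))
    return -large_scale_diff / tau

# Toy fields: the regional state drifts from the driving analysis at large scales.
y, x = np.meshgrid(np.linspace(0, 2 * np.pi, 64), np.linspace(0, 2 * np.pi, 64),
                   indexing="ij")
driving = np.sin(x) + np.cos(y)
regional = 1.3 * np.sin(x) + np.cos(y) + 0.05 * np.sin(15 * x)   # bias + small scales
tendency = spectral_nudging_tendency(regional, driving)
print("max |nudging tendency|:", np.abs(tendency).max())
```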

