Weather, Macroweather, and the Climate
Published by Oxford University Press (ISBN 9780190864217, 9780197559895)

Author(s): Shaun Lovejoy

It was April 11, 2014, and the McGill University press release went online at 1:30 in the afternoon. Although I’d published many articles, they were on fundamental geoscience; the release summarized the first one that had significant social and political consequences. Its title, “Scaling Fluctuation Analysis and Statistical Hypothesis Testing of Anthropogenic Warming,” was arcane, but the release was clear enough: “Statistical analysis rules out natural-warming hypothesis with more than 99% certainty” (the article, published in Climate Dynamics, is hereafter referred to as CD). It had been fifteen months since the original submission went to peer review, but now the pace picked up dramatically. Within hours, the tone was set by the skeptic majordomo Viscount Christopher Monckton of Brenchley, who displayed his Oxbridge classics erudition by deliciously qualifying the paper as a “mephitically ectoplasmic emanation from the Forces of Darkness.” Three days later, with the release getting 12,000 hits per day, the “Friends of Science” sent an aggressive missive to the McGill chancellor asking that it be removed from McGill’s site. The Calgary-based group with its Orwellian name was set up in 2002 to promote the theory that “The sun is the driver of climate change. Not you. Not CO2.” (Fig. 6.1). One could understand their thunder. Rather than trying to prove that the warming was anthropogenic—something that is impossible to do “beyond reasonable doubt”—the new paper closed the debate by doing something far simpler: disproving the “Friends’” Giant Natural Fluctuation (GNF) hypothesis. If we exclude divine or extraterrestrial intervention, then the warming is either natural or human; there is no third alternative. The skeptics were stuck. To add insult to injury, their prepackaged sermons on the inadequacies of computer models and their speculations about solar variability were irrelevant.
Provoked by the media attention and several Op-Eds in the hours, days, and weeks that followed, I was treated, in email, blogs, and Twitter, to a deluge of abuse: “atheist,” “Marxist,” “hippy name,” and so on—everything, it seemed, short of death threats.


Author(s): Shaun Lovejoy

“The climate is what you expect; the weather is what you get”: the climate is a kind of average weather. But is it really? Those of us with thirty years or more of recall are likely aware of subtle but systematic differences between today’s weather and the weather of our youth. I remember Montreal winters with much more snow and with longer spells of extreme cold. Did it really change? If so, was it only Montreal that changed? Or all of Quebec? Or did the whole planet warm up? And which is the real climate—today’s experience or that of the past? The key to answering these questions is the notion of scale, both in time (duration) and in space (size). Spatial variability is probably easier to grasp because structures of different sizes can be visualized readily (Fig. 1.1). In a puff of cigarette smoke, one can casually observe tiny wisps, whirls, and eddies. Looking out the window, we may see fluffy cumulus clouds with bumps and wiggles kilometers across. With a quick browse on the Internet, we can find satellite images of cloud patterns literally the size of the planet. Such visual inspection confirms that structures exist over a range of scales spanning a factor of 10 billion or so: from 10,000 km down to less than 1 mm. At 0.1 mm, the atmosphere is like molasses; friction takes over and any whirls are quickly smoothed out. But even at this scale, matter is still “smooth.” To discern its granular, molecular nature, we would have to zoom in 1,000 times more, to submicron scales. For weather and climate, the millimetric “dissipation scale” is thus a natural place to stop zooming, and the fact that it is still much larger than molecular scales means that, at this scale, we can safely discuss atmospheric properties without worrying about the molecular substructure. Clouds are highly complex objects. How should we deal with such apparent chaos? According to Greek mythology, at first there was only chaos; cosmos emerged later.
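The factor of 10 billion quoted above is just the ratio of the two scales named in the text; the sketch below (my own back-of-envelope check, not from the book) makes the arithmetic explicit.

```python
# Back-of-envelope check of the chapter's scale-range claim:
# from planetary scales (~10,000 km) down to the millimetric
# dissipation scale, how many factors of 10 do we span?

planetary_scale_mm = 10_000 * 1000 * 1000   # 10,000 km expressed in mm
dissipation_scale_mm = 1.0                   # ~1 mm

ratio = planetary_scale_mm / dissipation_scale_mm
print(f"scale ratio: {ratio:.0e}")           # ten billion, i.e. 1e10
```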


Author(s): Shaun Lovejoy

“Does the Flap of a Butterfly’s Wings in Brazil Set Off a Tornado in Texas?” This was the provocative title of an address given by Edward Lorenz, the origin of the (nearly) household expression “butterfly effect.” It was December 1972, and it had been nearly ten years since he had discovered it, yet its significance was only then being recognized. Lorenz explained: “In more technical language, is the behavior of the atmosphere unstable to small perturbations?” His answer: “Although we cannot claim to have proven that the atmosphere is unstable, the evidence that it is so is overwhelming.” Imagine two planets identical in every way except that on one there is a butterfly that flaps its wings. The butterfly effect means that their future evolutions are “sensitively dependent” on the initial conditions, so that a mere flap of a wing could perturb the atmosphere enough that, eventually, the weather patterns on the two planets would evolve quite differently. On the planet with the Brazilian butterfly, the number of tornadoes would likely be the same; but on a given day, one might occur in Texas rather than Oklahoma. This sensitive dependence on small perturbations limits our ability to predict the weather. For Earth, Lorenz estimated this predictability limit to be about two weeks. From Chapters 4 and 5 and the discussion that follows, we now understand it as the slightly shorter weather–macroweather transition scale. In Chapter 1, we learned that the ratio of the nonlinear to linear terms in the (deterministic) equations governing the atmosphere is typically about a thousand billion. The nonlinear terms are the mathematical expressions of physical mechanisms that can blow up microscopic perturbations into large effects. Therefore, we expect instability.
In Chapter 4, we examined instability from the point of view of the higher-level statistical laws—the fact that, at weather scales, the fluctuation exponents H for all atmospheric fields are positive (in space, up to the size of the planet; in time, up to the weather–macroweather transition scale at five to ten days).


Author(s): Shaun Lovejoy

“Expect the cold weather to continue for the next ten days, followed by a warm spell.” This might have been the fourteen-day weather forecast for Montreal on December 31, 2006 (Fig. 5.1, top). But imagine what it might have been if Earth rotated about its axis ten times more slowly, so that the length of the day coincided with the ten-day weather–macroweather transition scale—an alignment of scales almost achieved on Mars. In that case (Fig. 5.1, bottom), we would have heard, “Expect mild weather on Monday, followed by freezing temperatures, until a warm spell on Thursday, followed by a brisk Friday and Saturday, a warming on Sunday and Monday, followed by freezing on Tuesday, then a four-day warm period followed by freezing and then warming.” Although long-term trends in weather can persist for up to ten days or so, in macroweather upshifts tend to be followed immediately by downshifts (and vice versa); longer-term trends do exist, but they are much more subtle, resulting from imperfect cancelations of successive fluctuations. This tendency of macroweather fluctuations to cancel rather than accumulate is its defining feature, and cancelation is synonymous with stability. Quantitatively, it implies that the temporal fluctuation exponent H is negative. In the weather regime, with positive H, the temperature, wind, and other variables wander up and down with prolonged swings: the weather is a metaphor for instability. If we average macroweather over longer and longer times, its variability is reduced systematically, so that it appears to converge to a well-defined value. In that sense, macroweather is what you expect; the weather is what you get. But what about macroweather’s spatial properties? As usual, forecasts can be explained with recourse to maps. For example, Plate 5.1 (left) shows the day-to-day evolution of the daily temperatures corresponding to the forecast in Figure 5.1 (top).
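The contrast between wandering (H &gt; 0) and canceling (H &lt; 0) behavior can be illustrated with the two simplest stand-ins: a random walk for weather-like persistence and white noise for macroweather-like cancelation. These are only rough proxies, not the book’s multifractal models, but they show why averaging tames one regime and not the other.

```python
import random
import statistics

# Weather vs. macroweather, in caricature (not the book's own models):
# a random walk wanders with prolonged swings (weather-like, H > 0),
# while white noise has fluctuations that cancel immediately
# (macroweather-like), so its averages converge.

random.seed(42)
noise = [random.gauss(0.0, 1.0) for _ in range(100_000)]

walk, total = [], 0.0
for step in noise:
    total += step
    walk.append(total)

def spread_of_block_means(series, block):
    """Std. dev. of the series averaged over non-overlapping windows."""
    means = [statistics.fmean(series[i:i + block])
             for i in range(0, len(series), block)]
    return statistics.stdev(means)

# Averaging the noise over longer windows kills its variability
# (roughly as 1/sqrt(window))...
short_noise = spread_of_block_means(noise, 10)
long_noise = spread_of_block_means(noise, 1000)

# ...but averaging the walk does not: the prolonged swings persist.
short_walk = spread_of_block_means(walk, 10)
long_walk = spread_of_block_means(walk, 1000)

print(f"noise block means: {short_noise:.3f} -> {long_noise:.3f}")
print(f"walk  block means: {short_walk:.1f} -> {long_walk:.1f}")
```

The noise’s block means shrink dramatically as the window grows, mirroring how averaged macroweather converges to a well-defined value, while the walk’s block means stay large: averaging cannot remove swings that accumulate.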


Author(s): Shaun Lovejoy

We have discussed two extreme views of atmospheric variability: the scalebound view, in which every factor of 10 or so involves some new mechanism or law; and the opposing self-similar scaling view, in which zooming gives us something essentially the same—a single mechanism or law that could hold over scale ranges of thousands or more. By considering time series and spatial transects, we saw that, over various ranges of scale in space and in time, atmospheric scaling seemed to work quite well. We then looked at a complication: interesting geophysical quantities are not simply black or white (geometric sets of points), but have gray shades; they have numerical values everywhere. To deal with the associated extreme variability and intermittency, we saw that we had to go beyond fractal sets to multifractal fields (Box 2.2). Understanding multifractals turned out to be important; failure to appreciate their importance led to numerous deleterious consequences. In this chapter, I want to consider something quite different: the morphologies of shapes in two or three dimensions. Up until now, we have identified scaling with self-similarity, the property that, following a usual isotropic zoom (one that is the same in all directions), small parts resemble the whole in some way. Yet in Chapter 1 (Fig. 1.8A, B), we saw that zooming into lidar vertical sections uncovered morphologies that changed with scale. As we zoomed into flat, stratified layers, structures became visibly more “roundish” (compare Fig. 1.8A with Fig. 1.8B). Vertical sections are thus not self-similar: their degree of stratification—anisotropy—changes systematically with scale. But the vertical isn’t the only place where self-similarity is unrealistic. Although it is not as obvious, the same difficulty arises if we zoom into clouds in the horizontal.
We criticized Orlanski’s powers-of-ten classification as arbitrary and in contradiction with the scaling area–perimeter relation, but Orlanski was only trying to update an older phenomenological classification scheme, parts of which predated the twentieth century.


Author(s): Shaun Lovejoy

“This afternoon, the sky will start to clear, with cloud shreds, runners, and thin bars followed by flocks.” If Jean-Baptiste Lamarck (1744–1829) had had his way, this might have been an uplifting early-morning weather forecast announcing the coming of a sunny day. Unfortunately for poetry, in 1803, several months after Lamarck proposed this first cloud classification, the “namer of clouds,” Luke Howard (1772–1864), introduced his own staid Latin nomenclature, which is still with us today and includes terms such as “cumulus,” “stratus,” and “cirrus.” Howard not only had a more scientific-sounding jargon, but was soon given publicity in the form of a poem by Goethe; Lamarck’s names didn’t stand a chance. For a long time, human-scale observation of clouds was the primary source of scientific knowledge of atmospheric morphologies and dynamics. This didn’t change until the appearance of the first weather maps, based on meager collections of ground-station measurements, around 1850. This was the beginning of the field of “synoptic” (literally, “map-scale”) meteorology. Under the leadership of Vilhelm Bjerknes (1862–1951), it spawned the Norwegian school of meteorology, which focused notably on airmasses, the often sharp gradients between them called “fronts,” and the stability of the airmass interfaces. This was the dominant view when, in the mid-1920s, Richardson proposed his scaling 4/3 diffusion law. The spatial resolution of these “synoptic-scale” maps was so low that features smaller than 1,000 kilometers or so could not be discerned. Between these and the kilometric human “microscales,” virtually nothing was known. Richardson’s claim that a single scaling law might hold from thousands of kilometers down to millimeters didn’t seem so daring: not only was it compatible with the scale-free equations that he had elaborated, but there were also no scalebound paradigms to contradict it.
By the late 1940s and ’50s, the development of radar finally opened a window onto the intermediate range.


Author(s): Shaun Lovejoy

From big to small, from fast to slow, we have traveled through scales—through magnifications of billions in space and billions of billions in time. We looked at how the traditional scalebound approach singles out specific phenomena: structures at specific spatial scales with specific lifetimes. The approach attempts to understand each in a (scale) reductionist and (usually) deterministic manner. Yet it fails miserably to describe more than tiny portions of the actual variability, giving—at best—some qualitative insights. Viewing the big picture with the help of modern data, we saw that, quantitatively, the scalebound approach underestimates the variability by a factor of a million billion (Fig. 2.3A). The alternative is the scaling approach, which attempts to understand and model the atmosphere over wide ranges of scale. This approach is based on space–time scale symmetry principles; it describes statistically the synergy of nonlinear processes that act collectively over wide ranges of scale. To apply the idea in space, we needed to generalize the notion of scale itself (Chapter 3)—notably, to account for the stratification caused by gravity. The appropriate notion of scale is one that emerges as a consequence of strong nonlinear dynamics, rather than being imposed a priori from without. Applying scaling in time, we found that the familiar weather–climate dichotomy was missing a key middle regime, from ten days to twenty years: it is a weather–macroweather–climate trichotomy. When it comes to real atmospheric modeling, scientists have long realized the limits of the scalebound approach. When they “really need to know,” they defer to numerical weather prediction (NWP) models or general circulation models (GCMs), the embodiment of Richardson’s dream of “weather prediction by numerical process.” This is fortunate, because the NWPs and GCMs respect space–time scaling symmetries; without them, they would be hopelessly unrealistic.
At least when used for their original purpose—weather prediction up to the ten-day deterministic predictability limit—respecting scaling allows them to be reasonably accurate.


Author(s): Shaun Lovejoy

We have just taken a voyage through scales, noticing structures in cloud photographs and wiggles on graphs. Collectively, they spanned ranges of scale over factors of billions in space and billions of billions in time. We are immediately confronted with the question: How can we conceptualize and model such fantastic variation? Two extreme approaches have developed. For the moment, I call the dominant one the new worlds view, after Antoni van Leeuwenhoek (1632–1723), who developed a powerful early microscope. The other is the self-similar (scaling) view of Benoit Mandelbrot, which I discuss in the next section. My own view—scaling, but with the notion of scale itself an emergent property—is discussed in Chapter 3. When van Leeuwenhoek peered through his microscope, in his amazement he is said to have discovered a “new world in a drop of water”: “animalcules,” the first microorganisms (Fig. 2.1). Since then, the idea that zooming reveals something completely new has become second nature. In the twenty-first century, atom-imaging microscopes are developed precisely because of the promise of such new worlds. The scale-by-scale “newness” idea was graphically illustrated by K. Boeke’s highly influential book Cosmic View, which starts with a photograph of a girl holding a cat, first zooming away to show the surrounding vast reaches of outer space, and then zooming in until reaching the nucleus of an atom. The book was incredibly successful; it was included in Hutchins and Adler’s Gateway to the Great Books, a ten-volume series featuring works by Aristotle, Shakespeare, Einstein, and others. In 1968, two films were based on Boeke’s book—Cosmic Zoom and Powers of Ten (re-released in 1977)—encouraging the idea that nearly every power of ten in scale hosts different phenomena. More recently (2012), there is even the interactive Cosmic Eye app for the iPad, iPhone, or iPod, not to mention a lavish update, the “Zoomable Universe.”

