Characteristics of the near-surface turbulence during a bora event

2010 ◽  
Vol 28 (1) ◽  
pp. 155-163 ◽  
Author(s):  
Ž. Večenaj ◽  
D. Belušić ◽  
B. Grisogono

Abstract. During a bora event, turbulence is strongly developed in the lee of the Dinaric Alps at the eastern Adriatic coast. To study its properties, a 3-D ultrasonic anemometer sampling at 4 Hz was placed in the town of Senj at 13 m above ground. The strong bora case that began on 7 January and lasted until 11 January 2006 is analyzed here. This data set is used to evaluate the turbulent kinetic energy, TKE, and its dissipation rate, ε. The computation of ε is performed using the inertial dissipation method. The empirical length-scale parameter for this event is estimated with respect to ε and TKE. Some considerations about defining turbulent perturbations of the bora wind velocity are also pointed out.
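A minimal Python sketch of how TKE and an inertial-dissipation estimate of ε could be computed from 4 Hz sonic-anemometer records; the Kolmogorov constant, the inertial-subrange limits and the periodogram-based spectrum are illustrative assumptions, not values or code taken from the paper.

```python
import numpy as np

def tke(u, v, w):
    """Turbulent kinetic energy (m^2 s^-2) from velocity components (m/s).
    Perturbations are taken as departures from the block mean."""
    return 0.5 * (np.var(u) + np.var(v) + np.var(w))

def dissipation_inertial(u, fs=4.0, alpha=0.55, f_lo=0.1, f_hi=1.0):
    """Inertial-dissipation estimate of epsilon (m^2 s^-3).

    Fits E(f) = alpha * eps^(2/3) * (U / (2*pi))^(2/3) * f^(-5/3)
    over an assumed inertial subrange [f_lo, f_hi] (Hz), using
    Taylor's frozen-turbulence hypothesis with mean wind speed U.
    """
    U = np.mean(u)
    up = u - U
    n = len(up)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    psd = (2.0 / (fs * n)) * np.abs(np.fft.rfft(up)) ** 2   # one-sided periodogram
    band = (freqs >= f_lo) & (freqs <= f_hi)
    comp = np.mean(psd[band] * freqs[band] ** (5.0 / 3.0))  # compensated spectrum
    return (comp / alpha) ** 1.5 * (2.0 * np.pi / U)
```

From these two quantities an empirical length scale of the form Λ = TKE^(3/2)/ε can then be formed, which is presumably the kind of scale the abstract relates to ε and TKE.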

2014 ◽  
Vol 599-601 ◽  
pp. 1605-1609 ◽  
Author(s):  
Ming Zeng ◽  
Zhan Xie Wu ◽  
Qing Hao Meng ◽  
Jing Hai Li ◽  
Shu Gen Ma

Wind is the main factor influencing the propagation of gas in the atmosphere. Therefore, the wind signal obtained by an anemometer provides valuable clues for locating gas leakage sources. In this paper, the recurrence plot (RP) and recurrence quantification analysis (RQA) are applied to analyze the recurrence characteristics of wind speed time series recorded at the same place and during the same time period, with sampling frequencies of 1 Hz, 2 Hz, 4.2 Hz, 5 Hz, 8.3 Hz, 12.5 Hz and 16.7 Hz. The results show that when the sampling frequency is higher than 5 Hz, the trends of the recurrence measures for the different groups remain essentially unchanged. However, when the sampling frequency is below 5 Hz, the original trend of the recurrence measures is destroyed, because the recurrence characteristic curves obtained at different sampling frequencies cross or overlap. These results indicate that an anemometer cannot fully capture the detailed information in the wind field when its sampling frequency is lower than 5 Hz. The recurrence analysis of wind speed signals therefore provides an important basis for the optimal selection of an anemometer.
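A minimal sketch of how a recurrence plot and one basic RQA measure (the recurrence rate) could be computed for a wind speed series; the embedding dimension, delay and distance threshold are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

def embed(x, dim=3, tau=2):
    """Time-delay embedding of a 1-D series into dim-dimensional state vectors."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

def recurrence_plot(x, dim=3, tau=2, eps=0.5):
    """Binary recurrence matrix: 1 where embedded states are closer than eps."""
    X = embed(np.asarray(x, dtype=float), dim, tau)
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    return (d <= eps).astype(int)

def recurrence_rate(rp):
    """Fraction of recurrent points, a basic RQA measure."""
    return rp.sum() / rp.size

# usage with a synthetic wind-like signal sampled at 5 Hz
fs = 5.0
t = np.arange(0, 60, 1.0 / fs)
wind = 4.0 + 0.5 * np.sin(0.2 * t) + 0.2 * np.random.randn(t.size)
rp = recurrence_plot((wind - wind.mean()) / wind.std())
print("recurrence rate:", recurrence_rate(rp))
```

Comparing such RQA measures across series resampled at different rates is one way to reproduce the kind of sampling-frequency comparison the abstract describes.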


Geophysics ◽  
2006 ◽  
Vol 71 (5) ◽  
pp. U67-U76 ◽  
Author(s):  
Robert J. Ferguson

The possibility of improving regularization/datuming of seismic data is investigated by treating wavefield extrapolation as an inversion problem. Weighted, damped least squares is then used to produce the regularized/datumed wavefield. Regularization/datuming is extremely costly because of the computation of the Hessian, so an efficient approximation is introduced. The approximation is achieved by computing only a limited number of diagonals of the operators involved. Real and synthetic data examples demonstrate the utility of this approach. For synthetic data, regularization/datuming is demonstrated for large extrapolation distances using a highly irregular recording array. Without approximation, regularization/datuming returns a regularized wavefield with reduced operator artifacts when compared to a nonregularizing method such as generalized phase shift plus interpolation (PSPI). Approximate regularization/datuming returns a regularized wavefield at roughly two orders of magnitude lower cost, but it is dip limited, in a controllable way, compared with the full method. The Foothills structural data set, a freely available data set from the Rocky Mountains of Canada, demonstrates application to real data. The data have highly irregular sampling along the shot coordinate, and they suffer from significant near-surface effects. Approximate regularization/datuming returns common-receiver data that are superior in appearance to those from conventional datuming.
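A schematic sketch of weighted, damped least squares with a banded (few-diagonal) approximation of the normal-equations operator, in the spirit of the cost saving described above; the operator, weights, damping value and band width are placeholders, not the paper's actual extrapolation operators.

```python
import numpy as np

def damped_lsq(G, d, W, mu=1e-2, ndiag=None):
    """Weighted, damped least squares: minimize ||W^(1/2)(G m - d)||^2 + mu ||m||^2.

    If ndiag is given, the Hessian G^T W G is approximated by keeping only its
    main diagonal and ndiag off-diagonals on each side (a banded operator),
    avoiding the cost of forming and inverting the full dense Hessian.
    """
    H = G.T @ (W @ G)                 # full Hessian (dense, costly in practice)
    if ndiag is not None:
        offsets = np.abs(np.subtract.outer(np.arange(H.shape[0]),
                                           np.arange(H.shape[1])))
        H = np.where(offsets <= ndiag, H, 0.0)   # banded approximation
    rhs = G.T @ (W @ d)
    return np.linalg.solve(H + mu * np.eye(H.shape[0]), rhs)

# toy usage: irregularly sampled data mapped onto a regular model grid
rng = np.random.default_rng(0)
G = rng.standard_normal((40, 25))     # stand-in for an extrapolation operator
m_true = np.sin(np.linspace(0, np.pi, 25))
d = G @ m_true + 0.01 * rng.standard_normal(40)
W = np.eye(40)                        # data weights
m_full = damped_lsq(G, d, W, mu=0.1)
m_band = damped_lsq(G, d, W, mu=0.1, ndiag=3)
```

The banded variant is what trades accuracy (here, dip range in the extrapolation setting) against cost, which is the trade-off the abstract describes.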


2015 ◽  
Vol 8 (2) ◽  
pp. 941-963 ◽  
Author(s):  
T. Vlemmix ◽  
F. Hendrick ◽  
G. Pinardi ◽  
I. De Smedt ◽  
C. Fayt ◽  
...  

Abstract. A 4-year data set of MAX-DOAS observations in the Beijing area (2008–2012) is analysed with a focus on NO2, HCHO and aerosols. Two very different retrieval methods are applied. Method A describes the tropospheric profile with 13 layers and makes use of the optimal estimation method. Method B uses 2–4 parameters to describe the tropospheric profile and an inversion based on a least-squares fit. For each constituent (NO2, HCHO and aerosols) the retrieval outcomes are compared in terms of tropospheric column densities, surface concentrations and "characteristic profile heights" (i.e. the height below which 75% of the vertically integrated tropospheric column density resides). We find the best agreement between the two methods for tropospheric NO2 column densities, with a standard deviation of relative differences below 10%, a correlation of 0.99 and a linear regression slope of 1.03. For tropospheric HCHO column densities we find a similar slope, but also a systematic bias of almost 10%, which is likely related to differences in profile height. Aerosol optical depths (AODs) retrieved with method B are 20% higher than those from method A. They agree better with AERONET measurements, which are on average only 5% lower, although with considerable relative differences (standard deviation ~25%). With respect to near-surface volume mixing ratios and aerosol extinction we find considerably larger relative differences: 10 ± 30%, −23 ± 28% and −8 ± 33% for aerosols, HCHO and NO2, respectively. The frequency distributions of these near-surface concentrations nevertheless agree quite well, which indicates that near-surface concentrations derived from MAX-DOAS are certainly useful in a climatological sense. A major difference between the two methods is the dynamic range of retrieved characteristic profile heights, which is larger for method B than for method A. This effect is most pronounced for HCHO, where the profile shapes retrieved with method A are very close to the a priori, and moderate for NO2 and aerosol extinction, which on average show quite good agreement for characteristic profile heights below 1.5 km. One of the main advantages of method A is its stability, even under suboptimal conditions (e.g. in the presence of clouds). Method B is generally less stable, which probably explains a substantial part of the quite large relative differences between the two methods. However, despite the relatively low precision of individual profile retrievals, it appears that seasonally averaged profile heights retrieved with method B are less biased towards a priori assumptions than those retrieved with method A. This gives confidence in the result obtained with method B, namely that aerosol extinction profiles tend on average to extend higher than NO2 profiles in spring and summer, whereas they appear on average to be of the same height in winter, a result that is especially relevant for the validation of satellite retrievals.
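A minimal sketch of the "characteristic profile height" defined above (the height below which 75% of the vertically integrated tropospheric column resides); the layer grid, example profile and trapezoidal integration are illustrative assumptions about how such a quantity might be evaluated, not the authors' retrieval code.

```python
import numpy as np

def characteristic_profile_height(z, c, fraction=0.75):
    """Height below which `fraction` of the vertically integrated column resides.

    z : layer heights (km), strictly increasing
    c : concentration or extinction profile on those layers (arbitrary units)
    """
    # cumulative column from the surface upward via trapezoidal integration
    layer_cols = 0.5 * (c[1:] + c[:-1]) * np.diff(z)
    cum = np.concatenate(([0.0], np.cumsum(layer_cols)))
    cum /= cum[-1]
    # interpolate the height at which the cumulative fraction is reached
    return np.interp(fraction, cum, z)

# usage: exponentially decreasing NO2-like profile on a 13-layer grid
z = np.linspace(0.1, 4.0, 13)            # km
profile = np.exp(-z / 1.0)               # arbitrary units
print(characteristic_profile_height(z, profile))
```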


Geophysics ◽  
2012 ◽  
Vol 77 (4) ◽  
pp. E301-E315 ◽  
Author(s):  
Thomas Kalscheuer ◽  
Juliane Hübert ◽  
Alexey Kuvshinov ◽  
Tobias Lochbühler ◽  
Laust B. Pedersen

Magnetotelluric (MT), radiomagnetotelluric (RMT), and, in particular, controlled-source audiomagnetotelluric (CSAMT) data are often heavily distorted by near-surface inhomogeneities. We developed a novel scheme to invert MT, RMT, and CSAMT data, in the form of scalar or tensorial impedances and vertical magnetic transfer functions, simultaneously for layer resistivities and electric and magnetic galvanic distortion parameters. The inversion scheme uses smoothness constraints to regularize layer resistivities and either Marquardt-Levenberg damping or the minimum-solution-length criterion to regularize distortion parameters. A depth-of-investigation range is estimated by comparing layered model sections derived from first- and second-order smoothness constraints. Synthetic examples demonstrate that earth models are reconstructed properly for distorted and undistorted tensorial CSAMT data. In the inversion of scalar CSAMT data, such as the determinant impedance or individual tensor elements, the reduced number of transfer functions inevitably leads to increased ambiguity in the distortion parameters. As a consequence of this ambiguity for scalar data, distortion parameters often grow over the iterations to unrealistic absolute values when regularized with the Marquardt-Levenberg scheme; essentially, this growth exploits compensating relationships between terms containing electric and/or magnetic distortion. With the minimum-solution-length regularization, the distortion parameters converge to a stable configuration after several iterations and attain reasonable values. The inversion algorithm was applied to a CSAMT field data set collected along a profile over a tunnel construction site at Hallandsåsen, Sweden. To avoid erroneous inverse models caused by strong anthropogenic effects on the data, two scalar transfer functions (one scalar impedance and one scalar vertical magnetic transfer function) were selected for inversion. Compared with regularization of the distortion parameters by the Marquardt-Levenberg method, the minimum-solution-length criterion yielded smaller absolute values of the distortion parameters and a horizontally more homogeneous distribution of electrical conductivity.
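A schematic sketch contrasting the two ways of regularizing the distortion parameters mentioned above: Marquardt-Levenberg damping shrinks the *update* of those parameters, while a minimum-solution-length penalty pulls their *values* toward zero. The Jacobian, residual vector and parameter split are placeholders for a generic linearized iteration, not the actual MT/RMT/CSAMT forward problem.

```python
import numpy as np

def gauss_newton_step(J, r, m, idx_dist, mode="marquardt", lam=1.0):
    """One linearized update of model vector m given Jacobian J and residual r.

    idx_dist : indices of the distortion-parameter entries of m
    mode = "marquardt":  damp only the update of the distortion parameters
    mode = "minlength":  additionally penalize their current values,
                         pulling them toward zero (minimum solution length)
    """
    n = m.size
    D = np.zeros((n, n))
    D[idx_dist, idx_dist] = 1.0            # regularization acts only on distortion terms
    A = J.T @ J + lam * D
    if mode == "marquardt":
        rhs = J.T @ r                      # damping shrinks the step only
    else:
        rhs = J.T @ r - lam * (D @ m)      # also drives current values toward zero
    return m + np.linalg.solve(A, rhs)
```

Under this toy formulation, the "marquardt" branch leaves the distortion values free to drift over many iterations, whereas the "minlength" branch keeps them bounded, which mirrors the behaviour the abstract reports.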


2021 ◽  
Author(s):  
Riccardo Scandroglio ◽  
Till Rehm ◽  
Jonas K. Limbrock ◽  
Andreas Kemna ◽  
Markus Heinze ◽  
...  

The warming of alpine bedrock permafrost in the last three decades and the consequent reduction of frozen areas have been well documented. Its consequences, such as reduced slope stability, put humans and infrastructure at high risk. 2020 in particular was the warmest year on record at 3000 m a.s.l., embedded in the warmest decade.

Recently, the development of electrical resistivity tomography (ERT) as a standard technique for quantitative permafrost investigation has allowed extended monitoring of this hazard, including quantitative 4D monitoring strategies (Scandroglio et al., in review). Nevertheless, the thermo-hydro-mechanical dynamics of steep bedrock slopes cannot be fully explained by a single measurement technique, and therefore multi-approach setups are necessary in the field to record external forcing and improve the deciphering of internal responses.

The Zugspitze Kammstollen is an 850 m long tunnel located between 2660 and 2780 m a.s.l., a few decameters under the mountain ridge. The first ERT monitoring was conducted in 2007 (Krautblatter et al., 2010) and has been followed by more than a decade of intensive field work, leading to a unique, still unpublished multi-approach data set. Continuous logging of environmental parameters such as rock/air temperatures and water infiltration through joints, together with a dedicated thermal model (Schröder and Krautblatter, in review), provides important additional knowledge of bedrock internal dynamics. Summer ERT and seismic refraction tomography surveys, with manual and automated joint-displacement measurements on the ridge, offer information on external controls, complemented by three weather stations and a 44 m long borehole within 1 km of the tunnel.

Year-round access to the area enables uninterrupted monitoring and maintenance of instruments for reliable data collection. "Precisely controlled natural conditions", access restricted to researchers, and logistical support from the Environmental Research Station Schneefernerhaus make this tunnel particularly attractive for developing benchmark experiments. Examples include the design of induced polarization monitoring, the analysis of tunnel spring water for isotope investigations, and multi-annual mass monitoring by means of relative gravimetry.

Here, we present the recently modernized layout of the outdoor laboratory together with the latest monitoring results, opening a discussion on further possible uses of this extensive multi-approach data set, aiming at understanding not only the thermal evolution of permafrost but also the connected thermo-hydro-mechanical processes.

Krautblatter, M. et al. (2010) 'Temperature-calibrated imaging of seasonal changes in permafrost rock walls by quantitative electrical resistivity tomography (Zugspitze, German/Austrian Alps)', Journal of Geophysical Research: Earth Surface, 115(2), pp. 1–15. doi: 10.1029/2008JF001209.

Scandroglio, R. et al. (in review) '4D-Quantification of alpine permafrost degradation in steep rock walls using a laboratory-calibrated ERT approach', Near Surface Geophysics.

Schröder, T. and Krautblatter, M. (in review) 'A high-resolution multi-phase thermo-geophysical model to verify long-term electrical resistivity tomography monitoring in alpine permafrost rock walls (Zugspitze, German/Austrian Alps)', Earth Surface Processes and Landforms.


2020 ◽  
Author(s):  
Yee Jun Tham ◽  
Nina Sarnela ◽  
Carlos A. Cuevas ◽  
Iyer Siddharth ◽  
Lisa Beck ◽  
...  

Atmospheric halogen chemistry, such as the catalytic reactions of bromine and chlorine radicals with ozone (O3), is known to cause springtime surface-ozone destruction in the polar regions. Although the initial atmospheric reactions of chlorine with ozone are well understood, the final oxidation steps leading to the formation of chlorate (ClO3-) and perchlorate (ClO4-) remain unclear owing to the lack of direct evidence of their presence and fate in the atmosphere. In this study, we present the first high-resolution ambient data set of gas-phase HClO3 (chloric acid) and HClO4 (perchloric acid), obtained from field measurements at the Villum Research Station, Station Nord, in high-Arctic North Greenland (81°36' N, 16°40' W) during the spring of 2015. A state-of-the-art chemical ionization atmospheric pressure interface time-of-flight mass spectrometer (CI-APi-TOF) was operated in negative ion mode with nitrate as the reagent ion to detect gas-phase HClO3 and HClO4. Significant levels of HClO3 and HClO4, with concentrations up to 9×10^5 molecules cm^-3, were measured only during the springtime ozone depletion events in Greenland. Air mass trajectory analysis shows that the air during the ozone depletion events was confined near the surface, indicating that O3 and the surface of the sea ice/snowpack may play important roles in the formation of HClO3 and HClO4. We used high-level quantum-chemical methods to calculate the ultraviolet-visible absorption spectra and cross sections of gas-phase HClO3 and HClO4 to assess their fates in the atmosphere. Overall, our results reveal the presence of HClO3 and HClO4 during ozone depletion events, which could affect chlorine chemistry in the Arctic atmosphere.
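A heavily hedged sketch of one common way an atmospheric fate is assessed from a calculated absorption cross section: integrating it against an actinic flux to estimate a photolysis frequency J. The wavelength grid, cross-section shape, unit quantum yield and flux values are illustrative assumptions only, not quantities from this study.

```python
import numpy as np

def photolysis_rate(wavelength_nm, sigma_cm2, actinic_flux, quantum_yield=1.0):
    """Photolysis frequency J (s^-1) = integral of sigma * phi * F over wavelength.

    sigma_cm2    : absorption cross section (cm^2 molecule^-1)
    actinic_flux : spectral actinic flux (photons cm^-2 s^-1 nm^-1)
    """
    return np.trapz(sigma_cm2 * quantum_yield * actinic_flux, wavelength_nm)

# illustrative numbers only: a weak absorber in the 290-400 nm window
wl = np.linspace(290, 400, 111)                    # nm
sigma = 1e-20 * np.exp(-(wl - 290) / 40.0)         # cm^2, made-up spectral shape
flux = 1e14 * np.ones_like(wl)                     # photons cm^-2 s^-1 nm^-1
print("J ~", photolysis_rate(wl, sigma, flux), "s^-1")
```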


Fact Sheet ◽  
2002 ◽  
Author(s):  
Denise L. Montgomery ◽  
G.R. Robinson ◽  
J.D. Ayotte ◽  
S.M. Flanagan ◽  
K.W. Robinson

Geophysics ◽  
2012 ◽  
Vol 77 (6) ◽  
pp. B287-B294 ◽  
Author(s):  
Jamie K. Pringle ◽  
Peter Styles ◽  
Claire P. Howell ◽  
Michael W. Branston ◽  
Rebecca Furner ◽  
...  

The area around the town of Northwich in Cheshire, U.K., has a long history of catastrophic ground subsidence caused by a combination of natural dissolution and collapsing abandoned mine workings within the underlying Triassic halite bedrock. In the village of Marston, the Trent and Mersey Canal crosses several abandoned salt mine workings and previously subsiding areas, and the canal was breached by a catastrophic subsidence event in 1953. This canal section is the focus of a long-term monitoring study using conventional geotechnical topographic and microgravity surveys. Results of 20 years of topographic time-lapse surveys indicate specific areas of local subsidence that could not be predicted from the available site data, mine abandonment plans, and shaft data. Subsidence has subsequently necessitated four phases of temporary canal bank remediation. Ten years of microgravity time-lapse data have recorded major deepening negative anomalies in specific sections that correlate with the topographic data. 2D gravity modeling using the available site data found that upwardly propagating voids and associated collapse material produced a good match with the observed microgravity data. Intrusive investigations have confirmed a void at the major anomaly. The advantages of undertaking such long-term studies for near-surface geophysicists, geotechnical engineers, and researchers working in other application areas are discussed.
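A minimal sketch of the kind of forward calculation that underlies such gravity modeling: the vertical gravity anomaly of a buried spherical void (negative density contrast) along a surface profile. This is the textbook point-mass/sphere formula, not the authors' 2D modeling code, and the geometry and densities are illustrative.

```python
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def sphere_anomaly_ugal(x, depth, radius, drho):
    """Vertical gravity anomaly (microGal) along a surface profile x (m)
    over a buried sphere at a given depth (m) with density contrast drho (kg/m^3).
    A void in rock has a negative contrast, e.g. drho = -2200 kg/m^3."""
    mass = (4.0 / 3.0) * np.pi * radius**3 * drho
    gz = G * mass * depth / (x**2 + depth**2) ** 1.5   # m/s^2
    return gz * 1e8                                     # 1 microGal = 1e-8 m/s^2

# illustrative profile over a 3 m radius void at 15 m depth in halite-like rock
x = np.linspace(-50.0, 50.0, 201)
print(sphere_anomaly_ugal(x, depth=15.0, radius=3.0, drho=-2200.0).min())
```

In a time-lapse context, a void propagating upward (decreasing depth) deepens the negative anomaly, which is the qualitative behaviour the abstract reports.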


2021 ◽  
Author(s):  
Thomas Cropper ◽  
Elizabeth Kent ◽  
David Berry ◽  
Richard Cornes ◽  
Beatriz Recinos-Rivas

Accurate, long-term time series of near-surface air temperature (AT) are the fundamental datasets with which the magnitude of anthropogenic climate change is scientifically and societally addressed. Over the ocean, these near-surface climate records use Sea Surface Temperature (SST) instead of Marine Air Temperature (MAT) and blend the SST with AT over land to create global datasets. MAT has often been overlooked as a data choice because daytime MAT observations from ships are known to contain warm biases due to the storage of accumulated solar energy. Two recent MAT datasets, CLASSnmat (1881–2019) and UAHNMAT (1900–2018), both use night-time MAT observations only. Daytime MAT observations account for over half of the MAT observations in the International Comprehensive Ocean–Atmosphere Data Set (ICOADS), and this proportion increases further back in time (i.e. pre-1850s). If long-term MAT records over the ocean are to be extended, the use of daytime MAT is vital.

To adjust for the daytime MAT heating bias and apply it to ICOADS, we present the application of a physics-based model that accounts for the energy storage accumulated throughout the day. Because the 'true' diurnal cycle of MAT over the ocean has not, to date, been adequately quantified, our approach also removes the diurnal cycle from ICOADS observations and generates a night-time-equivalent MAT for all observations. We fit this model to MAT observations from groups of ships in ICOADS that share similar heating biases and metadata characteristics. This enables us to use the empirically derived coefficients (representing the physical energy-transfer terms of the heating model) obtained from the fit to remove the heating bias and diurnal cycle from ship-based MAT observations throughout ICOADS that share similar characteristics (e.g. we can remove the diurnal cycle from a ship that reports only once daily at noon). This adjustment will create an MAT record of night-time-equivalent temperatures that will enable an extension of the marine surface AT record back into the 18th century.
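A heavily hedged sketch of what a solar-heating bias model of this general kind could look like: a stored-energy term that grows with insolation and decays with time, fit to a ship group's mean diurnal MAT anomaly and then subtracted to give night-time-equivalent values. The functional form, two-parameter fit and synthetic data are illustrative assumptions, not the authors' model or coefficients.

```python
import numpy as np
from scipy.optimize import curve_fit

def heating_bias(t_hours, gain, tau):
    """Accumulated-solar-heating bias (K) over a day.

    dB/dt = gain * S(t) - B / tau, integrated with a simple Euler step,
    where S(t) is a normalized insolation curve (zero at night, peaking at noon).
    """
    solar = np.clip(np.sin(np.pi * (t_hours - 6.0) / 12.0), 0.0, None)
    dt = np.diff(t_hours, prepend=t_hours[0])
    bias = np.zeros_like(t_hours, dtype=float)
    for i in range(1, len(t_hours)):
        bias[i] = bias[i - 1] + dt[i] * (gain * solar[i] - bias[i - 1] / tau)
    return bias

# fit the two coefficients to a ship group's mean diurnal anomaly (synthetic here)
hours = np.arange(0.0, 24.0, 1.0)
observed_anomaly = heating_bias(hours, 0.6, 4.0) + 0.05 * np.random.randn(hours.size)
popt, _ = curve_fit(heating_bias, hours, observed_anomaly, p0=[0.5, 3.0])
night_equivalent = observed_anomaly - heating_bias(hours, *popt)
```

The key design idea carried over from the abstract is that the coefficients are estimated per group of ships with similar characteristics and then reused for ships in that group, even those reporting only once per day.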


Author(s):  
James B. Elsner ◽  
Thomas H. Jagger

Hurricane data originate from careful analysis of past storms by operational meteorologists. The data include estimates of the hurricane position and intensity at 6-hourly intervals. Information related to landfall time, local wind speeds, damages, and deaths, as well as cyclone size, is included. The data are archived by season. Some effort is needed to make the data useful for hurricane climate studies. In this chapter, we describe the data sets used throughout this book. We show you a work flow that includes importing, interpolating, smoothing, and adding attributes. We also show you how to create subsets of the data. The code in this chapter is more complicated, and it can take longer to run. You can skip this material on first reading and continue with model building in Chapter 7. You can return here when you have an updated version of the data that includes the most recent years. Most statistical models in this book use the best-track data. Here we describe these data and provide original source material. We also explain how to smooth and interpolate them. Interpolations are needed for regional hurricane analyses. The best-track data set contains the 6-hourly center locations and intensities of all known tropical cyclones across the North Atlantic basin, including the Gulf of Mexico and Caribbean Sea. The data set is called HURDAT, for HURricane DATa. It is maintained by the U.S. National Oceanic and Atmospheric Administration (NOAA) at the National Hurricane Center (NHC). Center locations are given in geographic coordinates (in tenths of degrees); the intensities, representing the one-minute near-surface (∼10 m) wind speeds, are given in knots (1 kt = 0.5144 m s−1); and the minimum central pressures are given in millibars (1 mb = 1 hPa). The data are provided at 6-hourly intervals starting at 00 UTC (Coordinated Universal Time). The version of the HURDAT file used here contains cyclones over the period 1851 through 2010 inclusive. Information on the history and origin of these data is found in Jarvinen et al. (1984). The file has a logical structure that makes it easy to read with a FORTRAN program. Each cyclone contains a header record, a series of data records, and a trailer record.
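A minimal Python sketch of the kind of workflow described above (importing 6-hourly best-track records, converting knots to m/s, and interpolating positions and intensities to a finer time step). The simplified comma-separated record layout and column names are illustrative; the real HURDAT file uses fixed-width header/data/trailer records with more fields, and the book's own workflow is written in a different environment.

```python
import pandas as pd

KT_TO_MS = 0.5144  # 1 knot in metres per second

def read_track(csv_path):
    """Read a simplified best-track file with columns:
    datetime, lat (deg N), lon (deg E), wind (kt), pressure (mb)."""
    df = pd.read_csv(csv_path, parse_dates=["datetime"])
    df["wind_ms"] = df["wind"] * KT_TO_MS          # knots -> m/s
    return df.set_index("datetime")

def interpolate_track(track, freq="1H"):
    """Interpolate 6-hourly positions and intensities to a finer time step,
    as needed for regional hurricane analyses."""
    new_index = pd.date_range(track.index[0], track.index[-1], freq=freq)
    return (track.reindex(track.index.union(new_index))
                 .interpolate(method="time")
                 .reindex(new_index))

# usage with a hypothetical file name:
# hourly = interpolate_track(read_track("cyclone.csv"))
```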

