Overview of Small Fixed-Wing Unmanned Aircraft for Meteorological Sampling

2015 ◽  
Vol 32 (1) ◽  
pp. 97-115 ◽  
Author(s):  
Jack Elston ◽  
Brian Argrow ◽  
Maciej Stachura ◽  
Doug Weibel ◽  
Dale Lawrence ◽  
...  

Abstract. Sampling the atmospheric boundary layer with small unmanned aircraft is a difficult task requiring informed selection of sensors and algorithms suited to the particular platform and mission. Many factors must be considered during the design process to ensure the desired measurement accuracy and resolution are achieved, as is demonstrated through an examination of previous and current efforts. A taxonomy is developed from these approaches and is used to guide a review of the systems that have been employed to make in situ wind and thermodynamic measurements, along with the campaigns that have employed them. Details about the airframe parameters, estimation algorithms, sensors, and calibration methods are given.
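Among the in situ wind-estimation algorithms such reviews cover, the simplest is the wind-triangle approach, which differences GPS ground velocity and the air-relative velocity implied by measured airspeed and heading. The sketch below illustrates that idea only; the variable names and sample values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def wind_triangle(ground_vel_en, airspeed, heading_rad):
    """Estimate the horizontal wind vector (east, north) as the difference
    between GPS ground velocity and the air-relative velocity implied by
    measured airspeed and heading (a common small-UAS approximation that
    neglects sideslip and vertical motion)."""
    air_vel_en = airspeed * np.array([np.sin(heading_rad), np.cos(heading_rad)])
    return np.asarray(ground_vel_en) - air_vel_en

# Illustrative values only (not from the paper): aircraft flying roughly east.
ground_vel = [22.0, 1.5]        # m/s, east and north components from GPS
airspeed = 18.0                 # m/s, from a pitot probe
heading = np.deg2rad(85.0)      # rad, from the autopilot attitude estimate

wind_east, wind_north = wind_triangle(ground_vel, airspeed, heading)
print(f"wind estimate: ({wind_east:.1f}, {wind_north:.1f}) m/s")
```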

2007 ◽  
Vol 111 (1120) ◽  
pp. 389-396 ◽  
Author(s):  
G. Campa ◽  
M. R. Napolitano ◽  
M. Perhinschi ◽  
M. L. Fravolini ◽  
L. Pollini ◽  
...  

Abstract This paper describes the results of an effort to analyse the performance of specific ‘pose estimation’ algorithms within a machine-vision-based approach to the problem of aerial refuelling for unmanned aerial vehicles. The approach assumes the availability of a camera on the unmanned aircraft for acquiring images of the refuelling tanker; it also assumes that a number of active or passive light sources (the ‘markers’) are installed at specific known locations on the tanker. A sequence of machine vision algorithms on the on-board computer of the unmanned aircraft is tasked with processing the images of the tanker. Specifically, detection and labelling algorithms are used to detect and identify the markers, and a ‘pose estimation’ algorithm is used to estimate the relative position and orientation between the two aircraft. Detailed closed-loop simulation studies have been performed to compare the performance of two ‘pose estimation’ algorithms within a simulation environment that was specifically developed for the study of aerial refuelling problems. Special emphasis is placed on the analysis of the required computational effort as well as on the accuracy and error-propagation characteristics of the two methods. The general trade-offs involved in the selection of the pose estimation algorithm are discussed. Finally, simulation results are presented and analysed.
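The paper compares specific pose-estimation algorithms that are not reproduced here; as a generic illustration of the same step (recovering relative position and orientation from known marker locations and their image projections), the sketch below uses OpenCV's solvePnP. The marker coordinates, pixel detections, and camera intrinsics are hypothetical placeholders, not values from the paper.

```python
import numpy as np
import cv2

# Hypothetical 3D marker positions on the tanker, in the tanker body frame (metres).
markers_3d = np.array([
    [0.0, 0.0, 0.0],
    [2.0, 0.0, 0.0],
    [0.0, 1.5, 0.0],
    [2.0, 1.5, 0.5],
], dtype=np.float64)

# Corresponding pixel coordinates of the detected and labelled markers in one image.
markers_2d = np.array([
    [320.4, 240.1],
    [410.7, 238.9],
    [318.2, 170.5],
    [415.3, 165.0],
], dtype=np.float64)

# Assumed pinhole camera intrinsics (focal length in pixels, principal point).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
dist_coeffs = np.zeros(5)  # assume an undistorted image

# Solve the Perspective-n-Point problem: tanker pose in the camera frame.
ok, rvec, tvec = cv2.solvePnP(markers_3d, markers_2d, K, dist_coeffs)
R, _ = cv2.Rodrigues(rvec)   # rotation matrix from the rotation vector
print("relative translation (m):", tvec.ravel())
print("relative rotation matrix:\n", R)
```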


Methodology ◽  
2007 ◽  
Vol 3 (1) ◽  
pp. 14-23 ◽  
Author(s):  
Juan Ramon Barrada ◽  
Julio Olea ◽  
Vicente Ponsoda

Abstract. The Sympson-Hetter (1985) method provides a means of controlling the maximum exposure rate of items in Computerized Adaptive Testing. Through a series of simulations, control parameters are set that determine the probability that an item, once selected, is actually administered. This method presents two main problems: it requires a long computation time for calculating the parameters, and the maximum exposure rate is slightly above the fixed limit. Van der Linden (2003) presented two alternatives which appear to solve both problems. The impact of these methods on measurement accuracy has not yet been tested. We show how these methods over-restrict the exposure of some highly discriminating items and thus decrease accuracy. It is also shown that, when the desired maximum exposure rate is near the minimum possible value, these methods yield an empirical maximum exposure rate clearly above the goal. A new method, based on an initial estimate of the probability of administration and the probability of selection of the items with the restricted method (Revuelta & Ponsoda, 1998), is presented in this paper. It can be used with the Sympson-Hetter method and with the two van der Linden methods. This option, when used with Sympson-Hetter, speeds the convergence of the control parameters without decreasing accuracy.
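As a rough illustration of the exposure-control scheme being tuned here (the basic Sympson-Hetter iteration, not the authors' new method), the sketch below simulates rounds of examinees and, after each round, caps every item's acceptance parameter K_i at r_max divided by its selection probability. Item selection is reduced to a toy "most discriminating unused item first" rule purely for brevity; all numbers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_items, test_length, n_examinees = 200, 20, 2000
r_max = 0.20                        # target maximum exposure rate
a = rng.uniform(0.5, 2.0, n_items)  # toy "discrimination" values driving selection
K = np.ones(n_items)                # exposure-control (acceptance) parameters

order = np.argsort(-a)              # toy selection rule: most discriminating first
for iteration in range(10):
    selected = np.zeros(n_items)
    administered = np.zeros(n_items)
    for _ in range(n_examinees):
        given = 0
        for i in order:
            if given == test_length:
                break
            selected[i] += 1
            if rng.random() < K[i]:          # Sympson-Hetter acceptance step
                administered[i] += 1
                given += 1
    p_sel = selected / n_examinees
    # Update: cap the expected administration rate at r_max for over-selected items.
    K = np.where(p_sel > r_max, r_max / np.maximum(p_sel, 1e-12), 1.0)
    print(f"iteration {iteration}: max exposure rate = "
          f"{administered.max() / n_examinees:.3f}")
```

In line with the abstract, the empirical maximum exposure rate in such a simulation converges to a value slightly above the target r_max.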


Author(s):  
Wendy Rusli ◽  
Pavan Kumar Naraharisetti ◽  
Wee Chew

Raman spectroscopy has been successfully applied to reaction monitoring over the past few decades. One complication in such usage is its applicability to quantitative reaction studies. This...


1982 ◽  
Vol 15 ◽  
Author(s):  
W. S. Fyfe

ABSTRACT Selection of the best rock types for radwaste disposal will depend on their having minimal permeability, maximal flow dispersion, minimal chance of forming new wide-aperture fractures, maximal ion retention, and minimal thermal and mining disturbance. While no rock is perfect, thinly bedded complex sedimentary sequences may have good properties, either as repository rocks, or as cover to a repository. Long-time prediction of such favorable properties of a rock at a given site may be best modelled from studies of in situ rock properties. Fracture flow, dispersion history, and geological stability can be derived from direct observations of rocks themselves, and can provide the parameters needed for convincing demonstration of repository security for appropriate times.


2015 ◽  
Vol 15 (9) ◽  
pp. 5083-5097 ◽  
Author(s):  
M. D. Shaw ◽  
J. D. Lee ◽  
B. Davison ◽  
A. Vaughan ◽  
R. M. Purvis ◽  
...  

Abstract. Highly spatially resolved mixing ratios of benzene and toluene, nitrogen oxides (NOx) and ozone (O3) were measured in the atmospheric boundary layer above Greater London during the period 24 June to 9 July 2013 using a Dornier 228 aircraft. Toluene and benzene were determined in situ using a proton transfer reaction mass spectrometer (PTR-MS), NOx by dual-channel NOx chemiluminescence and O3 mixing ratios by UV absorption. Average mixing ratios observed over inner London at 360 ± 10 m a.g.l. were 0.20 ± 0.05, 0.28 ± 0.07, 13.2 ± 8.6, 21.0 ± 7.3 and 34.3 ± 15.2 ppbv for benzene, toluene, NO, NO2 and NOx, respectively. Linear regression analysis between NO2, benzene and toluene mixing ratios yields a strong covariance, indicating that these compounds predominantly share the same or co-located sources within the city. Average mixing ratios measured at 360 ± 10 m a.g.l. over outer London were always lower than over inner London. Where traffic densities were highest, the toluene / benzene (T / B) concentration ratios were highest (average of 1.8 ± 0.5 ppbv ppbv^-1), indicative of strong local sources. Daytime maxima in NOx, benzene and toluene mixing ratios were observed in the morning (~ 40 ppbv NOx, ~ 350 pptv toluene and ~ 200 pptv benzene) and in the mid-afternoon for ozone (~ 40 ppbv O3), all at 360 ± 10 m a.g.l.
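For readers wanting to reproduce the style of analysis described (T/B ratios and species-to-species regression), the sketch below runs those two calculations on synthetic data; the values and the assumed T/B slope of 1.8 are illustrative placeholders, not the campaign measurements.

```python
import numpy as np
from scipy import stats

# Synthetic mixing ratios (ppbv) standing in for co-located aircraft samples.
rng = np.random.default_rng(1)
benzene = rng.uniform(0.10, 0.35, 200)
toluene = 1.8 * benzene + rng.normal(0.0, 0.03, 200)   # assumed common traffic source
no2 = 60.0 * benzene + rng.normal(0.0, 2.0, 200)

# Toluene/benzene ratio: higher values typically point to fresh, local emissions.
tb_ratio = toluene / benzene
print(f"mean T/B = {tb_ratio.mean():.2f} ppbv/ppbv")

# Linear regression between NO2 and benzene; a high |r| suggests co-located sources.
fit = stats.linregress(benzene, no2)
print(f"slope = {fit.slope:.1f}, r = {fit.rvalue:.2f}")
```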


2021 ◽  
Vol 11 (5) ◽  
pp. 2318
Author(s):  
David Macii ◽  
Daniel Belega ◽  
Dario Petri

The Interpolated Discrete Fourier Transform (IpDFT) is one of the most popular algorithms for Phasor Measurement Units (PMUs), due to its quite low computational complexity and its good accuracy in various operating conditions. However, the basic IpDFT algorithm can also be used as a preliminary estimator of the amplitude, phase, frequency and rate of change of frequency of voltage or current AC waveforms at times synchronized to Coordinated Universal Time (UTC). Indeed, another cascaded algorithm can then be used to refine the waveform parameter estimates. In this context, the main novelty of this work is a fair and extensive performance comparison of three different state-of-the-art IpDFT-tuned estimation algorithms for PMUs. The three algorithms are: (i) the so-called corrected IpDFT (IpDFTc), which is conceived to compensate for the effect of both the image of the fundamental tone and the second-order harmonic; (ii) a frequency-tuned version of the Taylor Weighted Least-Squares (TWLS) algorithm; and (iii) the frequency Down-Conversion and low-pass Filtering (DCF) technique described also in the IEEE/IEC Standard 60255-118-1:2018. The simulation results obtained in the P Class and M Class testing conditions specified in the same Standard show that the IpDFTc algorithm is generally preferable under the effect of steady-state disturbances. On the contrary, the tuned TWLS estimator is usually the best solution when dynamic changes of amplitude, phase or frequency occur. In transient conditions (i.e., under the effect of amplitude or phase steps), the IpDFTc and the tuned TWLS algorithms do not clearly outperform one another. The DCF approach generally returns the worst results; however, its actual performance heavily depends on the adopted low-pass filter.
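As a minimal illustration of the basic (uncorrected) interpolation step these estimators build on, the sketch below applies the classical two-point IpDFT with a Hann window to a synthetic off-bin tone. The sampling rate, window length, and signal parameters are assumptions chosen for the example, not the paper's test conditions.

```python
import numpy as np

# Synthetic AC waveform: an off-nominal, off-bin tone over one observation window.
fs = 6400.0                    # sampling rate (Hz), illustrative
N = 512                        # samples in the observation window
f_true, amp, phase = 50.3, 1.0, 0.7
t = np.arange(N) / fs
x = amp * np.cos(2 * np.pi * f_true * t + phase)

# Hann-windowed DFT and peak-bin search.
w = np.hanning(N)
X = np.fft.rfft(x * w)
k = np.argmax(np.abs(X[1:-1])) + 1          # skip the DC and Nyquist bins

# Two-point interpolation (Hann window): the fractional bin offset delta follows
# from the magnitude ratio of the peak bin to its larger neighbour.
mag_prev, mag_peak, mag_next = np.abs(X[k - 1]), np.abs(X[k]), np.abs(X[k + 1])
if mag_next >= mag_prev:
    alpha = mag_next / mag_peak
    delta = (2 * alpha - 1) / (alpha + 1)
else:
    alpha = mag_prev / mag_peak
    delta = -(2 * alpha - 1) / (alpha + 1)

f_est = (k + delta) * fs / N
print(f"true f = {f_true} Hz, IpDFT estimate = {f_est:.4f} Hz")
```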


2020 ◽  
Vol 24 (3) ◽  
pp. 251-264
Author(s):  
Paula Lacomba Montes ◽  
Alejandro Campos Uribe

This paper reports on the primary school design processes carried out around the 1940s in the County of Hertfordshire in Great Britain, which later evolved into innovative strategies developed by Mary and David Medd at the Ministry of Education from the late 1950s. The whole process, undertaken over more than three decades, reveals a way of breaking with the traditional spatial conception of a school. The survey of the period covered has allowed an in-depth understanding of how learning spaces could be transformed by challenging the conventional school model of closed rooms, suggesting a new way of understanding learning spaces as a group of Centres rather than classrooms. Historians have thoroughly shown the ample scope of this process, which involved many professionals, fostering a true cross-disciplinary endeavour where the curriculum and the learning spaces were developed in close collaboration. A selection of schools built in the county has been used to analyse typologically how architectural changes began to arise and later flourished at the Ministry of Education. The Medds indeed played a significant role through the development of a design process known as the Built-in variety and the Planning Ingredients. A couple of examples will clarify some of these strategies, revealing how the design of educational space could successfully respond to an active way of learning.


Abstract The evolution of the tropical cyclone boundary layer (TCBL) wind field before landfall is examined in this study. As noted in previous studies, a typical TCBL wind structure over the ocean features a supergradient boundary layer jet to the left of motion and Earth-relative maximum winds to the right. However, the detailed response of the wind field to frictional convergence at the coastline is less well known. Here, idealized numerical simulations reveal an increase in the offshore radial and vertical velocities beginning once the TC is roughly 200 km offshore. This increase in the radial velocity is attributed to the sudden decrease in frictional stress once the highly agradient flow crosses the offshore coastline. Enhanced advection of angular momentum by the secondary circulation forces a strengthening of the supergradient jet near the top of the TCBL. Sensitivity experiments reveal that the coastal roughness discontinuity dominates the friction asymmetry due to motion. Additionally, increasing the inland roughness through a larger aerodynamic roughness length enhances the observed asymmetries. Lastly, a brief analysis of in situ surface wind data collected during the landfall of three Gulf of Mexico hurricanes is provided and compared to the idealized simulations. Despite the limited in situ data, the observations generally support the simulations. The results here imply that assumptions about the TCBL wind field based on observations over horizontally homogeneous surface types, which have been well documented in previous studies, are inappropriate for use near strong frictional heterogeneity.
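For readers unfamiliar with the terminology, "supergradient" and "agradient" flow are defined relative to gradient wind balance; a standard statement of that balance (textbook form, not specific to this paper's model setup) is:

```latex
% Gradient wind balance for the tangential wind v_g at radius r from the storm
% centre, with Coriolis parameter f, air density \rho, and pressure p:
\frac{v_g^{2}}{r} + f\,v_g = \frac{1}{\rho}\,\frac{\partial p}{\partial r}
% The boundary-layer flow is "supergradient" where the actual tangential wind v
% exceeds v_g: there the centrifugal and Coriolis forces outweigh the inward
% pressure-gradient force, and the agradient wind (v - v_g) is positive.
```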


2018 ◽  
Author(s):  
Anna Nikandrova ◽  
Ksenia Tabakova ◽  
Antti Manninen ◽  
Riikka Väänänen ◽  
Tuukka Petäjä ◽  
...  

Abstract. Understanding the distribution of aerosol layers is important for determining long range transport and aerosol radiative forcing. In this study we combine airborne in situ measurements of aerosol with data obtained by a ground-based High Spectral Resolution Lidar (HSRL) and radiosonde profiles to investigate the temporal and vertical variability of aerosol properties in the lower troposphere. The HSRL was deployed in Hyytiälä, Southern Finland, from January to September 2014 as a part of the US DoE ARM (Atmospheric Radiation Measurement) mobile facility during the BAECC (Biogenic Aerosols – Effects on Cloud and Climate) Campaign. Two flight campaigns took place in April and August 2014 with instruments measuring the aerosol size distribution from 10 nm to 10 µm at altitudes up to 3800 m. Two case studies from the flight campaigns, when several aerosol layers were identified, were selected for further investigation: one clear sky case, and one partly cloudy case. During the clear sky case, turbulent mixing ensured low temporal and spatial variability in the measured aerosol size distribution in the boundary layer whereas mixing was not as homogeneous in the boundary layer during the partly cloudy case. The elevated layers exhibited greater temporal and spatial variability in aerosol size distribution, indicating a lack of mixing. New particle formation was observed in the boundary layer during the clear sky case, and nucleation mode particles were also seen in the elevated layers that were not mixing with the boundary layer. Interpreting local measurements of elevated layers in terms of long-range transport can be achieved using back trajectories from Lagrangian models, but care should be taken in selecting appropriate arrival heights, since the modelled and observed layer heights did not always coincide. We conclude that higher confidence in attributing elevated aerosol layers with their air mass origin is attained when back trajectories are combined with lidar and radiosonde profiles.


2009 ◽  
Vol 66 (8) ◽  
pp. 2429-2443 ◽  
Author(s):  
Tim Li ◽  
Chunhua Zhou

Abstract Numerical experiments with a 2.5-layer and a 2-level model are conducted to examine the mechanism for the planetary-scale selection of the Madden–Julian oscillation (MJO). The strategy here is to examine the evolution of an initial perturbation that has the form of an equatorial Kelvin wave at zonal wavenumbers of 1 to 15. In the presence of a frictional boundary layer, the most unstable mode prefers a short wavelength under a linear heating; but with a nonlinear heating, zonal wavenumber 1 grows fastest. This differs significantly from a model without the boundary layer, in which neither linear nor nonlinear heating leads to the long-wave selection. Thus, the numerical simulations point out the crucial importance of the combined effect of the nonlinear heating and the frictional boundary layer in the MJO planetary-scale selection. The cause of this scale selection under the nonlinear heating is attributed to the distinctive phase speeds between the dry Kelvin wave and the wet Kelvin–Rossby wave couplet. The faster dry Kelvin wave triggered by a convective branch may catch up with and suppress another convective branch that travels ahead of it at the phase speed of the wet Kelvin–Rossby wave couplet, if the distance between the two neighboring convective branches is smaller than a critical distance (about 16 000 km). The interference between the dry Kelvin wave and the wet Kelvin–Rossby wave couplet eventually dissipates and “filters out” shorter-wavelength perturbations, leading to a long-wave selection. The boundary layer plays an important role in destabilizing the MJO through frictional moisture convergence and in retaining the in-phase zonal wind–pressure structure.
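One way to see why a critical distance of this order can emerge (a back-of-envelope illustration under assumed numbers, not the paper's derivation) is that the dry Kelvin wave can only overtake and suppress the convection ahead of it while it remains coherent:

```latex
% Catch-up estimate with assumed values: dry Kelvin wave speed c_d ~ 25 m/s,
% wet Kelvin-Rossby couplet speed c_w ~ 10 m/s, and a damping timescale
% \tau ~ 12 days over which the dry wave remains coherent.
D_c \sim (c_d - c_w)\,\tau
    \approx 15\ \mathrm{m\,s^{-1}} \times 12\ \mathrm{days}
    \approx 1.6 \times 10^{4}\ \mathrm{km}
```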

