Simultaneous Multicomponent Quantitative Analysis by Infrared Absorption Spectroscopy

1984 ◽  
Vol 38 (5) ◽  
pp. 663-668 ◽  
Author(s):  
Lesia L. Tyson ◽  
Yong-Chien Ling ◽  
Charles K. Mann

Two data-handling techniques, least-squares fitting and cross-correlation, have been used for three-component analysis under comparable conditions with the use of both simulated and real data. Factors considered are the effect of variation in degree of peak overlap, signal-to-noise ratio, the effect of peak-width variations when peak maxima occur at the same position, and the effect of varying peak intensities. A series of lipid mixtures was analyzed by each method with the use of infrared absorption. This permits comparison of these results with earlier reports. Both least-squares and cross-correlation can be used with samples that are outside the applicable range of the earlier work. In this comparison, the least-squares results are somewhat better than those from cross-correlation.
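Least-squares resolution of overlapped bands is essentially a linear solve of the Beer–Lambert mixing model. The sketch below illustrates it on simulated data; the band positions, widths, and concentrations are invented for illustration, not taken from the paper.

```python
# Sketch: resolving a 3-component mixture spectrum by linear least squares.
# Pure-component spectra and the mixture are hypothetical stand-ins for
# measured IR absorbances.
import numpy as np

wavenumbers = np.linspace(1000, 1800, 400)   # cm^-1, simulated axis

def band(center, width):
    """Gaussian absorption band used to fake a pure-component spectrum."""
    return np.exp(-((wavenumbers - center) / width) ** 2)

# Columns of K are the pure-component spectra (Beer-Lambert: A = K c).
K = np.column_stack([band(1200, 40), band(1380, 60), band(1550, 50)])
c_true = np.array([0.5, 1.2, 0.8])                               # concentrations
mixture = K @ c_true + 0.01 * np.random.randn(len(wavenumbers))  # noisy mixture

# Least-squares estimate of the concentrations from the overlapped bands.
c_hat, *_ = np.linalg.lstsq(K, mixture, rcond=None)
print(c_hat)   # close to c_true when the signal-to-noise ratio is adequate
```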

Author(s):  
Parisa Torkaman

The generalized inverted exponential distribution is introduced as a lifetime model with good statistical properties. In this paper, estimation of its probability density function and cumulative distribution function is considered using five estimation methods: the uniformly minimum variance unbiased (UMVU), maximum likelihood (ML), least squares (LS), weighted least squares (WLS), and percentile (PC) estimators. The performance of these estimation procedures is compared in terms of mean squared error (MSE) through numerical simulations. The simulation studies show that the UMVU estimator performs better than the others, and that when the sample size is large enough the ML and UMVU estimators are almost equivalent and more efficient than LS, WLS, and PC. Finally, the results for a real data set are analyzed.
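The MSE comparison can be sketched in a few lines: simulate samples, fit by ML and by least squares on the empirical CDF, and average the squared errors. The distribution form F(x) = 1 − (1 − e^(−λ/x))^α below is the standard generalized inverted exponential; the parameter values, sample size, and starting points are illustrative assumptions, not values from the paper.

```python
# Sketch: Monte-Carlo MSE comparison of ML and least-squares (probability-plot)
# estimators for the generalized inverted exponential distribution (GIED).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
a_true, lam_true, n, reps = 2.0, 1.5, 50, 200

def sample(n):
    """Inverse-CDF sampling: F(x) = 1 - (1 - exp(-lam/x))**a."""
    u = rng.uniform(size=n)
    return -lam_true / np.log1p(-(1 - u) ** (1 / a_true))

def neg_loglik(theta, x):
    a, lam = theta
    if a <= 0 or lam <= 0:
        return np.inf
    z = np.exp(-lam / x)
    return -np.sum(np.log(a * lam) - 2 * np.log(x) - lam / x
                   + (a - 1) * np.log1p(-z))

def ls_objective(theta, x):
    """Squared distance between fitted CDF and plotting positions i/(n+1)."""
    a, lam = theta
    if a <= 0 or lam <= 0:
        return np.inf
    xs = np.sort(x)
    F = 1 - (1 - np.exp(-lam / xs)) ** a
    p = np.arange(1, len(xs) + 1) / (len(xs) + 1)
    return np.sum((F - p) ** 2)

mse = {"ML": [], "LS": []}
for _ in range(reps):
    x = sample(n)
    for name, obj in [("ML", neg_loglik), ("LS", ls_objective)]:
        fit = minimize(obj, x0=[1.0, 1.0], args=(x,), method="Nelder-Mead")
        mse[name].append((fit.x[0] - a_true) ** 2)   # MSE of the shape parameter

for name, errs in mse.items():
    print(name, np.mean(errs))   # ML typically shows the smaller MSE here
```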


Geophysics ◽  
2020 ◽  
Vol 85 (5) ◽  
pp. S285-S297
Author(s):  
Zhina Li ◽  
Zhenchun Li ◽  
Qingqing Li ◽  
Qingyang Li ◽  
Miaomiao Sun ◽  
...  

The migration of multiples can provide complementary information about the subsurface, but crosstalk artifacts caused by the interference between different-order multiples reduce its reliability. To mitigate the crosstalk artifacts, least-squares reverse time migration (LSRTM) of multiples has been suggested by some researchers. Multiples are more affected by attenuation than primaries because of their longer travel path. To avoid incorrect waveform matching during the inversion, we propose to include viscosity in the LSRTM implementation. A method of LSRTM of multiples is introduced based on a viscoacoustic wave equation, which is derived from the generalized standard linear solid model. The merit of the proposed method is that it not only compensates for the amplitude loss and phase change, which cannot be achieved by traditional RTM and LSRTM of multiples, but also provides more information about the subsurface with fewer crosstalk artifacts by using multiples, compared with viscoacoustic LSRTM of primaries. Tests on sensitivity to errors in the velocity model, the Q model, and the separated multiples reveal that accurate models and input multiples are vital to the image quality. Numerical tests on synthetic models and real data demonstrate the advantages of our approach in improving the quality of the image in terms of amplitude balancing and signal-to-noise ratio.
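At its core, any LSRTM scheme (viscoacoustic or not) iterates a migration/demigration pair to minimize the data misfit. The schematic loop below assumes hypothetical demig (Born modeling) and mig (its adjoint, an RTM) operators as black boxes; the paper's contribution lies inside those operators, where the viscoacoustic wave equation compensates attenuation along the multiple wavepaths.

```python
# Schematic steepest-descent LSRTM loop; demig and mig are assumed black boxes.
import numpy as np

def lsrtm(d_obs, demig, mig, n_iter=20):
    """Minimize ||demig(m) - d_obs||^2 over the image m by steepest descent."""
    m = np.zeros_like(mig(d_obs))                 # start from a zero image
    for _ in range(n_iter):
        r = demig(m) - d_obs                      # data residual
        g = mig(r)                                # gradient = adjoint of residual
        Lg = demig(g)
        alpha = np.vdot(g, g) / np.vdot(Lg, Lg)   # exact line search for a quadratic
        m -= alpha * g
    return m
```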


2021 ◽  
Vol 5 (1) ◽  
pp. 59
Author(s):  
Gaël Kermarrec ◽  
Niklas Schild ◽  
Jan Hartmann

Terrestrial laser scanners (TLS) capture a large number of 3D points rapidly, with high precision and spatial resolution. These scanners are used for applications as diverse as modeling architectural or engineering structures and high-resolution mapping of terrain. The noise of the observations cannot be assumed to correspond strictly to white noise: besides being heteroscedastic, the observations are likely to be correlated because of the high scanning rate. Unfortunately, while the variance can sometimes be modeled on physical or empirical grounds, the correlations are more often neglected. Trustworthy knowledge of both is, however, mandatory to avoid overestimating the precision of the point cloud and, potentially, failing to detect deformation between scans recorded at different epochs using statistical testing strategies. TLS point clouds can be approximated with parametric surfaces, such as planes, using the Gauss–Helmert model, or with the newly introduced T-spline surfaces. In both cases, the goal is to minimize the squared distance between the observations and the approximating surface in order to estimate parameters such as the normal vector or the control points. In this contribution, we show how the residuals of the surface approximation can be used to derive the correlation structure of the noise of the observations. We estimate the correlation parameters using Whittle maximum likelihood and use comparable simulations and real data to validate our methodology. Using the least-squares adjustment as a “filter of the geometry” paves the way for the determination of a correlation model for many sensors recording 3D point clouds.
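As a minimal sketch of this residual-based idea, one can fit a parametric correlation model to the surface-fit residuals by minimizing the Whittle likelihood over the periodogram. The AR(1) model, the simulated residuals, and the profiling of the innovation variance below are illustrative assumptions, not the paper's exact setup.

```python
# Sketch: Whittle maximum-likelihood fit of an AR(1) correlation model to
# (here simulated) least-squares surface-fit residuals.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
n, phi_true = 4096, 0.6
resid = np.zeros(n)
for t in range(1, n):                      # simulate AR(1) residuals
    resid[t] = phi_true * resid[t - 1] + rng.standard_normal()

# Periodogram at the positive Fourier frequencies.
freqs = 2 * np.pi * np.arange(1, n // 2) / n
I = np.abs(np.fft.fft(resid)[1:n // 2]) ** 2 / (2 * np.pi * n)

def whittle_nll(phi):
    """Whittle negative log-likelihood, profiling out the innovation variance."""
    f_shape = 1.0 / (1.0 - 2 * phi * np.cos(freqs) + phi ** 2)  # AR(1) spectrum shape
    sigma2 = np.mean(I / f_shape) * 2 * np.pi                   # profiled sigma^2
    f = sigma2 * f_shape / (2 * np.pi)
    return np.sum(np.log(f) + I / f)

fit = minimize_scalar(whittle_nll, bounds=(-0.99, 0.99), method="bounded")
print(fit.x)   # close to phi_true
```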


2021 ◽  
Vol 22 (1) ◽  
Author(s):  
Camilo Broc ◽  
Therese Truong ◽  
Benoit Liquet

Abstract
Background: The increasing number of genome-wide association studies (GWAS) has revealed several loci that are associated with multiple distinct phenotypes, suggesting the existence of pleiotropic effects. Highlighting these cross-phenotype genetic associations could help to identify and understand common biological mechanisms underlying certain diseases. Common approaches test the association between genetic variants and multiple traits at the SNP level. In this paper, we propose a novel gene- and pathway-level approach for the case where several independent GWAS on independent traits are available. The method is based on a generalization of the sparse group Partial Least Squares (sgPLS) that takes into account groups of variables, with a Lasso penalization that links all of the independent data sets. This method, called joint-sgPLS, convincingly detects signal at both the variable level and the group level.
Results: Our method has the advantage of proposing a globally readable model while coping with the architecture of the data. It can outperform traditional methods and provides wider insight in terms of a priori information. We compared the performance of the proposed method to other benchmark methods on simulated data and give an example of application on real data, with the aim of highlighting common susceptibility variants to breast and thyroid cancers.
Conclusion: The joint-sgPLS shows interesting properties for detecting a signal. As an extension of PLS, the method is suited for data with a large number of variables. The chosen Lasso penalization copes with architectures of groups of variables and sets of observations. Furthermore, although the method has been applied to a genetic study, its formulation is adapted to any data with a large number of variables and a known a priori group architecture in other application fields.
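Group-level selection in sgPLS-type methods rests on a group soft-thresholding (group-lasso proximal) step applied to the PLS loading weights. The sketch below shows that step in isolation; it is an illustrative fragment, not the authors' joint-sgPLS implementation, and the group sizes and penalty level are invented.

```python
# Sketch: the group soft-thresholding step used by sparse group PLS methods.
import numpy as np

def group_soft_threshold(w, groups, lam):
    """Zero out whole groups of loading weights whose norm falls below lam."""
    out = np.zeros_like(w)
    for idx in groups:                # idx: indices of one group (e.g. a gene or pathway)
        g = w[idx]
        norm = np.linalg.norm(g)
        if norm > lam:                # shrink the surviving groups toward zero
            out[idx] = g * (1 - lam / norm)
    return out

w = np.array([0.9, 0.8, 0.05, -0.02, 0.6, -0.7])
groups = [np.arange(0, 2), np.arange(2, 4), np.arange(4, 6)]
print(group_soft_threshold(w, groups, lam=0.3))   # middle group is removed entirely
```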


Sensors ◽  
2021 ◽  
Vol 21 (13) ◽  
pp. 4618
Author(s):  
Francisco Oliveira ◽  
Miguel Luís ◽  
Susana Sargento

Unmanned Aerial Vehicle (UAV) networks are an emerging technology, useful not only for the military, but also for public and civil purposes. Their versatility provides advantages in situations where an existing network cannot support all the requirements of its users, either because of an exceptionally large number of users or because of the failure of one or more ground base stations. Networks of UAVs can reinforce these cellular networks where needed, redirecting the traffic to available ground stations. Using machine learning algorithms to predict overloaded traffic areas, we propose a UAV positioning algorithm responsible for determining suitable positions for the UAVs, with the objective of a more balanced redistribution of traffic, to avoid saturated base stations and decrease the number of users without a connection. The tests performed with real data of user connections through base stations show that, in less restrictive network conditions, the algorithm that dynamically places the UAVs performs significantly better than in more restrictive conditions, significantly reducing the number of users without a connection. We also conclude that the accuracy of the prediction is a very important factor, not only in the reduction of users without a connection, but also in the number of UAVs deployed.
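One simple way to realize such a positioning step is to cluster the predicted unserved users and hover one UAV over each cluster center. The sketch below assumes hypothetical predicted user positions and uses k-means; the paper's actual positioning algorithm and prediction model may differ.

```python
# Sketch: turning predicted overload into candidate UAV hover positions.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
# Hypothetical (x, y) positions of users predicted to be left unserved.
unserved = rng.uniform(0, 1000, size=(300, 2))

n_uavs = 4
km = KMeans(n_clusters=n_uavs, n_init=10, random_state=0).fit(unserved)
uav_positions = km.cluster_centers_   # one hover point per UAV
print(uav_positions)
```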


2020 ◽  
Vol 11 (1) ◽  
pp. 39
Author(s):  
Eric Järpe ◽  
Mattias Weckstén

A new method for musical steganography in the MIDI format is presented. The MIDI standard is a user-friendly music technology protocol that is frequently deployed by composers of different levels of ambition. To the authors' knowledge, there is no fully implemented, rigorously specified, publicly available method for MIDI steganography. The goal of this study is therefore to investigate how a novel MIDI steganography algorithm can be implemented by manipulation of the velocity attribute, subject to restrictions of capacity and security. Many of today's MIDI steganography methods, less rigorously described in the literature, fail to be resilient to steganalysis. Traces that could catch the eye of a scrutinizing steganalyst, such as artefacts in the MIDI code which would not occur in the mere generation of MIDI music (MIDI file-size inflation, radical changes in the mean absolute error or peak signal-to-noise ratio of certain kinds of MIDI events, or even audible effects in the stego MIDI file), are side effects of many current methods described in the literature. Resilience to steganalysis is an imperative property of a steganography method. By restricting the carrier MIDI files to classical organ and harpsichord pieces, the problem of velocities following the mood of the music can be avoided. The proposed method, called Velody 2, is found to be on par with or better than the cutting-edge alternative methods regarding capacity and inflation, while still possessing a better resilience against steganalysis. An audibility test was conducted to check that there are no signs of audible traces in the stego MIDI files.
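For orientation, velocity-based embedding can be illustrated with a toy least-significant-bit scheme using the `mido` library. This is emphatically not Velody 2, whose capacity and security restrictions the paper specifies; the file names below are placeholders.

```python
# Toy illustration of where the velocity attribute is manipulated: hide one
# bit per note-on in the velocity's least significant bit.  NOT Velody 2.
import mido

def embed(in_path, out_path, bits):
    mid = mido.MidiFile(in_path)
    it = iter(bits)
    for track in mid.tracks:
        for msg in track:
            if msg.type == "note_on" and msg.velocity > 0:
                try:
                    b = next(it)
                except StopIteration:
                    break                                 # payload exhausted
                msg.velocity = (msg.velocity & ~1) | b    # overwrite the LSB
    mid.save(out_path)

embed("carrier.mid", "stego.mid", bits=[1, 0, 1, 1, 0])   # placeholder files
```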


Author(s):  
Wenjun Huo ◽  
Peng Chu ◽  
Kai Wang ◽  
Liangting Fu ◽  
Zhigang Niu ◽  
...  

In order to study methods for detecting weak transient electromagnetic radiation signals, a detection algorithm integrating generalized cross-correlation and chaotic sequence prediction is proposed in this paper. Based on a dual-antenna test and a cross-correlation information estimation method, the detection of aperiodic weak discharge signals under a low signal-to-noise ratio is transformed into the estimation of periodic delay parameters, and the noise is reduced at the same time. The feasibility of this method is verified by simulation and experimental analysis. The results show that, under low signal-to-noise conditions, the integrated method can effectively suppress the influence of noise disturbances. It has a high detection probability for weak transient electromagnetic radiation signals and needs fewer pulse accumulations, which improves the detection efficiency and makes the method more suitable for long-distance detection of weak electromagnetic radiation sources.
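The dual-antenna delay estimation described above is conventionally implemented as a generalized cross-correlation with phase-transform (GCC-PHAT) weighting, followed by a peak search. The self-contained sketch below uses simulated two-antenna data; the signal model, noise level, and 40-sample delay are illustrative.

```python
# Sketch: generalized cross-correlation with PHAT weighting for delay estimation.
import numpy as np

rng = np.random.default_rng(3)
n, delay = 4096, 40
s = rng.standard_normal(n)
x1 = s + 0.5 * rng.standard_normal(n)                  # antenna 1
x2 = np.roll(s, delay) + 0.5 * rng.standard_normal(n)  # antenna 2, delayed copy

X1, X2 = np.fft.rfft(x1), np.fft.rfft(x2)
cross = X2 * np.conj(X1)
gcc = np.fft.irfft(cross / (np.abs(cross) + 1e-12))    # PHAT: keep phase only
est = int(np.argmax(gcc))
print(est if est < n // 2 else est - n)                # -> 40
```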


Geophysics ◽  
2006 ◽  
Vol 71 (5) ◽  
pp. U67-U76 ◽  
Author(s):  
Robert J. Ferguson

The possibility of improving regularization/datuming of seismic data is investigated by treating wavefield extrapolation as an inversion problem. Weighted, damped least squares is then used to produce the regularized/datumed wavefield. Regularization/datuming is extremely costly because of the computation of the Hessian, so an efficient approximation is introduced, achieved by computing only a limited number of diagonals in the operators involved. Real and synthetic data examples demonstrate the utility of this approach. For synthetic data, regularization/datuming is demonstrated for large extrapolation distances using a highly irregular recording array. Without approximation, regularization/datuming returns a regularized wavefield with reduced operator artifacts when compared to a nonregularizing method such as generalized phase shift plus interpolation (PSPI). Approximate regularization/datuming returns a regularized wavefield at approximately two orders of magnitude less cost, but it is dip limited, though in a controllable way, compared to the full method. The Foothills structural data set, a freely available data set from the Rocky Mountains of Canada, demonstrates the application to real data. The data have highly irregular sampling along the shot coordinate and suffer from significant near-surface effects. Approximate regularization/datuming returns common-receiver data that are superior in appearance to conventionally datumed data.
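The weighted, damped least-squares solve at the heart of this approach can be written compactly. The sketch below uses a random matrix as a stand-in for the extrapolation operator G; the comment marks where the paper's diagonal approximation of the Hessian would enter.

```python
# Sketch: weighted, damped least squares, m = argmin ||W(Gm - d)||^2 + eps||m||^2.
import numpy as np

rng = np.random.default_rng(4)
n_data, n_model, eps = 120, 80, 1e-2
G = rng.standard_normal((n_data, n_model))   # stand-in extrapolation operator
d = rng.standard_normal(n_data)              # recorded wavefield (stand-in)
W = np.diag(rng.uniform(0.5, 1.5, n_data))   # data weights

# Damped normal equations; the paper's speed-up approximates this Hessian
# (G^T W^T W G) by keeping only a limited number of its diagonals.
H = G.T @ W.T @ W @ G + eps * np.eye(n_model)
m = np.linalg.solve(H, G.T @ W.T @ W @ d)
print(m.shape)
```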


2001 ◽  
Vol 7 (1) ◽  
pp. 97-112 ◽  
Author(s):  
Yulia R. Gel ◽  
Vladimir N. Fomin

Usually the coefficients in a stochastic time series model are partially or entirely unknown when a realization of the time series is observed. Sometimes the unknown coefficients can be estimated from the realization with the required accuracy, which eventually allows optimizing the data handling of the stochastic time series. Here it is shown that the recurrent least-squares (LS) procedure provides strongly consistent estimates for a linear autoregressive (AR) equation of infinite order obtained from a minimal-phase autoregressive moving average (ARMA) equation. The LS identification algorithm is complemented by the Padé approximation used for the estimation of the unknown ARMA parameters.
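The recurrent LS procedure referred to here is the standard recursive least-squares update; the sketch below identifies a simulated AR(2) process online. The order, coefficients, and initialization are illustrative choices.

```python
# Sketch: recursive least-squares (RLS) identification of AR coefficients.
import numpy as np

rng = np.random.default_rng(5)
a_true = np.array([0.6, -0.3])       # true AR(2) coefficients
n, p = 2000, 2
y = np.zeros(n)
for t in range(p, n):
    y[t] = a_true @ y[t - p:t][::-1] + rng.standard_normal()

theta = np.zeros(p)                  # running coefficient estimates
P = 1e3 * np.eye(p)                  # inverse information matrix
for t in range(p, n):
    phi = y[t - p:t][::-1]           # regressor [y_{t-1}, y_{t-2}]
    k = P @ phi / (1.0 + phi @ P @ phi)   # gain vector
    theta += k * (y[t] - phi @ theta)     # innovation update
    P -= np.outer(k, phi @ P)             # covariance update
print(theta)                         # close to a_true
```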


2013 ◽  
Vol 9 (S304) ◽  
pp. 243-243
Author(s):  
Takamitsu Miyaji ◽  
M. Krumpe ◽  
A. Coil ◽  
H. Aceves ◽  
B. Husemann

Abstract: We present the results of our series of studies on the correlation function and halo occupation distribution of AGNs, utilizing data from the ROSAT All-Sky Survey (RASS) and the Sloan Digital Sky Survey (SDSS) in the redshift range 0.07 < z < 0.36. In order to improve the signal-to-noise ratio, we take a cross-correlation approach, in which cross-correlation functions (CCFs) between AGNs and the much more numerous galaxies are analyzed. The calculated CCFs are analyzed using the halo occupation distribution (HOD) model, where each CCF is divided into the term contributed by AGN–galaxy pairs that reside in one dark matter halo (DMH) (the 1-halo term) and those from two different DMHs (the 2-halo term). The 2-halo term is the indicator of the bias parameter, which is a function of the typical mass of the DMHs in which the AGNs reside. The combination of the 1-halo and 2-halo terms gives not only the typical DMH mass, but also how the AGNs are distributed among the DMHs as a function of mass, separately for those at the centers of the DMHs and for satellites. The main results are as follows: (1) the typical mass of the DMHs in the various AGN sub-samples spans log(M_DMH/(h^-1 M_⊙)) ~ 12.4–13.4; (2) we found a dependence of the AGN bias parameter on the X-ray luminosity of the AGNs, while the optical luminosity dependence is not significant, probably due to the smaller dynamic range in luminosity of the optically selected sample; and (3) the growth of the number of AGNs per DMH, N(M_DMH), with M_DMH is shallow, or may even be flat, contrary to the galaxy population in general, which grows proportionally with M_DMH, suggesting a suppression of AGN triggering in denser environments. In order to investigate the origin of the X-ray luminosity dependence, we are also investigating the dependence of clustering on black hole mass and Eddington ratio, and we present the results of this investigation as well.
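The CCF measurement underlying these results amounts to counting AGN–galaxy pairs as a function of separation and normalizing by a random catalog. The sketch below shows a bare-bones DD/DR − 1 estimator on mock uniform catalogs; real analyses use survey geometry, weights, and a Landy–Szalay-type estimator.

```python
# Sketch: a minimal pair-count cross-correlation estimator on mock catalogs.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(6)
agn = rng.uniform(0, 100, (200, 3))        # sparse tracer (AGNs)
gal = rng.uniform(0, 100, (5000, 3))       # dense tracer (galaxies)
randoms = rng.uniform(0, 100, (5000, 3))   # random catalog for normalization

edges = np.logspace(-0.5, 1.2, 10)         # separation bins (arbitrary units)
t_agn = cKDTree(agn)

def pair_counts(points, edges):
    """AGN-x pairs per separation bin via cumulative neighbor counts."""
    cum = t_agn.count_neighbors(cKDTree(points), edges)
    return np.diff(cum)

DD = pair_counts(gal, edges)               # data-data (AGN-galaxy) pairs
DR = pair_counts(randoms, edges)           # data-random pairs
ccf = DD / np.maximum(DR, 1) - 1           # simple DD/DR - 1 estimator
print(ccf)                                 # ~0 for these uncorrelated mocks
```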

