Efficient Pathway Identification from Geospatial Trajectories

Author(s):  
Carola Trahms ◽  
Patricia Handmann ◽  
Willi Rath ◽  
Matthias Renz ◽  
Martin Visbeck

In the earth-physics community, Lagrangian trajectories are used in multiple contexts, such as analyzing the spreading of pollutants in the air or studying the connectivity between two ocean regions of interest. These experiments generate huge amounts of data reporting the geo-position and other variables, e.g. temperature, depth or salinity, for particles spreading in the ocean. As the state of the art, such experiments are analyzed and visualized by binning the particle positions into pre-defined rectangular boxes. For each box a particle density is computed, which then yields a probability map to visualize major pathways. Identifying the main pathways directly, however, remains a challenge when huge numbers of particles and variables are involved.

We propose a novel method that focuses on linking the net fluctuation of particles between adaptable hexagonal grid cells. For very small areas, rectangular boxing does not imply big differences in area or shape, but when gridding larger areas it introduces rather large distortions. Using hexagons instead provides multiple advantages, such as constant distances between the centers of neighboring cells and more possibilities of movement due to 6 edges instead of 4, while at the same time having fewer neighbors (6 instead of 8). The net fluctuation can be viewed as a transition strength between the cells. Through this network perspective, the density of the transition strength can be visualized clearly. The main pathways are the transitions with the highest net fluctuation, so simple statistical filtering can be used to reveal them. The combination of network analysis and adaptable hexagonal grid cells yields a surprisingly time- and resource-efficient way to identify main pathways.
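
As a rough illustration of this idea (not the authors' implementation), the sketch below bins trajectory points into hexagonal cells with the h3-py library (v3-style geo_to_h3 assumed), accumulates transitions between consecutive cells, reduces them to net fluctuations, and keeps only the transitions above a percentile threshold; the function name and all parameter values are illustrative.

```python
from collections import Counter

import numpy as np
import h3  # hexagonal hierarchical indexing; v3 API (geo_to_h3) assumed


def main_pathways(trajectories, resolution=3, percentile=90):
    """Net particle fluctuation between hexagonal cells, filtered to the strongest links.

    trajectories: iterable of particle tracks, each a sequence of (lat, lon) positions.
    Returns a dict mapping (cell_a, cell_b) -> net transition count for the main pathways.
    """
    flux = Counter()
    for track in trajectories:
        cells = [h3.geo_to_h3(lat, lon, resolution) for lat, lon in track]
        for a, b in zip(cells, cells[1:]):
            if a != b:  # only count actual cell-to-cell moves
                flux[(a, b)] += 1

    # Net fluctuation: flow a -> b minus the opposing flow b -> a.
    net = {(a, b): n - flux[(b, a)] for (a, b), n in flux.items() if n > flux[(b, a)]}
    if not net:
        return {}

    # Simple statistical filter: keep only the strongest transitions.
    threshold = np.percentile(list(net.values()), percentile)
    return {edge: n for edge, n in net.items() if n >= threshold}
```

The surviving edges, together with the hexagon geometry, can then be drawn as a directed network whose strongest links trace the main pathways.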

This volume vividly demonstrates the importance and increasing breadth of quantitative methods in the earth sciences. With contributions from an international cast of leading practitioners, chapters cover a wide range of state-of-the-art methods and applications, including computer modeling and mapping techniques. Many chapters also contain reviews and extensive bibliographies which serve to make this an invaluable introduction to the entire field. In addition to its detailed presentations, the book includes chapters on the history of geomathematics and on R.G.V. Eigen, the "father" of mathematical geology. Written to commemorate the 25th anniversary of the International Association for Mathematical Geology, the book will be sought after by both practitioners and researchers in all branches of geology.


2021 ◽  
Vol 9 (1) ◽  
Author(s):  
Aysen Degerli ◽  
Mete Ahishali ◽  
Mehmet Yamac ◽  
Serkan Kiranyaz ◽  
Muhammad E. H. Chowdhury ◽  
...  

Abstract Computer-aided diagnosis has become a necessity for accurate and immediate coronavirus disease 2019 (COVID-19) detection to aid treatment and prevent the spread of the virus. Numerous studies have proposed to use deep learning techniques for COVID-19 diagnosis. However, they have used very limited chest X-ray (CXR) image repositories for evaluation, with only a few hundred COVID-19 samples. Moreover, these methods can neither localize nor grade the severity of COVID-19 infection. For this purpose, recent studies proposed to explore the activation maps of deep networks. However, they remain inaccurate for localizing the actual infection, making them unreliable for clinical use. This study proposes a novel method for the joint localization, severity grading, and detection of COVID-19 from CXR images by generating so-called infection maps. To accomplish this, we have compiled the largest dataset, with 119,316 CXR images including 2951 COVID-19 samples, where the annotation of the ground-truth segmentation masks is performed on CXRs by a novel collaborative human–machine approach. Furthermore, we publicly release the first CXR dataset with ground-truth segmentation masks of the COVID-19 infected regions. A detailed set of experiments shows that state-of-the-art segmentation networks can learn to localize COVID-19 infection with an F1-score of 83.20%, which is significantly superior to the activation maps created by previous methods. Finally, the proposed approach achieved a COVID-19 detection performance with 94.96% sensitivity and 99.88% specificity.


Author(s):  
Mingliang Xu ◽  
Qingfeng Li ◽  
Jianwei Niu ◽  
Hao Su ◽  
Xiting Liu ◽  
...  

Quick response (QR) codes are usually scanned in different environments, so they must be robust to variations in illumination, scale, coverage, and camera angle. Aesthetic QR codes improve the visual quality, but subtle changes in their appearance may cause scanning failure. In this article, a new method to generate scanning-robust aesthetic QR codes is proposed, based on a module-based scanning probability estimation model that can effectively balance the trade-off between visual quality and scanning robustness. Our method locally adjusts the luminance of each module by estimating the probability of successful sampling. The approach adopts a hierarchical, coarse-to-fine strategy to enhance the visual quality of aesthetic QR codes, sequentially generating three codes: a binary aesthetic QR code, a grayscale aesthetic QR code, and the final color aesthetic QR code. Our approach can also be used to create QR codes with different visual styles by adjusting some initialization parameters. User surveys and decoding experiments were used to evaluate our method against state-of-the-art algorithms, indicating that the proposed approach performs excellently in terms of both visual quality and scanning robustness.
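
The module-level idea can be sketched as follows, under a deliberately simplified model that is not the paper's estimation model: assume the scanner samples each module with Gaussian noise and binarizes it against a global threshold, so the probability of a correct read is a normal CDF, and luminance is nudged toward the coded value only until that probability reaches a target. All names and parameter values here are illustrative.

```python
import math


def p_correct(luminance, target_bit, threshold=0.5, sigma=0.1):
    """Probability that a module is sampled on the correct side of the binarization
    threshold, under a simple Gaussian sampling-noise assumption (bit 1 = dark)."""
    p_dark = 0.5 * (1.0 + math.erf((threshold - luminance) / (sigma * math.sqrt(2.0))))
    return p_dark if target_bit == 1 else 1.0 - p_dark


def adjust_module(luminance, target_bit, p_min=0.95, step=0.02):
    """Nudge a module's luminance toward its coded value only as far as needed
    for the estimated scanning probability to reach p_min, preserving appearance."""
    while p_correct(luminance, target_bit) < p_min:
        luminance += -step if target_bit == 1 else step
        luminance = min(1.0, max(0.0, luminance))
        if luminance in (0.0, 1.0):
            break
    return luminance
```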


Electronics ◽  
2021 ◽  
Vol 10 (13) ◽  
pp. 1517
Author(s):  
Xinsheng Wang ◽  
Xiyue Wang

True random number generators (TRNGs) have been a research hotspot due to the requirements of secure encryption algorithms. Such circuits are therefore necessary building blocks in state-of-the-art security controllers. In this paper, a TRNG based on random telegraph noise (RTN) with a controllable rate is proposed. A novel noise array circuit is presented, which consists of digital decoder circuits and RTN noise circuits. The random number generation rate is controlled by the speed at which different gating signals are selected. Simulation results show that the array circuit, consisting of 64 noise-source circuits, can generate random numbers at frequencies from 1 kHz to 16 kHz.
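
As a purely behavioural sketch of the architecture described above (not a circuit model), the snippet below treats each of the 64 noise sources as a two-state RTN process and lets a decoder-like selector pick one source per output bit; the selection rate stands in for the gating-signal speed that controls the output frequency. The flip probability and all other values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng()

N_SOURCES = 64   # noise-source circuits in the array (from the paper)
P_FLIP = 0.3     # per-step trap capture/emission probability (illustrative)


def rtn_trng(n_bits, select_rate_hz=4_000):
    """Behavioural sketch: every source toggles between its two RTN levels at
    random; a decoder cycles through the sources, and the selection clock
    (1-16 kHz in the paper) sets the output bit rate."""
    states = rng.integers(0, 2, size=N_SOURCES)         # current RTN level per source
    bits = np.empty(n_bits, dtype=np.uint8)
    for i in range(n_bits):
        states ^= (rng.random(N_SOURCES) < P_FLIP).astype(states.dtype)
        bits[i] = states[i % N_SOURCES]                  # decoder selects one gating signal
    return bits, n_bits / select_rate_hz                 # bits and elapsed time in seconds
```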


2016 ◽  
Vol 2016 ◽  
pp. 1-10 ◽  
Author(s):  
Huaping Guo ◽  
Weimei Zhi ◽  
Hongbing Liu ◽  
Mingliang Xu

In recent years, the imbalanced learning problem has attracted more and more attention from both academia and industry; the problem concerns the performance of learning algorithms in the presence of data with severe class-distribution skew. In this paper, we apply the well-known statistical model of logistic discrimination to this problem and propose a novel method to improve its performance. To fully account for the class imbalance, we design a new cost function which takes into account the accuracies of both the positive and negative classes as well as the precision of the positive class. Unlike traditional logistic discrimination, the proposed method learns its parameters by maximizing the proposed cost function. Experimental results show that, compared with other state-of-the-art methods, the proposed one performs significantly better on measures of recall, g-mean, f-measure, AUC, and accuracy.
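
Since the paper's exact cost function is not reproduced here, the sketch below uses a hypothetical smooth surrogate that averages positive-class accuracy (recall), negative-class accuracy (specificity) and positive-class precision computed from the logistic scores, and fits the parameters by maximizing it; the names and the optimizer choice are illustrative.

```python
import numpy as np
from scipy.optimize import minimize


def _probs(w, X):
    """Logistic discrimination scores P(y = 1 | x); w[0] is the intercept."""
    return 1.0 / (1.0 + np.exp(-(X @ w[1:] + w[0])))


def imbalance_cost(w, X, y):
    """Hypothetical cost: mean of soft recall, soft specificity and soft precision
    (returned negated so that a minimizer maximizes it)."""
    p = _probs(w, X)
    recall = p[y == 1].mean()
    specificity = (1.0 - p[y == 0]).mean()
    precision = p[y == 1].sum() / (p.sum() + 1e-12)
    return -(recall + specificity + precision) / 3.0


def fit(X, y):
    """Learn the parameters by maximizing the cost instead of the usual likelihood."""
    w0 = np.zeros(X.shape[1] + 1)
    return minimize(imbalance_cost, w0, args=(X, y), method="Nelder-Mead").x
```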


2018 ◽  
Vol 35 (14) ◽  
pp. 2458-2465 ◽  
Author(s):  
Johanna Schwarz ◽  
Dominik Heider

Abstract Motivation Clinical decision support systems have been applied in numerous fields, ranging from cancer survival to drug resistance prediction. Nevertheless, clinical decision support systems typically have a caveat: many of them are perceived as black-boxes by non-experts and, unfortunately, the obtained scores cannot usually be interpreted as class probability estimates. In probability-focused medical applications, it is not sufficient to perform well with regard to discrimination and, consequently, various calibration methods have been developed to enable probabilistic interpretation. The aims of this study were (i) to develop a tool for fast and comparative analysis of different calibration methods, (ii) to demonstrate their limitations for the use on clinical data and (iii) to introduce our novel method GUESS. Results We compared the performances of two different state-of-the-art calibration methods, namely histogram binning and Bayesian Binning in Quantiles, as well as our novel method GUESS on both simulated and real-world datasets. GUESS demonstrated calibration performance comparable to the state-of-the-art methods and always retained accurate class discrimination. GUESS showed superior calibration performance in small datasets and therefore may be an optimal calibration method for typical clinical datasets. Moreover, we provide a framework (CalibratR) for R, which can be used to identify the most suitable calibration method for novel datasets in a timely and efficient manner. Using calibrated probability estimates instead of original classifier scores will contribute to the acceptance and dissemination of machine learning based classification models in cost-sensitive applications, such as clinical research. Availability and implementation GUESS as part of CalibratR can be downloaded at CRAN.
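
For orientation, histogram binning (one of the baselines compared above) is illustrated below, in Python rather than R; this is not the CalibratR implementation. Scores are grouped into equal-frequency bins and each bin is mapped to its observed positive rate.

```python
import numpy as np


def fit_histogram_binning(scores, labels, n_bins=10):
    """Learn a histogram-binning calibration map: equal-frequency bins over the
    classifier scores, each mapped to the empirical positive rate inside it."""
    edges = np.quantile(scores, np.linspace(0.0, 1.0, n_bins + 1))
    bin_idx = np.digitize(scores, edges[1:-1])          # bin indices 0 .. n_bins - 1
    bin_prob = np.array([
        labels[bin_idx == b].mean() if np.any(bin_idx == b) else 0.5
        for b in range(n_bins)
    ])
    return edges, bin_prob


def calibrate(scores, edges, bin_prob):
    """Map new classifier scores to calibrated probability estimates."""
    return bin_prob[np.digitize(scores, edges[1:-1])]
```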


Ocean Science ◽  
2011 ◽  
Vol 7 (5) ◽  
pp. 651-659 ◽  
Author(s):  
M. Le Menn

Abstract. In the current state of the art, salinity is a quantity computed from conductivity ratio measurements, with temperature and pressure known at the time of the measurement, using the Practical Salinity Scale algorithm of 1978 (PSS-78). This calculation gives practical salinity values S. The uncertainty expected in PSS-78 values is ±0.002, but no details have ever been given on the method used to work out this uncertainty, or on the error sources to include in the calculation. Following a guide published by the Bureau International des Poids et Mesures (BIPM), and using two independent methods, this paper assesses the uncertainties of salinity values obtained from a laboratory salinometer and from Conductivity-Temperature-Depth (CTD) measurements after laboratory calibration of a conductivity cell. The results show that the part due to the fits of the PSS-78 relations is sometimes as significant as that of the instruments. This is particularly the case with CTD measurements, where correlations between variables contribute mainly to decreasing the uncertainty of S, even when the expanded uncertainties of conductivity cell calibrations are for the most part on the order of 0.002 mS cm−1. The relations given here, obtained with the normalized GUM method, allow a real analysis of the sources of uncertainty, and they can be used in a more general way with instruments having different specifications.
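
To make the propagation step concrete, the sketch below applies the GUM law of propagation of uncertainty to practical salinity using finite-difference sensitivity coefficients, with the PSS-78 evaluation delegated to the gsw package (gsw.SP_from_C assumed available); unlike the analysis above, correlations between the inputs are neglected, and all step sizes are illustrative.

```python
import numpy as np
import gsw  # TEOS-10 / PSS-78 routines; gsw.SP_from_C(C, t, p) assumed available


def salinity_uncertainty(C, t, p, u_C, u_t, u_p):
    """Combined standard uncertainty u(S) of practical salinity from standard
    uncertainties of conductivity (mS/cm), temperature (deg C, ITS-90) and
    pressure (dbar), propagated through PSS-78 with finite differences.
    Input correlations are neglected in this sketch."""
    def S(C_, t_, p_):
        return gsw.SP_from_C(C_, t_, p_)

    dC, dt, dp = 1e-3, 1e-3, 1e-1                            # finite-difference steps
    c_C = (S(C + dC, t, p) - S(C - dC, t, p)) / (2 * dC)     # sensitivity dS/dC
    c_t = (S(C, t + dt, p) - S(C, t - dt, p)) / (2 * dt)     # sensitivity dS/dt
    c_p = (S(C, t, p + dp) - S(C, t, p - dp)) / (2 * dp)     # sensitivity dS/dp
    return np.sqrt((c_C * u_C) ** 2 + (c_t * u_t) ** 2 + (c_p * u_p) ** 2)
```

Multiplying the result by a coverage factor k = 2 gives the expanded uncertainty in the GUM sense.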


2009 ◽  
Vol 6 (3) ◽  
pp. 2461-2485 ◽  
Author(s):  
M. Le Menn

Abstract. Salinity is, in the current state of the art, a quantity computed from conductivity ratio measurements, with temperature and pressure known at the time of the measurement, using the Practical Salinity Scale algorithm of 1978 (PSS-78), which gives practical salinity values S. The uncertainty expected on PSS-78 values is ±0.002, but the method used to work out this uncertainty, and the sources of error to include in the calculation, have never been detailed. Following a guide published by the Bureau International des Poids et Mesures (BIPM), this paper assesses, by two independent methods, the uncertainties of salinity values obtained from a laboratory salinometer and from Conductivity-Temperature-Depth (CTD) measurements after laboratory calibration of a conductivity cell. The results show that the part due to the fits of the PSS-78 relations is sometimes as significant as that of the instruments. This is particularly the case with CTD measurements, where correlations between the variables contribute largely to decreasing the uncertainty on S, even when the expanded uncertainties on conductivity cell calibrations are largely above 0.002 mS/cm. The relations given in this publication, obtained with the normalized GUM method, allow a real analysis of the sources of uncertainty, and they can be used in a more general way with instruments having different specifications.


Author(s):  
Manjunath B. E ◽  
D. G. Anand ◽  
Mahant. G. Kattimani

Airborne Light Detection and Ranging (LiDAR) provides accurate height information for objects on the earth, which has made LiDAR more and more popular in terrain and land surveying. In particular, LiDAR data offer vital and significant features for land-cover classification, which is an important task in many application domains. Aerial photos with LiDAR data were processed with genetic algorithms, not only for feature extraction but also for orthographic image generation. The digital surface model (DSM) provided by LiDAR reduces the number of ground control points (GCPs) needed for regular processing, which is why both efficiency and accuracy are greatly improved. LiDAR is an acronym for Light Detection and Ranging, typically defined as an integration of three technologies into a single system capable of acquiring data to produce accurate Digital Elevation Models.


2019 ◽  
Author(s):  
Mehrdad Shoeiby ◽  
Mohammad Ali Armin ◽  
Sadegh Aliakbarian ◽  
Saeed Anwar ◽  
Lars Petersson

Advances in the design of multi-spectral cameras have led to great interest in a wide range of applications, from astronomy to autonomous driving. However, such cameras inherently suffer from a trade-off between spatial and spectral resolution. In this paper, we propose to address this limitation by introducing a novel method to carry out super-resolution on raw mosaic images, multi-spectral or RGB Bayer, captured by modern real-time single-shot mosaic sensors. To this end, we design a deep super-resolution architecture that benefits from a sequential feature pyramid along the depth of the network. This, in fact, is achieved by utilizing a convolutional LSTM (ConvLSTM) to learn the inter-dependencies between features at different receptive fields. Additionally, by investigating the effect of different attention mechanisms in our framework, we show that a ConvLSTM-inspired module is able to provide superior attention in our context. Our extensive experiments and analyses evidence that our approach yields significant super-resolution quality, outperforming current state-of-the-art mosaic super-resolution methods on both Bayer and multi-spectral images. Additionally, to the best of our knowledge, our method is the first specialized method to super-resolve mosaic images, whether multi-spectral or Bayer.
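
The ConvLSTM building block mentioned above can be sketched generically as follows (a standard ConvLSTM cell in PyTorch, not the authors' architecture): the recurrent state is itself a feature map, so feeding the features of successive pyramid levels through the cell lets it relate information from different receptive fields.

```python
import torch
import torch.nn as nn


class ConvLSTMCell(nn.Module):
    """Generic ConvLSTM cell with a single convolution computing all four gates."""

    def __init__(self, in_ch, hidden_ch, kernel_size=3):
        super().__init__()
        self.gates = nn.Conv2d(in_ch + hidden_ch, 4 * hidden_ch,
                               kernel_size, padding=kernel_size // 2)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        i, f, o, g = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o), torch.tanh(g)
        c = f * c + i * g
        h = o * torch.tanh(c)
        return h, c


# Illustrative use: relate feature maps from three pyramid levels with one shared cell.
cell = ConvLSTMCell(in_ch=64, hidden_ch=64)
h = c = torch.zeros(1, 64, 32, 32)
for feat in (torch.randn(1, 64, 32, 32) for _ in range(3)):
    h, c = cell(feat, (h, c))
```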

