Evaluating bedrock outcrop mapping algorithms across diverse landscapes

Author(s):  
Brittany Selander ◽  
Suzanne Anderson ◽  
Matthew Rossi

<p>            Mapping bedrock outcrops is useful across disciplines, but it is challenging in environments where the ground surface is obscured. The presence of soil or bedrock affects sediment production and transport, local ecology, and runoff generation. The distribution of bedrock outcrops in an area reflects the interplay between regolith production and sediment removal. Outcrop classification methods based on terrestrial lidar, which yields millimeter- to centimeter-resolution DEMs, are highly successful because lidar penetrates vegetation to reach the ground surface. However, data availability at such high resolution is limited, and the computational cost of identifying outcrops or other surface features at that resolution is often impractical for landscape-scale analysis. Aerial lidar datasets at ~1-m resolution (i.e., moderate resolution) are more widely available and less computationally expensive than higher-resolution datasets. With the increasing accessibility of moderate-resolution surface data, there is a need to develop outcrop classification methods and to understand their efficacy across diverse environments. Our objectives are to present a simplified technique that builds on existing methods, and to examine the success of current outcrop identification methods in a variety of landscapes.</p><p>            At moderate resolution, the two most cited metrics for differentiating bedrock from soil-mantled surfaces are based on gradient (e.g., DiBiase et al., 2012) or on surface roughness (e.g., Milodowski et al., 2015). We developed a method that simplifies and combines both metrics, and that improves overall accuracy. We applied all three methods to six landscapes in the USA. For each site, we delineated ground truth from high-resolution orthoimagery for 7-10 test patches with a visible ground surface that evenly spanned 0-100% exposed outcrop.
Overall accuracy, true positive rate, and false positive rate for each patch were calculated by comparing the ground-truth grids to each lidar-derived outcrop grid on a cell-by-cell basis. Metric success was evaluated for each landscape by assessing the mean and distribution of performance measures across patches. Our combined metric had the highest overall accuracy in an arid horst-and-graben landscape (Canyonlands National Park, Utah). It also performed well on a vegetated, high-sediment-load active volcano (Mount Rainier, Washington), in a canyon carved by channel incision (Boulder Canyon, Colorado), and in a chaparral-covered mixed bedrock canyon environment (Mission Trails, San Diego, California). All three methods systematically failed for portions of the landscape in glacially carved canyons (Southern Wind River Range, Wyoming) and on terraced sea cliffs (Santa Cruz County, California). These environments have significant outcrop that is both smooth and low gradient, and that therefore cannot be identified by a slope- or roughness-based algorithm.</p><p>            Our work highlights the importance of tailoring DEM-based bedrock mapping algorithms to their geomorphic context, and the need for ground truth. Such data provide the basis for developing more robust methods for error evaluation. In addition, new methods are needed to identify bedrock outcrop from surface DEMs in smooth, low-gradient, yet rocky landscapes.</p>
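A minimal sketch of the cell-by-cell evaluation, together with a toy combined slope-plus-roughness classifier in the spirit of the methods compared here, might look like the following; the thresholds and the 3x3 roughness window are illustrative assumptions, not the study's calibrated values:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def classify_outcrop(dem, cell=1.0, slope_thresh=0.8, rough_thresh=0.05):
    """Label cells as outcrop where gradient OR local roughness is high.
    Thresholds here are illustrative placeholders."""
    gy, gx = np.gradient(dem, cell)
    slope = np.hypot(gx, gy)                       # gradient magnitude (rise/run)
    # roughness: std of slope over a 3x3 window, edge-padded back to full size
    rough = sliding_window_view(slope, (3, 3)).std(axis=(-2, -1))
    rough = np.pad(rough, 1, mode="edge")
    return (slope > slope_thresh) | (rough > rough_thresh)

def patch_metrics(pred, truth):
    """Overall accuracy, true positive rate, and false positive rate
    for one patch, compared cell by cell against the ground-truth grid."""
    tp = np.sum(pred & truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    tn = np.sum(~pred & ~truth)
    acc = (tp + tn) / pred.size
    tpr = tp / (tp + fn)
    fpr = fp / (fp + tn)
    return acc, tpr, fpr
```

In practice each predicted grid would be compared against the orthoimagery-delineated ground-truth patches, and the per-patch metrics summarized per landscape.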

Author(s):  
Yosef S. Razin ◽  
Jack Gale ◽  
Jiaojiao Fan ◽  
Jaznae’ Smith ◽  
Karen M. Feigh

This paper evaluates Banks et al.’s Human-AI Shared Mental Model theory by examining how a self-driving vehicle’s hazard assessment facilitates shared mental models. Participants were asked to affirm the vehicle’s assessments of road objects as either hazards or mistakes in real time while behavioral and subjective measures were collected. The baseline performance of the AI was purposefully low (<50%) to examine how the human’s shared mental model might lead to inappropriate compliance. Results indicated that while the participants’ true positive rate was high, overall performance was reduced by a large false positive rate, indicating that participants were indeed influenced by the AI’s faulty assessments, despite full transparency as to the ground truth. Both performance and compliance were directly affected by frustration as well as by mental and even physical demands. Dispositional factors such as faith in other people’s cooperativeness and in technology companies were also significant. Thus, our findings strongly support the theory that shared mental models play a measurable role in performance and compliance, in a complex interplay with trust.


1991 ◽  
Vol 32 (6) ◽  
pp. 439-441 ◽  
Author(s):  
K. Young ◽  
F. Aspestrand ◽  
A. Kolbenstvedt

To elucidate the reliability of CT in the assessment of bronchiectasis, a retrospective study of high-resolution CT and bronchography was carried out. A segment-by-segment comparison of 259 segmental bronchi from 70 lobes of 27 lungs in 19 patients was performed using bronchography as the standard. CT was positive in 87 of 89 segmental bronchi with bronchiectasis, giving a false-negative rate of 2%. CT was negative in 169 of 170 segmental bronchi without bronchiectasis at bronchography, giving a false-positive rate of 1%. There was agreement between the two modalities in identifying the different types of bronchiectasis.


2020 ◽  
Vol 2020 (6) ◽  
pp. 50-1-50-8
Author(s):  
Deniz Aykac ◽  
Thomas Karnowski ◽  
Regina Ferrell ◽  
James S. Goddard

State departments of transportation often maintain extensive “video logs” of their roadways that include signs and lane markings, as well as non-image-based information such as grade, curvature, etc. In this work we use the Roadway Information Database (RID), developed for the Second Strategic Highway Research Program, as a surrogate for a video log to design and test algorithms that detect rumble strips in roadway images. Rumble strips are grooved patterns at the lane extremities designed to produce an audible cue to drivers who are in danger of lane departure. The RID contains 6,203,576 images of roads in six locations across the United States with extensive ground truth information and measurements, but the rumble strip measurements (length and spacing) were not recorded. We use an image correction process along with automated feature extraction and convolutional neural networks to detect rumble strip locations and measure their length and pitch. Based on independent measurements, we estimate our true positive rate to be 93% and our false positive rate to be 10%, with errors in length and spacing on the order of 0.09 meters RMS and 0.04 meters RMS, respectively. Our results illustrate the feasibility of this approach to add value to video logs after initial capture, as well as identify potential methods for autonomous navigation.
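Once groove centers have been detected, recovering pitch and extent is straightforward; a sketch, assuming hypothetical detector output as pixel coordinates along the strip (the image correction and CNN stages themselves are beyond this snippet):

```python
import numpy as np

def strip_measurements(groove_centers_px, meters_per_pixel):
    """Estimate rumble-strip pitch (center-to-center spacing) and overall
    extent from detected groove centers. The detector output format is an
    assumption for illustration; median spacing is used for robustness
    against a missed or spurious detection."""
    ys = np.sort(np.asarray(groove_centers_px, dtype=float))
    pitch = float(np.median(np.diff(ys)) * meters_per_pixel)
    length = float((ys[-1] - ys[0]) * meters_per_pixel)
    return pitch, length
```

With image correction mapping pixels to a known ground sample distance, the same measurement applies regardless of camera geometry.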


2021 ◽  
Vol 2021 ◽  
pp. 1-7
Author(s):  
Mingyang Zou ◽  
Junjie Liao ◽  
Yurong Zeng ◽  
Qianwen Guan ◽  
Bowen Lan

Cerebrovascular disease is increasing rapidly, and its high morbidity and mortality make it a serious threat to human health. To support early diagnosis and treatment, CT angiography combined with high-resolution magnetic resonance angiography was evaluated for vascular disease in acute cerebral apoplexy (stroke). 150 patients with ischemic stroke admitted to the Department of Radiology, Huizhou Central People’s Hospital, from January 2020 to December 2020 were selected. All patients underwent digital subtraction angiography (DSA), magnetic resonance angiography (MRA), and CT angiography (CTA) examinations. Results: There were 76 cases of aneurysm in DSA examination, accounting for 50.67%; 69 cases of vascular stenosis, accounting for 46%; and 5 cases of moyamoya disease, accounting for 3.33%. The number and proportion of cases of the above diseases in MRA examination were (75, 69, 71; 53.33%, 45.67%, 4%), and those in CTA examination were (71, 76, 3; 47.33%, 50.67%, 2%). Relative to the DSA gold standard, the sensitivity, specificity, and false positive rate of MRA were 81.51%, 95.19%, and 2.1%, respectively, and those of CTA were 95.78%, 79.17%, and 11.0%, respectively. The number of cases and accuracy of detection of cerebral aneurysms by MRA were (75, 96.57%) and those by CTA were (71, 91.2%); the difference was not statistically significant (P > 0.05). For the number of cases and detection accuracy of cerebrovascular malformations, MRA was (38, 92.68%) and CTA was (37, 90.24%), also not statistically significant (P > 0.05). Conclusion: The detection sensitivity and accuracy of MRA were better than those of CTA, while the specificity of CTA was superior to that of MRA. The differences between the two examinations were significant (P < 0.05), while sensitivity and false positive rate were not remarkably different (P > 0.05).
Therefore, the combination of the two examinations is of great significance for the diagnosis and treatment of stroke and other vascular diseases.


2020 ◽  
Vol 12 (21) ◽  
pp. 3471
Author(s):  
Walter T. Dado ◽  
Jillian M. Deines ◽  
Rinkal Patel ◽  
Sang-Zi Liang ◽  
David B. Lobell

Cloud computing and freely available, high-resolution satellite data have enabled recent progress in crop yield mapping at fine scales. However, extensive validation data at a matching resolution remain uncommon or infeasible to obtain. This has limited the ability to evaluate different yield estimation models and to improve understanding of the key features useful for yield estimation in both data-rich and data-poor contexts. Here, we assess machine learning models’ capacity for soybean yield prediction using a unique ground-truth dataset of high-resolution (5 m) yield maps generated from combine harvester yield monitor data for over a million field-year observations across the Midwestern United States from 2008 to 2018. First, we compare random forest (RF) implementations, testing a range of feature engineering approaches using Sentinel-2 and Landsat spectral data for 20- and 30-m scale yield prediction. We find that Sentinel-2-based models can explain up to 45% of out-of-sample yield variability from 2017 to 2018 (r2 = 0.45), while Landsat models explain up to 43% across the longer 2008–2018 period. Using discrete Fourier transforms, or harmonic regressions, to capture soybean phenology improved the Landsat-based model considerably. Second, we compare RF models trained using this ground-truth data to models trained on available county-level statistics. We find that county-level models rely more heavily on just a few predictors, namely August weather covariates (vapor pressure deficit, rainfall, temperature) and July and August near-infrared observations. As a result, county-scale models perform relatively poorly on field-scale validation (r2 = 0.32), especially for high-yielding fields, but perform similarly to field-scale models when evaluated at the county scale (r2 = 0.82). Finally, we test whether our findings on variable importance can inform a simple, generalizable framework for regions or time periods beyond ground data availability.
To do so, we test improvements to a Scalable Crop Yield Mapper (SCYM) approach that uses crop simulations to train statistical models for yield estimation. Based on findings from our RF models, we employ harmonic regressions to estimate peak vegetation index (VI) and a VI observation 30 days later, with August rainfall as the sole weather covariate in our new SCYM model. Modifications improved SCYM’s explained variance (r2 = 0.27 at the 30 m scale) and provide a new, parsimonious model.
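The harmonic-regression idea can be sketched as ordinary least squares on a first-order Fourier basis; the annual period and single-harmonic basis below are illustrative assumptions, not necessarily the specification used in the study:

```python
import numpy as np

def fit_harmonic(doy, vi, period=365.0):
    """Fit VI(t) = a + b*cos(2*pi*t/T) + c*sin(2*pi*t/T) by least squares
    to sparse (e.g., cloud-free) VI observations indexed by day of year."""
    t = 2.0 * np.pi * np.asarray(doy, dtype=float) / period
    X = np.column_stack([np.ones_like(t), np.cos(t), np.sin(t)])
    coef, *_ = np.linalg.lstsq(X, np.asarray(vi, dtype=float), rcond=None)
    return coef

def eval_harmonic(coef, doy, period=365.0):
    """Evaluate the fitted curve, e.g., to locate peak VI or to read off
    the VI value 30 days after the peak."""
    t = 2.0 * np.pi * np.asarray(doy, dtype=float) / period
    return coef[0] + coef[1] * np.cos(t) + coef[2] * np.sin(t)
```

Fitting on the available observations and then evaluating on a dense daily grid yields the peak-VI and peak-plus-30-days features used as model inputs.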


Author(s):  
Lin Jin ◽  
Shuai Hao ◽  
Haining Wang ◽  
Chase Cotton

It is challenging to conduct a large-scale Internet censorship measurement, as it involves triggering censors through artificial requests and identifying abnormalities from the corresponding responses. Due to the lack of ground truth on the expected responses from legitimate services, previous studies typically require heavy, unscalable manual inspection to identify false positives while still leaving false negatives undetected. In this paper, we propose Disguiser, a novel framework that enables end-to-end measurement to accurately detect censorship activities and reveal censor deployment without manual effort. The core of Disguiser is a control server that replies with a static payload to provide the ground truth of server responses. As such, we send requests from various types of vantage points across the world to our control server, and censorship activities can be recognized if a vantage point receives a different response. In particular, we design and conduct a cache test to pre-exclude the vantage points that could be interfered with by cache proxies along the network path. Then we perform application traceroute towards our control server to explore censors' behaviors and their deployment. With Disguiser, we conduct 58 million measurements from vantage points in 177 countries. We observe 292 thousand censorship activities that block DNS, HTTP, or HTTPS requests inside 122 countries, achieving a 10^-6 false positive rate and zero false negative rate. Furthermore, Disguiser reveals the censor deployment in 13 countries.
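The core comparison becomes trivial once the control server's payload is fixed; a sketch, where the payload string and response labels are hypothetical rather than Disguiser's actual wire format:

```python
import hashlib

# Hypothetical static body served by the control server (ground truth)
CONTROL_PAYLOAD = b"control-server-static-payload-v1"
CONTROL_HASH = hashlib.sha256(CONTROL_PAYLOAD).hexdigest()

def classify_response(body):
    """Classify a vantage point's response against the known control payload.
    Assumes cache-proxy vantage points were already excluded by the cache test."""
    if not body:
        return "blocked"      # e.g., reset/timeout upstream of the control server
    if hashlib.sha256(body).hexdigest() == CONTROL_HASH:
        return "clean"        # response matches the ground truth exactly
    return "tampered"         # injected blockpage or altered content
```

Because the expected response is known in advance, no manual inspection of "legitimate" content is needed to separate censorship from ordinary server-side variation.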


2018 ◽  
Vol 8 (1) ◽  
pp. 2367-2373 ◽  
Author(s):  
G. Toz ◽  
P. Erdogmus

In the computer-assisted diagnosis of breast cancer, the removal of the pectoral muscle from mammograms is very important. In this study, a new method, called the Single-Sided Edge Marking (SSEM) technique, is proposed for identifying the pectoral muscle border in mammograms. 60 mammograms from the INbreast database were used to test the proposed method. The results were compared in terms of false positive rate, false negative rate, and sensitivity against ground truth values pre-determined by radiologists for the same images. Accordingly, it has been shown that the proposed method can detect the pectoral muscle border with an average sensitivity of 95.6%.


2019 ◽  
Author(s):  
Lawrence Huang ◽  
Ulf Knoblich ◽  
Peter Ledochowitsch ◽  
Jérôme Lecoq ◽  
R. Clay Reid ◽  
...  

Abstract: Two-photon calcium imaging is often used with genetically encoded calcium indicators (GECIs) to investigate neural dynamics, but the relationship between fluorescence and action potentials (spikes) remains unclear. Pioneering work linked electrophysiology and calcium imaging in vivo with viral GECI expression, albeit in a small number of cells. Here we characterized the spike-to-fluorescence transfer function in vivo for 91 layer 2/3 pyramidal neurons in primary visual cortex in four transgenic mouse lines expressing GCaMP6s or GCaMP6f. We found that GCaMP6s cells have spike-triggered fluorescence responses of larger amplitude, lower variability, and greater single-spike detectability than GCaMP6f. Mean single-spike detection rates at high spatiotemporal resolution measured in our data were >70% for GCaMP6s and ~40-50% for GCaMP6f (at a 5% false positive rate). These rates are estimated to decrease to 25-35% for GCaMP6f under generally used population imaging conditions. Our ground-truth dataset thus supports more refined inference of neuronal activity from calcium imaging.
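The "detection rate at a fixed 5% false positive rate" criterion can be illustrated with synthetic amplitude distributions; the means and widths below are invented for illustration, not the paper's measured transfer functions:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical dF/F response amplitudes: no-spike epochs vs. single spikes
noise = rng.normal(0.00, 0.05, 100_000)
single_spike = rng.normal(0.18, 0.08, 100_000)

# Thresholding at the noise 95th percentile pins the false positive rate near 5%;
# the single-spike detection rate is the fraction of true responses above it.
thresh = np.quantile(noise, 0.95)
fpr = np.mean(noise > thresh)
detection_rate = np.mean(single_spike > thresh)
```

Lower-amplitude or more variable indicators (as reported for GCaMP6f relative to GCaMP6s) push the single-spike distribution toward the noise and drive the detection rate down at the same fixed false positive rate.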


2002 ◽  
Vol 41 (01) ◽  
pp. 37-41 ◽  
Author(s):  
S. Shung-Shung ◽  
S. Yu-Chien ◽  
Y. Mei-Due ◽  
W. Hwei-Chung ◽  
A. Kao

Summary Aim: Even with careful observation, the overall false-positive rate of laparotomy remains 10-15% when acute appendicitis is suspected. Therefore, the clinical efficacy of the Tc-99m HMPAO labeled leukocyte (TC-WBC) scan for the diagnosis of acute appendicitis in patients presenting with atypical clinical findings was assessed. Patients and Methods: Eighty patients presenting with acute abdominal pain and possible acute appendicitis but atypical findings were included in this study. After intravenous injection of TC-WBC, serial anterior abdominal/pelvic images at 30, 60, 120 and 240 min with 800k counts were obtained with a gamma camera. Any abnormal localization of radioactivity in the right lower quadrant of the abdomen, equal to or greater than bone marrow activity, was considered a positive scan. Results: 36 of the 49 patients with positive TC-WBC scans received appendectomy; all proved to have positive pathological findings. Five positive TC-WBC scans were unrelated to acute appendicitis, being caused by other pathological lesions. Eight patients were not operated on, and clinical follow-up after one month revealed no acute abdominal condition. Three of the 31 patients with negative TC-WBC scans received appendectomy; they also presented positive pathological findings. The remaining 28 patients did not undergo operations and showed no evidence of appendicitis after at least one month of follow-up. The overall sensitivity, specificity, accuracy, and positive and negative predictive values of the TC-WBC scan for diagnosing acute appendicitis were 92, 78, 86, 82, and 90%, respectively. Conclusion: The TC-WBC scan provides a rapid and highly accurate method for the diagnosis of acute appendicitis in patients with equivocal clinical findings. It proved useful in reducing the false-positive rate of laparotomy and in shortening the time necessary for clinical observation.
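The reported figures can be approximately reproduced from the counts in the abstract if the five scans positive for other pathology are set aside, leaving 36 true positives, 8 false positives, 3 false negatives, and 28 true negatives; this tally is an inference from the text, not stated explicitly:

```python
# Inferred confusion-matrix tallies (the 5 positives attributed to other
# lesions are assumed excluded from the appendicitis analysis)
tp, fp, fn, tn = 36, 8, 3, 28

sensitivity = tp / (tp + fn)                 # 36/39 ~ 92%
specificity = tn / (tn + fp)                 # 28/36 ~ 78%
accuracy = (tp + tn) / (tp + fp + fn + tn)   # 64/75 ~ 85%
ppv = tp / (tp + fp)                         # 36/44 ~ 82%
npv = tn / (tn + fn)                         # 28/31 ~ 90%
```

These values match the reported 92, 78, 86, 82, and 90% to within a percentage point of rounding.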


1993 ◽  
Vol 32 (02) ◽  
pp. 175-179 ◽  
Author(s):  
B. Brambati ◽  
T. Chard ◽  
J. G. Grudzinskas ◽  
M. C. M. Macintosh

Abstract: The analysis of the clinical efficiency of a biochemical parameter in the prediction of chromosome anomalies is described, using a database of 475 cases including 30 abnormalities. A comparison was made of two different approaches to the statistical analysis: the use of Gaussian frequency distributions and likelihood ratios, and logistic regression. Both methods computed that, for a 5% false-positive rate, approximately 60% of anomalies are detected on the basis of maternal age and serum PAPP-A. The logistic regression analysis is appropriate where the outcome variable (chromosome anomaly) is binary, and its detection rates refer to the original data only. The likelihood ratio method is used to predict the outcome in the general population. The latter method depends on the data, or some transformation of the data, fitting a known frequency distribution (Gaussian in this case). The precision of the predicted detection rates is limited by the small sample of abnormal cases (30). Varying the means and standard deviations (to the limits of their 95% confidence intervals) of the fitted log-Gaussian distributions resulted in a detection rate varying between 42% and 79% for a 5% false-positive rate. Thus, although the likelihood ratio method is potentially the better method for determining the usefulness of a test in the general population, larger numbers of abnormal cases are required to stabilise the means and standard deviations of the fitted log-Gaussian distributions.
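Under the Gaussian model, the detection rate at a fixed false-positive rate reduces to a cutoff calculation on the (log-transformed) marker; a sketch, with illustrative parameters rather than the study's fitted means and standard deviations:

```python
from statistics import NormalDist

def detection_rate(mu_unaff, sd_unaff, mu_aff, sd_aff, fpr=0.05):
    """Detection rate at a fixed false-positive rate for two Gaussian marker
    distributions, assuming affected cases shift toward lower values (as for
    maternal serum PAPP-A in chromosomally abnormal pregnancies)."""
    # Cutoff leaving `fpr` of unaffected cases below it (screen positive)
    cutoff = NormalDist(mu_unaff, sd_unaff).inv_cdf(fpr)
    # Fraction of affected cases falling below the same cutoff (detected)
    return NormalDist(mu_aff, sd_aff).cdf(cutoff)
```

Perturbing the affected mean or either standard deviation within its confidence interval moves the computed detection rate substantially, which is exactly the instability (42-79%) the small abnormal sample produces.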

