Updates to and Performance of the cBathy Algorithm for Estimating Nearshore Bathymetry from Remote Sensing Imagery

2021
Vol 13 (19)
pp. 3996
Author(s):  
Rob Holman ◽  
Erwin W. J. Bergsma

This manuscript describes and tests a set of improvements to the cBathy algorithm, published in 2013 by Holman et al. [hereafter HPH13], for the estimation of bathymetry from optical observations of propagating nearshore waves. Three versions are considered: the original HPH13 algorithm (now labeled V1.0), an intermediate version that has seen moderate use but limited testing (V1.2), and a substantially updated version (V2.0). Important improvements over V1.0 include a new deep-water weighting scheme, removal of a spurious variable in the nonlinear fitting, an adaptive scheme for determining the optimum tile size based on the approximate wavelength, and a much-improved search seed algorithm. While V1.2 was tested and its results are listed, the primary interest is in comparing V1.0, the original code, with the new version, V2.0. The three versions were tested against an updated dataset of 39 ground-truth surveys collected from 2015 to 2019 at the Field Research Facility in Duck, NC. In all, 624 cBathy collections were processed, spanning a four-day period up to and including each survey date. Both the unfiltered phase 2 and the Kalman-filtered phase 3 bathymetry estimates were tested. For the Kalman-filtered estimates, only the estimate from mid-afternoon on the survey date was used for statistical measures. Of those 39 Kalman products, the bias, rms error, and 95% exceedance for V1.0 were 0.15, 0.47, and 0.96 m, respectively, while for V2.0 they were 0.08, 0.38, and 0.78 m. The mean observed coverage, the percentage of successful estimate locations in the map, was 99.1% for V1.0 and 99.9% for V2.0. Phase 2 (unfiltered) bathymetry estimates were also compared to ground truth for the 624 available data runs. The mean bias, rms error, and 95% exceedance statistics for V1.0 were 0.19, 0.64, and 1.27 m, respectively, and for V2.0 were 0.16, 0.56, and 1.19 m, an improvement in all cases.
The coverage also increased from 78.8% for V1.0 to 84.7% for V2.0, about a 27% reduction in the number of failed estimates. The largest errors were associated with large waves and with poor imaging conditions, such as fog, rain, or darkness, that greatly reduced the percentage of successful coverage. As a practical mitigation of large errors, data runs for which the significant wave height was greater than 1.2 m or the coverage was less than 50% were omitted from the analysis, reducing the number of runs from 624 to 563. For this reduced dataset, the bias, rms error, and 95% exceedance errors for V1.0 were 0.15, 0.58, and 1.16 m, and for V2.0 were 0.09, 0.41, and 0.85 m, respectively. Successful coverage for V1.0 was 82.8%, while for V2.0 it was 90.0%, a roughly 42% reduction in the number of failed estimates. Performance for V2.0 individual (unfiltered) estimates is slightly better than the Kalman results in the original HPH13 paper, and it is recommended that version 2.0 become the new standard algorithm.
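The error statistics quoted above can be reproduced from paired depth estimates and survey ground truth. A minimal sketch (not the cBathy code itself; the convention of flagging failed estimate locations as NaN is an assumption):

```python
import numpy as np

def error_stats(estimated, ground_truth):
    """Bias, rms error, and 95% exceedance of the absolute error (m),
    plus coverage (%), mirroring the statistics reported in the abstract.
    Failed estimate locations are assumed to be flagged as NaN."""
    ok = ~np.isnan(estimated)
    err = estimated[ok] - ground_truth[ok]
    bias = err.mean()
    rms = np.sqrt(np.mean(err ** 2))
    exceed95 = np.percentile(np.abs(err), 95)
    coverage = 100.0 * ok.sum() / ok.size
    return bias, rms, exceed95, coverage

# The quoted "27% reduction in failed estimates" follows from the coverage
# figures: failures shrink from 21.2% to 15.3% of map locations.
fail_v1, fail_v2 = 100 - 78.8, 100 - 84.7
reduction = 100 * (fail_v1 - fail_v2) / fail_v1  # about 27.8%
```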

2000
Vol 16 (2)
pp. 107-114
Author(s):  
Louis M. Hsu ◽  
Judy Hayman ◽  
Judith Koch ◽  
Debbie Mandell

Summary: In the United States' normative population for the WAIS-R, differences (Ds) between persons' verbal and performance IQs (VIQs and PIQs) tend to increase with an increase in full scale IQs (FSIQs). This suggests that norm-referenced interpretations of Ds should take FSIQs into account. Two new graphs are presented to facilitate this type of interpretation. One of these graphs estimates the mean of absolute values of D (called the typical D) at each FSIQ level of the US normative population. The other graph estimates the absolute value of D that is exceeded only 5% of the time (called the abnormal D) at each FSIQ level of this population. A graph for the identification of conventional “statistically significant Ds” (also called “reliable Ds”) is also presented. A reliable D is defined in the context of classical true score theory as an absolute D that is unlikely (p < .05) to be exceeded by a person whose true VIQ and PIQ are equal. As conventionally defined, reliable Ds do not depend on the FSIQ. The graphs of typical and abnormal Ds are based on quadratic models of the relation of sizes of Ds to FSIQs. These models are generalizations of models described in Hsu (1996). The new graphical method of identifying abnormal Ds is compared to the conventional Payne-Jones method of identifying these Ds. Implications of the three juxtaposed graphs for the interpretation of VIQ-PIQ differences are discussed.
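For concreteness, the conventional reliable D follows from classical true score theory: the standard error of the VIQ-PIQ difference is SD·sqrt(2 − r_V − r_P), and the p < .05 reliable D is 1.96 times that. A sketch, where the specific reliability values used below are illustrative assumptions, not figures from the article:

```python
import math

def reliable_d(r_v, r_p, sd=15.0, z=1.96):
    """Conventional reliable difference under classical true score theory:
    the absolute VIQ-PIQ difference unlikely (p < .05) to be exceeded by a
    person whose true VIQ and PIQ are equal.  r_v and r_p are the scale
    reliabilities; sd is the standard deviation of the IQ metric."""
    se_diff = sd * math.sqrt(2.0 - r_v - r_p)
    return z * se_diff

# Illustrative reliabilities (assumed values): about 9.3 IQ points.
threshold = reliable_d(0.97, 0.93)
```

Note that, as the summary states, this threshold depends only on the reliabilities, not on the FSIQ.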


Author(s):  
Luis Cláudio de Jesus-Silva ◽  
Antônio Luiz Marques ◽  
André Luiz Nunes Zogahib

This article examines the variable compensation for performance program implemented in the Brazilian Judiciary. For this purpose, a survey was conducted with the civil servants of the Court of Justice of the State of Roraima - Amazon - Brazil. The strategy consisted of field research with a quantitative approach, combining descriptive and explanatory research, with the survey administered through a structured questionnaire made available over the Internet. The sample comprised 37.79% of the surveyed population. The results indicate the effectiveness of the program as a tool for motivation and performance improvement, as well as the need for some adjustments and improvements, especially regarding the perceived equity of the program and the distribution of rewards.


Author(s):  
Kyle Hoegh ◽  
Trevor Steiner ◽  
Eyoab Zegeye Teshale ◽  
Shongtao Dai

Available methods for assessing hot-mix asphalt pavements are typically restricted to destructive methods, such as coring, that damage the pavement and are limited in coverage. Recently, density profiling systems (DPS) have become available with the capability of measuring asphalt compaction continuously, giving instantaneous measurements a few hundred feet behind the final roller on freshly placed pavement. Further developments of the methods involved in DPS processing have allowed for coreless calibration, correlating dielectric measurements with asphalt specimens fabricated at variable air void contents using Superpave gyratory compaction. These developments make DPS technology an attractive potential tool for quality control, because of the real-time nature of the results, and for quality assurance, because of the ability to measure a statistically richer sample of data than current quality assurance methods such as coring. To test the viability of these recently developed methods for implementation, multiple projects were selected for field trials. Each field trial was used to assess the coreless calibration prediction by comparison with field cores taken where dielectric measurements were made. Ground-truth core validation on each project showed the reasonableness of the coreless calibration method. The validated dielectric-to-air-void prediction curves allowed for assessment of the tested pavements in relation to as-built characteristics, with the DPS providing the equivalent of approximately 100,000 cores per mile. Statistical measures were used to demonstrate how DPS can provide a comprehensive asphalt compaction evaluation that can inform construction-related decisions and has potential as a future quality assurance tool.
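The coreless-calibration idea described above, fitting a dielectric-to-air-void prediction curve from gyratory-compacted lab specimens, can be sketched as a simple regression. The data values and the linear functional form below are illustrative assumptions, not the published procedure:

```python
import numpy as np

# Hypothetical gyratory-specimen data: specimens compacted to varying air
# void contents (%), each with a measured dielectric constant.
# Values are illustrative only, not from the field trials.
dielectric = np.array([4.2, 4.5, 4.8, 5.1, 5.4])
air_voids = np.array([9.5, 8.1, 6.8, 5.6, 4.3])

# Fit a prediction curve mapping dielectric to air voids.  A straight line
# is used here for simplicity; the published method may use a different
# functional form.
slope, intercept = np.polyfit(dielectric, air_voids, 1)

def predict_air_voids(eps):
    """Predicted air void content (%) for a field dielectric reading."""
    return slope * eps + intercept
```

Once validated against field cores, such a curve converts the continuous dielectric profile into a continuous compaction profile.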


Author(s):  
Ewa A. Burian ◽  
Lubna Sabah ◽  
Klaus Kirketerp-Møller ◽  
Elin Ibstedt ◽  
Magnus M. Fazli ◽  
...  

Acute wounds may require cleansing to reduce the risk of infection. Stabilized hypochlorous acid in acetic buffer (HOCl + buffer) is a novel wound irrigation solution with antimicrobial properties. We performed a first-in-man, prospective, open-label pilot study to document preliminary safety and performance in the treatment of acute wounds. The study enrolled 12 subjects scheduled for a split-skin graft transplantation, where the donor site was used as a model of an acute wound. The treatment time was 75 s, given on 6 occasions. A total of 7 adverse events were regarded as related to the treatment, all registered as pain during the procedure for 2 subjects. One subject had a wound infection at the donor site. The mean colony-forming unit (CFU) count decreased by 41% after the treatment, and the mean epithelialization was 96% on both day 14 (standard deviation [SD] 8%) and day 21 (SD 10%). The study provides preliminary support for the safety, tolerability, and efficacy of HOCl + buffer for acute wounds. Pain was frequent, although it resolved quickly. Excellent wound healing and satisfactory antimicrobial properties were observed. A subsequent in vitro biofilm study also indicated good antimicrobial activity against Pseudomonas aeruginosa, with a 96% mean reduction of CFU when used for a treatment duration of 15 min (P < .0001), and a 50% decrease for Staphylococcus aureus (P = .1010). Future larger studies are needed to evaluate the safety and performance of HOCl + buffer in acute wounds, including the promising antimicrobial effect of prolonged treatment on bacterial biofilms.


2021
Vol 18 (1)
pp. 172988142199332
Author(s):  
Xintao Ding ◽  
Boquan Li ◽  
Jinbao Wang

Indoor object detection is a demanding and important task for robot applications. Object knowledge, such as two-dimensional (2D) shape and depth information, may be helpful for detection. In this article, we focus on the region-based convolutional neural network (CNN) detector and propose a geometric property-based Faster R-CNN method (GP-Faster) for indoor object detection. GP-Faster incorporates geometric properties into Faster R-CNN to improve detection performance. In detail, we first use mesh grids that are the intersections of direct and inverse proportion functions to generate appropriate anchors for indoor objects. After the anchors are regressed to the regions of interest produced by a region proposal network (RPN-RoIs), we then use 2D geometric constraints to refine the RPN-RoIs, in which the 2D constraint for each class is a convex hull region enclosing the width and height coordinates of the ground-truth boxes in the training set. Comparison experiments are implemented on two indoor datasets, SUN2012 and NYUv2. Since depth information is available in NYUv2, we add depth constraints to GP-Faster and propose a 3D geometric property-based Faster R-CNN (DGP-Faster) on NYUv2. The experimental results show that both GP-Faster and DGP-Faster improve mean average precision.
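The per-class 2D constraint described above, a convex hull over the ground-truth (width, height) pairs used to reject proposals of implausible size, can be sketched in plain Python. This is a simplified stand-in for the authors' implementation; the (x1, y1, x2, y2) box format is an assumption:

```python
def _cross(o, a, b):
    # z-component of the cross product (a - o) x (b - o)
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in CCW order."""
    pts = sorted(set(map(tuple, points)))
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and _cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and _cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def build_wh_constraint(gt_boxes):
    """Constraint for one class: the convex hull of the ground-truth box
    (width, height) pairs.  Returns a predicate that accepts a proposal
    only if its size lies inside (or on) that hull."""
    hull = convex_hull([(x2 - x1, y2 - y1) for x1, y1, x2, y2 in gt_boxes])
    def inside(box):
        p = (box[2] - box[0], box[3] - box[1])
        # a point is inside a CCW polygon iff it is left of every edge
        return all(_cross(a, b, p) >= 0
                   for a, b in zip(hull, hull[1:] + hull[:1]))
    return inside
```

At inference time, RPN-RoIs of a given class whose width/height fall outside the class hull would be filtered out before the final classification stage.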


2020
Vol 7 (1)
Author(s):  
Elin Wallstén ◽  
Jan Axelsson ◽  
Joakim Jonsson ◽  
Camilla Thellenberg Karlsson ◽  
Tufve Nyholm ◽  
...  

Abstract. Background: Attenuation correction remains a problem for whole-body PET/MRI. The statistical decomposition algorithm (SDA) is a probabilistic atlas-based method that calculates synthetic CTs from T2-weighted MRI scans. In this study, we evaluated the application of SDA for attenuation correction of PET images in the pelvic region. Materials and methods: Twelve patients were retrospectively selected from an ongoing prostate cancer research study. The patients had same-day scans of [11C]acetate PET/MRI and CT. The CT images were non-rigidly registered to the PET/MRI geometry, and PET images were reconstructed with attenuation correction employing CT, SDA-generated CT, and the scanner's built-in Dixon sequence-based method. The PET images reconstructed using CT-based attenuation correction were used as ground truth. Results: The mean whole-image PET uptake error was reduced from − 5.4% for Dixon-PET to − 0.9% for SDA-PET. The prostate standardized uptake value (SUV) quantification error was significantly reduced from − 5.6% for Dixon-PET to − 2.3% for SDA-PET. Conclusion: Attenuation correction with SDA improves quantification of PET/MR images in the pelvic region compared to the Dixon-based method.
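The uptake errors quoted above compare each reconstruction to the CT-corrected reference. One plausible way to compute such a relative error (the exact formula used in the paper may differ) is:

```python
import numpy as np

def uptake_error_percent(test_img, ct_ref_img, mask=None):
    """Relative PET uptake error (%) of a reconstruction against the
    CT-attenuation-corrected reference, over the whole image or over a
    region-of-interest mask (e.g. the prostate)."""
    if mask is not None:
        test_img, ct_ref_img = test_img[mask], ct_ref_img[mask]
    return 100.0 * (test_img.mean() - ct_ref_img.mean()) / ct_ref_img.mean()
```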


2021
Vol 2021 (1)
Author(s):  
Xiang Li ◽  
Jianzheng Liu ◽  
Jessica Baron ◽  
Khoa Luu ◽  
Eric Patterson

Abstract. Recent attention to facial alignment and landmark detection methods, particularly with the application of deep convolutional neural networks, has yielded notable improvements. Neither these neural-network methods nor more traditional ones, though, have been tested systematically for performance differences due to camera-lens focal length or camera viewing angle of subjects across the viewing hemisphere. This work uses photo-realistic, synthesized facial images with varying parameters and corresponding ground-truth landmarks to enable comparison of alignment and landmark detection techniques with respect to general performance, performance across focal length, and performance across viewing angle. Recently published high-performing methods, along with traditional techniques, are compared with regard to these aspects.


2021
Vol 13 (9)
pp. 5274
Author(s):  
Xinyang Yu ◽  
Younggu Her ◽  
Xicun Zhu ◽  
Changhe Lu ◽  
Xuefei Li

Development of a high-accuracy method to extract arable land from effective data sources is crucial for detecting and monitoring arable land dynamics, supporting land protection and sustainable development. In this study, a new arable land extraction index (ALEI) based on spectral analysis was proposed, validated against ground-truth data, and then applied to the Hexi Corridor in northwest China. The arable land and its change patterns during 1990–2020 were extracted and identified using 40 Landsat TM/OLI images acquired in 1990, 2000, 2010, and 2020. The results demonstrated that the proposed method can distinguish arable land areas accurately, with the User’s (Producer’s) accuracy and overall accuracy (kappa coefficient) exceeding 0.90 (0.88) and 0.89 (0.87), respectively. The mean relative error calculated using field survey data obtained in 2012 and 2020 was 0.169 and 0.191, respectively, indicating the feasibility of the ALEI method for extracting arable land. The study found that the arable land area in the Hexi Corridor was 13,217.58 km2 in 2020, a significant increase of 25.33% compared to 1990. At 10-year intervals, the arable land experienced different change patterns. The results indicate that the ALEI is a promising tool for effectively extracting arable land in arid areas.
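The accuracy figures reported above (User's and Producer's accuracy, overall accuracy, kappa) are standard confusion-matrix measures in remote-sensing accuracy assessment. A sketch of how they are computed, using an illustrative matrix rather than the study's data:

```python
import numpy as np

def accuracy_metrics(cm):
    """Standard accuracy measures from a confusion matrix with rows as
    reference classes and columns as mapped classes: per-class producer's
    and user's accuracy, overall accuracy, and the kappa coefficient."""
    cm = np.asarray(cm, dtype=float)
    total, diag = cm.sum(), np.diag(cm)
    producers = diag / cm.sum(axis=1)   # correct / reference total (omission)
    users = diag / cm.sum(axis=0)       # correct / mapped total (commission)
    overall = diag.sum() / total
    chance = (cm.sum(axis=1) * cm.sum(axis=0)).sum() / total ** 2
    kappa = (overall - chance) / (1.0 - chance)
    return producers, users, overall, kappa
```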


2013
Vol 117 (1197)
pp. 1075-1101
Author(s):  
S. M. Parkes ◽  
I. Martin ◽  
M. N. Dunstan ◽  
N. Rowell ◽  
O. Dubois-Matra ◽  
...  

Abstract. The use of machine vision to guide robotic spacecraft is being considered for a wide range of missions, such as planetary approach and landing, asteroid and small-body sampling operations, and in-orbit rendezvous and docking. Numerical simulation plays an essential role in the development and testing of such systems, which in the context of vision guidance means that realistic sequences of navigation images are required, together with knowledge of the ground-truth camera motion. Computer-generated imagery (CGI) offers a variety of benefits over real images, such as availability, cost, flexibility, and knowledge of the ground-truth camera motion to high precision. However, standard CGI methods developed for terrestrial applications lack the realism, fidelity, and performance required for engineering simulations. In this paper, we present the results of our ongoing work to develop a suitable CGI-based test environment for spacecraft vision-guidance systems. We focus on the various issues involved in image simulation, including the selection of standard CGI techniques and the adaptations required for use in space applications. We also describe our approach to integration with high-fidelity end-to-end mission simulators, and summarise a variety of European Space Agency research and development projects that used our test environment.

