Automated Identification of Discrepancies between Nautical Charts and Survey Soundings

2018 ◽  
Vol 7 (10) ◽  
pp. 392 ◽  
Author(s):  
Giuseppe Masetti ◽  
Tyanne Faulkes ◽  
Christos Kastrisios

Timely and accurate identification of changes to areas depicted on nautical charts constitutes a key task for marine cartographic agencies in supporting maritime safety. Such a task is usually achieved through manual or semi-automated processes, based on best practices developed over the years, requiring a substantial level of human commitment (e.g., visually comparing the chart with the newly collected data or analyzing intermediate products). This work describes an algorithm that aims to largely automate the change identification process and to reduce its subjective component. Through the selective derivation of a set of depth points from a nautical chart, a triangulated irregular network is created to apply a preliminary tilted-triangle test to all the input survey soundings. Given the complexity of a modern nautical chart, a set of feature-specific, point-in-polygon tests are then performed. As output, the algorithm provides danger-to-navigation candidates, chart discrepancies, and a subset of features that require human evaluation. The algorithm has been successfully tested with real-world electronic navigational charts and survey datasets. In parallel to the research development, a prototype application implementing the algorithm was created and made publicly available.
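The tilted-triangle test described in the abstract can be illustrated with a minimal sketch: interpolate the chart depth at a survey sounding's position from the three vertices of the enclosing TIN triangle (via barycentric weights), then flag the sounding if it is shoaler than the interpolated chart depth by more than a tolerance. Function names and the tolerance are illustrative assumptions, not the authors' implementation.

```python
def barycentric_weights(p, a, b, c):
    """Barycentric coordinates of point p in triangle (a, b, c)."""
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    det = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    w1 = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / det
    w2 = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / det
    return w1, w2, 1.0 - w1 - w2

def tilted_triangle_test(sounding, tri_xy, tri_depths, tolerance=0.5):
    """True if the survey sounding is shoaler than the chart depth
    interpolated on the tilted triangle by more than the tolerance."""
    x, y, survey_depth = sounding
    w1, w2, w3 = barycentric_weights((x, y), *tri_xy)
    chart_depth = w1 * tri_depths[0] + w2 * tri_depths[1] + w3 * tri_depths[2]
    return survey_depth < chart_depth - tolerance

# An 8.0 m sounding inside a triangle charted at ~10 m is a candidate:
flag = tilted_triangle_test((1.0, 1.0, 8.0),
                            ((0.0, 0.0), (4.0, 0.0), (0.0, 4.0)),
                            (10.0, 10.0, 10.0))  # True
```

In the paper's workflow this test is only a preliminary screen; the feature-specific point-in-polygon tests then decide how each flagged sounding is categorized.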


Plants are prone to diseases caused by multiple factors, such as environmental conditions, light, bacteria, and fungi. These diseases typically produce physical symptoms on the leaves, stems, and fruit, such as changes in natural appearance, spots, and lesion size. Because the patterns are similar, distinguishing and identifying the category of a plant disease is a challenging task. Efficient and reliable mechanisms are therefore needed so that accurate identification and prevention can be performed early enough to avoid the loss of the entire plant. An automated identification system can thus be a key factor in preventing losses in cultivation and maintaining the high quality of agricultural products. This paper introduces a model for rose plant leaf disease classification using a feature extraction process and a supervised learning mechanism. The outcomes of the study demonstrate the accuracy of the proposed system in classifying different kinds of rose plant disease.


2012 ◽  
Vol 29 (1) ◽  
pp. 89-102 ◽  
Author(s):  
Chris T. Jones ◽  
Todd D. Sikora ◽  
Paris W. Vachon ◽  
John Wolfe

Abstract The Canadian Forces Meteorology and Oceanography Center produces a near-daily ocean feature analysis, based on sea surface temperature (SST) images collected by spaceborne radiometers, to keep the fleet informed of the location of tactically important ocean features. Ubiquitous cloud cover, however, hampers the collection of these data. In this paper, a methodology for the identification of SST front signatures in cloud-independent synthetic aperture radar (SAR) images is described. Accurate identification of ocean features in SAR images, although attainable by an experienced analyst, is a difficult process to automate. As a first attempt, the authors aimed to discriminate between signatures of SST fronts and those caused by all other processes. Candidate SST front signatures were identified in Radarsat-2 images using a Canny edge detector. A feature vector of textural and contextual measures was constructed for each candidate edge, and edges were validated by comparison with coincident SST images. Each candidate was classified as being an SST front signature or the signature of another process using logistic regression. The resulting probability that a candidate was correctly classified as an SST front signature was between 0.50 and 0.70. The authors concluded that improving classification accuracy requires a set of measures that can differentiate between the signatures of SST fronts and those of certain atmospheric phenomena, and that a search for such measures should include a wider range of computational methods than was considered. As such, this work represents a step toward the goal of a general ocean feature classification algorithm.
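The final classification step the abstract describes can be sketched as follows: each candidate edge has a feature vector of textural and contextual measures, and a logistic regression model maps it to the probability of being an SST front signature. The weights, bias, and features below are made-up placeholders, not the authors' fitted model.

```python
import math

def front_probability(features, weights, bias):
    """Logistic regression score: sigmoid of the weighted feature sum."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

# e.g. two illustrative measures: local contrast and normalized edge length
p = front_probability([0.8, 0.6], [1.2, 0.9], -0.7)
candidate_is_front = p >= 0.5
```

The paper's reported probabilities of 0.50–0.70 correspond to this score evaluated on validated candidates; the open problem it identifies is finding measures whose weighted sum separates SST fronts from atmospheric look-alikes.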


2020 ◽  
Vol 53 (5) ◽  
pp. 343-353 ◽  
Author(s):  
Jeremy Miciak ◽  
Jack M. Fletcher

This article addresses the nature of dyslexia and best practices for identification and treatment within the context of multitier systems of support (MTSS). We initially review proposed definitions of dyslexia to identify key commonalities and differences in proposed attributes. We then review empirical evidence for proposed definitional attributes, focusing on key sources of controversy, including the role of IQ and instructional response, as well as issues of etiology and immutability. We argue that current empirical evidence supports a dyslexia classification marked by specific deficits in reading and spelling words combined with inadequate response to evidence-based instruction. We then propose a “hybrid” dyslexia identification process built to gather data relevant to these markers of dyslexia. We argue that this assessment process is best implemented within school-wide MTSS because it leverages data routinely collected in well-implemented MTSS, including documentation of student progress and fidelity of implementation. In contrast with other proposed methods for learning disability (LD) identification, the proposed “hybrid” method demonstrates strong evidence for valid decision-making and directly informs intervention.


2013 ◽  
Vol 196 ◽  
pp. 13-19 ◽  
Author(s):  
Adam Polak

Identifying the parameters of a mathematical model of any real dynamic object requires a sufficient number of sufficiently precise measurements of the quantities that characterize the object's current state (i.e., its state vector). Accurate identification of the parameters of such a model is achievable only with a purpose-built data acquisition and measurement system. This article describes a mathematical model of a ship's synchronous generator under the assumed simplifications, the parameters of this model, and the methods for determining them. It also presents the concept of the data acquisition and measurement system (based on the CompactDAQ platform produced by National Instruments) that is indispensable for identifying the parameters of this model.


2021 ◽  
Vol 7 (8) ◽  
pp. 131 ◽  
Author(s):  
Alessandro Stefano ◽  
Albert Comelli

Background: In the field of biomedical imaging, radiomics is a promising approach that aims to provide quantitative features from images. It is highly dependent on accurate identification and delineation of the volume of interest to avoid mistakes in the implementation of the texture-based prediction model. In this context, we present a customized deep learning approach for the real-time, fully automated identification and segmentation of COVID-19 infected regions in computed tomography images. Methods: In a previous study, we adopted ENET, originally used for image segmentation tasks in self-driving cars, for whole-parenchyma segmentation in patients with idiopathic pulmonary fibrosis, which has several similarities to COVID-19 disease. To automatically identify and segment COVID-19 infected areas, a customized ENET, namely C-ENET, was implemented and its performance compared with the original ENET and some state-of-the-art deep learning architectures. Results: The experimental results demonstrate the effectiveness of our approach. Considering the similarity of the segmentation results to the gold standard (Dice similarity coefficient ~75%), the proposed methodology can be used for the identification and delineation of COVID-19 infected areas without the supervision of a radiologist, yielding a volume of interest independent of the user. Conclusions: We demonstrated that the proposed customized deep learning model can rapidly identify and segment COVID-19 infected regions to subsequently extract useful information for assessing disease severity through radiomics analyses.
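The segmentation quality reported above is a Dice similarity coefficient of about 75%. As a minimal sketch, Dice between a binary predicted mask and a gold-standard mask is 2|A ∩ B| / (|A| + |B|); the masks here are flat lists for brevity, where in practice they would be image arrays.

```python
def dice_coefficient(pred, gold):
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks (1 = infected voxel)."""
    intersection = sum(p and g for p, g in zip(pred, gold))
    total = sum(pred) + sum(gold)
    return 2.0 * intersection / total if total else 1.0

pred = [1, 1, 0, 1, 0, 0]   # toy predicted mask
gold = [1, 0, 0, 1, 1, 0]   # toy gold-standard mask
score = dice_coefficient(pred, gold)  # 2*2 / (3+3) ≈ 0.667
```

A score of 1.0 means perfect overlap with the radiologist's delineation; the ~0.75 reported for C-ENET is measured against such gold-standard masks.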


1996 ◽  
Vol 9 (3) ◽  
pp. 405-422 ◽  
Author(s):  
S L On

The organisms which are referred to as campylobacteria are associated with a diverse range of diseases and habitats and are important from both clinical and economic perspectives. Accurate identification of these organisms is desirable for deciding upon appropriate therapeutic measures, and also for furthering our understanding of their pathology and epidemiology. However, the identification process is made difficult because of the complex and rapidly evolving taxonomy, fastidious nature, and biochemical inertness of these bacteria. These problems have resulted in a proliferation of phenotypic and genotypic methods for identifying members of this group. The purpose of this review is to summarize the problems associated with identifying campylobacteria, critically appraise the methods that have been used for this purpose, and discuss prospects for improvements in this field.


2012 ◽  
Vol 2012 ◽  
pp. 1-4 ◽  
Author(s):  
Shalini Duggal ◽  
Rajni Gaind ◽  
Neha Tandon ◽  
Manorama Deb ◽  
Tulsi Das Chugh

The present study was designed to compare a fully automated identification/antibiotic susceptibility testing (AST) system, BD Phoenix (BD), with conventional manual methods for its efficacy in rapid and accurate identification and AST, and to determine whether the errors reported in AST, namely very major errors (VME; false susceptibility), major errors (ME; false resistance), and minor errors (MiE; intermediate category interpretation), were within the range certified by the FDA. Identification and antimicrobial susceptibility test results for eighty-five clinical isolates, both gram-positive and gram-negative, were compared on the Phoenix, taking the results obtained from conventional manual identification methods and disc-diffusion antibiotic testing as the standards for comparison. The Phoenix performed favorably. There was 100% concordance in identification for gram-negative isolates and 94.83% for gram-positive isolates. In seven cases, the Phoenix proved better than conventional identification. For antibiotic results, categorical agreement was 98.02% for gram-positive and 95.7% for gram-negative isolates. VME was 0.33%, ME 0.66%, and MiE 0.99% for gram-positive isolates, and 1.23% VME, 1.23% ME, and 1.85% MiE for gram-negative isolates. Therefore, this automated system can be used as a tool to facilitate early identification and susceptibility profiling of aerobic bacteria in routine microbiology laboratories.
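The error rates quoted above can be tallied with a simple sketch that compares an automated system's category calls (S/I/R) against the reference disc-diffusion results. The definitions follow the abstract: VME = false susceptibility, ME = false resistance, MiE = any disagreement involving the intermediate category. The function and sample data are illustrative, not the study's dataset.

```python
def ast_error_rates(system_calls, reference_calls):
    """Categorical agreement and VME/ME/MiE rates for S/I/R calls."""
    vme = me = mie = agree = 0
    for sys, ref in zip(system_calls, reference_calls):
        if sys == ref:
            agree += 1
        elif ref == "R" and sys == "S":
            vme += 1   # very major error: false susceptibility
        elif ref == "S" and sys == "R":
            me += 1    # major error: false resistance
        else:
            mie += 1   # minor error: intermediate category involved
    n = len(reference_calls)
    return {"agreement": agree / n, "VME": vme / n, "ME": me / n, "MiE": mie / n}

# Six toy isolate/antibiotic pairs: one VME (ref R, system S), one MiE
rates = ast_error_rates(list("SSRISR"), list("SSRSRR"))
```

Rates are conventionally reported as percentages of the total tests, which is how the study's 0.33%–1.85% figures arise.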


2020 ◽  
Vol 67 ◽  
Author(s):  
Samuel Läubli ◽  
Sheila Castilho ◽  
Graham Neubig ◽  
Rico Sennrich ◽  
Qinlan Shen ◽  
...  

The quality of machine translation has increased remarkably over the past years, to the degree that it was found to be indistinguishable from professional human translation in a number of empirical investigations. We reassess Hassan et al.'s 2018 investigation into Chinese-to-English news translation, showing that the finding of human–machine parity was due to weaknesses in the evaluation design, which is currently considered best practice in the field. We show that the professional human translations contained significantly fewer errors, and that perceived quality in human evaluation depends on the choice of raters, the availability of linguistic context, and the creation of reference translations. Our results call for revisiting current best practices for assessing strong machine translation systems in general, and human–machine parity in particular, for which we offer a set of recommendations based on our empirical findings.

