An imaging data model for concrete bridge inspection

2004 ◽  
Vol 35 (8-9) ◽  
pp. 473-480 ◽  
Author(s):  
Osama Abudayyeh ◽  
Mohammed Al Bataineh ◽  
Ikhlas Abdel-Qader
Author(s):  
Hermes Giberti ◽  
Andrea Zanoni ◽  
Marco Mauri ◽  
Massimo Gammino

This work focuses on the development of a methodology for the complete reconstruction of the 3D geometry of a concrete bridge. 3D scanning technology was selected as the best suited to the task, as it provides very detailed geometric information. A dedicated carriage system for a compact, lightweight laser scanner has been designed and built as a first prototype for use in laboratory tests as well as future on-field tests. A first assessment of the design constraints has been carried out, based on the general goal of a system that can be used with existing inspection vehicles with minimal modifications. The electronic system for managing and controlling the carriage and the associated tracking system has also been designed and realized. Preliminary tests have been performed at the Politecnico di Milano university campus to assess the viability and analyze the performance of the early design choices.


Author(s):  
E. Varga-Verebélyi ◽  
L. Dobos ◽  
T. Budavári ◽  
Cs. Kiss

We created the Herschel Footprint Database and web services for the Herschel Space Observatory imaging data. For this database we set up a unified data model for the PACS and SPIRE Herschel instruments and, from the pointing and header information of each observation, generated and stored the sky coverages (footprints) of the observations in their exact geometric form. This tool extends the capabilities of the Herschel Science Archive by providing an effective search facility that can find observations covering selected sky locations (objects), or even larger areas of the sky.
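The core operation the abstract describes is testing whether a sky position falls inside a stored observation footprint. A minimal illustrative sketch of that idea, not the actual Herschel Footprint Database code: it uses a flat-sky ray-casting point-in-polygon test on made-up coordinates, whereas the real service works with exact spherical geometry.

```python
# Illustrative sketch: a footprint search reduces to testing whether a
# sky position (ra, dec) lies inside an observation's footprint polygon.
# This is a simplified flat-sky ray-casting test; the real database
# stores and intersects footprints in exact spherical geometry.

def point_in_footprint(ra, dec, footprint):
    """Ray-casting point-in-polygon test over (ra, dec) vertex pairs."""
    inside = False
    n = len(footprint)
    for i in range(n):
        ra1, dec1 = footprint[i]
        ra2, dec2 = footprint[(i + 1) % n]
        # Does the horizontal ray from (ra, dec) cross this edge?
        if (dec1 > dec) != (dec2 > dec):
            crossing_ra = ra1 + (dec - dec1) * (ra2 - ra1) / (dec2 - dec1)
            if ra < crossing_ra:
                inside = not inside
    return inside

# Hypothetical square footprint of one observation (degrees).
footprint = [(10.0, 20.0), (10.5, 20.0), (10.5, 20.5), (10.0, 20.5)]
print(point_in_footprint(10.2, 20.3, footprint))  # True: target covered
print(point_in_footprint(11.0, 20.3, footprint))  # False: target outside
```

A production search would first narrow candidates with a sky index before running the exact geometric containment test on each footprint.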


2018 ◽  
Vol 90 ◽  
pp. 265-280 ◽  
Author(s):  
Renping Xie ◽  
Jian Yao ◽  
Kang Liu ◽  
Xiaohu Lu ◽  
Yahui Liu ◽  
...  

2021 ◽  
Author(s):  
Ashmita Kumar

<p>The Neuroimaging Data Model (NIDM) was started by an international team of cognitive scientists, computer scientists, and statisticians to develop a data format capable of describing all aspects of the data lifecycle, from raw data through analyses and provenance. NIDM is built on top of the PROV standard and consists of three main interconnected specifications: Experiment, Results, and Workflow. These specifications were envisioned to capture information on all aspects of the neuroimaging data lifecycle using semantic web techniques. They provide a critical capability for the reproducibility and replication of studies, as well as data discovery in shared resources. The NIDM-Experiment component has been used to describe publicly available human neuroimaging datasets (e.g. ABIDE, ADHD200, CoRR, and OpenNeuro), providing unambiguous descriptions of the clinical, neuropsychological, and imaging data collected in those studies and resulting in approximately 4.5 million statements about aspects of these datasets.</p><p>PyNIDM, a toolbox written in Python, supports the creation, manipulation, and querying of NIDM documents. It is an open-source project hosted on GitHub and distributed under the Apache License, Version 2.0. PyNIDM is under active development and testing. Tools have been created to support RESTful SPARQL queries of NIDM documents, helping users identify interesting cohorts across datasets when evaluating scientific hypotheses or replicating results found in the literature. This query functionality, together with the NIDM document semantics, provides a path for investigators to interrogate datasets, understand what data was collected in those studies, and obtain sufficiently annotated data dictionaries of the collected variables to facilitate transforming and combining data across studies.</p><p>Beyond querying across NIDM documents, high-level statistical analysis tools are needed to give investigators more insight into data they may wish to combine for a complete scientific investigation. Here we report on one such tool providing linear modeling support for NIDM documents: nidm_linreg.</p>
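The querying the abstract describes rests on NIDM documents being RDF graphs: sets of (subject, predicate, object) statements matched by triple patterns. A minimal stdlib sketch of that underlying idea, with made-up terms that are hypothetical stand-ins rather than the real NIDM vocabulary; actual queries go through SPARQL via PyNIDM's query tools.

```python
# Minimal sketch of the idea behind querying NIDM documents: a NIDM
# document is a set of RDF-style statements (subject, predicate,
# object), and a query matches triple patterns against them. All terms
# below are hypothetical illustrations, not the real NIDM vocabulary.

WILDCARD = None

# A tiny made-up graph of NIDM-style statements.
statements = [
    ("study:ABIDE-0001", "rdf:type", "nidm:Acquisition"),
    ("study:ABIDE-0001", "nidm:hadDiagnosis", "autism"),
    ("study:ABIDE-0001", "nidm:age", 14),
    ("study:ABIDE-0002", "rdf:type", "nidm:Acquisition"),
    ("study:ABIDE-0002", "nidm:hadDiagnosis", "control"),
]

def match(pattern, graph):
    """Return all statements matching an (s, p, o) pattern; None is a wildcard."""
    return [
        triple for triple in graph
        if all(p is WILDCARD or p == t for p, t in zip(pattern, triple))
    ]

# A toy "cohort query": find every subject with an autism diagnosis.
hits = match((WILDCARD, "nidm:hadDiagnosis", "autism"), statements)
subjects = [s for s, _, _ in hits]
print(subjects)  # ['study:ABIDE-0001']
```

In SPARQL the same cohort query would be a `SELECT` with the matching triple pattern; tools like nidm_linreg then fit models over the variables retrieved this way.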




Sensors ◽  
2018 ◽  
Vol 18 (6) ◽  
pp. 1881 ◽  
Author(s):  
In-Ho Kim ◽  
Haemin Jeon ◽  
Seung-Chan Baek ◽  
Won-Hwa Hong ◽  
Hyung-Jo Jung

2022 ◽  
Vol 63 (Suppl) ◽  
pp. S74
Author(s):  
ChulHyoung Park ◽  
Seng Chan You ◽  
Hokyun Jeon ◽  
Chang Won Jeong ◽  
Jin Wook Choi ◽  
...  
