A generic workflow for effective sampling of environmental vouchers with UUID assignment and image processing

Database ◽  
2018 ◽  
Vol 2018 ◽  
Author(s):  
Dagmar Triebel ◽  
Wolfgang Reichert ◽  
Simone Bosert ◽  
Martin Feulner ◽  
Daniel Osieko Okach ◽  
...  

Abstract Sampling of biological and environmental vouchers in the field is rather challenging, particularly under adverse habitat conditions and when various activities need to be handled simultaneously. The workflow described here includes five procedural steps, which result in professional sampling and the generation of universally identifiable data. In preparation for the field campaign, sample containers are labelled with universally unique identifier (UUID) QR codes. At the collection site, labelled containers, sampled material and attached supplementary information are imaged using a GNSS- or GPS-enabled smartphone or camera. Image processing, tagging and data storage as a CSV text file are subsequently carried out in a field station or laboratory. For this purpose, the newly implemented tool DiversityImageInspector (URL: http://diversityworkbench.net/Portal/DiversityImageInspector) is used. It addresses combined image and data processing in this context, including the extraction of the QR-coded UUID from the image content and of geodata and time information from the Exif image header. The import of the resulting data files into a relational database or other data management system is optional but recommended; if applied, the import might be guided by a data transformation tool with a compliant schema as described here. The new approach is also discussed with regard to implications for virtual research environments and data publication networks. Database URL: http://diversityworkbench.net/Portal/DiversityImageInspector
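The core of the processing step is pairing each QR-decoded UUID with Exif-derived geodata and writing one CSV row per voucher image. A minimal sketch of that pairing, assuming the QR payload and the Exif GPS rationals have already been read by a decoder (the values below are illustrative stand-ins, not DiversityImageInspector's actual API):

```python
# Sketch: validate a QR-decoded UUID, convert Exif GPS rationals to
# decimal degrees, and emit one CSV row per voucher image.
import csv
import io
import uuid
from fractions import Fraction

def exif_dms_to_decimal(dms, ref):
    """Convert Exif GPS degree/minute/second rationals to decimal degrees."""
    deg, minutes, seconds = (Fraction(*r) for r in dms)
    value = float(deg + minutes / 60 + seconds / 3600)
    return -value if ref in ("S", "W") else value

def voucher_row(qr_payload, lat_dms, lat_ref, lon_dms, lon_ref, timestamp):
    # uuid.UUID() raises ValueError if the QR payload is not a valid UUID.
    voucher_uuid = uuid.UUID(qr_payload)
    return [str(voucher_uuid),
            exif_dms_to_decimal(lat_dms, lat_ref),
            exif_dms_to_decimal(lon_dms, lon_ref),
            timestamp]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["uuid", "latitude", "longitude", "collected_at"])
writer.writerow(voucher_row(
    "0d3f9a2e-9f1c-4d7a-8a3b-2f6c1e5b7d90",
    ((48, 1), (8, 1), (1350, 100)), "N",    # 48° 8' 13.50" N
    ((11, 1), (34, 1), (3240, 100)), "E",   # 11° 34' 32.40" E
    "2018-06-12T10:42:00"))
print(buf.getvalue())
```

The UUID check up front means a misread or damaged QR label is rejected before it can contaminate the CSV output.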

Author(s):  
Umesh Banodha ◽  
Praveen Kumar Kataria

Cloud computing is an emerging technology for storing data, and electronic data is now produced in enormous quantities; to maintain the usefulness of this data, data recovery services are essential. Cloud computing is anticipated to become the foundation of the IT enterprise, offering a way to move databases and application software to large data centres where the management of data and services is not completely reliable. Our focus is on cloud data storage security, a vital feature when it comes to delivering quality of service. The cloud environment is highly dynamic and heterogeneous, and given the scale of its physical data and resources, the failure of data-centre nodes is entirely normal. The cloud environment therefore needs effective, adaptive management of data replication. Disaster recovery using cloud resources is an attractive approach; the proposed data replication strategy carefully selects the data files to replicate and dynamically determines the number of replicas and the most effective data nodes to hold them. The objective of the proposed algorithm is twofold: to help users gather information from a remote location where network connectivity is absent, and to recover files that have been deleted or corrupted for any reason. Time-related problems are also addressed, so the recovery process executes in less time.
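The two decisions such a strategy must make, how many replicas a file gets and which nodes hold them, can be sketched as follows. This is a generic popularity-and-reliability heuristic, not the authors' algorithm (which the abstract does not specify); all names and thresholds are illustrative.

```python
# Sketch: popularity-driven replica counts, placed on the data nodes
# with the lowest observed failure rates.
import math

def replica_count(access_count, total_accesses, min_replicas=2, max_replicas=5):
    """More frequently accessed files get more replicas, within fixed bounds."""
    if total_accesses == 0:
        return min_replicas
    share = access_count / total_accesses
    return min(max_replicas,
               min_replicas + math.ceil(share * (max_replicas - min_replicas)))

def place_replicas(n, node_failure_rates):
    """Pick the n nodes least likely to fail."""
    ranked = sorted(node_failure_rates, key=node_failure_rates.get)
    return ranked[:n]

nodes = {"node-a": 0.02, "node-b": 0.10, "node-c": 0.01, "node-d": 0.05}
n = replica_count(access_count=30, total_accesses=100)
print(n, place_replicas(n, nodes))
```

In practice the failure rates would be updated from node heartbeats, which is what makes the replication adaptive rather than static.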


Author(s):  
Oleg Evsutin ◽  
Anna Melman ◽  
Roman Meshcheryakov ◽  
...  

One of the areas of digital image processing is the steganographic embedding of additional information into images. Digital steganography methods are used to ensure information confidentiality, as well as to track the distribution of digital content on the Internet. The main indicators of steganographic embedding effectiveness are invisibility to the human eye, characterized by the PSNR metric, and embedding capacity. However, even when the embedding is fully visually stealthy, its presence may distort the natural model of the digital image in the frequency domain. The article presents a new approach to reducing this distortion in the domain of the discrete cosine transform (DCT) when embedding information with the classical QIM method. Experimental results show that the proposed approach reduces the distortion of the histograms of the DCT coefficient distributions, thereby eliminating these unmasking signs of embedding.
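The classical QIM (quantization index modulation) method referenced above embeds one bit per coefficient by quantizing it onto one of two interleaved lattices of step Δ; extraction picks the lattice closest to the received coefficient. A minimal sketch of that baseline (the article's histogram-preserving correction is not shown):

```python
# Sketch of classical QIM on scalar DCT coefficients.
def qim_embed(coeff, bit, delta=8.0):
    """Quantize coeff onto the lattice selected by the message bit."""
    offset = bit * delta / 2
    return delta * round((coeff - offset) / delta) + offset

def qim_extract(coeff, delta=8.0):
    """Recover the bit whose lattice lies closest to the coefficient."""
    return min((0, 1), key=lambda b: abs(coeff - qim_embed(coeff, b, delta)))

bits = [1, 0, 1, 1, 0]
coeffs = [12.3, -5.7, 0.4, 21.9, -14.2]
stego = [qim_embed(c, b) for c, b in zip(coeffs, bits)]
print(stego, [qim_extract(s) for s in stego])
```

Because embedding snaps every coefficient to a coarse lattice, the DCT-coefficient histogram becomes visibly comb-like, which is exactly the unmasking artifact the article's approach aims to suppress.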


Author(s):  
Kerry E. Koitzsch

This chapter is a brief introduction to the Image as Big Data Toolkit (IABDT), a Java-based open-source framework for performing a variety of distributed image processing and analysis tasks. IABDT has been developed over the last two years in response to the rapid evolution of Big Data architectures and technologies and of distributed image processing systems. The chapter presents an architecture for image analytics that uses Big Data storage and compression methods; IABDT, a sample implementation of this architecture, addresses some of the most frequent challenges faced by image analytics developers. Baseline applications built with IABDT, the status of the toolkit and directions for future extension, with emphasis on image display, presentation and reporting case studies, are discussed to motivate our design and technology-stack choices, along with future development plans for IABDT.


1988 ◽  
Vol 27 (02) ◽  
pp. 53-57 ◽  
Author(s):  
J. Dengler ◽  
H. Bertsch ◽  
J. F. Desaga ◽  
M. Schmidt

Summary Image analysis with the aid of the computer has developed rapidly over the last few years, and there are many possibilities for making use of this development in the medical and biological fields. This paper gives a rather general overview of recent systematics of the existing methodology in image analysis. Some parts of these systematics are then illustrated in greater detail by recent research work at the German Cancer Research Center. In particular, two applications are reported in which special emphasis is laid on mathematical morphology. This relatively new approach to image analysis is finding growing interest in the image processing community and has its strength in bridging the gap between a priori knowledge and image analysis procedures.
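For readers unfamiliar with mathematical morphology, its two primitive operators can be sketched on a binary image with a 3×3 square structuring element (an illustration of the general technique, not code from the paper): erosion keeps a pixel only if its whole neighbourhood is set, dilation sets a pixel if any neighbour is set.

```python
# Sketch: binary erosion and dilation with a 3x3 square structuring
# element; the border is handled by clipping the neighbourhood.
def _neighbourhood(img, y, x):
    h, w = len(img), len(img[0])
    return [img[j][i]
            for j in range(y - 1, y + 2) for i in range(x - 1, x + 2)
            if 0 <= j < h and 0 <= i < w]

def erode(img):
    return [[int(all(_neighbourhood(img, y, x)))
             for x in range(len(img[0]))] for y in range(len(img))]

def dilate(img):
    return [[int(any(_neighbourhood(img, y, x)))
             for x in range(len(img[0]))] for y in range(len(img))]

image = [[0, 0, 0, 0, 0],
         [0, 1, 1, 1, 0],
         [0, 1, 1, 1, 0],
         [0, 1, 1, 1, 0],
         [0, 0, 0, 0, 0]]
# Erosion shrinks the 3x3 square to its centre pixel; dilating that
# result (the "opening") restores the original square in this toy case.
opened = dilate(erode(image))
```

Composing the two operators (opening, closing) is what lets a priori shape knowledge, encoded in the structuring element, drive the analysis.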


Author(s):  
Stephanie M Gogarten ◽  
Tamar Sofer ◽  
Han Chen ◽  
Chaoyu Yu ◽  
Jennifer A Brody ◽  
...  

Abstract Summary The Genomic Data Storage (GDS) format provides efficient storage and retrieval of genotypes measured by microarrays and sequencing. We developed GENESIS to perform various single- and aggregate-variant association tests using genotype data stored in GDS format. GENESIS implements highly flexible mixed models, allowing for different link functions, multiple variance components and phenotypic heteroskedasticity. GENESIS integrates cohesively with other R/Bioconductor packages to build a complete genomic analysis workflow entirely within the R environment. Availability and implementation https://bioconductor.org/packages/GENESIS; vignettes included. Supplementary information Supplementary data are available at Bioinformatics online.
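As a sketch of the model class involved (written in standard GLMM notation, not taken from the package documentation), the mixed models mentioned above have the form

```latex
g\bigl(\mathbb{E}[y_i \mid X_i, b_i]\bigr) = X_i^{\top}\beta + b_i,
\qquad
b = (b_1,\dots,b_n)^{\top} \sim N\!\Bigl(0,\; \sum_{k} \sigma_k^2 \Phi_k\Bigr),
```

where $g$ is the link function, $\beta$ the fixed effects, and each $\Phi_k$ a known correlation structure (e.g. a kinship matrix) contributing one variance component $\sigma_k^2$; letting the residual variance differ across phenotype groups yields the heteroskedasticity the abstract refers to.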


2019 ◽  
Vol 35 (18) ◽  
pp. 3544-3546 ◽  
Author(s):  
Douglas H Roossien ◽  
Benjamin V Sadis ◽  
Yan Yan ◽  
John M Webb ◽  
Lia Y Min ◽  
...  

Abstract Summary This note describes nTracer, an ImageJ plug-in for user-guided, semi-automated tracing of multispectral fluorescent tissue samples. This approach allows for rapid and accurate reconstruction of whole cell morphology of large neuronal populations in densely labeled brains. Availability and implementation nTracer was written as a plug-in for the open source image processing software ImageJ. The software, instructional documentation, tutorial videos, sample image and sample tracing results are available at https://www.cai-lab.org/ntracer-tutorial. Supplementary information Supplementary data are available at Bioinformatics online.


2015 ◽  
Vol 11 (S319) ◽  
pp. 144-144
Author(s):  
Laerte Sodré ◽  
Patricia Martins de Novais

Abstract Stellar populations are fossil records of several physical processes that occur in galaxies, and their distribution within these objects may provide important clues about how galaxies form and evolve. Using parameters from image processing, we have been developing a new approach to understanding the spatial distribution of stellar populations and how it correlates with the form and evolution of galaxies. In this work we present some results obtained with data from the CALIFA survey.

