image metadata: Recently Published Documents

Total documents: 58 (five years: 16)
H-index: 6 (five years: 1)

Author(s): S. T. Veena, A. Selvaraj

Today many steganographic software tools are freely available on the Internet, helping even novice users engage in covert communication through digital images. Targeted structural image steganalysers identify only a particular steganographic software tool by tracing the unique fingerprint that the steganographic process leaves in stego images. Image steganalysis becomes a challenging task when the process is blind and universal, the secret payload is very small and the cover image is in a lossless compression format. A payload-independent universal steganalyser is proposed that identifies steganographic software tools by exploiting the traces of artefacts left in the image and in its metadata, for five different image formats. First, the artefacts in the image metadata are identified and clustered into distinct groups by extended K-means clustering. The group that is identical to the cover is further processed by extracting the artefacts in the image data. This is done by developing a signature of the steganographic software tool from its stego images; the signatures are then matched to identify the tool. Thus, the steganalyser successfully identifies stego images in five different image formats, four of which are lossless, even for a payload of 1 byte. Its performance is also compared with an existing steganalyser software tool.
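The metadata-clustering step can be illustrated with a minimal, pure-Python k-means sketch. The paper's extended K-means and its actual metadata features are not specified here; the feature vectors below (e.g. header length, marker-segment count) are hypothetical stand-ins:

```python
import math
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means on numeric metadata feature vectors."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)  # initialize from the data
    for _ in range(iters):
        # Assign each feature vector to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: math.dist(p, centroids[c]))
            clusters[i].append(p)
        # Recompute centroids; keep the old one if a cluster emptied.
        centroids = [
            [sum(dim) / len(c) for dim in zip(*c)] if c else centroids[j]
            for j, c in enumerate(clusters)
        ]
    return centroids, clusters

# Hypothetical per-file metadata features: [header_len, n_marker_segments]
files = [[0.0, 0.0], [0.1, 0.2], [10.0, 10.0], [10.2, 9.9]]
cents, groups = kmeans(files, 2, seed=1)
```

Files whose metadata artefacts land in the same cluster are candidates for having been produced by the same tool; the group matching clean covers is then inspected further at the image-data level, as the abstract describes.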


2021 · Vol. 2021, pp. 1–8
Author(s): Silky Goel, Siddharth Gupta, Avnish Panwar, Sunil Kumar, Madhushi Verma, et al.

Diabetes is a fast-growing disease in India, with currently more than 72 million patients. Prolonged diabetes (about 20 years) can seriously damage the tiny blood vessels and neurons in the patient's eyes, a condition called diabetic retinopathy (DR). This first causes occlusion and then rapid vision loss. The symptoms of the disease are not very conspicuous in its early stage. The disease is characterized by the formation of bloated structures in the retinal area called microaneurysms. If neglected, the condition of the eye worsens, producing more severe blots and damage to retinal vessels and eventually complete loss of vision. Early screening and monitoring of DR can reduce the risk of vision loss in high-risk patients. However, detection and classification of diabetic retinopathy by a human is a challenging and error-prone task because of the complexity of images captured by colour fundus photography. Machine learning algorithms armed with feature extraction techniques have been employed earlier to detect and classify the levels of DR, but these techniques provide below-par accuracy. With the advent of deep learning (DL) techniques in computer vision, it has become possible to achieve very high levels of accuracy. DL models are loosely inspired by the human brain and visual system. Creating a model from scratch and training it is a cumbersome task requiring a huge number of images. This deficiency of DL techniques can be addressed by transfer learning, in which a DL model already trained on hundreds of image classes is reused and the features it has learned are applied to the DR fundus images. This enables practitioners to create models capable of classifying unseen images into the proper grade or level with acceptable accuracy. This paper proposes a DL model coupled with different classifiers to classify fundus images into their correct class of severity. We trained the model on IDRD images, and it achieves very high accuracy.
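The transfer-learning idea can be sketched in miniature (this is not the paper's architecture): a "pretrained" feature extractor is frozen, and only a small classifier head is trained. The hand-crafted features below stand in for what a real CNN backbone would provide:

```python
import math

def pretrained_features(pixels):
    # Stand-in for a frozen CNN backbone: maps raw pixel values to a
    # tiny feature vector (mean intensity, spread, constant bias term).
    mean = sum(pixels) / len(pixels)
    var = sum((p - mean) ** 2 for p in pixels) / len(pixels)
    return [mean, math.sqrt(var), 1.0]

def train_head(images, labels, lr=0.1, epochs=200):
    """Train only the classifier head (logistic regression);
    the 'backbone' (pretrained_features) stays frozen."""
    w = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for pix, y in zip(images, labels):
            f = pretrained_features(pix)
            z = sum(wi * fi for wi, fi in zip(w, f))
            p = 1.0 / (1.0 + math.exp(-z))
            w = [wi + lr * (y - p) * fi for wi, fi in zip(w, f)]
    return w

def predict(w, pixels):
    f = pretrained_features(pixels)
    return int(sum(wi * fi for wi, fi in zip(w, f)) > 0)
```

In a real DR pipeline the frozen backbone would be a network pretrained on a large labelled image corpus, and only the final grading head would be fitted to the fundus images.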


2021 · Vol. 13 (14), pp. 2802
Author(s): Soraya Kaiser, Guido Grosse, Julia Boike, Moritz Langer

Water bodies are a highly abundant feature of Arctic permafrost ecosystems and strongly influence their hydrology, ecology and biogeochemical cycling. While very high resolution satellite images enable detailed mapping of these water bodies, the increasing availability and abundance of this imagery call for fast, reliable and automated monitoring. This technical work presents a largely automated and scalable workflow that removes image noise, detects water bodies, removes potential misclassifications caused by infrastructural features, derives lake shoreline geometries and retrieves their movement rate and direction on the basis of ortho-ready very high resolution satellite imagery from Arctic permafrost lowlands. We applied this workflow to typical Arctic lake areas on the Alaska North Slope and achieved fast and successful detection of water bodies. We derived representative shoreline movement rates ranging from 0.40 to 0.56 m/yr for lake sizes of 0.10–23.04 ha. The approach also gives insight into seasonal water level changes. Based on an extensive quantification of error sources, we discuss how the results of the automated workflow can be further enhanced by incorporating additional information on weather conditions and image metadata and by improving the input database. The workflow is suitable for seasonal to annual monitoring of lake changes at sub-metre scale in the study areas in northern Alaska and can readily be scaled for application across larger regions within certain accuracy limitations.
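One step of such a workflow, turning two digitized shorelines into a movement rate, can be sketched as follows. This is a simplified illustration, not the paper's implementation; real workflows operate on georeferenced geometries in projected coordinates:

```python
import math

def shoreline_movement_rate(shore_t0, shore_t1, years):
    """Mean nearest-point distance between two digitized shorelines
    (point lists in metres), normalized by elapsed time -> m/yr."""
    dists = [
        min(math.dist(p, q) for q in shore_t1)  # closest point on new shore
        for p in shore_t0
    ]
    return sum(dists) / len(dists) / years
```

A usage example: a shoreline that retreats uniformly by 1 m over two years yields a rate of 0.5 m/yr, the same order as the 0.40–0.56 m/yr reported above.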


2021
Author(s): Chao Pan, S. Kasra Tabatabaei, SM Hossein Tabatabaei Yazdi, Alvaro G. Hernandez, Charles M. Schroeder, et al.

DNA-based data storage platforms traditionally encode information only in the nucleotide sequence of the molecule. Here, we report on a two-dimensional molecular data storage system that records information in both the sequence and the backbone structure of DNA. Our "2DDNA" method efficiently stores high-density images in synthetic DNA and embeds metadata as nicks in the DNA backbone. To avoid the costly redundancy usually employed to combat sequencing errors and missing information content, which typically requires additional synthesis, specialized machine learning methods are developed for automatic discoloration detection and image inpainting. The 2DDNA platform is experimentally tested on a library of images that show no detectable visual degradation after processing, while the image metadata is erased and rewritten to modify copyright information. Our results show that DNA can serve both as a write-once and a rewritable memory for heterogeneous data. Moreover, the storage density of the molecules can be increased by using different encoding dimensions and avoiding error-correction redundancy.
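A toy sketch of the two-dimensional idea, with image bytes in the base sequence and metadata as nick positions along the backbone, assuming a simple 2-bits-per-base mapping (not the paper's actual codes):

```python
BASES = "ACGT"

def bytes_to_dna(data):
    """Encode each byte as four nucleotides (2 bits per base)."""
    seq = []
    for b in data:
        for shift in (6, 4, 2, 0):
            seq.append(BASES[(b >> shift) & 0b11])
    return "".join(seq)

def metadata_to_nicks(tag, spacing=8):
    """Place a 'nick' every `spacing` bases where the tag bit is 1,
    mimicking metadata written into the backbone, not the sequence."""
    bits = [(byte >> i) & 1 for byte in tag for i in range(7, -1, -1)]
    return [k * spacing for k, bit in enumerate(bits) if bit]
```

The point of the second dimension is that rewriting the metadata changes only the nick list; the synthesized sequence, i.e. the stored image, is left untouched, which is what makes the metadata layer rewritable.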


2021 · Vol. 13 (4), pp. 593
Author(s): Lorenzo Lastilla, Valeria Belloni, Roberta Ravanelli, Mattia Crespi

DSM generation from satellite imagery is a long-standing issue that has been addressed in several ways over the years; however, experts and users are continuously searching for simpler yet accurate and reliable software solutions. One of the latest is provided by the commercial software Agisoft Metashape (since version 1.6), previously known as Photoscan, which joins other already available open-source and commercial software tools. The present work aims to quantify the potential of the new Agisoft Metashape satellite processing module, considering that, to the best of the authors' knowledge, only two papers have been published on it, neither considering cross-sensor imagery. We investigated two case studies to evaluate the accuracy of the generated DSMs. The first dataset consists of a triplet of Pléiades images acquired over the area of Trento and the Adige valley (Northern Italy), which is characterized by great variety in terms of geomorphology, land use and land cover. The second consists of a triplet composed of a WorldView-3 stereo pair and a GeoEye-1 image, acquired over the city of Matera (Southern Italy), one of the oldest settlements in the world, with the world-famous Sassi area and very rugged morphology in the surroundings. First, we carried out the accuracy assessment using the RPCs supplied by the satellite companies as part of the image metadata. Then, we refined the RPCs with an original terrain-independent technique able to supply a new set of RPCs, using a set of GCPs adequately distributed across the regions of interest. The DSMs were generated in both stereo and multi-view (triplet) configurations. We assessed the accuracy and completeness of these DSMs through comparison with proper references, i.e., DSMs obtained through LiDAR technology. The impact of the RPC refinement on DSM accuracy is high, ranging from 20 to 40% in terms of LE90. After the RPC refinement, we achieved an average overall LE90 below 5.0 m (Trento) and 4.0 m (Matera) for the stereo configuration, and below 5.5 m (Trento) and 4.5 m (Matera) for the multi-view (triplet) configuration, with an increase in completeness of 5–15% with respect to stereo pairs. Finally, we analyzed the impact of land cover on the accuracy of the generated DSMs; results for three classes (urban, agricultural, and forest and semi-natural areas) are also supplied.
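For readers unfamiliar with RPCs, the sketch below shows the general form of a rational polynomial camera model and the simplest bias-style refinement from GCP residuals. It is truncated to first-order terms for brevity; operational RPCs use 20-term cubic polynomials in normalized coordinates, and the paper's refinement technique is more elaborate:

```python
def rpc_project(lat, lon, h, num, den):
    """One image coordinate of a Rational Polynomial Coefficient model:
    a ratio of polynomials in (normalized) ground coordinates,
    truncated here to the first-order terms [1, lon, lat, h]."""
    terms = [1.0, lon, lat, h]
    return (sum(a * t for a, t in zip(num, terms))
            / sum(b * t for b, t in zip(den, terms)))

def refine_rpc_bias(obs_pred_pairs):
    """Estimate a constant image-space bias from GCP residuals
    (observed minus RPC-predicted coordinates); adding it to the
    predictions is the most basic form of RPC refinement."""
    residuals = [obs - pred for obs, pred in obs_pred_pairs]
    return sum(residuals) / len(residuals)
```

Even this constant-offset correction illustrates why refinement matters: vendor RPCs can carry systematic image-space errors of several pixels that GCPs expose directly.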


2021 · Vol. ahead-of-print
Author(s): Xiaoguang Wang, Ningyuan Song, Xuemei Liu, Lei Xu

Purpose: To meet the emerging demand for fine-grained annotation and semantic enrichment of cultural heritage images, this paper proposes a new approach that transcends the boundary between information organization theory and Panofsky's iconography theory.

Design/methodology/approach: After a systematic review of semantic data models for organizing cultural heritage images and a comparative analysis of the concepts and characteristics of deep semantic annotation (DSA) and indexing, an integrated DSA framework for cultural heritage images, together with its principles and process, was designed. Two experiments were conducted on two mural images from the Mogao Caves to evaluate the DSA framework's validity against four criteria: depth, breadth, granularity and relation.

Findings: Results showed that the proposed DSA framework includes not only image metadata but also represents the storyline contained in the images, by integrating domain terminology, ontology, thesaurus, taxonomy and natural language description into a multilevel structure.

Originality/value: DSA can reveal the aboutness, ofness and isness information contained within images, meeting the demand for semantic enrichment and retrieval of cultural heritage images at a fine-grained level. This method can also contribute to building a novel infrastructure for the growing scholarship of digital humanities.
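A minimal annotation record in this spirit might combine descriptive metadata with layered semantic description (ofness, aboutness, isness). All identifiers and values below are hypothetical illustrations, not taken from the paper's experiments:

```python
# Hypothetical deep-semantic-annotation record for one image region.
annotation = {
    "metadata": {
        "identifier": "mural-example-001",   # invented identifier
        "format": "image/tiff",
    },
    "regions": [
        {
            "bbox": [120, 80, 340, 260],           # x, y, width, height
            "ofness": ["deer", "river"],           # what is depicted
            "aboutness": ["self-sacrifice"],       # what it is about
            "isness": "mural painting",            # what the object is
            "narrative": "A deer rescues a drowning man.",
        }
    ],
}

def terms_at_level(record, level):
    """Collect all terms recorded at one semantic level."""
    out = []
    for region in record["regions"]:
        value = region.get(level, [])
        out.extend(value if isinstance(value, list) else [value])
    return out
```

Keeping the levels separate is what allows retrieval at a chosen depth: a query on ofness finds depicted objects, while a query on aboutness finds themes.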


Fast track article for IS&T International Symposium on Electronic Imaging 2021: Media Watermarking, Security, and Forensics 2021 proceedings.


First Monday · 2020
Author(s): Denise Russo, Abebe Rorissa

The digitization of visual resources and the creation of corresponding metadata that meet the criteria of clarity and interoperability, while also addressing the needs of the multilingual Web, are pressing concerns. Because visual resources make up a significant percentage of digital information, this paper focuses on these concerns and proposes ways to address them, including the swift development and adoption of cohesive, multi-user, multilingual metadata standards to improve digital access and to make all descriptive image metadata approachable and translatable. We offer several recommendations, such as that those involved in visual resource management move away from metadata schemas based primarily on the English writing system, in order to provide a flexible lexicon in non-Roman languages that can be recognized and interpreted by monolingual and multilingual users alike and that facilitates digital metadata interoperability.
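One common way to realize such multilingual schemas is to key each descriptive field by a language tag (e.g. BCP 47) rather than privileging one writing system. A minimal illustrative sketch, with invented record values:

```python
# Hypothetical multilingual descriptive record: each field holds
# values keyed by BCP 47 language tags, so no single writing
# system is treated as the canonical one.
record = {
    "title": {
        "en": "Winter Landscape",
        "fr": "Paysage d'hiver",
        "ja": "冬景色",
    }
}

def localized(record, field, preferred, fallback="en"):
    """Return the field value in the user's preferred language,
    falling back to a default when no localization exists."""
    values = record[field]
    return values.get(preferred, values[fallback])
```

The fallback behaviour matters for interoperability: a catalogue can always render something, while systems that do support the user's language get the native-script value.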


Author(s): Melanie Conroy, Kimmo Elo

This chapter uses network analysis to explore, visualise and analyse quantitative historical data related to political resistance movements in the former East Germany. The study applies historical network analysis (HNA), rooted in social network analysis (SNA), to shed light on the structure and dynamics of the geospatial social networks of a sample group within the East German opposition movement between 1975 and 1990. In particular, the opportunities and limits of using network analysis for historical studies are discussed, demonstrating how network graphs can be useful for historical analysis. Network analysis helps the researcher identify which individuals are more likely to be well integrated into the group and which are less central to it, regardless of which individuals are most well known or prominent. We also point to the fact that knowledge of the government's repressive actions and the opposition movement's attempts to evade repression is fundamental to understanding the geospatial and social changes within this group during this period.
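The distinction between being "well integrated" and "less central" can be made concrete with normalized degree centrality, one of the standard SNA measures (an illustrative sketch with invented actors, not the chapter's actual computation):

```python
from collections import defaultdict

def degree_centrality(edges):
    """Normalized degree centrality: the share of other actors each
    actor is directly connected to, independent of prominence."""
    neighbors = defaultdict(set)
    for a, b in edges:
        neighbors[a].add(b)
        neighbors[b].add(a)
    n = len(neighbors)
    return {v: len(nb) / (n - 1) for v, nb in neighbors.items()}
```

An actor with centrality 1.0 is tied to everyone in the sample; a peripheral figure scores near 0 even if they are historically famous, which is exactly the decoupling of integration from prominence described above.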


Author(s): H. J. Theiss

Abstract. The National Geospatial-Intelligence Agency (NGA) designed the Generic Linear Array Scanner (GLAS) model for geopositioning images from both airborne and spaceborne linear array scanning systems, including pushbroom, whiskbroom, and panoramic sensors. Providers of hyperspectral imagery (HSI) historically have not populated products with high-fidelity metadata to support downstream photogrammetric processing. To demonstrate recommended metadata population and exploitation using the GLAS model, NGA has generated example HSI products using data collected by NASA's EO-1 Hyperion sensor and provided courtesy of the U.S. Geological Survey. This paper provides novel techniques for: 1) generating reasonably accurate initial approximations of the GLAS metadata as a function of per-image metadata consisting of only timing information and the latitude and longitude values of the four corners of the image; and 2) identifying a vector of adjustable parameters, and reasonable values for its a priori error covariance matrix, that enable corrections to the metadata during a bundle adjustment. The paper describes applying these techniques to fourteen overlapping Hyperion images of the Alps, running a bundle adjustment as a function of tie points and optional ground control points, and demonstrating results superior to the previous polynomial-based approach, as quantified by the 3D errors at several ground check points.
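Technique 1 can be illustrated in simplified form: given only the four corner coordinates, a first approximation of the ground location of any (line, sample) position can be obtained by bilinear interpolation. This is a crude sketch that ignores Earth curvature and sensor attitude, which the full GLAS model accounts for:

```python
def corner_interpolate(corners, line_frac, samp_frac):
    """Bilinear interpolation of (lat, lon) inside an image footprint
    from its four corner coordinates, given fractional line/sample
    positions in [0, 1]. Corners: upper-left, upper-right,
    lower-left, lower-right."""
    ul, ur, ll, lr = corners
    top = [(1 - samp_frac) * u + samp_frac * v for u, v in zip(ul, ur)]
    bot = [(1 - samp_frac) * u + samp_frac * v for u, v in zip(ll, lr)]
    return tuple((1 - line_frac) * t + line_frac * b
                 for t, b in zip(top, bot))
```

Combined with per-line timing, such a footprint approximation gives a starting geometry that a bundle adjustment can then correct via the adjustable-parameter vector described in technique 2.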

