image databases
Recently Published Documents

TOTAL DOCUMENTS: 706 (five years: 50)
H-INDEX: 39 (five years: 2)

Author(s): Y. I. Golub

Quality assessment is an integral stage in the processing and analysis of digital images in various automated systems. With the growing number and variety of devices that capture data in different digital formats, and the expansion of human activities in which information technology (IT) is used, the need to assess the quality of the acquired data keeps increasing, and so does the bar for quality requirements. The article describes the factors that degrade the quality of digital images, areas of application of image quality assessment functions, a method for normalizing proximity measures, classes of digital images and their possible distortions, and image databases available on the Internet for conducting image quality assessment experiments with visual ratings by experts.
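The abstract does not spell out the normalization method for proximity measures; as a minimal sketch, assuming a simple min-max mapping of raw quality scores onto [0, 1] (with an option to invert error-like measures), it could look like this in Python:

```python
import numpy as np

def normalize_scores(scores, higher_is_better=True):
    """Map raw quality/proximity scores to [0, 1] via min-max scaling (assumed scheme)."""
    scores = np.asarray(scores, dtype=float)
    lo, hi = scores.min(), scores.max()
    if hi == lo:                      # degenerate case: all scores identical
        return np.full_like(scores, 0.5)
    norm = (scores - lo) / (hi - lo)
    return norm if higher_is_better else 1.0 - norm

# Example: two measures on different scales become directly comparable
print(normalize_scores([12.3, 45.6, 78.9]))        # PSNR-like, higher is better
print(normalize_scores([0.8, 0.2, 0.05], False))   # error-like, lower is better
```

Bringing all measures to a common range and direction makes them easier to compare against expert visual ratings.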


2021 · Vol. 38 (6) · pp. 1713-1718
Author(s): Manikanta Prahlad Manda, Daijoon Hyun

Traditional thresholding methods are often used for image segmentation of real images. However, due to the distinct characteristics of infrared thermal images, it is difficult to ensure an optimal segmentation with traditional thresholding algorithms, which can lead to over-segmentation, missing object information, and/or spurious responses in the output. To overcome these issues, we propose a new thresholding technique based on a sine entropy criterion. We also build a double thresholding technique that uses two thresholds to produce the final segmentation result. The sine entropy concept is introduced as a supplement to the Shannon entropy in creating a threshold-dependent criterion derived from the grayscale histogram; we found that sine entropy is more robust than Shannon entropy in capturing the strength of long-range correlations among gray levels. We evaluated our method on several infrared thermal images collected from standard image databases. Compared with state-of-the-art methods, the quantitative results show that the proposed method achieves the best performance with an average sensitivity of 0.98 and an average misclassification error of 0.01, and the second-best performance with an average sensitivity of 0.99 and an average specificity of 0.93.
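The exact sine-entropy criterion and the double-threshold rule are not given in the abstract; the following is a minimal single-threshold sketch assuming a Kapur-style search in which each class contributes a sine term of the form sin(pi * p_i / w) instead of the Shannon term -p_i * log(p_i):

```python
import numpy as np

def sine_entropy_threshold(image, bins=256):
    """Pick the gray level that maximises an assumed sine-entropy criterion
    over the background/foreground histogram partitions (Kapur-style search)."""
    hist, _ = np.histogram(image, bins=bins, range=(0, bins))  # assumes 8-bit input
    p = hist.astype(float) / hist.sum()

    best_t, best_score = 0, -np.inf
    for t in range(1, bins - 1):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        # assumed class entropy: sum of sin(pi * p_i / w) over class probabilities
        score = np.sin(np.pi * p[:t] / w0).sum() + np.sin(np.pi * p[t:] / w1).sum()
        if score > best_score:
            best_t, best_score = t, score
    return best_t

# Usage on an 8-bit thermal image held in a NumPy array `img`:
# t = sine_entropy_threshold(img); mask = img >= t
```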


2021 · Vol. 11 (19) · pp. 8795
Author(s): Cesar Benavides-Alvarez, Carlos Aviles-Cruz, Eduardo Rodriguez-Martinez, Andrés Ferreyra-Ramírez, Arturo Zúñiga-López

One of the most important applications of data science and data mining is organizing, classifying, and retrieving digital images on the Internet. A current focus of researchers is to develop methods for the content-based exploration of natural scenery images. In this paper, a self-organizing method for natural scene images based on Wiener-Granger causality theory is proposed. It is achieved by introducing a feature extraction stage at random points within the image, organizing the features in time-series form, and computing their Wiener-Granger causal relationships. Once the causal relationships are obtained, the k-means algorithm is applied to achieve the self-organization of these attributes. For classification, the k-NN distance classification algorithm is used to find the most similar images, i.e. those that share the causal relationships between the elements of the scenes. The proposed methodology is validated on three public image databases, obtaining 100% retrieval results.
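A hedged sketch of the pipeline described above, using statsmodels for Granger causality and scikit-learn for k-means and nearest-neighbour retrieval; the feature type, lag order, cluster count, and the helper name causality_signature are illustrative, since the abstract does not specify them:

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests
from sklearn.cluster import KMeans
from sklearn.neighbors import NearestNeighbors

def causality_signature(series, maxlag=2):
    """Vector of pairwise Granger-causality F statistics computed from the
    feature time series sampled at random points of a single image."""
    n = series.shape[1]
    stats = []
    for i in range(n):
        for j in range(n):
            if i != j:
                pair = np.column_stack([series[:, i], series[:, j]])
                res = grangercausalitytests(pair, maxlag=maxlag, verbose=False)
                stats.append(res[maxlag][0]["ssr_ftest"][0])  # F statistic at maxlag
    return np.array(stats)

rng = np.random.default_rng(0)
# Placeholder data: 30 "images", each yielding a (120 samples x 4 features) series.
signatures = np.vstack([causality_signature(rng.random((120, 4))) for _ in range(30)])

clusters = KMeans(n_clusters=3, n_init=10).fit_predict(signatures)  # self-organisation
nn = NearestNeighbors(n_neighbors=5).fit(signatures)                # retrieval step
_, similar = nn.kneighbors(signatures[:1])                          # images most similar to image 0
```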


Author(s): Laura Raquel Bareiro Paniagua, Jose Luis Vazquez Noguera, Luis Salgueiro Romero, Deysi Natalia Leguizamon Correa, Diego P. Pinto-Roa, ...

2021 · Vol. 971 (5) · pp. 51-60
Author(s): I.A. Anikeeva

An aerial image quality assessment for mapping purposes using open image databases is carried out. When reference-based methods are used, it is common practice to rely on open databases designed to provide researchers with reference data, i.e. images whose quality can be accepted as a reference. Such databases are widely used to test the efficiency of methods and algorithms for assessing and improving photographic quality. The possibility of applying this approach to the assessment of aerial photography in terms of its visual properties is shown. The values of photographic quality indicators for a reference sample from the open image library LIVE Database are determined.
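The specific quality indicators are not listed in the abstract; as an illustration of reference-based assessment on LIVE-style reference/distorted pairs, PSNR and SSIM can be computed with scikit-image (the file names below are hypothetical):

```python
from skimage import io
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# Hypothetical file names; LIVE provides aligned reference/distorted image pairs.
ref = io.imread("reference.bmp")   # assumed 8-bit RGB
img = io.imread("distorted.bmp")   # same size as the reference

psnr = peak_signal_noise_ratio(ref, img, data_range=255)
ssim = structural_similarity(ref, img, channel_axis=-1, data_range=255)
print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.3f}")
```

Such full-reference scores can then be compared against the subjective ratings shipped with the database.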


2021
Author(s): Tijl Grootswagers, Ivy Zhou, Amanda K Robinson, Martin N Hebart, Thomas A Carlson

The neural basis of object recognition and semantic knowledge has been the focus of a large body of research, but given the high dimensionality of object space, it is challenging to develop an overarching theory of how the brain organises object knowledge. To help understand how the brain allows us to recognise, categorise, and represent objects and object categories, there is growing interest in using large-scale image databases for neuroimaging experiments. Traditional image databases are based on manually selected object concepts, often with a single image per concept. In contrast, 'big data' stimulus sets typically consist of images that can vary significantly in quality and may be biased in content. To address this issue, recent work developed THINGS: a large stimulus set of 1,854 object concepts and 26,107 associated images. In the current paper, we present THINGS-EEG, a dataset containing human electroencephalography responses from 50 subjects to all concepts and 22,248 images in the THINGS stimulus set. The THINGS-EEG dataset provides neuroimaging recordings for a systematic collection of objects and concepts and can therefore support a wide array of research into visual object processing in the human brain.


2021 · Vol. 11 (1)
Author(s): Daniela Dal Fabbro, Giulia Catissi, Gustavo Borba, Luciano Lima, Erika Hingst-Zaher, ...

Affectively rated image databases have their main application in studies that require inducing distinct stimuli in subjects. Widespread databases are designed to cover a broad range of stimuli, from negative to positive (valence) and from relaxed to excited (arousal). The availability of narrow-domain databases, designed to cover and thoroughly analyze a few categories of images that induce a particular stimulus, is limited. We present a narrow-domain affective database of positive images, named the e-Nature Positive Emotions Photography Database (e-NatPOEM), consisting of 433 high-quality images produced by professional and amateur photographers. A total of 739 participants evaluated them using a web-based tool to enter valence-arousal values and a single word describing the evoked feeling. Ratings per image ranged from 36 to 108 (median: 57; first/third quartiles: 56/59). For 84% of the images, valence was above the middle of the scale and arousal below it. Words describing the images were classified into semantic groups, the predominant ones being Peace/tranquility (39% of all words), Beauty (23%), and Positive states (15%). e-NatPOEM is free and publicly available; it is a valid resource for affective research and has potential for clinical use in promoting positive emotions.
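A small sketch of how summary statistics of this kind can be derived from a ratings table; the column names and the 1-9 scale midpoint below are assumptions, not the database's actual schema:

```python
import pandas as pd

# Hypothetical long-format table: one row per (participant, image) rating.
ratings = pd.DataFrame({
    "image_id": [1, 1, 1, 2, 2, 2],
    "valence":  [7.5, 8.0, 6.9, 5.1, 4.8, 6.0],   # assumed 1-9 scale, midpoint 5
    "arousal":  [2.1, 3.0, 2.5, 4.9, 5.2, 4.4],
})

counts = ratings.groupby("image_id").size()
print(counts.median(), counts.quantile([0.25, 0.75]).tolist())   # ratings per image

means = ratings.groupby("image_id")[["valence", "arousal"]].mean()
midpoint = 5.0
positive_calm = (means["valence"] > midpoint) & (means["arousal"] < midpoint)
print(f"{100 * positive_calm.mean():.0f}% of images: valence above and arousal below midpoint")
```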


Author(s): Ashraf AlDabbas, Zoltan Gal

Developing a deep learning (DL) model for image classification commonly demands a careful architectural design. Planetary expeditions produce a massive quantity of data and images, and manually analyzing and classifying flight-mission image databases with hundreds of thousands of images is cumbersome and yields weak accuracy. In this paper, we address an essential topic related to the classification of remotely sensed images, in which feature coding and extraction are decisive procedures. Diverse feature extraction techniques are intended to stimulate a discriminative image classifier. Feature extraction is the primary step in processing raw data for classification, and for vast and varied data such tasks are time-consuming and hard to handle. Most existing classifiers are either quite intricate in principle or virtually infeasible to compute for massive datasets. Motivated by this observation, we put forward a straightforward, efficient classifier based on feature extraction that analyzes tensor cells via a layered MapReduce framework together with a meta-learning LSTM followed by a SoftMax classifier. Experimental results show that the proposed model attains a classification accuracy of 96.7%, which makes it well suited for diverse image databases of varying sizes.
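The MapReduce-based feature coding and the meta-learning setup are not detailed in the abstract; the sketch below, on placeholder feature sequences, only illustrates the final LSTM encoder with a SoftMax classification head in Keras:

```python
import numpy as np
import tensorflow as tf

# Toy feature sequences (samples, timesteps, features) standing in for the
# tensor-cell features extracted from remote-sensing images; shapes are assumptions.
x = np.random.rand(256, 20, 64).astype("float32")
y = np.random.randint(0, 5, size=256)               # 5 hypothetical scene classes

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(20, 64)),
    tf.keras.layers.LSTM(128),                       # sequence encoder
    tf.keras.layers.Dense(5, activation="softmax"),  # SoftMax classifier head
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x, y, epochs=3, batch_size=32, verbose=0)
print(model.evaluate(x, y, verbose=0))               # [loss, accuracy] on the toy data
```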


2021 · Vol. 23 (06) · pp. 47-57
Author(s): Aditya Kulkarni, Manali Munot, Sai Salunkhe, Shubham Mhaske, ...

With developments in technology, from serial to parallel computing, GPUs, AI, and deep learning models, a series of tools for processing complex images has been developed. The main focus of this research is to compare various algorithms (pre-trained models) and their contributions to processing complex images in terms of performance, accuracy, time, and limitations. The pre-trained models we use are CNN, R-CNN, R-FCN, and YOLO. These models are Python-based and use libraries like TensorFlow, OpenCV, and free image databases (Microsoft COCO and PASCAL VOC 2007/2012). They aim not only at object detection but also at drawing bounding boxes around the appropriate locations. This review thus gives a better view of these models and their performance, and a good idea of which models are suitable for various situations.
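As an illustration of running a pre-trained detector and drawing bounding boxes, the sketch below uses torchvision's COCO-trained Faster R-CNN rather than the exact TensorFlow/OpenCV setups compared in the paper; the input file name is hypothetical:

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image, ImageDraw

# Pre-trained Faster R-CNN from torchvision (trained on Microsoft COCO).
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

img = Image.open("street.jpg").convert("RGB")        # hypothetical input image
with torch.no_grad():
    pred = model([to_tensor(img)])[0]                # dict of boxes, labels, scores

draw = ImageDraw.Draw(img)
for box, score in zip(pred["boxes"], pred["scores"]):
    if score >= 0.5:                                 # keep confident detections only
        draw.rectangle(box.tolist(), outline="red", width=3)
img.save("street_detections.jpg")
```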


2021 · Vol. 7 (4) · pp. 69
Author(s): Ivan Castillo Camacho, Kai Wang

Seeing is not believing anymore. Different techniques have brought to our fingertips the ability to modify an image, and as the difficulty of using such techniques decreases, lowering the need for specialized knowledge has become the focus of the companies that create and sell these tools. Furthermore, image forgeries are now so realistic that it is difficult for the naked eye to distinguish fake from real media. This can cause various problems, from misleading public opinion to the use of doctored evidence in court. For these reasons, it is important to have tools that can help us discern the truth. This paper presents a comprehensive literature review of image forensics techniques, with a special focus on deep-learning-based methods. In this review, we cover a broad range of image forensics problems, including the detection of routine image manipulations, detection of intentional image falsifications, camera identification, classification of computer graphics images, and detection of emerging Deepfake images. The review shows that even though image forgeries are becoming easier to create, there are several options for detecting each kind. A review of different image databases and an overview of anti-forensic methods are also presented. Finally, we suggest some future research directions that the community could consider to tackle the spread of doctored images more effectively.

