reference image
Recently Published Documents


TOTAL DOCUMENTS

1213
(FIVE YEARS 427)

H-INDEX

44
(FIVE YEARS 7)

2022 ◽  
Vol 2022 (1) ◽  
Author(s):  
Shahi Dost ◽  
Faryal Saud ◽  
Maham Shabbir ◽  
Muhammad Gufran Khan ◽  
Muhammad Shahid ◽  
...  

With the growing demand for image- and video-based applications, the need for consistent quality assessment metrics for images and videos has increased. Different approaches have been proposed in the literature to estimate the perceptual quality of images and videos. These approaches can be divided into three main categories: full reference (FR), reduced reference (RR) and no-reference (NR). In RR methods, instead of providing the original image or video as a reference, we only need to provide certain features (e.g., texture, edges) of the original image or video for quality assessment. During the last decade, RR-based quality assessment has been a popular research area for a variety of applications such as social media, online games, and video streaming. In this paper, we present a review and classification of the latest research work on RR-based image and video quality assessment. We have also summarized the different databases used in the field of 2D and 3D image and video quality assessment. This paper will help specialists and researchers stay well informed about recent progress in RR-based image and video quality assessment. The review and classification presented here will also be useful for gaining an understanding of multimedia quality assessment and the state-of-the-art approaches used for its analysis. In addition, it will help the reader select appropriate quality assessment methods and parameters for their respective applications.
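As a minimal illustration of the RR idea, the sketch below (with hypothetical feature and scoring functions, not any specific method from the surveyed literature) transmits a single edge-density statistic of the reference image instead of the image itself, and scores a received image by how far its own statistic drifts:

```python
import numpy as np

def edge_density(img: np.ndarray, thresh: float = 0.1) -> float:
    """Fraction of pixels whose gradient magnitude exceeds a threshold.

    A toy reduced-reference feature: only this scalar, not the full
    reference image, needs to be transmitted to the receiver side.
    """
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    return float((mag > thresh).mean())

def rr_quality_score(ref_feature: float, distorted: np.ndarray) -> float:
    """Quality = 1 minus the gap between the reference-side feature and
    the same feature recomputed on the received image (1.0 = identical)."""
    return 1.0 - abs(ref_feature - edge_density(distorted))

# Sender side: compute and transmit the compact feature.
reference = np.zeros((8, 8))
reference[:, 4:] = 1.0  # sharp vertical edge
feature = edge_density(reference)

# Receiver side: an undistorted copy scores a perfect 1.0.
assert rr_quality_score(feature, reference.copy()) == 1.0
```

Real RR metrics use richer feature sets (wavelet statistics, edge maps), but the sender/receiver asymmetry shown here is the defining trait of the category.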


2022 ◽  
Vol 2022 ◽  
pp. 1-9
Author(s):  
R. Dinesh Kumar ◽  
E. Golden Julie ◽  
Y. Harold Robinson ◽  
S. Vimal ◽  
Gaurav Dhiman ◽  
...  

Humans mastered the skill of creativity long ago. Replicating this mechanism computationally has been introduced recently using neural networks, which mimic the functioning of the human brain: each unit in the network represents a neuron that transmits messages to other neurons to perform subconscious tasks. A common goal is to render an input image in the style of famous artworks, a problem generally called non-photorealistic rendering. Previous approaches relied on directly manipulating the pixel representation of the image. Using deep neural networks built for image recognition, this paper instead works in feature space, representing the higher-level content of the image. Deep neural networks have previously been used for object recognition and for style recognition to categorize artworks by their creation time. This paper uses the Visual Geometry Group (VGG16) neural network to replicate this formerly human task. Two images are given as input: a content image, containing the features to retain in the output, and a style reference image, containing the patterns of a famous painting. The two are blended to produce a new image in which the content is preserved but the result is "painted" in the style of the reference image.
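The style side of this blending is typically expressed through Gram matrices of the VGG16 feature maps, while the content side compares raw feature maps directly. A minimal NumPy sketch of that style representation, with illustrative function names rather than the paper's exact implementation:

```python
import numpy as np

def gram_matrix(features: np.ndarray) -> np.ndarray:
    """Gram matrix of a (channels, height, width) feature map.

    In Gatys-style transfer, the Gram matrix of a convolutional layer's
    feature maps captures the style (texture correlations) of an image,
    discarding the spatial arrangement that encodes the content.
    """
    c, h, w = features.shape
    flat = features.reshape(c, h * w)      # one row per channel
    return flat @ flat.T / (c * h * w)     # channel-to-channel correlations

def style_loss(gram_style: np.ndarray, gram_generated: np.ndarray) -> float:
    """Mean squared difference between two Gram matrices; minimized
    while a content loss keeps the output close to the content image."""
    return float(((gram_style - gram_generated) ** 2).mean())

# Identical feature maps give zero style loss.
f = np.random.default_rng(0).normal(size=(4, 8, 8))
assert style_loss(gram_matrix(f), gram_matrix(f)) == 0.0
```

In the full method, these losses are computed from several VGG16 layers and the output image's pixels are optimized to reduce a weighted sum of style and content losses.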


Author(s):  
V. V. Starovoitov ◽  
Y. I. Golub ◽  
M. M. Lukashevich

Diabetic retinopathy (DR) is a disease caused by complications of diabetes. It starts asymptomatically and can end in blindness. To detect it, doctors use special fundus cameras that register images of the retina in the visible range of the spectrum. In these images one can see the features that determine the presence of DR and its grade. Researchers around the world are developing systems for the automated analysis of fundus images. At present, the classification accuracy for DR-related diseases achieved by machine-learning-based systems is comparable to that of qualified medical doctors.

The article shows variants of how the retina is represented in digital images by different cameras. We set the task of developing a universal approach to assessing the quality of a retinal image obtained by an arbitrary fundus camera; this task is solved in the first block of any automated retinal image analysis system. The quality assessment procedure is carried out in several stages. At the first stage, the original image is binarized and a retinal mask is built. Such a mask is individual for each image, even among images recorded by the same camera. For this, a new universal retinal image binarization algorithm is proposed. By analyzing the result of the binarization, it is possible to identify and remove outlier images that show not the retina but other objects. Next, the problem of no-reference image quality assessment is solved, and images are classified into two classes: satisfactory and unsatisfactory for analysis. Contrast, sharpness and the possibility of segmenting the vascular system in the retinal image are evaluated step by step. It is shown that the problem of no-reference quality assessment of an arbitrary fundus image can be solved. Experiments were performed on a variety of images from the available retinal image databases.
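As a rough sketch of the first two stages (binarization and outlier removal), and not the article's actual algorithm, a threshold-based retinal mask and a plausibility check on the foreground fraction could look like this:

```python
import numpy as np

def retina_mask(img: np.ndarray) -> np.ndarray:
    """Binarize a fundus image into retina (True) vs. dark border (False).

    Toy stand-in for the universal binarization step: the threshold is
    placed between the dark surround and the bright retinal disc using
    the image's own intensity statistics, so each image gets its own mask.
    """
    thresh = img.mean()  # midpoint between dark border and bright disc
    return img > thresh

def is_outlier(mask: np.ndarray, min_fill: float = 0.2,
               max_fill: float = 0.95) -> bool:
    """Flag images whose foreground fraction is implausible for a retina,
    e.g. photographs of objects other than the fundus."""
    fill = mask.mean()
    return fill < min_fill or fill > max_fill

# Synthetic fundus: bright disc on a black background.
yy, xx = np.mgrid[0:64, 0:64]
img = ((yy - 32) ** 2 + (xx - 32) ** 2 < 24 ** 2).astype(float)
mask = retina_mask(img)
assert not is_outlier(mask)  # plausible retina
```

The published algorithm is more elaborate, but the per-image mask followed by an outlier test mirrors the pipeline described above.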


2022 ◽  
Vol 3 ◽  
Author(s):  
Nicolas Chiaruttini ◽  
Olivier Burri ◽  
Peter Haub ◽  
Romain Guiet ◽  
Jessica Sordet-Dessimoz ◽  
...  

Image analysis workflows for histology increasingly require the correlation and combination of measurements across several whole slide images. For multiplexed as well as multimodal imaging, the same sample must be imaged multiple times, either through various systems (multimodal imaging) or using the same system across rounds of sample manipulation (e.g., multiple staining sessions). In both cases slight deformations from one image to another are unavoidable, leading to an imperfect superimposition and thus a loss of accuracy, making it difficult to link measurements, in particular at the cellular level. Using pre-existing software components and developing missing ones, we propose a user-friendly workflow that facilitates the nonlinear registration of whole slide images down to sub-cellular resolution. The set of whole slide images to register and analyze is first defined as a QuPath project. Fiji is then used to open the QuPath project and perform the registrations. Each registration is automated using an elastix backend, or semi-automated using BigWarp to interactively correct the results of the automated registration. These transformations can then be retrieved in QuPath to transfer any regions of interest from an image to the corresponding registered images. In addition, the transformations can be applied in QuPath to produce on-the-fly transformed images displayed on top of the reference image. Thus, relevant data can be combined and analyzed throughout all registered slides, facilitating the analysis of correlative results for multiplexed and multimodal imaging.
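The ROI hand-off between registered slides amounts to pushing coordinates through the recovered transform. A minimal sketch for the affine case, with hypothetical helper names (nonlinear warps from elastix/BigWarp would use a displacement field, but the coordinate hand-off is analogous):

```python
import numpy as np

def apply_affine(points: np.ndarray, matrix: np.ndarray) -> np.ndarray:
    """Map ROI vertices from one slide to a registered slide.

    `points` is (n, 2) in the source image; `matrix` is a 2x3 affine
    (rotation/scale/shear plus translation) such as one recovered by a
    registration step. Homogeneous coordinates make the translation a
    plain matrix product.
    """
    homog = np.hstack([points, np.ones((len(points), 1))])  # (n, 3)
    return homog @ matrix.T                                  # (n, 2)

# A pure translation of (+10, -5) pixels between two staining rounds.
shift = np.array([[1.0, 0.0, 10.0],
                  [0.0, 1.0, -5.0]])
roi = np.array([[0.0, 0.0], [100.0, 0.0], [100.0, 50.0]])
moved = apply_affine(roi, shift)
assert np.allclose(moved, roi + [10.0, -5.0])
```

Transferring regions this way, rather than resampling pixels, is what lets measurements stay linked at the cellular level across all registered slides.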


Author(s):  
Ahmed Shihab Ahmed ◽  
Hussein Ali Salah

The technology <span>of multimodal brain image registration is a key method for the accurate and rapid diagnosis and treatment of brain diseases. To achieve high-resolution image registration, a fast subpixel registration algorithm is used, based on a single-step discrete wavelet transform (DWT) combined with phase correlation, with a convolutional neural network (CNN) to classify the registration of brain tumors. In this work we apply a genetic algorithm and CNN classification to the registration of magnetic resonance imaging (MRI) images. The approach follows eight steps: reading the source MRI brain image and loading the reference image; enhancing all MRI images with a bilateral filter; transforming the images with the 2D DWT; evaluating the fitness of each MRI image using entropy; applying the genetic algorithm by selecting two images via roulette-wheel selection; crossing over the two selected images; classifying the result of the subtraction as normal or abnormal with the CNN; and, in the eighth step, using an Arduino with a global system for mobile (GSM) 8080 module to send a message to the patient. The proposed model is tested on an MRI database from the Medical City Hospital in Baghdad consisting of 550 normal and 350 abnormal images, split into 80% training and 20% testing; it achieves 98.8% </span>accuracy.
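The genetic algorithm's roulette-wheel selection step can be sketched as follows; the function and variable names are illustrative, with entropy standing in as the fitness value as described above:

```python
import random

def roulette_select(population, fitnesses, rng=random):
    """Fitness-proportionate ('roulette wheel') selection.

    Each candidate's chance of being picked equals its share of the
    total fitness, mirroring the selection step the GA uses to pick
    images for crossover.
    """
    total = sum(fitnesses)
    pick = rng.uniform(0, total)
    running = 0.0
    for individual, fit in zip(population, fitnesses):
        running += fit
        if running >= pick:
            return individual
    return population[-1]  # guard against floating-point round-off

random.seed(42)
images = ["img_a", "img_b", "img_c"]
entropies = [0.2, 0.3, 0.5]  # entropy used as the fitness, per the text
parent = roulette_select(images, entropies)
assert parent in images
```

Two parents drawn this way would then be combined by the crossover operator before the CNN classifies the result.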


2021 ◽  
Vol 12 (1) ◽  
pp. 288
Author(s):  
Tasleem Kausar ◽  
Adeeba Kausar ◽  
Muhammad Adnan Ashraf ◽  
Muhammad Farhan Siddique ◽  
Mingjiang Wang ◽  
...  

Histopathological image analysis is the examination of tissue under a light microscope for cancer diagnosis. Computer-assisted diagnosis (CAD) systems work well at diagnosing cancer from histopathology images. However, stain variability in histopathology images is inevitable due to differences in staining processes, operator ability, and scanner specifications. These stain variations affect the accuracy of CAD systems. Various stain normalization techniques have been developed to cope with inter-variability issues by standardizing the appearance of images. However, these methods rely on a single reference image rather than incorporating the color distribution of the entire dataset. In this paper, we design a novel machine-learning-based model that takes advantage of the whole dataset's distribution as well as the color statistics of a single target image, instead of relying on the target image alone. The proposed deep model, called stain acclimation generative adversarial network (SA-GAN), consists of one generator and two discriminators. The generator maps input images from the source domain to the target domain. The first discriminator forces the generated images to match the color patterns of the target domain, while the second forces them to preserve the structural content of the source domain. The proposed model is trained using a color attribute metric extracted from a selected template image. Therefore, the designed model learns not only dataset-specific staining properties but also image-specific textural content. Results on four different histopathology datasets show the efficacy of SA-GAN in acclimating stain content and enhancing the quality of normalization, obtaining the highest values of the performance metrics. Additionally, the proposed method is evaluated on a multiclass cancer-type classification task, showing a 6.9% improvement in accuracy on the ICIAR 2018 hidden test data.
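For contrast with SA-GAN's learned mapping, classic single-reference normalization simply matches per-channel color statistics to those of the template image. A Reinhard-style sketch with hypothetical names (real implementations typically work in the Lab color space rather than raw RGB):

```python
import numpy as np

def match_color_stats(source: np.ndarray, target_mean, target_std) -> np.ndarray:
    """Shift/scale each channel of `source` so its mean and standard
    deviation match the target's. This is the single-reference matching
    that SA-GAN's dataset-level learning is designed to improve upon.
    """
    out = np.empty_like(source, dtype=float)
    for c in range(source.shape[-1]):
        ch = source[..., c].astype(float)
        std = ch.std() or 1.0  # avoid division by zero on flat channels
        out[..., c] = (ch - ch.mean()) / std * target_std[c] + target_mean[c]
    return out

# Template statistics would come from the selected reference slide.
rng = np.random.default_rng(1)
src = rng.uniform(0, 255, size=(32, 32, 3))
norm = match_color_stats(src, target_mean=(120.0, 90.0, 150.0),
                         target_std=(20.0, 15.0, 25.0))
assert np.allclose(norm.mean(axis=(0, 1)), (120.0, 90.0, 150.0))
```

Because the statistics come from one image, any atypical staining in the template propagates to every normalized slide, which is the limitation the dataset-wide approach above addresses.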


Sensors ◽  
2021 ◽  
Vol 22 (1) ◽  
pp. 175
Author(s):  
Ghislain Takam Tchendjou ◽  
Emmanuel Simeu

This paper presents the construction of a new objective method for estimating perceived visual quality. The proposal assesses image quality without the need for a reference image or a specific distortion assumption. Two main processes are used to build our models. The first uses deep learning with a convolutional neural network, without any preprocessing. In the second, an objective visual quality score is computed by pooling several image features extracted from different concepts: natural scene statistics in the spatial domain, the gradient magnitude, the Laplacian of Gaussian, and the spectral and spatial entropies. The features extracted from the image are used as the input of machine learning techniques to build the models that estimate the visual quality level of any image. For the machine learning training phase, two processes are proposed. The first is direct learning using all the selected features in a single training phase, named direct learning blind visual quality assessment (DLBQA). The second is indirect learning in two training phases, named indirect learning blind visual quality assessment (ILBQA); it includes an additional phase that constructs intermediary metrics used to build the prediction model. The produced models are evaluated on several benchmark image databases, such as TID2013, LIVE, and the LIVE In the Wild Image Quality Challenge. The experimental results demonstrate that the proposed models produce the best visual quality predictions compared to state-of-the-art models. The proposed models have been implemented on an FPGA platform to demonstrate the feasibility of integrating the proposed solution into an image sensor.
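Two of the pooled features, spatial entropy and mean gradient magnitude, are simple to compute; the sketch below uses illustrative names and is not the paper's exact implementation:

```python
import numpy as np

def spatial_entropy(img: np.ndarray, bins: int = 256) -> float:
    """Shannon entropy of the grey-level histogram; low entropy often
    signals flat, washed-out, or heavily compressed content."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def gradient_magnitude_mean(img: np.ndarray) -> float:
    """Mean gradient magnitude, a simple sharpness cue: blur suppresses
    gradients, so blurry images score low."""
    gy, gx = np.gradient(img.astype(float))
    return float(np.hypot(gx, gy).mean())

flat = np.full((16, 16), 128.0)  # constant image: no detail at all
assert spatial_entropy(flat) == 0.0
assert gradient_magnitude_mean(flat) == 0.0
```

In the blind-quality pipeline, vectors of such features are fed to a trained regressor that maps them to a predicted quality score.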

