image fragment: Recently Published Documents

TOTAL DOCUMENTS: 19 (FIVE YEARS: 9)
H-INDEX: 3 (FIVE YEARS: 2)

Author(s): Tatyana Biloborodova, Inna Skarga-Bandurova, Mark Koverga

A methodology for eliminating class imbalance in image data sets is presented. It comprises the stages of image fragment extraction, fragment augmentation, feature extraction, and duplication of minority-class objects, and is based on reinforcement learning technology. The degree-of-imbalance indicator was used to quantify the imbalance of the data set. An experiment was performed on a set of facial images of patients with skin rashes, annotated according to acne severity. The main steps of the methodology are described. The classification results confirm the feasibility of the proposed methodology: accuracy on test data reached 85%, which is 5% higher than the result obtained without it. Key words: class imbalance, unbalanced data set, image fragment extraction, augmentation.
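The balancing step can be illustrated with a short sketch. This is a minimal illustration, not the authors' implementation: it omits the reinforcement learning component, and the function names, the flip-based augmentation, and the toy data are assumptions.

```python
import numpy as np

def degree_of_imbalance(labels):
    """Ratio of majority-class count to minority-class count (illustrative measure)."""
    counts = np.bincount(labels)
    counts = counts[counts > 0]
    return counts.max() / counts.min()

def oversample_minority(fragments, labels, rng=None):
    """Duplicate minority-class fragments (with a simple flip as toy augmentation)
    until the class counts match. `fragments` is an array of image patches."""
    rng = np.random.default_rng(rng)
    counts = np.bincount(labels)
    minority, majority = counts.argmin(), counts.argmax()
    deficit = counts[majority] - counts[minority]
    idx = rng.choice(np.flatnonzero(labels == minority), size=deficit, replace=True)
    extra = fragments[idx][:, :, ::-1]            # horizontal flip of the duplicated patches
    return (np.concatenate([fragments, extra]),
            np.concatenate([labels, np.full(deficit, minority)]))

# toy usage: 90 majority vs 10 minority 32x32 patches
frags = np.random.rand(100, 32, 32)
labs = np.array([0] * 90 + [1] * 10)
print(degree_of_imbalance(labs))                  # 9.0 before balancing
frags_b, labs_b = oversample_minority(frags, labs, rng=0)
print(degree_of_imbalance(labs_b))                # 1.0 after balancing
```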


2021, Vol 2021 (49), pp. 45-51
Author(s): R. Ya. Kosarevych, O. V. Alokhina, B. P. Rusyn, O. A. Lutsyk, ...

A methodology of remote sensing image analysis for detecting dependences in the development of biological species is proposed. Classification methods based on convolutional networks are applied to a set of fragments of the input image. To increase classification accuracy by enlarging the training and test samples, an original data augmentation method is proposed: for a series of images of the same part of the landscape, image fragments are assigned the class labels of the corresponding fragments of a previously classified image, whose training and test samples were created manually. This approach improved classification accuracy compared with known data augmentation methods. Experiments with several convolutional neural networks showed that the classification results for remote sensing image fragments become similar as training time grows and the network structure becomes more complex. The set of fragment centres of a particular class is treated as a random point configuration, with the class labels serving as marks. The marked point field is considered as consisting of several sub-point fields, in each of which all points carry the same qualitative mark. The bivariate point pattern is analysed to reveal relationships between points of different types, using the characteristics of marked random point fields; such relationships can characterise dependences and relative degrees of dominance. A series of remote sensing images is studied to identify the relationships between the point configurations describing different classes and to monitor their development.
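As a rough illustration of the bivariate point-pattern analysis described above, the sketch below compares cross-type and same-type nearest-neighbour distances between fragment centres of two classes. It is a minimal sketch on assumed data; the function names and the interpretation are not from the paper, and it does not reproduce the full marked-point-field characteristics the authors use.

```python
import numpy as np

def nn_distances(points_from, points_to, same=False):
    """Distance from each 'from' point to its nearest 'to' point.
    With same=True the trivial zero self-distance is excluded."""
    d = np.linalg.norm(points_from[:, None, :] - points_to[None, :, :], axis=-1)
    if same:
        np.fill_diagonal(d, np.inf)
    return d.min(axis=1)

rng = np.random.default_rng(1)
class_a = rng.uniform(0, 100, size=(40, 2))   # centres of fragments labelled class A
class_b = rng.uniform(0, 100, size=(60, 2))   # centres of fragments labelled class B

mean_ab = nn_distances(class_a, class_b).mean()
mean_aa = nn_distances(class_a, class_a, same=True).mean()
# A mean cross-type distance noticeably smaller than the same-type distance
# would hint that class-A points tend to lie closer to class-B points than
# to each other (attraction between the two sub-point fields).
print(mean_ab, mean_aa)
```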


Computers, 2021, Vol 10 (9), pp. 105
Author(s): Shurong Sheng, Katrien Laenen, Luc Van Gool, Marie-Francine Moens

In this paper, we target the tasks of fine-grained image–text alignment and cross-modal retrieval in the cultural heritage domain as follows: (1) given an image fragment of an artwork, we retrieve the noun phrases that describe it; (2) given a noun phrase artifact attribute, we retrieve the corresponding image fragment it specifies. To this end, we propose a weakly supervised alignment model where the correspondence between the input training visual and textual fragments is not known but their corresponding units that refer to the same artwork are treated as a positive pair. The model exploits the latent alignment between fragments across modalities using attention mechanisms by first projecting them into a shared common semantic space; the model is then trained by increasing the image–text similarity of the positive pair in the common space. During this process, we encode the inputs of our model with hierarchical encodings and remove irrelevant fragments with different indicator functions. We also study techniques to augment the limited training data with synthetic relevant textual fragments and transformed image fragments. The model is later fine-tuned by a limited set of small-scale image–text fragment pairs. We rank the test image fragments and noun phrases by their intermodal similarity in the learned common space. Extensive experiments demonstrate that our proposed models outperform two state-of-the-art methods adapted to fine-grained cross-modal retrieval of cultural items for two benchmark datasets.
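The general idea of projecting visual and textual fragments into a shared space and scoring an image-text pair through attention can be sketched as follows. This is a minimal numpy illustration, not the authors' model: the projection matrices are random placeholders, the embedding dimensions are assumptions, and training (increasing positive-pair similarity), hierarchical encodings, and the indicator functions are omitted.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def pair_similarity(img_frags, txt_frags, W_img, W_txt):
    """Attention-based similarity between one image (a set of visual fragment
    embeddings) and one text (a set of noun-phrase embeddings) in a shared space."""
    v = img_frags @ W_img                      # project visual fragments
    t = txt_frags @ W_txt                      # project textual fragments
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    t /= np.linalg.norm(t, axis=1, keepdims=True)
    scores = t @ v.T                           # phrase-to-fragment cosine scores
    attn = softmax(scores, axis=1)             # each phrase attends over fragments
    attended = attn @ v                        # attended visual context per phrase
    # aggregate phrase-level similarities into one image-text score
    return float(np.mean(np.sum(attended * t, axis=1)))

rng = np.random.default_rng(0)
W_img = rng.normal(size=(512, 128)) * 0.05     # visual projection (hypothetical dims)
W_txt = rng.normal(size=(300, 128)) * 0.05     # textual projection (hypothetical dims)
img = rng.normal(size=(6, 512))                # 6 image fragments of one artwork
txt = rng.normal(size=(4, 300))                # 4 noun phrases describing it
print(pair_similarity(img, txt, W_img, W_txt))
```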


Author(s): Alexey Borisovich Raukhvarger, Pavel Alekseevich Durandin

The paper considers the algorithmic basis of a developed application that allows the user to select image fragments for viewing not only in enlarged form but also with increased detail distinctness through brightness-contrast transformations. An algorithm is proposed for upsizing and processing the selected image fragment according to the required parameters of average brightness and contrast. The advantages of the proposed method over global methods that process the entire image are investigated. The approach is particularly advantageous for highlighting low-contrast, nearly monotone fragments in images containing regions of differing brightness. Experiments on processing various image fragments were carried out using a specially developed program, and examples of the results are presented. The behaviour of two types of fragments was analysed on simplified models of pixel brightness distribution, leading to conclusions about further improvements to the approach.
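A minimal sketch of the kind of fragment-level brightness-contrast transformation described above is given below; the function name, the target mean and contrast values, and the toy image are assumptions, not the authors' algorithm.

```python
import numpy as np

def adjust_fragment(image, box, target_mean=128.0, target_std=48.0):
    """Linearly rescale the selected fragment so its brightness statistics
    match the requested mean and contrast (standard deviation)."""
    y0, y1, x0, x1 = box
    frag = image[y0:y1, x0:x1].astype(np.float64)
    mean, std = frag.mean(), frag.std()
    gain = target_std / std if std > 1e-6 else 1.0
    out = (frag - mean) * gain + target_mean
    return np.clip(out, 0, 255).astype(image.dtype)

# toy usage: a dim, low-contrast 8-bit image and a fragment of interest
img = (np.random.rand(200, 200) * 20 + 60).astype(np.uint8)
enhanced = adjust_fragment(img, box=(50, 120, 40, 110))
print(enhanced.mean(), enhanced.std())   # close to the requested 128 / 48
```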


2020, Vol 135, pp. 103405
Author(s): Erkai Watson, Nico Kunert, Robin Putzar, Hans-Gerd Maas, Stefan Hiermaier

2019, Vol 43 (6), pp. 1030-1040
Author(s): A.V. Kokoshkin, V.A. Korotkov, K.V. Korotkov, E.P. Novichikhin

This paper discusses the use of the interpolation method for the sequential calculation of the Fourier spectrum (IMSCS) for retouching and restoring missing (shaded) image fragments. The proposed approach can be used with a missing fragment of any shape, and such processing can give good results even when a significantly high percentage of the image is missing. The digital virtual image reconstruction method proposed here is strictly based on a scientific approach: it uses as source data all the data available, namely the image itself, which is the object to be recovered. It is therefore free from the human factor, which can introduce subjective changes into the image being processed. The results presented indicate a significant increase in the quality (information content) of digital images, which can offer helpful auxiliary tools for professionals using these images for practical purposes.
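The exact IMSCS procedure is not reproduced here, but the family of spectral restoration methods it belongs to can be sketched as an iteration that alternates between re-imposing the known pixels in the image domain and truncating the Fourier spectrum as a smoothness prior (a Papoulis-Gerchberg-style scheme). The parameters and the toy example below are assumptions.

```python
import numpy as np

def spectral_inpaint(image, mask, keep_fraction=0.15, iterations=50):
    """Fill pixels where mask is True by alternating between the image domain
    (re-imposing the known pixels) and the Fourier domain (keeping only the
    lowest-frequency coefficients as a smoothness prior)."""
    h, w = image.shape
    known = ~mask
    est = image.astype(np.float64).copy()
    est[mask] = image[known].mean()               # rough initial fill

    # low-pass window in the (fft-shifted) spectrum
    fy = np.fft.fftshift(np.fft.fftfreq(h))
    fx = np.fft.fftshift(np.fft.fftfreq(w))
    lowpass = (np.abs(fy)[:, None] <= 0.5 * keep_fraction) & \
              (np.abs(fx)[None, :] <= 0.5 * keep_fraction)

    for _ in range(iterations):
        spec = np.fft.fftshift(np.fft.fft2(est))
        spec[~lowpass] = 0                        # truncate the spectrum
        est = np.real(np.fft.ifft2(np.fft.ifftshift(spec)))
        est[known] = image[known]                 # re-impose the observed pixels
    return est

# toy usage: a smooth test image with a rectangular missing region
y, x = np.mgrid[0:128, 0:128]
img = np.sin(x / 10.0) + np.cos(y / 14.0)
hole = np.zeros_like(img, dtype=bool)
hole[40:70, 50:90] = True
restored = spectral_inpaint(img, hole)
print(np.abs(restored[hole] - img[hole]).mean())  # small reconstruction error
```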


Author(s): Alexandr A. Pozdeev, Nataliia A. Obukhova, Alexandr A. Motyko

A set of algorithms is proposed that takes into account the features of endoscopic images and the computational cost required for real-time operation. The noise reduction algorithm is based on determining the level of detail in an image fragment; fragments with different levels of detail are processed with different noise reduction filters. The enhancement algorithm uses nonlinear contrast enhancement that highlights the contrast of vessels relative to the background without significant noise amplification, which is one of the main disadvantages of nonlinear enhancement algorithms. The custom color correction algorithm takes user preferences into account and provides a mean error of less than 0.5% for each color coordinate. The "mosaic" synthesis algorithm builds panoramic images from low-detail images with a mean stitching error of less than 0.75 pix. The software implementation of the algorithms processes 4K endoscopic video at about 30 fps.
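A minimal sketch of the detail-adaptive noise reduction idea (smooth flat fragments strongly, detailed fragments lightly) follows; the fragment size, variance threshold, and filter sizes are illustrative assumptions, not the published algorithm.

```python
import numpy as np

def box_blur(img, k):
    """Simple k x k mean filter built from an integral image (edges padded)."""
    pad = k // 2
    p = np.pad(img.astype(np.float64), pad, mode='edge')
    c = np.cumsum(np.cumsum(p, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))
    s = c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]
    return s / (k * k)

def adaptive_denoise(img, frag=16, var_thresh=60.0):
    """Apply strong smoothing to flat fragments and light smoothing to detailed ones.
    The fragment size and variance threshold are illustrative values."""
    strong = box_blur(img, 5)
    light = box_blur(img, 3)
    out = np.empty_like(img, dtype=np.float64)
    h, w = img.shape
    for y in range(0, h, frag):
        for x in range(0, w, frag):
            block = img[y:y+frag, x:x+frag]
            src = light if block.var() > var_thresh else strong
            out[y:y+frag, x:x+frag] = src[y:y+frag, x:x+frag]
    return out

# toy usage: noisy image with a flat left half and a "detailed" right half
rng = np.random.default_rng(0)
img = np.full((128, 128), 100.0)
img[:, 64:] += (np.indices((128, 64)).sum(axis=0) % 16) * 8
noisy = img + rng.normal(0, 5, img.shape)
denoised = adaptive_denoise(noisy)
```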


Author(s): Erkai Watson, Nico Kunert, Robin Putzar, Hans-Gerd Maas, Stefan Hiermaier

Abstract. Hypervelocity impacts (HVI) often cause significant fragmentation in both the target and projectile materials and are frequently encountered in space debris and planetary impact applications [1]–[5]. In this paper, we focus on determining the individual velocities and sizes of fragments tracked in high-speed images. Inspired by velocimetry methods such as Particle Image Velocimetry (PIV) [6] and Particle Tracking Velocimetry (PTV) [7], and building on past work [8], we describe the setup and algorithm used for measuring fragmentation data.
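A naive PTV-style matcher illustrating the fragment-tracking idea is sketched below: detect fragment centroids in two successive frames, match nearest neighbours, and convert displacements to velocities. The function name, the displacement threshold, and the toy data are assumptions and do not reproduce the authors' algorithm.

```python
import numpy as np

def track_fragments(centroids_t0, centroids_t1, dt, max_disp=15.0):
    """Match each fragment in frame t0 to its nearest neighbour in frame t1
    and return per-fragment velocity vectors (a naive PTV-style matcher)."""
    d = np.linalg.norm(centroids_t0[:, None, :] - centroids_t1[None, :, :], axis=-1)
    nearest = d.argmin(axis=1)
    valid = d[np.arange(len(centroids_t0)), nearest] <= max_disp
    velocities = (centroids_t1[nearest] - centroids_t0) / dt
    return velocities[valid], valid

# toy usage: ten fragments drifting to the right between two frames 1 microsecond apart
rng = np.random.default_rng(3)
p0 = rng.uniform(0, 200, size=(10, 2))
p1 = p0 + np.array([8.0, 1.0]) + rng.normal(0, 0.3, size=(10, 2))
v, ok = track_fragments(p0, p1, dt=1e-6)
print(v.mean(axis=0))    # roughly [8e6, 1e6] pixels per second
```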

