image size
Recently Published Documents


TOTAL DOCUMENTS: 549 (past five years: 207)

H-INDEX: 26 (past five years: 3)

2022 ◽  
Vol 2022 ◽  
pp. 1-10
Author(s):  
Xuhui Fu

With the continuous development and popularization of artificial intelligence in recent years, the field of deep learning has also advanced rapidly. Deep learning techniques have attracted attention in image detection, image recognition, image recoloring, and image artistic style transfer, and several style transfer techniques built around deep learning are already widely used. This article proposes an image artistic style transfer algorithm based on a generative adversarial network (GAN) to realize style transfer quickly. The network replaces the traditional deconvolution operation with a resize-then-convolve step: the image size is adjusted first and the convolution is applied afterwards. A content encoder and a style encoder encode the content and style of the selected images and extract the corresponding features. To enhance the artistic style transfer effect, a multi-scale discriminator is used to assess the generated images. The experimental results show that the algorithm is effective and has great application and promotion value.
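The resize-then-convolve idea described above (adjust the image size first, then convolve, instead of using deconvolution) can be sketched in plain NumPy. This is a minimal illustration, not the paper's implementation: the 2× nearest-neighbor factor and the 3×3 averaging kernel are assumed choices.

```python
import numpy as np

def upsample_then_conv(x, kernel):
    """Nearest-neighbor 2x upsample, then a 'same' 2D convolution.
    This pattern is often preferred over transposed convolution because
    it avoids checkerboard artifacts in the generated image."""
    up = np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)  # (H, W) -> (2H, 2W)
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(up, ((ph, ph), (pw, pw)), mode="edge")
    out = np.zeros_like(up, dtype=float)
    for i in range(up.shape[0]):
        for j in range(up.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

feat = np.arange(16, dtype=float).reshape(4, 4)  # a toy 4x4 feature map
smooth = np.full((3, 3), 1 / 9)                  # illustrative averaging kernel
y = upsample_then_conv(feat, smooth)
print(y.shape)  # (8, 8)
```

In a real generator the kernel would be learned, but the upsample-then-convolve structure is the same.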


2022 ◽  
Vol 8 ◽  
Author(s):  
Dong Zhang ◽  
Hongcheng Han ◽  
Shaoyi Du ◽  
Longfei Zhu ◽  
Jing Yang ◽  
...  

Malignant melanoma (MM) recognition in whole-slide images (WSIs) is challenging due to the huge image size of billions of pixels and complex visual characteristics. We propose a novel automatic melanoma recognition method based on multi-scale features and a probability map, named MPMR. First, we break the WSI into patches to overcome the computational difficulty of processing WSIs of such huge size. Second, to obtain and visualize the recognition result for MM tissue in WSIs, a probability mapping method is proposed that generates a mask from the predicted categories, confidence probabilities, and location information of the patches. Third, because the pathological features related to melanoma appear at different scales, such as tissue, cell, and nucleus, and enhancing the representation of multi-scale features is important for melanoma recognition, we construct a multi-scale feature fusion architecture with additional branch paths and shortcut connections, which extracts enriched lesion features from low-level features containing more detail and high-level features containing more semantics. Fourth, to improve feature extraction for irregularly shaped lesions and to focus on essential features, we reconstruct the residual blocks with deformable convolution and a channel attention mechanism, which further reduces information redundancy and noisy features. The experimental results demonstrate that the proposed method outperforms the compared algorithms and has potential for practical application in clinical diagnosis.
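The patch-and-probability-map step can be illustrated with a small NumPy sketch. The 64-pixel patch size, the toy classifier, and the 0.5 binarization threshold are assumptions for illustration, not the paper's settings:

```python
import numpy as np

def patch_probability_map(wsi_shape, patch, predict):
    """Tile an image into non-overlapping patches, run a patch classifier,
    and paint each patch's confidence back into a full-size probability map."""
    H, W = wsi_shape
    prob_map = np.zeros((H, W), dtype=float)
    for top in range(0, H - patch + 1, patch):
        for left in range(0, W - patch + 1, patch):
            p = predict(top, left)  # confidence that this patch is MM tissue
            prob_map[top:top + patch, left:left + patch] = p
    return prob_map

# hypothetical classifier: pretend the upper-left quadrant is lesion
toy = lambda top, left: 0.9 if (top < 128 and left < 128) else 0.1
pm = patch_probability_map((256, 256), 64, toy)
mask = pm > 0.5  # binarize the probability map into a lesion mask
```

A real pipeline would pass each patch through a CNN; the map-assembly logic stays the same.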


2022 ◽  
Vol 12 (1) ◽  
pp. 468
Author(s):  
Yeonghyeon Gu ◽  
Zhegao Piao ◽  
Seong Joon Yoo

In magnetic resonance imaging (MRI) segmentation, conventional approaches use U-Net models with encoder–decoder structures, segmentation models based on vision transformers, or models that combine a vision transformer with an encoder–decoder structure. However, conventional models are large and slow to compute, and in vision transformer models the computation increases sharply with the image size. To overcome these problems, this paper proposes a model that combines Swin transformer blocks with a lightweight U-Net-type model whose encoder–decoder structure is built from HarDNet blocks. To preserve the hierarchical and shifted-windows features of the Swin transformer, the Swin transformer is placed in the first skip connection layer of the encoder instead of in the encoder–decoder bottleneck. The proposed model, called STHarDNet, was evaluated on the anatomical tracings of lesions after stroke (ATLAS) dataset, which comprises 229 T1-weighted MRI images, split into training and validation sets. It achieved Dice, IoU, precision, and recall values of 0.5547, 0.4185, 0.6764, and 0.5286, respectively, outperforming the state-of-the-art models U-Net, SegNet, PSPNet, FCHarDNet, TransHarDNet, Swin Transformer, Swin UNet, X-Net, and D-UNet. Thus, STHarDNet improves the accuracy and speed of MRI-based stroke diagnosis.
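The Dice and IoU scores reported above are standard overlap metrics for binary segmentation masks. A minimal NumPy sketch on toy masks (not the ATLAS data):

```python
import numpy as np

def dice_iou(pred, target, eps=1e-7):
    """Dice coefficient and IoU (Jaccard index) for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    dice = 2 * inter / (pred.sum() + target.sum() + eps)
    iou = inter / (union + eps)
    return dice, iou

a = np.zeros((4, 4), int); a[:2] = 1      # predicted lesion: top half
b = np.zeros((4, 4), int); b[:, :2] = 1   # ground truth: left half
d, i = dice_iou(a, b)
print(round(d, 3), round(i, 3))  # 0.5 0.333
```

Dice weights the intersection twice, so it is always at least as large as IoU on the same pair of masks.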


Author(s):  
Qianru Zhang ◽  
Meng Zhang ◽  
Chinthaka Gamanayake ◽  
Chau Yuen ◽  
Zehao Geng ◽  
...  

Abstract
With the improvement of electronic circuit production methods, such as the reduction of component size and the increase in component density, the risk of defects in the production line is rising. Many techniques have been applied to check for failed solder joints, including X-ray imaging, optical imaging, and thermal imaging, among which X-ray imaging can inspect both external and internal defects. However, even advanced algorithms are not accurate enough to meet quality-control requirements, so a large amount of manual inspection is still required, increasing the specialist workload. In addition, automatic X-ray inspection can produce incorrect regions of interest (ROIs) that degrade defect detection, and the high dimensionality of X-ray images and changes in image size pose further challenges to detection algorithms. Recently, advances in deep learning have provided inspiration for image-based tasks, with performance competitive with human experts. In this work, deep learning is introduced into inspection for quality control. Four artificial-intelligence-based joint defect detection models are proposed and compared, addressing the noisy-ROI and variable-image-size problems. The effectiveness of the proposed models is verified by experiments on a real-world 3D X-ray dataset, greatly reducing the specialist inspection workload.
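One common way to handle the variable-image-size problem mentioned above is to fit every image into a fixed network input by scaling and padding ("letterboxing"). The paper does not specify its preprocessing; this NumPy sketch is one assumed approach:

```python
import numpy as np

def letterbox(img, size):
    """Fit a variable-size grayscale image into a fixed size x size input:
    nearest-neighbor scale so the longer side fits, then center with zero padding,
    preserving the aspect ratio."""
    h, w = img.shape
    scale = size / max(h, w)
    nh, nw = max(1, int(h * scale)), max(1, int(w * scale))
    rows = (np.arange(nh) / scale).astype(int).clip(0, h - 1)
    cols = (np.arange(nw) / scale).astype(int).clip(0, w - 1)
    resized = img[np.ix_(rows, cols)]          # nearest-neighbor resize
    out = np.zeros((size, size), dtype=img.dtype)
    top, left = (size - nh) // 2, (size - nw) // 2
    out[top:top + nh, left:left + nw] = resized
    return out

x = np.ones((300, 150), dtype=np.uint8)  # a tall 2:1 X-ray crop
y = letterbox(x, 128)                    # 128x128 input with side padding
```

Zero padding keeps the aspect ratio intact, which matters when defect geometry (e.g. solder joint shape) is the signal.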


Tomography ◽  
2022 ◽  
Vol 8 (1) ◽  
pp. 59-76
Author(s):  
Bing Li ◽  
Shaoyong Wu ◽  
Siqin Zhang ◽  
Xia Liu ◽  
Guangqing Li

Automatic image segmentation plays an important role in medical image processing, a field that constantly demands higher segmentation accuracy and speed. To improve both, we propose a medical image segmentation algorithm based on simple non-iterative clustering (SNIC). First, the feature map of the image is obtained by extracting its texture information with a feature extraction algorithm. Second, the image is downscaled to a quarter of its original size. Then, the SNIC superpixel algorithm, extended with texture information and adaptive parameters, segments the downscaled image to obtain a superpixel label map. Finally, the superpixel label map is restored to the original size following the idea of the nearest-neighbor algorithm. Experimental results show that applying the improved superpixel segmentation to downscaled images increases segmentation speed on medical images while maintaining excellent segmentation accuracy.
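The downscale / segment / nearest-neighbor-restore pipeline can be sketched in NumPy. SNIC itself is not in common Python libraries, so a simple threshold stands in for the superpixel labeling here; only the resizing logic is illustrated:

```python
import numpy as np

def downscale(img, factor=2):
    """Keep every factor-th pixel: each side shrinks by `factor`,
    so factor=2 leaves a quarter of the original pixels."""
    return img[::factor, ::factor]

def restore_labels(labels, shape):
    """Nearest-neighbor upscaling of a label map back to the original size."""
    H, W = shape
    h, w = labels.shape
    rows = (np.arange(H) * h // H).clip(0, h - 1)
    cols = (np.arange(W) * w // W).clip(0, w - 1)
    return labels[np.ix_(rows, cols)]

rng = np.random.default_rng(0)
img = rng.random((64, 64))
small = downscale(img)               # 32x32, a quarter of the pixels
labels = (small > 0.5).astype(int)   # stand-in for the SNIC label map
full = restore_labels(labels, img.shape)
```

Because segmentation cost grows with pixel count, running the clustering on the quarter-size image is where the speedup comes from; the label restoration is cheap indexing.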


2022 ◽  
Vol 29 (1) ◽  
Author(s):  
Cyril Léveillé ◽  
Kewin Desjardins ◽  
Horia Popescu ◽  
Boris Vondungbo ◽  
Marcel Hennes ◽  
...  

The latest complementary metal oxide semiconductor (CMOS) 2D sensors now rival state-of-the-art photon detectors for optical applications, combining a high frame rate with a wide dynamic range. While the advent of high-repetition-rate hard X-ray free-electron lasers (FELs) has boosted the development of complex large-area fast CCD detectors, in the extreme ultraviolet (EUV) and soft X-ray domains scientists have lacked such high-performance 2D detectors, principally because of the very poor efficiency imposed by the sensor processing. Recently, a new generation of large back-side-illuminated scientific CMOS sensors (CMOS-BSI) has been developed and commercialized. One of these cost-efficient and competitive sensors, the GSENSE400BSI, has been implemented and characterized, and a proof of concept has been carried out at a synchrotron or laser-based X-ray source. In this article, we explore the feasibility of single-shot ultra-fast experiments at FEL sources operating in the EUV/soft X-ray regime with an AXIS-SXR camera equipped with the GSENSE400BSI-TVISB sensor. We illustrate the detector's capabilities with a soft X-ray magnetic scattering experiment at the DiProi end-station of the FERMI FEL. These measurements show that the camera can collect single-shot images in the 50 Hz operation mode of FERMI with a cropped image size of 700 × 700 pixels. The efficiency of the sensor at a working photon energy of 58 eV and its linearity over the wide FEL intensity range have been verified. Moreover, on-the-fly time-resolved single-shot X-ray resonant magnetic scattering imaging of prototype Co/Pt multilayer films has been carried out with a time-collection gain of 30 compared with the classical start-and-stop acquisition method performed with the conventional CCD-BSI detector available at the end-station.


2021 ◽  
Vol 38 (6) ◽  
pp. 1677-1687
Author(s):  
Chao Liu ◽  
Jing Yang ◽  
Yining Zhang ◽  
Xuan Zhang ◽  
Weinan Zhao ◽  
...  

Face images, as information carriers, are inherently weak in privacy protection. If they are collected and analyzed by malicious third parties, personal privacy will leak and many other unmeasurable losses will occur. Differential privacy protection of face images has mainly been studied under non-interactive frameworks, in which the privacy budget ε affects the entire image. Moreover, under the Laplace mechanism the noise is applied uniformly across the protected image. Differential privacy of face images under interactive mechanisms can protect different areas to different degrees, but the total error is still constrained by the image size. To solve this problem, this paper proposes a non-global privacy protection method for sensitive areas in face images, called differential privacy of landmark positioning (DPLP). The algorithm works as follows. First, the active shape model (ASM) algorithm positions the area of each face landmark; if a landmark overlaps a subgraph of the original image, that subgraph is taken as a sensitive area. Then, the sensitive area is used as the seed for region growing, following the fusion similarity measurement mechanism (FSMM). In our method, the privacy budget is allocated only to the seeds; an insensitive area is protected only if it lies within a grown region. In addition, when a subgraph meets the merging criterion for multiple seeds, the most suitable seed is selected by the exponential mechanism. Experimental results show that the DPLP algorithm satisfies ε-differential privacy, its total error does not change with image size, and the noisy image remains highly usable.
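The core contrast above, spending the privacy budget only on sensitive regions instead of the whole image, can be sketched with the Laplace mechanism in NumPy. The region shape, ε value, and sensitivity are illustrative assumptions, not DPLP's actual parameters:

```python
import numpy as np

def laplace_region(img, mask, eps, sensitivity=255.0):
    """Add Laplace noise only inside the sensitive region (mask == True),
    so the privacy budget eps is spent on that region alone and the
    rest of the image stays noise-free."""
    rng = np.random.default_rng(0)
    noisy = img.astype(float).copy()
    scale = sensitivity / eps          # Laplace scale b = sensitivity / epsilon
    noisy[mask] += rng.laplace(0.0, scale, size=int(mask.sum()))
    return np.clip(noisy, 0, 255)

face = np.full((8, 8), 128.0)                     # toy stand-in for a face image
sensitive = np.zeros((8, 8), bool)
sensitive[2:6, 2:6] = True                        # e.g. one landmark region
out = laplace_region(face, sensitive, eps=0.5)
```

Because the noise is confined to the masked pixels, the total error depends on the region size rather than the full image size, mirroring the paper's claim.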


2021 ◽  
pp. 1-15
Author(s):  
Milan Ćurković ◽  
Andrijana Ćurković ◽  
Damir Vučina

Image binarization is one of the fundamental methods in image processing and is mainly used as preprocessing for other methods. We present an image binarization method whose primary purpose is to find markers such as those used in mobile 3D scanning systems. Operating a mobile 3D scanning system often involves difficult conditions such as light reflection and non-uniform illumination; as a basic part of the scanning process, the proposed binarization method successfully overcomes these problems. In view of the trend toward larger images and real-time image processing, we were able to achieve the required low algorithmic complexity. The paper presents a comparison with several other methods, focusing on objects with markers, including the calibration plane of the 3D scanning system. Although no binarization algorithm is best for all types of images, we also give the results of the proposed method applied to historical documents.
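The paper's own algorithm is not reproduced here, but the standard way binarization copes with non-uniform illumination is a local (adaptive) threshold rather than a single global one. A minimal NumPy sketch of local-mean thresholding, with an assumed block size and offset:

```python
import numpy as np

def adaptive_binarize(img, block=5, offset=0.05):
    """Threshold each pixel against the mean of its local block.
    Unlike a global threshold, this tolerates a slowly varying
    background such as non-uniform illumination."""
    pad = block // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    H, W = img.shape
    out = np.zeros((H, W), dtype=np.uint8)
    for i in range(H):
        for j in range(W):
            local_mean = padded[i:i + block, j:j + block].mean()
            out[i, j] = 1 if img[i, j] > local_mean + offset else 0
    return out

# two bright markers on a background lit by a left-to-right gradient
grad = np.linspace(0.2, 0.8, 16)[None, :].repeat(16, axis=0)
grad[4, 4] = grad[10, 12] = 1.0
bw = adaptive_binarize(grad)   # only the two markers survive thresholding
```

A global threshold would either miss the marker in the dark region or flood the bright region; the local mean tracks the gradient, so both markers are isolated.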


Author(s):  
Marwa Ahmad ◽  
Nameer N. EL-Emam ◽  
Ali F. AL-Azawi

Steganography algorithms have become a significant technique for preventing illegal users from obtaining secret data. In this paper, an improved deep hiding/extraction algorithm (IDHEA) is proposed to hide a secret message in colour images, enhancing the payload capacity and reducing the time complexity. A modified LSB (MLSB) scheme, based on scattering the secret data randomly over the cover image, is proposed to replace a number of bits per byte (Nbpb), up to 4 bits, to increase the payload capacity and make the hidden data difficult to access. The number of levels of the IDHEA algorithm is chosen randomly; each level uses a colour image, and from one level to the next the image size is expanded: the algorithm starts with a small cover image and enlarges it gradually or abruptly at each subsequent level according to an enlargement ratio. Lossless image compression based on run-length encoding and Gzip is applied so that the data to be hidden fits at the next level, and data encryption using the Advanced Encryption Standard (AES) is introduced at each level to raise the security level. The effectiveness of the proposed IDHEA algorithm is measured at the last level, and the performance of the hiding algorithm is checked with many statistical and visual measures in terms of embedding capacity and imperceptibility. Comparisons with previous work show that the proposed approach outperforms earlier modified LSB algorithms and resists visual and statistical attacks, with excellent performance as measured by the detection error (PE). Furthermore, the results confirm that the stego-image retains high imperceptibility even at a large payload capacity of twelve bits per pixel (12 bpp), and testing confirms that the proposed algorithm can embed secret data efficiently with good visual quality.
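The basic LSB replacement underlying MLSB can be sketched in NumPy. This simplified version embeds sequentially for clarity; the paper's scheme additionally scatters the bytes randomly, and the Nbpb value here (2) is just an example:

```python
import numpy as np

def embed_lsb(cover, bits, nbpb=2):
    """Hide a bit string in the nbpb least-significant bits of each byte."""
    flat = cover.flatten().astype(np.uint8)
    keep = 0xFF ^ ((1 << nbpb) - 1)            # mask that clears the low bits
    for i in range(0, len(bits), nbpb):
        chunk = bits[i:i + nbpb].ljust(nbpb, "0")
        flat[i // nbpb] = (flat[i // nbpb] & keep) | int(chunk, 2)
    return flat.reshape(cover.shape)

def extract_lsb(stego, nbits, nbpb=2):
    """Read the low nbpb bits of each byte back into a bit string."""
    low = (1 << nbpb) - 1
    out = "".join(format(b & low, f"0{nbpb}b") for b in stego.flatten())
    return out[:nbits]

cover = np.full((4, 4), 200, dtype=np.uint8)   # toy single-channel cover
secret = "101100111"
stego = embed_lsb(cover, secret, nbpb=2)
recovered = extract_lsb(stego, len(secret))
```

With nbpb = 2 each byte changes by at most 3 out of 255, which is why LSB embedding is visually imperceptible; raising Nbpb trades imperceptibility for capacity.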


Morphologia ◽  
2021 ◽  
Vol 15 (3) ◽  
pp. 196-206
Author(s):  
N.I. Maryenko ◽  
O.Yu. Stepanenko

Background. Fractal analysis is an informative and objective method of mathematical analysis that can complement existing morphometry methods and provides a comprehensive quantitative assessment of the spatial configuration of irregular anatomical structures. Objective: a comparative analysis of the fractal analysis methods used for morphometry in biomedical research. Methods. A comprehensive analysis of morphological studies based on fractal analysis. Results. Different types of medical images, with different preprocessing algorithms, can be used for fractal analysis. The parameter determined by fractal analysis is the fractal dimension, a measure of the complexity of the spatial configuration and of the degree to which a geometric object fills space. The best-known methods of fractal analysis are box counting, caliper, pixel dilation, "mass-radius", cumulative intersection, and grid intercept. The box counting method and its modifications are the most commonly used because of their simplicity and versatility. The different methods of fractal analysis share a similar principle: fractal measures (various geometric figures) of a certain size completely cover the structure in the image, the size of the fractal measure is changed iteratively, and the minimum number of fractal measures covering the structure is counted. The methods differ in the type of fractal measure, which can be a linear segment, a square of a fractal grid, a cube, a circle, a sphere, etc. Conclusion. The choice of fractal analysis method and image preprocessing depends on the studied structure, the features of its spatial configuration, the type of image used for the analysis, and the aim of the study.
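The box counting principle described above (cover the structure with boxes of varying size, count the minimum number needed, and fit the log-log relationship) can be sketched in NumPy. The box sizes and the test image are illustrative choices:

```python
import numpy as np

def box_count_dimension(binary, sizes=(1, 2, 4, 8, 16)):
    """Estimate the fractal dimension of a binary structure: for each box
    side s, count boxes that contain any part of the structure, then fit
    the slope of log(count) against log(1/s)."""
    h, w = binary.shape
    counts = []
    for s in sizes:
        covered = 0
        for i in range(0, h, s):
            for j in range(0, w, s):
                if binary[i:i + s, j:j + s].any():
                    covered += 1
        counts.append(covered)
    slope, _ = np.polyfit(np.log(1 / np.array(sizes)), np.log(counts), 1)
    return slope

# sanity check: a filled square is a 2D object, so its dimension should be 2
square = np.ones((64, 64), dtype=bool)
d = box_count_dimension(square)
print(round(d, 2))  # 2.0
```

For an anatomical contour the estimate falls between 1 and 2, reflecting how densely the irregular boundary fills the plane.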

