image area
Recently Published Documents


TOTAL DOCUMENTS: 186 (FIVE YEARS: 40)
H-INDEX: 10 (FIVE YEARS: 2)

2022 ◽  
pp. 44-79
Author(s):  
Deepti Deepak Nikumbh ◽  
Shahzia Sayyad ◽  
Rupesh R Joshi ◽  
Karan Sanjeev Dubey ◽  
Deep V. Mehta ◽  
...  

Medical imaging encompasses the techniques and processes used to create visual representations of internal parts of the human body for diagnostic and treatment purposes within digital health. Machine learning plays a crucial role in the medical imaging field, including the analysis of various medical images, computer-aided diagnosis and detection, image retrieval, gene data analysis, image reconstruction, and organ segmentation. A machine learning framework identifies the best combination of image features for categorizing medical images or for computing a given metric over the image area. The acquired images are then processed using algorithms such as K-means, support vector machines, decision trees, neural networks, and deep learning techniques.
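As a concrete illustration of one of the listed algorithms (not taken from the article itself), a minimal K-means intensity segmentation in Python/NumPy might look like the sketch below; the function name `kmeans_segment` and the toy image are invented for the example.

```python
import numpy as np

def kmeans_segment(image, k=3, iters=20, seed=0):
    """Cluster pixel intensities into k groups and return a label map."""
    rng = np.random.default_rng(seed)
    pixels = image.reshape(-1, 1).astype(float)
    # Initialize centroids by sampling k pixel values at random.
    centroids = pixels[rng.choice(len(pixels), size=k, replace=False)]
    for _ in range(iters):
        # Assign each pixel to its nearest centroid.
        dists = np.abs(pixels - centroids.T)          # shape (n_pixels, k)
        labels = dists.argmin(axis=1)
        # Recompute each centroid as the mean of its assigned pixels.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = pixels[labels == j].mean()
    return labels.reshape(image.shape)

# Toy "scan": three intensity regions with a little noise.
rng = np.random.default_rng(1)
img = np.concatenate([rng.normal(m, 2.0, (20, 20)) for m in (10, 100, 200)], axis=0)
seg = kmeans_segment(img, k=3)
print(np.unique(seg))
```

Real medical pipelines would cluster richer feature vectors (texture, multi-channel intensities) rather than raw grayscale values, but the assignment/update loop is the same.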


2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Tao Wang ◽  
Yizhu Chen ◽  
Hangxiang Du ◽  
Yongan Liu ◽  
Lidi Zhang ◽  
...  

This study explored the application value of transcranial Doppler (TCD) based on an artificial intelligence algorithm in monitoring neuroendocrine changes in patients with severe head injury in the acute phase. Eighty patients with severe brain injury were enrolled and randomly divided into a control group (conventional TCD) and an experimental group (algorithm-optimized TCD), with 40 patients in each group. An artificial intelligence neighborhood segmentation algorithm for TCD images was designed, and its application value was evaluated comprehensively by measuring its image-area segmentation error and running time. In addition, the Glasgow coma scale (GCS) and neuroendocrine hormone levels were used to assess the patients' neuroendocrine status. The results showed that the running time of the artificial intelligence neighborhood segmentation algorithm for TCD was 3.14 ± 1.02 s, significantly shorter than the 32.23 ± 9.56 s of the traditional convolutional neural network (CNN) algorithm (P < 0.05). The false rejection rate (FRR) of TCD image-area segmentation was significantly reduced, while the false acceptance rate (FAR) and true acceptance rate (TAR) were significantly increased (P < 0.05). The consistency rate between the GCS score and the Doppler ultrasound imaging diagnosis in the experimental group was 93.8%, significantly higher than the 80.3% in the control group (P < 0.05). The consistency rate of the Doppler ultrasound imaging diagnosis for patients with abnormal levels of follicle stimulating hormone (FSH), prolactin (PRL), growth hormone (GH), adrenocorticotropic hormone (ACTH), and thyroid stimulating hormone (TSH) was also significantly higher in the experimental group than in the control group (P < 0.05).
In summary, the artificial intelligence neighborhood segmentation algorithm can significantly shorten the processing time of the TCD image and reduce the segmentation error of the image area, which significantly improves the monitoring level of TCD for patients with severe craniocerebral injury and has good clinical application value.
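The abstract does not describe how the neighborhood segmentation algorithm works internally. As a generic sketch of neighborhood-based segmentation (illustrative only, not the authors' method — the seed point, tolerance, and toy image are invented), a simple region-growing pass over an image could look as follows.

```python
from collections import deque
import numpy as np

def region_grow(image, seed, tol=10.0):
    """Grow a region from `seed`, adding 4-neighbours whose intensity
    stays within `tol` of the running region mean."""
    h, w = image.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    total, count = float(image[seed]), 1
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                if abs(image[ny, nx] - total / count) <= tol:
                    mask[ny, nx] = True
                    total += float(image[ny, nx])
                    count += 1
                    queue.append((ny, nx))
    return mask

# Bright square on a dark background stands in for a vessel cross-section.
img = np.zeros((16, 16))
img[4:10, 4:10] = 100.0
mask = region_grow(img, seed=(6, 6), tol=10.0)
print(mask.sum())  # 36 pixels: the 6x6 bright square
```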


Sensors ◽  
2021 ◽  
Vol 21 (23) ◽  
pp. 7790
Author(s):  
Hang Chen ◽  
Weiguo Zhang ◽  
Danghui Yan

Recently, the Siamese architecture has been widely used in the field of visual tracking and has achieved great success. Most Siamese-network-based trackers aggregate the target information of the two branches by cross-correlation. However, since the locations of the sampling points in the search feature area are pre-fixed in the cross-correlation operation, these trackers suffer from either background noise or missing foreground information. Moreover, cross-correlation between the template and the search area neglects the geometric information of the target. In this paper, we propose a Siamese deformable cross-correlation network to model the geometric structure of the target and improve visual tracking performance. We propose to learn an offset field end-to-end in cross-correlation. With the guidance of the offset field, the sampling in the search image area can adapt to the deformation of the target, realizing the modeling of its geometric structure. We further propose an online classification sub-network to model the variation of the target's appearance and enhance the robustness of the tracker. Extensive experiments are conducted on four challenging benchmarks: OTB2015, VOT2018, VOT2019, and UAV123. The results demonstrate that our tracker achieves state-of-the-art performance.
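A toy NumPy sketch can illustrate the core idea of offset-guided (deformable) cross-correlation: each template sampling point is displaced by its entry in an offset field before correlating with the search area. This is a deliberate simplification of the paper's network — the offsets here are hand-set rather than learned end-to-end, and all names are invented for the example.

```python
import numpy as np

def bilinear(feat, y, x):
    """Bilinearly sample a 2-D feature map at a fractional location."""
    h, w = feat.shape
    y = np.clip(y, 0, h - 1)
    x = np.clip(x, 0, w - 1)
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
    dy, dx = y - y0, x - x0
    return (feat[y0, x0] * (1 - dy) * (1 - dx) + feat[y0, x1] * (1 - dy) * dx
            + feat[y1, x0] * dy * (1 - dx) + feat[y1, x1] * dy * dx)

def deformable_xcorr(search, template, offsets):
    """Correlate `template` with `search` at one location, displacing each
    template sampling point by its (dy, dx) entry in `offsets`."""
    kh, kw = template.shape
    resp = 0.0
    for i in range(kh):
        for j in range(kw):
            dy, dx = offsets[i, j]
            resp += template[i, j] * bilinear(search, i + dy, j + dx)
    return resp

template = np.ones((3, 3))
search = np.zeros((5, 5))
search[1:4, 1:4] = 1.0  # target displaced by (1, 1) in the search area
rigid = deformable_xcorr(search, template, np.zeros((3, 3, 2)))
shifted = deformable_xcorr(search, template, np.full((3, 3, 2), 1.0))
print(rigid, shifted)  # 4.0 9.0 — offsets recover the full response
```

With zero offsets the fixed sampling grid only partially overlaps the displaced target; a suitable offset field realigns every sampling point onto the target, which is exactly the degree of freedom the learned offsets provide.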


2021 ◽  
Author(s):  
Xinli Wu ◽  
Jiali Luo ◽  
Minxiong Zhang ◽  
Wenzhen Yang

Abstract Bas-relief, a form of sculptural representation, has the general characteristics of sculpture and satisfies people's visual and tactile senses by fully leveraging the advantages of painting in composition, subject matter, and spatial processing. Image-based bas-relief modeling methods are generally classified into those based on a three-dimensional (3D) model, those based on image depth restoration, and those based on multiple images. The 3D model method requires the 3D model of the object in advance. Bas-relief modeling based on image depth restoration usually either uses a depth camera to obtain object depth information or restores the depth of each pixel from the image. Bas-relief modeling based on multiple images requires a short running time and processes high-resolution images efficiently. Our method can automatically obtain the pixel height of each area in the image and can adjust the concave–convex relationship of each image area to obtain a bas-relief model from an RGB monocular image. First, the edge contour of the RGB monocular image is extracted and refined by the difference-of-Gaussians algorithm based on tangential flow. Subsequently, the complete image contour information is extracted, and region-based image segmentation is used to calibrate the regions; this step has improved running speed and stability compared with the traditional algorithm. Second, the regions of the RGB monocular image are divided by an improved connected-component labeling algorithm. In the traditional region calibration algorithm, the contour search strategy and the rules defining inner and outer contours result in low region-division efficiency, so this study uses an improved contour-based calibration algorithm. Then, the 3D pixel point cloud of each region is calculated by the shape-from-shading algorithm.
The concave–convex relationships among these regions can be adjusted through human–computer interaction to form a reasonable bas-relief model. Lastly, the bas-relief model is obtained through triangular reconstruction using the Delaunay triangulation algorithm, and the final modeling result is displayed with OpenGL. In this study, six groups of images are selected for region-division tests, and the results obtained by the proposed method are compared with those of existing methods. The proposed algorithm reduces the image processing running time at different complexity levels compared with the traditional two-pass scanning and seed filling methods (by approximately 2 s) and with the contour tracking method (by approximately 4 s). Next, image depth recovery experiments are conducted on four sets of images, namely the "treasure seal," "Wen Emperor seal," "lily pattern," and "peacock pattern," and the results are compared. The depth of the image obtained by the traditional algorithm is generally lower than the actual plane, and the relative height of each region is inconsistent with the actual situation. The proposed algorithm yields height values consistent with the height information judged from the original image and adjusts the concave–convex relationships accurately. Moreover, noise in the image is reduced, and relatively smooth surfaces with accurate concave–convex relationships are obtained. The proposed bas-relief modeling method based on RGB monocular images can automatically determine the pixel height of each image area and adjust its concave–convex relationship. In addition, it can recover the 3D model of the object from the image, enrich the objects amenable to bas-relief modeling, and expand the creative space of bas-relief, thereby improving the production efficiency of bas-relief models based on RGB monocular images. The method has certain shortcomings, which require further exploration.
For example, during image contour extraction for region division, small differences exist between the obtained result and the actual situation, which can in turn affect image depth recovery in the later stage. In addition, partial distortion may occur during 3D reconstruction, which requires further research on point cloud data processing to reconstruct a high-quality 3D surface.
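As a hedged sketch of the region-division step, basic 4-connected component labeling by flood fill — the baseline that two-pass scanning and the contour-based method improve upon — might be written as follows (function names and the toy mask are invented for the example).

```python
from collections import deque
import numpy as np

def label_regions(binary):
    """Label 4-connected foreground regions with integers 1, 2, ..."""
    h, w = binary.shape
    labels = np.zeros((h, w), dtype=int)
    current = 0
    for sy in range(h):
        for sx in range(w):
            if binary[sy, sx] and labels[sy, sx] == 0:
                current += 1                     # start a new region
                labels[sy, sx] = current
                queue = deque([(sy, sx)])
                while queue:                     # flood-fill its pixels
                    y, x = queue.popleft()
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = current
                            queue.append((ny, nx))
    return labels, current

img = np.zeros((8, 8), dtype=bool)
img[1:3, 1:3] = True   # region 1
img[5:7, 4:7] = True   # region 2
labels, n = label_regions(img)
print(n)  # 2
```

Two-pass scanning replaces the flood fill with a forward scan plus label-equivalence merging, and contour-based calibration labels regions while tracing their inner and outer contours; both produce the same label map as this baseline.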


2021 ◽  
Vol 2066 (1) ◽  
pp. 012086
Author(s):  
Yongyi Cui ◽  
Fang Qu

Abstract Video image-based fire detection technology can overcome some shortcomings of traditional fire detection and has good development prospects. This paper summarizes the basic principles of image-based fire detection and analyzes the main features of fire combustion images. Based on these features, the interframe difference method and the watershed algorithm are first used to extract suspected fire areas in the image. Then, features of the flame image in the early fire stage, such as the growing flame area, fluttering edges, irregular shape, and flame color, are used as fire-recognition criteria. Various image processing techniques and algorithms extract these four main fire features in order to eliminate sources of interference and further determine whether a fire has occurred. Finally, a variety of fuels were selected to simulate fires under different indoor conditions, and the experiments were recorded on video. Fire recognition experiments were carried out using the experimental videos and some videos found on the Internet. The experimental results show that both the extraction and the further recognition of suspected fire areas are effective. However, the experimental simulation environment was relatively simple, and many theoretical and practical problems remain to be studied and solved.
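A minimal sketch of the interframe difference step is shown below (illustrative only; the threshold value and the toy frames are invented, and a real pipeline would follow this with the watershed refinement and the four flame-feature criteria).

```python
import numpy as np

def frame_difference(prev, curr, thresh=25):
    """Mark pixels whose intensity changed by more than `thresh`
    between consecutive frames — candidate moving (flame) areas."""
    diff = np.abs(curr.astype(int) - prev.astype(int))
    return diff > thresh

prev = np.full((10, 10), 30, dtype=np.uint8)   # static background frame
curr = prev.copy()
curr[2:5, 2:5] = 200                           # a bright, flickering patch appears
mask = frame_difference(prev, curr)
print(mask.sum())  # 9 changed pixels
```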


Author(s):  
Jingyi Shen ◽  
Yun Yao ◽  
Hao Mei

Copy-paste tampering, a common type of digital image forgery, copies part of an image and pastes it into another area of the same image to generate a forged image, enabling malicious operations such as fraud and framing. Such forgery undermines the security of digital images, so research on copy-paste forensics has both theoretical significance and practical value. For copy-paste tampering of digital images, this paper presents a detection algorithm based on moment invariants and uses Matlab to implement the corresponding tampering forensics system.
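A hedged sketch of the moment-invariant idea, in Python rather than the paper's Matlab: image blocks related by copy-paste (even with rotation) share near-identical Hu moment invariants, so matching invariant vectors across blocks flags duplicated regions. The helper names are invented for the example, and only the first two of the seven Hu invariants are computed.

```python
import numpy as np

def hu_invariants(block):
    """First two Hu moment invariants of an image block: translation-,
    scale-, and rotation-invariant shape descriptors."""
    h, w = block.shape
    y, x = np.mgrid[0:h, 0:w].astype(float)
    m00 = block.sum()
    if m00 == 0:
        return np.array([0.0, 0.0])
    cy, cx = (y * block).sum() / m00, (x * block).sum() / m00

    def mu(p, q):           # central moment of order (p, q)
        return (((y - cy) ** p) * ((x - cx) ** q) * block).sum()

    def eta(p, q):          # normalized central moment
        return mu(p, q) / m00 ** (1 + (p + q) / 2.0)

    phi1 = eta(2, 0) + eta(0, 2)
    phi2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return np.array([phi1, phi2])

# A copied block and its pasted duplicate give identical invariants.
rng = np.random.default_rng(0)
block = rng.random((16, 16))
dist = np.linalg.norm(hu_invariants(block) - hu_invariants(block.copy()))
print(dist)  # 0.0
```

A block-matching detector would slide a window over the image, compute such an invariant vector per block, and report pairs of distant blocks whose vectors match within a small tolerance.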


2021 ◽  
Vol 7 (2) ◽  
pp. 652-655
Author(s):  
Andreas Götz ◽  
Niels Grabow ◽  
Sabine Illner ◽  
Volkmar Senz

Abstract Electrospun nonwovens are widely applied in biomedicine and various other fields. Scanning electron microscopy (SEM) imaging is a standard practice for controlling the manufacturing process and for quality assurance. In this study, statistical datasets of 60 SEM images of three nonwoven samples were evaluated using Gaussian fits to obtain numerical results for their fiber diameter distributions. The question of how much effort is required for acceptable imaging and processing is discussed. As determined here, a minimum surface area of the nonwoven has to be evaluated for reliable statistics. The fiber diameter should be in a range of approximately 2-3% of the edge length of the square equivalent of the evaluated image area, using sufficiently magnified SEM images in which the fiber diameter is imaged over at least 30 pixels.
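A hedged sketch of the Gaussian-fit step: for a single Gaussian model, the maximum-likelihood fit reduces to the sample mean and standard deviation of the measured diameters (the simulated measurements below are invented; the paper's 60-image datasets are not reproduced). Note also the arithmetic behind the stated rule of thumb: a fiber imaged over at least 30 pixels at 2-3% of the square-equivalent edge length implies an edge length on the order of 30 / 0.025 ≈ 1200 pixels.

```python
import numpy as np

def fit_gaussian(diameters):
    """Maximum-likelihood Gaussian fit (mean, std) to measured fiber
    diameters — the numerical summary of the diameter distribution."""
    d = np.asarray(diameters, dtype=float)
    return d.mean(), d.std()

# Simulated diameter measurements from one nonwoven sample (in pixels),
# drawn from a Gaussian with mean 35 px and std 4 px.
rng = np.random.default_rng(42)
samples = rng.normal(loc=35.0, scale=4.0, size=500)
mu, sigma = fit_gaussian(samples)
print(round(mu, 1), round(sigma, 1))
```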


2021 ◽  
Vol 2021 ◽  
pp. 1-8
Author(s):  
Hui Wei ◽  
Wei Zheng

An image denoising method is proposed based on an improved Gaussian mixture model to reduce noise and enhance image quality. Unlike traditional image denoising methods, the proposed method models the pixel information in the neighborhood around each pixel in the image. The similarity between two pixels is measured by the L2 norm between the Gaussian mixture models fitted to their neighborhoods. The Gaussian mixture model captures statistical information such as the mean and variance of the pixel intensities in an image area, and the L2 norm between two such models reflects the difference in local grayscale intensity and in the richness of detail around the two pixels. In this sense, the L2 norm between Gaussian mixture models can measure the similarity between pixels more accurately. The experimental results show that the proposed method improves denoising performance while retaining the detailed information of the image.
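For single Gaussians — the one-component case of the mixtures used in the paper — the L2 distance between two densities has a closed form via Gaussian product integrals. The NumPy sketch below is an illustration of that special case, not the paper's full mixture computation; all names and the example statistics are invented.

```python
import numpy as np

def gauss_overlap(m1, s1, m2, s2):
    """Closed-form integral of N(x; m1, s1^2) * N(x; m2, s2^2) over x."""
    v = s1 ** 2 + s2 ** 2
    return np.exp(-(m1 - m2) ** 2 / (2 * v)) / np.sqrt(2 * np.pi * v)

def l2_gauss(m1, s1, m2, s2):
    """L2 distance between two 1-D Gaussian densities:
    sqrt( int f^2 - 2 int f*g + int g^2 )."""
    sq = (gauss_overlap(m1, s1, m1, s1)
          - 2 * gauss_overlap(m1, s1, m2, s2)
          + gauss_overlap(m2, s2, m2, s2))
    return np.sqrt(max(sq, 0.0))

# Similar local statistics -> small distance; different -> large distance.
near = l2_gauss(120.0, 10.0, 122.0, 10.0)   # two smooth, similar patches
far = l2_gauss(120.0, 10.0, 200.0, 10.0)    # patch vs. a much brighter area
print(near < far)  # True
```

For full mixtures, the same overlap formula is summed over all component pairs with the mixture weights, so the single-Gaussian case above is the building block of the paper's similarity measure.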


2021 ◽  
Vol 3 (1) ◽  
pp. 43-54
Author(s):  
Gabor Hollo

Background: In ophthalmology, thickness and vessel density (VD) measurements for the 6 x 6 mm inner macular retinal area have received increasing attention in glaucomatous progression research. For this area, the Angiovue optical coherence tomography system introduced a 304 x 304 A/B scans function (classic Angio Retina scan) in 2014, and a 400 x 400 A/B scans function (high-definition [HD] Angio Retina scan) in 2017. These scan types cannot be used in combination in the software provided for progression analysis.
Purpose: Since losing data for 3 years may negatively influence progression analysis, we investigated whether clinically significant differences exist between consecutive measurements acquired with these scan types on the same eyes.
Methods: As part of our noninterventional prospective glaucoma imaging study, primary open-angle glaucoma patients (POAG group), and ocular hypertensive and healthy control participants (structurally undamaged group), were imaged using both the classic and the HD Angio Retina scans without changing the patients' position. High-quality images were obtained on 12 POAG eyes of 12 consecutive POAG patients, and on 10 healthy and ocular hypertensive eyes of 10 consecutive participants, before data collection had to be suspended due to the new coronavirus epidemic.
Results: For the Early Treatment Diabetic Retinopathy Study image area, the mean difference (classic minus HD value) in the structurally normal group was 0.02 ± 0.37 μm for inner retinal thickness (P = 0.869) and 0.33 ± 1.33% for superficial capillary VD (P = 0.452) (between-methods difference: ≤ 0.8% of the respective normal value). In the POAG group, the corresponding figures were -0.07 ± 1.22 μm for inner retinal thickness (P = 0.854; between-methods difference: 0.6% of the normal value) and 1.12 ± 2.58% for superficial capillary VD (P = 0.158; 2.3% of the normal value).
Conclusion: Our results suggest that, for progression analysis, the combined use of thickness and VD values for structurally normal eyes, and of thickness values for POAG eyes, derived from classic and HD scans is reasonable, since the differences between the corresponding values are small. However, combining the corresponding VD parameters for POAG eyes is useful only when the follow-up time before the scan-type change is long enough to counterbalance the effect of the change on the result.

