Estimation of Ultrasound Echogenicity Map from B-Mode Images Using Convolutional Neural Network

Sensors ◽  
2020 ◽  
Vol 20 (17) ◽  
pp. 4931
Author(s):  
Che-Chou Shen ◽  
Jui-En Yang

In ultrasound B-mode imaging, speckle noise decreases the accuracy with which the tissue echogenicity of imaged targets can be estimated from the amplitude of the echo signals. In addition, since the granular size of the speckle pattern is governed by the point spread function (PSF) of the imaging system, the resolution of the B-mode image remains limited and the boundaries of tissue structures often become blurred. This study proposes a convolutional neural network (CNN) that removes speckle noise and improves spatial resolution to reconstruct an ultrasound tissue echogenicity map. The CNN model is trained on an in silico simulation dataset and tested on experimentally acquired images. Results indicate that the proposed CNN method effectively eliminates speckle noise in the background of B-mode images while retaining the contours and edges of tissue structures. The contrast and contrast-to-noise ratio of the reconstructed echogenicity map increased from 0.22/2.72 to 0.33/44.14, and the lateral and axial resolutions improved from 5.9/2.4 to 2.9/2.0, respectively. Compared with other post-processing filtering methods, the proposed CNN method better approximates the original tissue echogenicity by completely removing speckle noise and improving image resolution, while remaining amenable to real-time implementation.
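As a rough illustration of the contrast and contrast-to-noise ratio (CNR) figures quoted above, the following Python sketch computes one common definition of both metrics from a target region and a background region of an echogenicity image; the exact definitions used in the paper may differ, and the function name and arguments are illustrative.

```python
import numpy as np

def contrast_and_cnr(image, target_mask, background_mask):
    """One common definition of contrast and contrast-to-noise ratio
    between a target region and a background region of an image.
    Masks are boolean arrays of the same shape as `image`."""
    t = image[target_mask]
    b = image[background_mask]
    contrast = abs(t.mean() - b.mean()) / b.mean()
    cnr = abs(t.mean() - b.mean()) / np.sqrt(t.var() + b.var())
    return contrast, cnr
```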

2020 ◽  
Vol 12 (17) ◽  
pp. 2811
Author(s):  
Yongpeng Dai ◽  
Tian Jin ◽  
Yongkun Song ◽  
Shilong Sun ◽  
Chen Wu

Radar images suffer from sidelobes. Several sidelobe-suppression methods, including one based on a convolutional neural network (CNN), have been proposed. However, the point spread function (PSF) in radar images is sometimes spatially variant, which degrades the performance of a conventional CNN. We propose a spatial-variant convolutional neural network (SV-CNN) to address this problem; it should also perform well in other settings with spatially variant features. The convolutional kernels of a CNN detect motifs with distinctive features and are invariant to the local position of those motifs. This makes CNNs widely used in image-processing tasks such as image recognition, handwriting recognition, image super-resolution, and semantic segmentation, and they also perform well in radar image enhancement. However, this position invariance can be detrimental to radar image enhancement when the features of the motifs (the point spread function, in radar imaging terms) vary with position. In this paper, we propose an SV-CNN with spatial-variant convolution kernels (SV-CK) and illustrate its function on the task of enhancing radar images. After being trained on radar images with position-codings as samples, the SV-CNN can enhance radar images. Because the SV-CNN reads the local-position information contained in the position-coding, it outperforms a conventional CNN. The advantage of the proposed SV-CNN is demonstrated using both simulated and real radar images.
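The position-coding idea can be sketched in a few lines: append normalized coordinate channels to the input so that the convolution kernels can condition on location, in the spirit of CoordConv. This is a hedged stand-in, not the paper's exact SV-CK formulation; the class name and layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class PositionCodedConv(nn.Module):
    """Convolution preceded by concatenation of normalized x/y
    position-coding channels, so kernels can condition on location
    (a CoordConv-style approximation of the SV-CK idea)."""
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.conv = nn.Conv2d(in_ch + 2, out_ch, k, padding=k // 2)

    def forward(self, x):
        b, _, h, w = x.shape
        # Coordinate grids in [-1, 1], broadcast to the batch.
        ys = torch.linspace(-1, 1, h, device=x.device).view(1, 1, h, 1).expand(b, 1, h, w)
        xs = torch.linspace(-1, 1, w, device=x.device).view(1, 1, 1, w).expand(b, 1, h, w)
        return self.conv(torch.cat([x, ys, xs], dim=1))
```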


2021 ◽  
Vol 11 (9) ◽  
pp. 4292
Author(s):  
Mónica Y. Moreno-Revelo ◽  
Lorena Guachi-Guachi ◽  
Juan Bernardo Gómez-Mendoza ◽  
Javier Revelo-Fuelagán ◽  
Diego H. Peluffo-Ordóñez

Automatic crop identification and monitoring is a key element in enhancing food production processes and diminishing the related environmental impact. Although several efficient deep learning techniques have emerged in the field of multispectral imagery analysis, the crop classification problem still needs more accurate solutions. This work introduces a competitive methodology for crop classification from multispectral satellite imagery, built mainly on an enhanced 2D convolutional neural network (2D-CNN) with a smaller-scale architecture, together with a novel post-processing step. The proposed methodology comprises four steps: image stacking, patch extraction, classification model design (based on a 2D-CNN architecture), and post-processing. First, the images are stacked to increase the number of features. Second, the input images are split into patches that are fed into the 2D-CNN model. Then, the 2D-CNN model is constructed within a small-scale framework and trained to recognize 10 different types of crops. Finally, a post-processing step reduces the classification error caused by lower-spatial-resolution images. Experiments were carried out on the Campo Verde database, a set of satellite images captured by the Landsat and Sentinel satellites over the municipality of Campo Verde, Brazil. Compared with the maximum accuracy values reached by notable works in the literature (an overall accuracy of about 81%, an F1 score of 75.89%, and an average accuracy of 73.35%), the proposed methodology achieves a competitive overall accuracy of 81.20%, an F1 score of 75.89%, and an average accuracy of 88.72% when classifying 10 different crops, while ensuring an adequate trade-off between the number of multiply-accumulate operations (MACs) and accuracy. Furthermore, given its ability to effectively classify patches from two image sequences, this methodology may prove appealing for other real-world applications, such as the classification of urban materials.
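A minimal sketch of the patch-extraction step, assuming the stacked multispectral image is a NumPy array of shape (H, W, C); the patch size and stride here are illustrative, not the paper's actual settings.

```python
import numpy as np

def extract_patches(stack, patch=16, stride=16):
    """Split a stacked multispectral image (H, W, C) into square
    patches suitable as inputs to a 2D-CNN classifier."""
    h, w, _ = stack.shape
    patches = []
    for i in range(0, h - patch + 1, stride):
        for j in range(0, w - patch + 1, stride):
            patches.append(stack[i:i + patch, j:j + patch, :])
    return np.stack(patches)  # shape: (N, patch, patch, C)
```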


2022 ◽  
Vol 15 (2) ◽  
pp. 027001
Author(s):  
Yang Cui ◽  
Taiki Takamatsu ◽  
Koichi Shimizu ◽  
Takeo Miyake

Abstract For the diagnosis and treatment of eye diseases, an ideal fundus imaging system should offer portability, low cost, and high resolution. Here, we demonstrate a non-mydriatic near-infrared fundus imaging system with light illumination from an electronic contact lens (E-lens). The E-lens illuminates the retinal and choroidal structures for capturing fundus images when a voltage is applied wirelessly to the lens. We also reconstruct the images with a depth-dependent point spread function to suppress the scattering effect, which ultimately yields clear fundus images.
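The abstract only summarizes the depth-dependent reconstruction, so as a hedged sketch, a classical Wiener deconvolution applied per depth with the PSF estimated for that depth might look as follows; the noise-to-signal ratio and the per-depth PSFs are assumptions, not the paper's actual pipeline.

```python
import numpy as np

def wiener_deconvolve(image, psf, nsr=0.01):
    """Wiener deconvolution of an image with the PSF estimated for
    its depth; `nsr` is an assumed noise-to-signal ratio. PSF
    centering/shift handling is omitted for brevity."""
    H = np.fft.fft2(psf, s=image.shape)
    G = np.fft.fft2(image)
    F = np.conj(H) / (np.abs(H) ** 2 + nsr) * G
    return np.real(np.fft.ifft2(F))
```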


Nanophotonics ◽  
2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Elyas Bayati ◽  
Raphaël Pestourie ◽  
Shane Colburn ◽  
Zin Lin ◽  
Steven G. Johnson ◽  
...  

Abstract We report an inverse-designed, high numerical aperture (∼0.44), extended depth of focus (EDOF) meta-optic, which exhibits a lens-like point spread function (PSF). The EDOF meta-optic maintains a focusing efficiency comparable to that of a hyperboloid metalens throughout its depth of focus. Exploiting the extended depth of focus and computational post-processing, we demonstrate broadband imaging across the full visible spectrum using a 1 mm, f/1 meta-optic. Unlike other canonical EDOF meta-optics, characterized by phase masks such as a log-asphere or cubic function, our design exhibits a highly invariant PSF across a ∼290 nm optical bandwidth, which leads to significantly improved image quality, as quantified by structural similarity metrics.
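One way to quantify how invariant a PSF is across a band, in the spirit of the structural-similarity metrics mentioned above, is to score each per-wavelength PSF against the band-averaged PSF with SSIM; this is an assumed measure for illustration, not necessarily the paper's.

```python
import numpy as np
from skimage.metrics import structural_similarity

def psf_invariance(psfs):
    """Mean SSIM of each per-wavelength PSF (list of 2D arrays)
    against the band-averaged PSF; closer to 1 means more invariant."""
    ref = np.mean(psfs, axis=0)
    scores = [
        structural_similarity(p, ref, data_range=ref.max() - ref.min())
        for p in psfs
    ]
    return float(np.mean(scores))
```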


2020 ◽  
Author(s):  
Florian Dupuy ◽  
Olivier Mestre ◽  
Léo Pfitzner

Cloud cover is crucial information for many applications, such as planning land-observation missions from space. However, cloud cover remains a challenging variable to forecast, and Numerical Weather Prediction (NWP) models suffer from significant biases, justifying the use of statistical post-processing techniques. In our application, the ground truth is a gridded cloud cover product derived from satellite observations over Europe, and the predictors are spatial fields of various variables produced by ARPEGE (the Météo-France global NWP model) at the corresponding lead time.

In this study, ARPEGE cloud cover is post-processed using a convolutional neural network (CNN), the most popular machine learning tool for images. In our case, the CNN integrates the spatial information contained in the NWP outputs. We show that a simple U-Net architecture produces significant improvements over Europe: compared to the raw ARPEGE forecasts, the MAE drops from 25.1% to 17.8% and the RMSE decreases from 37.0% to 31.6%. Considering the specific needs of Earth observation, special attention was paid to forecasts of low cloud cover conditions (< 10%). For this nebulosity class, the hit rate jumps from 40.6 to 70.7 (the order of magnitude achievable with classical machine learning algorithms such as random forests) while the false alarm rate decreases from 38.2 to 29.9. This is an excellent result, since improving hit rates by means of random forests usually also results in a slight increase in false alarms.
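For reference, the hit rate and false alarm figures for the low-cloud-cover class can be computed as below, assuming the standard probability-of-detection and false-alarm-ratio definitions (the paper may use slightly different conventions); the threshold and field shapes are illustrative.

```python
import numpy as np

def hit_and_false_alarm(forecast, truth, threshold=10.0):
    """Hit rate and false alarm ratio (%) for the low-cloud-cover
    class (< threshold %), from gridded forecast/observation fields."""
    pred = forecast < threshold
    obs = truth < threshold
    hits = np.sum(pred & obs)
    misses = np.sum(~pred & obs)
    false = np.sum(pred & ~obs)
    hit_rate = 100.0 * hits / (hits + misses)
    far = 100.0 * false / (hits + false)
    return hit_rate, far
```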


2013 ◽  
Vol 33 (4) ◽  
pp. 0411002
Author(s):  
周红仙 Zhou Hongxian ◽  
周有平 Zhou Youping ◽  
王毅 Wang Yi
