Hybrid Machine Learning Approach to Detect the Changes in SAR Images for Salvation of Spectral Constriction Problem

2021 ◽  
Vol 3 (2) ◽  
pp. 118-130
Author(s):  
Dhaya R

Spectral limitations in remotely sensed images remain an unresolved challenge for change detection in the image processing domain. Recently, many algorithms have been developed to handle spectral, spatial, and temporal constraints when detecting digital change in synthetic aperture radar (SAR) images. Unsupervised methods detect changes between two digital images of the same scene acquired at different times. Many algorithms identify changes using a similarity-index-based approach, and therefore fail to detect genuine changes in the presence of recurring spectral effects; this has motivated further research into suppressing spectral effects in SAR images. This article argues that an unsupervised learning approach can resolve these spectral issues and correct the affected scene. A convolutional neural network (CNN) extracts the image features, and classification is performed by an SVM classifier to detect the changes in the remote sensing images. This fusion-type algorithm detects the relevant changes between temporally separated images with better accuracy. During feature extraction, a semantic segmentation procedure extracts flattened image features, which successfully suppresses the spectral problem in the image. The CNN generates feature-map information and is trained on the various spectral images in the dataset. The proposed hybrid technique is an unsupervised method that segments, trains on, and classifies the given input images using a pre-trained semantic segmentation approach, and it demonstrates a high level of accuracy in identifying the changes in images.
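The feature-then-classify stage described above can be sketched as follows; the feature vectors here are randomly generated stand-ins for CNN activations, and the two-class setup (changed vs. unchanged pixels) is an assumption for illustration, not the paper's trained pipeline:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Stand-in for CNN feature maps: each pixel pair from the two acquisition
# dates is summarized by a flattened feature vector.
n_pixels, n_features = 200, 16
unchanged = rng.normal(0.0, 1.0, (n_pixels, n_features))
changed = rng.normal(3.0, 1.0, (n_pixels, n_features))  # shifted: real change

X = np.vstack([unchanged, changed])
y = np.array([0] * n_pixels + [1] * n_pixels)  # 0 = no change, 1 = change

# SVM classifier on the extracted features, as in the described pipeline.
clf = SVC(kernel="rbf").fit(X, y)
change_map = clf.predict(X)
acc = (change_map == y).mean()
print(f"training accuracy: {acc:.2f}")
```

In the actual method the feature extractor would be a trained CNN with semantic segmentation, not random vectors; the point here is only the division of labor between feature extraction and the SVM decision.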

2021 ◽  
Vol 13 (24) ◽  
pp. 5121
Author(s):  
Yu Zhou ◽  
Yi Li ◽  
Weitong Xie ◽  
Lu Li

Convolutional neural networks (CNNs) are widely applied to synthetic aperture radar (SAR) automatic target recognition (ATR). However, most CNN-based SAR ATR methods rely mainly on the image features of SAR images and make little use of their unique electromagnetic scattering characteristics. Attributed scattering centers (ASCs) reflect the electromagnetic scattering characteristics and the local structures of the target, which are useful for SAR ATR. We therefore propose a network that comprehensively uses image features and ASC-related features to improve SAR ATR performance. The proposed network has two branches: one extracts the more discriminative image features from the input SAR image; the other extracts physically meaningful features from the ASC schematic map, which reflects the local structure of the target corresponding to each ASC. Finally, the high-level features obtained by the two branches are fused to recognize the target. Experimental results on the Moving and Stationary Target Acquisition and Recognition (MSTAR) dataset demonstrate the capability of the SAR ATR method proposed in this letter.
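A toy illustration of the two-branch late-fusion idea: each branch reduces its input to a feature vector, and the vectors are concatenated into one descriptor for the final classifier. The channel counts and pooling functions below are assumptions, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(1)

def image_branch(x):
    # Stand-in for the image-feature branch: global average pooling.
    return x.mean(axis=(1, 2))

def asc_branch(x):
    # Stand-in for the ASC-map branch: max response per channel.
    return x.max(axis=(1, 2))

sar_chip = rng.normal(size=(8, 32, 32))  # 8-channel feature maps (assumed)
asc_map = rng.normal(size=(8, 32, 32))   # features from the ASC schematic map

# Late fusion: one descriptor per target chip, fed to the recognizer.
fused = np.concatenate([image_branch(sar_chip), asc_branch(asc_map)])
print(fused.shape)
```

The real branches are trained convolutional stacks; the sketch shows only where the fusion happens relative to the two feature extractors.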


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Matthew D. Guay ◽  
Zeyad A. S. Emam ◽  
Adam B. Anderson ◽  
Maria A. Aronova ◽  
Irina D. Pokrovskaya ◽  
...  

Abstract Biologists who use electron microscopy (EM) images to build nanoscale 3D models of whole cells and their organelles have historically been limited to small numbers of cells and cellular features due to constraints in imaging and analysis. This has been a major factor limiting insight into the complex variability of cellular environments. Modern EM can produce gigavoxel image volumes containing large numbers of cells, but accurate manual segmentation of image features is slow and limits the creation of cell models. Segmentation algorithms based on convolutional neural networks can process large volumes quickly, but meeting the accuracy goals of EM tasks often challenges current techniques. Here, we define dense cellular segmentation as a multiclass semantic segmentation task for modeling cells and large numbers of their organelles, and give an example in human blood platelets. We present an algorithm using novel hybrid 2D–3D segmentation networks to produce dense cellular segmentations with accuracy levels that outperform baseline methods and approach those of human annotators. To our knowledge, this work represents the first published approach to automating the creation of cell models with this level of structural detail.
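The hybrid 2D–3D notion can be caricatured in a few lines: a 2D stage filters each slice of the volume independently, then a 3D stage aggregates across neighbouring slices. The kernel, shapes, and aggregation rule below are arbitrary assumptions, not the paper's networks:

```python
import numpy as np

rng = np.random.default_rng(6)
volume = rng.normal(size=(5, 16, 16))  # (slices, height, width) EM stack

def conv2d_per_slice(vol, k):
    """Apply one 2D kernel independently to every slice (the 2D stage)."""
    s, h, w = vol.shape
    out = np.zeros((s, h - 2, w - 2))
    for z in range(s):
        for i in range(h - 2):
            for j in range(w - 2):
                out[z, i, j] = (vol[z, i:i + 3, j:j + 3] * k).sum()
    return out

k2d = np.full((3, 3), 1 / 9)  # smoothing kernel (assumption)
features_2d = conv2d_per_slice(volume, k2d)

# 3D stage: aggregate across neighbouring slices (depth-wise average),
# so each output voxel sees in-plane and through-plane context.
features_3d = (features_2d[:-2] + features_2d[1:-1] + features_2d[2:]) / 3
print(features_3d.shape)
```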


Author(s):  
Amrita Naik ◽  
Damodar Reddy Edla

Lung cancer is the most common cancer worldwide, and identification of malignant tumors at an early stage is needed for diagnosis and treatment, thus avoiding progression to a later stage. In recent times, deep learning architectures such as CNNs have shown promising results in effectively identifying malignant tumors in CT scans. In this paper, we combine CNN features with texture features, such as Haralick and gray-level run length matrix features, to gain the benefits of both the high-level and spatial features extracted from lung nodules and improve classification accuracy. These combined features are classified using an SVM classifier instead of a softmax classifier in order to reduce overfitting. Our model was validated on the LUNA dataset and achieved an accuracy of 93.53%, sensitivity of 86.62%, specificity of 96.55%, and positive predictive value of 94.02%.
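A simplified stand-in for the texture-feature stage: a horizontal gray-level co-occurrence matrix with two Haralick-style statistics (contrast and homogeneity). In the described pipeline such features would be concatenated with CNN features and fed to the SVM; the quantization level and synthetic patches below are assumptions:

```python
import numpy as np

def glcm_features(img, levels=8):
    """Contrast and homogeneity from a horizontal co-occurrence matrix,
    a simplified stand-in for the Haralick/GLRLM features in the text."""
    q = np.floor(img * levels).clip(0, levels - 1).astype(int)  # quantize
    glcm = np.zeros((levels, levels))
    for i, j in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[i, j] += 1          # count horizontal neighbour pairs
    glcm /= glcm.sum()
    ii, jj = np.indices(glcm.shape)
    contrast = (glcm * (ii - jj) ** 2).sum()
    homogeneity = (glcm / (1.0 + np.abs(ii - jj))).sum()
    return np.array([contrast, homogeneity])

rng = np.random.default_rng(2)
smooth = rng.normal(0.5, 0.02, (32, 32)).clip(0, 1)  # low-texture patch
rough = rng.uniform(0.0, 1.0, (32, 32))              # high-texture patch

f_smooth, f_rough = glcm_features(smooth), glcm_features(rough)
print(f_smooth[0] < f_rough[0])  # rough patch has higher contrast
```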


2019 ◽  
Vol 45 (10) ◽  
pp. 3193-3201 ◽  
Author(s):  
Yajuan Li ◽  
Xialing Huang ◽  
Yuwei Xia ◽  
Liling Long

Abstract Purpose To explore the value of CT-enhanced quantitative features combined with machine learning for the differential diagnosis of renal chromophobe cell carcinoma (chRCC) and renal oncocytoma (RO). Methods Sixty-one cases of renal tumors (chRCC = 44; RO = 17) pathologically confirmed at our hospital between 2008 and 2018 were retrospectively analyzed. All patients had undergone preoperative enhanced CT scans including the corticomedullary (CMP), nephrographic (NP), and excretory (EP) phases of contrast enhancement. Volumes of interest (VOIs) covering the lesions were manually delineated on the images using the RadCloud platform. A LASSO regression algorithm was used to screen the image features extracted from all VOIs. Five machine learning classifiers were trained to distinguish chRCC from RO using a fivefold cross-validation strategy. Classifier performance was evaluated mainly by the area under the receiver operating characteristic (ROC) curve and accuracy. Results In total, 1029 features were extracted from the CMP, NP, and EP images. The LASSO regression algorithm screened out the four, four, and six best features from these phases, respectively, and eight features were selected when CMP and NP were combined. All five classifiers had good diagnostic performance, with area under the curve (AUC) values greater than 0.850; the support vector machine (SVM) classifier showed the best performance, with a diagnostic accuracy of 0.945 (AUC 0.964 ± 0.054; sensitivity 0.999; specificity 0.800). Conclusions Accurate preoperative differential diagnosis of chRCC and RO can be facilitated by combining CT-enhanced quantitative features with machine learning.
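The LASSO-screening-plus-classifier pipeline can be sketched with scikit-learn on synthetic data of the same shape as the study (61 lesions × 1029 features); the label rule and regularization strength are assumptions for illustration only:

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(3)

# Synthetic radiomics matrix: 61 lesions x 1029 features, mimicking the
# study's dimensions, with a few truly informative columns (an assumption).
X = rng.normal(size=(61, 1029))
y = (X[:, 0] + X[:, 1] - X[:, 2] > 0).astype(int)  # chRCC=1 vs RO=0

# LASSO screening: keep only features with non-zero coefficients.
lasso = Lasso(alpha=0.05, max_iter=5000).fit(X, y)
selected = np.flatnonzero(lasso.coef_)
X_sel = X[:, selected]

# Fivefold cross-validated SVM on the screened features.
scores = cross_val_score(SVC(kernel="rbf"), X_sel, y, cv=5)
print(len(selected), round(scores.mean(), 3))
```

With far more features than samples, LASSO can retain at most as many features as there are samples, which is exactly why it suits this kind of radiomics screening.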


2021 ◽  
Vol 13 (4) ◽  
pp. 596
Author(s):  
David Vint ◽  
Matthew Anderson ◽  
Yuhao Yang ◽  
Christos Ilioudis ◽  
Gaetano Di Caterina ◽  
...  

In recent years, the technological advances leading to the production of high-resolution Synthetic Aperture Radar (SAR) images have enabled increasingly effective target recognition capabilities. However, high spatial resolution is not always achievable and, for some sensing modes such as Foliage Penetrating Radar, low-resolution imaging is often the only option. In this paper, the problem of automatic target recognition in low-resolution Foliage Penetrating (FOPEN) SAR is addressed using Convolutional Neural Networks (CNNs) able to extract both low- and high-level features of the imaged targets. Additionally, to address the issue of limited dataset size, Generative Adversarial Networks are used to enlarge the training set. Finally, a Receiver Operating Characteristic (ROC)-based post-classification decision approach is used to reduce classification errors and measure the classifier's capability to provide a reliable output. The effectiveness of the proposed framework is demonstrated on real FOPEN SAR data.
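One common ROC-based operating-point rule is Youden's J statistic (choosing the threshold that maximizes TPR − FPR). The sketch below illustrates the idea on synthetic classifier scores and is not necessarily the exact decision approach used in the paper:

```python
import numpy as np

def roc_points(scores, labels):
    """TPR and FPR at every candidate threshold (descending scores)."""
    order = np.argsort(-scores)
    labels = labels[order]
    tps = np.cumsum(labels)          # true positives as threshold lowers
    fps = np.cumsum(1 - labels)      # false positives likewise
    return tps / labels.sum(), fps / (len(labels) - labels.sum())

rng = np.random.default_rng(4)
pos = rng.normal(1.5, 1.0, 100)  # classifier scores for true targets
neg = rng.normal(0.0, 1.0, 100)  # scores for clutter
scores = np.concatenate([pos, neg])
labels = np.concatenate([np.ones(100), np.zeros(100)]).astype(int)

tpr, fpr = roc_points(scores, labels)
best = np.argmax(tpr - fpr)                # Youden's J: maximal TPR - FPR
threshold = np.sort(scores)[::-1][best]    # score at the chosen operating point
print(f"operating threshold: {threshold:.2f}")
```

Detections scoring below the chosen threshold would be rejected rather than forced into a class, which is one way a post-classification stage can trade errors for reliability.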


Author(s):  
Bo Wang ◽  
Xiaoting Yu ◽  
Chengeng Huang ◽  
Qinghong Sheng ◽  
Yuanyuan Wang ◽  
...  

The excellent feature extraction ability of deep convolutional neural networks (DCNNs) has been demonstrated in many image processing tasks, in which image classification can achieve high accuracy from raw input images alone. However, the specific image features that influence the classification results are not readily determinable, and what lies behind the predictions is unclear. This study proposes a method for ship classification that combines the Sobel and Canny operators with an Inception module. The Sobel and Canny operators obtain enhanced edge features from the input images. A convolutional layer is replaced with the Inception module, which can automatically select the proper convolution kernel for ship objects in different image regions. The principle is that the high-level features abstracted by the DCNN, and the features obtained by the multi-convolution concatenation of the Inception module, must ultimately derive from the edge information of the preprocessed input images. This means the classification results are based on the input edge features, which indirectly interprets the classification results to some extent. Experimental results show that combining the edge features with the Inception module improves DCNN ship classification performance. The original model on the raw dataset has an average accuracy of 88.72%, while the same model with enhanced edge features as input achieves the best performance among all models, 90.54%. The model that replaces the fifth convolutional layer with the Inception module achieves 89.50%, performs close to VGG-16 on the raw dataset, and is significantly better than other deep neural networks. These results validate the functionality and feasibility of the proposed idea.
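The edge-enhancement step can be illustrated with a hand-rolled Sobel gradient magnitude (the paper also uses the Canny operator, omitted here for brevity):

```python
import numpy as np

def sobel_magnitude(img):
    """Gradient magnitude via Sobel kernels (valid region only)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T  # vertical-gradient kernel is the transpose
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    return np.hypot(gx, gy)

# A vertical step edge: left half dark, right half bright.
img = np.zeros((8, 8))
img[:, 4:] = 1.0

edges = sobel_magnitude(img)
print(edges.max(), edges[:, 0].max())  # strong response only at the step
```

In the proposed method the resulting edge maps, rather than the raw pixels, are what the DCNN consumes, which is what ties its predictions back to edge information.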


1987 ◽  
Vol 10 (3) ◽  
pp. 407-436 ◽  
Author(s):  
Michael A. Arbib

Abstract Intermediate constructs are required as bridges between complex behaviors and realistic models of neural circuitry. For cognitive scientists in general, schemas are the appropriate functional units; brain theorists can work with neural layers as units intermediate between structures subserving schemas and small neural circuits. After an account of different levels of analysis, we describe visuomotor coordination in terms of perceptual schemas and motor schemas. The interest of schemas to cognitive science in general is illustrated with the example of perceptual schemas in high-level vision and motor schemas in the control of dextrous hands. Rana computatrix, the computational frog, is introduced to show how one constructs an evolving set of model families to mediate flexible cooperation between theory and experiment. Rana computatrix may be able to do for the study of the organizational principles of neural circuitry what Aplysia has done for the study of subcellular mechanisms of learning. Approach, avoidance, and detour behavior in frogs and toads are analyzed in terms of interacting schemas. Facilitation and prey recognition are implemented as tectal-pretectal interactions, with the tectum modeled by an array of tectal columns. We show how layered neural computation enters into models of stereopsis and how depth schemas may involve the interaction of accommodation and binocular cues in anurans.


2021 ◽  
Vol 8 ◽  
Author(s):  
Mojtaba Akbari ◽  
Jay Carriere ◽  
Tyler Meyer ◽  
Ron Sloboda ◽  
Siraj Husain ◽  
...  

During an ultrasound (US) scan, the sonographer is in close contact with the patient, which puts them at risk of COVID-19 transmission. In this paper, we propose a robot-assisted system that automatically scans tissue, increasing the sonographer/patient distance and decreasing the contact duration between them. The method was developed as a quick response to the COVID-19 pandemic; it accommodates sonographers' preferences for how US scanning is done and can be trained quickly for different applications. Our proposed system automatically scans the tissue using a dexterous robot arm that holds the US probe, and it assesses the quality of the acquired US images in real time. This image-quality feedback is used to automatically adjust the US probe contact force based on the quality of each image frame. The quality assessment algorithm is based on three US image features: correlation, compression, and noise characteristics. These features are input to an SVM classifier, and the robot arm adjusts the US scanning force based on the SVM output. The proposed system enables the sonographer to maintain a distance from the patient, because the sonographer no longer has to hold the probe and press it against the patient's body for prolonged periods. The SVM was trained using bovine and porcine biological tissue, and the system was then tested experimentally on plastisol phantom tissue. The experimental results show that our proposed quality assessment algorithm successfully maintains US image quality and is fast enough for use in a robotic control loop.
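A minimal sketch of the quality-driven force adjustment: an SVM trained on the three named features (correlation, compression, noise) gates a simple force-update rule. The feature ranges, synthetic training data, and step size are assumptions, not the paper's calibrated values:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(5)

# Each frame summarized by three features named in the text:
# [correlation, compression, noise]. Values are synthetic stand-ins.
good = np.column_stack([rng.uniform(0.8, 1.0, 50),   # high correlation
                        rng.uniform(0.8, 1.0, 50),   # good compression stats
                        rng.uniform(0.0, 0.2, 50)])  # low noise
poor = np.column_stack([rng.uniform(0.0, 0.4, 50),
                        rng.uniform(0.0, 0.4, 50),
                        rng.uniform(0.5, 1.0, 50)])

X = np.vstack([good, poor])
y = np.array([1] * 50 + [0] * 50)  # 1 = acceptable image quality

quality_clf = SVC(kernel="rbf").fit(X, y)

def force_update(force, frame_features, step=0.5):
    """Increase contact force while quality is poor, hold it otherwise."""
    ok = quality_clf.predict(frame_features.reshape(1, -1))[0]
    return force if ok else force + step

print(force_update(5.0, np.array([0.9, 0.9, 0.1])))  # good frame: force held
```

In the robotic loop this update would run per frame, so the probe presses harder only while the classifier reports poor image quality.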

