High-quality Strong Lens Candidates in the Final Kilo-Degree Survey Footprint

2021 ◽  
Vol 923 (1) ◽  
pp. 16
Author(s):  
R. Li ◽  
N. R. Napolitano ◽  
C. Spiniello ◽  
C. Tortora ◽  
K. Kuijken ◽  
...  

Abstract We present 97 new high-quality strong lensing candidates found in the final ∼350 deg² that complete the full ∼1350 deg² area of the Kilo-Degree Survey (KiDS). Together with our previous findings, the final list of high-quality candidates from KiDS amounts to 268 systems. The new sample is assembled using a new convolutional neural network (CNN) classifier applied separately to r-band (best-seeing) images and to g, r, and i color-composited images. This optimizes the complementarity of morphology and color information for the identification of strong lensing candidates. We apply the new classifiers to a sample of luminous red galaxies (LRGs) and a sample of bright galaxies (BGs) and select candidates that receive a high lens probability from the CNN (P_CNN). In particular, setting P_CNN > 0.8 for the LRGs, the one-band CNN predicts 1213 candidates, while the three-band classifier yields 1299 candidates, with only ∼30% overlap. For the BGs, in order to minimize false positives, we adopt a more conservative threshold, P_CNN > 0.9, for both CNN classifiers. This results in 3740 newly selected objects. The candidates from the two samples are visually inspected by seven coauthors to finally select 97 “high-quality” lens candidates that received mean scores larger than 6 (on a scale from 0 to 10). We finally discuss the effect of seeing on the accuracy of CNN classification and possible avenues for increasing the efficiency of multiband classifiers, in preparation for next-generation surveys from the ground and space.
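The two-stage selection described above (a CNN probability cut followed by a mean visual-inspection score cut) can be sketched as below. All names and the toy catalogue are illustrative, not taken from the KiDS pipeline.

```python
# Minimal sketch of a two-stage lens-candidate selection: keep objects
# above a CNN probability threshold, then keep those whose mean visual
# score (0-10 scale, seven graders) exceeds 6. Data here are illustrative.

def select_candidates(objects, p_threshold):
    """Keep objects whose CNN lens probability exceeds p_threshold."""
    return [o for o in objects if o["p_cnn"] > p_threshold]

def high_quality(candidates, min_mean_score=6.0):
    """Keep candidates whose mean grader score exceeds min_mean_score."""
    return [c for c in candidates
            if sum(c["scores"]) / len(c["scores"]) > min_mean_score]

# Toy catalogue: LRGs would use P_CNN > 0.8, BGs the stricter P_CNN > 0.9.
lrgs = [
    {"id": "L1", "p_cnn": 0.95, "scores": [7, 8, 6, 7, 9, 8, 7]},
    {"id": "L2", "p_cnn": 0.85, "scores": [3, 4, 5, 2, 4, 3, 5]},
    {"id": "L3", "p_cnn": 0.60, "scores": [8, 8, 8, 8, 8, 8, 8]},
]
passed = select_candidates(lrgs, p_threshold=0.8)
final = high_quality(passed)
print([c["id"] for c in final])  # only L1 passes both cuts
```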

2020 ◽  
Vol 12 (1) ◽  
pp. 39-55
Author(s):  
Hadj Ahmed Bouarara

In recent years, surveillance video has become a familiar phenomenon: it gives us a feeling of greater security, but we are continuously filmed and our privacy is greatly affected. This work deals with the development of a private video surveillance system (PVSS) using a regression residual convolutional neural network (RR-CNN), with the goal of proposing a new security policy that ensures the privacy of non-dangerous persons while still preventing crime. The aim is to best serve the interests of all parties: those who film and those who are filmed.


2019 ◽  
Vol 9 (1) ◽  
Author(s):  
Gaoyang Li ◽  
Kazuhiro Watanabe ◽  
Hitomi Anzai ◽  
Xiaorui Song ◽  
Aike Qiao ◽  
...  

Abstract Owing to the diversity of pulse-wave morphology, pulse-based diagnosis is difficult, especially pulse-wave-pattern classification (PWPC). A powerful method for PWPC is the convolutional neural network (CNN), which outperforms conventional methods in pattern classification because it extracts informative abstractions and features. In previous PWPC criteria, the relationship between pulse and disease types is not clear. To improve clinical practicability, a CNN model is needed that finds a one-to-one correspondence between pulse patterns and disease categories. In this study, five cardiovascular diseases (CVD) and complications were extracted from medical records as classification criteria to build pulse data set 1. Four physiological parameters closely related to the selected diseases were also extracted as classification criteria to build data set 2. An optimized CNN model with stronger feature-extraction capability for pulse signals was proposed, which achieved PWPC with 95% accuracy on data set 1 and 89% accuracy on data set 2. This demonstrates that pulse waves are the result of multiple physiological parameters, and that there are limitations to using a single physiological parameter to characterise the overall pulse pattern. The proposed CNN model can achieve high PWPC accuracy while using CVD and complication categories as classification criteria.


2020 ◽  
Vol 12 (8) ◽  
pp. 1289
Author(s):  
Stefan Bachhofner ◽  
Ana-Maria Loghin ◽  
Johannes Otepka ◽  
Norbert Pfeifer ◽  
Michael Hornacek ◽  
...  

We studied the applicability of point clouds derived from tri-stereo satellite imagery to semantic segmentation with generalized sparse convolutional neural networks, using an Austrian study area as an example. We examined, in particular, whether the distorted geometric information, in addition to color, influences the performance of segmenting clutter, roads, buildings, trees, and vehicles. To this end, we trained a fully convolutional neural network that uses generalized sparse convolution once solely on 3D geometric information (i.e., a 3D point cloud derived by dense image matching), and twice on 3D geometric as well as color information. In the first experiment we did not use class weights, whereas in the second we did. We compared the results with a fully convolutional neural network trained on a 2D orthophoto, and with a decision tree trained once on hand-crafted 3D geometric features and once on hand-crafted 3D geometric as well as color features. The decision tree using hand-crafted features has been successfully applied to aerial laser scanning data in the literature. Hence, we compared our main interest of study, a representation learning technique, with another representation learning technique and a non-representation learning technique. Our study area is located in Waldviertel, a region in Lower Austria. The territory is hilly and covered mainly by forests, agriculture, and grasslands. Our classes of interest are heavily unbalanced; however, we did not use any data augmentation techniques to counter overfitting. For our study area, we found that adding color to the geometric information only improves the performance of the Generalized Sparse Convolutional Neural Network (GSCNN) on the dominant class, which leads to a higher overall performance in our case. We also found that training the network with median class weighting partially reverts the effects of adding color.
The network also started to learn the classes with lower occurrences. The fully convolutional neural network trained on the 2D orthophoto generally outperforms the other two, with a kappa score of over 90% and an average per-class accuracy of 61%. However, the decision tree trained on color and hand-crafted geometric features has a 2% higher accuracy for roads.
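A common form of the median class weighting mentioned above is median-frequency balancing: each class is weighted by the median class frequency divided by its own frequency, so rare classes receive weights above 1. The sketch below is a generic illustration of that idea, not the exact weighting used in the study.

```python
# Median-frequency class weighting: weight_c = median_frequency / freq_c.
# Rare classes get weights > 1, dominant classes < 1, which counteracts
# heavy class imbalance in a segmentation loss. Labels here are toy data.
from collections import Counter
from statistics import median

def median_class_weights(labels):
    counts = Counter(labels)
    total = len(labels)
    freqs = {c: n / total for c, n in counts.items()}
    med = median(freqs.values())
    return {c: med / f for c, f in freqs.items()}

# Toy label set with a dominant class (0) and rarer classes (1, 2).
labels = [0] * 80 + [1] * 15 + [2] * 5
weights = median_class_weights(labels)
print(weights)  # the rarest class gets the largest weight
```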


2020 ◽  
Vol 10 (12) ◽  
pp. 4059
Author(s):  
Chung-Ming Lo ◽  
Yu-Hung Wu ◽  
Yu-Chuan (Jack) Li ◽  
Chieh-Chi Lee

Mycobacterial infections continue to greatly affect global health and result in challenging histopathological examinations. Using digital whole-slide images (WSIs), histopathological methods could be made more convenient. However, screening for stained bacilli is a highly laborious task for pathologists due to the microscopic and inconsistent appearance of bacilli. This study proposed a computer-aided detection (CAD) system based on deep learning to automatically detect acid-fast stained mycobacteria. A total of 613 bacillus-positive image blocks and 1202 negative image blocks (each approximately 20 × 20 pixels) were cropped from WSIs and divided into training and testing samples of bacillus images. After randomly selecting 80% of the samples as the training set and the remaining 20% as the testing set, a transfer learning mechanism based on a deep convolutional neural network (DCNN) was applied, with a pretrained AlexNet, to the target bacillus image blocks. The transferred DCNN model generated the probability that each image block contained a bacillus; a probability higher than 0.5 was regarded as positive. Consequently, the DCNN model achieved an accuracy of 95.3%, a sensitivity of 93.5%, and a specificity of 96.3%. For samples without color information, the performance was an accuracy of 73.8%, a sensitivity of 70.7%, and a specificity of 75.4%. The proposed DCNN model successfully distinguished bacilli from other tissues with promising accuracy, and the contribution of color information was revealed. This will help pathologists establish a more efficient diagnostic procedure.
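The accuracy, sensitivity, and specificity figures above follow directly from the confusion counts once each block is thresholded at probability 0.5. A minimal sketch of that evaluation step, with illustrative counts rather than the paper's data:

```python
# Sketch of thresholding per-block probabilities at 0.5 and computing
# accuracy, sensitivity (true-positive rate), and specificity
# (true-negative rate). The probabilities and labels are illustrative.

def classify(probs, threshold=0.5):
    """Label a block positive when its bacillus probability > threshold."""
    return [p > threshold for p in probs]

def metrics(predicted, actual):
    tp = sum(p and a for p, a in zip(predicted, actual))
    tn = sum((not p) and (not a) for p, a in zip(predicted, actual))
    fp = sum(p and (not a) for p, a in zip(predicted, actual))
    fn = sum((not p) and a for p, a in zip(predicted, actual))
    return {
        "accuracy": (tp + tn) / len(actual),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }

probs  = [0.9, 0.8, 0.4, 0.2, 0.1, 0.7]
actual = [True, True, True, False, False, False]
m = metrics(classify(probs), actual)
print(m)
```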


2020 ◽  
Vol 17 (2) ◽  
pp. 445-458
Author(s):  
Yonghui Dai ◽  
Bo Xu ◽  
Siyu Yan ◽  
Jing Xu

Cardiovascular disease is one of the diseases threatening human health, and its diagnosis has always been a research hotspot in the medical field. In particular, diagnosis based on the ECG (electrocardiogram) signal, an effective method for studying cardiovascular diseases, has attracted many scholars' attention. In this paper, a Convolutional Neural Network (CNN) is used to study the feature classification of three kinds of ECG signals: sinus rhythm (SR), ventricular tachycardia (VT), and ventricular fibrillation (VF). Specifically, different convolution-layer structures and different time intervals are used for ECG signal classification, such as 2-layer and 4-layer convolution architectures and four time periods (1 s, 2 s, 3 s, 10 s). By evaluating the above classification conditions, the best classification results are obtained. The contribution of this paper is twofold. On the one hand, a convolutional neural network is used to classify the arrhythmia data, and different classification effects are obtained by setting different convolution layers. On the other hand, according to the data characteristics of the three kinds of ECG signals, different time periods are designed to optimize the classification performance. The research results provide a reference for the classification of ECG signals and contribute to the research of cardiovascular diseases.
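The time-period comparison above presupposes splitting each ECG recording into fixed-length windows before classification. A minimal sketch of that windowing step, with an assumed (illustrative) sampling rate:

```python
# Sketch of segmenting a 1-D ECG sample sequence into non-overlapping
# windows of 1 s, 2 s, 3 s, and 10 s so each window can be fed to a CNN.
# The sampling rate and dummy signal are illustrative, not from the paper.

def segment(signal, fs, window_s):
    """Split signal into non-overlapping windows of window_s seconds at
    sampling rate fs; a trailing partial window is dropped."""
    n = int(fs * window_s)
    return [signal[i:i + n] for i in range(0, len(signal) - n + 1, n)]

fs = 250                       # samples per second (assumed)
signal = list(range(fs * 10))  # 10 s of dummy samples
for window_s in (1, 2, 3, 10):
    print(window_s, "s ->", len(segment(signal, fs, window_s)), "windows")
```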


Author(s):  
Asma Abdulelah Abdulrahman ◽  
Fouad Shaker Tahir

In this work, it was proposed to compress the color image after de-noising by introducing a coding scheme for a new discrete wavelet transform, the discrete Chebyshev wavelet transform (DCHWT), and linking it to a convolutional neural network that compresses the color image. The aim of this work is to find an effective method for face recognition, namely removing the noise introduced into the image while it was being transmitted over the communication network and compressing the image with convolutional neural networks. The performance of the algorithm was evaluated by calculating the peak signal-to-noise ratio (PSNR), mean squared error (MSE), compression ratio (CR), and bits per pixel (BPP) of the compressed image after a (256×256) color image was entered, to demonstrate the quality and efficiency of the proposed algorithm. The result obtained by using a convolutional neural network with the new wavelets is a better CR together with a high PSNR, yielding a high-quality compressed image that is ready for face recognition.
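Of the quality measures named above, MSE and PSNR have standard definitions for 8-bit images, sketched below; CR and BPP additionally depend on the size of the codec output. This is a generic illustration, not the authors' implementation.

```python
# Standard MSE and PSNR for 8-bit pixel data: PSNR = 10*log10(MAX^2/MSE),
# with MAX = 255. The two short pixel rows below are illustrative only.
import math

def mse(original, reconstructed):
    return sum((a - b) ** 2 for a, b in zip(original, reconstructed)) / len(original)

def psnr(original, reconstructed, max_value=255):
    m = mse(original, reconstructed)
    return float("inf") if m == 0 else 10 * math.log10(max_value ** 2 / m)

original      = [52, 55, 61, 66, 70, 61, 64, 73]
reconstructed = [50, 55, 60, 68, 70, 60, 65, 72]
print(round(mse(original, reconstructed), 2), "MSE")
print(round(psnr(original, reconstructed), 2), "dB PSNR")
```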


2018 ◽  
Vol 232 ◽  
pp. 01061
Author(s):  
Danhua Li ◽  
Xiaofeng Di ◽  
Xuan Qu ◽  
Yunfei Zhao ◽  
Honggang Kong

Pedestrian detection aims to localize and recognize every pedestrian instance in an image with a bounding box. The current state-of-the-art method is Faster RCNN, a network that uses a region proposal network (RPN) to generate high-quality region proposals, while Fast RCNN is used to extract features and classify them into the corresponding categories. The contribution of this paper is the integration of low-level and high-level features into a Faster RCNN-based pedestrian detection framework, which efficiently increases the capacity of the features. Through our experiments, we comprehensively evaluate our framework on the Caltech pedestrian detection benchmark; our method achieves state-of-the-art accuracy and presents a competitive result on the Caltech dataset.


2021 ◽  
Vol 36 (Supplement_1) ◽  
Author(s):  
T B Haugen ◽  
S A Hicks ◽  
O Witczak ◽  
J M Andersen ◽  
L Björndahl ◽  
...  

Abstract Study question How does convolutional neural network (CNN)-predicted sperm motility correlate with manual assessment according to the WHO guidelines? Summary answer The CNN predicts sperm motility comparably to the reference laboratories in the ESHRE-SIGA External Quality Assessment Programme for Semen Analysis. What is known already Manual sperm motility assessment according to the WHO guidelines is regarded as the gold standard. To obtain reliable and reproducible results, comprehensive training is essential, as well as running internal and external quality control. Prediction based on artificial intelligence can potentially transfer human-level performance into models that perform the task faster and avoid human assessor variation. CNNs have been groundbreaking in image processing. To develop AI models with high predictive power, the data set used should be of high quality, with sperm motility assessed according to the WHO guidelines. Study design, size, duration Videos of 65 fresh semen samples obtained from the ESHRE-SIGA External Quality Assessment Programme for Semen Analysis (from the period 2006–2018) were used in the development of the model. One video was captured for each semen sample. Sperm motility data were obtained from manual assessment of the videos according to WHO criteria by reference laboratories in the programme. Rapid progressive motility was also included. Ten-fold cross-validation was used to compensate for the relatively small data set. Participants/materials, setting, methods The mean values of the reference laboratories were used. Sparse optical flow of the sperm videos was generated from each second of each video and fed into a ResNet50 convolutional neural network. For training, Adam was used to optimize the weights and mean squared error (MSE) to measure loss. As a baseline, ZeroR (pseudo regression) was performed. Results are reported as mean absolute error (MAE). For correlation analysis, Pearson's r was used.
Main results and the role of chance Predicting sperm motility from the optical flow generated from the videos achieved an average MAE of 0.05 across progressive (0.06), non-progressive (0.04), and immotile sperm (0.05). The ZeroR baseline was 0.09, indicating that the method is able to capture the movement of the spermatozoa and predict motility with low error. Pearson's correlation between manually and AI-predicted motility showed r of 0.88, p < 0.001 for progressive, 0.59, p < 0.001 for non-progressive, and 0.89, p < 0.001 for immotile sperm. When predicting rapid progressive motility, the average MAE was 0.07 across rapid progressive (0.11), slow progressive (0.09), non-progressive (0.04), and immotile sperm (0.05). Pearson's correlation analysis between manually and AI-predicted motility showed r of 0.67, p < 0.001 for rapid progressive, 0.41, p < 0.001 for slow progressive, 0.51, p < 0.001 for non-progressive, and 0.88, p < 0.001 for immotile sperm. The results show that differentiating between rapid progressive and slow progressive motility is difficult, but the model is still able to do this better than the ZeroR baseline, which was 0.15 for rapid progressive and 0.11 for slow progressive. This is interesting, since rapid progressive motility has been regarded as challenging to assess. The next step would be to compare the results of the algorithm to human performance. Limitations, reasons for caution The sample size is small. The model is based on videos of high quality, and the performance may not transfer well to videos of lower quality. The performance for rapid progressive motility, which may have important clinical value, has to be improved. Wider implications of the findings This CNN model has the potential to assess sperm motility according to the WHO guidelines for progressive motility and immotility.
The error values for the automatic predictions are low, and the model shows good performance considering that only videos were used to perform the prediction. Trial registration number Not applicable
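The two evaluation measures used above, MAE between predicted and manually assessed motility fractions and Pearson's r for their correlation, can be sketched as below. The sample values are illustrative, not from the study.

```python
# Mean absolute error and Pearson's correlation coefficient between
# manually assessed and model-predicted motility fractions. The five
# paired values below are made up for illustration.
import math

def mae(pred, ref):
    return sum(abs(p - r) for p, r in zip(pred, ref)) / len(ref)

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

manual    = [0.45, 0.32, 0.58, 0.41, 0.50]  # progressive fraction, manual
predicted = [0.48, 0.30, 0.55, 0.44, 0.52]  # same samples, model-predicted
print(round(mae(predicted, manual), 3), round(pearson_r(predicted, manual), 3))
```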


2021 ◽  
Vol 11 (10) ◽  
pp. 4402
Author(s):  
Chang-Bae Moon ◽  
Jong-Yeol Lee ◽  
Dong-Seong Kim ◽  
Byeong-Man Kim

This paper proposes a method to detect defects in a region of interest (ROI) with a convolutional neural network (CNN), after alignment (position and rotation calibration) of a manufacturer's headlights, to determine whether vehicle headlights are defective. The results were compared with an existing defect-detection method among the previously proposed methods. For this experiment, one hundred original headlight images were acquired for each of two vehicle types, and 20,000 high-quality (non-defective) images and 20,000 defective images were obtained by applying position and rotation transformations to the originals. The method proposed in this paper demonstrated a performance improvement of more than 0.1569 (15.69% on average) compared to the existing method.
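Expanding 100 originals per class into 20,000 images via position and rotation transformations implies roughly 200 transform-parameter combinations per original. A sketch of enumerating such a grid, with assumed (illustrative) shift and angle ranges rather than the paper's actual values:

```python
# Sketch of enumerating (dx, dy, angle) combinations to apply to each
# original image as position/rotation augmentation. The offset and angle
# grids are assumptions chosen so that 100 originals x 200 combinations
# yields the 20,000 images mentioned above.
import itertools

def transform_grid(dx_values, dy_values, angle_values):
    """All (dx, dy, angle) combinations to apply per original image."""
    return list(itertools.product(dx_values, dy_values, angle_values))

grid = transform_grid(
    dx_values=range(-4, 6, 2),   # 5 horizontal shifts, pixels (assumed)
    dy_values=range(-4, 6, 2),   # 5 vertical shifts, pixels (assumed)
    angle_values=range(-4, 4),   # 8 rotation angles, degrees (assumed)
)
print(len(grid) * 100)  # 100 originals x 200 transforms -> 20000
```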

