A Deep Learning-Based Method for Quantifying and Mapping the Grain Size on Pebble Beaches

2020
Vol 12 (21)
pp. 3659
Author(s):
Antoine Soloy
Imen Turki
Matthieu Fournier
Stéphane Costa
Bastien Peuziat
...  

This article proposes a new methodological approach to measuring and mapping the size of coarse clasts on a land surface from photographs. The method is based on the Mask Region-based Convolutional Neural Network (Mask R-CNN), a deep learning algorithm that performs instance segmentation of objects after initial training on manually labeled data. The algorithm identifies and classifies the objects present in an image at the pixel scale, without human intervention, in a matter of seconds. This work demonstrates that the model can be trained to detect non-overlapping coarse sediments on scaled images and to extract their individual size and morphological characteristics with high accuracy (R2 = 0.98; Root Mean Square Error (RMSE) = 3.9 mm). Element size profiles can then be measured over a sedimentary body, as was done on the pebble beach of Etretat (Normandy, France) to monitor the spatial variability of grain size before and after a storm. Applied at a larger scale to ortho-images derived from an Unmanned Aerial Vehicle (UAV), the method allows accurate characterization and high-resolution mapping of surface coarse sediment size, as performed on the two pebble beaches of Etretat (D50 = 5.99 cm) and Hautot-sur-Mer (D50 = 7.44 cm) (Normandy, France). Validation results show satisfactory overall representativeness (R2 = 0.45 and 0.75; RMSE = 6.8 mm and 9.3 mm at Etretat and Hautot-sur-Mer, respectively), and the method remains fast, easy to apply, and low-cost. It is nevertheless limited by image resolution (objects need to be longer than 4 cm) and could be improved in several ways, for instance by adding more manually labeled data to the training dataset and by considering more accurate size-measurement methods than ellipse fitting.
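
As an illustration of the pipeline described above, here is a minimal sketch of converting instance masks into clast sizes by ellipse fitting. It assumes torchvision's off-the-shelf Mask R-CNN rather than the authors' trained weights, and the score threshold and mm-per-pixel scale are illustrative:

```python
# Sketch: grain sizes from Mask R-CNN instance masks via ellipse fitting.
# Weights, threshold, and scale are assumptions, not the published setup.
import cv2
import numpy as np
import torch
import torchvision

model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def grain_axes_mm(image_rgb: np.ndarray, mm_per_px: float, score_thr: float = 0.7):
    """Return (major, minor) ellipse axis lengths in mm for each detected clast."""
    tensor = torch.from_numpy(image_rgb).permute(2, 0, 1).float() / 255.0
    with torch.no_grad():
        pred = model([tensor])[0]
    sizes = []
    for mask, score in zip(pred["masks"], pred["scores"]):
        if score < score_thr:
            continue
        binary = (mask[0].numpy() > 0.5).astype(np.uint8)
        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            continue
        cnt = max(contours, key=cv2.contourArea)
        if len(cnt) >= 5:                       # cv2.fitEllipse needs >= 5 points
            (_, _), (w, h), _ = cv2.fitEllipse(cnt)
            sizes.append((max(w, h) * mm_per_px, min(w, h) * mm_per_px))
    return sizes
```

A D50 estimate would then be the median of the fitted axis lengths over all detections; as the abstract notes, the ellipse-fitting step itself is a candidate for replacement by more accurate size measures.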

2021
Vol 13 (9)
pp. 1779
Author(s):
Xiaoyan Yin
Zhiqun Hu
Jiafeng Zheng
Boyong Li
Yuanyuan Zuo

Radar beam blockage is an important error source that affects the quality of weather radar data. An echo-filling network (EFnet) based on a deep learning algorithm is proposed to correct the echo intensity in the occluded area of the Nanjing S-band new-generation weather radar (CINRAD/SA). The training dataset is constructed from labels, which are the echo intensities at the 0.5° elevation in the unblocked area, and from input features, which are the intensities in a cube spanning multiple elevations and range gates corresponding to the location of each label. Two loss functions are used to train the network: one is the common mean square error (MSE), and the other is a self-defined loss function that increases the weight of strong echoes. Because the radar beam broadens with distance and height, the 0.5° elevation scan is divided into six 25 km range bands, and a separate model is trained for each. The models are evaluated by three indicators: explained variance (EVar), mean absolute error (MAE), and correlation coefficient (CC). Two cases are presented to compare the performance of the echo-filling model under the two loss functions. The results suggest that EFnet can effectively correct the echo reflectivity and improve data quality in the occluded area, and that the self-defined loss function yields better results for strong echoes.
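
The self-defined loss is not specified in detail here; a minimal Keras-style sketch of such a strong-echo-weighted MSE, with an assumed 35 dBZ threshold and 5x weight, might look like:

```python
# Sketch of a strong-echo-weighted MSE loss in Keras. The 35 dBZ threshold
# and the 5x weight are illustrative assumptions, not the paper's values.
import tensorflow as tf

def strong_echo_mse(y_true, y_pred):
    weights = tf.where(y_true >= 35.0, 5.0, 1.0)   # emphasize echoes >= 35 dBZ
    return tf.reduce_mean(weights * tf.square(y_true - y_pred))

# model.compile(optimizer="adam", loss=strong_echo_mse)
```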


Animals
2021
Vol 11 (6)
pp. 1549
Author(s):
Robert D. Chambers
Nathanael C. Yoder
Aletha B. Carson
Christian Junge
David E. Allen
...  

Collar-mounted canine activity monitors can use accelerometer data to estimate dog activity levels, step counts, and distance traveled. With recent advances in machine learning and embedded computing, much more nuanced and accurate behavior classification has become possible, giving these affordable consumer devices the potential to improve the efficiency and effectiveness of pet healthcare. Here, we describe a novel deep learning algorithm that classifies dog behavior at sub-second resolution using commercial pet activity monitors. We built machine learning training databases from more than 5000 videos of more than 2500 dogs and ran the algorithms in production on more than 11 million days of device data. We then surveyed project participants representing 10,550 dogs, who provided 163,110 event responses, to validate real-world detection of eating and drinking behavior. The resulting algorithm displayed high sensitivity and specificity for detecting drinking behavior (0.949 and 0.999, respectively) and eating behavior (0.988 and 0.983). We also demonstrated detection of licking (0.772, 0.990), petting (0.305, 0.991), rubbing (0.729, 0.996), scratching (0.870, 0.997), and sniffing (0.610, 0.968). We show that the device's position on the collar had no measurable impact on performance. In production, users reported a true positive rate of 95.3% for eating (among 1514 users) and of 94.9% for drinking (among 1491 users). The study demonstrates the accurate detection of important health-related canine behaviors using a collar-mounted accelerometer. We trained and validated our algorithms on a large and realistic training dataset, and we assessed and confirmed accuracy in production via user validation.
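
For reference, the sensitivity and specificity figures quoted above can be computed from windowed predictions with a small helper; the label names are illustrative:

```python
# Sketch: per-behavior sensitivity and specificity from windowed predictions.
# Label strings such as "drinking" are illustrative stand-ins.
import numpy as np

def sensitivity_specificity(y_true: np.ndarray, y_pred: np.ndarray, positive: str):
    tp = np.sum((y_true == positive) & (y_pred == positive))
    fn = np.sum((y_true == positive) & (y_pred != positive))
    tn = np.sum((y_true != positive) & (y_pred != positive))
    fp = np.sum((y_true != positive) & (y_pred == positive))
    return tp / (tp + fn), tn / (tn + fp)

# Usage: sens, spec = sensitivity_specificity(labels, preds, positive="drinking")
```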


Sensors
2022
Vol 22 (2)
pp. 497
Author(s):
Sébastien Villon
Corina Iovan
Morgan Mangeas
Laurent Vigliola

With the availability of low-cost and efficient digital cameras, ecologists can now survey the world's biodiversity through image sensors, especially in the previously rather inaccessible marine realm. However, the data accumulate rapidly, and ecologists face a data-processing bottleneck. While computer vision has long been used as a tool to speed up image processing, only since the breakthrough of deep learning (DL) algorithms has a revolution in the automatic, video-based assessment of biodiversity become conceivable. However, current applications of DL models to biodiversity monitoring do not account for some universal rules of biodiversity, especially rules on the distribution of species abundance, species rarity, and ecosystem openness. These rules imply three issues for deep learning applications: the imbalance of long-tailed datasets biases the training of DL models; scarce data greatly lessen the performance of DL models for classes with few examples; and, in an open world, objects absent from the training dataset are incorrectly assigned to known classes in the application dataset. Promising solutions to these issues are discussed, including data augmentation, data generation, cross-entropy modification, few-shot learning, and open set recognition. At a time when biodiversity faces the immense challenges of climate change and Anthropocene defaunation, stronger collaboration between computer scientists and ecologists is urgently needed to unlock the automatic monitoring of biodiversity.
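
Of the remedies listed, cross-entropy modification is the most compact to illustrate. A minimal PyTorch sketch that reweights classes inversely to their frequencies in a hypothetical long-tailed dataset:

```python
# Sketch of one remedy discussed above: a cross-entropy modification that
# reweights classes inversely to their frequency in a long-tailed dataset.
import torch
import torch.nn as nn

class_counts = torch.tensor([9000.0, 700.0, 50.0])       # hypothetical long tail
weights = class_counts.sum() / (len(class_counts) * class_counts)
criterion = nn.CrossEntropyLoss(weight=weights)

logits = torch.randn(8, 3)                 # model outputs for a batch of 8
targets = torch.randint(0, 3, (8,))
loss = criterion(logits, targets)
```

The `weight` argument rescales each class's contribution to the loss, counteracting the long-tail bias during training.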


2021
Vol 2021
pp. 1-10
Author(s):
Hua Zheng
Zhenglong Wu
Shiqiang Duan
Jiangtao Zhou

Due to the inevitable deviations between the results of theoretical calculations and physical experiments, flutter tests and flutter signal analysis often play significant roles in designing the aeroelasticity of a new aircraft. The structural responses measured from aeroelastic models in both wind tunnel tests and real flight flutter tests contain an abundance of structural information, but traditional methods have limited ability to extract the features of concern. Inspired by deep learning concepts, a novel feature extraction method for flutter signal analysis was established in this study by combining a convolutional neural network (CNN) with empirical mode decomposition (EMD). It is widely hypothesized that when flutter occurs, the measured structural signals are harmonic or divergent in the time domain, and that the flutter mode (1) is singular and (2) its energy increases significantly in the frequency domain. A measured-signal feature extraction and flutter criterion framework was constructed accordingly. The signals measured in a wind tunnel test were manually labeled "flutter" and "no-flutter" to form the foundational dataset for the deep learning algorithm. After normalized preprocessing, the intrinsic mode functions (IMFs) of the flutter test signals are obtained by the EMD method. The IMFs are then reshaped to a size suitable for input to the CNN. The CNN parameters are optimized through the training dataset, and the trained model is validated on the test dataset (i.e., cross-validation). The accuracy of the proposed method reached 100% on the test dataset, and the trained model effectively distinguishes whether or not a structural response signal contains flutter. The combination of EMD and CNN thus provides effective feature extraction from time series signals in flutter test data. This research explores the connection between structural response signals and flutter from the perspective of artificial intelligence, and the method allows real-time, online prediction with low computational complexity.
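
A minimal sketch of the EMD-to-CNN preprocessing step, assuming the PyEMD package (the paper does not name its EMD implementation) and an assumed fixed number of retained IMFs:

```python
# Sketch: normalize a measured signal, decompose it with EMD, and stack the
# IMFs into a fixed-size 2-D array suitable as CNN input. PyEMD and the
# six-IMF cap are assumptions for illustration.
import numpy as np
from PyEMD import EMD

def signal_to_imf_image(signal: np.ndarray, n_imfs: int = 6) -> np.ndarray:
    signal = (signal - signal.mean()) / (signal.std() + 1e-8)   # normalize
    imfs = EMD()(signal)                   # shape: (n_found_imfs, len(signal))
    out = np.zeros((n_imfs, signal.size))
    out[: min(n_imfs, imfs.shape[0])] = imfs[:n_imfs]
    return out                             # CNN input: channels x time
```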


2021
Author(s):
Sidhant Idgunji
Madison Ho
Jonathan L. Payne
Daniel Lehrmann
Michele Morsilli
...  

<p>The growing digitization of fossil images has vastly improved and broadened the potential application of big data and machine learning, particularly computer vision, in paleontology. Recent studies show that machine learning is capable of approaching human abilities of classifying images, and with the increase in computational power and visual data, it stands to reason that it can match human ability but at much greater efficiency in the near future. Here we demonstrate this potential of using deep learning to identify skeletal grains at different levels of the Linnaean taxonomic hierarchy. Our approach was two-pronged. First, we built a database of skeletal grain images spanning a wide range of animal phyla and classes and used this database to train the model. We used a Python-based method to automate image recognition and extraction from published sources. Second, we developed a deep learning algorithm that can attach multiple labels to a single image. Conventionally, deep learning is used to predict a single class from an image; here, we adopted a Branch Convolutional Neural Network (B-CNN) technique to classify multiple taxonomic levels for a single skeletal grain image. Using this method, we achieved over 90% accuracy for both the coarse, phylum-level recognition and the fine, class-level recognition across diverse skeletal grains (6 phyla and 15 classes). Furthermore, we found that image augmentation improves the overall accuracy. This tool has potential applications in geology ranging from biostratigraphy to paleo-bathymetry, paleoecology, and microfacies analysis. Further improvement of the algorithm and expansion of the training dataset will continue to narrow the efficiency gap between human expertise and machine learning.</p>


2021
pp. bjophthalmol-2020-318107
Author(s):
Kenichi Nakahara
Ryo Asaoka
Masaki Tanito
Naoto Shibata
Keita Mitsuhashi
...  

Background/aims: To validate a deep learning algorithm for diagnosing glaucoma from fundus photography obtained with a smartphone.
Methods: A training dataset consisting of 1364 colour fundus photographs with glaucomatous indications and 1768 colour fundus photographs without glaucomatous features was obtained using an ordinary fundus camera. The testing dataset consisted of 73 eyes of 73 patients with glaucoma and 89 eyes of 89 normative subjects; for these, fundus photographs were acquired with both an ordinary fundus camera and a smartphone. A deep learning algorithm was developed to diagnose glaucoma using the training dataset, and the trained neural network was evaluated on its glaucoma-versus-normal predictions over the test datasets for images from both cameras. Diagnostic accuracy was assessed using the area under the receiver operating characteristic curve (AROC).
Results: The AROC was 98.9% with a fundus camera and 84.2% with a smartphone. When validated only in eyes with advanced glaucoma (mean deviation value < −12 dB, N=26), the AROC was 99.3% with a fundus camera and 90.0% with a smartphone. The AROC values obtained with the two cameras differed significantly.
Conclusion: The usefulness of a deep learning algorithm for automatically screening for glaucoma from smartphone-based fundus photographs was validated. The algorithm had considerably high diagnostic ability, particularly in eyes with advanced glaucoma.
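
For reference, the AROC metric used here is directly available in scikit-learn; the arrays below are hypothetical stand-ins for the per-eye model scores:

```python
# Sketch: AROC from per-eye glaucoma scores. Arrays are hypothetical
# placeholders, not the study's data.
import numpy as np
from sklearn.metrics import roc_auc_score

y_true = np.array([1, 1, 0, 0, 1, 0])                    # 1 = glaucoma, 0 = normal
scores_camera = np.array([0.9, 0.8, 0.2, 0.1, 0.7, 0.3])  # fundus-camera images
scores_phone = np.array([0.7, 0.6, 0.4, 0.2, 0.5, 0.45])  # smartphone images
print(roc_auc_score(y_true, scores_camera), roc_auc_score(y_true, scores_phone))
```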


Smart Cities
2021
Vol 4 (3)
pp. 1220-1243
Author(s):
Hafiz Suliman Munawar
Fahim Ullah
Siddra Qayyum
Amirhossein Heravi

Floods are among the most fatal and devastating disasters, causing immense loss of human lives and damage to property, infrastructure, and agricultural land. To address this, there is a need to develop and implement real-time flood management systems that can instantly detect flooded regions and initiate relief activities as early as possible. Current imaging systems relying on satellites have demonstrated low accuracy and delayed response, making them unreliable and impractical for emergency responses to natural disasters such as flooding. This research employs Unmanned Aerial Vehicles (UAVs) to develop an automated imaging system that can identify inundated areas from aerial images. A Haar cascade classifier was explored in the case study to detect landmarks such as roads and buildings in the aerial images captured by UAVs and to identify flooded areas. The extracted landmarks are added to the training dataset used to train a deep learning algorithm. Experimental results show that buildings and roads can be detected in the images with 91% and 94% accuracy, respectively, and an overall accuracy of 91% is recorded in classifying flooded and non-flooded regions in the input case study images. The system has shown promising results on test images from both pre- and post-flood classes. Flood relief and rescue workers can use this system to quickly locate flooded regions and rescue stranded people, and such real-time flood inundation systems will help transform disaster management in line with modern smart city initiatives.
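
A minimal OpenCV sketch of Haar cascade landmark detection follows. Since OpenCV ships no building or road cascades, the XML file name below stands for a custom cascade trained on aerial imagery, as the study describes:

```python
# Sketch: Haar cascade detection of buildings in a UAV frame.
# "building_cascade.xml" and "uav_frame.jpg" are hypothetical file names.
import cv2

cascade = cv2.CascadeClassifier("building_cascade.xml")   # custom-trained cascade
image = cv2.imread("uav_frame.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
buildings = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in buildings:                             # draw detections
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
```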


2018
Vol 2018
pp. 1-10
Author(s):
Chia-Yen Lee
Guan-Lin Chen
Zhong-Xuan Zhang
Yi-Hong Chou
Chih-Chung Hsu

The sonogram is currently an effective means of cancer screening and diagnosis owing to its convenience and harmlessness to humans. Traditionally, lesion boundary segmentation is performed first and classification second to reach a judgment of benign or malignant tumor; sonograms, however, often contain much speckle noise and intensity inhomogeneity. This study proposes a novel benign/malignant tumor classification system, comprising intensity inhomogeneity correction and a stacked denoising autoencoder (SDAE), that is suitable for small datasets. A classifier is established by extracting features through the multilayer training of the SDAE; automatic analysis of imaging features by the deep learning algorithm is applied to image classification, giving the system high efficiency and robust discrimination. In this study, two datasets (private and public) are used to train the deep learning models. For each dataset, two groups of test images are compared: the original images and the images after intensity inhomogeneity correction. The results show that when the deep learning algorithm is applied to sonograms after intensity inhomogeneity correction, tumor classification accuracy increases significantly. This study demonstrates the importance of preprocessing to highlight image features before feeding them to deep learning models; classification accuracy is then better than when the original images are used directly.
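
A minimal PyTorch sketch of one denoising-autoencoder stage of an SDAE; the layer widths and noise level are illustrative assumptions:

```python
# Sketch: a single denoising-autoencoder layer. An SDAE stacks several such
# layers, training each to reconstruct clean inputs from corrupted ones,
# then tops the stacked encoders with a classifier head.
import torch
import torch.nn as nn

class DenoisingAE(nn.Module):
    def __init__(self, n_in: int, n_hidden: int, noise_std: float = 0.1):
        super().__init__()
        self.noise_std = noise_std
        self.encoder = nn.Sequential(nn.Linear(n_in, n_hidden), nn.Sigmoid())
        self.decoder = nn.Sequential(nn.Linear(n_hidden, n_in), nn.Sigmoid())

    def forward(self, x):
        corrupted = x + self.noise_std * torch.randn_like(x)   # inject noise
        code = self.encoder(corrupted)
        return self.decoder(code), code     # reconstruction, learned features
```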


2022
Vol 12 (2)
pp. 639
Author(s):
Yin-Chun Hung
Yu-Xiang Zhao
Wei-Chen Hung

Kinmen Island was in a state of combat readiness during the 1950s–1980s. It opened for tourism in 1992, when all troops withdrew from the island. Most military installations, such as bunkers, anti-airborne piles, and underground tunnels, became deserted and disordered. The entrances to numerous underground bunkers are closed or covered with weeds, creating dangerous spaces on the island. This study evaluates the feasibility of using Electrical Resistivity Tomography (ERT) to detect and characterize the location, size, and depth of underground tunnels. To assess the reliability of the 2D-ERT results, this study built a numerical model to validate the correctness of the in situ measured data. In addition, this study employed an artificial intelligence deep learning technique to reprocess and predict the ERT images, and discussed using a deep learning algorithm to enhance image resolution and interpretation. A total of three 2D-ERT survey lines were implemented. The results indicate that all three survey lines clearly show the tunnel location and shape, and the numerical simulation results confirm that using 2D-ERT to survey underground tunnels is highly feasible. Moreover, a series of experiments with a deep learning Multilayer Perceptron shows that deep learning can clearly reveal the tunnel location and path and effectively enhance the interpretability and resolution of the 2D-ERT measurement results.
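
A minimal sketch, under assumed input/output sizes and synthetic data, of a Multilayer Perceptron regressor mapping measured resistivity values to an enhanced section, in the spirit of the study's MLP experiments:

```python
# Sketch: an MLP mapping 2D-ERT apparent-resistivity readings to an enhanced
# resistivity section. Array shapes and the synthetic data are illustrative
# assumptions; real training would use measured/simulated survey pairs.
import numpy as np
from sklearn.neural_network import MLPRegressor

X = np.random.rand(200, 64)    # 200 surveys, 64 apparent-resistivity values each
Y = np.random.rand(200, 256)   # target higher-resolution resistivity sections

mlp = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=500)
mlp.fit(X, Y)
enhanced = mlp.predict(X[:1])  # enhanced section for one survey
```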

