Confronting Deep-Learning and Biodiversity Challenges for Automatic Video-Monitoring of Marine Ecosystems

Sensors ◽  
2022 ◽  
Vol 22 (2) ◽  
pp. 497
Author(s):  
Sébastien Villon ◽  
Corina Iovan ◽  
Morgan Mangeas ◽  
Laurent Vigliola

With the availability of low-cost and efficient digital cameras, ecologists can now survey the world's biodiversity through image sensors, especially in the previously rather inaccessible marine realm. However, the data rapidly accumulates, and ecologists face a data-processing bottleneck. While computer vision has long been used as a tool to speed up image processing, it is only since the breakthrough of deep learning (DL) algorithms that a revolution in the automatic assessment of biodiversity from video recordings has become conceivable. However, current applications of DL models to biodiversity monitoring do not consider some universal rules of biodiversity, especially rules on the distribution of species abundance, species rarity, and ecosystem openness. Yet, these rules imply three issues for deep learning applications: the imbalance of long-tail datasets biases the training of DL models; scarce data greatly lessens the performance of DL models for classes with few samples; and the open-world issue implies that objects absent from the training dataset are incorrectly classified in the application dataset. Promising solutions to these issues are discussed, including data augmentation, data generation, cross-entropy modification, few-shot learning, and open set recognition. At a time when biodiversity faces the immense challenges of climate change and the Anthropocene defaunation, stronger collaboration between computer scientists and ecologists is urgently needed to unlock the automatic monitoring of biodiversity.
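
Among the listed solutions, cross-entropy modification is the most compact to illustrate: re-weighting the loss by inverse class frequency makes rare species count more during training. A minimal PyTorch sketch, assuming a hypothetical long-tailed vector of class counts (not from the paper):

```python
import torch
import torch.nn as nn

# Hypothetical long-tailed class counts (e.g., species abundances).
class_counts = torch.tensor([5000., 1200., 300., 40., 8.])

# Inverse-frequency weights, normalized to average 1 across classes.
weights = class_counts.sum() / (len(class_counts) * class_counts)

# Weighted cross-entropy: rare classes contribute more to the loss,
# counteracting the long-tail training bias described above.
criterion = nn.CrossEntropyLoss(weight=weights)

logits = torch.randn(16, 5)          # dummy model outputs
labels = torch.randint(0, 5, (16,))  # dummy ground-truth labels
loss = criterion(logits, labels)
```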

2019 ◽  
Vol 2019 ◽  
pp. 1-14 ◽  
Author(s):  
Yong He ◽  
Hong Zeng ◽  
Yangyang Fan ◽  
Shuaisheng Ji ◽  
Jianjian Wu

In this paper, we propose an approach to detect oilseed rape pests based on deep learning, which improves the mean average precision (mAP) to 77.14%, a gain of 9.7% over the original model. We deployed the model on a mobile platform so that every farmer can use the program, which diagnoses pests in real time and provides suggestions on pest control. We built an oilseed rape pest imaging database covering 12 typical oilseed rape pests and compared the performance of five models; SSD w/Inception was chosen as the optimal model. Moreover, to raise the mAP further, we used data augmentation (DA) and added a dropout layer. The experiments were performed on the Android application we developed, and the results show that our approach clearly surpasses the original model and is helpful for integrated pest management. Compared with past works, this application improves environmental adaptability, response speed, and accuracy, and has the advantages of low cost and simple operation, making it suitable for pest-monitoring missions with drones and the Internet of Things (IoT).
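
The two mAP-boosting tricks named here, data augmentation and an added dropout layer, can be sketched as follows; the transforms and layer sizes are illustrative assumptions, not the paper's SSD w/Inception configuration:

```python
import torch.nn as nn
from torchvision import transforms

# Illustrative augmentation pipeline for pest photographs; the paper does
# not list its exact transforms, so these are common assumptions.
train_transforms = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.RandomRotation(15),
    transforms.ToTensor(),
])

# A dropout layer before the classification head, as the abstract describes
# adding dropout to raise mAP; the feature width (1024) is an assumption.
classifier_head = nn.Sequential(
    nn.Dropout(p=0.5),
    nn.Linear(1024, 12),  # 12 typical oilseed rape pest classes
)
```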


Sensors ◽  
2021 ◽  
Vol 21 (2) ◽  
pp. 343
Author(s):  
Kim Bjerge ◽  
Jakob Bonde Nielsen ◽  
Martin Videbæk Sepstrup ◽  
Flemming Helsing-Nielsen ◽  
Toke Thomas Høye

Insect monitoring methods are typically very time-consuming and involve substantial investment in species identification following manual trapping in the field. Insect traps are often serviced only weekly, resulting in low temporal resolution of the monitoring data, which hampers ecological interpretation. This paper presents a portable computer vision system capable of attracting and detecting live insects. More specifically, the paper proposes the detection and classification of species by recording images of live individuals attracted to a light trap. An Automated Moth Trap (AMT) with multiple light sources and a camera was designed to attract and monitor live insects during twilight and night hours. A computer vision algorithm referred to as Moth Classification and Counting (MCC), based on deep learning analysis of the captured images, tracked and counted the number of insects and identified moth species. Observations over 48 nights resulted in the capture of more than 250,000 images, with an average of 5675 images per night. A customized convolutional neural network was trained on 2000 labeled images of live moths representing eight different classes, achieving a high validation F1-score of 0.93. The algorithm achieved an average classification and tracking F1-score of 0.71 and a tracking detection rate of 0.79. Overall, the proposed computer vision system and algorithm showed promising results as a low-cost solution for non-destructive and automatic monitoring of moths.
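
As a rough illustration of the kind of customized CNN classifier described (eight moth classes, F1-score evaluation), here is a minimal PyTorch sketch; the layer sizes are assumptions, not the MCC architecture:

```python
import torch.nn as nn
from sklearn.metrics import f1_score

# A compact CNN sketch for classifying moth crops into the eight classes
# mentioned in the abstract; the layer sizes are assumptions.
model = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(128, 8),  # eight moth classes
)

# Validation F1, as reported in the abstract (they obtained 0.93):
# f1 = f1_score(y_true, y_pred, average="macro")
```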


Author(s):  
A. Loulidi ◽  
R. Houssa ◽  
L. Buhl-Mortensen ◽  
H. Zidane ◽  
H. Rhinane

Abstract. The marine environment comprises many ecosystems that support habitat biodiversity. Benthic habitats and fish species associations are investigated with underwater gear to secure and manage these marine ecosystems in a sustainable manner. The current study evaluates the possibility of using deep learning methods, in particular the You Only Look Once version 3 (YOLOv3) algorithm, to detect fish under different conditions such as shading, low light, and high noise, both within still images and frame by frame within an underwater video recorded off the Atlantic coast of Morocco. The training dataset was collected from Open Images Dataset V6; a total of 1295 fish images were gathered and split into a training set and a test set. Data augmentation transformations were applied to the YOLOv3 pipeline to provide more learning samples. The mean average precision (mAP) metric was used to measure the model's performance. With a mAP of 91.3%, the results show that the proposed method is capable of detecting fish species in different natural marine environments and has the potential to be applied to detect other underwater species and substrata.
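
A hedged sketch of per-frame YOLOv3 detection on an underwater video, using OpenCV's dnn module with Darknet-format files; the file names are placeholders and the post-processing is only outlined:

```python
import cv2

# Load a trained YOLOv3 model in Darknet format; file names are placeholders.
net = cv2.dnn.readNetFromDarknet("yolov3-fish.cfg", "yolov3-fish.weights")
out_layers = net.getUnconnectedOutLayersNames()

cap = cv2.VideoCapture("underwater_survey.mp4")  # placeholder video path
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # YOLOv3 expects a square, 0-1 normalized RGB blob (416x416 is typical).
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True)
    net.setInput(blob)
    detections = net.forward(out_layers)  # raw outputs, one per YOLO scale
    # ...threshold confidences and suppress overlaps with cv2.dnn.NMSBoxes...
cap.release()
```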


Electronics ◽  
2021 ◽  
Vol 11 (1) ◽  
pp. 73
Author(s):  
Kuldoshbay Avazov ◽  
Mukhriddin Mukhiddinov ◽  
Fazliddin Makhmudov ◽  
Young Im Cho

In the construction of new smart cities, traditional fire-detection systems can be replaced with vision-based systems to establish fire safety in society using emerging technologies such as digital cameras, computer vision, artificial intelligence, and deep learning. In this study, we developed a fire detector that accurately detects even small sparks and sounds an alarm within 8 s of a fire outbreak. A novel convolutional neural network was developed to detect fire regions using an enhanced You Only Look Once (YOLO) v4 network. Based on the improved YOLOv4 algorithm, we adapted the network to operate on the Banana Pi M3 board using only three layers. Initially, we examined the original YOLOv4 approach to determine the accuracy of its predictions of candidate fire regions. However, the anticipated results were not observed after several experiments with this approach to detecting fire accidents. We improved the traditional YOLOv4 network by increasing the size of the training dataset through data augmentation techniques for the real-time monitoring of fire disasters. By modifying the network structure through automatic color augmentation, reducing parameters, etc., the proposed method successfully detected and reported disastrous fires with high speed and accuracy in different weather environments, whether sunny or cloudy, day or night. Experimental results revealed that the proposed method can be used successfully for the protection of smart cities and for monitoring fires in urban areas. Finally, we compared the performance of our method with that of recently reported fire-detection approaches, employing widely used performance metrics to test the fire classification results achieved.
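
The "automatic color augmentation" mentioned is commonly realized as random HSV jitter of training images. A sketch under that assumption, with gain values that are typical YOLO-style defaults rather than the paper's settings:

```python
import numpy as np
import cv2

def hsv_color_jitter(image, h_gain=0.015, s_gain=0.7, v_gain=0.4):
    """Random HSV jitter, a common 'automatic color augmentation'.

    Gain defaults are typical YOLO-style values, used as assumptions.
    """
    r = np.random.uniform(-1, 1, 3) * [h_gain, s_gain, v_gain] + 1
    hue, sat, val = cv2.split(cv2.cvtColor(image, cv2.COLOR_BGR2HSV))
    x = np.arange(256)
    lut_hue = ((x * r[0]) % 180).astype(np.uint8)  # 8-bit hue wraps at 180
    lut_sat = np.clip(x * r[1], 0, 255).astype(np.uint8)
    lut_val = np.clip(x * r[2], 0, 255).astype(np.uint8)
    hsv = cv2.merge((cv2.LUT(hue, lut_hue),
                     cv2.LUT(sat, lut_sat),
                     cv2.LUT(val, lut_val)))
    return cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)
```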


Diagnostics ◽  
2020 ◽  
Vol 10 (6) ◽  
pp. 417 ◽  
Author(s):  
Mohammad Farukh Hashmi ◽  
Satyarth Katiyar ◽  
Avinash G Keskar ◽  
Neeraj Dhanraj Bokde ◽  
Zong Woo Geem

Pneumonia causes the death of around 700,000 children every year and affects 7% of the global population. Chest X-rays are primarily used for the diagnosis of this disease. However, even for a trained radiologist, examining chest X-rays is a challenging task, and there is a need to improve diagnostic accuracy. In this work, an efficient model for the detection of pneumonia, trained on digital chest X-ray images, is proposed, which could aid radiologists in their decision-making process. A novel approach based on a weighted classifier is introduced, which combines the weighted predictions of state-of-the-art deep learning models such as ResNet18, Xception, InceptionV3, DenseNet121, and MobileNetV3 in an optimal way. This is a supervised learning approach in which the network's predictions depend on the quality of the dataset used. Transfer learning is used to fine-tune the deep learning models to obtain higher training and validation accuracy. Partial data augmentation techniques are employed to enlarge the training dataset in a balanced way. The proposed weighted classifier outperforms all of the individual models. Finally, the model is evaluated not only in terms of test accuracy but also by AUC score. The final weighted classifier achieves a test accuracy of 98.43% and an AUC score of 99.76 on unseen data from the Guangzhou Women and Children's Medical Center pneumonia dataset. Hence, the proposed model can be used for quick diagnosis of pneumonia and can aid radiologists in the diagnosis process.
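
One plausible reading of the weighted classifier is a convex combination of per-model predicted probabilities, with the weights fitted on held-out data; the sketch below follows that reading (the paper's exact optimization is not given here):

```python
import numpy as np
from scipy.optimize import minimize

def ensemble_nll(w, probs, labels):
    """Negative log-likelihood of a weighted average of model probabilities.

    probs: (n_models, n_samples, n_classes) validation outputs (assumed
    shape); labels: (n_samples,) integer ground truth.
    """
    w = np.abs(w) / np.abs(w).sum()           # positive weights summing to 1
    blended = np.tensordot(w, probs, axes=1)  # (n_samples, n_classes)
    return -np.mean(np.log(blended[np.arange(len(labels)), labels] + 1e-12))

def fit_ensemble_weights(probs, labels):
    """Fit the per-model weights on held-out data."""
    w0 = np.full(probs.shape[0], 1.0 / probs.shape[0])
    res = minimize(ensemble_nll, w0, args=(probs, labels), method="Nelder-Mead")
    return np.abs(res.x) / np.abs(res.x).sum()
```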


2021 ◽  
Vol 8 ◽  
Author(s):  
Mohamed Elgendi ◽  
Muhammad Umer Nasir ◽  
Qunfeng Tang ◽  
David Smith ◽  
John-Paul Grenier ◽  
...  

Chest X-ray imaging technology used for the early detection and screening of COVID-19 pneumonia is both accessible worldwide and affordable compared to other non-invasive technologies. Additionally, deep learning methods have recently shown remarkable results in detecting COVID-19 on chest X-rays, making it a promising screening technology for COVID-19. Deep learning relies on a large amount of data to avoid overfitting. While overfitting can result in perfect modeling on the original training dataset, it can fail to achieve high accuracy on a new testing dataset. In the image processing field, an image augmentation step (i.e., adding more training data) is often used to reduce overfitting on the training dataset and improve prediction accuracy on the testing dataset. In this paper, we examined the impact of geometric augmentations as implemented in several recent publications for detecting COVID-19. We compared the performance of 17 deep learning algorithms with and without different geometric augmentations. We empirically examined the influence of augmentation with respect to detection accuracy, dataset diversity, augmentation methodology, and network size. Contrary to expectation, our results show that the removal of recently used geometric augmentation steps actually improved the Matthews correlation coefficient (MCC) of the 17 models. The MCC without augmentation (MCC = 0.51) outperformed four recent geometric augmentations (MCC = 0.47 for Data Augmentation 1, MCC = 0.44 for Data Augmentation 2, MCC = 0.48 for Data Augmentation 3, and MCC = 0.49 for Data Augmentation 4). When we retrained a recently published deep learning model without augmentation on the same dataset, the detection accuracy significantly increased, with a McNemar's test statistic of χ² = 163.2 and a p-value of 2.23 × 10⁻³⁷. This is an interesting finding that may improve current deep learning algorithms using geometric augmentations for detecting COVID-19. We also provide clinical perspectives on geometric augmentation to consider regarding the development of a robust COVID-19 X-ray-based detector.
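
The two statistics reported, the Matthews correlation coefficient and McNemar's test on paired predictions, can be reproduced with standard libraries; a sketch assuming arrays of labels and of the two models' predictions:

```python
import numpy as np
from sklearn.metrics import matthews_corrcoef
from statsmodels.stats.contingency_tables import mcnemar

def compare_models(y_true, pred_a, pred_b):
    """MCC for each model plus McNemar's test on their paired errors.

    y_true, pred_a, pred_b: equal-length label arrays (assumed inputs),
    e.g. model A trained with augmentation and model B without.
    """
    mcc_a = matthews_corrcoef(y_true, pred_a)
    mcc_b = matthews_corrcoef(y_true, pred_b)

    # 2x2 table of paired correctness for McNemar's chi-square test.
    a_ok, b_ok = pred_a == y_true, pred_b == y_true
    table = [[np.sum(a_ok & b_ok), np.sum(a_ok & ~b_ok)],
             [np.sum(~a_ok & b_ok), np.sum(~a_ok & ~b_ok)]]
    result = mcnemar(table, exact=False, correction=True)
    return mcc_a, mcc_b, result.statistic, result.pvalue
```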


2020 ◽  
Vol 12 (21) ◽  
pp. 3659
Author(s):  
Antoine Soloy ◽  
Imen Turki ◽  
Matthieu Fournier ◽  
Stéphane Costa ◽  
Bastien Peuziat ◽  
...  

This article proposes a new methodological approach to measure and map the size of coarse clasts on a land surface from photographs. The method is based on the Mask Regional Convolutional Neural Network (Mask R-CNN) deep learning algorithm, which performs instance segmentation of objects after initial training on manually labeled data. The algorithm can identify and classify objects present in an image at the pixel scale, without human intervention, in a matter of seconds. This work demonstrates that the model can be trained to detect non-overlapping coarse sediments on scaled images in order to extract their individual size and morphological characteristics with high efficiency (R² = 0.98; Root Mean Square Error (RMSE) = 3.9 mm). It is then possible to measure element size profiles over a sedimentary body, as was done on the pebble beach of Etretat (Normandy, France) to monitor granulometric spatial variability before and after a storm. Applied at a larger scale using Unmanned Aerial Vehicle (UAV)-derived ortho-images, the method allows the accurate characterization and high-resolution mapping of surface coarse sediment size, as was performed on the two pebble beaches of Etretat (D50 = 5.99 cm) and Hautot-sur-Mer (D50 = 7.44 cm) (Normandy, France). Validation results show satisfying overall representativity (R² = 0.45 and 0.75; RMSE = 6.8 mm and 9.3 mm at Etretat and Hautot-sur-Mer, respectively), and the method remains fast, easy to apply, and low-cost. It is nevertheless limited by the image resolution (objects need to be longer than 4 cm) and could be improved in several ways, for instance by adding more manually labeled data to the training dataset and by considering more accurate methods than ellipse fitting for measuring particle sizes.
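
The ellipse-fitting step the authors flag as a limitation can be sketched directly: each Mask R-CNN instance mask yields an ellipse whose minor axis approximates the clast's intermediate axis, and D50 is the median of the resulting size distribution. An OpenCV sketch, with `mm_per_pixel` standing in for the image scale:

```python
import numpy as np
import cv2

def clast_size_mm(mask, mm_per_pixel):
    """Approximate one clast's size from its binary instance mask.

    Fits an ellipse to the largest contour (the approximation the article
    notes could be refined) and returns the minor axis in millimetres.
    """
    contours, _ = cv2.findContours(mask.astype(np.uint8), cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    contour = max(contours, key=cv2.contourArea)  # needs >= 5 points
    (_, _), (axis_a, axis_b), _ = cv2.fitEllipse(contour)
    return min(axis_a, axis_b) * mm_per_pixel

# D50 is the median of the per-clast size distribution:
# sizes = [clast_size_mm(m, mm_per_pixel) for m in instance_masks]
# d50_mm = np.percentile(sizes, 50)
```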


2020 ◽  
Vol 10 (11) ◽  
pp. 3861
Author(s):  
Marcel Sheeny ◽  
Andrew Wallace ◽  
Sen Wang

We present a novel, parameterised radar data augmentation (RADIO) technique to generate realistic radar samples from small datasets for the development of radar-related deep learning models. RADIO leverages the physical properties of radar signals, such as attenuation, azimuthal beam divergence and speckle noise, for data generation and augmentation. Exemplary applications on radar-based classification and detection demonstrate that RADIO can generate meaningful radar samples that effectively boost the accuracy of classification and generalisability of deep models trained with a small dataset.
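
A sketch of physics-inspired radar augmentation in the spirit of RADIO, applying range-dependent attenuation and multiplicative speckle noise to a range-azimuth power image; the constants are illustrative assumptions, not the paper's parameterization:

```python
import numpy as np

def augment_radar(power, rng, atten_db_per_bin=0.02, looks=4):
    """Physics-inspired radar augmentation in the spirit of RADIO.

    power: (range_bins, azimuth_bins) array of received power.
    The constants are illustrative assumptions, not the paper's values.
    """
    n_range = power.shape[0]
    # Range-dependent attenuation: simulate the target sitting farther away.
    extra_bins = rng.uniform(0, n_range / 4)
    attenuated = power * 10 ** (-atten_db_per_bin * extra_bins / 10.0)
    # Multiplicative speckle: gamma noise with unit mean (L-look model).
    speckle = rng.gamma(shape=looks, scale=1.0 / looks, size=power.shape)
    return attenuated * speckle

# rng = np.random.default_rng(0)
# augmented = augment_radar(radar_power_image, rng)
```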


Author(s):  
Tomoki Uemura ◽  
Janne J. Näppi ◽  
Yasuji Ryu ◽  
Chinatsu Watari ◽  
Tohru Kamiya ◽  
...  

Abstract Purpose Deep learning can be used to improve the performance of computer-aided detection (CADe) in various medical imaging tasks. However, in computed tomographic (CT) colonography, the performance is limited by the relatively small size and limited variety of the available training datasets. Our purpose in this study was to develop and evaluate a flow-based generative model for performing 3D data augmentation of colorectal polyps for effective training of deep learning in CADe for CT colonography. Methods We developed a 3D convolutional neural network (3D CNN) based on a flow-based generative model (3D Glow) for generating synthetic volumes of interest (VOIs) that have characteristics similar to those of the VOIs of its training dataset. The 3D Glow was trained to generate synthetic VOIs of polyps by use of our clinical CT colonography case collection. The evaluation was performed by use of a human observer study with three observers and by use of a CADe-based polyp classification study with a 3D DenseNet. Results The area-under-the-curve values of the receiver operating characteristic analysis of the three observers were not statistically significantly different in distinguishing between real polyps and synthetic polyps. When trained with data augmentation by 3D Glow, the 3D DenseNet yielded a statistically significantly higher polyp classification performance than when it was trained with alternative augmentation methods. Conclusion The 3D Glow-generated synthetic polyps are visually indistinguishable from real colorectal polyps. Their application to data augmentation can substantially improve the performance of 3D CNNs in CADe for CT colonography. Thus, 3D Glow is a promising method for improving the performance of deep learning in CADe for CT colonography.
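
The principle behind flow-based generation like Glow is an invertible network: training maximizes likelihood in a latent Gaussian space, and sampling runs the inverse mapping on Gaussian noise. A toy affine coupling layer, the building block of such flows, sketched in PyTorch (not the authors' 3D Glow):

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """Toy affine coupling layer, the invertible building block of Glow.

    One half of the input parameterizes a scale-and-shift of the other,
    so both the forward and inverse directions are cheap to compute.
    """
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim // 2, 64), nn.ReLU(),
                                 nn.Linear(64, dim))  # log-scale and shift

    def forward(self, x):
        x1, x2 = x.chunk(2, dim=-1)
        log_s, t = self.net(x1).chunk(2, dim=-1)
        return torch.cat([x1, x2 * log_s.exp() + t], dim=-1)

    def inverse(self, y):
        y1, y2 = y.chunk(2, dim=-1)
        log_s, t = self.net(y1).chunk(2, dim=-1)
        return torch.cat([y1, (y2 - t) * (-log_s).exp()], dim=-1)

# After likelihood training, synthetic samples come from inverting noise:
# z = torch.randn(batch_size, dim); synthetic = flow.inverse(z)
```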


Author(s):  
Kosuke Takaya ◽  
Atsuki Shibata ◽  
Yuji Mizuno ◽  
Takeshi Ise

Abstract The increasing prevalence of marine debris is a global problem, and urgent action for amelioration is needed. Identifying hotspots where marine debris accumulates will enable effective control; however, knowledge of the location of accumulation hotspots remains incomplete. In particular, marine debris accumulation on beaches is a concern. Beach surveys require intensive human effort, and survey methods are not standardized; if marine debris monitoring were conducted using a standardized method, data from different regions could be compared. With an unmanned aerial vehicle (UAV) and deep learning computational methods, monitoring a wide area at low cost in a standardized way may be possible. In this study, we aimed to identify marine debris on beaches through deep learning applied to high-resolution UAV images, by conducting a survey on Narugashima Island in the Seto Inland Sea of Japan. The flight altitude relative to the ground was set to 5 m, and images of a 0.81-ha area were obtained. Flights were conducted twice: before and after beach cleaning. The combination of a UAV equipped with a zoom lens and operation at a low altitude allowed the acquisition of high-resolution images of 1.1 mm/pixel. The training dataset (2970 images) was annotated using VoTT, with objects categorized into two classes: “anthropogenic marine debris” and “natural objects.” Using RetinaNet, marine debris was identified with an average sensitivity of 51% and a precision of 76%. In addition, the abundance and area of marine debris coverage were estimated. This study revealed that the combination of UAVs and deep learning enables the effective identification of marine debris, and the effects of citizen cleanup activities could be quantified. The method can be widely used to evaluate the effectiveness of citizen beach-cleaning efforts and for low-cost, long-term monitoring.
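
The reported sensitivity and precision, and the coverage-area estimate, follow from the matched detections and the 1.1 mm/pixel ground resolution; a small sketch, with the box format as an assumption:

```python
import numpy as np

MM_PER_PIXEL = 1.1  # ground sampling distance reported in the abstract

def detection_metrics(tp, fp, fn):
    """Sensitivity (recall) and precision from matched detections."""
    return tp / (tp + fn), tp / (tp + fp)

def debris_area_m2(boxes):
    """Summed area of detected debris boxes, converted to square metres.

    boxes: (n, 4) array of [x1, y1, x2, y2] in pixels (assumed format).
    """
    widths = boxes[:, 2] - boxes[:, 0]
    heights = boxes[:, 3] - boxes[:, 1]
    area_mm2 = np.sum(widths * heights) * MM_PER_PIXEL ** 2
    return area_mm2 / 1e6  # mm^2 -> m^2
```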

