bird detection
Recently Published Documents

TOTAL DOCUMENTS: 62 (five years: 31)
H-INDEX: 9 (five years: 3)

2021
Author(s): Kyuwon Shim, Andre Barczak, Napoleon Reyes, Nasim Ahmed

2021
Author(s): Angelo Coluccia, Alessio Fascista, Arne Schumann, Lars Sommer, Anastasios Dimou, ...

2021
Vol 7 (11), pp. 227
Author(s): Hiba Alqaysi, Igor Fedorov, Faisal Z. Qureshi, Mattias O’Nils

Object detection for sky surveillance is a challenging problem: the objects are small relative to the large monitored volume, the background changes constantly, and high-resolution frames are required. A typical application is detecting flying birds in wind farms to prevent their collision with the wind turbines. This paper proposes a YOLOv4-based ensemble model for bird detection in grayscale videos captured around wind turbines in wind farms. To tackle this problem, we introduce two datasets, (1) Klim and (2) Skagen, collected at two locations in Denmark. We use the Klim training set to train three increasingly capable YOLOv4-based models: Model 1 is YOLOv4 trained directly on the Klim dataset, Model 2 introduces tiling to improve small-bird detection, and Model 3 uses tiling together with temporal stacking and achieves the best mAP values on both the Klim and Skagen datasets. We use this model to set up an ensemble detector, which further improves mAP values on both datasets. The three models achieve testing mAP values of 82%, 88%, and 90% on the Klim dataset; Models 1 and 3 achieve 60% and 92% mAP on the Skagen dataset. Improved object detection accuracy could reduce bird mortality by informing the siting of wind farms and of individual turbines, and could also improve the collision-avoidance systems used in wind energy facilities.
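The tiling idea used in Model 2, splitting each high-resolution frame into overlapping tiles so that small birds occupy more pixels relative to the detector input, can be sketched as follows. The tile size, overlap, and detection tuple format here are illustrative assumptions, not values taken from the paper.

```python
def make_tiles(frame_w, frame_h, tile=608, overlap=64):
    """Return (x0, y0) origins of overlapping tiles covering the frame."""
    step = tile - overlap
    xs = list(range(0, max(frame_w - tile, 0) + 1, step)) or [0]
    ys = list(range(0, max(frame_h - tile, 0) + 1, step)) or [0]
    # Ensure the right and bottom edges of the frame are covered.
    if xs[-1] + tile < frame_w:
        xs.append(frame_w - tile)
    if ys[-1] + tile < frame_h:
        ys.append(frame_h - tile)
    return [(x, y) for y in ys for x in xs]

def to_frame_coords(dets, origin):
    """Shift tile-local boxes (x, y, w, h, score) back into frame coordinates."""
    ox, oy = origin
    return [(x + ox, y + oy, w, h, s) for (x, y, w, h, s) in dets]
```

Each tile is run through the detector independently; `to_frame_coords` maps tile-local detections back to the full frame, after which overlapping duplicates would typically be merged with non-maximum suppression.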


Author(s): Guan-Zhou Lin, Hoang Minh Nguyen, Chi-Chia Sun, Po-Yu Kuo, Ming-Hwa Sheu

2021
Vol 107 (2), pp. 56-70
Author(s): Matthew Toenies, Lindsey Rich

Recent advances in acoustic recorder technology and automated species identification hold great promise for avian monitoring efforts. Assessing how these innovations compare to existing recorder models and traditional species identification techniques is vital to understanding their utility to researchers and managers. We carried out field trials in Monterey County, California, to compare bird detection among four acoustic recorder models (AudioMoth, Swift Recorder, and Wildlife Acoustics SM3BAT and SM Mini) and concurrent point counts, and to assess the ability of the artificial neural network BirdNET to correctly identify bird species from AudioMoth recordings. We found that the lowest-cost unit (AudioMoth) performed comparably to higher-cost units and that on average, species detections were higher for three of the five recorder models (range 9.8 to 14.0) than for point counts (12.8). In our assessment of BirdNET, we developed a subsetting process that enabled us to achieve a high rate of correctly identified species (96%). Using longer recordings from a single recorder model, BirdNET identified a mean of 8.5 verified species per recording and a mean of 16.4 verified species per location over a 5-day period (more than point counts conducted in similar habitats). We demonstrate that a combination of long recordings from low-cost recorders and a conservative method for subsetting automated identifications from BirdNET presents a process for sampling avian community composition with low misidentification rates and limited need for human vetting. These low-cost and automated tools may greatly improve efforts to survey bird communities and their ecosystems, and consequently, efforts to conserve threatened indigenous biodiversity.
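A conservative subsetting of automated identifications like the one described can be implemented by keeping only species whose detections clear a high confidence threshold and recur across multiple distinct recording segments. The thresholds, record format, and rule below are illustrative assumptions, not the authors' exact procedure.

```python
from collections import defaultdict

def subset_detections(dets, min_conf=0.85, min_segments=2):
    """dets: list of (species, segment_id, confidence) records, e.g. one
    per BirdNET detection in a recording.  Keep a species only if it has
    high-confidence detections in at least `min_segments` distinct segments,
    which filters out one-off misidentifications."""
    segs = defaultdict(set)
    for species, seg, conf in dets:
        if conf >= min_conf:
            segs[species].add(seg)
    return sorted(s for s, ids in segs.items() if len(ids) >= min_segments)
```

Raising `min_conf` or `min_segments` trades detections for a lower misidentification rate, which is the direction a low-vetting workflow wants to err in.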


2021
Author(s): Ben G Weinstein, Lindsey Gardner, Vienna Saccomanno, Ashley Steinkraus, Andrew Ortega, ...

Advances in artificial intelligence for image processing hold great promise for increasing the scales at which ecological systems can be studied. The distribution and behavior of individuals is central to ecology, and computer vision using deep neural networks can learn to detect individual objects in imagery. However, developing computer vision for ecological monitoring is challenging because it needs large amounts of human-labeled training data, requires advanced technical expertise and computational infrastructure, and is prone to overfitting. This limits application across space and time. One solution is developing generalized models that can be applied across species and ecosystems. Using over 250,000 annotations from 13 projects from around the world, we develop a general bird detection model that achieves over 65% recall and 50% precision on novel aerial data without any local training despite differences in species, habitat, and imaging methodology. Fine-tuning this model with only 1000 local annotations increases these values to an average of 84% recall and 69% precision by building on the general features learned from other data sources. Retraining from the general model improves local predictions even when moderately large annotation sets are available and makes model training faster and more stable. Our results demonstrate that general models for detecting broad classes of organisms using airborne imagery are achievable. These models can reduce the effort, expertise, and computational resources necessary for automating the detection of individual organisms across large scales, helping to transform the scale of data collection in ecology and the questions that can be addressed.
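The recall and precision figures quoted above are computed by matching predicted boxes to ground-truth boxes at an intersection-over-union (IoU) threshold. A minimal greedy version of that matching (the threshold of 0.5 is a common convention, not necessarily the one used in this work) looks like:

```python
def iou(a, b):
    """Intersection-over-union of two boxes (xmin, ymin, xmax, ymax)."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def precision_recall(preds, truths, thresh=0.5):
    """Greedy one-to-one matching of predicted boxes to ground truth:
    each prediction claims its best-overlapping unmatched truth box."""
    unmatched = list(truths)
    tp = 0
    for p in preds:
        best = max(unmatched, key=lambda t: iou(p, t), default=None)
        if best is not None and iou(p, best) >= thresh:
            unmatched.remove(best)
            tp += 1
    precision = tp / len(preds) if preds else 0.0
    recall = tp / len(truths) if truths else 0.0
    return precision, recall
```

Under this metric, 84% recall and 69% precision after fine-tuning means most real birds are found while roughly three of every ten predicted boxes are spurious.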


2021
Author(s): Sanae Fujii, Kazutoshi Akita, Norimichi Ukita

Author(s): Neha Sharma, Reetvik Chatterjee, Akhil Bisht, Harit Yadav

Sensors
2021
Vol 21 (8), pp. 2824
Author(s): Angelo Coluccia, Alessio Fascista, Arne Schumann, Lars Sommer, Anastasios Dimou, ...

Adopting effective techniques to automatically detect and identify small drones is a compelling need for a number of stakeholders in both the public and private sectors. This work presents three original approaches that competed in a grand challenge on the “Drone vs. Bird” detection problem. The goal is to detect one or more drones appearing at some point in video sequences in which birds and other distractor objects may also be present, together with background or foreground motion. Algorithms should raise an alarm and provide a position estimate only when a drone is present, without issuing alarms on birds or being confused by the rest of the scene. Three original approaches based on different deep learning strategies are proposed and compared on a real-world dataset provided by a consortium of universities and research centers under the 2020 edition of the Drone vs. Bird Detection Challenge. Results show a range of difficulty among the test sequences, depending on the size and shape visibility of the drone; sequences recorded by a moving camera and featuring very distant drones are the most challenging. The performance comparison reveals that the approaches are somewhat complementary in terms of correct detection rate, false alarm rate, and average precision.
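The required alarm behavior, raising an alarm with a position estimate only when a drone (not a bird) is present, can be made robust to spurious single-frame detections by requiring temporal persistence. The class labels, confidence threshold, and persistence window below are illustrative assumptions, not part of any competing approach.

```python
def drone_alarm(frame_dets, min_conf=0.5, persist=3):
    """frame_dets: per-frame lists of (label, confidence, (x, y)) detections.
    Return (frame_index, position) at which to raise the alarm, i.e. the
    first frame where a confident drone detection has persisted for
    `persist` consecutive frames, or None if that never happens."""
    run = 0
    pos = None
    for i, dets in enumerate(frame_dets):
        drones = [(c, xy) for (lbl, c, xy) in dets
                  if lbl == "drone" and c >= min_conf]
        if drones:
            run += 1
            pos = max(drones)[1]   # position of the most confident drone
            if run >= persist:
                return i, pos
        else:
            run = 0                # birds and empty frames reset the streak
    return None
```

Requiring a streak of confident detections suppresses false alarms from birds that are momentarily misclassified in a single frame, at the cost of a few frames of alarm latency.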

