A Deep Transfer Learning Model for the Identification of Bird Songs: A Case Study for Mauritius

Author(s):  
Evans Jason Henri ◽  
Zahra Mungloo-Dilmohamud
2021 ◽  
Vol 10 (3) ◽  
pp. 137
Author(s):  
Youngok Kang ◽  
Nahye Cho ◽  
Jiyoung Yoon ◽  
Soyeon Park ◽  
Jiyeon Kim

As computer vision and image processing have advanced rapidly within artificial intelligence (AI), deep learning has been applied to urban and regional studies through transfer learning. In tourism research, studies are emerging that analyze tourists' image of a city by identifying the visual content of photos. However, previous studies fall short of properly reflecting the unique landscapes, cultural characteristics, and traditional elements of a region that stand out in tourism. To go beyond these limitations, we crawled 168,216 Flickr photos, built a tourist-photo classification of 75 scenes in 13 categories by analyzing the characteristics of photos posted by tourists, and developed a deep learning model by iteratively re-training the Inception-v3 model. The final model achieves a high Top-1 accuracy of 85.77% and Top-5 accuracy of 95.69%. It was then applied to the entire dataset to analyze the regions of attraction and tourists' image of Seoul. We found that tourists are drawn to areas of Seoul where modern features, such as skyscrapers and distinctively designed architecture, are mixed with traditional features, such as palaces and cultural elements. This work demonstrates a tourist-photo classification suited to local characteristics and the process of re-training a deep learning model to classify a large volume of tourists' photos effectively.
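The re-training step described above maps onto a standard transfer-learning recipe. The sketch below, assuming photos organized into one folder per scene class (the folder layout and hyperparameters are illustrative choices, not the authors'), shows how Inception-v3's ImageNet classifier head can be replaced with a 75-way scene head and fine-tuned in PyTorch:

```python
# Minimal transfer-learning sketch for re-training Inception-v3 on scene
# classes; dataset path and hyperparameters are assumptions for illustration.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

NUM_SCENES = 75  # the 75 tourist-photo scene classes described above

# Inception-v3 expects 299x299 inputs normalized with ImageNet statistics.
preprocess = transforms.Compose([
    transforms.Resize((299, 299)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

train_set = datasets.ImageFolder("photos/train", transform=preprocess)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# Start from ImageNet weights and replace both classifier heads.
model = models.inception_v3(weights=models.Inception_V3_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, NUM_SCENES)
model.AuxLogits.fc = nn.Linear(model.AuxLogits.fc.in_features, NUM_SCENES)

# Freeze the backbone and train only the new heads first.
for p in model.parameters():
    p.requires_grad = False
for p in model.fc.parameters():
    p.requires_grad = True
for p in model.AuxLogits.fc.parameters():
    p.requires_grad = True

optimizer = torch.optim.Adam(
    [p for p in model.parameters() if p.requires_grad], lr=1e-3)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:
    optimizer.zero_grad()
    # In train mode, Inception-v3 returns main and auxiliary logits.
    outputs, aux_outputs = model(images)
    loss = criterion(outputs, labels) + 0.4 * criterion(aux_outputs, labels)
    loss.backward()
    optimizer.step()
```

Iterative re-training, as the abstract describes, would then unfreeze progressively more backbone layers at a lower learning rate once the new heads have converged.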


2020 ◽  
Vol 13 (1) ◽  
pp. 23
Author(s):  
Wei Zhao ◽  
William Yamada ◽  
Tianxin Li ◽  
Matthew Digman ◽  
Troy Runge

In recent years, precision agriculture has been researched as a promising means to meet the growing demand for agricultural products by increasing crop production with fewer inputs. Computer vision-based crop detection with unmanned aerial vehicle (UAV)-acquired images is a critical tool for precision agriculture. However, object detection with deep learning algorithms relies on a significant amount of manually prelabeled training data as ground truth. Field object detection, such as bale detection, is especially difficult because of (1) long-period image acquisition under different illumination conditions and seasons; (2) limited existing prelabeled data; and (3) few pretrained models and studies to use as references. This work increases bale detection accuracy under limited data collection and labeling by building an innovative algorithm pipeline. First, an object detection model is trained using 243 images captured under good illumination conditions in fall from croplands. In addition, domain adaptation (DA), a kind of transfer learning, is applied to synthesize training data under diverse environmental conditions with automatic labels. Finally, the object detection model is optimized with the synthesized datasets. The case study shows that the proposed method improves bale detection performance, raising the average recall, mean average precision (mAP), and F1 score from 0.59, 0.7, and 0.7 (object detection alone) to 0.93, 0.94, and 0.89 (object detection + DA), respectively. This approach could easily be scaled to many other crop-field objects and will contribute significantly to precision agriculture.
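The first stage of such a pipeline, fine-tuning a detector on a small labeled set, can be sketched as below. The abstract does not name the detector architecture or the DA method, so a COCO-pretrained Faster R-CNN from torchvision stands in here; the image and box values are made up for demonstration:

```python
# Hedged sketch: fine-tuning a pretrained detector for a single "bale" class.
# The actual architecture used in the paper is not reproduced here.
import torch
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn, FasterRCNN_ResNet50_FPN_Weights)
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

NUM_CLASSES = 2  # background + bale

# Start from COCO-pretrained weights and replace the box-prediction head.
model = fasterrcnn_resnet50_fpn(weights=FasterRCNN_ResNet50_FPN_Weights.DEFAULT)
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)

optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)

# One dummy training example: a single image with one labeled bale box.
images = [torch.rand(3, 480, 640)]
targets = [{"boxes": torch.tensor([[100.0, 120.0, 220.0, 200.0]]),
            "labels": torch.tensor([1], dtype=torch.int64)}]

model.train()
loss_dict = model(images, targets)  # detection models return a loss dict in train mode
loss = sum(loss_dict.values())
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

The DA stage would then feed synthesized images (for example, the same scenes re-rendered under different illumination) with their automatically carried-over boxes through the same training loop.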


2021 ◽  
pp. 026638212098473
Author(s):  
Jela Webb

Disruption is the byword for 2020. Across the globe, organisations have been affected by the COVID-19 pandemic and the consequent lockdowns, which accelerated new ways of working and learning. In this article, I share my experience of transitioning from a face-to-face model of delivering post-graduate education to a remote learning model. I reflect on how the corporate sector might learn from my experience as it considers re-skilling and up-skilling the workforce to meet the demands of a changing jobs landscape.


2011 ◽  
Vol 4 (2) ◽  
pp. 88
Author(s):  
Peter Baggetta

The Teaching Games for Understanding (TGfU) model was first developed by Bunker and Thorpe in 1982 as a model to help coaches develop more skillful players. Since then, other versions of the model have emerged, such as the tactical decision-learning model (Grehaigne, Godbout, & Bouthier, 2001) in France and the game-sense approach (Australian Sports Commission, 1991) in Australia and New Zealand. The key aspect of all these models is the design of well-structured conditioned and modified games that require players to make decisions, developing their game understanding and tactical awareness. However, both novice and experienced coaches often struggle to connect theory to practice, especially in creating and developing contextualized games that actually transfer learning from training to game performance. To create and use games that transfer learning effectively, coaches can use a principles-based approach to game design. This approach removes the dichotomy of traditional drills versus games, instead combining the drills approach with a games-context approach that links principles to skills and allows individual and team expertise to develop. This presentation first describes a model for developing and connecting principles, policies, tactics, and skills for team play. It then describes how to use those principles to create contextualized games that connect practice with performance and progress novice players toward becoming more competent performers.


2021 ◽  
Vol 27 ◽  
Author(s):  
Qi Zhou ◽  
Wenjie Zhu ◽  
Fuchen Li ◽  
Mingqing Yuan ◽  
Linfeng Zheng ◽  
...  

Objective: To verify the ability of deep learning models to identify the five subtypes of intracranial hemorrhage, as well as normal images, on noncontrast CT. Method: A total of 351 patients (39 in the normal group, 312 in the intracranial hemorrhage group) who underwent noncontrast CT for intracranial hemorrhage were selected, yielding 2768 images in total (514 normal, 398 epidural hemorrhage, 501 subdural hemorrhage, 497 intraventricular hemorrhage, 415 cerebral parenchymal hemorrhage, and 443 subarachnoid hemorrhage). Ground truth was based on the diagnostic reports of two radiologists, each with more than 10 years of experience. The ResNet-18 and DenseNet-121 deep learning models were selected, and transfer learning was used. 80% of the data was used to train the models, 10% to validate model performance against overfitting, and the final 10% for the final evaluation of the models. Assessment indicators included accuracy, sensitivity, specificity, and AUC values. Results: The overall accuracies of the ResNet-18 and DenseNet-121 models were 89.64% and 82.5%, respectively. The sensitivity and specificity for identifying the five subtypes and normal images were above 0.80, except that the sensitivity of the DenseNet-121 model for intraventricular hemorrhage and cerebral parenchymal hemorrhage fell below 0.80, at 0.73 and 0.76, respectively. The AUC values of both deep learning models were above 0.9. Conclusion: Deep learning models can accurately identify the five subtypes of intracranial hemorrhage and normal images, and could serve as a new tool for clinical diagnosis in the future.
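The setup described above, a pretrained classifier with a replaced 6-way head and an 80/10/10 split, can be sketched as follows. The folder layout ("ct_slices" with one subfolder per class), image size, and hyperparameters below are illustrative assumptions, not the authors' protocol:

```python
# Hedged sketch of transfer learning with ResNet-18 for 6-way classification
# (normal + 5 hemorrhage subtypes); paths and hyperparameters are assumed.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

NUM_CLASSES = 6  # normal + epidural, subdural, intraventricular, parenchymal, subarachnoid

preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),  # replicate a CT slice to 3 channels
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

full_set = datasets.ImageFolder("ct_slices", transform=preprocess)
n = len(full_set)
n_train, n_val = int(0.8 * n), int(0.1 * n)
train_set, val_set, test_set = torch.utils.data.random_split(
    full_set, [n_train, n_val, n - n_train - n_val])  # 80/10/10 split

# Transfer learning: start from ImageNet weights, swap in a 6-way head.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()
train_loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model.train()
for images, labels in train_loader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()

# Final evaluation on the held-out 10% test split.
model.eval()
correct = total = 0
with torch.no_grad():
    for images, labels in torch.utils.data.DataLoader(test_set, batch_size=32):
        preds = model(images).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.numel()
print(f"test accuracy: {correct / total:.4f}")
```

A clinical study would additionally split by patient rather than by image, so that slices from one patient never appear in both training and test sets; the random image-level split above is only a simplification.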

