Deep Learning with Weak Supervision for Disaster Scene Description in Low-Altitude Imagery

Author(s):  
Maria Presa-Reyes ◽  
Yudong Tao ◽  
Shu-Ching Chen ◽  
Mei-Ling Shyu
2021 ◽  
Vol 136 ◽  
pp. 95-102
Author(s):  
Marika Cusick ◽  
Prakash Adekkanattu ◽  
Thomas R. Campion ◽  
Evan T. Sholle ◽  
Annie Myers ◽  
...  

2020 ◽  
Vol 34 (09) ◽  
pp. 13634-13635
Author(s):  
Kun Qian ◽  
Poornima Chozhiyath Raman ◽  
Yunyao Li ◽  
Lucian Popa

Entity name disambiguation is an important task in many text-based AI applications. Entity names usually have internal semantic structures that are useful for resolving different variations of the same entity. We present PARTNER, a deep learning-based interactive system for entity name understanding. Powered by effective active learning and weak supervision, PARTNER can learn deep learning-based models for identifying entity name structure with low human effort. PARTNER also allows the user to design complex normalization and variant-generation functions without coding skills.
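The abstract's notions of "entity name structure" and "variant generation" can be made concrete with a toy sketch. This is purely illustrative: PARTNER's taggers are learned neural models trained via active learning, whereas the rule-based tagger and variant generator below (hypothetical helpers `tag_name_structure` and `variants`) only show the kind of structured output and surface variants such a system targets.

```python
import re

def tag_name_structure(name: str) -> dict:
    """Split a person name into coarse structural fields (toy rule-based stand-in
    for a learned name-structure tagger)."""
    parts = name.replace(",", "").split()
    if len(parts) == 2:
        return {"first": parts[0], "last": parts[1]}
    if len(parts) == 3 and re.fullmatch(r"[A-Z]\.?", parts[1]):
        return {"first": parts[0], "middle_initial": parts[1].rstrip("."), "last": parts[2]}
    return {}

def variants(tags: dict) -> set:
    """Generate simple surface variants of the same entity from its tagged structure."""
    out = set()
    if "first" in tags and "last" in tags:
        out.add(f'{tags["first"]} {tags["last"]}')          # "John Smith"
        out.add(f'{tags["last"]}, {tags["first"]}')         # "Smith, John"
        out.add(f'{tags["first"][0]}. {tags["last"]}')      # "J. Smith"
    return out

print(variants(tag_name_structure("John A. Smith")))
```

Resolving "Smith, John" and "J. Smith" to the same entity is then a matter of checking whether one string appears in the other's generated variant set — the structure-aware step that plain string matching lacks.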


Electronics ◽  
2019 ◽  
Vol 8 (9) ◽  
pp. 963
Author(s):  
Jin Young Lee

Scene description refers to the automatic generation of natural-language descriptions from videos. In general, deep learning-based scene description networks exploit multiple modalities, such as image, motion, audio, and label information, to improve description quality. Image information in particular plays an important role. However, scene description faces a potential issue: it may have to handle images with severe compression artifacts. Hence, this paper analyzes the impact of video compression on scene description and then proposes a simple network that is robust to compression artifacts. In addition, a network that cascades more encoding layers for efficient multimodal embedding is also proposed. Experimental results show that the proposed network is more efficient than conventional networks.


2020 ◽  
Vol 287 (1923) ◽  
pp. 20192968 ◽  
Author(s):  
Manyu Ding ◽  
Tianyi Wang ◽  
Albert Min-Shan Ko ◽  
Honghai Chen ◽  
Hui Wang ◽  
...  

The clarification of the genetic origins of present-day Tibetans requires an understanding of their past relationships with the ancient populations of the Tibetan Plateau. Here we successfully sequenced 67 complete mitochondrial DNA genomes of humans who lived on the plateau 5200 to 300 years ago. Apart from identifying two ancient plateau lineages (haplogroups D4j1b and M9a1a1c1b1a), which suggest that some ancestors of Tibetans came from low-altitude areas 4750 to 2775 years ago and that some were involved in an expansion of people moving between high-altitude areas 2125 to 1100 years ago, we found limited evidence of recent matrilineal continuity on the plateau. Furthermore, deep learning applied to the ancient data, incorporated into simulation models with an accuracy of 97%, supports the view that present-day Tibetan matrilineal ancestry received a partial contribution from, rather than showing complete continuity with, the plateau populations of the last 5200 years.


Author(s):  
Manuel Pérez-Pelegrí ◽  
José V. Monmeneu ◽  
María P. López-Lereu ◽  
Lucía Pérez-Pelegrí ◽  
Alicia M. Maceira ◽  
...  

Sensors ◽  
2020 ◽  
Vol 21 (1) ◽  
pp. 210
Author(s):  
Dongsuk Park ◽  
Seungeui Lee ◽  
SeongUk Park ◽  
Nojun Kwak

With the upsurge in the use of Unmanned Aerial Vehicles (UAVs) in various fields, detecting and identifying them in real time are becoming important topics. However, the identification of UAVs is difficult due to their characteristics of low altitude, slow speed, and small radar cross-section (LSS). With the existing deterministic approach, the algorithm becomes complex and requires a large number of computations, making it unsuitable for real-time systems. Hence, effective alternatives enabling real-time identification of these new threats are needed. Deep learning-based classification models learn features from data by themselves and have shown outstanding performance in computer vision tasks. In this paper, we propose a deep learning-based classification model that learns the micro-Doppler signatures (MDS) of targets represented in radar spectrogram images. To this end, we first recorded five LSS targets (three types of UAVs and two types of human activity) with a frequency-modulated continuous-wave (FMCW) radar in various scenarios. We then converted the signals into spectrogram images using the short-time Fourier transform (STFT) and, after data refinement and augmentation, constructed our own radar spectrogram dataset. Second, we analyzed the characteristics of the dataset with the ResNet-18 model and, based on it, designed the ResNet-SP model with less computation and higher accuracy and stability. The results show that the proposed ResNet-SP has a training time of 242 s and an accuracy of 83.39%, superior to ResNet-18, which takes 640 s to train and reaches an accuracy of 79.88%.
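The STFT preprocessing step in the abstract can be sketched as follows. The sample rate, chirp parameters, and the synthetic micro-Doppler-like signal are assumptions for illustration, not the authors' actual FMCW radar settings; only the pipeline (signal → STFT → log-magnitude → 8-bit spectrogram image) follows the description above.

```python
import numpy as np
from scipy.signal import stft

fs = 8000                       # sample rate in Hz (assumed for illustration)
t = np.arange(0, 1.0, 1 / fs)   # 1 second of samples
# Synthetic stand-in for a micro-Doppler return: a 1 kHz tone with
# sinusoidal frequency modulation, loosely mimicking rotor/limb motion.
signal = np.cos(2 * np.pi * (1000 * t + 50 * np.sin(2 * np.pi * 4 * t)))

# Short-time Fourier transform: 256-sample windows with 75% overlap
f, times, Zxx = stft(signal, fs=fs, nperseg=256, noverlap=192)
spectrogram_db = 20 * np.log10(np.abs(Zxx) + 1e-10)  # log-magnitude in dB

# Normalize to 0-255 so the spectrogram can be fed to an image classifier
img = ((spectrogram_db - spectrogram_db.min())
       / (spectrogram_db.max() - spectrogram_db.min()) * 255).astype(np.uint8)
print(img.shape)  # (frequency bins, time frames)
```

A stack of such images, one per recorded target, is then a conventional image-classification dataset, which is what lets an off-the-shelf CNN backbone like ResNet-18 (or the paper's lighter ResNet-SP variant) be applied directly.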


2020 ◽  
Vol 50 (5) ◽  
pp. 692-703
Author(s):  
Jianfa WU ◽  
Zhan KOU ◽  
Honglun WANG ◽  
Wenyang RUAN

Author(s):  
Khaled Saab ◽  
Jared Dunnmon ◽  
Roger Goldman ◽  
Alex Ratner ◽  
Hersh Sagreiya ◽  
...  
