An IOT Data Collection Mechanism Based on Cloud-Edge Coordinated Deep Learning

Author(s):  
Zi-hao Wang ◽  
Jing Wang
Sensors ◽  
2019 ◽  
Vol 19 (15) ◽  
pp. 3434 ◽  
Author(s):  
Nattaya Mairittha ◽  
Tittaya Mairittha ◽  
Sozo Inoue

Labeling activity data is a central part of the design and evaluation of human activity recognition systems. The performance of such systems depends greatly on the quantity and “quality” of annotations; it is therefore essential to rely on users and to keep them motivated to provide activity labels. As mobile and embedded devices increasingly use deep learning models to infer user context, we propose to exploit on-device deep learning inference with a long short-term memory (LSTM)-based method to reduce the labeling effort and ground-truth data collection in activity recognition systems that use smartphone sensors. The novel idea is that estimated activities are used as feedback to motivate users to provide accurate activity labels. To evaluate the approach, we conduct experiments under two conditions: the proposed method, which shows estimated activities obtained through on-device deep learning inference, and the traditional method, which shows sentences without estimated activities, both delivered through smartphone notifications. Evaluated on the gathered dataset, the results show that our proposed method improves both data quality (i.e., the performance of a classification model) and data quantity (i.e., the amount of data collected), indicating that it can improve activity data collection and thereby enhance human activity recognition systems. We discuss the results, limitations, challenges, and implications of on-device deep learning inference in support of activity data collection. We also release the preliminary dataset collected to the research community for activity recognition.
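A minimal sketch (not the authors' implementation) of the kind of LSTM-based activity classifier described above, operating on fixed-length windows of smartphone accelerometer data as could run on-device to suggest labels; the window length, feature count, and number of activity classes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ActivityLSTM(nn.Module):
    """LSTM over a window of sensor samples, producing activity-class logits."""
    def __init__(self, n_features=3, hidden_size=64, n_classes=6):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, n_classes)

    def forward(self, x):                  # x: (batch, time, features)
        _, (h_n, _) = self.lstm(x)         # h_n: (1, batch, hidden)
        return self.head(h_n[-1])          # class logits

model = ActivityLSTM()
window = torch.randn(1, 128, 3)            # 128 accelerometer samples (x, y, z)
probs = torch.softmax(model(window), dim=-1)
print(probs.argmax(dim=-1))                # index of the suggested activity label
```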


2019 ◽  
Vol 1 (3) ◽  
pp. 883-903 ◽  
Author(s):  
Daulet Baimukashev ◽  
Alikhan Zhilisbayev ◽  
Askat Kuzdeuov ◽  
Artemiy Oleinikov ◽  
Denis Fadeyev ◽  
...  

Recognizing objects and estimating their poses have a wide range of applications in robotics. For instance, to grasp objects, robots need the position and orientation of objects in 3D. The task becomes challenging in a cluttered environment with different types of objects. A popular approach to tackle this problem is to utilize a deep neural network for object recognition. However, deep learning-based object detection in cluttered environments requires a substantial amount of data, and collecting these data requires time and extensive human labor for manual labeling. In this study, our objective was the development and validation of a deep object recognition framework using a synthetic depth image dataset. We synthetically generated a depth image dataset of 22 objects randomly placed in a 0.5 m × 0.5 m × 0.1 m box, and automatically labeled all objects with an occlusion rate below 70%. A Faster Region-based Convolutional Neural Network (Faster R-CNN) architecture was adopted for training on a dataset of 800,000 synthetic depth images, and its performance was tested on a real-world depth image dataset consisting of 2000 samples. The deep object recognizer achieved 40.96% detection accuracy on the real depth images and 93.5% on the synthetic depth images. Training the deep learning model with noise-added synthetic images improved the recognition accuracy on real images to 46.3%. The object detection framework can thus be trained on synthetically generated depth data and then employed for object recognition on real depth data in a cluttered environment. Synthetic depth data-based deep object detection has the potential to substantially decrease the time and human effort required for extensive data collection and labeling.
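A minimal sketch, not the study's pipeline, of the noise augmentation step the abstract reports as closing part of the synthetic-to-real gap: perturbing a synthetic depth map so it looks more like real sensor data before training. The noise model (additive Gaussian plus random dropout of pixels) and its parameters are illustrative assumptions.

```python
import numpy as np

def add_depth_noise(depth, sigma=0.01, dropout_prob=0.02, rng=None):
    """Perturb a synthetic depth map (in metres) with sensor-like noise."""
    rng = rng or np.random.default_rng()
    noisy = depth + rng.normal(0.0, sigma, size=depth.shape)   # additive Gaussian noise
    mask = rng.random(depth.shape) < dropout_prob               # simulate missing returns
    noisy[mask] = 0.0
    return noisy.astype(np.float32)

synthetic = np.full((480, 640), 0.5, dtype=np.float32)          # flat plane 0.5 m away
augmented = add_depth_noise(synthetic)                           # feed to the detector's training set
```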


2020 ◽  
Author(s):  
Ghazi Abdalla ◽  
Fatih Özyurt

Abstract In the modern era, Internet usage has become a basic necessity in people's lives. Nowadays, people can shop online and check other customers' views about products purchased online. Social networking services enable users to post opinions on public platforms. Analyzing people's opinions helps corporations improve the quality of their products and provide better customer service. However, analyzing this content manually is a daunting task; therefore, we implemented sentiment analysis to automate the process. The entire process includes data collection, pre-processing, word embedding, and sentiment detection and classification using deep learning techniques. Twitter was chosen as the source of data, and tweets were collected automatically using Tweepy. In this paper, three deep learning techniques were implemented: CNN, Bi-LSTM, and CNN-Bi-LSTM. Each model was trained on three datasets consisting of 50K, 100K, and 200K tweets. The experimental results revealed that the performance of the models improved as the training data size increased, especially for the Bi-LSTM model. When trained on the 200K dataset, it achieved about 3% higher accuracy than on the 100K dataset and about 7% higher accuracy than on the 50K dataset. Finally, the Bi-LSTM model scored the highest performance on all metrics, achieving an accuracy of 95.35%.
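A minimal sketch, not the paper's exact architecture, of the Bi-LSTM classifier family described above: an embedding layer, a bidirectional LSTM, and a dense output layer over tokenized tweets. The vocabulary size, embedding dimension, and hidden size are illustrative assumptions.

```python
import torch
import torch.nn as nn

class BiLSTMSentiment(nn.Module):
    """Bidirectional LSTM over token ids, producing sentiment-class logits."""
    def __init__(self, vocab_size=20000, embed_dim=100, hidden_size=128, n_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_size, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden_size, n_classes)

    def forward(self, token_ids):               # token_ids: (batch, seq_len)
        x = self.embed(token_ids)
        _, (h_n, _) = self.lstm(x)               # h_n: (2, batch, hidden)
        h = torch.cat([h_n[0], h_n[1]], dim=-1)  # concatenate final forward/backward states
        return self.head(h)

model = BiLSTMSentiment()
batch = torch.randint(1, 20000, (4, 40))         # 4 tweets padded/truncated to 40 tokens
logits = model(batch)                            # (4, 2) sentiment logits
```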


Author(s):  
Mohammad Hanan Bhat

Plant health monitoring has been a significant field of research for a long time. The scope of this work lies in the broad domain of plant pathology, with applications ranging from agricultural production monitoring to forest health monitoring. It deals with IoT-based data collection techniques, pre-processing and post-processing of the image dataset, and disease identification using a deep learning model, thereby providing a multi-modal end-to-end approach to plant health monitoring. This paper reviews the various methods used for monitoring plant health remotely in a non-invasive manner. An end-to-end, low-cost framework is proposed for monitoring plant health, using IoT-based data collection methods and cloud computing as a single point of contact for data storage and processing. The cloud agent gateway connects the devices and collects the data from the sensors to ensure a single source of truth. Further, the deep learning computational infrastructure provided by the public cloud is exploited to train on the image dataset and derive the plant health status.
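A minimal sketch, assuming a hypothetical HTTP endpoint, of how a field node might push its readings to the single cloud gateway described above: each reading is packaged as JSON and posted to one collection point, matching the "single source of truth" role of the gateway. The endpoint URL and the reading fields are illustrative assumptions, not part of the paper.

```python
import json
import time
import urllib.request

GATEWAY_URL = "http://cloud-gateway.example.com/plant-health/readings"  # hypothetical endpoint

def post_reading(node_id, temperature_c, soil_moisture_pct, humidity_pct):
    """Send one sensor reading to the cloud gateway as JSON."""
    payload = json.dumps({
        "node_id": node_id,
        "timestamp": time.time(),
        "temperature_c": temperature_c,
        "soil_moisture_pct": soil_moisture_pct,
        "humidity_pct": humidity_pct,
    }).encode("utf-8")
    req = urllib.request.Request(
        GATEWAY_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.status  # 2xx expected when the gateway accepts the reading

# post_reading("node-07", temperature_c=24.1, soil_moisture_pct=38.5, humidity_pct=61.0)
```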


2021 ◽  
Author(s):  
Anton Hristov ◽  
Aleksandar Tahchiev ◽  
Hristo Papazov ◽  
Nikola Tulechki ◽  
...  

2019 ◽  
Vol 77 (4) ◽  
pp. 1274-1285 ◽  
Author(s):  
Ketil Malde ◽  
Nils Olav Handegard ◽  
Line Eikvil ◽  
Arnt-Børre Salberg

Abstract Oceans constitute over 70% of the earth's surface, and the marine environment and ecosystems are central to many global challenges. Not only are the oceans an important source of food and other resources, but they also play important roles in the earth's climate and provide crucial ecosystem services. To monitor the environment and ensure sustainable exploitation of marine resources, extensive data collection and analysis efforts form the backbone of management programmes at global, regional, and national levels. Technological advances in sensor technology, autonomous platforms, and information and communications technology now allow marine scientists to collect data in larger volumes than ever before. But our capacity for data analysis has not progressed comparably, and the growing discrepancy is becoming a major bottleneck for effective use of the available data, as well as an obstacle to scaling up data collection further. Recent years have seen rapid advances in the fields of artificial intelligence and machine learning, and in particular, so-called deep learning systems are now able to solve complex tasks that previously required human expertise. This technology is directly applicable to many important data analysis problems, and it will provide tools needed to solve many complex challenges in marine science and resource management. Here we give a brief review of recent developments in deep learning and highlight the many opportunities and challenges for effective adoption of this technology across the marine sciences.


Author(s):  
Sunita Vikrant Dhavale

Insiders are considered the weakest link. The digital records of a person's Facebook likes on motivational quotes can be used for automatic and accurate prediction of sensitive attributes related to their personality traits, depression, and their views on company/government policies. Such analysis will help organizations take proactive measures against vulnerable insiders. Insiders who manage their impressions differently from their basic personality traits can also be identified. Deep learning models can be utilized to learn and map the association between extracted features and insider behavioral patterns. Further, reinforcement techniques can be used to select appropriate motivational quotes in order to collect the additional data required for further analysis. At the same time, the motivational messages exposed on the insider's social platform can help improve their psychological health over time. However, due to the privacy implications involved in personalized data collection, the authors present their work as a concept chapter only.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Balint Armin Pataki ◽  
Joan Garriga ◽  
Roger Eritja ◽  
John R. B. Palmer ◽  
Frederic Bartumeus ◽  
...  

Abstract Global monitoring of disease vectors is undoubtedly becoming an urgent need as the human population rises and becomes increasingly mobile, international commercial exchanges increase, and climate change expands the habitats of many vector species. Traditional surveillance of mosquitoes, vectors of many diseases, relies on catches, which require regular manual inspection and reporting and dedicated personnel, making large-scale monitoring difficult and expensive. New approaches are solving the problem of scalability by relying on smartphones and the Internet to enable novel community-based and digital observatories, where people can upload pictures of mosquitoes whenever they encounter them. An example is the Mosquito Alert citizen science system, which includes a dedicated mobile phone app through which geotagged images are collected. This system provides a viable option for monitoring the spread of various mosquito species across the globe, although it is partly limited by the quality of the citizen scientists' photos. To make the system useful for public health agencies, and to give feedback to the volunteering citizens, the submitted images are inspected and labeled by entomology experts. Although citizen-based data collection can greatly broaden disease-vector monitoring scales, manual inspection of each image is not an easily scalable option in the long run, and the system could be improved through automation. Based on Mosquito Alert's curated database of expert-validated mosquito photos, we trained a deep learning model to find tiger mosquitoes (Aedes albopictus), a species responsible for spreading chikungunya, dengue, and Zika, among other diseases. The model's area under the receiver operating characteristic curve score of 0.96 promises not only a helpful pre-selector for the expert validation process but also an automated classifier giving quick feedback to the app participants, which may help keep them motivated. In the paper, we also explore the possibilities of using the model to improve future data collection quality in a feedback loop.
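A minimal sketch, not the study's model, of the general approach the abstract describes: fine-tuning a pretrained CNN as a binary "tiger mosquito vs. other" image classifier and scoring it with ROC AUC, the metric reported as 0.96. The backbone choice and the dummy data are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models
from sklearn.metrics import roc_auc_score

# Backbone: swap weights=None for ResNet18_Weights.DEFAULT to start from ImageNet pretraining.
backbone = models.resnet18(weights=None)
backbone.fc = nn.Linear(backbone.fc.in_features, 1)   # single logit: Aedes albopictus or not

images = torch.randn(8, 3, 224, 224)                   # a batch of preprocessed photos (dummy data)
scores = torch.sigmoid(backbone(images)).squeeze(1)    # probability of "tiger mosquito"

# ROC AUC over held-out labels, as the expert-validated dataset would provide.
labels = torch.tensor([0, 1, 0, 1, 0, 1, 0, 1])
print(roc_auc_score(labels.numpy(), scores.detach().numpy()))
```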

