Affordable ML Based Collaborative Approach for Baby Monitoring

Author(s):  
Taiba Naz ◽  
Ravi Shukla ◽  
Krishna Tiwari

There are numerous baby-monitoring devices on the market that parents use to keep an eye on their babies while they are away. Most of them rely on the installation of expensive hardware, which many parents cannot afford. Another issue with these devices is that they detect high-pitched sounds and frequently give false alarms, disturbing both children and parents. Most smartphone applications on the market work on sound waves and only sound an alarm when the infant starts crying. In this project, we propose the design of a mobile application to detect the status of a baby inside a crib or on a bed. The application will alert parents when their child requires assistance, will be able to determine whether the child is sleeping in a safe or hazardous position, and will keep track of the child's sleeping patterns. It is less reliant on hardware, making it less expensive: the only requirement is two paired mobile phones with the application installed instead of expensive hardware (IoT-based devices). The application uses the transfer-learning technique on the TensorFlow Lite MobileNet classification and SSD_mobilenet_V1_coco object detection models. The accuracy is 97% for the MobileNet classification model and 98% for the object detection model.
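A minimal sketch of the alerting logic such a monitor app might use; the class labels, window size, and confidence threshold are all assumptions for illustration, not details from the paper. Smoothing classifier decisions over a short window of frames is one common way to suppress the one-off false alarms the abstract criticizes:

```python
from collections import deque

# Hypothetical status labels the on-device classifier might emit.
ALERT_CLASSES = {"crying", "hazardous_position"}

class AlertSmoother:
    """Raise an alert only if a risky class dominates recent frames."""
    def __init__(self, window=5, min_hits=3):
        self.history = deque(maxlen=window)
        self.min_hits = min_hits

    def update(self, label, confidence, threshold=0.8):
        # Count a frame as risky only when the classifier is confident.
        risky = label in ALERT_CLASSES and confidence >= threshold
        self.history.append(risky)
        return sum(self.history) >= self.min_hits

smoother = AlertSmoother()
frames = [("sleeping_safe", 0.95), ("crying", 0.90), ("crying", 0.85),
          ("sleeping_safe", 0.60), ("crying", 0.92)]
alerts = [smoother.update(label, conf) for label, conf in frames]
# A single confident "crying" frame does not alert; repeated ones do.
```

In this sketch, the parent's paired phone would be notified only when `update` returns `True`, which requires several confident risky frames within the window.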

2019 ◽  
Vol 11 (13) ◽  
pp. 1516 ◽  
Author(s):  
Chang Lai ◽  
Jiyao Xu ◽  
Jia Yue ◽  
Wei Yuan ◽  
Xiao Liu ◽  
...  

With the development of ground-based all-sky airglow imager (ASAI) technology, a large amount of airglow image data needs to be processed for studying atmospheric gravity waves. We developed a program to automatically extract gravity wave patterns from ASAI images. The auto-extraction program includes a classification model based on a convolutional neural network (CNN) and an object detection model based on a faster region-based convolutional neural network (Faster R-CNN). The classification model selects the images of clear nights from all ASAI raw images. The object detection model locates the region of wave patterns. Then, the wave parameters (horizontal wavelength, period, direction, etc.) can be calculated within the region of the wave patterns. Besides auto-extraction, we applied a wavelength check to remove the interference of wavelike mist near the imager. To validate the auto-extraction program, a case study was conducted on the images captured in 2014 at Linqu (36.2°N, 118.7°E), China. Compared to the result of the manual check, the auto-extraction recognized fewer wave-containing images (28.9% of the manual result) due to its strict threshold, but the result shows the same seasonal variation as the references. The auto-extraction program applies a uniform criterion to avoid the accidental errors of manual identification of gravity waves and offers a reliable method for processing large volumes of ASAI images to efficiently study the climatology of atmospheric gravity waves.
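The wavelength check can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the 1-D intensity profile, pixel scale, and 10 km rejection cutoff are assumptions. The idea is that wavelike mist near the imager shows up at short apparent wavelengths, so patterns whose dominant wavelength falls below a cutoff are rejected (here the wavelength is estimated cheaply from zero crossings):

```python
import math

def dominant_wavelength(profile, pixel_km):
    """Estimate the dominant wavelength (km) of a 1-D intensity profile
    by counting zero crossings of the mean-removed signal."""
    mean = sum(profile) / len(profile)
    centered = [v - mean for v in profile]
    crossings = sum(1 for a, b in zip(centered, centered[1:]) if a * b < 0)
    span_km = (len(profile) - 1) * pixel_km
    # each full wavelength produces two zero crossings
    return 2 * span_km / crossings if crossings else float("inf")

def passes_wavelength_check(profile, pixel_km, min_km=10.0):
    """Reject short-wavelength patterns that are likely mist near the imager."""
    return dominant_wavelength(profile, pixel_km) >= min_km

xs = [i * 0.5 for i in range(256)]                     # 0.5 km per pixel (assumed)
wave = [math.sin(2 * math.pi * x / 25.0) for x in xs]  # 25 km gravity-wave pattern
mist = [math.sin(2 * math.pi * x / 3.0) for x in xs]   # 3 km mist-like ripple
```

Under this assumed cutoff, the 25 km pattern passes while the 3 km ripple is discarded; a real implementation would more likely use an FFT of the detected region.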


2020 ◽  
Vol 2020 (12) ◽  
pp. 172-1-172-7 ◽  
Author(s):  
Tejaswini Ananthanarayana ◽  
Raymond Ptucha ◽  
Sean C. Kelly

CMOS image sensors play a vital role in the exponentially growing field of Artificial Intelligence (AI). Applications like image classification, object detection, and tracking are just some of the many problems now solved with the help of AI, and specifically deep learning. In this work, we target image classification to discern between six categories of fruits: fresh/rotten apples, fresh/rotten oranges, and fresh/rotten bananas. Using images captured from high-speed CMOS sensors along with lightweight CNN architectures, we show the results on various edge platforms. Specifically, we show results using ON Semiconductor’s global-shutter-based, 12 MP, 90 frames per second image sensor (XGS-12) and ON Semiconductor’s 13 MP AR1335 image sensor feeding into MobileNetV2, implemented on NVIDIA Jetson platforms. In addition to using the data captured with these sensors, we utilize an open-source fruits dataset to increase the number of training images. For image classification, we train our model on approximately 30,000 RGB images from the six categories of fruits. The model achieves an accuracy of 97% on edge platforms using ON Semiconductor’s 13 MP camera with the AR1335 sensor. In addition to the image classification model, work is currently in progress to improve the accuracy of object detection using SSD and SSDLite with MobileNetV2 as the feature extractor. In this paper, we show preliminary results on the object detection model for the same six categories of fruits.
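As a toy illustration of the final classification step over the six fruit categories (not the authors' code; the logit values and class ordering are invented), MobileNetV2's output logits reduce to a probability distribution via softmax, followed by an argmax:

```python
import math

# Assumed ordering of the six fruit categories from the abstract.
CLASSES = ["fresh_apple", "rotten_apple", "fresh_orange",
           "rotten_orange", "fresh_banana", "rotten_banana"]

def softmax(logits):
    """Numerically stable softmax over raw network outputs."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify(logits):
    """Return the top-1 class label and its probability."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return CLASSES[best], probs[best]

label, prob = classify([0.2, 3.1, -1.0, 0.5, 0.0, 1.4])  # invented logits
```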


Object detection is closely related to video and image analysis. In computer vision, training an object detection model with image-level labels only is a challenging research area. Researchers have not yet found an accurate model for Weakly Supervised Object Detection (WSOD), which detects and localizes objects under the supervision of image-level annotations only. The proposed work uses a self-paced approach applied to the region proposal network of the Faster R-CNN architecture, which yields a better solution than previous weakly supervised object detectors and can be applied to computer vision applications in the near future.
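The core self-paced idea can be sketched as follows; this is an illustrative reconstruction under assumed names and thresholds, not the paper's implementation. Training begins with the "easiest" samples, those with the lowest current loss, and the admission threshold is relaxed each round so that harder samples enter gradually:

```python
def self_paced_select(losses, threshold):
    """Admit only samples whose current loss is below the pace threshold."""
    return [i for i, loss in enumerate(losses) if loss < threshold]

def self_paced_schedule(losses, start=0.5, growth=2.0, rounds=3):
    """Relax the threshold each round so harder samples join training."""
    selected, threshold = [], start
    for _ in range(rounds):
        selected.append(self_paced_select(losses, threshold))
        threshold *= growth
    return selected

# Invented per-sample losses for five region proposals.
rounds = self_paced_schedule([0.2, 1.7, 0.4, 3.5, 0.9])
```

In a WSOD setting the same scheme would gate which region proposals contribute to the loss in each round, with losses re-estimated after every training pass.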


2021 ◽  
Vol 11 (8) ◽  
pp. 3531
Author(s):  
Hesham M. Eraqi ◽  
Karim Soliman ◽  
Dalia Said ◽  
Omar R. Elezaby ◽  
Mohamed N. Moustafa ◽  
...  

Extensive research efforts have been devoted to identifying and improving roadway features that impact safety. Maintaining roadway safety features relies on costly manual operations of regular road surveying and data analysis. This paper introduces an automatic roadway safety features detection approach, which harnesses the potential of artificial intelligence (AI) computer vision to make the process more efficient and less costly. Given a front-facing camera and a global positioning system (GPS) sensor, the proposed system automatically evaluates ten roadway safety features. The system is composed of an oriented (or rotated) object detection model, which solves an orientation-encoding discontinuity problem to improve detection accuracy, and a rule-based roadway safety evaluation module. To train and validate the proposed model, a fully annotated dataset for roadway safety features extraction was collected, covering 473 km of roads. The proposed method's baseline results are encouraging when compared to state-of-the-art models. Different oriented object detection strategies are presented and discussed, and the developed model improved the mean average precision (mAP) by 16.9% compared with the literature. The average roadway safety feature prediction accuracy is 84.39% and ranges from 63.12% to 91.11%. The introduced model can pervasively enable/disable autonomous driving (AD) based on the safety features of the road, and can empower connected vehicles (CV) to send and receive estimated safety features, alerting drivers about black spots or relatively less-safe segments or roads.
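The abstract does not state the exact encoding used, but a common remedy for the orientation discontinuity it mentions (an illustrative sketch under assumed details, not the paper's method) is to regress the pair (sin 2θ, cos 2θ) instead of the angle θ itself: a box at θ and the same box at θ + π map to an identical target, so the regression loss no longer jumps at the angular boundary:

```python
import math

def encode_angle(theta):
    """Encode an orientation with period pi as a continuous 2-vector."""
    return math.sin(2 * theta), math.cos(2 * theta)

def decode_angle(s, c):
    """Recover the orientation in [0, pi) from the encoded pair."""
    return (math.atan2(s, c) / 2) % math.pi

# Two angles that differ by pi describe the same oriented box,
# and their encodings coincide, so the target is continuous.
a = encode_angle(0.1)
b = encode_angle(0.1 + math.pi)
```

Round-tripping through encode/decode recovers the angle modulo π, which is exactly the symmetry of an oriented bounding box.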


2020 ◽  
Vol 13 (1) ◽  
pp. 23
Author(s):  
Wei Zhao ◽  
William Yamada ◽  
Tianxin Li ◽  
Matthew Digman ◽  
Troy Runge

In recent years, precision agriculture has been researched to increase crop production with fewer inputs, as a promising means to meet the growing demand for agricultural products. Computer vision-based crop detection with unmanned aerial vehicle (UAV)-acquired images is a critical tool for precision agriculture. However, object detection using deep learning algorithms relies on a significant amount of manually prelabeled training data as ground truth. Field object detection, such as bale detection, is especially difficult because of (1) long-period image acquisitions under different illumination conditions and seasons; (2) limited existing prelabeled data; and (3) few pretrained models and little prior research to use as references. This work increases bale detection accuracy based on limited data collection and labeling by building an innovative algorithm pipeline. First, an object detection model is trained using 243 images captured under good illumination conditions in fall from croplands. In addition, domain adaptation (DA), a kind of transfer learning, is applied to synthesize training data under diverse environmental conditions with automatic labels. Finally, the object detection model is optimized with the synthesized datasets. The case study shows the proposed method improves bale detection performance, including the recall, mean average precision (mAP), and F measure (F1 score), from averages of 0.59, 0.7, and 0.7 (object detection alone) to averages of 0.93, 0.94, and 0.89 (object detection + DA), respectively. This approach could easily be scaled to many other crop field objects and will significantly contribute to precision agriculture.
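The recall and F1 figures above follow the standard detection-metric definitions; a small helper makes the relationship explicit (the true/false positive counts below are made up, chosen only so the resulting numbers land near the post-adaptation averages reported in the abstract):

```python
def detection_metrics(tp, fp, fn):
    """Precision, recall, and F1 score from detection counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Invented counts that roughly reproduce the object detection + DA averages.
p, r, f1 = detection_metrics(tp=93, fp=16, fn=7)
```

With these counts, recall is 0.93 and F1 comes out near 0.89, mirroring the improvement the case study reports after domain adaptation.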


Animals ◽  
2021 ◽  
Vol 11 (2) ◽  
pp. 357
Author(s):  
Dae-Hyun Jung ◽  
Na Yeon Kim ◽  
Sang Ho Moon ◽  
Changho Jhin ◽  
Hak-Jin Kim ◽  
...  

The priority placed on animal welfare in the meat industry is increasing the importance of understanding livestock behavior. In this study, we developed a web-based monitoring and recording system based on artificial intelligence analysis for the classification of cattle sounds. The deep learning classification model of the system is a convolutional neural network (CNN) that takes sound information converted to Mel-frequency cepstral coefficients (MFCCs) as input. The CNN model first achieved an accuracy of 91.38% in recognizing cattle sounds. Further, short-time Fourier transform-based noise filtering was applied to remove background noise, improving the classification accuracy to 94.18%. Cattle vocalizations were then categorized into four classes, and a total of 897 classification records were acquired for the classification model development. A final accuracy of 81.96% was obtained for the model. Our proposed web-based platform, which provides information obtained from a total of 12 sound sensors, enables real-time monitoring of cattle vocalizations, allowing farm owners to determine the status of their cattle.
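The STFT-based noise filtering step can be sketched as spectral subtraction (an illustrative reconstruction, not the authors' code): estimate the average magnitude spectrum from noise-only frames, subtract it from each frame's spectrum, and clamp negative values at a floor. The toy spectra below are invented:

```python
def average_spectrum(frames):
    """Average magnitude spectrum over noise-only STFT frames."""
    n = len(frames)
    return [sum(f[k] for f in frames) / n for k in range(len(frames[0]))]

def spectral_subtract(frame_mags, noise_mags, floor=0.0):
    """Subtract the noise estimate from one frame's magnitude spectrum."""
    return [max(m - n, floor) for m, n in zip(frame_mags, noise_mags)]

# Toy magnitude spectra: a steady background hum plus a vocalization peak.
noise_frames = [[1.0, 1.1, 0.9, 1.0],
                [1.0, 0.9, 1.1, 1.0]]
noisy_call = [1.0, 5.0, 1.0, 1.0]          # bin 1 carries the cattle call

noise_est = average_spectrum(noise_frames)  # steady hum, about 1.0 per bin
cleaned = spectral_subtract(noisy_call, noise_est)
```

After subtraction only the vocalization energy survives, which is why filtering raised the classifier's accuracy from 91.38% to 94.18% in the study.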


Author(s):  
Runze Liu ◽  
Guangwei Yan ◽  
Hui He ◽  
Yubin An ◽  
Ting Wang ◽  
...  

Background: Power line inspection is essential to ensure the safe and stable operation of the power system. Object detection for tower equipment can significantly improve inspection efficiency. However, due to the low resolution and limited features of small targets, their detection accuracy is not easy to improve. Objective: This study aimed to improve the resolution of tiny targets while making their texture and detailed features more prominent, so they can be perceived by the detection model. Methods: In this paper, we propose an algorithm that employs generative adversarial networks to improve small-object detection accuracy. First, the original image is converted into a super-resolution one by a super-resolution reconstruction network (SRGAN). Then the object detection framework Faster R-CNN is utilized to detect objects in the super-resolution images. Results: The experimental results on two small-object recognition datasets show that the proposed model has good robustness. In particular, it can detect targets missed by Faster R-CNN, which indicates that SRGAN can effectively enhance the detailed information of small targets by improving the resolution. Conclusion: We found that higher-resolution data are conducive to obtaining more detailed information about small targets, which can help the detection algorithm achieve higher accuracy. The small-object detection model based on a generative adversarial network proposed in this paper is feasible and more efficient. Compared with Faster R-CNN, this model has better performance on small-object detection.
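The two-stage pipeline (upscale, then detect) can be illustrated with stand-ins; here the SRGAN network is replaced by simple nearest-neighbour interpolation just to show the data flow, and `detect` is a hypothetical placeholder for the Faster R-CNN stage, so nothing below is the paper's actual implementation:

```python
def upscale_nearest(image, factor=4):
    """Stand-in for SRGAN: enlarge a 2-D image by an integer factor,
    duplicating each pixel and each row."""
    return [[pixel for pixel in row for _ in range(factor)]
            for row in image for _ in range(factor)]

def detect(image):
    """Hypothetical detector stub: report the size a real model would see."""
    return len(image), len(image[0])

small = [[1, 2],
         [3, 4]]                 # tiny target crop, 2x2 pixels
sr = upscale_nearest(small, factor=4)
h, w = detect(sr)                # the detector now sees an 8x8 input
```

The point of the sketch is only the ordering: the detector consumes the enlarged image, so small targets occupy more pixels, which is the mechanism the paper credits for recovering detections Faster R-CNN misses.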


2021 ◽  
Author(s):  
D. Nathasha U. Naranpanawa ◽  
Yanyang Gu ◽  
Shekhar S. Chandra ◽  
Brigid Betz-Stablein ◽  
Richard A. Sturm ◽  
...  

MEST Journal ◽  
2021 ◽  
Vol 9 (2) ◽  
pp. 54-60
Author(s):  
Alaa Mohammed ◽  
Fawzi Al-Naima
