Telescope performance real-time monitoring based on machine learning

2020 ◽  
Vol 500 (1) ◽  
pp. 388-396
Author(s):  
Tian Z Hu ◽  
Yong Zhang ◽  
Xiang Q Cui ◽  
Qing Y Zhang ◽  
Ye P Li ◽  
...  

Abstract: In astronomy, the demand for high-resolution imaging and high-efficiency observation requires telescopes to be maintained at peak performance. Improving telescope performance therefore calls for real-time monitoring of the telescope status and detailed recording of its operational data. In this paper, we present a machine-learning-based method for monitoring telescope performance in real time. First, we use picture features and the random forest algorithm to select normal pictures captured by the acquisition camera or science camera. Next, we cut out the source images from each picture and use convolutional neural networks to recognize star shapes. Finally, we monitor telescope performance based on the relationship between source-image shape and telescope performance. With this method, we achieve high-performance real-time monitoring on the Large Sky Area Multi-Object Fibre Spectroscopic Telescope, covering guiding-system performance, focal-surface defocus, submirror performance, and active-optics-system performance. The ultimate performance detection accuracy reaches up to 96.7 per cent.
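As a rough illustration only: the two-stage pipeline described above (a random forest filtering normal frames, then a CNN classifying the shapes of cut-out sources) might be sketched in Python as follows, assuming hypothetical frame features, image size, and shape classes that the abstract does not specify.

```python
# Minimal sketch of the two-stage pipeline described above (not the authors'
# code). Stage 1: a random forest filters normal frames using simple picture
# features. Stage 2: a small CNN classifies cut-out source images by shape.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from tensorflow.keras import layers, models

# Stage 1: frame-level features (e.g. mean brightness, contrast, source count)
X_features = np.random.rand(200, 3)       # placeholder feature vectors
y_normal = np.random.randint(0, 2, 200)   # 1 = normal frame, 0 = abnormal
frame_filter = RandomForestClassifier(n_estimators=100).fit(X_features, y_normal)

# Stage 2: CNN over 32x32 cut-outs of individual sources
star_cnn = models.Sequential([
    layers.Input(shape=(32, 32, 1)),
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(4, activation="softmax"),  # e.g. round / defocused / coma / trailed
])
star_cnn.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```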

Author(s):  
Ignacio Martinez-Alpiste ◽  
Gelayol Golcarenarenji ◽  
Qi Wang ◽  
Jose Maria Alcaraz-Calero

Abstract: Machine learning algorithms based on convolutional neural networks (CNNs) have recently been explored in a myriad of object-detection applications. Nonetheless, many devices with limited computational resources and strict power-consumption constraints cannot run such algorithms, which are designed for high-performance computers. Hence, a novel smartphone-based architecture intended for portable and constrained systems is designed and implemented to run CNN-based object recognition in real time and with high efficiency. The system is designed and optimised by integrating the best of the state-of-the-art machine learning platforms, including OpenCV, TensorFlow Lite, and Qualcomm Snapdragon, informed by empirical testing and evaluation of each candidate framework in a comparable scenario with a highly demanding neural network. The final system prototype combines the strengths of these frameworks, yielding a new machine-learning-based object-recognition execution environment embedded in a smartphone with performance advantages over the individual frameworks.
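For illustration, below is a minimal sketch of on-device inference through the TensorFlow Lite interpreter, one of the frameworks named above; the model file name is a hypothetical placeholder for any exported CNN detector, not the authors' artefact.

```python
# Hedged sketch of single-frame inference through the TensorFlow Lite
# interpreter, in the spirit of the smartphone pipeline above.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="detector.tflite")  # assumed file
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

frame = np.zeros(inp["shape"], dtype=inp["dtype"])  # stand-in for a camera frame
interpreter.set_tensor(inp["index"], frame)
interpreter.invoke()                                # one real-time inference pass
detections = interpreter.get_tensor(out["index"])
```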


Sensors ◽  
2021 ◽  
Vol 21 (14) ◽  
pp. 4736
Author(s):  
Sk. Tanzir Mehedi ◽  
Adnan Anwar ◽  
Ziaur Rahman ◽  
Kawsar Ahmed

The Controller Area Network (CAN) bus serves as an important protocol in real-time In-Vehicle Network (IVN) systems owing to its simple, suitable, and robust architecture. IVN devices nonetheless remain insecure and vulnerable because of complex, data-intensive architectures that greatly increase exposure to unauthorized networks and the possibility of various types of cyberattacks. The detection of cyberattacks on IVN devices has therefore become a growing research interest. With the rapid development of IVNs and evolving threat types, traditional machine-learning-based IDSs must be updated to meet the security requirements of the current environment. The progress of deep learning and deep transfer learning, and their impactful outcomes in several areas, point to them as an effective solution for network intrusion detection. This manuscript proposes a deep transfer learning-based IDS model for IVNs with improved performance in comparison to several other existing models. The unique contributions include effective attribute selection, best suited to identifying malicious CAN messages and accurately detecting normal and abnormal activities; the design of a deep transfer learning-based LeNet model; and evaluation on real-world data. To this end, an extensive experimental performance evaluation has been conducted. The architecture, together with the empirical analyses, shows that the proposed IDS greatly improves detection accuracy over mainstream machine learning, deep learning, and benchmark deep transfer learning models and demonstrates better performance for real-time IVN security.
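A hedged sketch of the deep transfer learning idea named above, assuming a LeNet-style network, a 29x29 image encoding of CAN frames, and five traffic classes, none of which the abstract specifies:

```python
# Illustrative sketch (not the paper's exact model): build a LeNet-style
# network, load weights pretrained on a source dataset, freeze the
# convolutional base, and fine-tune only the head on in-vehicle traffic.
from tensorflow.keras import layers, models

def lenet(num_classes):
    return models.Sequential([
        layers.Input(shape=(29, 29, 1)),
        layers.Conv2D(6, 5, activation="tanh"),
        layers.AveragePooling2D(),
        layers.Conv2D(16, 5, activation="tanh"),
        layers.AveragePooling2D(),
        layers.Flatten(),
        layers.Dense(120, activation="tanh"),
        layers.Dense(84, activation="tanh"),
        layers.Dense(num_classes, activation="softmax"),
    ])

ids = lenet(num_classes=5)
# ids.load_weights("source_pretrained.h5")  # hypothetical pretrained weights
for layer in ids.layers[:-2]:               # freeze the feature extractor
    layer.trainable = False
ids.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
            metrics=["accuracy"])           # then fit on target CAN data
```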


2020 ◽  
Vol 15 ◽  
pp. 155892502097726
Author(s):  
Wei Wang ◽  
Zhiqiang Pang ◽  
Ling Peng ◽  
Fei Hu

Real-time monitoring of human vital signs during sleep at home is vitally important for timely detection and rescue. However, existing smart equipment for monitoring human vital signs suffers from high complexity, high cost, intrusiveness, or low accuracy. There is thus a great need for a simple, non-intrusive, comfortable, and low-cost real-time monitoring system for sleep. In this study, a novel intelligent pillow was developed based on a low-cost piezoelectric ceramic sensor. It was manufactured by placing a smart system (consisting of a sensing unit, i.e. a piezoelectric ceramic sensor, a data-processing unit, and a GPRS communication module) in the cavity of a pillow made of shape-memory foam. The sampling frequency of the intelligent pillow was set at 1000 Hz to capture the signals more accurately, and vital signs including heart rate, respiratory rate, and body movement were derived through a series of well-established algorithms and sent to the user's app. Validation experiments demonstrated high heart-rate detection accuracy (99.18%) using the intelligent pillow. In addition, human tests were conducted by detecting the vital signs of six elderly participants at home, and the results showed that the detected vital signs can usefully predict their health conditions. No contact discomfort was reported by the participants. With further studies of the validity of the intelligent pillow and large-scale human trials, the proposed intelligent pillow is expected to play an important role in daily sleep monitoring.
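As a hedged illustration of how a vital sign might be derived from the 1000 Hz piezoelectric signal mentioned above, the sketch below estimates heart rate with a band-pass filter and peak detection; the filter band, threshold, and minimum peak spacing are assumptions, not the authors' published algorithm.

```python
# Minimal sketch: heart rate from a 1000 Hz piezoelectric pressure signal.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

FS = 1000  # sampling frequency from the abstract (Hz)

def heart_rate_bpm(signal):
    # Band-pass around typical heartbeat energy in a ballistic signal (~1-10 Hz)
    b, a = butter(4, [1.0, 10.0], btype="band", fs=FS)
    filtered = filtfilt(b, a, signal)
    # Require peaks at least 0.4 s apart (i.e. below 150 bpm)
    peaks, _ = find_peaks(filtered, distance=int(0.4 * FS),
                          height=np.std(filtered))
    return len(peaks) / (len(signal) / FS / 60.0)

print(heart_rate_bpm(np.random.randn(60 * FS)))  # stand-in for one minute of data
```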


2021 ◽  
Author(s):  
Nicholas Parkyn

Emerging heterogeneous computing, computing at the edge, machine learning, and AI-at-the-edge technologies are driving approaches and techniques for processing and analysing onboard instrument data in near real time. The author has used edge computing and neural networks combined with high-performance heterogeneous computing platforms to accelerate AI workloads. The heterogeneous computing hardware used is readily available and low cost, delivers impressive AI performance, and can run multiple neural networks in parallel. Collecting, processing, and learning from onboard instrument data in near real time is a non-trivial problem owing to data volumes and the complexities of data filtering, data storage, and continual learning. Little research has been done on continual machine learning, which aims at a higher level of machine intelligence by providing artificial agents with the ability to learn from a non-stationary and never-ending stream of data. The author has applied the concept of continual learning to build a system that continually learns from actual boat performance and refines predictions previously made using static VPP (velocity prediction program) data. The neural networks used are initially trained on the output of traditional VPP software and continue to learn from data collected under real sailing conditions. The author will present the system design, the AI and edge computing techniques used, and the approaches he has researched for incremental training to realise continual learning.
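A minimal sketch of the continual-learning loop described above, under assumed inputs (true wind speed and angle predicting boat speed) that the abstract does not specify: the model is first fitted on VPP output and then incrementally updated as live samples arrive.

```python
# Hedged sketch: pretrain on static VPP output, then keep updating the same
# network from small batches of live sensor data (no full retraining).
import numpy as np
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(2,)),             # e.g. true wind speed, true wind angle
    layers.Dense(32, activation="relu"),
    layers.Dense(32, activation="relu"),
    layers.Dense(1),                      # predicted boat speed
])
model.compile(optimizer="adam", loss="mse")

# Initial training on VPP-generated polars (placeholder data)
X_vpp, y_vpp = np.random.rand(1000, 2), np.random.rand(1000, 1)
model.fit(X_vpp, y_vpp, epochs=5, verbose=0)

# Continual refinement from the live onboard stream, one small batch at a time
def on_new_samples(X_live, y_live):
    model.train_on_batch(X_live, y_live)  # incremental update
```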


Energies ◽  
2020 ◽  
Vol 13 (15) ◽  
pp. 3930 ◽  
Author(s):  
Ayaz Hussain ◽  
Umar Draz ◽  
Tariq Ali ◽  
Saman Tariq ◽  
Muhammad Irfan ◽  
...  

Increasing waste generation has become a significant issue across the globe due to rapid urbanization and industrialization. The literature has investigated many issues that directly drive the increase of waste and its improper disposal. Most existing work has focused on providing a cost-efficient solution for monitoring garbage collection systems using the Internet of Things (IoT). Although an IoT-based solution provides real-time monitoring of a garbage collection system, it is limited in controlling the spread of overspill and the blowout of foul-odour gases. Poor and inadequate waste disposal releases toxic gases and radiation into the environment, with adverse effects on human health, the greenhouse system, and global warming. Given the importance of air pollutants, it is imperative to monitor and forecast their concentration in addition to managing the waste itself. In this paper, we present an IoT-based smart bin that uses machine learning and deep learning models to manage garbage disposal and to forecast the air pollutants present in the surrounding bin environment. The smart bin is connected to an IoT-based server, the Google Cloud Platform (GCP), which performs the computation necessary for predicting the status of the bin and forecasting air quality from real-time data. We experimented with traditional models (the k-nearest neighbours (k-NN) algorithm and logistic regression) and a non-traditional algorithm (a long short-term memory (LSTM) network-based deep learning model) for creating alert messages about bin status and forecasting the amount of the air pollutant carbon monoxide (CO) present in the air at a given instant. The recalls of logistic regression and the k-NN algorithm are 79% and 83%, respectively, in a real-time testing environment for predicting the status of the bin. The accuracies of the modified LSTM and simple LSTM models are 90% and 88%, respectively, for predicting the future concentration of gases in the air. The system incurs a delay of 4 s in creating and transmitting an alert message to a sanitary worker. The system provides real-time monitoring of garbage levels along with notifications from the alert mechanism. Compared with existing solutions based on simple approaches, the proposed work achieves improved accuracy by utilizing machine learning.
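For the forecasting component, here is a hedged sketch of a univariate LSTM predicting the next CO reading from a look-back window of past readings; the window length and layer size are assumptions, not the paper's exact architecture.

```python
# Illustrative sketch: one-step-ahead forecasting of a CO concentration series.
import numpy as np
from tensorflow.keras import layers, models

WINDOW = 24  # look-back window of past CO readings (assumed)

model = models.Sequential([
    layers.Input(shape=(WINDOW, 1)),
    layers.LSTM(50),
    layers.Dense(1),                   # next CO reading
])
model.compile(optimizer="adam", loss="mse")

# Turn a 1-D series of sensor readings into supervised look-back windows
series = np.random.rand(500)           # placeholder CO measurements
X = np.stack([series[i:i + WINDOW]
              for i in range(len(series) - WINDOW)])[..., None]
y = series[WINDOW:]
model.fit(X, y, epochs=3, verbose=0)
next_co = model.predict(X[-1:])        # one-step-ahead forecast
```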


2020 ◽  
Vol 24 (5) ◽  
pp. 709-722
Author(s):  
Kieran Woodward ◽  
Eiman Kanjo ◽  
Andreas Oikonomou ◽  
Alan Chamberlain

Abstract: In recent years, machine learning has developed rapidly, enabling applications with high recognition accuracy for speech and images. However, other types of data to which these models can be applied have not yet been explored as thoroughly. Labelling is an indispensable stage of data pre-processing that can be particularly challenging, especially when applied to single- or multi-model real-time sensor data collection approaches. Currently, real-time sensor data labelling is an unwieldy process with a limited range of tools available and poor performance characteristics, which can compromise the performance of the resulting machine learning models. In this paper, we introduce new techniques for labelling at the point of collection, coupled with a pilot study and a systematic performance comparison of two popular types of deep neural networks running on five custom-built devices and a comparative mobile app (68.5–89% accuracy for the within-device GRU model; 92.8% highest LSTM model accuracy). These devices are designed to enable real-time labelling via various buttons, slide potentiometers, and force sensors. This exploratory work illustrates several key features that inform the design of data collection tools and can help researchers select and apply appropriate labelling techniques to their work. We also identify common bottlenecks in each architecture and provide field-tested guidelines to assist in building adaptive, high-performance edge solutions.
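As a rough sketch of training on data labelled at the point of collection, assuming a hypothetical three-button device and 3-axis sensor windows (shapes and counts are illustrative, not the paper's setup):

```python
# Hedged sketch: each sensor window is paired with the label the user pressed
# at the time it was recorded, then fed to a small GRU classifier.
import numpy as np
from tensorflow.keras import layers, models

WINDOW, CHANNELS, NUM_LABELS = 50, 3, 3  # e.g. 3-axis sensor, 3 label buttons

model = models.Sequential([
    layers.Input(shape=(WINDOW, CHANNELS)),
    layers.GRU(64),
    layers.Dense(NUM_LABELS, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Placeholder stream: windows labelled in real time via the device's buttons
X = np.random.randn(300, WINDOW, CHANNELS)
y = np.random.randint(0, NUM_LABELS, 300)
model.fit(X, y, epochs=2, verbose=0)
```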

