Real-time system
Recently Published Documents


TOTAL DOCUMENTS

1543
(FIVE YEARS 306)

H-INDEX

36
(FIVE YEARS 4)

Agronomy ◽  
2022 ◽  
Vol 12 (1) ◽  
pp. 212
Author(s):  
Maira Sami ◽  
Saad Qasim Khan ◽  
Muhammad Khurram ◽  
Muhammad Umar Farooq ◽  
Rukhshanda Anjum ◽  
...  

The use of Internet of Things (IoT)-based physical sensors to perceive the environment is a prevalent, global approach. However, one major problem is the reliability of physical sensor nodes: it is difficult for a real-time system to determine whether a physical sensor is transmitting correct values or malfunctioning due to external disturbances affecting the system, such as noise. In this paper, the use of Long Short-Term Memory (LSTM)-based neural networks is proposed as an alternative approach to this problem. The proposed solution is tested on a smart irrigation system, where a physical sensor is replaced by a neural sensor. The Smart Irrigation System (SIS) contains several physical sensors, which transmit temperature, humidity, and soil-moisture data used to calculate the transpiration in a particular field. The real-world values are taken from an agricultural field of lemons near Ghadap, Sindh province, Pakistan. An LM35 sensor is used for temperature, a DHT-22 for humidity, and a customized sensor designed in our lab for the acquisition of moisture values. The results of the experiment show that the proposed deep-learning-based neural sensor predicts the real-time values with high accuracy, especially the temperature values; the humidity and moisture values are also in an acceptable range. Our results highlight the possibility of using a neural network, referred to here as a neural sensor, to complement the functioning of a physical sensor deployed in an agricultural field, in order to make smart irrigation systems more reliable.
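The neural-sensor idea above, predicting a sensor's next reading from the recent history of all sensors, reduces to a standard sliding-window supervised-learning setup before any LSTM is trained. A minimal sketch of that windowing step (the window length and the synthetic readings are illustrative, not from the paper):

```python
import numpy as np

def make_windows(readings, window=4):
    """Turn a time series of [temperature, humidity, moisture] rows into
    (history, next-reading) pairs for training a neural sensor.

    readings: array of shape (T, 3).  Returns X of shape (N, window, 3)
    and y of shape (N, 3), where each y[i] is the reading that follows
    the window X[i]."""
    readings = np.asarray(readings, dtype=float)
    X = np.stack([readings[i:i + window]
                  for i in range(len(readings) - window)])
    y = readings[window:]
    return X, y

# Ten synthetic readings standing in for the field data
data = np.column_stack([
    25 + np.sin(np.linspace(0, 3, 10)),   # temperature (LM35)
    60 + np.cos(np.linspace(0, 3, 10)),   # humidity (DHT-22)
    0.3 + 0.01 * np.arange(10),           # soil moisture (custom sensor)
])
X, y = make_windows(data, window=4)
print(X.shape, y.shape)  # (6, 4, 3) (6, 3)
```

Each `(X[i], y[i])` pair is one training example for a recurrent model; at run time the same windowing feeds the trained network the recent history so it can stand in for a faulty physical sensor.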


2022 ◽  
Vol 4 ◽  
Author(s):  
Qasem Abu Al-Haija

With the rapid revolution and emergence of smart, self-reliant, low-power devices, the Internet of Things (IoT) has expanded inconceivably and impacted almost every real-life application. Nowadays, machines and devices such as cars, unmanned aerial vehicles (UAVs), and medical devices are fully reliant on computer control and have their own programmable interfaces. With this increased use of IoT, attack capabilities have grown in response, making it imperative that new methods be developed to secure these systems and detect attacks launched against IoT devices and gateways. Such attacks usually aim to access, change, or destroy sensitive information; extort money from users; or interrupt normal business processes. In this article, we propose a new, efficient, and generic top-down architecture for intrusion detection and classification in IoT networks using non-traditional machine learning. The proposed architecture can be customized and used for intrusion detection/classification with any IoT cyber-attack dataset, such as the CICIDS dataset, the MQTT dataset, and others. Specifically, the proposed system is composed of three subsystems: a feature engineering (FE) subsystem, a feature learning (FL) subsystem, and a detection and classification (DC) subsystem. All subsystems are thoroughly described and analyzed in this article. Accordingly, the proposed architecture employs deep learning models to enable the detection of slightly mutated attacks on IoT networks with high detection/classification accuracy, for IoT traffic obtained from either a real-time system or a pre-collected dataset.
Since this work combines systems engineering (SE) techniques, machine learning technology, and the field of IoT-system cybersecurity, the collaboration of the three fields has yielded a systematically engineered system that can be implemented with high-performance trajectories.
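The FE/FL/DC layout described above can be sketched as a three-stage pipeline. In this minimal sketch the feature-learning stage is stubbed with a fixed random projection standing in for a trained deep encoder, and the detection stage is a simple distance-from-benign-centroid rule; all flow values, dimensions, and thresholds are illustrative, not from the paper:

```python
import numpy as np

def feature_engineering(flows):
    """FE subsystem: clean and scale raw per-flow features (min-max)."""
    flows = np.asarray(flows, dtype=float)
    lo, hi = flows.min(axis=0), flows.max(axis=0)
    return (flows - lo) / np.where(hi > lo, hi - lo, 1.0)

def feature_learning(features, dim=4, seed=0):
    """FL subsystem: map engineered features to a compact representation
    (here a fixed random projection stands in for a trained encoder)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(features.shape[1], dim))
    return np.tanh(features @ W)

def detect(embeddings, centroid, threshold=0.5):
    """DC subsystem: flag flows whose embedding lies far from the
    benign centroid."""
    return np.linalg.norm(embeddings - centroid, axis=1) > threshold

# Toy traffic: [bytes, packets, failed logins]; last row is anomalous
flows = [[100, 1, 0], [120, 1, 0], [90, 2, 1], [5000, 40, 9]]
emb = feature_learning(feature_engineering(flows))
alerts = detect(emb, centroid=emb[:3].mean(axis=0))
print(alerts)
```

In the actual architecture the FL stage would be a deep model trained on a dataset such as CICIDS, and the DC stage a trained classifier rather than a threshold, but the data flow between the three subsystems is the same.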


Author(s):  
Ajitesh Kumar

Background: Nowadays, there is an immense increase in the demand for high-power computation of real-time workloads, and the trend is towards multi-core and multiprocessor CPUs. Real-time systems therefore need to be implemented on multiprocessor platforms. Introduction: The nature of processors in embedded real-time systems is changing day by day. The two most significant challenges in a multiprocessor environment are scheduling and synchronization. The popularity of real-time multi-core systems has exploded in recent years, driving the rapid development of a variety of methods for multiprocessor scheduling of critical tasks; on the other hand, these systems face constraints when it comes to maintaining synchronization for access to shared resources. Method: This work presents a systematic review of existing scheduling algorithms and synchronization protocols for shared resources in real-time multiprocessor environments. The manuscript also presents a study based on various metrics of resource scheduling and a comparison among different resource-scheduling techniques. Result and Conclusion: The survey classifies open issues, key challenges, and likely useful research directions. Finally, we conclude that there is still considerable room for improving resource management and overall quality; the paper considers such future paths of research in this field.
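As a concrete instance of the partitioned-scheduling approaches such surveys cover, tasks can be assigned to cores by the worst-fit-decreasing heuristic on utilization, after which each core is scheduled independently. A minimal sketch (the task set, core count, and the simple per-core utilization cap, a sufficient bound for partitioned EDF, are illustrative):

```python
def worst_fit_partition(utilizations, cores=2, cap=1.0):
    """Assign task utilizations (C_i / T_i) to cores, always placing the
    next-largest task on the currently least-loaded core.  Returns one
    task list per core, or None if some task does not fit under `cap`."""
    loads = [0.0] * cores
    bins = [[] for _ in range(cores)]
    for u in sorted(utilizations, reverse=True):
        i = loads.index(min(loads))          # least-loaded core
        if loads[i] + u > cap:
            return None                      # task set not partitionable
        loads[i] += u
        bins[i].append(u)
    return bins

print(worst_fit_partition([0.5, 0.4, 0.3, 0.2, 0.1], cores=2))
# → [[0.5, 0.2, 0.1], [0.4, 0.3]]
```

Worst-fit tends to balance load across cores, which is why it is a common baseline in the multiprocessor-scheduling literature; synchronization protocols then govern how tasks on different cores access shared resources.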


Sensors ◽  
2021 ◽  
Vol 22 (1) ◽  
pp. 160
Author(s):  
Xuelin Zhang ◽  
Donghao Zhang ◽  
Alexander Leye ◽  
Adrian Scott ◽  
Luke Visser ◽  
...  

This paper focuses on improving the performance of scientific instrumentation that uses glass spray chambers for sample introduction, such as spectrometers widely used in analytical chemistry, by detecting incidents with deep convolutional models. The performance of these instruments can be affected by the quality of the introduction of the sample into the spray chamber. Among the indicators of poor-quality sample introduction are two primary incidents: the formation of liquid beads on the surface of the spray chamber, and flooding at the bottom of the spray chamber. Detecting such events autonomously as they occur can improve the overall accuracy and efficacy of the chemical analysis and avoid severe incidents such as malfunction and instrument damage. In contrast to objects commonly seen in the real world, beading and flooding are more challenging to detect since they are very small and transparent. Furthermore, their non-rigid nature increases the difficulty of detection, such that existing deep-learning-based object-detection frameworks are prone to fail at this task. No prior work has used computer vision to detect these incidents in the chemistry industry. In this work, we propose two frameworks for detecting these two incidents, which not only leverage modern deep learning architectures but also integrate expert knowledge of the problems. Specifically, the proposed networks first localize the regions of interest where the incidents are most likely to occur and then refine these incident outputs. The use of data augmentation and synthesis, and the choice of negative sampling in training, allows for a large increase in accuracy while remaining a real-time system at inference. On the data collected in our laboratory, our method surpasses widely used object-detection baselines and correctly detects 95% of the beads and 98% of the flooding. At the same time, our method can process four frames per second and can be deployed in real time.
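The negative-sampling choice credited above for part of the accuracy gain is commonly realized as hard-negative mining: keep the background patches the current detector scores most confidently, so the next training round focuses on the most confusing negatives. A minimal sketch (the patch names and confidence scores are illustrative, not from the paper):

```python
def hard_negatives(patches, scores, keep=2):
    """Select the background patches with the highest detector confidence
    (the 'hardest' negatives) for the next training round."""
    ranked = sorted(zip(scores, patches), reverse=True)
    return [p for _, p in ranked[:keep]]

# Toy background patches and the false-alarm confidence the detector
# currently assigns to each (glare and droplet shadows resemble beads).
patches = ["glass_glare", "tube_joint", "clean_wall", "droplet_shadow"]
scores = [0.91, 0.15, 0.05, 0.77]
print(hard_negatives(patches, scores))  # → ['glass_glare', 'droplet_shadow']
```

Retraining on such confusable backgrounds is what suppresses false alarms from transparent, bead-like artifacts without slowing inference.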


2021 ◽  
Author(s):  
Dongsheng Zhang ◽  
Gang Zhang ◽  
Jiawei Wu ◽  
Yunjie Xiao ◽  
Liang Liang ◽  
...  

We propose a symbol synchronization algorithm for high-speed data streams in an IMDD-OOFDM system using a training sequence. The sampling-point phase offset remains approximately within ±π/32, and the symbol synchronization deviation stabilizes within ±0.5 sampling points in a real-time system running at 1.5 GSa/s.
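Training-sequence symbol synchronization of this kind is typically a cross-correlation search: slide the known training sequence over the received samples and take the lag with the largest correlation magnitude as the symbol boundary. A minimal baseband sketch (the sequences, noise level, and offset are illustrative, not from the paper):

```python
import numpy as np

def find_symbol_offset(received, training):
    """Return the sample offset at which the known training sequence
    best aligns with the received stream (peak of the sliding
    cross-correlation)."""
    corr = np.correlate(received, training, mode="valid")
    return int(np.argmax(np.abs(corr)))

rng = np.random.default_rng(1)
training = rng.choice([-1.0, 1.0], size=32)      # known preamble symbols
offset = 17
received = np.concatenate([
    rng.normal(0, 0.3, offset),                  # leading noise
    training + rng.normal(0, 0.3, 32),           # noisy copy of preamble
    rng.normal(0, 0.3, 20),                      # trailing noise
])
print(find_symbol_offset(received, training))  # → 17
```

In a real-time hardware implementation the same correlation is computed in parallel across candidate lags each frame, so that the detected boundary can track drift within a fraction of a sampling point.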


Sensors ◽  
2021 ◽  
Vol 21 (24) ◽  
pp. 8489
Author(s):  
Liangchen Zhang ◽  
Xiaodong Ju ◽  
Junqiang Lu ◽  
Baiyong Men ◽  
Weiliang He

To increase the accuracy of reservoir evaluation, a new type of seismoelectric logging instrument was designed, built around a newly developed array sonde structure. The tool includes several modules, including a signal excitation module, a data acquisition module, a phased-array transmitting module, an impedance-matching module, and a main system control circuit, interconnected through a high-speed tool bus to form a distributed architecture. μC/OS-II was used for real-time system control. After a prototype of the seismoelectric logging detector's experimental measurement system was constructed, its performance was verified in the laboratory. The results showed that the consistency between the multi-channel received waveform amplitude and the benchmark spectrum was more than 97%. The instrument's binary phased linear-array transmitting module can achieve 0° to 20° beam deflection and directional radiation. Finally, a field test was conducted to verify the tool's performance under downhole conditions; its results proved the effectiveness of the developed seismoelectric logging tool.
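The 0° to 20° beam deflection reported for the phased-array transmitting module is obtained with per-element firing delays; for a uniform linear array the required delay of element n is n·d·sin(θ)/c. A minimal sketch (the element count, spacing, and borehole-fluid sound speed are illustrative, not from the paper):

```python
import math

def steering_delays(n_elements, spacing_m, angle_deg, c_mps):
    """Per-element firing delays (seconds) that steer a uniform linear
    array's beam by angle_deg: delay_n = n * d * sin(theta) / c."""
    theta = math.radians(angle_deg)
    return [n * spacing_m * math.sin(theta) / c_mps
            for n in range(n_elements)]

# 4-element array, 10 cm spacing, 20 deg deflection, ~1500 m/s in fluid
delays = steering_delays(4, 0.10, 20.0, 1500.0)
print([round(d * 1e6, 2) for d in delays])  # microseconds
# → [0.0, 22.8, 45.6, 68.4]
```

In hardware these delays become per-channel trigger offsets in the transmitting module; sweeping θ from 0° to 20° reshapes the delay profile and steers the radiated acoustic beam accordingly.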


Author(s):  
Mohit Panwar ◽  
Rohit Pandey ◽  
Rohan Singla ◽  
Kavita Saxena

Every day we encounter many people who are deaf or hard of hearing, and there are few technologies that help them interact with others. Sign language is used by deaf and hard-of-hearing people to exchange information within their own community and with other people. Computer recognition of sign language spans everything from sign-gesture acquisition to text/speech generation. Sign gestures can be classified as static or dynamic; static gesture recognition is simpler than dynamic gesture recognition, but both recognition systems are important to the human community. The steps of American Sign Language (ASL) recognition are described in this survey. Image classification and machine learning can be used to help computers recognize sign language, which can then be interpreted by other people. Earlier approaches were glove-based, requiring the person to wear a hardware glove while the hand movements are captured, which is uncomfortable for practical use. Here we use a vision-based method. Convolutional neural networks and a mobile SSD model are employed in this paper to recognize sign-language gestures. Preprocessing was performed on the images, which then served as the cleaned input, and TensorFlow is used for training. A system will be developed that serves as a tool for sign language detection. Keywords: ASL recognition system, convolutional neural network (CNN), classification, real time, TensorFlow
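The preprocessing step mentioned above, turning raw camera frames into cleaned classifier input, typically means grayscale conversion, downsampling, and normalization. A minimal numpy sketch (the 8x8 output size is illustrative; real gesture models use larger inputs such as 224x224, and the actual paper's pipeline is not specified beyond "preprocessing"):

```python
import numpy as np

def preprocess(frame, size=8):
    """Prepare a square RGB camera frame for a gesture classifier:
    convert to grayscale, downsample by block averaging, and scale
    pixel values to [0, 1]."""
    gray = frame.astype(float) @ [0.299, 0.587, 0.114]   # RGB -> luma
    h, w = gray.shape
    block = h // size
    small = (gray[:block * size, :block * size]
             .reshape(size, block, size, block).mean(axis=(1, 3)))
    return small / 255.0

frame = np.random.default_rng(0).integers(0, 256, (64, 64, 3))
x = preprocess(frame)
print(x.shape)  # (8, 8)
```

The normalized array is what a CNN (or an SSD detector cropped to the hand region) would consume; keeping the transform fixed between training and inference is what makes the cleaned input consistent.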


2021 ◽  
Author(s):  
Ayman Amer ◽  
Ali Alshehri ◽  
Hamad Saiari ◽  
Ali Meshaikhis ◽  
Abdulaziz Alshamrany

Abstract Corrosion under insulation (CUI) is a critical challenge that affects the integrity of assets, and the oil and gas industry is not immune. Its severity arises from its hidden nature, as it can often go unnoticed. CUI is stimulated, in principle, by moisture ingress through the insulation layers to the surface of the pipeline. This Artificial Intelligence (AI)-powered detection technology stemmed from an urgent need to detect the presence of these corrosion types. The new approach is based on a cyber-physical (CP) system that maximizes the potential of thermographic imaging by using a machine learning application of artificial intelligence. In this work, we describe how common image-processing techniques for infrared images of assets can be enhanced using a machine learning approach, allowing the detection of locations highly vulnerable to corrosion by pinpointing CUI anomalies and areas of concern. The machine learning examines the progression of thermal images captured over time; corrosion and the factors that cause this degradation are predicted by extracting thermal-anomaly features and correlating them with corrosion and irregularities in the structural integrity of assets, verified visually during the initial learning phase of the ML algorithm. The ML classifier has shown outstanding results in predicting CUI anomalies, with a predictive accuracy in the range of 85 to 90% projected from 185 real field assets. IR imaging by itself is subjective and operator-dependent; with this cyber-physical transfer-learning approach, such dependency has been eliminated. The results and conclusions of this work on real field assets in operation demonstrate the feasibility of this technique to predict and detect thermal anomalies directly correlated with CUI.
This innovative work has led to the development of a cyber-physical system that meets the demands of inspection units across the oil and gas industry, providing a real-time system and online assessment tool to monitor the presence of CUI, enhancing the output of thermography technologies using artificial intelligence (AI) and machine learning. Additional benefits of this approach include safety enhancement through non-contact online inspection and cost savings through reduced scaffolding and downtime.
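The thermal-anomaly extraction described above can be sketched as comparing each pixel of a new infrared frame against a per-pixel baseline built from earlier frames and flagging large deviations. In this minimal sketch the frames are synthetic and the z-score threshold is illustrative; the actual tool feeds such anomaly features into a trained ML classifier rather than a fixed threshold:

```python
import numpy as np

def thermal_anomalies(frames, new_frame, z_thresh=3.0):
    """Flag pixels in `new_frame` whose temperature deviates from the
    per-pixel history in `frames` by more than z_thresh standard
    deviations: a simple thermal-anomaly feature map."""
    frames = np.asarray(frames, dtype=float)
    mean = frames.mean(axis=0)
    std = frames.std(axis=0) + 1e-6          # avoid divide-by-zero
    return np.abs(new_frame - mean) / std > z_thresh

rng = np.random.default_rng(2)
history = 30.0 + rng.normal(0, 0.5, (10, 4, 4))   # 10 past IR frames (degC)
frame = history.mean(axis=0).copy()
frame[2, 2] += 5.0                                # inject one anomalous pixel
mask = thermal_anomalies(history, frame)
print(mask[2, 2], int(mask.sum()))  # → True 1
```

Tracking such masks over successive inspections is what lets the classifier correlate persistent thermal anomalies with moisture ingress and CUI, independent of the operator's judgment.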

