On-Line, Real-Time Recognition System on Telegraph Codes: TETAC

1975 ◽  
Vol AES-11 (4) ◽  
pp. 456-464
Author(s):  
Hisashi Yasunaga ◽  
Hitoshi Mochizuki
Sensors ◽  
2021 ◽  
Vol 21 (2) ◽  
pp. 405
Author(s):  
Marcos Lupión ◽  
Javier Medina-Quero ◽  
Juan F. Sanjuan ◽  
Pilar M. Ortigosa

Activity Recognition (AR) is an active research topic focused on detecting human actions and behaviours in smart environments. In this work, we present DOLARS (Distributed On-line Activity Recognition System), an on-line activity recognition platform in which data from heterogeneous sensors, including binary, wearable and location sensors, are evaluated in real time. Descriptors and metrics from the heterogeneous sensor data are integrated into a common feature vector, extracted by a sliding-window approach under real-time conditions. DOLARS provides a distributed architecture in which: (i) the stages of AR data processing are deployed on distributed nodes; (ii) temporal cache modules compute metrics that aggregate sensor data so feature vectors can be built efficiently; (iii) publish-subscribe models both spread sensor data and orchestrate the nodes (communication and replication) for computing AR; and (iv) machine learning algorithms classify and recognize the activities. A successful case study of daily-activity recognition carried out in the Smart Lab of the University of Almería (UAL) is presented. The results show encouraging performance in recognizing sequences of activities and demonstrate the need for distributed architectures to achieve real-time recognition.
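The sliding-window feature extraction described above can be sketched as follows. This is a minimal illustration, not the DOLARS implementation: the event format, sensor names and the per-sensor aggregates (count and mean) are assumptions standing in for the paper's descriptors and metrics.

```python
def window_features(events, t_now, width):
    """Aggregate heterogeneous sensor events inside the sliding time
    window [t_now - width, t_now] into one flat feature vector.
    Each event is a tuple (timestamp, sensor_id, value)."""
    window = [e for e in events if t_now - width <= e[0] <= t_now]
    sensors = sorted({sid for _, sid, _ in window})
    features = []
    for sid in sensors:
        values = [v for _, s, v in window if s == sid]
        # one (sensor, count, mean) triple per sensor seen in the window
        features.append((sid, len(values), sum(values) / len(values)))
    return features

events = [(0.0, "door", 1), (1.0, "acc", 1), (2.0, "acc", 3), (9.0, "door", 0)]
print(window_features(events, t_now=3.0, width=5.0))
# -> [('acc', 2, 2.0), ('door', 1, 1.0)]  (the t=9.0 event falls outside the window)
```

In a real-time deployment the window would advance with the clock and the resulting vectors would feed the classifier stage on each slide.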


2008 ◽  
Vol 381-382 ◽  
pp. 375-378
Author(s):  
K.T. Song ◽  
M.J. Han ◽  
F.Y. Chang ◽  
S.H. Chang

The capability to recognize human facial expressions plays an important role in advanced human-robot interaction. By recognizing facial expressions, a robot can interact with a user in a more natural and friendly manner. In this paper, we propose a facial expression recognition system based on an embedded image processing platform that classifies facial expressions on-line in real time. A low-cost embedded vision system has been designed and realized for robotic applications using a CMOS image sensor and a digital signal processor (DSP). The current design acquires thirty 640x480 image frames per second (30 fps). The proposed emotion recognition algorithm has been successfully implemented on this real-time vision system. Experimental results on a pet robot show that the robot can interact with a person in a responsive manner. The developed image processing platform accelerates recognition to 25 recognitions per second, with an average on-line recognition rate of 74.4% across five facial expressions.
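A camera at 30 fps paired with a recognizer at 25 recognitions per second implies some frames must be skipped. The sketch below simulates one common scheduling policy for such a mismatch (latest-frame-wins); the paper does not state which policy the DSP platform uses, so this is purely an illustration of the throughput arithmetic, using integer milliseconds to avoid float drift.

```python
def simulate(camera_fps, recog_ms, duration_ms):
    """Latest-frame-wins loop: each cycle the recognizer grabs the newest
    captured frame and classifies it for recog_ms; frames captured while
    it is busy are discarded, which bounds latency to one recognition."""
    t, processed, last = 0, 0, -1
    while t < duration_ms:
        frame = t * camera_fps // 1000           # newest frame index at time t
        if frame == last:                        # no fresh frame yet: wait for one
            t = (frame + 1) * 1000 // camera_fps + 1
            continue
        last, processed = frame, processed + 1
        t += recog_ms                            # recognizer busy for recog_ms
    total = duration_ms * camera_fps // 1000
    return processed, total - processed

# 30 fps camera, 40 ms per recognition (25 recognitions/s), over one second:
print(simulate(30, 40, 1000))   # -> (25, 5): 25 frames classified, 5 dropped
```

The numbers match the abstract's figures: a 40 ms recognizer against a 30 fps camera sustains exactly 25 recognitions per second.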


Author(s):  
HIROSHI MURASE

This paper describes an on-line recognition system for free-format handwritten Japanese character strings, which may contain characters with separated constituents or overlapping characters. The recognition method, called the candidate lattice method, performs segmentation and recognition of individual character candidates and applies linguistic information to determine the most probable character string, achieving high recognition rates. Special hardware designed to realize a real-time recognition system is also introduced. On this hardware, the method attained a segmentation rate of 98.8% and an overall recognition rate of 98.7% on 105 samples.
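The core idea of combining per-candidate recognition scores with linguistic information can be sketched as a dynamic program over a candidate lattice. This toy uses Latin characters and a made-up bigram table rather than the paper's Japanese character set and language model; only the search structure is illustrative.

```python
def best_string(lattice, bigram):
    """Pick the most probable string from a candidate lattice.
    lattice: one list per character position of (char, recognition_score);
    bigram[(a, b)]: linguistic transition score (0.0 if absent).
    Dynamic programming keeps, for every candidate at each position,
    the best-scoring path that reaches it."""
    best = {c: (s, c) for c, s in lattice[0]}       # char -> (score, path)
    for position in lattice[1:]:
        nxt = {}
        for c, s in position:
            # choose the predecessor maximizing path score + transition
            prev, (score, path) = max(
                ((p, best[p]) for p in best),
                key=lambda kv: kv[1][0] + bigram.get((kv[0], c), 0.0))
            nxt[c] = (score + bigram.get((prev, c), 0.0) + s, path + c)
        best = nxt
    return max(best.values())[1]                    # path of the top-scoring entry

lattice = [[("c", 0.9), ("e", 0.8)],
           [("a", 0.5), ("o", 0.6)],
           [("t", 0.9), ("l", 0.2)]]
bigram = {("c", "a"): 0.5, ("a", "t"): 0.5, ("e", "o"): 0.1}
print(best_string(lattice, bigram))   # -> cat (the bigrams outvote "o"'s higher shape score)
```

The real system additionally hypothesizes segmentations (where one character ends and the next begins), so its lattice branches over segment boundaries as well as character identities.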


1994 ◽  
Vol 33 (01) ◽  
pp. 60-63 ◽  
Author(s):  
E. J. Manders ◽  
D. P. Lindstrom ◽  
B. M. Dawant

Abstract: On-line intelligent monitoring, diagnosis, and control of dynamic systems such as patients in intensive care units necessitates the context-dependent acquisition, processing, analysis, and interpretation of large amounts of possibly noisy and incomplete data. The dynamic nature of the process also requires continuous evaluation and adaptation of the monitoring strategy to respond to changes both in the monitored patient and in the monitoring equipment. Moreover, real-time constraints may imply data losses, whose impact has to be minimized. This paper presents a computer architecture designed to accomplish these tasks. Its main components are a model and a data abstraction module. The model provides the system with a monitoring context related to the patient status. The data abstraction module relies on that information to adapt the monitoring strategy and provide the model with the necessary information. This paper focuses on the data abstraction module and its interaction with the model.
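The interplay between the two components can be pictured as: the model supplies a context, and the abstraction module uses it to decide how raw data are validated and summarized. The sketch below is hypothetical; the contexts, heart-rate bands, and summary statistics are invented for illustration and do not come from the paper.

```python
CONTEXTS = {
    # context -> (low, high): plausible heart-rate band used to reject artifacts;
    # these bands are illustrative values, not clinical recommendations
    "resting": (40, 120),
    "post_op": (50, 140),
}

def abstract_heart_rate(samples, context):
    """Context-dependent data abstraction: filter out samples that are
    implausible in the current context (likely sensor artifacts), then
    summarize what remains for the model."""
    low, high = CONTEXTS[context]
    valid = [s for s in samples if low <= s <= high]
    rejected = len(samples) - len(valid)
    mean = sum(valid) / len(valid) if valid else None
    return {"mean": mean, "rejected": rejected, "context": context}

# 210 bpm is implausible at rest, so it is treated as an artifact:
print(abstract_heart_rate([72, 75, 210, 74], "resting"))
```

Switching the context (say, after surgery) changes the validation band without touching the abstraction code, which is the adaptation mechanism the architecture is built around.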


2010 ◽  
Vol 5 (3) ◽  
Author(s):  
Cheng-Nan Chang ◽  
Li-Ling Lee ◽  
Han-Hsien Huang ◽  
Ying-Chih Chiu

The performance of a real-time-controlled Sequencing Batch Membrane Bioreactor (SBMBR) for removing organic matter and nitrogen from synthetic wastewater was investigated in this study under two specific ammonia loadings, 0.0086 and 0.0045 g NH4+-N gVSS−1 day−1. Laboratory results indicate that both COD and DOC removal exceed 97.5% (w/w); the major benefit of using a membrane for solid-liquid separation is that effluent can be decanted through the membrane while aeration continues during the draw stage. With continued aeration, the sludge cake layer is prevented from forming, alleviating the membrane clogging problem, and significant nitrification activity is observed in the draw stage. With adequate aeration in the oxic stage, nitrogen removal efficiency exceeding 99% can be achieved with the SBMBR system. Furthermore, the SBMBR system was also used to study the occurrence of the ammonia valley and nitrate knee, which can serve as control points for the biological process. Under appropriate ammonia loading rates, usable ammonia-valley and nitrate-knee features are detected, so real-time control of the SBMBR can be based on on-line ORP and pH measurements.
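The ammonia valley is a local minimum in the on-line pH trace marking the end of nitrification; the nitrate knee is the analogous breakpoint in the ORP trace during denitrification. A minimal sketch of detecting such a valley in a sampled signal follows; the pH values and the noise threshold `eps` are illustrative assumptions, not data from the study.

```python
def find_valley(series, eps=0.0):
    """Return the index of the first local minimum ('valley') in a sampled
    signal: the point where the slope turns from falling to rising by more
    than eps (a small dead-band to ignore measurement noise).
    Returns None if no valley is found."""
    for i in range(1, len(series) - 1):
        falling = series[i] - series[i - 1] < -eps
        rising = series[i + 1] - series[i] > eps
        if falling and rising:
            return i
    return None

pH = [7.4, 7.2, 7.0, 6.9, 6.85, 6.9, 7.0, 7.1]
print(find_valley(pH, eps=0.01))   # -> 4: pH bottoms out, signalling NH4+ depletion
```

A real-time controller would run such a test on the incoming measurement stream and end the aeration stage when the valley is confirmed; the mirrored test (a slope breakpoint on falling ORP) flags the nitrate knee during the anoxic stage.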


1999 ◽  
Vol 39 (9) ◽  
pp. 201-207
Author(s):  
Andreas Cassar ◽  
Hans-Reinhard Verworn

Most existing rainfall-runoff models for urban drainage systems were designed for off-line calculations: with a design storm or a historical rain event, the model system simulates the rainfall-runoff processes, the faster the better. Until very recently, hydrodynamic models were considered much too slow for real-time applications. However, with the computing power of today, and even more so of tomorrow, very complex and detailed models can be run on-line and in real time. While the algorithms remain basically the same as for off-line simulation, problems concerning timing, data management and interprocess communication have to be identified and solved. This paper describes the upgrading of the existing hydrodynamic rainfall-runoff model HYSTEM/EXTRAN and the decision-finding model INTL for real-time performance, their implementation on a network of UNIX stations, and the experience of running them within an urban drainage real-time control project. The main focus is not on what the models do but on how they are put into action and made to run smoothly, embedded in all the processes necessary for operational real-time control.
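The timing problem that separates on-line from off-line operation can be sketched as pacing each simulation step against the wall clock and detecting deadline overruns. This is a generic real-time loop pattern, not the HYSTEM/EXTRAN scheduler; the interval and step functions are placeholders.

```python
import time

def run_realtime(step, interval_s, n_steps):
    """Pace a simulation step against the wall clock: step k is due at
    start + (k + 1) * interval_s. If computation overruns its slot, the
    miss is recorded and the loop resynchronizes to absolute deadlines
    instead of drifting (a core difference from off-line simulation,
    where steps simply run back to back as fast as possible)."""
    start = time.monotonic()
    misses = 0
    for k in range(n_steps):
        step(k)                                   # the model's computation
        remaining = start + (k + 1) * interval_s - time.monotonic()
        if remaining > 0:
            time.sleep(remaining)                 # idle until the slot ends
        else:
            misses += 1                           # overran: resync, don't sleep
    return misses

# A fast step meets every 50 ms deadline:
print(run_realtime(lambda k: None, 0.05, 3))   # -> 0
```

Scheduling against absolute deadlines (`start + k * interval`) rather than relative sleeps is what keeps the simulation clock from drifting away from real time, which matters once the model's outputs drive actual control decisions.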


2021 ◽  
Vol 11 (11) ◽  
pp. 4758
Author(s):  
Ana Malta ◽  
Mateus Mendes ◽  
Torres Farinha

Maintenance professionals and other technical staff regularly need to learn to identify new parts in car engines and other equipment. The present work proposes a task-assistant model based on a deep learning neural network. A YOLOv5 network is used to recognize some of the constituent parts of an automobile. A dataset of car engine images was created, and eight car parts were annotated in the images. The neural network was then trained to detect each part. The results show that YOLOv5s successfully detects the parts in real-time video streams with high accuracy, making it useful as an aid for training professionals to work with new equipment using augmented reality. The architecture of an object recognition system using augmented reality glasses is also designed.
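A YOLO-family detector emits many overlapping candidate boxes per object, which are thinned by non-maximum suppression before display. The sketch below shows that post-processing step in isolation; the part labels ("alternator", "oil_filter") are hypothetical examples, not the eight classes from the paper's dataset.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def nms(detections, iou_thresh=0.45):
    """Greedy non-maximum suppression, as applied to a detector's raw
    output: keep the highest-confidence box of each class, drop boxes of
    the same class that overlap it heavily (likely duplicates).
    detections: list of (box, confidence, label)."""
    kept = []
    for det in sorted(detections, key=lambda d: -d[1]):
        if all(iou(det[0], k[0]) < iou_thresh or det[2] != k[2] for k in kept):
            kept.append(det)
    return kept

raw = [((10, 10, 60, 60), 0.92, "alternator"),
       ((12, 11, 62, 63), 0.71, "alternator"),   # duplicate of the first box
       ((200, 40, 260, 90), 0.88, "oil_filter")]
print([d[2] for d in nms(raw)])   # -> ['alternator', 'oil_filter']
```

In an augmented-reality assistant, the surviving boxes would be projected onto the glasses' display as labels anchored to the recognized parts.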

