Protection of Signals in the Video Stream of a Mobile Robotic Group Agent

Author(s):  
Olga Shumskaya ◽  
Andrey Iskhakov ◽  
Anastasia Iskhakova


2020 ◽  
Vol 39 (6) ◽  
pp. 8463-8475
Author(s):  
Palanivel Srinivasan ◽  
Manivannan Doraipandian

Rare event detection is performed using spatial-domain and frequency-domain procedures. The volume of footage from ubiquitous surveillance cameras grows exponentially over time, and monitoring all events manually is an impractical, time-consuming process. Therefore, an automated rare-event detection mechanism is required to make this process manageable. In this work, a Context-Free Grammar (CFG) is developed for detecting rare events in a video stream, and an Artificial Neural Network (ANN) is used to train the CFG. A set of dedicated algorithms performs frame splitting, edge detection, and background subtraction, and converts the processed data into the CFG. The developed CFG is converted into nodes and edges to form a graph, which is given to the input layer of the ANN to classify normal and rare event classes. The graph derived from the CFG on the input video stream is used to train the ANN. The performance of the developed Artificial Neural Network Based Context-Free Grammar – Rare Event Detection (ACFG-RED) is then compared with existing techniques using performance metrics such as accuracy, precision, sensitivity, recall, average processing time, and average processing power. Better metric values were observed for the ANN-CFG model than for the other techniques. The developed model provides a better solution for detecting rare events in video streams.
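For concreteness, a minimal sketch of the per-frame preprocessing stage the abstract names (frame splitting, background subtraction, edge detection) is given below using OpenCV's Python bindings. The function name, the MOG2 background model, and the Canny thresholds are assumptions for illustration, not details from the paper; the CFG tokenization and ANN stages are only indicated in comments.

```python
import cv2

def preprocess_stream(video_path):
    cap = cv2.VideoCapture(video_path)
    # MOG2 is one common background model; the paper's exact choice is unstated.
    bg_subtractor = cv2.createBackgroundSubtractorMOG2()
    per_frame = []
    while True:
        ok, frame = cap.read()                    # frame-split step
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        fg_mask = bg_subtractor.apply(frame)      # background subtraction
        edges = cv2.Canny(gray, 100, 200)         # edge detection
        # The foreground edges would then be tokenized into CFG terminals and
        # assembled into the node/edge graph fed to the ANN (omitted here).
        per_frame.append((fg_mask, edges))
    cap.release()
    return per_frame
```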


2015 ◽  
Vol 18 (2) ◽  
pp. 042 ◽  
Author(s):  
Mehmet Ezelsoy ◽  
Baris Caynak ◽  
Muhammed Bayram ◽  
Kerem Oral ◽  
Zehra Bayramoglu ◽  
...  

Background: Minimally invasive bypass grafting surgery has entered the clinical routine in several centers around the world, with increasing popularity over the last decade. In our study, we aimed to compare minimally invasive coronary artery bypass grafting surgery with conventional bypass grafting surgery in isolated proximal left anterior descending artery (LAD) lesions.
Methods: Patients with proximal LAD lesions treated between January 2004 and December 2011 with either robotically assisted minimally invasive coronary artery bypass surgery or conventional bypass surgery were included in the study. In Group 1, coronary bypass with cardiopulmonary bypass and complete sternotomy was applied to 35 patients; in Group 2, robotically assisted minimally invasive bypass surgery was applied to 35 patients. The demographic, preoperative, perioperative, and postoperative data were collected retrospectively.
Results: The mean follow-up time was 5.7 ± 1.7 years in the conventional bypass group and 7.3 ± 1.3 years in the robotic group. There was no postoperative transient ischemic attack (TIA), wound infection, mortality, or need for an intra-aortic balloon pump (IABP) in any of the patients. In the conventional bypass group, blood transfusion and ventilation time were significantly higher (P < .05) than in the robotic group. The intensive care unit (ICU) stay and hospital stay were remarkably shorter in the robotic group (P < .01). The postoperative pneumonia rate was significantly higher (20%) in the conventional bypass group (P < .01). The postoperative day 1 pain score was higher in the robotic group (P < .05), whereas the postoperative day 3 pain score was higher in the conventional bypass group (P < .05). The graft patency rate was 88.6% in the conventional bypass group and 91.4% in the robotic bypass group, a difference that was not statistically significant (P > .05).
Conclusion: In isolated proximal LAD stenosis, robotically assisted minimally invasive coronary artery bypass grafting surgery requires fewer blood products and is associated with shorter ICU and hospital stays and less pain in the early postoperative period compared with conventional surgery. The results of our study, which are consistent with past studies, lead us to recognize the importance of minimally invasive interventions and the need to perform them more frequently in the future.


2019 ◽  
Vol 4 (91) ◽  
pp. 21-29 ◽  
Author(s):  
Yaroslav Trofimenko ◽  
Lyudmila Vinogradova ◽  
Evgeniy Ershov

2017 ◽  
Vol 2 (1) ◽  
pp. 80-87
Author(s):  
Puyda V. ◽  
Stoian A.

Detecting objects in a video stream is a typical problem in modern computer vision systems, which are used in many areas. Object detection can be performed both on static images and on frames of a video stream. Essentially, object detection means finding color and intensity non-uniformities that can be treated as physical objects. In addition, the coordinates, size, and other characteristics of these non-uniformities can be computed and used to solve other computer vision problems such as object identification. In this paper, we study three algorithms that can detect objects of different nature and are based on different approaches: detection of color non-uniformities, frame difference, and feature detection. As input data, we use a video stream obtained from a video camera or from an mp4 video file. Simulation and testing of the algorithms were done on a computer based on open-source hardware, built around the Broadcom BCM2711 quad-core Cortex-A72 (ARM v8) 64-bit SoC running at 1.5 GHz. The software was created in Visual Studio 2019 using OpenCV 4 on Windows 10 and on the same open-source hardware platform running Linux (Raspbian Buster OS). The paper compares the methods under consideration. The results can be used in research and development of modern computer vision systems for different purposes. Keywords: object detection, feature points, keypoints, ORB detector, computer vision, motion detection, HSV color model
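As an illustration of the three approaches compared in the paper (color non-uniformities, frame difference, and ORB feature detection), here is a minimal sketch using OpenCV's Python bindings. The HSV range, the difference threshold, and the ORB feature count are illustrative assumptions, not values from the paper.

```python
import cv2

cap = cv2.VideoCapture(0)              # camera index; an .mp4 path also works
orb = cv2.ORB_create(nfeatures=500)    # detector for the feature-based approach
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # 1) Color non-uniformities: threshold in HSV space (range is an example)
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    color_mask = cv2.inRange(hsv, (35, 50, 50), (85, 255, 255))

    # 2) Frame difference: pixels that changed between consecutive frames
    diff = cv2.absdiff(gray, prev_gray)
    _, motion_mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)

    # 3) Feature detection: ORB keypoints on the current frame
    keypoints = orb.detect(gray, None)

    prev_gray = gray
cap.release()
```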


2013 ◽  
Vol 18 (2-3) ◽  
pp. 49-60 ◽  
Author(s):  
Damian Dudziński ◽  
Tomasz Kryjak ◽  
Zbigniew Mikrut

Abstract In this paper, a human action recognition algorithm is described which uses background generation with shadow elimination, silhouette description based on simple geometrical features, and a finite state machine for recognizing particular actions. The performed tests indicate that this approach obtains an 81% correct recognition rate while allowing real-time processing of a 360 × 288 video stream.
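A minimal sketch of the pipeline described above, under assumed thresholds: background subtraction with shadow removal, a single geometric silhouette feature (bounding-box aspect ratio), and a finite state machine over it. The state names and threshold values are illustrative, not the paper's.

```python
import cv2

# MOG2 with detectShadows=True marks shadow pixels as 127, so they can be
# removed before silhouette extraction (the paper's exact method may differ).
bg = cv2.createBackgroundSubtractorMOG2(detectShadows=True)

class ActionFSM:
    def __init__(self):
        self.state = "STANDING"

    def step(self, aspect_ratio):
        # Tall, narrow silhouette -> standing; low, wide -> lying.
        if aspect_ratio > 1.5:
            self.state = "STANDING"
        elif aspect_ratio < 0.7:
            self.state = "LYING"
        else:
            self.state = "BENDING"
        return self.state

def process_frame(frame, fsm):
    mask = bg.apply(frame)
    mask[mask == 127] = 0                      # drop pixels marked as shadow
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return fsm.state
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    return fsm.step(h / w)
```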


2020 ◽  
Vol 10 (19) ◽  
pp. 6885
Author(s):  
Sahar Ujan ◽  
Neda Navidi ◽  
Rene Jr Landry

Radio Frequency Interference (RFI) detection and characterization play a critical role in ensuring the security of all wireless communication networks. Advances in Machine Learning (ML) have led to the deployment of many robust techniques dealing with various types of RFI. To sidestep the complicated feature extraction step that is otherwise unavoidable in ML, we propose an efficient Deep Learning (DL)-based methodology using transfer learning to determine both the type of the received signals and their modulation type. To this end, the scalogram of the received signals is used as the input of pretrained convolutional neural networks (CNNs), followed by a fully-connected classifier. This study considers a digital video stream as the signal of interest (SoI), transmitted in a real-time satellite-to-ground communication using the DVB-S2 standard. To create the RFI dataset, the SoI is combined with three well-known jammers, namely continuous-wave interference (CWI), multi-continuous-wave interference (MCWI), and chirp interference (CI). Four well-known pretrained CNN architectures, namely AlexNet, VGG-16, GoogleNet, and ResNet-18, are investigated for feature extraction to recognize the visual RFI patterns directly from pixel images with minimal preprocessing. Moreover, the robustness of the proposed classifiers is evaluated on data generated at different signal-to-noise ratios (SNRs).
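A hedged sketch of the transfer-learning setup the abstract describes, using ResNet-18 (one of the four architectures studied) as a frozen feature extractor with a new fully-connected head over the scalogram classes. The class count, optimizer, and learning rate are assumptions, not the paper's settings.

```python
import torch
import torch.nn as nn
from torchvision import models

num_classes = 4                      # e.g., clean SoI plus the three jammers
model = models.resnet18(pretrained=True)
for p in model.parameters():         # freeze the pretrained convolutional layers
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, num_classes)  # new classifier head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
# Training loop over batches of scalogram images (dataloader omitted):
# for images, labels in loader:
#     optimizer.zero_grad()
#     loss = criterion(model(images), labels)
#     loss.backward()
#     optimizer.step()
```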


2020 ◽  
Vol 6 (1) ◽  
Author(s):  
Matthias Ivantsits ◽  
Lennart Tautz ◽  
Simon Sündermann ◽  
Isaac Wamala ◽  
Jörg Kempfert ◽  
...  

Abstract Minimally invasive surgery is increasingly utilized for mitral valve repair and replacement. The intervention is performed with an endoscopic field of view on the arrested heart. Extracting the necessary information from the live endoscopic video stream is challenging due to the moving camera position, the high variability of defects, and the occlusion of structures by instruments. During such minimally invasive interventions there is no time to segment regions of interest manually. We propose a real-time-capable deep-learning-based approach to detect and segment the relevant anatomical structures and instruments. For universal deployment of the proposed solution, we evaluate it on pixel accuracy as well as on distance measurements of the detected contours. The U-Net, Google's DeepLab v3, and the Obelisk-Net models are cross-validated, with DeepLab showing superior results in both pixel accuracy and distance measurements.
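For concreteness, a minimal sketch of the pixel-accuracy evaluation mentioned above, assuming integer-labeled segmentation masks. The Dice helper is a common companion metric added here as an assumption; the paper's contour-distance measurements are not reproduced.

```python
import numpy as np

def pixel_accuracy(pred: np.ndarray, gt: np.ndarray) -> float:
    """Fraction of pixels whose predicted class matches the ground truth."""
    return float((pred == gt).mean())

def dice_score(pred: np.ndarray, gt: np.ndarray, cls: int) -> float:
    """Per-class Dice overlap (an illustrative companion metric)."""
    p, g = pred == cls, gt == cls
    denom = p.sum() + g.sum()
    return 2.0 * np.logical_and(p, g).sum() / denom if denom else 1.0
```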


Author(s):  
Byron D. Patton ◽  
Daniel Zarif ◽  
Donna M. Bahroloomi ◽  
Iam C. Sarmiento ◽  
Paul C. Lee ◽  
...  

Objective: In the tide of robot-assisted minimally invasive surgery, few cases of robot-assisted pneumonectomy exist in the literature. This study evaluates the perioperative outcomes and risk factors for conversion to thoracotomy with an initial robotic approach to pneumonectomy for lung cancer. Methods: This study is a single-center retrospective review of all pneumonectomies for lung cancer with an initial robotic approach between 2015 and 2019. Patients were divided into 2 groups: surgeries completed robotically and surgeries converted to thoracotomy. Patient demographics, preoperative clinical data, surgical pathology, and perioperative outcomes were compared for meaningful differences between the groups. Results: Thirteen patients underwent robotic pneumonectomy, with 8 completed robotically and 5 converted to thoracotomy. There were no significant differences in patient characteristics between the groups. The Robotic group had a shorter operative time (P < 0.01) and less estimated blood loss (P = 0.02). More lymph nodes were harvested in the Robotic group (P = 0.08), but without statistical significance. There were 2 major complications in the Robotic group and none in the Conversion group. Neither tumor size nor stage was predictive of conversion to thoracotomy. Conversions decreased over time, with a majority occurring in the first 2 years. There were no conversions for bleeding and no mortalities. Conclusions: Robotic pneumonectomy for lung cancer is a safe procedure and a reasonable alternative to thoracotomy. With meticulous technique, major bleeding can be avoided and most procedures can be completed robotically. Larger studies are needed to elucidate any advantages of a robotic versus open approach.


Sensors ◽  
2021 ◽  
Vol 21 (2) ◽  
pp. 391
Author(s):  
Luca Bigazzi ◽  
Stefano Gherardini ◽  
Giacomo Innocenti ◽  
Michele Basso

In this paper, solutions for the precise maneuvering of small (e.g., 350-class) autonomous Unmanned Aerial Vehicles (UAVs) are designed and implemented through smart modifications of inexpensive mass-market technologies. The considered class of vehicles suffers from a limited payload, and, therefore, only a limited number of sensors and computing devices can be installed on board. To make the prototype capable of moving autonomously along a fixed trajectory, a "cyber-pilot", able on demand to replace the human operator, has been implemented on an embedded control board. This cyber-pilot overrides the commands thanks to a custom hardware signal mixer. The drone is able to localize itself in the environment without ground assistance by using a camera, possibly mounted on a 3 Degrees Of Freedom (DOF) gimbal suspension. A computer vision system processes the video stream, picking out landmarks with known absolute position and orientation. This information is fused with accelerations from a 6-DOF Inertial Measurement Unit (IMU) to generate a "virtual sensor" which provides refined estimates of the pose, absolute position, speed, and angular velocities of the drone. Due to the importance of this sensor, several fusion strategies have been investigated. The resulting data are finally fed to a control algorithm featuring a number of uncoupled digital PID controllers which work to bring the displacement from the desired trajectory to zero.
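A minimal sketch of one of the uncoupled digital PID controllers mentioned above, in discrete time. The gains, the time step, and the variable names are placeholder assumptions, not values from the paper.

```python
class PID:
    """Single-axis digital PID controller with a fixed sampling period."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt                 # accumulated error
        derivative = (error - self.prev_error) / self.dt # error rate of change
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# One independent controller per controlled axis, e.g.:
# pid_x = PID(kp=1.2, ki=0.1, kd=0.05, dt=0.01)
# correction = pid_x.update(desired_x, estimated_x)
```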

