Sliding Windows: A Software Method Suitable for Real-Time Inspection of Textile Surfaces

2004 ◽  
Vol 74 (7) ◽  
pp. 646-651 ◽  
Author(s):  
C. Anagnostopoulos ◽  
I. Anagnostopoulos ◽  
D. Vergados ◽  
E. Kayafas ◽  
V. Loumos
2021 ◽  
pp. 107561
Author(s):  
Wu Shanshan ◽  
Yu Ge ◽  
Yu Yaxin ◽  
Ou Zhengyu ◽  
Yang Xinhua ◽  
...  
Author(s):  
Zhongliang Yang ◽  
Hao Yang ◽  
Ching-Chun Chang ◽  
Yongfeng Huang ◽  
Chin-Chen Chang

Sensors ◽  
2020 ◽  
Vol 20 (17) ◽  
pp. 4758
Author(s):  
Jen-Kai Tsai ◽  
Chen-Chien Hsu ◽  
Wei-Yen Wang ◽  
Shao-Kang Huang

Action recognition has gained great attention in automatic video analysis, greatly reducing the cost of human resources for smart surveillance. Most methods, however, focus on detecting a single action event for one person in a well-segmented video, rather than recognizing multiple actions performed by more than one person at the same time in an untrimmed video. In this paper, we propose a deep learning-based multiple-person action recognition system for use in various real-time smart surveillance applications. By capturing a video stream of the scene, the proposed system can detect and track multiple people appearing in the scene and subsequently recognize their actions. Thanks to the high resolution of the video frames, we establish a zoom-in function to obtain more satisfactory action recognition results when people in the scene are too far from the camera. To further improve accuracy, recognition results from the Inflated 3D ConvNet (I3D) with multiple sliding windows are processed by a non-maximum suppression (NMS) approach to obtain a more robust decision. Experimental results show that the proposed method can perform multiple-person action recognition in real time, making it suitable for applications such as long-term care environments.
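The merging step described above can be illustrated with a minimal sketch of temporal non-maximum suppression over scored sliding windows. The interval format, IoU threshold, and function name are assumptions for illustration, not the paper's exact procedure:

```python
# Hypothetical sketch of temporal NMS over overlapping sliding-window
# action scores (e.g. per-window I3D confidences). Details such as the
# (start, end, score) tuple format and the 0.5 threshold are assumed.

def temporal_nms(windows, iou_threshold=0.5):
    """Keep the highest-scoring windows, suppressing heavy overlaps.

    windows: list of (start_frame, end_frame, score) tuples.
    Returns the retained windows, best score first.
    """
    # Visit candidate windows in order of decreasing confidence.
    ordered = sorted(windows, key=lambda w: w[2], reverse=True)
    kept = []
    for start, end, score in ordered:
        suppress = False
        for ks, ke, _ in kept:
            # Temporal intersection-over-union of the two intervals.
            inter = max(0, min(end, ke) - max(start, ks))
            union = (end - start) + (ke - ks) - inter
            if union > 0 and inter / union > iou_threshold:
                suppress = True
                break
        if not suppress:
            kept.append((start, end, score))
    return kept
```

For example, of two windows covering frames 0–16 and 4–20 (IoU 0.6), only the higher-scoring one survives, while a distant window is untouched.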


Robotica ◽  
2015 ◽  
Vol 35 (1) ◽  
pp. 85-100
Author(s):  
Caio César Teodoro Mendes ◽  
Fernando Santos Osório ◽  
Denis Fernando Wolf

SUMMARY: An efficient obstacle detection technique is required so that navigating robots can avoid obstacles and potential hazards. This task is usually simplified by relying on structural patterns. However, obstacle detection constitutes a challenging problem in unstructured unknown environments, where such patterns may not exist. Talukder et al. (2002, IEEE Intelligent Vehicles Symposium, pp. 610–618) successfully derived a method to deal with such environments. Nevertheless, the method has a high computational cost, and researchers who employ it usually rely on approximations to achieve real-time performance. We hypothesize that by using a graphics processing unit (GPU), the computing time of the method can be significantly reduced. Throughout the implementation process, we developed a general framework for processing dynamically-sized sliding windows on a GPU. The framework can be applied to other problems that require similar computation. Experiments were performed with a stereo camera and an RGB-D sensor, where the GPU implementations were compared to multi-core and single-core CPU implementations. The results show a significant gain in computational performance; for example, in one instance the GPU implementation is almost 90 times faster than the single-core one.
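A common way to make per-pixel, variably-sized window sums cheap, and a CPU-side analogue of the kind of computation such a framework parallelizes, is a summed-area (integral) image, which gives each window sum in O(1) regardless of window size. The sizing rule and function below are invented for illustration, not taken from the paper:

```python
# Hypothetical sketch: dynamically-sized sliding windows evaluated via a
# summed-area table. The per-pixel half-width array `sizes` (e.g. larger
# windows for image rows nearer the camera) is an assumed illustration.
import numpy as np

def window_means(image, sizes):
    """Mean over a square window centred on each pixel.

    image: 2-D float array.
    sizes: 2-D int array of window half-widths, one per pixel.
    """
    h, w = image.shape
    # Integral image with a zero border, so corner lookups need no
    # special-casing at the top/left edges.
    sat = np.zeros((h + 1, w + 1))
    sat[1:, 1:] = image.cumsum(axis=0).cumsum(axis=1)
    out = np.empty((h, w), dtype=float)
    for y in range(h):
        for x in range(w):
            r = int(sizes[y, x])
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            # Four-corner lookup yields the window sum in O(1).
            s = sat[y1, x1] - sat[y0, x1] - sat[y1, x0] + sat[y0, x0]
            out[y, x] = s / ((y1 - y0) * (x1 - x0))
    return out
```

On a GPU, the two nested loops map naturally to one thread per pixel, which is what makes the per-window cost independent of window size attractive for real-time use.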


1979 ◽  
Vol 44 ◽  
pp. 41-47
Author(s):  
Donald A. Landman

This paper describes some recent results of our quiescent prominence spectrometry program at the Mees Solar Observatory on Haleakala. The observations were made with the 25 cm coronagraph/coudé spectrograph system using a silicon vidicon detector. This detector consists of 500 contiguous channels covering approximately 6 or 80 Å, depending on the grating used. The instrument is interfaced to the Observatory's PDP 11/45 computer system, and has the important advantages of wide spectral response, linearity, and signal averaging with real-time display. Its principal drawback is the relatively small target size. For the present work, the aperture was about 3″ × 5″. Absolute intensity calibrations were made by measuring quiet regions near Sun center.


Author(s):  
Alan S. Rudolph ◽  
Ronald R. Price

We have employed cryoelectron microscopy to visualize events that occur during the freeze-drying of artificial membranes, using real-time video capture techniques. Artificial membranes, or liposomes, are spherical structures with an internal aqueous space; they are stabilized by water, which provides the driving force for their spontaneous self-assembly. Previous assays of damage to these structures induced by freeze-drying reveal that the two principal deleterious events are 1) fusion of liposomes and 2) leakage of contents trapped within the liposome [1]. In the past, the only way to assess these events was to examine the liposomes following the dehydration event. This technique allows the events to be monitored in real time as the liposomes destabilize and as water is sublimed at cryo temperatures in the vacuum of the microscope. The mechanisms by which liposomes are compromised by freeze-drying are largely unknown. This technique has shown that cryo-protectants such as glycerol and carbohydrates are able to maintain liposomal structure throughout the drying process.

