Automatic Target Detection: Recently Published Documents

Total documents: 143 (five years: 18)
H-index: 13 (five years: 4)

Author(s):  
Dounia Daghouj ◽  
Marwa Abdellaoui ◽  
Mohammed Fattah ◽  
Said Mazer ◽  
Youness Balboul ◽  
...  

The pulse ultra-wideband (UWB) radar works by switching very short bursts of energy into an ultra-broadband transmission chain; the emitted UWB signal is an ultrashort, carrier-free pulse on the order of nanoseconds. Such systems can indicate the presence and distance of a remote object, called a target, and determine its size, shape, speed, and trajectory. In this paper, we present a UWB radar system that detects the presence of a target and localizes it in a road environment, based on correlating the reflected signal with the reference signal and determining the correlation peak.
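As an illustration of the correlation-peak principle the abstract describes (not the authors' implementation), the following sketch correlates a simulated echo with a reference UWB pulse and converts the peak lag into a range estimate; the sampling rate, pulse shape, and target distance are assumed values.

```python
# Sketch: range estimation from the correlation peak of a UWB echo (assumed values).
import numpy as np

fs = 20e9                      # assumed sampling rate, 20 GS/s
c = 3e8                        # propagation speed (m/s)
t = np.arange(0, 2e-9, 1/fs)   # 2 ns reference window

# Reference pulse: a Gaussian monocycle, a common carrier-free UWB waveform
tau = 0.25e-9
ref = (t - t.mean()) * np.exp(-((t - t.mean())**2) / (2 * tau**2))

# Simulated received signal: the pulse delayed by a round-trip time plus noise
delay_s = 66.7e-9              # round trip for a target about 10 m away
rx = np.zeros(4000)
start = int(delay_s * fs)
rx[start:start + ref.size] += 0.3 * ref
rx += 0.02 * np.random.randn(rx.size)

# Correlate the echo with the reference and take the lag of the correlation peak
corr = np.correlate(rx, ref, mode="valid")
peak_lag = np.argmax(np.abs(corr))
range_m = 0.5 * c * peak_lag / fs   # one-way distance from the round-trip delay
print(f"estimated target range: {range_m:.2f} m")
```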


2022 ◽  
Vol 2022 ◽  
pp. 1-8
Author(s):  
Junfang Song ◽  
Yao Fan ◽  
Huansheng Song ◽  
Haili Zhao

In traffic scenarios, vehicle trajectories can provide almost all the dynamic information about moving vehicles. Analyzing vehicle trajectories in a monitored scene captures the dynamic state of road traffic. Associating vehicle trajectories across multiple cameras breaks the isolation of target information between individual cameras and yields an overall picture of road conditions over a large-scale video surveillance area, which helps road traffic managers conduct traffic analysis, prediction, and control. Based on a DBT automatic target detection framework, this paper proposes a cross-camera vehicle trajectory matching method based on the Euclidean distance between trajectory points. For the multi-target vehicle trajectories acquired by a single camera, we first perform 3D trajectory reconstruction using joint camera calibration in the overlapping area, then carry out similarity association between cross-camera trajectories, update the cross-camera trajectories, and complete the hand-over of vehicle trajectories between adjacent cameras. Experiments show that the proposed method resolves the difficulty current tracking techniques have in matching vehicle trajectories across different cameras in complex traffic scenes, and essentially achieves long-term, long-distance continuous tracking and trajectory acquisition of multiple targets across cameras.
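A minimal sketch of the distance-based association idea, assuming trajectories from two cameras have already been reconstructed in a common 3D road frame: tracks are matched greedily by the mean Euclidean distance between their points in the overlap region. The helper names, the threshold, and the toy data are assumptions for illustration.

```python
# Sketch: cross-camera trajectory association by mean point-wise Euclidean distance.
import numpy as np

def trajectory_distance(traj_a, traj_b):
    """Mean point-wise Euclidean distance over the common time span."""
    n = min(len(traj_a), len(traj_b))
    return float(np.mean(np.linalg.norm(traj_a[:n] - traj_b[:n], axis=1)))

def associate(cam1_trajs, cam2_trajs, max_dist=2.0):
    """Greedily link each camera-1 track to the closest camera-2 track
    whose mean distance stays below an assumed threshold (metres)."""
    matches, used = [], set()
    for i, ta in enumerate(cam1_trajs):
        dists = [(trajectory_distance(ta, tb), j)
                 for j, tb in enumerate(cam2_trajs) if j not in used]
        if not dists:
            continue
        d, j = min(dists)
        if d < max_dist:
            matches.append((i, j))
            used.add(j)
    return matches

# Toy example: two vehicles seen by both cameras in the overlap area
cam1 = [np.array([[x, 0.0, 0.0] for x in range(10)], float),
        np.array([[x, 3.5, 0.0] for x in range(10)], float)]
cam2 = [np.array([[x, 3.4, 0.0] for x in range(10)], float),
        np.array([[x, 0.1, 0.0] for x in range(10)], float)]
print(associate(cam1, cam2))   # expected: [(0, 1), (1, 0)]
```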


Author(s):  
Alan Paul A

Our paper presents an approach for improving target detection and shooting systems in an effective, low-cost manner. We reviewed earlier systems and devised a multi-mode system in which each mode has its own advantages and applications. Our system is cost-effective, is built from readily available parts, and has a modular design.


Author(s):  
S. Ban ◽  
T. Kim

Abstract. Recently, with the increasing use of unmanned aerial vehicles (UAVs), radiometric calibration of UAV images has become an important pre-processing step for applications such as vegetation mapping and crop field monitoring. In order to obtain accurate spectral reflectance, some UAVs measure irradiance at the time of image acquisition. However, most UAV systems do not carry such irradiance sensors; in these cases, a vicarious radiometric correction method has to be used. Digital numbers (DNs) of imaged ground reflectance targets are measured, and spectral reflectance is derived from the known reflectance values of those targets. For automated vicarious calibration, a technique for automatically detecting the image locations of ground reflectance targets has been developed. In this study, we report an improved version of automated reflectance target detection and a newly developed semi-automatic reflectance target detection method. Test results showed that, of the 14 reflectance targets, 13 were detected with the automatic target detection method. The undetected target was extracted by the proposed semi-automatic target detection method. An additional test was conducted on the remaining targets to confirm the applicability of our semi-automatic target detection method; these targets were also detected. The proposed automated and semi-automated target detection methods can be used for automated vicarious calibration of UAV images.
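To make the vicarious calibration step concrete, here is a hedged sketch of the empirical-line approach the abstract alludes to: mean DNs extracted at the detected reflectance targets are regressed against their known reflectances, and the resulting gain and offset convert a whole band to surface reflectance. The panel reflectances and DN values are made-up numbers.

```python
# Sketch: empirical-line conversion of UAV image DNs to reflectance (assumed values).
import numpy as np

# Known panel reflectances and the mean DN measured at each detected target
panel_reflectance = np.array([0.05, 0.20, 0.40, 0.60])   # assumed panel values
panel_dn = np.array([1200.0, 4300.0, 8500.0, 12600.0])   # assumed image DNs

# Fit reflectance = gain * DN + offset by least squares over the panels
gain, offset = np.polyfit(panel_dn, panel_reflectance, deg=1)

def dn_to_reflectance(dn_band):
    """Apply the empirical-line model to a UAV image band (NumPy array of DNs)."""
    return np.clip(gain * dn_band + offset, 0.0, 1.0)

sample_band = np.array([[1500.0, 9000.0], [4000.0, 12000.0]])
print(dn_to_reflectance(sample_band))
```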


Sensors ◽  
2021 ◽  
Vol 21 (11) ◽  
pp. 3853
Author(s):  
Fei Qin ◽  
Xiangxi Bu ◽  
Yunlong Liu ◽  
Xingdong Liang ◽  
Jihao Xin

Foreign Object Debris (FOD) refers to any foreign material on the airfield that may damage or threaten aircraft and airport systems. Because of the complex background of airfield pavement and the weak target echoes in long-distance monitoring, it is difficult to detect objects of various types and sizes. Existing FOD radar detection methods have a short effective range, and the radar cross-section of the detectable objects is no less than −20 dBsm. In this paper, we propose an integrated FOD automatic target detection algorithm for millimeter-wave (MMW) surveillance radar to improve small-target detection at long ranges of over 660 m. The signal form of FOD and a model of the ground clutter received by the millimeter-wave radar are first established theoretically. A runway edge detection step, based on discontinuity features, automatically extracts the runway region of interest. A time-domain clutter-map constant false alarm rate (CFAR) algorithm then serves as the core detection processor. Moreover, an explicit, quantitative definition of FOD detection performance is developed; this criterion provides an absolute reference value for all FOD radar systems. A purpose-built FOD frequency-modulated continuous-wave MMW surveillance radar was used, and experiments were carried out at a real airport in Beijing, China. The results validate the effectiveness of the proposed method and its superior FOD target detection performance at long range.
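A clutter-map CFAR detector of the general kind named above can be sketched as follows (this is not the paper's exact processor): each range-azimuth cell keeps a recursively smoothed clutter-power estimate, and a detection is declared when the current return exceeds a scaled version of that estimate. The smoothing factor and threshold multiplier are assumed parameters.

```python
# Sketch: clutter-map CFAR with per-cell exponential smoothing (assumed parameters).
import numpy as np

class ClutterMapCFAR:
    def __init__(self, shape, alpha=12.0, forget=0.9):
        self.clutter = np.zeros(shape)   # per-cell clutter-power estimate
        self.alpha = alpha               # threshold multiplier (false-alarm control)
        self.forget = forget             # exponential forgetting factor

    def update(self, power_map):
        """Process one scan: detect, then fold the scan into the clutter map."""
        detections = power_map > self.alpha * (self.clutter + 1e-12)
        self.clutter = self.forget * self.clutter + (1 - self.forget) * power_map
        return detections

# Toy run: 20 scans of background clutter, then a weak FOD-like target appears
cfar = ClutterMapCFAR((64, 64))
rng = np.random.default_rng(0)
for _ in range(20):
    cfar.update(rng.exponential(1.0, size=(64, 64)))
scan = rng.exponential(1.0, size=(64, 64))
scan[30, 40] += 25.0                     # small target echo above the clutter
print(cfar.update(scan)[30, 40])         # True: the target cell exceeds its threshold
```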


2021 ◽  
Author(s):  
B Janakiramiaha ◽  
Kalyani G ◽  
Karuna A ◽  
Narasimha Prasad L V ◽  
Krishna M

Abstract. Automatic target detection plays a major role in automated war operations. The key concept behind automatic target detection is recognizing military objects in captured images. For object recognition, the Convolutional Neural Network (CNN) is a powerful classification network, but CNNs are generally trained for generic object recognition, and their performance depends mainly on the size of the training set. Training data for military objects is usually available only in small quantities due to operational and security constraints, so CNN performance may degrade sharply. To address this issue, a relatively new neural network architecture called the Capsule Network (CapsNet) is introduced. In this article, a variant of CapsNet called the Multi-level CapsNet framework is proposed for military object recognition with a small training set. The framework is validated on a dataset of military objects collected from the internet, containing five military object classes and similar civilian ones. The proposed framework achieves 96.54% accuracy for military object recognition. Experiments demonstrate that it attains a high recognition precision, superior to many other algorithms such as conventional Support Vector Machines and transfer-learning-based CNNs.
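For readers unfamiliar with CapsNet, the following plain-NumPy sketch shows two standard CapsNet building blocks, the "squash" non-linearity that keeps capsule vector lengths in [0, 1) and the per-class margin loss used to train class capsules; it does not reproduce the paper's multi-level architecture, and the capsule dimensions and toy data are assumptions.

```python
# Sketch: CapsNet squash non-linearity and margin loss (illustrative only).
import numpy as np

def squash(vectors, axis=-1, eps=1e-9):
    """Squash capsule vectors so that their length encodes class probability."""
    sq_norm = np.sum(vectors**2, axis=axis, keepdims=True)
    scale = sq_norm / (1.0 + sq_norm) / np.sqrt(sq_norm + eps)
    return scale * vectors

def margin_loss(lengths, labels, m_plus=0.9, m_minus=0.1, lam=0.5):
    """Margin loss over class-capsule lengths; labels are one-hot (batch, classes)."""
    present = labels * np.maximum(0.0, m_plus - lengths) ** 2
    absent = lam * (1 - labels) * np.maximum(0.0, lengths - m_minus) ** 2
    return float(np.mean(np.sum(present + absent, axis=1)))

# Toy check with 5 object classes and a batch of 2 samples
caps = squash(np.random.randn(2, 5, 16))          # one 16-D capsule per class
lengths = np.linalg.norm(caps, axis=-1)           # class "probabilities"
labels = np.eye(5)[[0, 3]]                        # ground-truth classes 0 and 3
print(lengths.shape, margin_loss(lengths, labels))
```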


Author(s):  
T. Sieberth

Abstract. Photogrammetric processes such as camera calibration, feature and target detection, and referencing are assumed to depend strongly on the quality of the images provided to the process. Consequently, motion-blurred and optically blurred images are usually excluded from photogrammetric processing to suppress their negative influence. To evaluate how much optical blur is acceptable and how large its influence on photogrammetric procedures is, a variety of test environments were established, building on previous motion blur research and including test fields for the analysis of camera calibration. For the evaluation, a DSLR camera as well as a Lytro Illum light field camera were used. The results show that optical blur has a negative influence on photogrammetric procedures, most of all on automatic target detection. With the intervention of an experienced operator and the use of semi-automatic tools, acceptable results can still be achieved.
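Not from the paper, but a common, simple way to screen blurred images before photogrammetric processing is the variance-of-Laplacian sharpness score; the sketch below implements it in plain NumPy, with the rejection threshold and toy images as assumed values.

```python
# Sketch: variance-of-Laplacian sharpness score for pre-screening blurred images.
import numpy as np

def variance_of_laplacian(gray):
    """Sharpness score: variance of the discrete Laplacian of a grayscale image."""
    lap = (-4.0 * gray
           + np.roll(gray, 1, axis=0) + np.roll(gray, -1, axis=0)
           + np.roll(gray, 1, axis=1) + np.roll(gray, -1, axis=1))
    return float(lap.var())

def is_sharp_enough(gray, threshold=100.0):
    """Flag images whose sharpness score falls below an assumed threshold."""
    return variance_of_laplacian(gray) >= threshold

# Toy comparison: random texture versus a heavily smoothed (blurred) copy
rng = np.random.default_rng(1)
sharp = rng.uniform(0, 255, size=(256, 256))
blurred = sharp.copy()
for _ in range(10):                      # crude blur by repeated neighbour averaging
    blurred = 0.25 * (np.roll(blurred, 1, 0) + np.roll(blurred, -1, 0)
                      + np.roll(blurred, 1, 1) + np.roll(blurred, -1, 1))
print(variance_of_laplacian(sharp) > variance_of_laplacian(blurred))   # True
```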


This chapter reviews the optical satellite data gathered during the search for MH370 debris. A limited set of optical sensors is involved: Gaofen-1, WorldView-2, Thaichote, and Pleiades-1A satellite data. Google Earth data is also used to identify debris that might belong to MH370. Automatic target detection based on the spectral signature of the debris is implemented to recognize any segment of MH370 debris. It turns out that most of the debris visible in the satellite images does not belong to MH370; the bright spots probably correspond to floating garbage scattered in the ocean or to clouds.
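As one illustrative form of spectral-signature matching of the kind described here, the sketch below uses the spectral angle mapper (SAM): pixels whose spectrum forms a small angle with a reference debris signature are flagged. The band values, reference spectrum, and angle threshold are assumptions for the example.

```python
# Sketch: spectral angle mapper (SAM) detection against a reference signature.
import numpy as np

def spectral_angle(pixels, reference):
    """Angle (radians) between each pixel spectrum and the reference spectrum."""
    dot = pixels @ reference
    norms = np.linalg.norm(pixels, axis=-1) * np.linalg.norm(reference)
    return np.arccos(np.clip(dot / (norms + 1e-12), -1.0, 1.0))

# Toy 3x3 scene with 4 spectral bands; one pixel resembles the target signature
reference = np.array([0.30, 0.35, 0.40, 0.45])           # assumed debris spectrum
scene = np.full((3, 3, 4), [0.05, 0.10, 0.08, 0.06])     # water-like background
scene[1, 2] = [0.28, 0.36, 0.41, 0.44]                   # candidate debris pixel

angles = spectral_angle(scene.reshape(-1, 4), reference).reshape(3, 3)
detections = angles < 0.10                               # roughly a 6 degree threshold
print(np.argwhere(detections))                           # expected: [[1 2]]
```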

