Detection Strategies for High Slew Rate, Low SNR Star Tracking

2021 ◽  
Author(s):  
Laila Kazemi

This research aims to improve star tracker performance under dynamic conditions. It assesses various image thresholding and centroiding algorithms to improve star tracker centroiding accuracy at moderate slew rates (< 10°/s). Star trackers generally achieve arc-second accuracy in stationary conditions; however, their accuracy degrades as slew rate increases, and motion blur adds to the challenges of star detection. This work presents an image processing algorithm for star images that preserves star tracker detection accuracy and can detect dim stars at slew rates of up to 10°/s. A number of algorithms from the literature were evaluated, and their performance was measured in motion experiments and simulations. The primary performance metrics are the false positive ratio and the false negative ratio of star pixels. This work introduces a new algorithm for star acquisition at moderate slew rates that combines the positive features of existing algorithms.
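
To make the centroiding step and the pixel-level error metrics concrete, the sketch below thresholds a star image, computes an intensity-weighted centroid, and scores detected star pixels against a ground-truth mask. This is a generic sketch, not the algorithm proposed in the thesis; the threshold value and the boolean masks are assumed inputs.

```python
import numpy as np

def centroid_star(image, threshold):
    """Intensity-weighted centroid of all pixels above a global threshold.
    A generic center-of-mass centroider; `threshold` is an assumed scalar."""
    mask = image > threshold
    if not mask.any():
        return None
    ys, xs = np.nonzero(mask)
    weights = image[ys, xs].astype(float)
    return (np.sum(xs * weights) / weights.sum(),
            np.sum(ys * weights) / weights.sum())

def star_pixel_error_ratios(detected_mask, truth_mask):
    """False positive and false negative ratios over star pixels."""
    fp = np.logical_and(detected_mask, ~truth_mask).sum()
    fn = np.logical_and(~detected_mask, truth_mask).sum()
    fp_ratio = fp / detected_mask.sum() if detected_mask.sum() else 0.0
    fn_ratio = fn / truth_mask.sum() if truth_mask.sum() else 0.0
    return fp_ratio, fn_ratio
```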


Author(s):  
Adigun Oyeranmi ◽  
Babatunde Ronke ◽  
Rufai Mohammed ◽  
Aigbokhan Edwin

Fractured bone detection and categorization is currently receiving research attention in computer-aided diagnosis systems because of the ease it has brought to doctors in the classification and interpretation of X-ray images. The choice of an efficient algorithm, or combination of algorithms, is paramount to accurately detecting and categorizing fractures in X-ray images, which is the first stage of diagnosis in the treatment and correction of damaged bones; this is what this research seeks to address. The research design involves data collection, preprocessing, segmentation, feature extraction, classification, and evaluation of the proposed method. The sample dataset comprised X-ray images collected from the Department of Radiology, National Orthopedic Hospital, Igbobi-Lagos, Nigeria, as well as open-access medical image repositories. Image preprocessing involved converting RGB images to grayscale, then sharpening and smoothing with an unsharp masking tool. Segmentation of the preprocessed images was carried out using the entropy method in the first stage and the Canny edge method in the second stage, while feature extraction was performed using the Hough transform. Detection and classification of fracture images employed a combination of two algorithms, K-Nearest Neighbor (KNN) and Support Vector Machine (SVM), for detecting fracture locations based on four classification types: normal, comminuted, oblique, and transverse. Two performance assessment methods were employed to evaluate the developed system. The first evaluation was based on a confusion matrix, which evaluates fracture and non-fracture on the basis of TP (true positive), TN (true negative), FP (false positive), and FN (false negative) counts. The second appraisal was based on the Kappa statistic, which evaluates the type of fracture by determining the accuracy of the categorized fracture bone type. The result of the first assessment for fracture detection shows that 26 out of 40 preprocessed images were fractured, yielding the following performance metrics: accuracy of 90%, sensitivity of 87%, and specificity of 100%. The Kappa coefficient error assessment produced an accuracy of 83% during classification. Based on the experimental results, the proposed method can find suitable use in categorizing fracture types in different bone images.
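
For illustration, a minimal sketch of the preprocessing and segmentation stages (grayscale conversion, unsharp masking, Canny edges) and of the confusion-matrix metrics reported above is shown below, assuming OpenCV for the image operations; the kernel size and Canny thresholds are placeholder values, not the parameters used in the study.

```python
import cv2

def preprocess_and_segment(path):
    """Grayscale conversion, unsharp masking, and Canny edge segmentation.
    Kernel size and Canny thresholds are placeholder values."""
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 1.5)
    sharpened = cv2.addWeighted(gray, 1.5, blurred, -0.5, 0)  # unsharp mask
    return cv2.Canny(sharpened, 50, 150)

def detection_metrics(tp, tn, fp, fn):
    """Accuracy, sensitivity, and specificity from confusion-matrix counts,
    as reported for the 40-image test set."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    return accuracy, sensitivity, specificity
```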


Circulation ◽  
2020 ◽  
Vol 141 (Suppl_1) ◽  
Author(s):  
Åke Olsson ◽  
Magnus Samulesson

Background: Automatic ECG algorithms that use only RR-variability to detect AF have shown high false positive rates. Research has shown that including P-wave presence in the algorithm can increase detection accuracy for AF. Methods: A novel RR- and P-wave-based automatic detection algorithm implemented in the Coala Heart Monitor ("Coala", Coala Life AB, Sweden) was evaluated for detection accuracy by comparison with blinded manual ECG interpretation based on real-world data. Evaluation was conducted on 100 consecutive anonymous printouts of chest- and thumb-ECG waveforms in which the algorithm had detected both irregular RR-rhythms and strong P-waves in either the chest or thumb recording (non-AF episodes classified by the algorithm as Category 12). The recordings, without exclusions, were generated from 5,512 real-world recordings from actual Coala users in Sweden (both OTC and Rx users) during the period of March 5 to March 22, 2019, with no control or influence by the researchers or any other organization or individual. The prevalence of cardiac conditions in the user population was unknown. The blinded recordings were each manually interpreted by a trained cardiologist. The manual interpretation was compared with the automatic analysis performed by the detection algorithm to determine the number of additional false negative indications for AF as presented to the user. Results: The trained cardiologist manually interpreted 0 of the 100 recordings as AF. Manual interpretation showed that the novel automatic AF algorithm yielded a 0% false negative error and 100% negative predictive value (NPV) for detection of AF. Irregular RR-rhythms were detected in 569 recordings (10% of the total of 5,512 recordings). The 100 non-AF recordings containing both irregular RR-rhythms and strong P-waves constituted 18% of all recordings with irregular RR-rhythms. Respiratory sinus arrhythmia was the single most prevalent condition and was found in 47% of irregular RR-rhythms with strong P-waves. Conclusion: The novel, P-wave-based automatic ECG algorithm used in the Coala showed a zero percent false negative error rate for AF detection in ECG recordings with RR-variability but with P-waves present, as compared with manual interpretation by a cardiologist.
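
A short sketch of the two quantities reported in the results, negative predictive value and false negative error, computed from confusion-matrix counts. The abstract does not state the exact denominators, so the definitions below (standard NPV, and FN error as a share of reviewed recordings) are assumptions.

```python
def npv(tn, fn):
    """Negative Predictive Value = TN / (TN + FN); with FN = 0 this is 100%,
    matching the reported result."""
    return tn / (tn + fn) if (tn + fn) else float("nan")

def false_negative_error(fn, n_recordings):
    """Share of reviewed recordings labelled non-AF by the algorithm but read
    as AF by the cardiologist; the denominator choice is an assumption."""
    return fn / n_recordings if n_recordings else 0.0
```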


Author(s):  
Rama Mercy Sam Sigamani

Cyber-physical system safety and security are major concerns for the incorporated components, with their interface standards, communication protocols, physical operational characteristics, and real-time sensing. The seamless integration of computational and distributed physical components with intelligent mechanisms increases the adaptability, autonomy, efficiency, functionality, reliability, safety, and usability of cyber-physical systems. In IoT-enabled cyber-physical systems, cyber security is an essential challenge due to the IoT devices in industrial control systems. Computational intelligence algorithms have been proposed to detect and mitigate cyber-attacks in cyber-physical systems, smart grids, and power systems. The various machine learning approaches towards securing CPS are reviewed based on performance metrics such as detection accuracy, average classification rate, false negative rate, false positive rate, and processing time per packet. A unique feature of CPS considered here is structural adaptation, which facilitates a self-healing CPS.
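
The metrics listed above are standard for binary intrusion detection; a generic sketch of how they are typically computed from labelled predictions follows. The surveyed works may define them slightly differently.

```python
import numpy as np

def ids_metrics(y_true, y_pred, elapsed_seconds, n_packets):
    """Detection accuracy, false positive rate, false negative rate, and
    processing time per packet for a binary detector (1 = attack, 0 = benign).
    Generic definitions; the surveyed papers may define them differently."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    accuracy = (tp + tn) / y_true.size
    fpr = fp / (fp + tn) if (fp + tn) else 0.0
    fnr = fn / (fn + tp) if (fn + tp) else 0.0
    return accuracy, fpr, fnr, elapsed_seconds / n_packets
```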


Author(s):  
Akinboro Solomon ◽  
Emmanuel Olajubu ◽  
Ibrahim Ogundoyin ◽  
Ganiyu Aderounmu

This study designed, simulated, and evaluated the performance of a conceptual framework for an ambient ad hoc home network, with a view to detecting malicious nodes and securing home devices against attacks. The proposed framework, called mobile ambient social trust, consists of mobile devices with a mobile ad hoc network as the communication channel. The trust model for device attacks is an Adaptive Neuro-Fuzzy (ANF) model that considers the global reputation from direct and indirect communication between home devices and remote devices. The model was simulated using Matlab 7.0. In the simulation, the NSL-KDD dataset was used as the input packets, an artificial neural network was used for packet classification, and the ANF system was used for global trust computation. The proposed model was benchmarked against an existing EigenTrust (ET) model using detection accuracy and convergence time as performance metrics. The simulation results on these metrics revealed better performance of the ANF model over the ET model. The framework will secure the home network against unforeseen network disruption and node misbehavior.
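
The abstract does not give the trust formula, so the sketch below only illustrates the general idea of fusing direct and indirect (recommended) reputation into a global trust score and flagging low-trust devices; the weighted average and threshold are stand-ins for the paper's adaptive neuro-fuzzy inference, and both parameter values are assumptions.

```python
import numpy as np

def global_trust(direct, indirect, alpha=0.7):
    """Fuse direct and indirect (recommended) reputation into a global trust
    score per device. A weighted average is used as a stand-in for the
    paper's adaptive neuro-fuzzy inference; `alpha` is an assumed weight.
    Inputs are arrays of scores in [0, 1], one entry per device."""
    return (alpha * np.asarray(direct, dtype=float)
            + (1.0 - alpha) * np.asarray(indirect, dtype=float))

def flag_malicious(trust_scores, threshold=0.5):
    """Flag devices whose global trust falls below an assumed threshold."""
    return np.asarray(trust_scores) < threshold
```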


2020 ◽  
Vol 12 (14) ◽  
pp. 2229
Author(s):  
Haojie Liu ◽  
Hong Sun ◽  
Minzan Li ◽  
Michihisa Iida

Maize plant detection was conducted in this study with the goals of targeted fertilization and reduction of fertilization waste in weed spots and gaps between maize plants. Two kinds of methods were used: color-index-based methods and deep learning (DL). The four color indices used were excess green (ExG), excess red (ExR), ExG minus ExR, and the hue value from the HSV (hue, saturation, value) color space, while the DL methods used were YOLOv3 and YOLOv3_tiny. For practical application, this study focused on performance comparison in detection accuracy, robustness to complex field conditions, and detection speed. Detection accuracy was evaluated from the resulting images, which were divided into three categories: true positive, false positive, and false negative. The robustness evaluation was performed by comparing the average intersection over union of each detection method across different sub-datasets, namely the original subset, a blur-processed subset, an increased-brightness subset, and a reduced-brightness subset. Detection speed was evaluated in frames per second. Results demonstrated that the DL methods outperformed the color-index-based methods in detection accuracy and robustness to complex conditions, while they were inferior to the color-index-based methods in detection speed. This research shows the application potential of deep learning technology in maize plant detection; future efforts are needed to improve its detection speed for practical applications.
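
The color indices named above have standard formulations (ExG = 2g - r - b and ExR = 1.4r - g on chromatic coordinates), and intersection over union is the usual box-overlap ratio. A small sketch of both follows, assuming RGB values scaled to [0, 1] and (x1, y1, x2, y2) boxes; it is illustrative rather than the study's exact implementation.

```python
import numpy as np

def color_indices(rgb):
    """Excess green (ExG), excess red (ExR), and ExG - ExR for an RGB image
    with values scaled to [0, 1]; standard formulations on chromatic
    coordinates. Segmentation thresholds would be chosen separately."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    total = r + g + b + 1e-8                    # avoid division by zero
    r, g, b = r / total, g / total, b / total   # chromatic coordinates
    exg = 2 * g - r - b
    exr = 1.4 * r - g
    return exg, exr, exg - exr

def iou(box_a, box_b):
    """Intersection over union of two (x1, y1, x2, y2) boxes, as used in the
    robustness comparison across sub-datasets."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = ((box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
             + (box_b[2] - box_b[0]) * (box_b[3] - box_b[1]) - inter)
    return inter / union if union else 0.0
```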


2009 ◽  
Vol 297 (2) ◽  
pp. E538-E544 ◽  
Author(s):  
Peter Y. Liu ◽  
Daniel M. Keenan ◽  
Petra Kok ◽  
Vasantha Padmanabhan ◽  
Kevin T. O'Byrne ◽  
...  

Quantifying pulsatile secretion from serial hormone concentration measurements (deconvolution analysis) requires automated, objective, and accurate detection of pulse times to ensure valid estimation of secretion and elimination parameters. Lack of validated pulse identification constitutes a major deficiency in the deconvolution field, because individual pulse size and number reflect regulated processes that are critical for the function and response of secretory glands. To evaluate deconvolution pulse detection accuracy, four empirical models of true-positive markers of pituitary (LH) pulses were used: 1) Sprague-Dawley rats had recordings of hypothalamic arcuate nucleus multiunit electrical activity, 2) ovariectomized ewes underwent sampling of hypothalamo-pituitary gonadotropin-releasing hormone (GnRH) pulses, 3) healthy young men were infused with trains of biosynthetic LH pulses after GnRH receptor blockade, and 4) computer simulations of pulsatile LH profiles were constructed. Outcomes comprised sensitivity, specificity, and receiver-operating characteristic curves. Sensitivity and specificity were 0.93 and 0.97, respectively, for combined empirical data in the rat, sheep, and human (n = 156 pulses) and 0.94 and 0.92, respectively, for computer simulations (n = 1,632 pulses). For simulated data, pulse-set selection by the Akaike information criterion yielded slightly higher sensitivity than by the Bayesian information criterion, and the reverse was true for specificity. False-positive errors occurred primarily at low pulse amplitude, and false-negative errors occurred principally with close pulse proximity. Random variability (noise), sparse sampling, and rapid pulse frequency reduced pulse detection sensitivity more than specificity. We conclude that an objective automated pulse detection deconvolution procedure has high sensitivity and specificity, thus offering a platform for quantitative neuroendocrine analyses.
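
To make the AIC/BIC comparison and the pulse-matching idea concrete, the sketch below gives the two information criteria and an illustrative sensitivity computation that counts a true pulse as detected if any estimated pulse time falls within a tolerance window; the matching rule and tolerance are assumptions, not the paper's exact scoring procedure.

```python
import numpy as np

def aic(log_likelihood, n_params):
    """Akaike information criterion (lower is better)."""
    return 2 * n_params - 2 * log_likelihood

def bic(log_likelihood, n_params, n_obs):
    """Bayesian information criterion; penalizes extra pulses more heavily
    than AIC once log(n_obs) > 2, consistent with AIC-selected pulse sets
    favoring sensitivity and BIC-selected sets favoring specificity."""
    return np.log(n_obs) * n_params - 2 * log_likelihood

def pulse_sensitivity(true_times, detected_times, tolerance):
    """Fraction of true pulses matched by any detected pulse within a
    +/- tolerance window; an illustrative matching rule only."""
    true_times = np.asarray(true_times, dtype=float)
    detected_times = np.asarray(detected_times, dtype=float)
    if true_times.size == 0:
        return 1.0
    hits = sum(np.any(np.abs(detected_times - t) <= tolerance) for t in true_times)
    return hits / true_times.size
```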


Electronics ◽  
2020 ◽  
Vol 9 (9) ◽  
pp. 1500
Author(s):  
Mohammad Manzurul Islam ◽  
Gour Karmakar ◽  
Joarder Kamruzzaman ◽  
Manzur Murshed

Internet of Things (IoT) image sensors, social media, and smartphones generate huge volumes of digital images every day. The easy availability and usability of photo editing tools have made forgery attacks, primarily splicing and copy-move attacks, effortless, causing cybercrimes to be on the rise. While several models have been proposed in the literature for detecting these attacks, the robustness of those models has not been investigated when (i) a low number of tampered images is available for model building or (ii) images from IoT sensors are distorted due to image rotation or scaling caused by unwanted or unexpected changes in the sensors' physical set-up. Moreover, further improvement in detection accuracy is needed for real-world security management systems. To address these limitations, this paper proposes an innovative image forgery detection method based on the Discrete Cosine Transform (DCT) and Local Binary Pattern (LBP), together with a new feature extraction method using the mean operator. First, images are divided into non-overlapping fixed-size blocks and a 2D block DCT is applied to capture changes due to image forgery. Then LBP is applied to the magnitude of the DCT array to enhance forgery artifacts. Finally, the mean value of a particular cell across all LBP blocks is computed, which yields a fixed number of features and a more computationally efficient method. Using a Support Vector Machine (SVM), the proposed method has been extensively tested on four well-known, publicly available grayscale and color image forgery datasets, and additionally on an IoT-based image forgery dataset that we built. Experimental results reveal the superiority of our proposed method over recent state-of-the-art methods in terms of widely used performance metrics and computational time, and demonstrate robustness against low availability of forged training samples.
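
A compact sketch of the described feature pipeline (block-wise 2D DCT, LBP on the coefficient magnitudes, then the per-cell mean across blocks) is shown below using SciPy and scikit-image; the block size and LBP parameters are assumed values rather than the paper's settings, and the resulting fixed-length vectors would then be fed to an SVM as described.

```python
import numpy as np
from scipy.fftpack import dct
from skimage.feature import local_binary_pattern

def forgery_features(gray, block=16, lbp_points=8, lbp_radius=1):
    """Block-wise 2D DCT -> LBP on DCT magnitudes -> per-cell mean across
    blocks, yielding a fixed-length feature vector (block * block values).
    Block size and LBP parameters are assumed, not the paper's settings."""
    h, w = gray.shape
    h, w = h - h % block, w - w % block                # crop to whole blocks
    cells = []
    for y in range(0, h, block):
        for x in range(0, w, block):
            patch = gray[y:y + block, x:x + block].astype(float)
            coeffs = dct(dct(patch.T, norm="ortho").T, norm="ortho")  # 2D DCT
            cells.append(local_binary_pattern(np.abs(coeffs),
                                              lbp_points, lbp_radius))
    # Mean of each cell position over all blocks -> one feature per position
    return np.mean(np.stack(cells, axis=0), axis=0).ravel()
```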


Author(s):  
M. Kamaladevi ◽  
V. Venkatraman

In recent years, imbalanced data classification has been applied in several domains, including detecting fraudulent activities in the banking sector, disease prediction in the healthcare sector, and so on. To address the imbalanced classification problem at the data level, strategies such as undersampling and oversampling are widely used, but sampling techniques pose a risk of significant information loss. The proposed method involves two processes, undersampling and classification. First, undersampling is performed by means of a Tversky Similarity Indexive Regression model, in which regression together with the Tversky similarity index is used to analyze the relationship between two instances of the dataset. Next, Gaussian-kernelized decision stump AdaBoosting is used to classify the instances into two classes. Here, the root node of the decision stump makes its decision on the basis of a Gaussian kernel function, considering the average of neighboring points, and the result is obtained at the leaf node. Weights are also adjusted to minimize the training errors occurring during classification and thereby find the best classifier. Experimental assessment is performed with two different imbalanced datasets (the Pima Indian Diabetes and Hepatitis datasets). Various performance metrics, such as precision, recall, area under the ROC curve (AUC), and F1-score, are compared with existing undersampling methods. Experimental results showed that the prediction accuracy of the minority class improved, thereby minimizing false positives and false negatives.
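
For reference, the standard Tversky index between two binarized feature vectors is sketched below; the paper's coupling of this index with regression for undersampling is not reproduced here, and the alpha/beta weights are assumed.

```python
import numpy as np

def tversky_similarity(a, b, alpha=0.5, beta=0.5):
    """Standard Tversky index between two binarized feature vectors:
    |A & B| / (|A & B| + alpha*|A minus B| + beta*|B minus A|).
    alpha and beta are assumed weights; the paper's regression-based
    undersampling rule built on this index is not reproduced here."""
    a, b = np.asarray(a, dtype=bool), np.asarray(b, dtype=bool)
    common = np.sum(a & b)
    only_a = np.sum(a & ~b)
    only_b = np.sum(~a & b)
    denom = common + alpha * only_a + beta * only_b
    return common / denom if denom else 0.0
```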


2021 ◽  
Author(s):  
Harvineet Singh ◽  
Vishwali Mhasawade ◽  
Rumi Chunara

Importance: Modern predictive models require large amounts of data for training and evaluation, which can result in models that are specific to certain locations, the populations in them, and local clinical practices. Yet, best practices and guidelines for clinical risk prediction models have not yet considered such challenges to generalizability. Objectives: To investigate changes in measures of predictive discrimination, calibration, and algorithmic fairness when transferring models for predicting in-hospital mortality across ICUs in different populations, and to study the reasons for the lack of generalizability in these measures. Design, Setting, and Participants: In this multi-center cross-sectional study, electronic health records from 179 hospitals across the US with 70,126 hospitalizations were analyzed. Time of data collection ranged from 2014 to 2015. Main Outcomes and Measures: The main outcome is in-hospital mortality. The generalization gap, defined as the difference in model performance metrics across hospitals, is computed for discrimination and calibration metrics, namely the area under the receiver operating characteristic curve (AUC) and the calibration slope. To assess model performance by the race variable, we report differences in false negative rates across groups. Data were also analyzed using a causal discovery algorithm, "Fast Causal Inference" (FCI), which infers paths of causal influence while identifying potential influences associated with unmeasured variables. Results: In-hospital mortality rates differed in the range of 3.9%-9.3% (1st-3rd quartile) across hospitals. When transferring models across hospitals, AUC at the test hospital ranged from 0.777 to 0.832 (1st to 3rd quartile; median 0.801); calibration slope from 0.725 to 0.983 (1st to 3rd quartile; median 0.853); and disparity in false negative rates from 0.046 to 0.168 (1st to 3rd quartile; median 0.092). When transferring models across geographies, AUC ranged from 0.795 to 0.813 (1st to 3rd quartile; median 0.804); calibration slope from 0.904 to 1.018 (1st to 3rd quartile; median 0.968); and disparity in false negative rates from 0.018 to 0.074 (1st to 3rd quartile; median 0.040). The distribution of all variable types (demography, vitals, and labs) differed significantly across hospitals and regions. Shifts were observed in the race variable distribution and in some clinical (vitals, labs, and surgery) variables by hospital or region. The race variable also mediated differences in the relationship between clinical variables and mortality by hospital/region. Conclusions and Relevance: Group-specific metrics should be assessed during generalizability checks to identify potential harms to those groups. In order to develop methods that improve and guarantee the performance of prediction models in new environments for groups and individuals, a better understanding and provenance of health processes, as well as of data generating processes by sub-group, are needed to identify and mitigate sources of variation.
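
Two of the transfer metrics described above, the AUC generalization gap and the disparity in false negative rates across groups, can be sketched generically as follows (assuming a scikit-learn-style classifier); this illustrates the definitions rather than the authors' evaluation code.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def auc_generalization_gap(model, X_dev, y_dev, X_new, y_new):
    """AUC at the hospital the model was developed on minus AUC at a new
    test hospital (one formulation of the generalization gap)."""
    auc_dev = roc_auc_score(y_dev, model.predict_proba(X_dev)[:, 1])
    auc_new = roc_auc_score(y_new, model.predict_proba(X_new)[:, 1])
    return auc_dev - auc_new

def fnr_disparity(y_true, y_pred, groups):
    """Largest gap in false negative rates between any two groups
    (e.g., race categories); a generic reading of the reported disparity."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    fnrs = []
    for g in np.unique(groups):
        positives = (groups == g) & (y_true == 1)
        if positives.any():
            fnrs.append(np.mean(y_pred[positives] == 0))
    return max(fnrs) - min(fnrs) if fnrs else 0.0
```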

