global threshold
Recently Published Documents

TOTAL DOCUMENTS: 86 (five years: 25)
H-INDEX: 10 (five years: 1)

Author(s):  
Cheng Chen ◽  
Hyungjoon Seo ◽  
ChangHyun Jun ◽  
Yang Zhao

Abstract: In this paper, a potential crack region method is proposed to detect road pavement cracks using an adaptive threshold. To reduce image noise, a pre-treatment algorithm was applied in the following steps: grayscale conversion, histogram equalization, and filtering of the traffic lane. Among image segmentation methods, the algorithm combines a global threshold and a local threshold to segment the image. According to the grayscale distribution characteristics of the crack image, a sliding window is used to obtain the window deviation, and the deviation image is then segmented based on the maximum inter-class deviation. A potential crack region is obtained, and a local threshold-based segmentation algorithm is then applied within it. Real images of the pavement surface of Su Tong Li road in Suzhou, China, were used, and the proposed approach was found to give a more explicit description of pavement cracks in images. The method was also tested on 509 images of the German asphalt pavement distress (GAPs) dataset, with promising results (precision = 0.82, recall = 0.81, F1 score = 0.83).
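The two-stage segmentation described above (a global, maximum inter-class-deviation threshold on the sliding-window deviation map, followed by a local threshold inside the candidate region) can be sketched roughly as follows. This is a minimal NumPy illustration under assumptions of our own (the window size, and the use of the candidate region's mean intensity as the local threshold), not the authors' code:

```python
import numpy as np

def window_deviation(img, win=15):
    """Standard deviation of each non-overlapping sliding window,
    written back over the window's footprint as a coarse deviation map.
    (Any right/bottom remainder smaller than `win` is left at zero.)"""
    dev = np.zeros_like(img, dtype=float)
    for i in range(0, img.shape[0] - win + 1, win):
        for j in range(0, img.shape[1] - win + 1, win):
            dev[i:i + win, j:j + win] = img[i:i + win, j:j + win].std()
    return dev

def otsu_threshold(values, bins=256):
    """Threshold maximizing the inter-class variance (Otsu's criterion)."""
    hist, edges = np.histogram(values.ravel(), bins=bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                 # class-0 (background) probability
    mu = np.cumsum(p * centers)       # cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * (1 - w0))
    return centers[np.argmax(np.nan_to_num(sigma_b))]

def detect_cracks(img, win=15):
    """Global stage: threshold the deviation map with Otsu's criterion to
    get a potential crack region. Local stage: inside that region, keep
    pixels darker than the region's own mean (cracks are darker)."""
    dev = window_deviation(img, win)
    region = dev > otsu_threshold(dev)
    mask = np.zeros(img.shape, dtype=bool)
    if region.any():
        mask = region & (img < img[region].mean())
    return mask
```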


2022 ◽  
Vol 17 (01) ◽  
pp. C01026
Author(s):  
C. Bacchi ◽  
A. Dawiec ◽  
F. Orsini

Abstract: It has now been over 15 years since Hybrid Photon Counting Detectors (HPCDs) became one of the standard position-sensitive detectors for synchrotron light sources and X-ray detection applications. This is mainly due to their single-photon sensitivity over a high dynamic energy range and their suppression of electronic noise through energy thresholding. To achieve this performance, all HPCD pixels must produce the same electrical response to photons of the same energy. For an ideal HPCD analyzing a monochromatic beam, it would be sufficient to apply a fixed voltage threshold to all pixels, positioned at half the mean pulse amplitude, to count every photon above the threshold. In practice, however, the noise baselines of the pixels are not all located at exactly the same voltage level but can be spread over some voltage range. To address this issue, most HPCDs apply a conventional threshold equalization method that relies on three steps: setting a global threshold at an arbitrary value; identifying each pixel's noise baseline around that global threshold with an in-pixel threshold trimmer; and computing the threshold offsets required to set all pixels to their own noise baselines at the same time. However, when an HPCD prototype is used for the first time, the threshold equalization may be biased by wrongly set parameters. Such biases can show up as an inability to localize some pixel noise baselines, which may lie outside the voltage range of the threshold trimmer. Those biased pixels could be recovered by changing the position of the global threshold or by increasing the voltage range of the threshold trimmer. Unfortunately, both solutions can be time consuming because of the lack of information on the steps required to recover all noise baselines.
To overcome this issue in a reasonable time, this work introduces a pragmatic method that can be applied to HPCDs for early and effective identification of appropriate pixel parameters, avoiding the need to test a large number of pixel configurations. Applied at an early stage of HPCD calibration, this method may drastically reduce the investigation time needed to find the optimal operating parameters of HPCD prototypes.
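The three-step equalization procedure and the "biased pixel" failure mode can be illustrated with a toy simulation. Everything below (the baseline spread, the trimmer span, the specific numbers) is invented for illustration; it is not the calibration code of the detector discussed above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated per-pixel noise baselines (arbitrary voltage units): most
# pixels cluster around a common level; five pixels sit far off it.
baselines = rng.normal(loc=100.0, scale=5.0, size=1000)
baselines[:5] = 135.0              # pixels with strongly shifted baselines

GLOBAL_THRESHOLD = 100.0           # step 1: global threshold at an arbitrary value
TRIM_RANGE = 20.0                  # in-pixel trimmer span (+/-)

# Step 2: locate each pixel's noise baseline relative to the global threshold.
offsets = baselines - GLOBAL_THRESHOLD

# Step 3: offsets within the trimmer span can be compensated per pixel;
# the remainder are the "biased" pixels whose baselines the trimmer
# cannot reach.
trimmable = np.abs(offsets) <= TRIM_RANGE
biased = np.flatnonzero(~trimmable)

# Recovery: either move the global threshold toward the stragglers or
# widen the trimmer span; here, doubling the span reaches them.
recovered = np.abs(offsets) <= 2 * TRIM_RANGE
```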


Geophysics ◽  
2021 ◽  
pp. 1-76
Author(s):  
Siyuan Chen ◽  
Siyuan Cao ◽  
Yaoguang Sun

In the process of separating blended data, conventional methods based on sparse inversion assume that the primary source is coherent and the secondary source is randomized. The L1-norm, the commonly used regularization term, applies a global threshold to the sparse spectrum in the transform domain; however, when the threshold is relatively high, more high-frequency information from the primary source is lost. For this reason, we analyze the generation principle of blended data based on convolution theory and conclude that the blended data are randomly distributed only in the spatial domain. Taking the slope-constrained frequency-wavenumber (f-k) transform as an example, we propose a frequency-dependent threshold, which reduces the high-frequency loss during deblending. We further propose a structure-weighted threshold in which the energy from the primary source is concentrated along the wavenumber direction. The combination of frequency- and structure-weighted thresholds effectively improves deblending performance. Model and field data show that the proposed frequency-structure weighted threshold preserves frequency content better than the global threshold: it better retains the high-frequency information of the primary source, and the similarity between the other frequency-band data and the unblended data is improved.
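The difference between one global threshold and a frequency-dependent one can be sketched in a few lines. This is a generic NumPy illustration, not the authors' slope-constrained f-k implementation; the rule of scaling each frequency row's threshold by that row's own peak amplitude is our own assumption:

```python
import numpy as np

def deblend_iteration(blended, keep_ratio=0.5):
    """One thresholded f-k projection. Instead of a single global
    threshold (keep_ratio * global peak), each frequency row gets a
    threshold proportional to its OWN peak amplitude, so weak
    high-frequency energy of the primary source survives shrinkage."""
    fk = np.fft.fft2(blended)          # axis 0 -> frequency f, axis 1 -> wavenumber k
    amp = np.abs(fk)
    tau_f = keep_ratio * amp.max(axis=1, keepdims=True)  # one threshold per f
    mask = amp >= tau_f
    return np.fft.ifft2(fk * mask).real
```

With a global threshold `keep_ratio * amp.max()`, a weak high-frequency event one tenth the amplitude of the dominant event would be discarded; the per-frequency threshold keeps it.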


2021 ◽  
Vol 11 (23) ◽  
pp. 11420
Author(s):  
Theresa Lehner ◽  
Dietmar Pum ◽  
Judith M. Rollinger ◽  
Benjamin Kirchweger

The small and transparent nematode Caenorhabditis elegans is increasingly employed for phenotypic in vivo chemical screens. The influence of compounds on worm body fat stores can be assayed with Nile red staining and imaging. Segmentation of C. elegans from fluorescence images is therefore a primary task. In this paper, we present an image-processing workflow that segments C. elegans directly from fluorescence images using machine learning and quantifies their Nile red lipid-derived fluorescence. The segmentation is based on a J48 classifier using pixel entropies and is refined by size thresholding. The accuracy of segmentation was >90% in our external validation. Binarization with a global threshold, set to the brightness of the vehicle-control worms of each experiment, allows a robust and reproducible quantification of worm fluorescence. The workflow is available as a script written in the macro language of ImageJ, giving the user additional manual control of classification results and custom settings for binarization. Our approach can easily be adapted to the requirements of other fluorescence-image-based experiments with C. elegans.
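The binarization step (one global threshold derived from the vehicle-control group, applied to every worm) can be sketched as below. The function and variable names, and the choice of mean control brightness as the threshold, are our assumptions rather than the ImageJ script's exact logic:

```python
import numpy as np

def lipid_signal(images, masks, control_ids):
    """Global threshold = mean brightness of the vehicle-control worms'
    segmented pixels; per-worm signal = fraction of that worm's
    segmented pixels brighter than the threshold."""
    control = np.concatenate([images[i][masks[i]] for i in control_ids])
    threshold = control.mean()
    return [float((img[m] > threshold).mean()) for img, m in zip(images, masks)]
```

Because the threshold is recomputed from each experiment's own control group, quantification stays comparable across experiments with different overall staining intensity.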


Author(s):  
Su Jia ◽  
Jeremy Karp ◽  
R. Ravi ◽  
Sridhar Tayur

Problem definition: Omnichannel retailing has led to the use of traditional stores as fulfillment centers for online orders. Omnichannel fulfillment problems have two components: (1) accepting a certain number of online orders prior to seeing store demands and (2) satisfying (or filling) some of these accepted online demands as efficiently as possible with any inventory left over after store demands have been met. There is thus a fundamental trade-off between store cancellations of accepted online orders and the potentially higher profit from accepting more online orders. We study this joint problem of online order acceptance and fulfillment (including cancellations) to minimize total costs, including shipping charges and cancellation penalties, in single-period and limited multiperiod settings. Academic/practical relevance: Despite the growing importance of omnichannel fulfillment of online orders, our work provides the first study incorporating cancellation penalties along with fulfillment costs. Methodology: We build a two-stage stochastic model. In the first stage, the retailer sets a policy specifying which online orders it will accept. The second stage represents the process of fulfilling online orders after the uncertain quantities of in-store purchases are revealed. We analyze threshold policies that accept online orders as long as inventories are above a global threshold, a local threshold per region, or a hybrid of the two. Results: For a single period, total cost is unimodal as a function of the global threshold, and unimodal as a function of a single local threshold when all other local thresholds are held constant, motivating a gradient search algorithm. Reformulating the problem as a linear program with network flow structure, we estimate the derivative of the total cost with respect to the thresholds using infinitesimal perturbation analysis.
We validate the performance of the threshold policies empirically using data from a high-end North American retailer. Our two-location experiments demonstrate that local thresholds perform better than global thresholds in a wide variety of settings. Conversely, in a narrow regime with negatively correlated online demand between locations and very low shipping costs, a global threshold outperforms local thresholds. A hybrid policy improves only marginally on the better of the two. For multiple periods, we study one- and two-location models and provide insights into effective solution methods for the general case. Managerial implications: Our methods provide effective algorithms for managing the fulfillment costs of online orders, demonstrating a significant cost reduction over policies that treat each location separately and reflecting the significant advantage of incorporating shipping costs when computing thresholds. Numerical studies provide insight into why local thresholds perform well in a wide variety of situations.
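The acceptance rules compared above can be written down compactly. The exact semantics of the global threshold (here: a floor on total system inventory) is our reading, and the sketch is illustrative rather than the paper's policy implementation:

```python
def accept_online_order(inventory, region, global_threshold=None,
                        local_thresholds=None):
    """Accept an online order from `region` only while inventories stay
    above the relevant threshold(s): a shared floor on total inventory
    (global policy), a per-region floor (local policy), or both (hybrid).

    inventory: dict region -> units on hand; thresholds may be None."""
    if global_threshold is not None and sum(inventory.values()) <= global_threshold:
        return False
    if local_thresholds is not None and inventory[region] <= local_thresholds[region]:
        return False
    return True
```

A hybrid policy is obtained simply by passing both thresholds, matching the three policy classes analyzed in the paper.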


2021 ◽  
Vol 2021 ◽  
pp. 1-8
Author(s):  
Guoai Xu ◽  
Jiangtao Yuan ◽  
Guosheng Xu ◽  
Zhongkai Dang

Multipartite secret sharing schemes are those with multipartite access structures: the set of participants is divided into several parts, and all participants within the same part play an equivalent role. Two common types of such access structures are the compartmented and the hierarchical access structures. We propose an efficient compartmented multisecret sharing scheme based on linear homogeneous recurrence (LHR) relations. In the construction phase, the shared secrets are hidden in some terms of a linear homogeneous recurrence sequence; in the recovery phase, they are obtained by solving for those terms. When the global threshold is t, our scheme reduces the computational complexity of compartmented secret sharing schemes from exponential to polynomial time. The security of the proposed scheme rests on Shamir's threshold scheme, i.e., our scheme is perfect and ideal. Moreover, the proposed scheme shares multiple secrets and changes the shared secrets efficiently.
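The LHR ingredient can be illustrated on its own: a recurrence of order t is fully determined by its coefficients and any t consecutive terms, which is what lets an authorized coalition pin down the terms in which the secrets are hidden. The sketch below (plain Python, with an illustrative prime modulus) shows only the sequence mechanics, not the full compartmented scheme:

```python
def lhr_sequence(coeffs, init, n, p=2_147_483_647):
    """First n terms of the order-t linear homogeneous recurrence
        u_k = c_1*u_{k-t} + c_2*u_{k-t+1} + ... + c_t*u_{k-1}  (mod p),
    started from the t initial terms `init`. Knowing the t coefficients
    and any t consecutive terms determines every later term."""
    t = len(coeffs)
    seq = list(init)
    for _ in range(n - t):
        seq.append(sum(c * u for c, u in zip(coeffs, seq[-t:])) % p)
    return seq
```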


Author(s):  
Claudio Garuti

This paper has two main objectives. The first is to provide a mathematically grounded technique for constructing local and global thresholds using the well-known rate-of-change method. The second, subordinate objective is to show the relevance and possibilities of applying AHP/ANP in absolute measurement (AM) mode compared with the relative measurement (RM) mode currently in wide use in the AHP/ANP community. The ability to construct a global threshold would help increase the use of AHP/ANP in AM (rating) mode; achieving the first, specific objective therefore facilitates reaching the second, more general one. For this purpose, a real-life example based on the construction of a multi-criteria index and threshold is described. The index measures the degree of lag of a neighborhood through the Urban and Social Deterioration Index (USDI), based on an AHP risk model. The global threshold represents the tolerable lag value for the specific neighborhood. The difference, or gap, between the neighborhood's current status (its actual USDI value) and this threshold represents the level of neighborhood deterioration that must be addressed to close the gap from a social and urban standpoint. The global threshold value is a composition of 45 terminal criteria, each with its own local threshold that must be evaluated for the specific neighborhood. This example is the most recent in a long list of AHP applications in AM mode across vastly different decision-making fields, such as risk and disaster assessment, environmental assessment, medical diagnosis, social responsibility problems, and a BOCR analysis of the evolution of nuclear energy in Chile over the next 20 years, among others (see the list of projects in the Appendix).
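Under the additive composition that AHP priority models use, a global threshold can be read as the priority-weighted sum of the terminal criteria's local thresholds. This is a sketch of that composition under our reading of the paper; the weights and local threshold values below are invented numbers, and the paper's 45-criteria model is not reproduced:

```python
def global_threshold(weights, local_thresholds):
    """Compose a global threshold from terminal-criteria local thresholds
    using their AHP priority weights (weights must sum to 1)."""
    assert abs(sum(weights) - 1.0) < 1e-9, "priority weights must sum to 1"
    return sum(w * t for w, t in zip(weights, local_thresholds))
```

The gap between a neighborhood's actual composed index value and this threshold is then the quantity to be closed.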


2021 ◽  
Vol 8 (1) ◽  
Author(s):  
Nikolaos Andreakos ◽  
Shigang Yue ◽  
Vassilis Cutsuridis

Abstract: Memory, the process of encoding, storing, and maintaining information over time in order to influence future actions, is very important in our lives, and losing it comes at a great cost. Deciphering the biophysical mechanisms that lead to recall improvement should thus be of utmost importance. In this study, we embarked on the quest to computationally improve the recall performance of a bio-inspired microcircuit model of the mammalian hippocampus, a brain region responsible for the storage and recall of short-term declarative memories. The model consisted of excitatory and inhibitory cells whose properties followed closely what is currently known from experimental neuroscience. Cell firing was timed to a theta oscillation paced by two distinct neuronal populations exhibiting highly regular bursting activity, one tightly coupled to the trough and the other to the peak of theta. An excitatory input provided excitatory cells with context and timing information for the retrieval of previously stored memory patterns. Inhibition to excitatory cells acted as a non-specific global threshold mechanism that removed spurious activity during recall. To systematically evaluate the model's recall performance against stored patterns, pattern overlap, network size, and active cells per pattern, we selectively modulated feedforward and feedback excitatory and inhibitory pathways targeting specific excitatory and inhibitory cells. Of the model variants (modulated pathways) tested, 'model 1' showed excellent recall quality across all conditions, whereas 'model 2' recall was the worst. The number of active cells representing a memory pattern was the determining factor in improving the model's recall performance, regardless of the number of stored patterns and the overlap between them: as the number of active cells per pattern decreased, the model's memory capacity increased, interference effects between stored patterns decreased, and recall quality improved.
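The role of inhibition as a non-specific global threshold during recall can be caricatured in a binary associative network: cells whose dendritic sums fall below a fraction of the strongest sum are silenced as spurious. This toy model is ours, invented for illustration; it is not the biophysical hippocampal microcircuit model described above:

```python
import numpy as np

def recall(weights, cue, inhibition=0.9):
    """One recall step: dendritic sums driven by the cue are compared
    against a global threshold set at `inhibition` times the largest
    sum, suppressing partially driven (spurious) cells."""
    h = weights @ cue                         # dendritic sums
    return (h >= inhibition * h.max()).astype(int)
```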


Author(s):  
Soufiane Bentout ◽  
Salih Djilali ◽  
Abdenasser Chekroun

In this research, we consider an age-structured alcoholism model and investigate its global behavior. The system is proved to exhibit threshold dynamics in terms of the basic reproduction number (BRN): the alcohol-free equilibrium (AFE) is globally asymptotically stable (GAS) when the BRN is at most one, whereas when the BRN exceeds one the system persists and the nontrivial equilibrium (EE) is GAS. Furthermore, the effects of the susceptible drinkers rate and the rate of relapse of recovered individuals to alcoholism are investigated, which allows us to propose a strategy for reducing the spread of alcohol use in the studied populations. The mathematical results are tested numerically, alongside a discussion of their biological relevance.
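The threshold dichotomy stated above has the standard form for such models (a generic sketch; the model's actual BRN expression and equilibria are defined in the paper):

```latex
% R_0: basic reproduction number; E_0: alcohol-free equilibrium (AFE);
% E^*: nontrivial equilibrium (EE).
\[
  \text{global attractor} =
  \begin{cases}
    E_0,   & R_0 \le 1 \quad (\text{AFE is GAS}),\\[2pt]
    E^{*}, & R_0 > 1  \quad (\text{system persists, EE is GAS}).
  \end{cases}
\]
```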

