counting process
Recently Published Documents

TOTAL DOCUMENTS: 330 (FIVE YEARS: 58)
H-INDEX: 23 (FIVE YEARS: 3)

2022 ◽  
Vol 22 (1) ◽  
Author(s):  
Gilma Hernández-Herrera ◽  
David Moriña ◽  
Albert Navarro

Abstract Background When dealing with recurrent events in observational studies, it is common to include subjects who became at risk before follow-up. This phenomenon is known as left censoring, and simply ignoring these prior episodes can lead to biased and inefficient estimates. We aimed to propose a statistical method that performs well in this setting. Methods Our proposal was based on models with specific baseline hazards, in which the number of prior episodes was imputed when unknown and subjects were stratified according to whether they had been at risk of presenting the event before t = 0. A frailty term was also used. Two formulations of this “Specific Hazard Frailty Model Imputed” were used, based on the “counting process” and “gap time” timescales. Performance was then examined in different scenarios through a comprehensive simulation study. Results The proposed method performed well even when the percentage of subjects at risk before follow-up was very high. Biases were often below 10% and coverages were around 95%, being somewhat conservative. The gap time approach performed better with constant baseline hazards, whereas the counting process approach performed better with non-constant baseline hazards. Conclusions The use of common baseline hazard methods is not advised when knowledge of the prior episodes experienced by a participant is lacking. The approach in this study performed acceptably in most scenarios in which it was evaluated and should be considered an alternative in this context. It has been made freely available to interested researchers as the R package miRecSurv.
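The two timescales named in the abstract can be illustrated with a minimal sketch in plain Python (not the miRecSurv implementation): the “counting process” formulation keeps each subject's risk intervals on the calendar timescale, while “gap time” restarts the clock at zero after every event. The event times below are invented for illustration.

```python
# Illustrative sketch of the two timescales; not the miRecSurv implementation.
def to_counting_process(event_times, end):
    """Split follow-up [0, end] at each event; intervals stay on calendar time."""
    rows, start = [], 0.0
    for k, t in enumerate(event_times):
        rows.append({"start": start, "stop": t, "event": 1, "episode": k + 1})
        start = t
    rows.append({"start": start, "stop": end, "event": 0,
                 "episode": len(event_times) + 1})
    return rows

def to_gap_time(event_times, end):
    """Same intervals, but the clock restarts at 0 after every event."""
    rows, prev = [], 0.0
    for k, t in enumerate(event_times):
        rows.append({"start": 0.0, "stop": t - prev, "event": 1, "episode": k + 1})
        prev = t
    rows.append({"start": 0.0, "stop": end - prev, "event": 0,
                 "episode": len(event_times) + 1})
    return rows

# One subject with events at t = 2 and t = 5, censored at t = 8:
cp = to_counting_process([2.0, 5.0], end=8.0)
gt = to_gap_time([2.0, 5.0], end=8.0)
```

Stratifying the baseline hazard by the `episode` column is what the abstract's “specific baseline hazards” refers to.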


Mathematics ◽  
2021 ◽  
Vol 9 (20) ◽  
pp. 2573
Author(s):  
Davide Cocco ◽  
Massimiliano Giona

This paper addresses the generalization of counting processes through the age formalism of Lévy Walks. Simple counting processes are introduced and their properties are analyzed: Poisson processes or fractional Poisson processes can be recovered as particular cases. The stationarity assumption in the renewal mechanism characterizing simple counting processes can be modified in different ways, leading to the definition of generalized counting processes. When the transition mechanism of a counting process depends on the environmental conditions—i.e., the parameters describing the occurrence of new events are themselves stochastic processes—the counting process is said to be influenced by environmental stochasticity. The properties of this class of processes are analyzed, providing several examples and applications and showing the occurrence of new phenomena related to the modulation of the long-term scaling exponent by environmental noise.
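The idea of environmental stochasticity can be sketched with a toy simulation (hypothetical parameters, not taken from the paper): a plain Poisson count versus a mixed Poisson count whose rate is itself random. Randomizing the rate inflates the variance of the counts, the simplest form of the environmentally modulated behavior the abstract describes.

```python
import random

def poisson_count(rate, t, rng):
    """Count events of a homogeneous Poisson process on [0, t]
    by summing exponential waiting times."""
    n, clock = 0, rng.expovariate(rate)
    while clock <= t:
        n += 1
        clock += rng.expovariate(rate)
    return n

def environmental_count(t, rng):
    """Toy environmental stochasticity: the rate itself is random
    (0.5 or 1.5 with equal probability), i.e. a mixed Poisson count."""
    rate = rng.choice([0.5, 1.5])
    return poisson_count(rate, t, rng)

def var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

rng = random.Random(0)
plain = [poisson_count(1.0, 10.0, rng) for _ in range(5000)]
mixed = [environmental_count(10.0, rng) for _ in range(5000)]
# Both have mean ~10, but the mixed counts are overdispersed:
# Var ≈ E[λ]t + t²·Var(λ) = 10 + 100·0.25 = 35, versus 10 for the plain process.
```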


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Muhammad Aslam

Abstract Background The data obtained from a counting process are known as count data. In practice, the counts may be recorded at the same time or at different times. To test whether K counts differ significantly, the Chi-square test for K counts is applied. Results This paper presents Chi-square tests for K counts under neutrosophic statistics. Test statistics are proposed for the cases where the K counts are recorded at the same time and at different times. The testing procedure is explained with the help of pulse count data. Conclusions From the analysis of the pulse count data, it can be concluded that the proposed test suggests that cardiologists use different treatment methods on patients. In addition, the proposed test gives more information than the traditional test under uncertainty.
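For context, the traditional (non-neutrosophic) chi-square test for K counts that the paper extends can be sketched as follows. The pulse counts below are illustrative, not the paper's data.

```python
# Classical chi-square test that K counts recorded over equal periods share a
# common mean: X² = Σ (Oᵢ - Ō)² / Ō, compared against a chi-square
# distribution with K - 1 degrees of freedom.
def chisq_equal_counts(counts):
    k = len(counts)
    mean = sum(counts) / k
    stat = sum((o - mean) ** 2 / mean for o in counts)
    return stat, k - 1

# Four illustrative pulse counts (hypothetical values):
stat, df = chisq_equal_counts([72, 81, 75, 68])
# stat = 90/74 ≈ 1.216 on 3 degrees of freedom: no evidence the counts differ.
```

The neutrosophic version in the paper replaces each crisp count with an interval (indeterminate) count, yielding an interval-valued statistic.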


2021 ◽  
Vol 11 (14) ◽  
pp. 6543
Author(s):  
Thomas Haugland Johansen ◽  
Steffen Aagaard Sørensen ◽  
Kajsa Møllersen ◽  
Fred Godtliebsen

Foraminifera are single-celled marine organisms that construct shells that remain as fossils in the marine sediments. Classifying and counting these fossils are important in paleo-oceanographic and -climatological research. However, the identification and counting process has been performed manually since the 1800s and is laborious and time-consuming. In this work, we present a deep learning-based instance segmentation model for classifying, detecting, and segmenting microscopic foraminifera. Our model is based on the Mask R-CNN architecture, using model weight parameters pretrained on the COCO detection dataset. We use a fine-tuning approach to adapt the parameters to a novel object detection dataset of more than 7000 microscopic foraminifera and sediment grains. The model achieves a (COCO-style) average precision of 0.78 on the classification and detection task, and 0.80 on the segmentation task. When the model is evaluated without challenging sediment grain images, the average precision increases to 0.84 and 0.86 for the two tasks, respectively. Prediction results are analyzed both quantitatively and qualitatively and discussed. Based on our findings, we propose several directions for future work and conclude that our proposed model is an important step towards automating the identification and counting of microscopic foraminifera.
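The (COCO-style) average precision reported above is built on intersection-over-union (IoU) matching between predicted and ground-truth boxes, averaged over IoU thresholds from 0.50 to 0.95. A minimal sketch of that building block (not part of the authors' code):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as
    (x1, y1, x2, y2). COCO-style AP thresholds detections on this value."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle (empty if the boxes do not overlap):
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    return inter / (area_a + area_b - inter)

# Two 2×2 boxes overlapping on a 1×2 strip: IoU = 2 / (4 + 4 - 2) = 1/3.
print(iou((0, 0, 2, 2), (1, 0, 3, 2)))
```

For segmentation AP the same ratio is computed over mask pixels rather than box areas.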


Risks ◽  
2021 ◽  
Vol 9 (6) ◽  
pp. 109
Author(s):  
Sharifah Farah Syed Yusoff Alhabshi ◽  
Zamira Hasanah Zamzuri ◽  
Siti Norafidah Mohd Ramli

The widely used Poisson count process in insurance claims modeling is no longer valid if the claims occurrences exhibit dispersion. In this paper, we consider the aggregate discounted claims of an insurance risk portfolio under a Weibull counting process to allow for dispersed datasets. A copula is used to define the dependence structure between the inter-waiting time and the subsequent claim amount. We use a Monte Carlo simulation to compute the higher-order moments of the risk portfolio, the premiums, and the value-at-risk based on New Zealand catastrophe historical data. Under a negative dependence parameter θ, the simulation yields the highest moment values when the claims experience exhibits overdispersion. Conversely, the underdispersed scenario yields the highest moment values when θ is positive. These results lead to higher premiums being charged and more capital requirements being set aside to cope with the unfavorable events borne by the insurers.
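A simplified Monte Carlo sketch of the quantity being simulated, the aggregate discounted claims Σᵢ Xᵢ·e^(−δTᵢ), with Weibull inter-waiting times but independent exponential claim sizes (the copula dependence is omitted, and all parameter values are hypothetical, not the paper's calibration):

```python
import math
import random

def aggregate_discounted_claims(shape, scale, mean_claim, delta, horizon, rng):
    """One path of sum_i X_i * exp(-delta * T_i): Weibull inter-waiting times,
    exponential claim sizes. Independence is assumed here for simplicity; the
    paper couples waiting times and claim sizes through a copula."""
    t, total = 0.0, 0.0
    while True:
        t += rng.weibullvariate(scale, shape)   # next claim arrival time
        if t > horizon:
            return total
        total += rng.expovariate(1.0 / mean_claim) * math.exp(-delta * t)

rng = random.Random(42)
paths = [aggregate_discounted_claims(shape=1.5, scale=1.0, mean_claim=100.0,
                                     delta=0.05, horizon=10.0, rng=rng)
         for _ in range(2000)]
mean = sum(paths) / len(paths)
var95 = sorted(paths)[int(0.95 * len(paths))]  # empirical 95% value-at-risk
```

With shape > 1 the Weibull inter-waiting times are underdispersed relative to the exponential; shape < 1 gives the overdispersed case discussed in the abstract.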


2021 ◽  
Vol 18 (2) ◽  
pp. 209-230
Author(s):  
Ratheesan K. ◽  
Anilkumar P.
Keyword(s):  

2021 ◽  
Vol 2 (2) ◽  
Author(s):  
Hanna Alifia Putri Riyanto

In several companies, the sorting and counting of sheets of paper is still done manually. This study therefore designed a device that automatically sorts and counts paper according to its quality. A sheet is considered fit for use when the clean white paper carries no stains, and unfit for use when it is stained with ink. In this design, an LDR sensor serves as the sorting sensor: the light-resistance value measured on the paper is evaluated, and when the LDR reads a resistance of less than 75 the sheet is categorized as good quality, whereas a reading of more than 75 categorizes it as dirty. In the next stage the paper is pushed toward its rack, with a photodiode acting as the counter: each sheet passing the photodiode is counted as one cycle, i.e., one sheet. The sheets are then separated into different racks according to quality by an MG955 servo motor; in the servo motor test, the angular movement was 10º for the good-paper rack and 60º for the dirty-paper rack. Finally, the overall count data are sent to a NodeMCU ESP8266 and displayed in the Blynk application. Keywords: LDR, photodiode, servo motor, NodeMCU ESP8266.
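The sorting and counting logic described above can be sketched as a small simulation (the threshold of 75 comes from the description; the readings are illustrative and the hardware side is omitted):

```python
def classify_sheet(ldr_resistance, threshold=75):
    """Decision rule from the description: a light-resistance reading below
    the threshold means a clean (good) sheet, otherwise a stained (dirty) one."""
    return "good" if ldr_resistance < threshold else "dirty"

def sort_and_count(readings, threshold=75):
    """Classify each sheet and tally both racks, mimicking the photodiode
    counter that increments once per sheet passing through."""
    counts = {"good": 0, "dirty": 0}
    for r in readings:
        counts[classify_sheet(r, threshold)] += 1
    return counts

# Four sheets with hypothetical LDR readings:
print(sort_and_count([40, 90, 60, 120]))  # {'good': 2, 'dirty': 2}
```

On the actual device these tallies would be pushed from the NodeMCU ESP8266 to the Blynk dashboard rather than printed.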

