Sequential approach: Recently Published Documents

Total documents: 590 (five years: 200)
H-index: 31 (five years: 5)

2022 ◽  
Vol 22 (1) ◽  
Author(s):  
Maria DeYoreo ◽  
Carolyn M. Rutter ◽  
Jonathan Ozik ◽  
Nicholson Collier

Abstract
Background: Microsimulation models are mathematical models that simulate event histories for individual members of a population. They are useful for policy decisions because they simulate a large number of individuals from an idealized population, with features that change over time, and the resulting event histories can be summarized to describe key population-level outcomes. Model calibration is the process of incorporating evidence into the model. Calibrated models can be used to make predictions about population trends in disease outcomes and the effectiveness of interventions, but calibration can be challenging and computationally expensive.
Methods: This paper develops a technique for sequentially updating models to take full advantage of earlier calibration results and ultimately speed up the calibration process. A Bayesian approach to calibration is used because it combines different sources of evidence and enables uncertainty quantification, which is appealing for decision-making. We developed this method in order to re-calibrate a microsimulation model of the natural history of colorectal cancer to include new targets that better inform the time from initiation of preclinical cancer to presentation with clinical cancer (sojourn time), because model exploration and validation revealed that more information was needed on sojourn time and that the predicted percentage of patients with cancers detected via colonoscopy screening was too low.
Results: The sequential approach to calibration was more efficient than recalibrating the model from scratch. Incorporating new information on the percentage of patients with cancers detected upon screening changed the estimated sojourn time parameters significantly, increasing the estimated mean sojourn time for cancers in the colon and rectum and providing results with greater validity.
Conclusions: A sequential approach to recalibration can be used to efficiently recalibrate a microsimulation model when new information becomes available that requires the original targets to be supplemented with additional targets.
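To make the idea of reusing earlier calibration results concrete, the following minimal Python sketch shows one generic way a sequential Bayesian update can work: posterior draws retained from a previous calibration serve as the prior and are importance-reweighted against a new calibration target. The model function, target values, and distributions are hypothetical placeholders, not the authors' colorectal cancer model or their actual algorithm.

```python
# Minimal sketch of sequential Bayesian recalibration (illustrative only):
# reuse posterior draws from an earlier calibration as the prior for a new
# calibration target by importance reweighting, instead of restarting the
# calibration from scratch. All names, values, and the likelihood are
# hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def simulate_new_target(theta):
    """Hypothetical model output for the new target, e.g. the percentage of
    cancers detected at screening, as a function of a sojourn-time parameter."""
    mean_sojourn = theta[0]
    return 100.0 / (1.0 + np.exp(-(mean_sojourn - 3.0)))  # toy response

# Posterior draws retained from the earlier calibration (placeholder values).
prior_draws = rng.normal(loc=2.5, scale=0.5, size=(5000, 1))

# New calibration target and its assumed sampling error.
target_value, target_sd = 60.0, 5.0

# Importance weights: likelihood of the new target under each retained draw.
model_out = np.array([simulate_new_target(t) for t in prior_draws])
log_w = -0.5 * ((model_out - target_value) / target_sd) ** 2
weights = np.exp(log_w - log_w.max())
weights /= weights.sum()

# Resample to approximate the updated posterior, which reflects both the
# original targets (through the retained draws) and the new target.
updated = prior_draws[rng.choice(len(prior_draws), size=5000, p=weights)]
print("updated mean sojourn-time parameter:", updated.mean())
```

In a setup like this, the recalibration cost is dominated by rerunning the simulator only for the retained draws, rather than by searching the full parameter space again.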


2022 ◽  
Vol 2161 (1) ◽  
pp. 012028
Author(s):  
Karamjeet Kaur ◽  
Sudeshna Chakraborty ◽  
Manoj Kumar Gupta

Abstract In bioinformatics, sequence alignment is an important task for comparing biological sequences and finding similarities between them. The Smith-Waterman algorithm is the most widely used algorithm for alignment, but it has quadratic time complexity. Because the algorithm is sequential, alignment becomes very time-consuming as the number of biological sequences grows. In this paper, a parallel version of the Smith-Waterman algorithm is proposed and implemented for the architecture of the graphics processing unit using CUDA, combining the GPU with the CPU in such a way that the alignment process is three times faster than the sequential implementation of the Smith-Waterman algorithm, thereby accelerating sequence alignment on the GPU. This paper describes the parallel implementation of sequence alignment on the GPU; this intra-task parallelization strategy reduces the execution time, and the results show significant runtime savings on the GPU.
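As context for the intra-task parallelization, the sketch below (in Python rather than CUDA, for brevity) fills the Smith-Waterman scoring matrix in anti-diagonal (wavefront) order: all cells on one anti-diagonal depend only on earlier diagonals, which is the independence a GPU kernel can exploit by assigning one thread per cell. The scoring parameters are illustrative assumptions, not those of the paper.

```python
# Smith-Waterman local-alignment scoring filled in anti-diagonal order.
# Cells with the same i + j are mutually independent, so a GPU implementation
# can compute each anti-diagonal in parallel; here they are computed serially.
import numpy as np

def smith_waterman_score(a, b, match=2, mismatch=-1, gap=-2):
    n, m = len(a), len(b)
    H = np.zeros((n + 1, m + 1), dtype=int)
    best = 0
    for d in range(2, n + m + 1):                 # anti-diagonal index i + j = d
        for i in range(max(1, d - m), min(n, d - 1) + 1):
            j = d - i
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i, j] = max(0,
                          H[i - 1, j - 1] + s,    # match / mismatch
                          H[i - 1, j] + gap,      # deletion
                          H[i, j - 1] + gap)      # insertion
            best = max(best, H[i, j])
    return best

print(smith_waterman_score("GGTTGACTA", "TGTTACGG"))
```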


Vaccines ◽  
2021 ◽  
Vol 10 (1) ◽  
pp. 26
Author(s):  
Jessica Carter ◽  
Shannon Rutherford ◽  
Erika Borkoles

Vaccine uptake among younger Australian women living in rural and regional communities is poorly understood. This research explored the factors affecting their decision making in the context of the social determinants of health. A mixed methods design applying an explanatory sequential approach commenced with an online questionnaire, followed by in-depth interviews with a sample of the same participants. The majority (56%) of participants indicated a positive intention to be vaccinated against COVID-19, but a substantial proportion (44%) were uncertain or did not intend to be vaccinated. Significant factors affecting vaccine uptake included inadequate, and sometimes misleading, information leading to poor perceptions of vaccine safety. The personal benefits of vaccination, such as reduced social restrictions and increased mobility, were perceived more positively than the health benefits. Additionally, access issues created a structural barrier affecting uptake among those with positive or uncertain vaccination intentions. Understanding the factors affecting vaccine uptake allows for more targeted, equitable and effective vaccination campaigns, which is essential given the importance of widespread COVID-19 vaccination coverage for public health. The population insights emerging from the study hold lessons and relevance for rural and female populations globally.


2021 ◽  
Author(s):  
TS Subramanian ◽  
Ibrahim Al Awadhi

Abstract Passive fire protection (PFP) is applied to steel structures in process plants to delay temperature rise and maintain structural integrity until active firefighting methods are deployed and the fire is contained. Our largest gas plant was developed in several phases spanning over 25 years, with fireproofing designed and applied according to the philosophy in force during each execution phase. During a recent Risk Management Survey, potential gaps in fireproofing were observed, and the survey recommended a campaign to review and identify similar gaps across the entire plant. This paper highlights the approach used for gap identification and assessment, and the optimal recommendations that ensure safety and asset integrity while avoiding high OPEX.

Fire hazard evaluation is carried out based on a risk assessment of fire and hydrocarbon leakage scenarios in the process plant, and recommendations for fire prevention, protection and firefighting measures are provided. The requirement for fire protection depends on the fire source and the resulting fire influence zone (fireproofing zone drawings, FPZ). Structures located within the FPZ are then evaluated against identified criteria in a sequential approach (e.g. whether sudden collapse would cause significant damage, or whether the structure supports equipment containing toxic material). A further detailed assessment of structural members and their impact on overall structural stability and integrity is carried out for the identified structures to determine fireproofing needs. Based on the outcome, fireproofing is applied to the identified members.

The scope involved assessment of structural steel fireproofing across the entire complex, comprising over 40 process units and 12 utility units. Several teams conducted physical site surveys to identify the actual fireproofing based on zone drawings across the entire plant. Desktop assessment and identification of gaps were carried out primarily on the basis of project fireproofing specifications, fireproofing zone drawings, fireproofing location drawings, fireproofing schedules, structural design calculations and 3-D models, wherever available for the respective areas. The study revealed that the actual fireproofing on site in each phase of the plant is consistent within the process units installed as part of that particular project; however, inconsistencies were observed when comparing across the different phases, probably due to differing interpretations of the requirements. To ensure consistency, common criteria were established considering the fire source, the equipment supported by the structure, the criticality of the member and industry standards. Optimized solutions were recommended to avoid high OPEX while ensuring asset integrity and safety.

Fireproofing criteria are general guidelines susceptible to varying interpretations by their users. Establishing common criteria and eliminating ambiguities in specifications enables consistent application of fireproofing, resulting in optimization while ensuring asset safety and integrity. The approach adopted by ADNOC Gas Processing can be shared with other group companies so that each organization is prepared to justify its actions in the case of any external or internal audits.
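Purely as an illustration of the kind of sequential screening described above, the hypothetical Python sketch below walks a surveyed structure through ordered checks (inside the fire influence zone, consequence of collapse, toxic inventory support, criticality for stability). The criteria names, ordering, and tags are assumptions for illustration and do not reproduce ADNOC's actual criteria.

```python
# Hypothetical rule-based screening of structures for fireproofing need.
# Each structure is walked through ordered criteria; the first criterion that
# applies decides the outcome. Fields and order are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Structure:
    tag: str
    in_fire_zone: bool
    collapse_causes_major_damage: bool
    supports_toxic_inventory: bool
    critical_for_stability: bool

def needs_fireproofing(s: Structure) -> bool:
    if not s.in_fire_zone:                 # outside any fire influence zone
        return False
    if s.collapse_causes_major_damage:     # sudden collapse -> major damage
        return True
    if s.supports_toxic_inventory:         # supports toxic-material equipment
        return True
    return s.critical_for_stability        # else: detailed structural check

survey = [
    Structure("PR-101", True, True, False, False),
    Structure("PR-102", True, False, False, False),
    Structure("PR-103", False, False, True, True),
]
for s in survey:
    print(s.tag, "->", "fireproof" if needs_fireproofing(s) else "no PFP required")
```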


Author(s):  
Anastasia Katsiampoura ◽  
Syed Hamza Mufarrih ◽  
Aidan Sharkey ◽  
Ruma Bose ◽  
Sohail K. Mahboobi ◽  
...  

Author(s):  
Md Sipon Miah ◽  
Michael Schukat ◽  
Enda Barrett

Abstract Spectrum sensing in a cognitive radio network involves detecting when a primary user vacates its licensed spectrum, so that secondary users can broadcast on the same band. Accurately sensing the absence of the primary user ensures maximum utilization of the licensed spectrum and is fundamental to building effective cognitive radio networks. In this paper, we address the issues of enhancing sensing gain, average throughput, energy consumption, and network lifetime in a cognitive radio-based Internet of Things (CR-IoT) network that uses the non-sequential approach. As a solution, we propose a Dempster–Shafer theory-based throughput analysis of an energy-efficient spectrum sensing scheme for a heterogeneous CR-IoT network using the sequential approach, which first uses the signal-to-noise ratio (SNR) to evaluate the degree of reliability and then merges the reporting time slot into a flexible sensing time slot to assess spectrum sensing more efficiently. Before a global decision is made at the fusion center, on the basis of both a soft decision fusion rule (Dempster–Shafer theory) and a hard decision fusion rule (the "n-out-of-k" rule), the flexible sensing time slot is used to adjust the measured result. Using the proposed Dempster–Shafer theory, evidence is aggregated during the reporting time slot and a global decision is then made at the fusion center. In addition, the throughput of the proposed scheme using the sequential approach is analyzed under both the soft and hard decision fusion rules. Simulation results indicate that the new approach improves primary user sensing accuracy by 13% over previous approaches, while concurrently increasing detection probability and decreasing false alarm probability. It also improves overall throughput, reduces energy consumption, prolongs expected lifetime, and reduces global error probability compared to previous approaches under any condition (part of this paper was presented at the EuCAP 2018 conference; Md. Sipon Miah et al. 2018).
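For readers unfamiliar with the soft fusion rule, the sketch below gives a minimal, generic Python implementation of Dempster's rule of combination for cooperative spectrum sensing over the hypotheses H0 (primary user absent) and H1 (present), with "Omega" capturing unassigned belief. The basic probability assignments are made-up numbers; the paper's SNR-based reliability weighting and flexible sensing slot are not modeled here.

```python
# Minimal sketch of Dempster-Shafer evidence fusion for cooperative spectrum
# sensing: each sensing node reports a basic probability assignment (BPA) over
# H0 (primary user absent), H1 (present), and Omega (uncertain), and the
# fusion center combines the reports with Dempster's rule. The BPA values
# below are illustrative and do not come from the paper's SNR-based weighting.

def dempster_combine(m1, m2):
    """Combine two BPAs given as dicts with keys 'H0', 'H1', 'Omega'."""
    combined = {"H0": 0.0, "H1": 0.0, "Omega": 0.0}
    conflict = 0.0
    for a, pa in m1.items():
        for b, pb in m2.items():
            if a == b:                 # same focal set: H0/H0, H1/H1, Omega/Omega
                combined[a] += pa * pb
            elif a == "Omega":         # Omega intersected with {b} is {b}
                combined[b] += pa * pb
            elif b == "Omega":         # {a} intersected with Omega is {a}
                combined[a] += pa * pb
            else:                      # H0 versus H1: empty intersection
                conflict += pa * pb
    k = 1.0 - conflict                 # Dempster normalization factor
    return {h: v / k for h, v in combined.items()}

# Hypothetical reports from three sensing nodes.
reports = [
    {"H0": 0.6, "H1": 0.2, "Omega": 0.2},
    {"H0": 0.5, "H1": 0.3, "Omega": 0.2},
    {"H0": 0.1, "H1": 0.7, "Omega": 0.2},
]

fused = reports[0]
for m in reports[1:]:
    fused = dempster_combine(fused, m)

decision = "H0 (channel free)" if fused["H0"] > fused["H1"] else "H1 (occupied)"
print(fused, "->", decision)
```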


2021 ◽  
Vol 3 ◽  
Author(s):  
Filippo Marchione ◽  
Konrad Hungerbuehler ◽  
Stavros Papadokonstantakis

Mass integration has long been used to reduce process waste and environmental impact. Despite this long history, new challenges constantly arise as process simulation tools offer platforms for rigorous process models, and the typical mass integration framework therefore requires modifications to account accurately for process performance. In this work, a novel sequential methodology is presented to realize a recycle network with rigorous process models. Initially, under the assumption of constant compositions of the process sources, an optimal ranking of the process sinks is determined. The optimal recycle network thus obtained is then used in a sequential methodology based on rigorous process models, in which violations of process constraints are handled at each sequential step through the concept of a "tightening constant". Application of the sequential methodology to two case studies demonstrates its ability to provide good approximations of the global optima with low computational effort.
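To illustrate only the first step of such a sequential methodology (the sink ranking obtained under constant source compositions), the toy Python sketch below greedily recycles waste sources to process sinks subject to each sink's flow demand and maximum inlet contaminant concentration. The data, units, and ranking heuristic are assumptions for illustration; the paper's rigorous process models and "tightening constant" are not represented.

```python
# Toy sketch of the sink-ranking step of a sequential mass-integration
# procedure: with source compositions held constant, sinks are ranked and
# waste sources are recycled to them greedily, subject to flow demand and a
# maximum allowable inlet contaminant concentration. All numbers and the
# ranking heuristic are illustrative assumptions.

sources = [  # (name, available flow [t/h], contaminant concentration [ppm])
    ("S1", 20.0, 50.0),
    ("S2", 35.0, 120.0),
]
sinks = [    # (name, required flow [t/h], max inlet concentration [ppm])
    ("D1", 25.0, 60.0),
    ("D2", 40.0, 150.0),
]

# Rank sinks from the strictest to the most tolerant concentration limit, so
# the cleanest sources are reserved for the most demanding sinks first.
sinks_ranked = sorted(sinks, key=lambda d: d[2])
sources_ranked = sorted(sources, key=lambda s: s[2])

remaining = {name: flow for name, flow, _ in sources}
allocation = []
for d_name, demand, c_max in sinks_ranked:
    for s_name, _, conc in sources_ranked:
        if demand <= 0:
            break
        if conc <= c_max and remaining[s_name] > 0:
            sent = min(demand, remaining[s_name])
            remaining[s_name] -= sent
            demand -= sent
            allocation.append((s_name, d_name, sent))

print(allocation)   # recycle streams; unmet demand is covered by fresh feed
print(remaining)    # unrecycled source flow sent to treatment or waste
```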

