Analytic Fusion for Essential Indicators of the Opioid Epidemic

2019 ◽  
Vol 11 (1) ◽  
Author(s):  
Howard Burkom ◽  
Joseph Downs ◽  
Raghav Ramachandran ◽  
Wayne Loschen ◽  
Laurel Boyd ◽  
...  

Objective: In a partnership between the Public Health Division of the Oregon Health Authority (OHA) and the Johns Hopkins Applied Physics Laboratory (APL), our objective was to develop an analytic fusion tool using streaming data and report-based evidence to improve the targeting and timing of evidence-based interventions in the ongoing opioid overdose epidemic. The tool is intended to enable practical situational awareness in the ESSENCE biosurveillance system to target response programs at the county and state levels. Threats to be monitored include emerging events and gradual trends of overdoses in three categories: all prescription and illicit opioids, heroin, and especially high-mortality synthetic drugs such as fentanyl and its analogues. Traditional sources included emergency department (ED) visits and emergency medical services (EMS) call records. Novel sources included poison center calls, death records, and report-based information such as bad-batch warnings on social media. Using available data and requirements analyses thus far, we applied and compared Bayesian networks, decision trees, and other machine learning approaches to derive robust tools to reveal emerging overdose threats and identify at-risk subpopulations.

Introduction: Unlike other health threats of recent concern for which widespread mortality was hypothetical, the high fatality burden of the opioid overdose crisis is present, steadily growing, and affecting young and old, rural and urban, military and civilian subpopulations. While the background of many public health monitors is mainly infectious disease surveillance, these epidemiologists seek to collaborate with behavioral health and injury prevention programs and with law enforcement and emergency medical services to combat the opioid crisis. Recent efforts have produced key terms and phrases in available data sources and numerous user-friendly dashboards allowing inspection of hundreds of plots. The current effort seeks to distill and present combined fusion alerts of greatest concern from numerous stratified data outputs. Near-term plans are to implement the best-performing fusion methods as an ESSENCE module for the benefit of OHA staff and other user groups.

Methods: By analyzing historical OHA data, we formed features to monitor in each data source by adapting diagnosis codes and text strings suggested by CDC's injury prevention division, published EMS criteria [1], and generic product codes from CDC toxicologists, with guidance from OHA Emergency Services Director David Lehrfeld and from Oregon Poison Center Director Sandy Giffen. These features included general and specific opioid abuse indicators, such as daily counts of records labelled with the "poisoning" subcategory and containing "fentanyl" or other keywords in the free text. Matrices of corresponding time series were formed for each of 36 counties and the entire state as inputs to region-specific fusion algorithms.
To obtain truth data for detection, OHA staff provided guidance and design help to generate plausible overdose threat scenarios, which were quantified as realistic data distributions of the monitored features, accounting for time delays and historical distributions of counts in each data source. We sampled these distributions to create 1,000 target sets for detection based on the event duration and affected counties for each event scenario. We used these target datasets to compare the detection performance of fusion detection algorithms.
Tested algorithms included Bayesian networks formed with the R package gRain, as well as random forest, logistic regression, and support vector machine models implemented with the Python scikit-learn package using default settings. The first 800 days of the data were used for model training and the last 400 days for testing. Model results were evaluated with the metrics:
Sensitivity = (number of target event days signaled) / (all event days), and
Positive predictive value (PPV) = (number of target event days signaled) / (all days signaled).
These metrics were combined with specificity, regarded as the expected fusion alert rate calculated from the historical dataset with no simulated cases injected.

Results: The left half of Figure 1 illustrates a threat scenario along Oregon's I-5 corridor in which a string of fentanyl overdoses with a few fatalities affects the monitored data streams in three counties over a seven-day period. The right half of the figure charts the performance metrics for random forest and Bayesian network machine learning methods applied to both training and test datasets, assuming total case counts of 50, 20, and 10 overdoses. Sensitivity values were encouraging, especially for the Bayesian networks and even for the 10-case scenario. Computed PPV levels suggested a manageable public health investigation burden.

Conclusions: The detection results were promising for a threat scenario of particular concern to OHA, based on a data scenario deemed plausible and realistic given historical data. Trust in and acceptance of supervised machine learning outputs, beyond traditional statistical methods, by public health surveillance practitioners will require user experience and similar evaluation with additional threat scenarios and authentic event data. Credible truth data can be generated for testing and evaluation of analytic fusion methods, given the advantages of several years of historical data from multiple sources and the expertise of experienced monitors. The collaborative generation process may be standardized and extended to other threat types and data environments. Next steps include adding report-based data that can influence data interpretation to the analytic fusion capability, including mainstream and social media reports, events in neighboring regions, and law enforcement data.

References:
1. Rhode Island Enhanced State Opioid Overdose Surveillance (ESOOS) Case Definition for Emergency Medical Services (EMS), http://www.health.ri.gov/publications/guidelines/ESOOSCaseDefinitionForEMS.pdf, last accessed: Sept. 9, 2018.
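The methods paragraph above names the scikit-learn models and the 800/400-day split but not the code itself. Below is a minimal sketch, assuming a feature matrix `X` (one row per day, one column per monitored time series) and binary event-day labels `y` from the simulated target sets; the gRain Bayesian network is not reproduced here, and all names are illustrative rather than the authors' actual implementation.

```python
# Illustrative sketch of the train/test split and metrics described above;
# X (days x monitored features) and y (1 = simulated target event day) are assumed.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

def sensitivity_ppv(y_true, y_pred):
    """Sensitivity = signaled event days / all event days;
    PPV = signaled event days / all signaled days."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    true_alarms = np.sum((y_true == 1) & (y_pred == 1))
    sensitivity = true_alarms / max(np.sum(y_true == 1), 1)
    ppv = true_alarms / max(np.sum(y_pred == 1), 1)
    return sensitivity, ppv

def compare_models(X, y, n_train=800):
    """Train on the first n_train days, test on the remaining days."""
    X_train, y_train = X[:n_train], y[:n_train]
    X_test, y_test = X[n_train:], y[n_train:]
    models = {
        "random_forest": RandomForestClassifier(),      # default settings, as in the abstract
        "logistic_regression": LogisticRegression(),
        "svm": SVC(),
    }
    results = {}
    for name, model in models.items():
        model.fit(X_train, y_train)
        results[name] = sensitivity_ppv(y_test, model.predict(X_test))
    return results
```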

2019 ◽  
Vol 14 (3) ◽  
pp. 178-189 ◽  
Author(s):  
Xiaoyang Jing ◽  
Qimin Dong ◽  
Ruqian Lu ◽  
Qiwen Dong

Background: Protein inter-residue contact prediction plays an important role in the field of protein structure and function research. As a low-dimensional representation of protein tertiary structure, protein inter-residue contacts can greatly help de novo protein structure prediction methods reduce the conformational search space. Over the past two decades, various methods have been developed for protein inter-residue contact prediction.

Objective: We provide a comprehensive and systematic review of protein inter-residue contact prediction methods.

Results: Protein inter-residue contact prediction methods are roughly classified into five categories: correlated mutations methods, machine-learning methods, fusion methods, template-based methods, and 3D model-based methods. In this paper, we first describe the common definition of protein inter-residue contacts and show typical applications of protein inter-residue contacts. Then, we present a comprehensive review of the three main categories of protein inter-residue contact prediction: correlated mutations methods, machine-learning methods, and fusion methods. In addition, we analyze the constraints of each category. Furthermore, we compare several representative methods on the CASP11 dataset and discuss the performance of these methods in detail.

Conclusion: Correlated mutations methods achieve better performance for long-range contacts, while machine-learning methods perform well for short-range contacts. Fusion methods can take advantage of both machine-learning and correlated mutations methods. Employing a more effective fusion strategy could help further improve the performance of fusion methods.
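For concreteness, here is a minimal sketch of the contact definition the review refers to; the 8 Å Cβ–Cβ distance threshold and the minimum sequence separation of 6 residues are common conventions assumed for this example, not values specified in the abstract.

```python
# Hypothetical sketch: derive a binary inter-residue contact map from per-residue
# C-beta coordinates (shape [L, 3]) using an assumed 8 Å threshold and an assumed
# minimum sequence separation of 6 residues.
import numpy as np

def contact_map(cb_coords, threshold=8.0, min_separation=6):
    """Return an L x L boolean matrix of inter-residue contacts."""
    diffs = cb_coords[:, None, :] - cb_coords[None, :, :]
    distances = np.linalg.norm(diffs, axis=-1)            # pairwise Euclidean distances
    contacts = distances < threshold
    idx = np.arange(len(cb_coords))
    near_diagonal = np.abs(idx[:, None] - idx[None, :]) < min_separation
    contacts[near_diagonal] = False                        # drop trivial short-range pairs
    return contacts
```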


2012 ◽  
pp. 704-723
Author(s):  
Albert Ali Salah

Biometrics aims at reliable and robust identification of humans from their personal traits, mainly for security and authentication purposes, but also for identifying and tracking the users of smarter applications. Frequently considered modalities are fingerprint, face, iris, palmprint and voice, but there are many other possible biometrics, including gait, ear image, retina, DNA, and even behaviours. This chapter presents a survey of machine learning methods used for biometrics applications, and identifies relevant research issues. The author focuses on three areas of interest: offline methods for biometric template construction and recognition, information fusion methods for integrating multiple biometrics to obtain robust results, and methods for dealing with temporal information. By introducing exemplary and influential machine learning approaches in the context of specific biometrics applications, the author hopes to provide the reader with the means to create novel machine learning solutions to challenging biometrics problems.
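As a small illustration of the information-fusion area the chapter surveys, the sketch below shows score-level fusion of two biometric matchers via min-max normalization and a weighted sum; the modalities, weights, and decision threshold are assumptions chosen for the example, not recommendations from the chapter.

```python
# Hypothetical score-level fusion of two biometric matchers (e.g., face and
# fingerprint): min-max normalize each matcher's scores, then combine with a
# weighted sum. Weights, scores, and the accept threshold are illustrative.
import numpy as np

def min_max_normalize(scores):
    scores = np.asarray(scores, dtype=float)
    lo, hi = scores.min(), scores.max()
    return (scores - lo) / (hi - lo) if hi > lo else np.zeros_like(scores)

def fuse_scores(face_scores, fingerprint_scores, w_face=0.6, w_finger=0.4):
    """Weighted-sum fusion of two normalized match-score vectors."""
    return (w_face * min_max_normalize(face_scores)
            + w_finger * min_max_normalize(fingerprint_scores))

# Example: accept a claim if the fused score exceeds an assumed threshold of 0.5.
fused = fuse_scores([42.0, 7.5, 30.1], [0.91, 0.12, 0.55])
decisions = fused > 0.5
```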


Author(s):  
Sankhadeep Chatterjee ◽  
Sarbartha Sarkar ◽  
Nilanjan Dey ◽  
Amira S. Ashour ◽  
Soumya Sen

Water pollution from industrial and domestic sources severely affects water quality. In developing and developed countries alike, it has become a major cause of numerous water-borne diseases, and the resulting burden on public health imposes an additional economic cost for deploying precautionary measures against these diseases. Recent research has been directed toward more sustainable solutions to this problem, and it has been shown that a good-quality water supply not only improves public health but also accelerates the economic growth of a geographical region. Water quality prediction using machine learning methods is still at a primitive stage, and most studies have not followed any national or international standard for water quality prediction. The current work addresses both problems. First, advanced machine learning methods, namely Artificial Neural Networks (ANNs) supported by the well-known multi-objective optimization algorithm Non-dominated Sorting Genetic Algorithm-II (NSGA-II), are used to classify water samples into two classes. Second, the Indian national standard for water quality (IS 10500:2012) is utilized for this classification task. The hybrid NN-NSGA-II model is compared with two other well-known metaheuristic-supported ANN classifiers, namely ANNs trained by a Genetic Algorithm (NN-GA) and by Particle Swarm Optimization (NN-PSO); the support vector machine (SVM) is also included in the comparative study. Besides analysing performance with several performance measures, the statistical significance of the results obtained by NN-NSGA-II is judged by a Wilcoxon rank-sum test at the 5% significance level. Results indicate the superiority of the proposed NN-NSGA-II model over the other classifiers in this study.
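A minimal sketch of this comparison workflow, assuming a feature matrix `X` of water-sample measurements and binary labels `y` derived from IS 10500:2012; scikit-learn's MLPClassifier and SVC stand in for the ANN and SVM, the NSGA-II/GA/PSO training schemes are not reproduced, and SciPy's rank-sum test illustrates the 5% significance check.

```python
# Illustrative sketch only: compare a neural-network classifier with an SVM on
# water-quality data and test whether their per-fold accuracies differ
# significantly (Wilcoxon rank-sum test, 5% significance level).
# X (samples x features) and y (0/1 class labels) are assumed to exist.
import numpy as np
from scipy.stats import ranksums
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

def compare_ann_svm(X, y, folds=10):
    ann_scores = cross_val_score(MLPClassifier(max_iter=1000), X, y, cv=folds)
    svm_scores = cross_val_score(SVC(), X, y, cv=folds)
    stat, p_value = ranksums(ann_scores, svm_scores)
    return {
        "ann_mean_accuracy": np.mean(ann_scores),
        "svm_mean_accuracy": np.mean(svm_scores),
        "significant_at_5pct": p_value < 0.05,
    }
```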


2020 ◽  
Author(s):  
Juan David Gutiérrez

Abstract
Background: Previous authors have documented the relationship between air-pollution aerosols and meteorological variables and the occurrence of pneumonia. Forecasting the number of attentions of pneumonia cases (i.e., medical visits for pneumonia) may be useful to optimize the allocation of healthcare resources and to support public health authorities in implementing emergency plans to face an increase in patients. The purpose of this study is to implement four machine-learning methods to forecast the number of attentions of pneumonia cases in the five largest cities of Colombia using air-pollution aerosol, meteorological, and admission data.

Methods: The number of attentions of pneumonia cases in the five most populated Colombian cities between January 2009 and December 2019 was provided by public health authorities. Air-pollution aerosol and meteorological data were obtained from remote sensors. Four machine-learning methods were implemented for each city. We selected the machine-learning method with the best performance in each city and implemented two techniques to identify the most relevant variables in the forecasts produced by the best-performing models.

Results: According to the R² metric, random forest was the machine-learning method with the best performance for Bogotá, Medellín, and Cali, whereas for Barranquilla the best performance was obtained from Bayesian adaptive regression trees, and for Cartagena extreme gradient boosting performed best. The most important variables for the forecasts were related to the admission data.

Conclusions: The results obtained from this study suggest that machine learning can be used to efficiently forecast the number of attentions of pneumonia cases, and therefore it can be a useful decision-making tool for public health authorities.
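A minimal sketch of the per-city model comparison described above, assuming a tabular feature set `X` (aerosol, meteorological, and admission variables) and a target `y` of pneumonia attentions per period; only two scikit-learn models are shown (random forest, and gradient boosting as a stand-in for extreme gradient boosting), and Bayesian adaptive regression trees are omitted because they are not part of scikit-learn.

```python
# Illustrative sketch: fit two candidate models on one city's data and compare
# them with the R^2 metric, as in the study's evaluation. X and y are assumed.
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

def compare_forecasters(X, y):
    # Keep the temporal order: train on earlier periods, test on later ones.
    X_train, X_test, y_train, y_test = train_test_split(X, y, shuffle=False, test_size=0.2)
    results = {}
    for name, model in {
        "random_forest": RandomForestRegressor(),
        "gradient_boosting": GradientBoostingRegressor(),
    }.items():
        model.fit(X_train, y_train)
        results[name] = r2_score(y_test, model.predict(X_test))
    return results  # R^2 per model for this city
```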


2016 ◽  
Author(s):  
Thomas Stegnicki

Opioid overdose has become a public health epidemic, and the use of naloxone by law enforcement personnel has recently become a controversial policy issue. This pilot research project addresses the question of attitudes regarding addiction, overdose, naloxone administration training, and the expanding role of law enforcement in naloxone administration by law enforcement personnel who have been trained in the administration of naloxone to those experiencing an opioid overdose. A comprehensive literature review was conducted relating to the topic of opioid use and overdose and the use of naloxone by law enforcement. The Theory of Planned Behavior was the theoretical framework chosen to guide this project. The methodology used was an exploratory qualitative approach with individual face-to-face interviews as the data collection method. The results are presented and analyzed, including findings of a need for "hands-on" naloxone training, perception of empowerment by some officers since being trained to administer naloxone, and perception of empathy for those who overdose, especially toward the younger victims. Recommendations and implications for nursing practice, policy, research, and leadership are presented, including a plan for dissemination to nursing, interprofessional stakeholders, and policy makers.


2020 ◽  
Vol 19 (2) ◽  
pp. 111-132
Author(s):  
Wan Agusti

Protection and law enforcement in the field of health for the people of Pekanbaru City are clearly still lacking, and many people complain about the protection of their health. This study therefore discusses the legal protection of public health services in the city of Pekanbaru based on Law Number 36 of 2009 concerning Health. The research is sociological in type, so the data sources used are primary data from interviews, secondary data from library materials, and tertiary data from dictionaries, media, and encyclopedias. Data collection techniques consist of observation, interviews, and literature review.


2018 ◽  
Vol 10 (1) ◽  
Author(s):  
Catherine Ordun ◽  
Jessica Bonnie ◽  
Jung Byun ◽  
Daewoo Chong ◽  
Richard Latham

Objective: A team of data scientists from Booz Allen competed in an opioid hackathon and developed a prototype opioid surveillance system using data science methods. This presentation intends to 1) describe the positives and negatives of our data science approach, 2) demo the prototype applications built, and 3) discuss next steps for local implementation of a similar capability.

Introduction: At the Governor's Opioid Addiction Crisis Datathon in September 2017, a team of Booz Allen data scientists participated in a two-day hackathon to develop a prototype surveillance system for business users to locate areas of high risk across multiple indicators in the State of Virginia. We addressed 1) how different geographic regions experience the opioid overdose epidemic differently, by clustering similar counties on socioeconomic indicators, and 2) how to facilitate better data sharing between health care providers and law enforcement. We believe this inexpensive, open-source surveillance approach could be applied by states across the nation, particularly those with high rates of death due to drug overdoses and those with significant increases in deaths.

Methods: The Datathon provided a combination of publicly available data and State of Virginia datasets consisting of crime data, treatment center data, funding data, and mortality and morbidity data for opioid, prescription drug (e.g., oxycodone, fentanyl), and heroin cases, with dates starting as early as 2010. The team focused on three data sources: the U.S. Census Bureau (American Community Survey), State of Virginia Opioid Mortality and Overdose Data, and State of Virginia Department of Corrections Data. All data were cleaned and mapped to the county level using FIPS codes. The prototype system allowed users to cluster similar counties together based on socioeconomic indicators so that underlying demographic patterns such as food stamp usage and poverty levels might be revealed as indicative of mortality and overdose rates. This was important because neighboring counties like Goochland and Henrico Counties, while sharing a border, do not necessarily share similar behavioral and population characteristics. As a result, counties in close proximity may require different approaches for community messaging, law enforcement, and treatment infrastructure. The prototype also ingests crime and mortality data at the county level for dynamic data exploration across multiple time and geographic parameters, a potential vehicle for data exchange in real time.

Results: The team wrote an agglomerative clustering algorithm, similar in spirit to k-means clustering, in Python with a Flask API back end, and visualized the results by FIPS county code in R Shiny. Users could select 2 to 5 clusters for visualization. The second part of the prototype featured two dashboards built in ElasticSearch and Kibana, open-source software built on a noSQL database designed for information retrieval. Annual data on the number of criminal commitments and major offenses, along with mortality and overdose data on opioid usage, were ingested and displayed using multiple descriptive charts and basic NLP. The clustering algorithm indicated that when using five clusters, counties in the east of Virginia are more dissimilar to each other than counties in the west. The farther west, the more socioeconomically homogeneous counties become, which may explain why counties in the west have higher rates of opioid overdose than those in the east, where cases involve more recreational use of non-prescription drugs. The dashboards indicated that between 2011 and 2017, the majority of crimes associated with heavy use of drugs included Larceny/Fraud, Drug Sales, Assault, Burglary, Drug Possession, and Sexual Assault. Filtering by year, county, and offense allowed for very focused analysis at the county level.

Conclusions: Data science methods using geospatial analytics, unsupervised machine learning, and noSQL databases for unstructured data offer powerful and inexpensive ways for local officials to develop their own opioid surveillance systems. Our approach of using clustering algorithms could be advanced by including several dozen socioeconomic features, tied to a potential risk score that the group was considering calculating. Further, as the team became more familiar with the data, they considered building a supervised machine learning model not only to predict overdoses in each county but, more importantly, to extract from the model which features would be most predictive county-to-county. Because of the fast-paced nature of an overnight hackathon, a variety of open-source applications were used to build solutions quickly; the team recommends generating a single architecture that would seamlessly tie together Python, R Shiny, and ElasticSearch/Kibana into one system. Ultimately, the goal of the entire prototype is to ingest and update the models with real-time data dispatched by police, public health, emergency departments, and medical examiners.

References:
https://data.virginia.gov/datathon-2017/
https://vimeo.com/236131006?ref=tw-share
https://vimeo.com/236131182?ref=tw-share
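A minimal sketch of the county clustering step described above, assuming a pandas DataFrame `census` indexed by county FIPS code with socioeconomic columns (e.g., poverty rate, food-stamp usage); scikit-learn's AgglomerativeClustering stands in for the team's custom Python implementation, and the 2-5 cluster range mirrors the user control in the prototype.

```python
# Illustrative sketch: cluster Virginia counties by socioeconomic indicators so
# that similar counties (not merely adjacent ones) can be grouped together.
# `census` is an assumed pandas DataFrame indexed by county FIPS code.
import pandas as pd
from sklearn.cluster import AgglomerativeClustering
from sklearn.preprocessing import StandardScaler

def cluster_counties(census: pd.DataFrame, n_clusters: int = 5) -> pd.Series:
    """Return a cluster label per county, with n_clusters in the 2-5 range."""
    features = StandardScaler().fit_transform(census.values)
    labels = AgglomerativeClustering(n_clusters=n_clusters).fit_predict(features)
    return pd.Series(labels, index=census.index, name="cluster")
```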


2019 ◽  
Vol 11 (1) ◽  
Author(s):  
Jyllisa Mabion

Objective: To improve Texas Syndromic Surveillance by integrating data from the Texas Poison Center and Emergency Medical Services for opioid overdose surveillance.

Introduction: In recent years, the number of deaths from illicit and prescription opioids has increased significantly, resulting in a national and local public health crisis. According to the Texas Center for Health Statistics, there were 1,340 opioid-related deaths in 2015 [1]. In 2005, by comparison, there were 913 opioid-related deaths. Syndromic surveillance can be used to monitor overdose trends in near real time and provide much-needed information to public health officials. Texas Syndromic Surveillance (TxS2) is the statewide syndromic surveillance system hosted by the Texas Department of State Health Services (DSHS). To enhance the capabilities of TxS2 and to better understand the opioid epidemic, DSHS is integrating both Texas Poison Center (TPC) data and Emergency Medical Services (EMS) data into the system. Much of the data collected at public health organizations can be several years old by the time it is released for public use. As a result, there have been major efforts to integrate more real-time data sources for a variety of surveillance needs and during emergency response activities.

Methods: Guided by the Oregon Public Health Division's successful integration of poison data into Oregon ESSENCE, DSHS has followed a similar path [2]. DSHS already receives TPC data from the Commission on State Emergency Communication (CSEC); hence, copying and routing those data into TxS2 requires a Memorandum of Understanding (MOU) with CSEC, which is charged with administering the implementation of the Texas Poison Control Network. EMS records are currently received by the DSHS Office of Injury Prevention (OIP) via file upload and extracted from web services as an XML file. Regional and Local Health Operations, the division where the syndromic surveillance program is located, and OIP are both sections within DSHS; therefore, a formal MOU is not necessary, and both parties operate under the rules and regulations established for data under the Community Health Improvement Division. CSEC and EMS will push data extracts to a DSHS SFTP folder location for polling by Rhapsody in Amazon Web Services. The message data will be extracted and transformed into the ESSENCE database format. Data are received at least once every 24 hours.

Results: TxS2 will now include TPC and EMS data, giving system users the ability to analyze and overlay real-time data for opioid overdose surveillance in one application. The integration of these data sources in TxS2 can be used both for routine surveillance and for unexpected public health events. This effort has led to discussions on how different sections within DSHS can collaborate by using syndromic surveillance data, and has generated interest in incorporating additional data streams into TxS2 in the future.

Conclusions: While this venture is still a work in progress, it is anticipated that adding TPC and EMS data to TxS2 will be beneficial in surveilling not just opioid overdoses but other conditions and illnesses as well, along with capturing disaster-related injuries.

References:
1. Texas Health Data, Center for Health Statistics [Internet]. Austin (TX): Department of State Health Services. Available from: http://healthdata.dshs.texas.gov/Opioids/Deaths
2. Laing R, Powell M. Integrating Poison Center Data into Oregon ESSENCE using a Low-Cost Solution. OJPHI. 2017 May 1; 9(1).
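The data flow above (extracts pushed to an SFTP folder, polled by Rhapsody, transformed into the ESSENCE database format) is described without implementation detail. The sketch below is a hypothetical illustration of such a poll-and-transform step using only Python's standard library; it is not the Rhapsody route actually used by DSHS, and the XML element names are invented for the example.

```python
# Hypothetical illustration of the poll-and-transform step described above:
# read EMS/poison-center XML extracts from a local folder (standing in for the
# polled SFTP location) and flatten each record into rows for loading into a
# surveillance database. Element names (<record>, <date>, <county>, ...) are
# invented for this example and do not reflect the real feed.
import csv
import xml.etree.ElementTree as ET
from pathlib import Path

def transform_extracts(incoming_dir: str, output_csv: str) -> int:
    rows = []
    for xml_file in Path(incoming_dir).glob("*.xml"):
        for record in ET.parse(xml_file).getroot().iter("record"):
            rows.append({
                "date": record.findtext("date", default=""),
                "county": record.findtext("county", default=""),
                "chief_complaint": record.findtext("chief_complaint", default=""),
            })
    with open(output_csv, "w", newline="") as handle:
        writer = csv.DictWriter(handle, fieldnames=["date", "county", "chief_complaint"])
        writer.writeheader()
        writer.writerows(rows)
    return len(rows)  # number of records transformed in this polling cycle
```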


Author(s):  
Grant Baldwin ◽  
Jan L. Losby ◽  
Wesley M. Sargent ◽  
Jamie Mells ◽  
Sarah Bacon

Prescription drug monitoring programs (PDMPs) are secure, online, state-based databases that contain information about controlled substance prescriptions written by clinicians and dispensed by pharmacists within a jurisdiction. In this chapter, current and future trends impacting PDMPs are reviewed, and the implications of these trends for the future development of even more effective PDMPs are discussed. Uses of PDMPs by public health partners are also reviewed. For example, law enforcement officials may use data collected by PDMPs when investigating unusual prescribing patterns. Law enforcement officials may also use PDMP data in drug courts and other criminal diversion programs. Medical licensing boards use PDMP data to assess aberrant prescribing practices. Health systems, insurers, and public health officials use aggregated PDMP data as part of their efforts to evaluate a quality improvement initiative, an opioid stewardship program to improve opioid prescribing system-wide, or broad changes to prescribing patterns across a city, county, or state.


2021 ◽  
Vol 11 (2) ◽  
pp. 150
Author(s):  
Hasan Aykut Karaboga ◽  
Aslihan Gunel ◽  
Senay Vural Korkut ◽  
Ibrahim Demir ◽  
Resit Celik

Clinical diagnosis of amyotrophic lateral sclerosis (ALS) is difficult in the early period, but blood tests are less time-consuming and lower-cost than other diagnostic methods. ALS researchers have used machine learning methods to predict the genetic architecture of the disease. In this study we take advantage of Bayesian networks and machine learning methods to predict ALS patients using blood plasma protein levels and independent personal features. According to the comparison results, Bayesian networks produced the best results, with an accuracy of 0.887, an area under the curve (AUC) of 0.970, and the best values of the other comparison metrics. We confirmed that sex and age are influential variables for ALS. In addition, we found that the probability of onset involvement in ALS patients is very high. A person's other chronic or neurological diseases are also associated with ALS. Finally, we confirmed that the Parkin level may also have an effect on ALS: while this protein is at very low levels in Parkinson's patients, it is higher in ALS patients than in all control groups.
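A minimal sketch of the kind of evaluation reported above, assuming a feature matrix `X` (plasma protein levels plus age, sex, and comorbidity indicators) and binary ALS/control labels `y`; scikit-learn's GaussianNB is used as a simple probabilistic stand-in because the authors' actual Bayesian network structure is not given in the abstract.

```python
# Illustrative sketch: cross-validated accuracy and AUC for a simple
# probabilistic classifier on ALS vs. control data. X and y are assumed;
# GaussianNB is a stand-in, not the Bayesian network used in the study.
from sklearn.model_selection import cross_validate
from sklearn.naive_bayes import GaussianNB

def evaluate_classifier(X, y, folds=10):
    scores = cross_validate(GaussianNB(), X, y, cv=folds,
                            scoring=["accuracy", "roc_auc"])
    return {
        "accuracy": scores["test_accuracy"].mean(),
        "auc": scores["test_roc_auc"].mean(),
    }
```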

