NBIC and DTRA, An Interagency Partnership to Integrate Analyst Capabilities

Author(s):  
Wai-Ling Mui ◽  
Edward P. Argenta ◽  
Teresa Quitugua ◽  
Christopher Kiley

Objective: The National Biosurveillance Integration Center (NBIC) and the Defense Threat Reduction Agency's Chemical and Biological Technologies Department (DTRA J9 CB) have partnered to co-develop the Biosurveillance Ecosystem (BSVE), an emerging capability that aims to provide a virtual, customizable analyst workbench that integrates health and non-health data. This partnership promotes engagement between diverse health surveillance entities to increase awareness and improve decision-making capabilities.

Introduction: NBIC collects, analyzes, and shares key biosurveillance information to support the nation's response to biological events of concern. Integration of this information enables early warning and shared situational awareness to inform critical decision making and direct response and recovery efforts. DTRA J9 CB leads DoD S&T to anticipate, defend, and safeguard against chemical and biological threats for the warfighter and the nation. These agencies have partnered to meet the evolving needs of the biosurveillance community and address gaps in technology and data sharing capabilities. High-profile events such as the 2009 H1N1 pandemic, the West African Ebola outbreak, and the recent emergence of Zika virus disease have underscored the need for integration of disparate biosurveillance systems to provide a more functional infrastructure. This allows analysts and others in the community to collect, analyze, and share relevant data across organizations securely and efficiently. Leveraging existing biosurveillance efforts provides the federal public health community, and its partners, with a comprehensive interagency platform that enables engagement and data sharing.

Methods: NBIC and DTRA are leveraging existing biosurveillance projects to share data feeds, work processes, resources, and lessons learned. A multi-stakeholder Agile process was implemented to represent the interests of NBIC, DTRA, and their respective partners. System requirements generated by both agencies were combined to form a single backlog of prioritized needs. Functional requirements from NBIC support the development of the prototype by refining system capabilities and providing an operational perspective. DTRA's technical expertise and research and development (R&D) portfolio ensure robust analytic applications are embedded within a secure, scalable system architecture. Integration of analyst-validated data from the NBIC Biofeeds system serves as a gold standard to improve analytic development in machine learning and natural language processing. Additionally, working groups are formed through NBIC and DTRA extended partnerships with academia and private industry to expand R&D possibilities. These expansions include leveraging existing ontology efforts for improved system functionality and integrating social media algorithms for improved topic analysis output.

Results: The combined efforts of these two agencies to develop the BSVE and improve overall biosurveillance processes across the federal government have enhanced understanding of the needs of the community in a variety of mission spaces. To date, co-creation of products, joint analysis, and sharing of data feeds have become a major priority for both partners to advance biosurveillance outcomes. Within the larger system development effort, possible coordination with other agencies such as the Department of Veterans Affairs (VA) and the US Geological Survey (USGS) could expand the reach of the system to ensure fulfillment of health surveillance requirements as a whole.

Conclusions: The NBIC and DTRA partnership has demonstrated value in improving biosurveillance capabilities for each agency and their partners. BSVE will provide NBIC analysts with a collaborative tool that can leverage applications that visualize near real-time global epidemic and outbreak data from a range of unique and trusted sources. The continued collaboration means ongoing access to new data streams and analytic processes for all analysts, as well as advanced machine learning algorithms that increase capabilities for joint analysis, rapid product creation, and continuous interagency communication.

Author(s):  
Hazlina Shariff et al.

A key aspect of software quality is that the software operates correctly and meets user needs. A primary concern with non-functional requirements (NFRs) is that they are frequently neglected because the relevant information is hidden within documents. NFRs represent tacit knowledge about the system, and users typically find them hard to articulate, which causes NFRs to be absent from the elicitation process. The software engineer therefore has to act proactively and ask the user for software quality criteria so that the objectives of the requirements can be achieved. To overcome these problems, we use machine learning to detect indicator terms of NFRs in textual requirements, so that the software engineer can be reminded to elicit the missing NFRs. We developed a prototype tool that supports our approach by classifying textual requirements using supervised machine learning algorithms. A survey was conducted to evaluate the effectiveness of the prototype tool in detecting NFRs.
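As an illustration of the classification step described above, the following sketch (not the authors' tool; the CSV file name, column names, and labels are assumptions) trains a supervised classifier on TF-IDF features to flag requirement sentences that contain NFR indicator terms.

```python
# Minimal sketch of supervised NFR detection in textual requirements.
# Assumes a hypothetical CSV with columns "text" (requirement sentence) and
# "label" (1 = contains an NFR indicator, 0 = purely functional).
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from sklearn.pipeline import Pipeline

df = pd.read_csv("nfr_requirements.csv")  # hypothetical file
X_train, X_test, y_train, y_test = train_test_split(
    df["text"], df["label"], test_size=0.2, random_state=42, stratify=df["label"])

clf = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2), min_df=2)),
    ("model", LogisticRegression(max_iter=1000)),
])
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```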


2020 ◽  
Vol 114 ◽  
pp. 242-245
Author(s):  
Jootaek Lee

The term Artificial Intelligence (AI) has changed since it was first coined by John McCarthy in 1956. AI, believed to have originated with Kurt Gödel's unprovable computational statements in 1931, is now called deep learning or machine learning. AI is defined as a computer machine with the ability to make predictions about the future and solve complex tasks using algorithms. AI algorithms are enhanced and become effective with big data capturing the present and the past, while still inevitably reflecting human biases in their models and equations. AI is also capable of making choices like humans, mirroring human reasoning. AI can help robots to efficiently repeat the same labor-intensive procedures in factories and can analyze historic and present data efficiently through deep learning, natural language processing, and anomaly detection. Thus, AI covers a spectrum of augmented intelligence relating to prediction, autonomous intelligence relating to decision making, automated intelligence for labor robots, and assisted intelligence for data analysis.


2021 ◽  
Vol 11 (8) ◽  
pp. 3296
Author(s):  
Musarrat Hussain ◽  
Jamil Hussain ◽  
Taqdir Ali ◽  
Syed Imran Ali ◽  
Hafiz Syed Muhammad Bilal ◽  
...  

Clinical Practice Guidelines (CPGs) aim to optimize patient care by assisting physicians during the decision-making process. However, guideline adherence is highly affected by their unstructured format and the aggregation of background information with disease-specific information. The objective of our study is to extract disease-specific information from CPGs to enhance the adherence ratio. In this research, we propose a semi-automatic mechanism for extracting disease-specific information from CPGs using pattern-matching techniques. We apply supervised and unsupervised machine-learning algorithms to a CPG to extract a list of salient terms that contribute to distinguishing recommendation sentences (RS) from non-recommendation sentences (NRS). Simultaneously, a group of experts analyzes the same CPG and extracts initial "heuristic patterns" using a group decision-making method, the nominal group technique (NGT). We provide the list of salient terms to the experts and ask them to refine their extracted patterns; the experts refine the patterns considering the provided salient terms. The extracted heuristic patterns depend on specific terms and suffer from a specialization problem due to synonymy and polysemy. Therefore, we generalize the heuristic patterns to part-of-speech (POS) patterns and unified medical language system (UMLS) patterns, which makes the proposed method generalizable to all types of CPGs. We evaluated the initial extracted patterns on asthma, rhinosinusitis, and hypertension guidelines with accuracies of 76.92%, 84.63%, and 89.16%, respectively. The accuracy increased to 78.89%, 85.32%, and 92.07% with the refined machine-learning-assisted patterns, respectively. Our system assists physicians by locating disease-specific information in CPGs, which enhances physicians' performance and reduces CPG processing time. Additionally, it is beneficial for CPG content annotation.
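A rough sketch of the salient-term step, assuming tiny invented sentences rather than the study's guideline corpus: chi-squared scores over a bag-of-words matrix surface terms that best separate recommendation from non-recommendation sentences.

```python
# Sketch of extracting salient terms that distinguish recommendation sentences (RS)
# from non-recommendation sentences (NRS); sentences and labels are illustrative
# placeholders, not the study's data.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import chi2

sentences = [
    "Clinicians should prescribe inhaled corticosteroids for persistent asthma.",
    "Asthma is a chronic inflammatory disorder of the airways.",
    "It is recommended that patients be reassessed within four weeks.",
    "The prevalence of asthma has increased over recent decades.",
]
labels = [1, 0, 1, 0]  # 1 = recommendation sentence, 0 = background

vec = CountVectorizer()
X = vec.fit_transform(sentences)
scores, _ = chi2(X, labels)  # chi-squared statistic per term

terms = np.array(vec.get_feature_names_out())
print("Candidate salient terms:", terms[np.argsort(scores)[::-1][:5]])
```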


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Alan Brnabic ◽  
Lisa M. Hess

Abstract Background Machine learning is a broad term encompassing a number of methods that allow the investigator to learn from the data. These methods may permit large real-world databases to be more rapidly translated into applications that inform patient-provider decision making. Methods This systematic literature review was conducted to identify published observational research that employed machine learning to inform decision making at the patient-provider level. The search strategy was implemented and studies meeting eligibility criteria were evaluated by two independent reviewers. Relevant data related to study design, statistical methods, and strengths and limitations were identified; study quality was assessed using a modified version of the Luo checklist. Results A total of 34 publications from January 2014 to September 2020 were identified and evaluated for this review. Diverse methods, statistical packages, and approaches were used across the identified studies. The most common methods included decision tree and random forest approaches. Most studies applied internal validation, but only two conducted external validation. Most studies utilized one algorithm, and only eight studies applied multiple machine learning algorithms to the data. Seven items on the Luo checklist were not met by more than 50% of published studies. Conclusions A wide variety of approaches, algorithms, statistical software, and validation strategies were employed in the application of machine learning methods to inform patient-provider decision making. There is a need to ensure that multiple machine learning approaches are used, that the model selection strategy is clearly defined, and that both internal and external validation are performed, so that decisions for patient care are made with the highest-quality evidence. Future work should routinely employ ensemble methods incorporating multiple machine learning algorithms.
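To illustrate the practice the review calls for, the sketch below compares several algorithms under internal (cross-)validation; the public breast-cancer dataset merely stands in for a real-world clinical database and is not drawn from any of the reviewed studies.

```python
# Sketch: evaluating multiple machine learning algorithms on the same data
# with 5-fold cross-validation (internal validation) and a common metric.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
models = {
    "decision tree": DecisionTreeClassifier(random_state=0),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "logistic regression": make_pipeline(StandardScaler(),
                                         LogisticRegression(max_iter=1000)),
}
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: mean AUC {auc.mean():.3f} (+/- {auc.std():.3f})")
```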


2021 ◽  
Vol 9 (5) ◽  
pp. 538
Author(s):  
Jinwan Park ◽  
Jung-Sik Jeong

According to the statistics of maritime collision accidents over the last five years (2016–2020), 95% of all maritime collision accidents were caused by human factors. Machine learning algorithms are an emerging approach to judging the risk of collision among vessels and supporting reliable decision-making prior to collision-avoidance maneuvers. As a result, they can be a good means of reducing errors caused by navigators' carelessness. This article proposes an enhanced machine learning method to estimate ship collision risk and to support more reliable decision-making about that risk. To estimate the ship collision risk, the conventional support vector machine (SVM) was first applied. Despite the SVM's advantage in resolving the uncertainty problem using collected ship parameters, it has inherent weaknesses. In this study, the relevance vector machine (RVM), which can provide reliable probabilistic results based on Bayesian theory, was applied to estimate the collision risk, and the proposed method was compared with the results of applying the SVM. The comparison showed that the estimation model using the RVM is more accurate and efficient than the model using the SVM. We expect to support the navigator's reasoned decision-making through more accurate risk estimation, thus allowing early evasive actions.
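A minimal sketch of the SVM baseline, assuming synthetic encounter features (DCPA, TCPA, relative bearing, distance) and a toy labeling rule rather than the study's AIS-derived data. scikit-learn has no built-in relevance vector machine, so the RVM comparison would require a separate implementation; here probability=True at least yields probabilistic SVM outputs.

```python
# Sketch of SVM-based collision-risk estimation from encounter parameters.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# columns: DCPA (nm), TCPA (min), relative bearing (deg), distance (nm) -- synthetic
X = rng.uniform([0.0, 0.0, 0.0, 0.5], [2.0, 30.0, 360.0, 10.0], size=(200, 4))
y = ((X[:, 0] < 0.5) & (X[:, 1] < 12)).astype(int)  # toy "risky encounter" rule

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
model.fit(X, y)
# probability of [not risky, risky] for one hypothetical encounter
print(model.predict_proba([[0.3, 8.0, 45.0, 2.0]]))
```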


2021 ◽  
Author(s):  
Vidya Samadi ◽  
Rakshit Pally

Floods are among the most destructive natural hazards, affecting millions of people across the world and leading to severe loss of life and damage to property, critical infrastructure, and agriculture. The Internet of Things (IoT), machine learning (ML), and Big Data are exceptionally valuable tools for assessing catastrophe readiness and collecting vast amounts of actionable data. The aim of this presentation is to introduce the Flood Analytics Information System (FAIS) as a data gathering and analytics system. The FAIS application is designed to integrate crowd intelligence, ML, and natural language processing of tweets to provide warnings, with the aim of improving flood situational awareness and risk assessment. FAIS has been beta tested during major hurricane events in the US, where successive storms caused extensive damage and disruption. The prototype successfully identifies a dynamic set of at-risk locations/communities using USGS river gauge height readings and geotagged tweets intersected with watershed boundaries. The list of prioritized locations can be updated as the river monitoring system and conditions change over time (typically every 15 minutes). The prototype also performs flood frequency analysis (FFA) using various probability distributions with associated uncertainty estimation to assist engineers in designing safe structures. This presentation will discuss the FAIS functionalities and real-time implementation of the prototype across the southern and southeastern USA. This research is funded by the US National Science Foundation (NSF).
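As a worked illustration of the flood-frequency-analysis step, the sketch below fits one candidate distribution (Gumbel/EV1) to annual peak streamflows and estimates the 100-year flood; the peak-flow values are invented, and FAIS itself evaluates several distributions with uncertainty estimates.

```python
# Sketch of flood frequency analysis: fit a Gumbel (EV1) distribution to
# annual peak flows and compute the 100-year return-period discharge.
import numpy as np
from scipy import stats

# hypothetical annual peak streamflows (cfs)
annual_peaks = np.array([820, 1150, 960, 1430, 700, 1890, 1240, 1010,
                         1560, 880, 1330, 990, 1720, 1100, 940])

loc, scale = stats.gumbel_r.fit(annual_peaks)
q100 = stats.gumbel_r.ppf(1 - 1 / 100, loc=loc, scale=scale)  # 1% annual exceedance
print(f"Estimated 100-year peak flow: {q100:.0f} cfs")
```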


2020 ◽  
Vol 7 (10) ◽  
pp. 380-389
Author(s):  
Asogwa D.C ◽  
Anigbogu S.O ◽  
Anigbogu G.N ◽  
Efozia F.N

Author age prediction is the task of determining an author's age by studying the texts they have written. Predicting an author's age can shed light on the trends, opinions, and social and political views of an age group. Marketers often use this to promote a product or service to an age group based on its conveyed interests and opinions. Methodologies in natural language processing have made it possible to predict an author's age from text by examining variation in linguistic characteristics, and many machine learning algorithms have been used for author age prediction. However, social network text poses numerous challenges for computational linguists, and machine learning techniques face their own performance challenges in realistic scenarios. This work developed a model that predicts an author's age from text with a machine learning algorithm (Naïve Bayes) using three types of features: content-based, style-based, and topic-based. The trained model gave a prediction accuracy of 80%.
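A minimal sketch of the Naïve Bayes classifier described above, using only bag-of-words content features (the paper also combines style- and topic-based features); the texts and age-group labels are invented examples.

```python
# Sketch: Naïve Bayes age-group prediction from text.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline

texts = [
    "omg this new game is lit, playing all night lol",
    "Reviewing quarterly budgets before the board meeting tomorrow.",
    "cant wait for the school trip next week!!",
    "My grandchildren visited over the weekend; we baked together.",
]
ages = ["teen", "adult", "teen", "senior"]  # illustrative age-group labels

clf = Pipeline([("bow", CountVectorizer()), ("nb", MultinomialNB())])
clf.fit(texts, ages)
print(clf.predict(["homework is so boring lol"]))
```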


Author(s):  
Soundariya R.S. ◽  
◽  
Tharsanee R.M. ◽  
Vishnupriya B ◽  
Ashwathi R ◽  
...  

Coronavirus disease (COVID-19) has spread rapidly worldwide since April 2020, leading to massive loss of life across many countries. In accordance with WHO guidance, diagnosis is currently performed by reverse transcription polymerase chain reaction (RT-PCR) testing, which takes four to eight hours to process test samples and up to 48 hours more to determine whether samples are positive or negative. Laboratory tests are clearly time consuming, so a speedy and prompt diagnosis of the disease is greatly needed. This can be attained through several artificial intelligence methodologies for earlier diagnosis and tracing of the disease. These methodologies fall into three categories: (i) predicting the pandemic spread using mathematical models; (ii) empirical analysis using machine learning models to forecast the global corona transition by considering susceptible, infected, and recovered rates; and (iii) utilizing deep learning architectures for corona diagnosis using input data in the form of X-ray and CT scan images. When X-ray and CT scan images are used, supplementary data such as medical signs, patient history, and laboratory test results can also be considered while training the learning model to improve testing efficacy. Thus, the proposed investigation summarizes the mathematical models, machine learning algorithms, and deep learning frameworks that can be applied to these datasets to forecast the traces of COVID-19 and detect the risk factors of coronavirus.
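For the third category, a minimal transfer-learning sketch is shown below; it is illustrative only (random tensors stand in for preprocessed X-ray or CT images), is not a diagnostic model, and does not reproduce any architecture surveyed above.

```python
# Sketch: image classifier built on a pretrained backbone, of the kind used
# for X-ray/CT-based COVID-19 screening. Inputs here are random placeholders.
import numpy as np
import tensorflow as tf

base = tf.keras.applications.MobileNetV2(
    weights="imagenet", include_top=False, pooling="avg", input_shape=(224, 224, 3))
base.trainable = False  # keep ImageNet features, train only the new head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(1, activation="sigmoid"),  # probability of positive class
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# placeholder batch: in practice these would be preprocessed chest images and labels
x = np.random.rand(8, 224, 224, 3).astype("float32")
y = np.random.randint(0, 2, size=(8, 1))
model.fit(x, y, epochs=1, verbose=0)
```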


Author(s):  
Rashida Ali ◽  
Ibrahim Rampurawala ◽  
Mayuri Wandhe ◽  
Ruchika Shrikhande ◽  
Arpita Bhatkar

The Internet provides a medium to connect with individuals of similar or different interests, creating a hub. Since a huge number of users participate on these platforms, a user can receive a high volume of messages from different individuals, creating chaos and unwanted messages. These messages sometimes contain true information and sometimes false, which leads to confusion among users and is a first step toward spam messaging. A spam message is an irrelevant and unsolicited message sent by a known or unknown user, which may create a sense of insecurity among users. In this paper, different machine learning algorithms were trained and tested with natural language processing (NLP) to classify whether messages are spam or ham.
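A rough sketch of the spam/ham experiment, assuming a handful of invented messages rather than the paper's dataset: several classifiers are trained on the same TF-IDF features so their predictions can be compared.

```python
# Sketch: comparing classifiers on TF-IDF features for spam/ham classification.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC

messages = [
    "WINNER!! Claim your free prize now by clicking this link",
    "Are we still meeting for lunch tomorrow?",
    "Congratulations, you have been selected for a cash reward",
    "Please send me the notes from yesterday's lecture",
]
labels = ["spam", "ham", "spam", "ham"]

X = TfidfVectorizer().fit_transform(messages)
for clf in (MultinomialNB(), LogisticRegression(max_iter=1000), LinearSVC()):
    clf.fit(X, labels)
    print(type(clf).__name__, clf.predict(X[:1]))  # prediction for the first message
```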

