Closing the Gap in Surveillance and Audit of Invasive Mold Diseases for Antifungal Stewardship Using Machine Learning

2019 ◽  
Vol 8 (9) ◽  
pp. 1390 ◽  
Author(s):  
Diva Baggio ◽  
Trisha Peel ◽  
Anton Y. Peleg ◽  
Sharon Avery ◽  
Madhurima Prayaga ◽  
...  

Clinical audit of invasive mold disease (IMD) in hematology patients is inefficient due to the difficulties of case finding. As a result, antifungal stewardship (AFS) programs preferentially report drug cost and consumption rather than measures that actually reflect quality of care. We used machine learning-based natural language processing (NLP) to non-selectively screen chest computed tomography (CT) reports for pulmonary IMD, verified by clinical review against international definitions and benchmarked against key AFS measures. NLP screened 3014 reports from 1 September 2008 to 31 December 2017, generating 784 positives that, after review, identified 205 IMD episodes (44% probable/proven) in 185 patients from 50,303 admissions. Breakthrough probable/proven IMD on antifungal prophylaxis accounted for 60% of episodes, with serum monitoring of voriconazole or posaconazole in the 2 weeks prior performed in only 53% and 69% of episodes, respectively. Fiberoptic bronchoscopy within 2 days of the CT scan occurred in only 54% of episodes. The average turnaround time of 12 days (range 7–22) for send-away bronchoalveolar galactomannan was associated with high empiric liposomal amphotericin consumption. A random audit of 10% of negative reports revealed two clinically significant misses (0.9%, 2/223). This is the first successful use of applied machine learning for institutional IMD surveillance across an entire hematology population, describing process and outcome measures relevant to AFS. Compared to current methods of clinical audit, semi-automated surveillance using NLP is more efficient and inclusive because it avoids restrictions based on the underlying hematologic condition, and it has the added advantage of being potentially scalable.
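
The abstract does not disclose the implementation, but the general pattern of NLP-based screening of free-text radiology reports can be illustrated with a minimal sketch using TF-IDF features and a linear classifier; the example reports, labels, and query below are invented placeholders and do not represent the authors' model.

```python
# Minimal sketch (not the authors' pipeline): flag chest CT reports that may
# describe pulmonary invasive mold disease for clinical review.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Hypothetical toy training data: 1 = flag for review, 0 = screen negative.
reports = [
    "New cavitating nodule with surrounding ground-glass halo.",
    "Dense consolidation with an air-crescent sign, right upper lobe.",
    "No focal consolidation, effusion, or pneumothorax.",
    "Stable appearances; no new pulmonary nodules identified.",
]
labels = [1, 1, 0, 0]

screen = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2), min_df=1)),
    ("clf", LogisticRegression(max_iter=1000)),
])
screen.fit(reports, labels)

new_report = ["Multiple nodules with halo sign in a neutropenic patient."]
print(screen.predict(new_report), screen.predict_proba(new_report))
```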

Biology ◽  
2020 ◽  
Vol 9 (12) ◽  
pp. 453
Author(s):  
Petar Tonkovic ◽  
Slobodan Kalajdziski ◽  
Eftim Zdravevski ◽  
Petre Lameski ◽  
Roberto Corizzo ◽  
...  

Applied machine learning in bioinformatics is growing as computer science slowly invades all research spheres. With the arrival of modern next-generation DNA sequencing technologies, metagenomics is becoming an increasingly interesting research field, as it finds countless practical applications exploiting the vast amounts of generated data. This study aims to scope the scientific literature in the field of metagenomic classification in the time interval 2008–2019 and provide an evolutionary timeline of data processing and machine learning in this field. This study follows the scoping review methodology and PRISMA guidelines to identify and process the available literature. Natural Language Processing (NLP) is deployed to ensure an efficient and exhaustive search of the literature corpus of three large digital libraries: IEEE, PubMed, and Springer. The search is based on keywords and properties looked up using the digital libraries' search engines. The scoping review results reveal an increasing number of research papers related to metagenomic classification over the past decade. The research is mainly focused on metagenomic classifiers, identifying scope-specific metrics for model evaluation, data set sanitization, and dimensionality reduction. Of all of these subproblems, data preprocessing is the least researched and has considerable potential for improvement.
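
The abstract does not describe the screening logic itself; the sketch below only illustrates, under assumptions, the kind of keyword/property filter an NLP-assisted scoping review might apply to records returned by digital-library search engines. The keyword groups and toy records are invented for illustration.

```python
# Illustrative keyword/property screening of candidate records.
# A record is kept only if it matches every keyword group (hypothetical groups).
KEYWORD_GROUPS = {
    "domain": ("metagenomic", "metagenomics", "taxonomic"),
    "method": ("classification", "classifier", "machine learning", "deep learning"),
}

def keep_record(record: dict) -> bool:
    """Return True if the title/abstract hits at least one keyword per group."""
    text = f"{record['title']} {record['abstract']}".lower()
    return all(any(kw in text for kw in group) for group in KEYWORD_GROUPS.values())

candidates = [
    {"title": "Deep learning for metagenomic taxonomic classification",
     "abstract": "We benchmark read-level classifiers on simulated data."},
    {"title": "A survey of genome assembly tools",
     "abstract": "Assembly quality metrics are compared across assemblers."},
]
print([r["title"] for r in candidates if keep_record(r)])
# -> ['Deep learning for metagenomic taxonomic classification']
```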


Author(s):  
Sumit Kaur

Abstract- Deep learning is an emerging research area in the machine learning and pattern recognition field, introduced with the goal of moving machine learning closer to one of its original objectives: artificial intelligence. It tries to mimic the human brain, which is capable of processing and learning from complex input data and of solving many kinds of complicated tasks well. Deep learning (DL) is based on a set of supervised and unsupervised algorithms that attempt to model higher-level abstractions in data and to learn hierarchical representations for classification. In recent years, it has attracted much attention due to its state-of-the-art performance in diverse areas such as object perception, speech recognition, computer vision, collaborative filtering, and natural language processing. This paper presents a survey of different deep learning techniques for remote sensing image classification.
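
As a generic illustration of the class of models such surveys cover, the following is a minimal convolutional network sketch for patch-based remote sensing image classification in Keras. The 64x64 RGB patch size and the 10 land-cover classes are hypothetical, and the sketch does not correspond to any specific technique reviewed in the paper.

```python
# A minimal, generic CNN for patch-based land-cover classification (sketch only).
import tensorflow as tf

NUM_CLASSES = 10  # assumed number of land-cover categories, for illustration

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 3)),          # assumed 64x64 RGB patches
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(train_patches, train_labels, epochs=10)  # data loading omitted
```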


2017 ◽  
Author(s):  
Sabrina Jaeger ◽  
Simone Fulle ◽  
Samo Turk

Inspired by natural language processing techniques, we here introduce Mol2vec, an unsupervised machine learning approach to learn vector representations of molecular substructures. Similarly to Word2vec models, where vectors of closely related words lie in close proximity in the vector space, Mol2vec learns vector representations of molecular substructures that point in similar directions for chemically related substructures. Compounds can finally be encoded as vectors by summing up the vectors of the individual substructures and can, for instance, be fed into supervised machine learning approaches to predict compound properties. The underlying substructure vector embeddings are obtained by training an unsupervised machine learning approach on a so-called corpus of compounds that consists of all available chemical matter. The resulting Mol2vec model is pre-trained once, yields dense vector representations, and overcomes drawbacks of common compound feature representations such as sparseness and bit collisions. The prediction capabilities are demonstrated on several compound property and bioactivity data sets and compared with results obtained for Morgan fingerprints as a reference compound representation. Mol2vec can be easily combined with ProtVec, which employs the same Word2vec concept on protein sequences, resulting in a proteochemometric approach that is alignment-independent and can thus also be easily applied to proteins with low sequence similarity.
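
A rough sketch of the idea (not the authors' published implementation, which builds its own substructure "sentences") is shown below: Morgan substructure identifiers from RDKit play the role of words, a compound is a sentence, gensim's Word2vec learns the substructure vectors, and a compound vector is the sum of its substructure vectors. The SMILES strings, radius, and vector size are illustrative assumptions.

```python
# Sketch of a Mol2vec-like embedding: substructure identifiers as "words",
# molecules as "sentences", compound vector = sum of substructure vectors.
import numpy as np
from gensim.models import Word2Vec
from rdkit import Chem
from rdkit.Chem import AllChem

def mol_to_sentence(smiles: str, radius: int = 1):
    """Return Morgan substructure identifiers (as strings) for one molecule."""
    mol = Chem.MolFromSmiles(smiles)
    info = {}
    AllChem.GetMorganFingerprint(mol, radius, bitInfo=info)
    return [str(identifier) for identifier in info]

# A tiny, invented "corpus of compounds"; the real corpus is far larger.
corpus = [mol_to_sentence(s) for s in ["CCO", "c1ccccc1O", "CC(=O)Oc1ccccc1C(=O)O"]]
model = Word2Vec(corpus, vector_size=100, window=10, min_count=1, sg=1)

def embed_compound(smiles: str) -> np.ndarray:
    """Sum the vectors of all known substructures in the compound."""
    vecs = [model.wv[w] for w in mol_to_sentence(smiles) if w in model.wv]
    return np.sum(vecs, axis=0)

print(embed_compound("CCO").shape)  # (100,)
```

The summed compound vectors could then serve as features for a downstream regressor or classifier, which is the usage pattern the abstract describes.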


Author(s):  
Rohan Pandey ◽  
Vaibhav Gautam ◽  
Ridam Pal ◽  
Harsh Bandhey ◽  
Lovedeep Singh Dhingra ◽  
...  

BACKGROUND The COVID-19 pandemic has uncovered the potential of digital misinformation to shape the health of nations. The deluge of unverified information that spreads faster than the epidemic itself is an unprecedented phenomenon that has put millions of lives in danger. Mitigating this ‘Infodemic’ requires strong health messaging systems that are engaging, vernacular, scalable, and effective, and that continuously learn new patterns of misinformation. OBJECTIVE We created WashKaro, a multi-pronged intervention for mitigating misinformation through conversational AI, machine translation, and natural language processing. WashKaro provides the right information, matched against WHO guidelines through AI, and delivers it in the right format in local languages. METHODS We theorize (i) an NLP-based AI engine that could continuously incorporate user feedback to improve the relevance of information, (ii) bite-sized audio in the local language to improve penetrance in a country with skewed gender literacy ratios, and (iii) conversational yet interactive AI engagement with users towards increased health awareness in the community. RESULTS A total of 5026 people downloaded the app during the study window, of whom 1545 were active users. Our study shows that 3.4 times more females than males engaged with the app in Hindi, that the relevance of AI-filtered news content doubled within 45 days of continuous machine learning, and that the prudence of the integrated AI chatbot “Satya” increased, demonstrating the usefulness of an mHealth platform for mitigating health misinformation. CONCLUSIONS We conclude that a multi-pronged machine learning application delivering vernacular bite-sized audios and conversational AI is an effective approach to mitigating health misinformation. CLINICALTRIAL Not Applicable
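
The abstract does not detail the AI engine. As a hedged illustration of one ingredient such a system could use, the sketch below retrieves the guideline-style snippet closest to a user message with TF-IDF cosine similarity; the snippets and query are invented and are not the authors' content or code.

```python
# Illustrative retrieval of the closest guideline snippet for a user message.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical WHO-style guidance snippets (placeholders).
guideline_snippets = [
    "Wash your hands frequently with soap and water for at least 20 seconds.",
    "Maintain physical distancing of at least one metre from others.",
    "Wear a mask when physical distancing is not possible.",
]

vectorizer = TfidfVectorizer()
snippet_matrix = vectorizer.fit_transform(guideline_snippets)

def best_match(user_message: str) -> str:
    """Return the guideline snippet most similar to the user's message."""
    query = vectorizer.transform([user_message])
    scores = cosine_similarity(query, snippet_matrix)[0]
    return guideline_snippets[scores.argmax()]

print(best_match("how long should I wash my hands?"))
```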


Author(s):  
Timnit Gebru

This chapter discusses the role of race and gender in artificial intelligence (AI). The rapid permeation of AI into society has not been accompanied by a thorough investigation of the sociopolitical issues that cause certain groups of people to be harmed rather than advantaged by it. For instance, recent studies have shown that commercial automated facial analysis systems have much higher error rates for dark-skinned women, while having minimal errors on light-skinned men. Moreover, a 2016 ProPublica investigation uncovered that machine learning–based tools that assess crime recidivism rates in the United States are biased against African Americans. Other studies show that natural language–processing tools trained on news articles exhibit societal biases. While many technical solutions have been proposed to alleviate bias in machine learning systems, a holistic and multifaceted approach must be taken. This includes standardization bodies determining what types of systems can be used in which scenarios, making sure that automated decision tools are created by people from diverse backgrounds, and understanding the historical and political factors that disadvantage certain groups who are subjected to these tools.


Data ◽  
2021 ◽  
Vol 6 (7) ◽  
pp. 71
Author(s):  
Gonçalo Carnaz ◽  
Mário Antunes ◽  
Vitor Beires Nogueira

Criminal investigations collect and analyze the facts related to a crime, from which investigators can deduce evidence to be used in court. It is a multidisciplinary and applied science, which includes interviews, interrogations, evidence collection, preservation of the chain of custody, and other methods and techniques of investigation. These techniques produce both digital and paper documents that have to be carefully analyzed to identify correlations and interactions among suspects, places, license plates, and other entities that are mentioned in the investigation. The computerized processing of these documents is a helping hand to the criminal investigation, as it allows the automatic identification of entities and their relations, some of which are difficult to identify manually. A wide set of dedicated tools exists, but they have a major limitation: they are unable to process criminal reports in the Portuguese language, as an annotated corpus for that purpose does not exist. This paper presents an annotated corpus composed of a collection of anonymized crime-related documents, which were extracted from official and open sources. The dataset was produced as the result of an exploratory initiative to collect crime-related data from websites and conditioned-access police reports. The dataset was evaluated, and a mean precision of 0.808, recall of 0.722, and F1-score of 0.733 were obtained for the classification of the annotated named entities present in the crime-related documents. This corpus can be employed to benchmark Machine Learning (ML) and Natural Language Processing (NLP) methods and tools to detect and correlate entities in the documents. Some examples are sentence detection, named-entity recognition, and identification of terms related to the criminal domain.
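
As an illustration of how entity-level precision, recall, and F1-score of the kind reported above can be computed, the sketch below compares predicted named-entity spans against gold annotations. The toy spans and labels are invented and do not come from the corpus.

```python
# Entity-level evaluation sketch: compare predicted spans to gold annotations.
def prf1(gold: set, pred: set):
    """Precision, recall, and F1 over exact (start, end, label) span matches."""
    tp = len(gold & pred)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical character spans within one document: (start, end, label).
gold = {(0, 10, "PERSON"), (25, 33, "LOCATION"), (40, 48, "PLATE")}
pred = {(0, 10, "PERSON"), (25, 33, "LOCATION"), (52, 60, "DATE")}

p, r, f = prf1(gold, pred)
print(f"precision={p:.3f} recall={r:.3f} f1={f:.3f}")
```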


2021 ◽  
Vol 45 (10) ◽  
Author(s):  
Inés Robles Mendo ◽  
Gonçalo Marques ◽  
Isabel de la Torre Díez ◽  
Miguel López-Coronado ◽  
Francisco Martín-Rodríguez

Abstract: Despite the increasing demand for artificial intelligence research in medicine, the functionalities of its methods in health emergencies remain unclear. Therefore, the authors have conducted this systematic review and global overview study, which aims to identify, analyse, and evaluate the research available on different platforms and its implementations in healthcare emergencies. The methodology applied for the identification and selection of the scientific studies and the different applications consists of two methods. On the one hand, the PRISMA methodology was carried out in Google Scholar, IEEE Xplore, PubMed, ScienceDirect, and Scopus. On the other hand, a review of commercial applications found on the best-known commercial platforms (Android and iOS) was performed. A total of 20 studies were included in this review. Most of the included studies concerned clinical decisions (n = 4, 20%) or medical or emergency services (n = 4, 20%). Only two were focused on m-health (n = 2, 10%). In addition, 12 apps were chosen for full testing on different devices. These apps dealt with pre-hospital medical care (n = 3, 25%) or clinical decision support (n = 3, 25%). In total, half of these apps are based on machine learning using natural language processing. Machine learning is increasingly applicable to healthcare and offers solutions to improve the efficiency and quality of care. With the emergence of mobile health devices and applications that can use data and assess a patient's real-time health, machine learning is a growing trend in the healthcare industry.

