Optimizing clinical trials recruitment via deep learning

2019 ◽  
Vol 26 (11) ◽  
pp. 1195-1202 ◽  
Author(s):  
Jelena Gligorijevic ◽  
Djordje Gligorijevic ◽  
Martin Pavlovski ◽  
Elizabeth Milkovits ◽  
Lucas Glass ◽  
...  

Abstract Objective Clinical trials, prospective research studies on human participants carried out by a distributed team of clinical investigators, play a crucial role in the development of new treatments in health care. This is a complex and expensive process in which investigators aim to enroll volunteers with predetermined characteristics, administer treatment(s), and collect safety and efficacy data. Choosing top-enrolling investigators is therefore essential for efficient clinical trial execution and is one of the primary drivers of drug development cost. Materials and Methods To facilitate clinical trial optimization, we propose DeepMatch (DM), a novel approach that builds on advances in deep learning. DM is designed to learn from both investigator- and trial-related heterogeneous data sources and to rank investigators based on their expected enrollment performance on new clinical trials. Results A large-scale evaluation conducted on 2618 studies provides evidence that the proposed ranking-based framework improves the current state of the art by up to 19% on ranking investigators and up to 10% on detecting top/bottom performers when recruiting investigators for new clinical trials. Discussion The extensive experimental section suggests that DM can provide substantial improvement over current industry standards in several regards: (1) the enrollment potential of the investigator list, (2) the time it takes to generate the list, and (3) data-informed decisions about new investigators. Conclusion Given the significance of the problem, related research efforts are set to shift the paradigm of how investigators are chosen for clinical trials, thereby optimizing and automating trial execution and reducing the cost of new therapies.
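
As a rough illustration of the ranking formulation described above, the sketch below scores investigator-trial feature vectors with a small network and trains it with a pairwise margin ranking loss, so that investigators who enrolled more volunteers are scored above those who enrolled fewer for the same trial. The feature construction, network sizes, and loss choice are assumptions for illustration, not the DeepMatch architecture itself.

```python
# A minimal PyTorch sketch of a pairwise learning-to-rank setup for
# investigator selection. All sizes and features are illustrative assumptions.
import torch
import torch.nn as nn

class PairScorer(nn.Module):
    def __init__(self, n_features=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 128), nn.ReLU(),
            nn.Linear(128, 1))

    def forward(self, x):                  # x: (batch, n_features) investigator+trial features
        return self.net(x).squeeze(-1)     # higher score = higher expected enrollment

scorer = PairScorer()
loss_fn = nn.MarginRankingLoss(margin=1.0)

# toy batch: for the same trial, `better` enrolled more volunteers than `worse`
better, worse = torch.randn(32, 64), torch.randn(32, 64)
target = torch.ones(32)                    # "the first input should rank higher"
loss = loss_fn(scorer(better), scorer(worse), target)
loss.backward()
print(float(loss))
```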

1982 ◽  
Vol 382 (1 Sudden Corona) ◽  
pp. 411-422 ◽  
Author(s):  
Robert I. Levy ◽  
Edward J. Sondik

2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Bangtong Huang ◽  
Hongquan Zhang ◽  
Zihong Chen ◽  
Lingling Li ◽  
Lihua Shi

Deep learning algorithms face limitations in virtual reality applications due to memory cost, computational cost, and real-time constraints. Models with rigorous performance may suffer from enormous parameter counts and large-scale structures, making them hard to port onto embedded devices. In this paper, inspired by GhostNet, we propose an efficient structure, ShuffleGhost, which exploits the redundancy in feature maps to reduce the cost of computation while addressing some drawbacks of GhostNet. GhostNet suffers from the high computational cost of convolution in the Ghost module and shortcut, and its restriction on downsampling makes it difficult to apply the Ghost module and Ghost bottleneck to other backbones. This paper proposes three new kinds of ShuffleGhost structures to tackle these drawbacks. The ShuffleGhost module and ShuffleGhost bottlenecks use the shuffle layer and group convolution from ShuffleNet, and they are designed to redistribute the feature maps concatenated from the Ghost feature maps and the primary feature maps, eliminating the gap between them and extracting the features. An SENet layer is then adopted to reduce the computation cost of group convolution, as well as to evaluate the importance of the concatenated Ghost and primary feature maps and assign them proper weights. Experiments show that ShuffleGhostV3 has fewer trainable parameters and FLOPs while maintaining accuracy, and that with proper design it can be more efficient on both the GPU and CPU side.
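
To make the building blocks concrete, the sketch below combines a Ghost-style module (an expensive primary convolution plus a cheap depthwise convolution that generates "ghost" features) with a ShuffleNet-style channel shuffle over the concatenated feature maps. The channel ratio, group count, and layer sizes are illustrative assumptions, not the exact ShuffleGhost design.

```python
# A minimal PyTorch sketch of a Ghost-style module followed by a channel shuffle.
import torch
import torch.nn as nn

def channel_shuffle(x, groups):
    # Rearrange channels so information mixes across groups (ShuffleNet idea).
    n, c, h, w = x.size()
    x = x.view(n, groups, c // groups, h, w)
    x = x.transpose(1, 2).contiguous()
    return x.view(n, c, h, w)

class GhostShuffleBlock(nn.Module):
    def __init__(self, in_ch, out_ch, ratio=2, groups=4):
        super().__init__()
        primary_ch = out_ch // ratio            # expensive primary features
        ghost_ch = out_ch - primary_ch          # cheap "ghost" features
        self.primary = nn.Sequential(
            nn.Conv2d(in_ch, primary_ch, 1, bias=False),
            nn.BatchNorm2d(primary_ch), nn.ReLU(inplace=True))
        # Cheap depthwise convolution generates ghost features from the primary ones.
        self.cheap = nn.Sequential(
            nn.Conv2d(primary_ch, ghost_ch, 3, padding=1,
                      groups=primary_ch, bias=False),
            nn.BatchNorm2d(ghost_ch), nn.ReLU(inplace=True))
        self.groups = groups

    def forward(self, x):
        primary = self.primary(x)
        ghost = self.cheap(primary)
        out = torch.cat([primary, ghost], dim=1)
        # Shuffle so primary and ghost channels are interleaved before the next layer.
        return channel_shuffle(out, self.groups)

x = torch.randn(1, 16, 32, 32)
print(GhostShuffleBlock(16, 32)(x).shape)   # torch.Size([1, 32, 32, 32])
```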


Symmetry ◽  
2020 ◽  
Vol 12 (5) ◽  
pp. 705
Author(s):  
Po-Chou Shih ◽  
Chun-Chin Hsu ◽  
Fang-Chih Tien

Silicon wafers are the most crucial material in the semiconductor manufacturing industry. Owing to limited resources, reclaiming monitor and dummy wafers for reuse can dramatically lower costs and become a competitive edge in this industry. However, defects such as voids, scratches, particles, and contamination are found on the surfaces of reclaimed wafers. Most reclaimed wafers with an asymmetric distribution of defects, known as the “good (G)” reclaimed wafers, can be re-polished if their defects are not irreversible and their thicknesses are sufficient for re-polishing. Currently, the “no good (NG)” reclaimed wafers must first be screened by experienced human inspectors, who determine their re-usability through defect mapping. This screening task is tedious, time-consuming, and unreliable. This study presents a deep-learning-based approach for classifying reclaimed wafer defects. Three neural networks, the multilayer perceptron (MLP), convolutional neural network (CNN), and residual network (ResNet), are adopted and compared for classification. These networks analyze the defect-mapping pattern and determine not only whether a reclaimed wafer is suitable for re-polishing but also which defect category it belongs to. The open-source TensorFlow library was used to train the MLP, CNN, and ResNet networks with collected wafer images as input data. The experimental results show that a CNN with a proper design of kernels and structure identified defective wafers quickly and accurately owing to its deep learning capability, that ResNet exhibited excellent accuracy on average, and that large-scale MLP networks also achieved good results with proper network structures.
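
As a concrete starting point, the sketch below shows a small CNN classifier for wafer defect maps in TensorFlow/Keras, the library named in the abstract. The input resolution, layer widths, and number of defect classes are illustrative assumptions rather than the architectures evaluated in the study.

```python
# A minimal TensorFlow/Keras sketch of a small CNN for wafer defect-map classification.
import tensorflow as tf

NUM_CLASSES = 5  # assumption: e.g. void, scratch, particle, contamination, re-polishable

model = tf.keras.Sequential([
    tf.keras.Input(shape=(128, 128, 1)),              # grayscale defect map (assumed size)
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# Training would then be: model.fit(train_images, train_labels, epochs=..., validation_data=...)
```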


2021 ◽  
Author(s):  
Grace P. Ahlqvist ◽  
Catherine P. McGeough ◽  
Chris Senanayake ◽  
Joseph D. Armstrong ◽  
Ajay Yadaw ◽  
...  

Molnupiravir (MK-4482, EIDD-2801) is a promising orally bioavailable drug candidate for treatment of COVID-19. Herein we describe a supply-centered and chromatography-free synthesis of molnupiravir from cytidine, consisting of two steps: a selective enzymatic acylation followed by transamination to yield the final drug product. Both steps have been successfully performed on decagram scale: the first step at 200 g, and the second step at 80 g. Overall, molnupiravir has been obtained in a 41% overall isolated yield, compared to a maximum 17% isolated yield in the patented route. This route provides many advantages over the initial route described in the patent literature and would decrease the cost of this pharmaceutical should it prove safe and efficacious in ongoing clinical trials.


2021 ◽  
Author(s):  
Werner Rammer ◽  
Rupert Seidl

In times of rapid global change, the ability to faithfully predict the development of vegetation at larger scales is of key relevance to society. However, ecosystem models that incorporate enough process understanding to be applicable under future and non-analog conditions are often restricted to finer spatial scales due to data and computational constraints. Recent breakthroughs in machine learning, particularly in the field of deep learning, allow bridging this scale mismatch by providing new means for analyzing data, e.g., in remote sensing, but also new modelling approaches. We here present a novel approach for Scaling Vegetation Dynamics (SVD), which uses a deep neural network for predicting large-scale vegetation development. In a first step, the network learns a representation of vegetation dynamics as a function of the current vegetation state and environmental drivers from process-based models and empirical data. The trained model is then used within a dynamic simulation on large spatial scales. In this contribution we introduce the conceptual approach of SVD and show results for example applications in Europe and the US. More broadly, we discuss aspects of applying deep learning in the context of ecological modelling.
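
The sketch below illustrates the core idea as described: a network maps the current vegetation state plus environmental drivers to a distribution over next states, which a large-scale simulation can then sample cell by cell at each time step. The state encoding, driver variables, and layer sizes are assumptions for illustration, not the actual SVD implementation.

```python
# A minimal PyTorch sketch of a learned vegetation-state transition model.
import torch
import torch.nn as nn

N_STATES = 50        # assumption: number of discrete vegetation states
N_DRIVERS = 8        # assumption: e.g. temperature, precipitation, soil variables

transition_net = nn.Sequential(
    nn.Linear(N_STATES + N_DRIVERS, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, N_STATES),            # logits over the next vegetation state
)

def simulate_step(state_onehot, drivers):
    """One simulation step: sample the next state from the learned transition model."""
    logits = transition_net(torch.cat([state_onehot, drivers], dim=-1))
    probs = torch.softmax(logits, dim=-1)
    return torch.multinomial(probs, num_samples=1)

state = torch.zeros(1, N_STATES); state[0, 3] = 1.0   # one cell currently in state 3
drivers = torch.randn(1, N_DRIVERS)                    # standardized environmental drivers
print(simulate_step(state, drivers))                   # index of the sampled next state
```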


2020 ◽  
Vol 2020 ◽  
pp. 1-11
Author(s):  
Tangbo Bai ◽  
Jianwei Yang ◽  
Lixiang Duan ◽  
Yanxue Wang

Monitoring large-scale mechanical equipment involves information of various kinds and quantities, and existing research on multisensor information fusion may face problems of information conflicts and modeling complexity. This paper proposes an analysis method combining correlation analysis and deep learning. According to the characteristics of the monitoring data, three types of correlation coefficients between sensors in different states are obtained, and a new composite correlation analytical matrix is established to fuse the multisource heterogeneous data. The matrix represents fault feature information of different equipment states and supports subsequent image generation. Meanwhile, a convolutional neural network-based deep learning method is developed to process the matrix and to discover the relationship between the results and the equipment states for fault diagnosis. To verify the method, experimental and field case studies are performed. The results show that it can accurately identify fault states and has higher diagnostic efficiency and accuracy than traditional methods.
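
A rough sketch of the data-fusion step is given below: correlation coefficients are computed between every pair of sensor signals and stacked into a composite matrix that a CNN can consume as a multi-channel image. Which three coefficients the paper uses and how they are combined are assumptions here; Pearson, Spearman, and Kendall are used purely as an example.

```python
# A minimal sketch of building a composite correlation matrix from multisensor signals.
import numpy as np
from scipy import stats

def composite_correlation(signals):
    """signals: array of shape (n_sensors, n_samples) for one equipment state."""
    n = signals.shape[0]
    comp = np.zeros((3, n, n))
    for i in range(n):
        for j in range(n):
            comp[0, i, j] = stats.pearsonr(signals[i], signals[j])[0]
            comp[1, i, j] = stats.spearmanr(signals[i], signals[j])[0]
            comp[2, i, j] = stats.kendalltau(signals[i], signals[j])[0]
    return comp  # a 3-channel "image" that a CNN classifier can take as input

signals = np.random.randn(8, 1024)           # 8 sensors, 1024 samples each (toy data)
print(composite_correlation(signals).shape)  # (3, 8, 8)
```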


2020 ◽  
Vol 9 (5) ◽  
pp. 293 ◽  
Author(s):  
Wouter B. Verschoof-van der Vaart ◽  
Karsten Lambers ◽  
Wojtek Kowalczyk ◽  
Quentin P.J. Bourgeois

This paper presents WODAN2.0, a workflow using deep learning for the automated detection of multiple archaeological object classes in LiDAR data from the Netherlands. WODAN2.0 is developed to rapidly and systematically map archaeology in large and complex datasets. To investigate its practical value, a large, random test dataset (alongside a small, non-random dataset) was developed, which better represents the real-world situation of scarce archaeological objects in different types of complex terrain. To reduce the number of false positives caused by specific regions in the research area, a novel approach called Location-Based Ranking has been developed and implemented. Experiments show that WODAN2.0 achieves a performance of circa 70% for barrows and Celtic fields on the small, non-random test dataset, while performance on the large, random test dataset is lower: circa 50% for barrows, circa 46% for Celtic fields, and circa 18% for charcoal kilns. The results show that the introduction of Location-Based Ranking and bagging improves performance by between 17% and 35%. However, WODAN2.0 does not reach or exceed general human performance when compared to the results of a citizen science project conducted in the same research area.
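
The sketch below conveys the general idea of re-ranking detections with a location prior to suppress false positives from specific regions. The actual Location-Based Ranking procedure in WODAN2.0 may work differently; the per-region prior weights and terrain classes here are purely hypothetical.

```python
# A hedged sketch of re-ranking detections by a hypothetical per-region prior.
from dataclasses import dataclass

# assumption: prior plausibility of each object class per terrain type (hypothetical values)
LOCATION_PRIOR = {
    ("barrow", "forest"): 1.0, ("barrow", "arable"): 0.4,
    ("celtic_field", "forest"): 1.0, ("celtic_field", "arable"): 0.3,
    ("charcoal_kiln", "forest"): 1.0, ("charcoal_kiln", "arable"): 0.1,
}

@dataclass
class Detection:
    cls: str        # predicted object class
    score: float    # detector confidence
    terrain: str    # terrain type at the detection location

def location_based_rank(detections):
    """Re-rank detections by confidence weighted with a per-region prior."""
    return sorted(detections,
                  key=lambda d: d.score * LOCATION_PRIOR.get((d.cls, d.terrain), 0.5),
                  reverse=True)

dets = [Detection("barrow", 0.9, "arable"), Detection("charcoal_kiln", 0.6, "forest")]
for d in location_based_rank(dets):
    print(d.cls, d.terrain, d.score)
```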


2019 ◽  
Vol 20 (1) ◽  
Author(s):  
Asma Ben Abacha ◽  
Dina Demner-Fushman

Abstract Background One of the challenges in large-scale information retrieval (IR) is developing fine-grained and domain-specific methods to answer natural language questions. Despite the availability of numerous sources and datasets for answer retrieval, Question Answering (QA) remains a challenging problem due to the difficulty of the question understanding and answer extraction tasks. One of the promising tracks investigated in QA is mapping new questions to formerly answered questions that are “similar”. Results We propose a novel QA approach based on Recognizing Question Entailment (RQE) and we describe the QA system and resources that we built and evaluated on real medical questions. First, we compare logistic regression and deep learning methods for RQE using different kinds of datasets including textual inference, question similarity, and entailment in both the open and clinical domains. Second, we combine IR models with the best RQE method to select entailed questions and rank the retrieved answers. To study the end-to-end QA approach, we built the MedQuAD collection of 47,457 question-answer pairs from trusted medical sources which we introduce and share in the scope of this paper. Following the evaluation process used in TREC 2017 LiveQA, we find that our approach exceeds the best results of the medical task with a 29.8% increase over the best official score. Conclusions The evaluation results support the relevance of question entailment for QA and highlight the effectiveness of combining IR and RQE for future QA efforts. Our findings also show that relying on a restricted set of reliable answer sources can bring a substantial improvement in medical QA.
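
To illustrate one of the two RQE method families compared in the paper, the sketch below trains a feature-based logistic-regression entailment classifier on question pairs. The lexical-overlap features and toy training pairs are assumptions for illustration, not the study's actual feature set or data.

```python
# A minimal sketch of a feature-based logistic-regression baseline for
# Recognizing Question Entailment (RQE).
import numpy as np
from sklearn.linear_model import LogisticRegression

def overlap_features(q_new, q_answered):
    a, b = set(q_new.lower().split()), set(q_answered.lower().split())
    inter = len(a & b)
    return [inter / max(len(a), 1),          # coverage of the new question
            inter / max(len(a | b), 1),      # Jaccard similarity
            abs(len(a) - len(b))]            # length difference

# toy pairs: 1 = the previously answered question entails an answer to the new one
pairs = [("what are symptoms of diabetes", "what are the symptoms of type 2 diabetes", 1),
         ("what are symptoms of diabetes", "how is a broken arm treated", 0)]
X = np.array([overlap_features(q1, q2) for q1, q2, _ in pairs])
y = np.array([label for _, _, label in pairs])

clf = LogisticRegression().fit(X, y)
print(clf.predict_proba([overlap_features("signs of diabetes", "symptoms of diabetes")]))
```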


Symmetry ◽  
2020 ◽  
Vol 12 (12) ◽  
pp. 1939
Author(s):  
Jun Wei Chen ◽  
Xanno K. Sigalingging ◽  
Jenq-Shiou Leu ◽  
Jun-Ichi Takada

In recent years, Chinese has become one of the most popular languages globally, and the demand for automatic Chinese sentence correction has gradually increased. Such a system can be applied to Chinese language learning to reduce the cost of learning and feedback time, and to help writers check for wrong words. The traditional way to correct Chinese sentences is to check whether each word exists in a predefined dictionary; however, this kind of method cannot deal with semantic errors. As deep learning becomes popular, an artificial neural network can be applied to understand a sentence's context and correct semantic errors. However, many issues still need to be addressed: for example, the accuracy and the computation time required to correct a sentence are still lacking, so it may not yet be practical to adopt deep-learning-based Chinese sentence correction in large-scale commercial applications. Our goal is to obtain a model with better accuracy and computation time. By combining a recurrent neural network with Bidirectional Encoder Representations from Transformers (BERT), a recently popular model known for its high performance but slow inference speed, we introduce a hybrid model for Chinese sentence correction that improves both accuracy and inference speed. Among the results, BERT-GRU obtained the highest BLEU score in all experiments. Compared with the original transformer-based model, inference speed improved by 1131% with beam-search decoding in the 128-word experiment and by 452% with greedy decoding; the longer the sequence, the larger the improvement.
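
A minimal sketch of this kind of hybrid is shown below: a pretrained BERT encoder supplies contextual token representations and a lightweight GRU produces per-token predictions. The model name, hidden sizes, and token-level correction head are assumptions for illustration, not the paper's exact BERT-GRU architecture.

```python
# A minimal PyTorch sketch of a BERT encoder feeding a GRU layer.
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

class BertGRUCorrector(nn.Module):
    def __init__(self, bert_name="bert-base-chinese", hidden=256):
        super().__init__()
        self.bert = BertModel.from_pretrained(bert_name)
        self.gru = nn.GRU(self.bert.config.hidden_size, hidden,
                          batch_first=True, bidirectional=True)
        # predict a (possibly corrected) character for every input position
        self.out = nn.Linear(2 * hidden, self.bert.config.vocab_size)

    def forward(self, input_ids, attention_mask):
        enc = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        h, _ = self.gru(enc.last_hidden_state)
        return self.out(h)          # (batch, seq_len, vocab) logits

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
batch = tokenizer(["今天天气很好"], return_tensors="pt")
model = BertGRUCorrector()
logits = model(batch["input_ids"], batch["attention_mask"])
print(logits.shape)
```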


2021 ◽  
Vol 17 (3) ◽  
pp. 1-33
Author(s):  
Beilun Wang ◽  
Jiaqi Zhang ◽  
Yan Zhang ◽  
Meng Wang ◽  
Sen Wang

Recently, the Internet of Things (IoT) has received significant interest due to its rapid development, but IoT applications still face two challenges: the heterogeneity and the large scale of IoT data. How to efficiently integrate and process these complicated data therefore becomes an essential problem. In this article, we focus on analyzing the variable dependencies of data collected from different edge devices in an IoT network. Because data from different devices are heterogeneous and variable dependencies can be characterized by a graphical model, we focus on jointly estimating multiple, high-dimensional, sparse Gaussian graphical models for many related tasks (edge devices). This is an important goal in many fields: many IoT networks have collected massive multi-task data and require the analysis of heterogeneous data in many scenarios. Past works on joint estimation are non-distributed and involve computationally expensive and complex non-smooth optimizations. To address these problems, we propose a novel approach, Multi-FST, which can be efficiently implemented on a cloud-server-based IoT network: the cloud server carries a low computational load, and IoT devices communicate asynchronously with the server, leading to efficiency. Multi-FST shows significant improvement over baselines when tested on various datasets.
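
For context, the sketch below fits a sparse Gaussian graphical model per task (edge device) with the graphical lasso, i.e., the non-joint, single-task baseline that joint methods like Multi-FST aim to improve upon. The data shapes and regularization strength are illustrative assumptions; Multi-FST's distributed joint estimator is not reproduced here.

```python
# A minimal sketch: per-task sparse inverse-covariance estimation with graphical lasso.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
# assumption: 3 devices (tasks), each with 200 samples of 10 sensor variables
tasks = [rng.normal(size=(200, 10)) for _ in range(3)]

precisions = []
for X in tasks:
    model = GraphicalLasso(alpha=0.1).fit(X)       # sparse inverse covariance
    precisions.append(model.precision_)

# nonzero off-diagonal entries of the precision matrix indicate conditional
# dependencies between variables for that device
for k, P in enumerate(precisions):
    nnz = np.sum(np.abs(P) > 1e-6) - P.shape[0]
    print(f"task {k}: {nnz} nonzero off-diagonal entries")
```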

