Multiple target drug cocktail design for attacking the core network markers of four cancers using ligand-based and structure-based virtual screening methods

2015 ◽  
Vol 8 (S4) ◽  
Author(s):  
Yung-Hao Wong ◽  
Chih-Lung Lin ◽  
Ting-Shou Chen ◽  
Chien-An Chen ◽  
Pei-Shin Jiang ◽  
...  
2018 ◽  
Vol 10 (1) ◽  
pp. 190
Author(s):  
Ulfa Ivonie ◽  
Arry Yanuar ◽  
Firdayani

Objective: This study performed a virtual screening of the Indonesian Herbal Database for core protein allosteric modulators of the hepatitis B virus (HBV) using the AutoDock and AutoDock Vina software, with the aim of discovering novel, safe drugs for patients. Methods: The method was validated using the enrichment factor (EF), receiver operating characteristic, and area under the curve (AUC) parameters. The grid box size used in virtual screening with AutoDock was 55 × 55 × 55, giving an EF10% of 0.7652 and an AUC of 0.6709, whereas that used in virtual screening with AutoDock Vina was 20.625 × 20.625 × 20.625, giving an EF5% of 0.5075 and an AUC of 0.7832. Results: The top 10 compounds from virtual screening with AutoDock, at ΔG values of −11.74 to −10.31 kcal/mol, were yuehchukene, lansionic acid, stigmast-4-en-3-one, myrtillin, sanggenol O, lanosterol, erycristagallin, alpha-spinasterol, cyanidin 3-arabinoside, and cathasterone; those from AutoDock Vina, at ΔG values of −12.1 to −10.7 kcal/mol, were sanggenol O, cucumerin A, yuehchukene, palmarumycin CP1, dehydrocycloguanandin, myrtillin, liriodenine, myricetin 3-alpha-L-arabinopyranoside, myricetin 3-galactoside, and cassameridine. Conclusion: Three compounds, myrtillin, sanggenol O, and yuehchukene, appeared in the top list of both virtual screening methods against the HBV core protein (Cp) allosteric modulator and are promising candidates for further investigation as anti-HBV agents.
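The validation metrics mentioned above (EF at a fixed top fraction, and ROC AUC) are straightforward to compute from a ranked docking list. The sketch below is illustrative only, assuming hypothetical docking energies and activity labels rather than the study's actual data.

```python
# Minimal sketch of the enrichment factor (EF) and ROC AUC metrics used to
# validate a docking-based virtual screen. Scores and labels are illustrative;
# lower (more negative) docking energies are treated as better.
import numpy as np
from sklearn.metrics import roc_auc_score

def enrichment_factor(scores, labels, fraction=0.10):
    """EF at a given top fraction: hit rate in the top of the ranked list
    divided by the hit rate in the whole library."""
    scores = np.asarray(scores)
    labels = np.asarray(labels)
    order = np.argsort(scores)                 # ascending: best (most negative) first
    n_top = max(1, int(round(fraction * len(scores))))
    hits_top = labels[order][:n_top].sum()
    hits_all = labels.sum()
    return (hits_top / n_top) / (hits_all / len(scores))

# Hypothetical docking energies (kcal/mol) and activity labels (1 = active).
scores = np.array([-12.1, -11.4, -10.9, -9.8, -9.5, -8.7, -8.2, -7.9, -7.5, -6.8])
labels = np.array([1, 0, 1, 0, 0, 1, 0, 0, 0, 0])

print("EF10%:", enrichment_factor(scores, labels, 0.10))
# roc_auc_score expects higher = more likely active, so negate docking energies.
print("AUC  :", roc_auc_score(labels, -scores))
```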


2020 ◽  
Vol 0 (0) ◽  
Author(s):  
Bakhe Nleya ◽  
Philani Khumalo ◽  
Andrew Mutsvangwa

Heterogeneous IoT-enabled networks generally accommodate both jitter-tolerant and jitter-intolerant traffic. Optical Burst Switched (OBS) backbone networks handle the resultant volumes of such traffic by transmitting it in large aggregated chunks called bursts. Because buffering capabilities within the core network are absent or limited, burst contentions may frequently occur and thus affect the overall supportable quality of service (QoS). Burst contention in the core network is generally characterized by frequent burst losses as well as differential delays, especially when traffic levels surge. Burst contention can be resolved in the core network by way of partial buffering using fiber delay lines (FDLs), wavelength conversion using wavelength converters (WCs), or deflection routing. In this paper, we assume that burst contention is resolved by deflecting contending bursts to other, less congested paths, even though this may lead to differential delays incurred by bursts as they traverse the network. This contributes to undesirable jitter that may ultimately compromise overall QoS. Noting that jitter is mostly caused by deflection routing, which itself results from poor wavelength and route assignment, the paper proposes a controlled deflection routing (CDR) and wavelength assignment scheme that allows the deflection of bursts to alternate paths only after preset controller buffer thresholds are surpassed. In this way, bursts (or burst fragments) intended for a common destination are most likely to be routed on the same or least-cost path end-to-end. We describe the scheme and compare its performance to existing approaches. Overall, both analytical and simulation results show that the proposed scheme lowers both congestion on deflection routes and jitter, thereby also improving throughput.
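The core idea, deflect only once a preset buffer occupancy threshold is exceeded, can be sketched in a few lines. This is an illustrative toy model under assumed names and thresholds, not the authors' CDR implementation.

```python
# Illustrative sketch (not the authors' implementation) of controlled deflection:
# a contending burst is deflected to an alternate path only once the controller's
# buffer occupancy for the primary path crosses a preset threshold; otherwise it
# is buffered and kept on the least-cost path.
from dataclasses import dataclass, field
from collections import deque

@dataclass
class CdrController:
    buffer_limit: int = 64          # controller/FDL buffer capacity (bursts)
    deflect_threshold: float = 0.8  # preset occupancy threshold
    buffer: deque = field(default_factory=deque)

    def route_burst(self, burst_id: str, primary_free: bool) -> str:
        """Return the action taken for a burst arriving at the core node."""
        if primary_free:
            return f"{burst_id}: forward on primary (least-cost) path"
        occupancy = len(self.buffer) / self.buffer_limit
        if occupancy < self.deflect_threshold:
            self.buffer.append(burst_id)    # hold the burst, keep the same path
            return f"{burst_id}: buffered (occupancy {occupancy:.2f})"
        return f"{burst_id}: deflected to alternate path (threshold exceeded)"

ctrl = CdrController(buffer_limit=4, deflect_threshold=0.75)
for i in range(6):
    print(ctrl.route_burst(f"burst-{i}", primary_free=False))
```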


2021 ◽  
pp. 0271678X2110029
Author(s):  
Mitsouko van Assche ◽  
Elisabeth Dirren ◽  
Alexia Bourgeois ◽  
Andreas Kleinschmidt ◽  
Jonas Richiardi ◽  
...  

After stroke restricted to the primary motor cortex (M1), it is uncertain whether network reorganization associated with recovery involves the peri-infarct or more remote regions. We studied 16 patients with focal M1 stroke and hand paresis. Motor function and resting-state MRI functional connectivity (FC) were assessed at three time points: acute (<10 days), early subacute (3 weeks), and late subacute (3 months). FC correlates of recovery were investigated at three spatial scales: (i) ipsilesional non-infarcted M1, (ii) the core motor network (M1, premotor cortex (PMC), supplementary motor area (SMA), and primary somatosensory cortex), and (iii) the extended motor network including all regions structurally connected to the upper limb representation of M1. Hand dexterity was impaired only in the acute phase (P = 0.036). At the small spatial scale, clinical recovery was more frequently associated with connections involving ipsilesional non-infarcted M1 (odds ratio = 6.29; P = 0.036). At the larger scale, recovery correlated with increased FC strength in the core network compared to the extended motor network (rho = 0.71; P = 0.006). These results suggest that FC changes associated with motor improvement involve the perilesional M1 and do not extend beyond the core motor network. Core motor regions, and more specifically ipsilesional non-infarcted M1, could hence become primary targets for restorative therapies.
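For readers unfamiliar with the measure, resting-state FC between regions is typically quantified as the correlation between ROI-averaged BOLD time series. The following is a schematic sketch on toy data, not the authors' pipeline.

```python
# Schematic sketch (toy data, not the authors' pipeline) of resting-state
# functional connectivity: Pearson correlation between ROI-averaged BOLD
# time series for a small "core motor network" of four regions.
import numpy as np

rng = np.random.default_rng(0)
regions = ["M1", "PMC", "SMA", "S1"]          # core motor network ROIs
timeseries = rng.standard_normal((4, 200))    # 4 ROIs x 200 time points (toy data)

fc = np.corrcoef(timeseries)                  # 4 x 4 FC matrix

# Mean FC strength within the network (upper triangle, excluding the diagonal).
iu = np.triu_indices(len(regions), k=1)
print("Mean core-network FC strength:", fc[iu].mean().round(3))
```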


2018 ◽  
Author(s):  
Shengchao Liu ◽  
Moayad Alnammi ◽  
Spencer S. Ericksen ◽  
Andrew F. Voter ◽  
Gene E. Ananiev ◽  
...  

Virtual (computational) high-throughput screening provides a strategy for prioritizing compounds for experimental screens, but the choice of virtual screening algorithm depends on the dataset and evaluation strategy. We consider a wide range of ligand-based machine learning and docking-based approaches for virtual screening on two protein-protein interactions, PriA-SSB and RMI-FANCM, and present a strategy for choosing which algorithm is best for prospective compound prioritization. Our workflow identifies a random forest as the best algorithm for these targets over more sophisticated neural network-based models. The top 250 predictions from our selected random forest recover 37 of the 54 active compounds from a library of 22,434 new molecules assayed on PriA-SSB. We show that virtual screening methods that perform well in public datasets and synthetic benchmarks, like multi-task neural networks, may not always translate to prospective screening performance on a specific assay of interest.
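The prioritization step itself, train a random forest on fingerprints and rank an unscreened library, is shown below as a minimal sketch. The fingerprints are random stand-ins for real molecular fingerprints; sizes and hyperparameters are assumptions, not the paper's settings.

```python
# Minimal sketch of ligand-based prioritization with a random forest, in the
# spirit of the workflow described above. Fingerprints are random stand-ins
# for real 1024-bit fingerprints; all sizes and settings are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)

# Training data: binary fingerprints with measured activity labels.
X_train = rng.integers(0, 2, size=(2000, 1024))
y_train = rng.integers(0, 2, size=2000)

# New screening library to prioritize.
X_library = rng.integers(0, 2, size=(22434, 1024))

model = RandomForestClassifier(n_estimators=500, n_jobs=-1, random_state=0)
model.fit(X_train, y_train)

# Rank the library by predicted probability of activity and keep the top 250.
scores = model.predict_proba(X_library)[:, 1]
top_250 = np.argsort(scores)[::-1][:250]
print("Indices of top-ranked compounds:", top_250[:10])
```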


2020 ◽  
Author(s):  
Fergus Imrie ◽  
Anthony R. Bradley ◽  
Charlotte M. Deane

An essential step in the development of virtual screening methods is the use of established sets of actives and decoys for benchmarking and training. However, the decoy molecules in commonly used sets are biased, meaning that methods often exploit these biases to separate actives and decoys, rather than learning how to perform molecular recognition. This fundamental issue prevents generalisation and hinders virtual screening method development. We have developed a deep learning method (DeepCoy) that generates decoys to a user's preferred specification in order to remove such biases or construct sets with a defined bias. We validated DeepCoy using two established benchmarks, DUD-E and DEKOIS 2.0. For all DUD-E targets and 80 of the 81 DEKOIS 2.0 targets, our generated decoy molecules more closely matched the active molecules' physicochemical properties while introducing no discernible additional risk of false negatives. The DeepCoy decoys improved the Deviation from Optimal Embedding (DOE) score by an average of 81% and 66%, respectively, decreasing from 0.163 to 0.032 for DUD-E and from 0.109 to 0.038 for DEKOIS 2.0. Further, the generated decoys are harder to distinguish than the original decoy molecules via docking with AutoDock Vina, with virtual screening performance falling from an AUC ROC of 0.71 to 0.63. The code is available at https://github.com/oxpig/DeepCoy. Generated molecules can be downloaded from http://opig.stats.ox.ac.uk/resources.
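The property-matching criterion behind decoy quality can be illustrated with a small check of the kind commonly used, comparing basic physicochemical descriptors of an active against candidate decoys. This is not the DeepCoy model itself; the SMILES strings are arbitrary examples.

```python
# Not the DeepCoy model: a small property-matching check of the kind used to
# judge decoy quality, comparing basic RDKit descriptors of an active against
# candidate decoys. SMILES strings are illustrative examples only.
from rdkit import Chem
from rdkit.Chem import Descriptors

def props(smiles):
    mol = Chem.MolFromSmiles(smiles)
    return {
        "MW": Descriptors.MolWt(mol),
        "logP": Descriptors.MolLogP(mol),
        "HBD": Descriptors.NumHDonors(mol),
        "HBA": Descriptors.NumHAcceptors(mol),
    }

active = "CC(=O)Oc1ccccc1C(=O)O"                    # aspirin, stand-in active
decoys = ["c1ccccc1C(=O)OC", "CCOC(=O)c1ccccc1O"]   # candidate decoys

ref = props(active)
for smi in decoys:
    p = props(smi)
    # Good decoys match the active's bulk properties as closely as possible.
    diff = {k: round(abs(p[k] - ref[k]), 2) for k in ref}
    print(smi, "property deltas vs active:", diff)
```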


Author(s):  
Magnus Olsson ◽  
Shabnam Sultana ◽  
Stefan Rommer ◽  
Lars Frid ◽  
Catherine Mulligan

Author(s):  
Peyakunta Bhargavi ◽  
Singaraju Jyothi

The present moment demands the convergence of cloud computing, fog computing, machine learning, and IoT to explore new technological solutions. Fog computing is an emerging architecture intended to alleviate the network burden on the cloud and the core network by moving resource-intensive functionalities such as computation, communication, storage, and analytics closer to the end users. Machine learning is a subfield of computer science and a type of artificial intelligence (AI) that provides machines with the ability to learn without explicit programming. IoT devices can make decisions and take actions autonomously, based on algorithmic sensing used to acquire sensor data. These embedded capabilities will range across the entire spectrum of algorithmic approaches that are associated with machine learning. Here the authors explore how machine learning methods have been used for object detection and text detection in images, and how they can be incorporated to better meet the requirements of fog computing.
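The fog-versus-cloud placement decision described above can be sketched as a simple policy: run latency-sensitive, lightweight inference (e.g., text or object detection) at the fog node, and offload heavier work to the cloud when resources or deadlines allow. All names and thresholds below are assumptions for illustration.

```python
# Illustrative sketch (names and thresholds are assumptions) of a fog-computing
# placement decision: lightweight inference runs at the fog node near the
# device, while heavier analytics are offloaded to the cloud / core network.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    cpu_demand: float       # normalized CPU required (0..1)
    latency_budget_ms: int  # end-to-end deadline

def place_task(task: Task, fog_cpu_free: float, cloud_rtt_ms: int) -> str:
    """Decide whether a task runs on the fog node or is offloaded to the cloud."""
    fits_fog = task.cpu_demand <= fog_cpu_free
    cloud_meets_deadline = cloud_rtt_ms < task.latency_budget_ms
    if fits_fog:
        return f"{task.name}: run on fog node (low latency)"
    if cloud_meets_deadline:
        return f"{task.name}: offload to cloud (insufficient fog resources)"
    return f"{task.name}: degrade or queue locally (cloud RTT misses deadline)"

for t in [Task("text-detection", 0.2, 50), Task("model-retraining", 0.9, 5000)]:
    print(place_task(t, fog_cpu_free=0.5, cloud_rtt_ms=80))
```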


2015 ◽  
Vol 36 (6) ◽  
pp. 2161-2173 ◽  
Author(s):  
Wei He ◽  
Marta I. Garrido ◽  
Paul F. Sowman ◽  
Jon Brock ◽  
Blake W. Johnson
