An Efficient Approach in Analysis of DNA Base Calling Using Neural Fuzzy Model

2017 ◽  
Vol 2017 ◽  
pp. 1-7
Author(s):  
Safa A. Hameed ◽  
Raed I. Hamed

This paper addresses the issue of true representation and provides a reliable measure for analyzing DNA base calling. The implemented method deals with data set quality in DNA sequencing analysis and investigates the use of neuro-fuzzy techniques to predict a confidence value for each base in DNA base calling, based on the data collected for each base. The simulation model, designed with ANFIS, consists of three subsystems and a main system: the three subsystems each produce one feature, and the main system uses these three features to predict the confidence value for each base. The approach achieves effective results with high performance.
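
The abstract does not give the ANFIS rule base or membership functions; the sketch below is only a minimal first-order Sugeno inference of the kind ANFIS learns, combining three subsystem features into one confidence value. The Gaussian membership parameters and linear rule consequents are illustrative placeholders, not the paper's trained values.

```python
import numpy as np

def gaussmf(x, mean, sigma):
    """Gaussian membership function, the form commonly used in ANFIS."""
    return np.exp(-0.5 * ((x - mean) / sigma) ** 2)

def sugeno_confidence(f1, f2, f3):
    """Toy first-order Sugeno inference over three subsystem features.

    Features are assumed normalized to [0, 1]; membership parameters and
    rule consequents are placeholders, not the paper's trained ANFIS values.
    """
    x = np.array([f1, f2, f3])
    low, high = gaussmf(x, 0.2, 0.25), gaussmf(x, 0.8, 0.25)
    grades = np.stack([low, high])                 # shape (2, 3): two fuzzy sets per input

    firing, outputs = [], []
    for bits in range(8):                          # 2**3 = 8 rules
        idx = [(bits >> i) & 1 for i in range(3)]
        w = np.prod([grades[idx[i], i] for i in range(3)])    # rule firing strength
        coeffs = np.array([0.2, 0.3, 0.2]) * (np.array(idx) * 2 - 1)
        outputs.append(coeffs @ x + 0.3)           # first-order (linear) consequent
        firing.append(w)

    firing = np.array(firing)
    return float(firing @ np.array(outputs) / firing.sum())  # weighted average of rules

print(sugeno_confidence(0.9, 0.8, 0.7))            # e.g. a well-resolved base
```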

2016 ◽  
Vol 2 (4) ◽  
pp. 424
Author(s):  
Raed Hamed ◽  
Safa A. Hameed

The paper proposes an efficient approach to DNA base calling that addresses both efficiency and sensitivity. We use a neuro-fuzzy model to predict a confidence value in DNA base calling, developed through several experiments in MATLAB; the model is built on the data collected for each base in the DNA sequence. It is designed with the ANFIS tool and consists of three subsystems and a main system. The three subsystems provide three features (peakness, height, and spacing) for each base, and the main system takes these three features as input to predict the confidence value for each base in the DNA. This achieves high accuracy in the obtained results with high performance.
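
The abstract names the three per-base features (peakness, height, and spacing) but not how they are computed; the snippet below is one plausible way to derive them from a single chromatogram trace around detected peak positions. The exact feature definitions and the synthetic trace are assumptions for illustration, not the paper's procedure.

```python
import numpy as np

def base_features(trace, peak_positions, i):
    """Illustrative peakness/height/spacing features for the i-th called base.

    `trace` is one fluorescence channel of the chromatogram and `peak_positions`
    are the sample indices of the called peaks; the definitions below are
    assumptions, not the ones used in the paper.
    """
    pos = peak_positions[i]
    height = trace[pos]                                  # raw peak amplitude

    # Spacing: distance to neighbouring peaks, relative to the mean peak spacing.
    left = peak_positions[i - 1] if i > 0 else None
    right = peak_positions[i + 1] if i < len(peak_positions) - 1 else None
    gaps = [g for g in (pos - left if left is not None else None,
                        right - pos if right is not None else None) if g is not None]
    spacing = np.mean(gaps) / np.mean(np.diff(peak_positions))

    # Peakness: how sharply the signal falls off around the peak.
    window = trace[max(pos - 3, 0): pos + 4]
    peakness = height / (np.mean(window) + 1e-9)

    return peakness, height, spacing

# Tiny synthetic example: three well-separated peaks.
trace = np.zeros(40)
peaks = [8, 20, 32]
for p in peaks:
    trace[p - 2:p + 3] = [0.3, 0.7, 1.0, 0.7, 0.3]
print(base_features(trace, peaks, 1))
```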


Author(s):  
C. Sauer ◽  
F. Bagusat ◽  
M.-L. Ruiz-Ripoll ◽  
C. Roller ◽  
M. Sauer ◽  
...  

This work aims at the characterization of a modern concrete material. For this purpose, we perform two experimental series of inverse planar plate impact (PPI) tests with the ultra-high performance concrete B4Q, using two different witness plate materials. Hugoniot data in the range of particle velocities from 180 to 840 m/s and stresses from 1.1 to 7.5 GPa is derived from both series. Within the experimental accuracy, they can be seen as one consistent data set. Moreover, we conduct corresponding numerical simulations and find a reasonably good agreement between simulated and experimentally obtained curves. From the simulated curves, we derive numerical Hugoniot results that serve as a homogenized, mean shock response of B4Q and add further consistency to the data set. Additionally, the comparison of simulated and experimentally determined results allows us to identify experimental outliers. Furthermore, we perform a parameter study which shows that a significant influence of the applied pressure-dependent strength model on the derived equation of state (EOS) parameters is unlikely. In order to compare the current results to our own partially reevaluated previous work and selected recent results from literature, we use simulations to numerically extrapolate the Hugoniot results. Considering their inhomogeneous nature, a consistent picture emerges for the shock response of the discussed concrete and high-strength mortar materials. Hugoniot results from this and earlier work are presented for further comparisons. In addition, a full parameter set for B4Q, including validated EOS parameters, is provided for the application in simulations of impact and blast scenarios.
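
The abstract reports Hugoniot states as particle-velocity/stress pairs; as background on that representation, the sketch below evaluates the standard Rankine-Hugoniot momentum balance sigma = rho0 * U_s * u_p together with a linear shock-velocity relation U_s = c0 + s * u_p. The material parameters are generic placeholders, not the fitted B4Q values from the paper.

```python
import numpy as np

# Rankine-Hugoniot momentum balance for a shock running into material at rest:
#   sigma = rho0 * U_s * u_p,   with a linear relation U_s = c0 + s * u_p.
# The parameters below are generic placeholders, NOT the fitted B4Q values.
rho0 = 2400.0      # initial density, kg/m^3 (placeholder for a concrete)
c0 = 2900.0        # bulk sound speed, m/s (placeholder)
s = 1.5            # slope of the U_s-u_p relation (placeholder)

u_p = np.linspace(180.0, 840.0, 5)        # particle velocities covered by the tests, m/s
U_s = c0 + s * u_p                        # shock velocity, m/s
sigma = rho0 * U_s * u_p / 1e9            # longitudinal stress, GPa

for up_i, sig_i in zip(u_p, sigma):
    print(f"u_p = {up_i:6.1f} m/s  ->  sigma = {sig_i:5.2f} GPa")
```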


2021 ◽  
pp. 016555152110184
Author(s):  
Gunjan Chandwani ◽  
Anil Ahlawat ◽  
Gaurav Dubey

Document retrieval plays an important role in knowledge management, as it enables us to discover relevant information in existing data. This article proposes a cluster-based inverted indexing algorithm for document retrieval. First, pre-processing is done to remove unnecessary and redundant words from the documents. Then, the documents are indexed by the cluster-based inverted indexing algorithm, which is developed by integrating the piecewise fuzzy C-means (piFCM) clustering algorithm with inverted indexing. After the documents are indexed, query matching is performed for user queries using the Bhattacharyya distance. Finally, query optimisation is done with the Pearson correlation coefficient, and the relevant documents are retrieved. The performance of the proposed algorithm is analysed on the WebKB and Twenty Newsgroups data sets. The analysis shows that the proposed algorithm offers high performance, with a precision of 1, recall of 0.70 and F-measure of 0.8235. The proposed document retrieval system retrieves the most relevant documents and speeds up the storing and retrieval of information.
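
The clustering and indexing stages are not reproduced here; as a minimal sketch of the query-matching step described in the abstract, the snippet below ranks documents by the Bhattacharyya distance between a query's term distribution and each document's term distribution. The term-frequency vectors and vocabulary are illustrative placeholders.

```python
import numpy as np

def bhattacharyya_distance(p, q, eps=1e-12):
    """Bhattacharyya distance between two discrete distributions."""
    p = np.asarray(p, dtype=float); p /= p.sum()
    q = np.asarray(q, dtype=float); q /= q.sum()
    bc = np.sum(np.sqrt(p * q))               # Bhattacharyya coefficient
    return -np.log(bc + eps)

# Toy term-frequency vectors over a shared vocabulary (illustrative only).
query = [3, 0, 1, 2]
docs = {"doc_a": [4, 0, 1, 3],
        "doc_b": [0, 5, 2, 0]}

# Rank documents by increasing distance to the query (smaller = more relevant).
ranked = sorted(docs, key=lambda d: bhattacharyya_distance(query, docs[d]))
print(ranked)
```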


2018 ◽  
Vol 10 (8) ◽  
pp. 80
Author(s):  
Lei Zhang ◽  
Xiaoli Zhi

Convolutional neural networks (CNNs for short) have made great progress in face detection. They mostly take computation-intensive networks as the backbone in order to obtain high precision, and they cannot achieve a good detection speed without the support of high-performance GPUs (Graphics Processing Units). This limits CNN-based face detection algorithms in real applications, especially speed-dependent ones. To alleviate this problem, we propose a lightweight face detector in this paper, which takes a fast residual network as its backbone. Our method can run fast even on cheap, ordinary GPUs. To guarantee its detection precision, multi-scale features and multi-context are fully exploited in efficient ways. Specifically, feature fusion is first used to obtain semantically strong multi-scale features. Then multi-context, including both local and global context, is added to these multi-scale features without extra computational burden: the local context is added through a depthwise separable convolution based approach, and the global context through simple global average pooling. Experimental results show that our method can run at about 110 fps on VGA (Video Graphics Array)-resolution images, while still maintaining competitive precision on the WIDER FACE and FDDB (Face Detection Data Set and Benchmark) datasets compared with its state-of-the-art counterparts.
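
The exact layer configuration is not given in the abstract; the PyTorch module below is only a rough sketch of the two context mechanisms it describes, adding local context with a depthwise separable convolution and global context with global average pooling. The channel sizes and the simple additive fusion are assumptions.

```python
import torch
import torch.nn as nn

class ContextEnrichment(nn.Module):
    """Illustrative context module: depthwise separable conv for local context,
    global average pooling for global context (layer sizes are assumptions)."""

    def __init__(self, channels: int):
        super().__init__()
        # Depthwise separable convolution = depthwise conv + 1x1 pointwise conv.
        self.local = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, groups=channels),
            nn.Conv2d(channels, channels, kernel_size=1),
        )
        # Global context: squeeze the spatial dims, re-project, broadcast back.
        self.global_ctx = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, kernel_size=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.local(x) + self.global_ctx(x)     # broadcast over H, W

feat = torch.randn(1, 64, 40, 40)         # one multi-scale feature map
print(ContextEnrichment(64)(feat).shape)  # torch.Size([1, 64, 40, 40])
```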


2021 ◽  
Author(s):  
Oliver Stenzel ◽  
Robin Thor ◽  
Martin Hilchenbach

Orbital laser altimeters deliver a plethora of data that is used to map planetary surfaces [1] and to understand the interiors of solar system bodies [2]. The accuracy and precision of laser altimetry measurements depend on the knowledge of spacecraft position and pointing and on the instrument itself. Both are important for the retrieval of tidal parameters. In order to assess the quality of the altimeter retrievals, we are training and implementing an artificial neural network (ANN) to identify scans that yield erroneous data and exclude them from the analysis. The implementation is based on the PyTorch framework [3]. We present our results for the MESSENGER Mercury Laser Altimeter (MLA) data set [4], but also in view of the future analysis of data from the BepiColombo Laser Altimeter (BELA), which will arrive in orbit around Mercury in 2025 on board the Mercury Planetary Orbiter [5,6]. We further explore conventional methods of error identification and compare these with the machine learning results. Short periods of large residuals or large variation of residuals are identified and used to detect erroneous measurements. Furthermore, long-period systematics, such as those caused by slow variations in instrument pointing, can be modelled by including additional parameters.

[1] Zuber, Maria T., David E. Smith, Roger J. Phillips, Sean C. Solomon, Gregory A. Neumann, Steven A. Hauck, Stanton J. Peale, et al. ‘Topography of the Northern Hemisphere of Mercury from MESSENGER Laser Altimetry’. Science 336, no. 6078 (13 April 2012): 217–20. https://doi.org/10.1126/science.1218805.
[2] Thor, Robin N., Reinald Kallenbach, Ulrich R. Christensen, Philipp Gläser, Alexander Stark, Gregor Steinbrügge, and Jürgen Oberst. ‘Determination of the Lunar Body Tide from Global Laser Altimetry Data’. Journal of Geodesy 95, no. 1 (23 December 2020): 4. https://doi.org/10.1007/s00190-020-01455-8.
[3] Paszke, Adam, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, et al. ‘PyTorch: An Imperative Style, High-Performance Deep Learning Library’. Advances in Neural Information Processing Systems 32 (2019): 8026–37.
[4] Cavanaugh, John F., James C. Smith, Xiaoli Sun, Arlin E. Bartels, Luis Ramos-Izquierdo, Danny J. Krebs, Jan F. McGarry, et al. ‘The Mercury Laser Altimeter Instrument for the MESSENGER Mission’. Space Science Reviews 131, no. 1 (1 August 2007): 451–79. https://doi.org/10.1007/s11214-007-9273-4.
[5] Thomas, N., T. Spohn, J.-P. Barriot, W. Benz, G. Beutler, U. Christensen, V. Dehant, et al. ‘The BepiColombo Laser Altimeter (BELA): Concept and Baseline Design’. Planetary and Space Science 55, no. 10 (1 July 2007): 1398–1413. https://doi.org/10.1016/j.pss.2007.03.003.
[6] Benkhoff, Johannes, Jan van Casteren, Hajime Hayakawa, Masaki Fujimoto, Harri Laakso, Mauro Novara, Paolo Ferri, Helen R. Middleton, and Ruth Ziethe. ‘BepiColombo—Comprehensive Exploration of Mercury: Mission Overview and Science Goals’. Planetary and Space Science 58, no. 1 (1 January 2010): 2–20. https://doi.org/10.1016/j.pss.2009.09.020.
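
The abstract above does not specify the network architecture or input features of the scan classifier; the following is a minimal PyTorch sketch of a binary classifier over per-scan summary statistics, purely to illustrate the kind of model meant. The feature count, layer sizes, and training step are assumptions.

```python
import torch
import torch.nn as nn

# Minimal sketch of a scan-quality classifier (architecture and inputs are
# assumptions; the abstract only states that a PyTorch ANN flags bad scans).
class ScanClassifier(nn.Module):
    def __init__(self, n_features: int = 8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 32), nn.ReLU(),
            nn.Linear(32, 16), nn.ReLU(),
            nn.Linear(16, 1),            # logit: erroneous vs. usable scan
        )

    def forward(self, x):
        return self.net(x)

model = ScanClassifier()
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One illustrative training step on random per-scan summary features
# (e.g. residual mean/variance); real features would come from the MLA scans.
x = torch.randn(64, 8)
y = torch.randint(0, 2, (64, 1)).float()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
print(float(loss))
```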


2013 ◽  
Vol 06 (02) ◽  
pp. 165-174 ◽  
Author(s):  
Omniyah G. Mohammed ◽  
Khaled T. Assaleh ◽  
Ghaleb A. Husseini ◽  
Amin F. Majdalawieh ◽  
Scott R. Woodward

2021 ◽  
Vol 4 ◽  
Author(s):  
Stefano Markidis

Physics-Informed Neural Networks (PINNs) are neural networks that encode the problem's governing equations, such as Partial Differential Equations (PDEs), as part of the network itself. PINNs have emerged as an essential new tool for solving various challenging problems, including computing linear systems arising from PDEs, a task for which several traditional methods exist. In this work, we focus first on evaluating the potential of PINNs as linear solvers in the case of the Poisson equation, an omnipresent equation in scientific computing. We characterize PINN linear solvers in terms of accuracy and performance under different network configurations (depth, activation functions, input data set distribution). We highlight the critical role of transfer learning. Our results show that low-frequency components of the solution converge quickly as an effect of the F-principle. In contrast, an accurate solution of the high frequencies requires an exceedingly long time. To address this limitation, we propose integrating PINNs into traditional linear solvers. We show that this integration leads to new solvers whose accuracy and performance are on par with other high-performance solvers, such as PETSc conjugate gradient linear solvers. Overall, while accuracy and computational performance are still limiting factors for the direct use of PINN linear solvers, hybrid strategies combining traditional linear solver approaches with emerging deep-learning techniques are among the most promising methods for developing a new class of linear solvers.
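
As a minimal sketch of the core PINN idea for a 1D Poisson problem -u''(x) = f(x), the snippet below builds the PDE residual loss with automatic differentiation in PyTorch. The network size, collocation points, and manufactured forcing term are illustrative assumptions, not the configurations studied in the paper.

```python
import torch
import torch.nn as nn

# Minimal PINN sketch for the 1D Poisson equation -u''(x) = f(x) on (0, 1)
# with u(0) = u(1) = 0. Network size and training settings are illustrative.
net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(),
                    nn.Linear(32, 32), nn.Tanh(),
                    nn.Linear(32, 1))
f = lambda x: (torch.pi ** 2) * torch.sin(torch.pi * x)   # manufactured forcing term

x = torch.rand(128, 1, requires_grad=True)                # interior collocation points
xb = torch.tensor([[0.0], [1.0]])                         # boundary points

optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(2000):
    optimizer.zero_grad()
    u = net(x)
    du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    d2u = torch.autograd.grad(du, x, torch.ones_like(du), create_graph=True)[0]
    residual = -d2u - f(x)                                 # PDE residual
    loss = residual.pow(2).mean() + net(xb).pow(2).mean()  # PDE loss + boundary loss
    loss.backward()
    optimizer.step()
print(float(loss))   # the exact solution here is u(x) = sin(pi * x)
```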


2009 ◽  
Author(s):  
Minyou Chen ◽  
Yongjian Wan ◽  
Fan Wu ◽  
Kaigui Xie ◽  
Mingyu Wang ◽  
...  

2020 ◽  
pp. 865-874
Author(s):  
Enrico Santus ◽  
Tal Schuster ◽  
Amir M. Tahmasebi ◽  
Clara Li ◽  
Adam Yala ◽  
...  

PURPOSE: Literature on clinical note mining has highlighted the superiority of machine learning (ML) over hand-crafted rules. Nevertheless, most studies assume the availability of large training sets, which is rarely the case. For this reason, in the clinical setting, rules are still common. We suggest 2 methods to leverage the knowledge encoded in pre-existing rules to inform ML decisions and obtain high performance, even with scarce annotations.
METHODS: We collected 501 prostate pathology reports from 6 American hospitals. Reports were split into 2,711 core segments, annotated with 20 attributes describing the histology, grade, extension, and location of tumors. The data set was split by institution to generate a cross-institutional evaluation setting. We assessed 4 systems, namely a rule-based approach, an ML model, and 2 hybrid systems integrating the previous methods: a Rule as Feature model and a Classifier Confidence model. Several ML algorithms were tested, including logistic regression (LR), support vector machine (SVM), and eXtreme gradient boosting (XGB).
RESULTS: When training on data from a single institution, LR lags behind the rules by 3.5% (F1 score: 92.2% v 95.7%). Hybrid models, instead, obtain competitive results, with Classifier Confidence outperforming the rules by +0.5% (96.2%). When a larger amount of data from multiple institutions is used, LR improves by +1.5% over the rules (97.2%), whereas hybrid systems obtain +2.2% for Rule as Feature (97.7%) and +2.6% for Classifier Confidence (98.3%). Replacing LR with SVM or XGB yielded similar performance gains.
CONCLUSION: We developed methods to use pre-existing hand-crafted rules to inform ML algorithms. These hybrid systems obtain better performance than either rules or ML models alone, even when training data are limited.
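
The abstract describes two hybrid designs; the snippet below is a minimal sketch of the Rule as Feature idea only, where the output of a hand-crafted rule is appended to bag-of-words features before logistic regression. The rule, the toy segments, and the labels are placeholders, not the paper's pathology pipeline or data.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy "Rule as Feature" sketch: a hand-crafted rule output is appended to the
# text features so the classifier can learn when to trust it. The rule,
# segments and labels below are placeholders, not the paper's pathology data.
segments = ["gleason score 3+4=7 present", "no tumor seen in this core",
            "adenocarcinoma gleason 4+4=8", "benign prostatic tissue"]
labels = np.array([1, 0, 1, 0])                      # e.g. a "tumor present" attribute

rule = lambda text: int("gleason" in text)           # pre-existing hand-crafted rule
X_text = CountVectorizer().fit_transform(segments).toarray()
X_rule = np.array([[rule(s)] for s in segments])
X = np.hstack([X_text, X_rule])                      # rule output as an extra feature

clf = LogisticRegression().fit(X, labels)
print(clf.predict(X))
```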

