INSTRUCTION TOOLS FOR SIGNAL PROCESSING AND MACHINE LEARNING FOR ION-CHANNEL SENSORS

Ion-channel sensors have several applications, including DNA sequencing, biothreat detection, and various medical applications. Ion-channel sensors mimic the selective transport mechanism of cell membranes and can detect a wide range of analytes at the molecular level. Analytes are sensed through changes in signal patterns. Papers in the literature have described different methods for ion-channel signal analysis. In this paper, we describe a series of new graphical tools for ion-channel signal analysis which can be used for research and education. The paper focuses on the utility of these tools in biosensor classes. Teaching signal processing and machine learning for ion-channel sensors is challenging because of the multidisciplinary content and the diverse student backgrounds, which include physics, chemistry, biology and engineering. The paper describes graphical ion-channel analysis tools developed for an on-line simulation environment called J-DSP. The tools are integrated and assessed in a graduate biosensor course through computer laboratory exercises.
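
As a simple illustration of the kind of signal-pattern analysis such tools support (a minimal sketch with assumed parameters, not part of the J-DSP modules described in the paper), a single-channel current trace can be segmented into open and closed events with a half-amplitude threshold and the dwell times collected:

```python
import numpy as np

# --- Simulate a noisy two-state (open/closed) single-channel current trace ---
# All parameters below are illustrative assumptions, not values from the paper.
rng = np.random.default_rng(0)
fs = 10_000                      # sampling rate in Hz (assumed)
n = 20_000                       # number of samples (2 s of data)
state = np.zeros(n)              # 0 = closed, 1 = open
p_switch = 0.001                 # probability of switching state per sample
for i in range(1, n):
    state[i] = 1 - state[i - 1] if rng.random() < p_switch else state[i - 1]
current = 5.0 * state + rng.normal(0.0, 0.8, n)   # 5 pA open level plus noise

# --- Threshold-based event detection (half-amplitude criterion) ---
threshold = 2.5                  # pA, midway between closed and open levels
is_open = current > threshold
edges = np.flatnonzero(np.diff(is_open.astype(int)))          # state transitions
durations = np.diff(np.concatenate(([0], edges, [n]))) / fs   # dwell times in s

print(f"Detected {len(edges)} transitions")
print(f"Mean dwell time: {durations.mean() * 1e3:.1f} ms")
```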

IEEE Access, 2020, Vol. 8, pp. 177782-177803
Author(s): Muhammad Wasimuddin, Khaled Elleithy, Abdel-Shakour Abuzneid, Miad Faezipour, Omar Abuzaghleh

2021, Vol. 4 (13), pp. 01-14
Author(s): S. M. Debbal

Clinical analysis of the electromyogram is a powerful tool for the diagnosis of neuromuscular diseases. Therefore, the detection and analysis of electromyogram signals have attracted much attention over the years. Several methods based on modern signal processing techniques, such as temporal analysis and spectro-temporal analysis, have been investigated for electromyogram signal treatment. However, many of these analysis methods are not highly successful because of the complexity and non-stationarity of the signal. The aim of this study is to analyse EMG signals using nonlinear analysis. This analysis can provide a wide range of information related to the type of signal (normal or pathological).
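
The abstract does not specify which nonlinear measures are applied; sample entropy is one widely used regularity measure for biomedical signals. The sketch below is a minimal, illustrative implementation applied to synthetic EMG-like data, not code from the study:

```python
import numpy as np

def sample_entropy(x, m=2, r_factor=0.2):
    """Sample entropy of a 1-D signal: a common nonlinear regularity measure.
    m is the embedding dimension, r_factor scales the tolerance by the signal std."""
    x = np.asarray(x, dtype=float)
    r = r_factor * np.std(x)

    def count_matches(m):
        # Build all embedding vectors of length m and count pairs whose
        # Chebyshev distance is below the tolerance r (excluding self-matches).
        emb = np.array([x[i:i + m] for i in range(len(x) - m + 1)])
        count = 0
        for i in range(len(emb) - 1):
            dist = np.max(np.abs(emb[i + 1:] - emb[i]), axis=1)
            count += np.sum(dist < r)
        return count

    B, A = count_matches(m), count_matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

# Illustrative use on a synthetic "EMG-like" burst (noise modulated by an envelope).
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 2000)
emg_like = rng.normal(0, 1, t.size) * np.exp(-((t - 0.5) ** 2) / 0.02)
print(f"Sample entropy: {sample_entropy(emg_like):.3f}")
```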


2015, Vol. 2015, pp. 1-7
Author(s): Hao Lin, Wei Chen

In cells, ion channels are one of the most important classes of membrane proteins, allowing inorganic ions to move across the membrane. A wide range of biological processes is regulated by the opening and closing of ion channels. Ion channels can be divided into numerous classes, and different types of ion channels exhibit different functions. Thus, the correct identification of ion channels and their types using computational methods will provide in-depth insights into their function in various biological processes. In this review, we briefly introduce and discuss the recent progress in ion channel prediction using machine learning methods.
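
Many of the predictors surveyed in such reviews follow a common pipeline: represent each protein by sequence-derived features (for example, amino acid composition) and train a classifier on labelled examples. The sketch below illustrates this generic pipeline with scikit-learn; the toy sequences, labels, and choice of a support vector machine are assumptions for illustration, not the method of any particular study:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def aa_composition(seq):
    """20-dimensional amino acid composition feature vector for a protein sequence."""
    seq = seq.upper()
    return np.array([seq.count(a) / len(seq) for a in AMINO_ACIDS])

# Toy example: a handful of made-up sequences labelled 1 (ion channel) or 0 (not).
sequences = ["MKTLLVAGLLALLSA", "GGSSGGKKRRAE", "MLLKKAVVGGTT", "PPQQNNDDSSEE"]
labels = [1, 0, 1, 0]

X = np.vstack([aa_composition(s) for s in sequences])
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X, labels)

print(clf.predict([aa_composition("MKKLLAVGGTTLLSA")]))
```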


2021, Vol. 1 (1), pp. 30-45
Author(s): Siti Nashayu Omar

This paper reviews the application of digital signal processing (DSP) and machine learning (ML) to electromyography (EMG), as reported in previous studies. DSP and ML are needed in EMG studies to reduce signal noise and to classify the signal according to its characteristics. Common signal processing techniques are discussed and compared to identify the best techniques for processing raw EMG data into analysable EMG signals. Several types of machine learning are then discussed to identify which give the best performance in EMG signal identification and characterization, in terms of accuracy and efficiency. Combining digital signal processing for signal analysis with machine learning for classification is intended to provide the best overall method for EMG signal classification.
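
A typical EMG processing chain of the kind compared in such reviews combines a band-pass filter, time-domain feature extraction, and a classifier. The sketch below (with assumed cut-off frequencies, features, and a random forest classifier chosen for illustration) shows one minimal version of that chain:

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.ensemble import RandomForestClassifier

def preprocess(emg, fs=1000.0, low=20.0, high=450.0):
    """Band-pass filter raw EMG to suppress motion artefacts and high-frequency noise."""
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, emg)

def features(emg):
    """Classic time-domain EMG features: mean absolute value, RMS, zero crossings."""
    return np.array([
        np.mean(np.abs(emg)),
        np.sqrt(np.mean(emg ** 2)),
        np.sum(np.diff(np.sign(emg)) != 0),
    ])

# Toy training set: synthetic "low activity" vs "high activity" windows.
rng = np.random.default_rng(2)
windows = [rng.normal(0, amp, 1000) for amp in (0.1,) * 10 + (1.0,) * 10]
labels = [0] * 10 + [1] * 10
X = np.vstack([features(preprocess(w)) for w in windows])

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, labels)
print(clf.predict([features(preprocess(rng.normal(0, 1.0, 1000)))]))
```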


2021, Vol. 1 (1)
Author(s): Blanes de Oliveira LA

The oil and gas sector seeks to adapt to the changes of Industry 4.0. Advances in computational processing and artificial intelligence have allowed machines to perform increasingly complex activities. However, the application of these advances to the activities of the oil industry still involves much speculation. While some areas show clear gains from the implementation of machine learning, the exploration and characterization of reservoirs still represent a challenge in this respect. As the primary information acquired from reservoirs, such as rock and fluid samples, well logs, and seismic data, spans a wide range of scales, the real gain from machine learning techniques would likely come from integrating different databases at different scales. Such integration would improve geological and production models. The dispersion of information across these databases also has the potential to decrease exploratory success. The joint efforts of oil and gas companies and research and education institutions will be essential to advance the oil and gas industry.


Author(s): W.J. de Ruijter, Peter Rez, David J. Smith

Digital computers are becoming widely recognized as standard accessories for electron microscopy. Due to instrumental innovations, the emphasis in digital processing is shifting from off-line manipulation of electron micrographs to on-line image acquisition, analysis and microscope control. An on-line computer leads to better utilization of the instrument and, moreover, the flexibility of software control creates the possibility of a wide range of novel experiments, for example those based on temporally and spatially resolved acquisition of images or microdiffraction patterns. The instrumental resolution in electron microscopy is often restricted by a combination of specimen movement, radiation damage and improper microscope adjustment (where the settings of focus, objective lens astigmatism and especially beam alignment are most critical). We are investigating the possibility of proper microscope alignment based on computer-induced tilt of the electron beam. Image details corresponding to specimen spacings larger than ∼20 Å are produced mainly through amplitude contrast; an analysis based on geometric optics indicates that beam tilt causes a simple image displacement. Higher resolution detail is characterized by wave propagation through the optical system of the microscope, and we find that beam tilt results in a dispersive image displacement, i.e. the displacement varies with spacing. This approach is valid for weak phase objects (such as amorphous thin films), where transfer is simply described by a linear filter (the phase contrast transfer function), and for crystalline materials, where imaging is described in terms of dynamical scattering and non-linear imaging theory. In both cases beam tilt introduces image artefacts.
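
One common way to express the displacement behaviour described above under the weak-phase-object approximation (a sketch of the standard result, not necessarily the authors' exact formulation) gives the image shift of a spatial-frequency component q for a small beam tilt τ in terms of the defocus Δf, the spherical-aberration coefficient Cs, and the electron wavelength λ:

```latex
% Image shift of a spatial-frequency component q under a small beam tilt tau
% (weak-phase-object approximation; illustrative form, not the authors' derivation).
\[
  \boldsymbol{\delta}(\mathbf{q}) \;\approx\;
  \bigl(\Delta f + C_{s}\,\lambda^{2}\,|\mathbf{q}|^{2}\bigr)\,\boldsymbol{\tau}
\]
% For small |q| (amplitude contrast) the C_s term is negligible and the shift reduces
% to the q-independent displacement Delta f * tau; at higher resolution the |q|^2 term
% makes the displacement dispersive, i.e. it varies with the spacing being imaged.
```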


2018
Author(s): Sherif Tawfik, Olexandr Isayev, Catherine Stampfl, Joseph Shapter, David Winkler, ...

Materials constructed from different van der Waals two-dimensional (2D) heterostructures offer a wide range of benefits, but these systems have been little studied because of their experimental and computational complexity and because of the very large number of possible combinations of 2D building blocks. The simulation of the interface between two different 2D materials is computationally challenging due to the lattice mismatch problem, which sometimes necessitates the creation of very large simulation cells for performing density-functional theory (DFT) calculations. Here we use a combination of DFT, linear regression and machine learning techniques in order to rapidly determine the interlayer distance between two different 2D materials stacked in a bilayer heterostructure, as well as the band gap of the bilayer. Our work provides an excellent proof of concept by quickly and accurately predicting a structural property (the interlayer distance) and an electronic property (the band gap) for a large number of hybrid 2D materials. This work paves the way for rapid computational screening of the vast parameter space of van der Waals heterostructures to identify new hybrid materials with useful and interesting properties.
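
A minimal sketch of the regression step (with invented descriptors and target values, not the features or data used in the paper) might map simple monolayer properties of the two constituents to a bilayer property such as the band gap:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Toy feature vectors for bilayer heterostructures: [lattice constant A (Å),
# lattice constant B (Å), monolayer gap A (eV), monolayer gap B (eV)].
# The descriptors and target values below are illustrative only.
X = np.array([
    [3.16, 3.19, 1.80, 1.55],
    [3.16, 3.32, 1.80, 1.10],
    [2.46, 3.16, 0.00, 1.80],
    [3.32, 3.19, 1.10, 1.55],
])
y_gap = np.array([1.20, 0.85, 0.30, 0.95])        # hypothetical bilayer band gaps (eV)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y_gap)
print(model.predict([[3.19, 3.32, 1.55, 1.10]]))  # predicted gap for a new pair
```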


2020
Author(s): Sina Faizollahzadeh Ardabili, Amir Mosavi, Pedram Ghamisi, Filip Ferdinand, Annamaria R. Varkonyi-Koczy, ...

Several outbreak prediction models for COVID-19 are being used by officials around the world to make informed decisions and enforce relevant control measures. Among the standard models for COVID-19 global pandemic prediction, simple epidemiological and statistical models have received more attention from authorities, and they are popular in the media. Due to a high level of uncertainty and a lack of essential data, standard models have shown low accuracy for long-term prediction. Although the literature includes several attempts to address this issue, the essential generalization and robustness abilities of existing models need to be improved. This paper presents a comparative analysis of machine learning and soft computing models to predict the COVID-19 outbreak as an alternative to SIR and SEIR models. Among the wide range of machine learning models investigated, two showed promising results: the multi-layered perceptron (MLP) and the adaptive network-based fuzzy inference system (ANFIS). Based on the results reported here, and due to the highly complex nature of the COVID-19 outbreak and the variation in its behavior from nation to nation, this study suggests machine learning as an effective tool for modeling the outbreak. The paper provides an initial benchmarking to demonstrate the potential of machine learning for future research and further suggests that real novelty in outbreak prediction can be realized by integrating machine learning and SEIR models.
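
As a minimal illustration of the MLP approach (synthetic data and an arbitrary network size, not the authors' configuration), an MLP regressor can be fitted to map elapsed days onto the logarithm of cumulative case counts and then extrapolated:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Illustrative cumulative case counts over 15 days (synthetic, not real data).
days = np.arange(15).reshape(-1, 1)
cases = 50 * np.exp(0.25 * days.ravel()) + np.random.default_rng(3).normal(0, 20, 15)

# Fit an MLP to the log of the counts, which makes the growth curve easier to learn.
mlp = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
mlp.fit(days, np.log(np.clip(cases, 1, None)))

future = np.arange(15, 20).reshape(-1, 1)
print(np.exp(mlp.predict(future)))  # rough extrapolation of cumulative cases
```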


2021, Vol. 15
Author(s): Alhassan Alkuhlani, Walaa Gad, Mohamed Roushdy, Abdel-Badeeh M. Salem

Background: Glycosylation is one of the most common post-translational modifications (PTMs) in organism cells. It plays important roles in several biological processes, including cell-cell interaction, protein folding, antigen recognition, and immune response. In addition, glycosylation is associated with many human diseases such as cancer, diabetes, and coronavirus infections. The experimental techniques for identifying glycosylation sites require extensive laboratory work and are time-consuming and expensive. Therefore, computational intelligence techniques are becoming very important for glycosylation site prediction. Objective: This paper is a theoretical discussion of the technical aspects of applying computational intelligence (e.g., artificial intelligence and machine learning) to digital bioinformatics research and intelligent biocomputing. Computational intelligence techniques have shown efficient results for predicting N-linked, O-linked and C-linked glycosylation sites. In the last two decades, many studies have been conducted on glycosylation site prediction using these techniques. In this paper, we analyze and compare a wide range of intelligent techniques from these studies across multiple aspects. The current challenges and difficulties facing software developers and knowledge engineers in predicting glycosylation sites are also included. Method: The comparison between these studies covers many criteria, such as databases, feature extraction and selection, machine learning classification methods, evaluation measures, and performance results. Results and conclusions: Many challenges and problems are presented. Consequently, more efforts are needed to obtain more accurate prediction models for the three basic types of glycosylation sites.
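
A minimal sketch of the common prediction setup discussed in such studies (with made-up sequence windows and labels, not data or code from any reviewed work) encodes a fixed-length window around each candidate site and trains a classifier:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def encode_window(window):
    """One-hot encode a fixed-length sequence window centred on a candidate site."""
    vec = np.zeros((len(window), len(AMINO_ACIDS)))
    for i, aa in enumerate(window.upper()):
        if aa in AMINO_ACIDS:
            vec[i, AMINO_ACIDS.index(aa)] = 1.0
    return vec.ravel()

# Toy 9-residue windows centred on candidate N-glycosylation residues, with
# made-up labels (1 = glycosylated, 0 = not). Real predictors use curated databases.
windows = ["AKTNSSGLV", "GGLNATKKE", "PQRNGPWED", "LLVNKPQRS"]
labels = [1, 1, 0, 0]

X = np.vstack([encode_window(w) for w in windows])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
print(clf.predict([encode_window("TKSNVSALG")]))
```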

