Recent Advances in Electrochemical Biosensors: Applications, Challenges, and Future Scope

Biosensors ◽  
2021 ◽  
Vol 11 (9) ◽  
pp. 336
Author(s):  
Anoop Singh ◽  
Asha Sharma ◽  
Aamir Ahmed ◽  
Ashok K. Sundramoorthy ◽  
Hidemitsu Furukawa ◽  
...  

Electrochemical biosensors are a class of biosensors that convert biological information, such as the concentration of an analyte captured by a biological recognition element (biochemical receptor), into a current or voltage signal. Electrochemical biosensors represent a promising diagnostic technology that can detect biomarkers in body fluids such as sweat, blood, feces, or urine. Combining suitable immobilization techniques with effective transducers gives rise to an efficient biosensor. They have been employed in the food industry, medical sciences, defense, the study of plant biology, etc. When sensing complex structures and entities, a large amount of data is obtained, and it becomes difficult to interpret all of it manually. Machine learning helps in interpreting large sensing datasets. In the case of biosensors, the presence of impurities affects the performance of the sensor, and machine learning helps remove the signals arising from contaminants in order to achieve high sensitivity. In this review, we discuss different types of biosensors along with their applications and the benefits of machine learning. This is followed by a discussion of the challenges, gaps in current knowledge, and possible solutions in the field of electrochemical biosensors. This review aims to serve as a valuable resource for scientists and engineers entering the interdisciplinary field of electrochemical biosensors. Furthermore, it provides insight into the types of electrochemical biosensors, their applications, the importance of machine learning (ML) in biosensing, and the challenges and future outlook.

1999 ◽  
Vol 71 (12) ◽  
pp. 2333-2348 ◽  
Author(s):  
D. R. Thevenot ◽  
K. Tóth ◽  
R. A. Durst ◽  
G. S. Wilson

Two Divisions of the International Union of Pure and Applied Chemistry (IUPAC), namely Physical Chemistry (Commission I.7 on Biophysical Chemistry, formerly the Steering Committee on Biophysical Chemistry) and Analytical Chemistry (Commission V.5 on Electroanalytical Chemistry), have prepared recommendations on the definition, classification and nomenclature related to electrochemical biosensors; these recommendations could, in the future, be extended to other types of biosensors. An electrochemical biosensor is a self-contained integrated device, which is capable of providing specific quantitative or semi-quantitative analytical information using a biological recognition element (biochemical receptor) which is retained in direct spatial contact with an electrochemical transduction element. Because of their ability to be repeatedly calibrated, we recommend that a biosensor should be clearly distinguished from a bioanalytical system, which requires additional processing steps, such as reagent addition. A device which is both disposable after one measurement, i.e., single use, and unable to monitor the analyte concentration continuously or after rapid and reproducible regeneration should be designated a single-use biosensor. Biosensors may be classified according to the biological specificity-conferring mechanism or, alternatively, to the mode of physico-chemical signal transduction. The biological recognition element may be based on a chemical reaction catalysed by, or on an equilibrium reaction with, macromolecules that have been isolated, engineered or are present in their original biological environment. In the latter cases, equilibrium is generally reached and there is no further, if any, net consumption of analyte(s) by the immobilized biocomplexing agent incorporated into the sensor.
Biosensors may be further classified according to the analytes or reactions that they monitor: direct monitoring of analyte concentration or of reactions producing or consuming such analytes; alternatively, an indirect monitoring of inhibitor or activator of the biological recognition element (biochemical receptor) may be achieved. A rapid proliferation of biosensors and their diversity has led to a lack of rigour in defining their performance criteria. Although each biosensor can only truly be evaluated for a particular application, it is still useful to examine how standard protocols for performance criteria may be defined in accordance with standard IUPAC protocols or definitions. These criteria are recommended for authors, referees and educators and include calibration characteristics (sensitivity, operational and linear concentration range, detection and quantitative determination limits), selectivity, steady-state and transient response times, sample throughput, reproducibility, stability and lifetime.
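The calibration characteristics listed in these recommendations (sensitivity, linear range, detection limit) can be illustrated with a short sketch. The calibration points, blank readings, and the common 3-sigma detection-limit convention below are illustrative assumptions, not values taken from the IUPAC text:

```python
# Sketch: deriving basic calibration characteristics from a biosensor
# calibration curve. All data are hypothetical; the 3*sd/slope detection
# limit is one common convention, used here for illustration only.

def linear_fit(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
            sum((xi - mx) ** 2 for xi in x)
    return slope, my - slope * mx

# Hypothetical calibration: analyte concentration (mM) vs. current (uA)
conc    = [0.5, 1.0, 2.0, 4.0, 8.0]
current = [0.9, 2.1, 4.0, 8.2, 15.8]

sensitivity, intercept = linear_fit(conc, current)  # slope = sensitivity

# Detection limit estimated from blank noise: LOD = 3 * sd_blank / slope
blank = [0.10, 0.14, 0.08, 0.12, 0.11]  # repeated blank readings (uA)
mb = sum(blank) / len(blank)
sd_blank = (sum((b - mb) ** 2 for b in blank) / (len(blank) - 1)) ** 0.5
lod = 3 * sd_blank / sensitivity

print(round(sensitivity, 2), round(lod, 3))  # uA/mM, mM
```

The same least-squares slope also anchors the operational and linear concentration ranges: points that deviate systematically from the fitted line mark the ends of the linear range.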


2019 ◽  
Vol 21 (9) ◽  
pp. 662-669 ◽  
Author(s):  
Junnan Zhao ◽  
Lu Zhu ◽  
Weineng Zhou ◽  
Lingfeng Yin ◽  
Yuchen Wang ◽  
...  

Background: Thrombin is the central protease of the vertebrate blood coagulation cascade and is closely related to cardiovascular diseases. The inhibitory constant Ki is the most significant property of thrombin inhibitors.
Method: This study was carried out to predict the Ki values of thrombin inhibitors from a large data set using machine learning methods. Taking advantage of its ability to find non-intuitive regularities in high-dimensional datasets, machine learning can be used to build effective predictive models. A total of 6554 descriptors for each compound were collected, and an efficient descriptor selection method was chosen to find the appropriate descriptors. Four different methods, including multiple linear regression (MLR), K Nearest Neighbors (KNN), Gradient Boosting Regression Tree (GBRT) and Support Vector Machine (SVM), were implemented to build prediction models with the selected descriptors.
Results: The SVM model was the best among these methods, with R2 = 0.84, MSE = 0.55 for the training set and R2 = 0.83, MSE = 0.56 for the test set. Several validation methods, such as the y-randomization test and applicability domain evaluation, were adopted to assess the robustness and generalization ability of the model. The final model shows excellent stability and predictive ability and can be employed for rapid estimation of the inhibitory constant, which is of great help in designing novel thrombin inhibitors.
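The y-randomization test mentioned above can be illustrated in a few lines: refit the model on shuffled response values and confirm that the apparent R2 collapses. This toy uses a single descriptor and a least-squares line rather than the authors' SVM pipeline, and all data are invented for illustration:

```python
# Sketch of a y-randomization check: a sound QSAR model should not be
# reproducible from scrambled responses. Toy data, not the study's.
import random

def fit_and_r2(x, y):
    """Fit y = a*x + b by least squares; return R^2 on the same data."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    b = my - a * mx
    ss_res = sum((yi - (a * xi + b)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

random.seed(0)
x = [i / 10 for i in range(50)]                      # one toy descriptor
y = [2.0 * xi + random.gauss(0, 0.3) for xi in x]    # toy response values

r2_real = fit_and_r2(x, y)

# Shuffle the responses many times and refit; high R^2 on scrambled
# labels would signal chance correlation.
shuffled_r2 = []
for _ in range(100):
    ys = y[:]
    random.shuffle(ys)
    shuffled_r2.append(fit_and_r2(x, ys))

print(r2_real > 0.9, max(shuffled_r2) < 0.5)
```

A large gap between the real-model R2 and every shuffled-label R2 is the outcome the test looks for.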


2020 ◽  
Vol 6 ◽  
Author(s):  
Jaime de Miguel Rodríguez ◽  
Maria Eugenia Villafañe ◽  
Luka Piškorec ◽  
Fernando Sancho Caparrini

Abstract This work presents a methodology for the generation of novel 3D objects resembling wireframes of building types. These result from the reconstruction of interpolated locations within the learnt distribution of variational autoencoders (VAEs), a deep generative machine learning model based on neural networks. The data set used features a scheme for geometry representation based on a ‘connectivity map’ that is especially suited to express the wireframe objects that compose it. Additionally, the input samples are generated through ‘parametric augmentation’, a strategy proposed in this study that creates coherent variations among data by enabling a set of parameters to alter representative features on a given building type. In the experiments that are described in this paper, more than 150,000 input samples belonging to two building types have been processed during the training of a VAE model. The main contribution of this paper has been to explore parametric augmentation for the generation of large data sets of 3D geometries, showcasing its problems and limitations in the context of neural networks and VAEs. Results show that the generation of interpolated hybrid geometries is a challenging task. Despite the difficulty of the endeavour, promising advances are presented.


Energies ◽  
2021 ◽  
Vol 14 (12) ◽  
pp. 3654
Author(s):  
Nastaran Gholizadeh ◽  
Petr Musilek

In recent years, machine learning methods have found numerous applications in power systems for load forecasting, voltage control, power quality monitoring, anomaly detection, etc. Distributed learning is a subfield of machine learning and a descendant of the multi-agent systems field. It is a collaborative, decentralized approach to machine learning designed to handle large data volumes, solve complex learning problems, and increase privacy. Moreover, it can reduce the risk of a single point of failure compared to fully centralized approaches and lower the bandwidth and central storage requirements. This paper introduces three existing distributed learning frameworks and reviews the applications that have been proposed for them in power systems so far. It summarizes the methods, benefits, and challenges of distributed learning frameworks in power systems and identifies gaps in the literature for future studies.
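The privacy property described above can be sketched with a toy federated-averaging pattern: each node fits a model on its own data and only parameters, never raw measurements, reach the aggregator. The "model" here is just a local mean, and all readings and node names are invented for illustration; this is not one of the specific frameworks the paper reviews:

```python
# Toy distributed-learning sketch (federated averaging of parameters).
# Each node keeps its raw smart-meter readings private and shares only
# a fitted parameter plus its sample count. All data are hypothetical.

def local_fit(loads):
    """Each node's 'model' is simply the mean load it observed."""
    return sum(loads) / len(loads)

# Three nodes with private readings (kW); the raw lists are never pooled
node_data = {
    "substation_a": [2.1, 2.4, 2.0, 2.3],
    "substation_b": [3.0, 2.8, 3.1],
    "substation_c": [1.5, 1.7],
}

local_models = {name: local_fit(d) for name, d in node_data.items()}

# Aggregator combines parameters weighted by local sample counts,
# recovering the global mean without seeing any raw reading.
total = sum(len(d) for d in node_data.values())
global_model = sum(local_models[n] * len(d) / total
                   for n, d in node_data.items())

pooled_mean = sum(v for d in node_data.values() for v in d) / total
print(round(global_model, 3), round(pooled_mean, 3))  # identical
```

For a mean, weighted parameter averaging reproduces the centralized result exactly; for nonlinear models the aggregate is an approximation, which is where the challenges the paper surveys arise.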


2021 ◽  
Vol 13 (3) ◽  
pp. 531
Author(s):  
Caiwang Zheng ◽  
Amr Abd-Elrahman ◽  
Vance Whitaker

Measurement of plant characteristics is still the primary bottleneck in both plant breeding and crop management. Rapid and accurate acquisition of information about large plant populations is critical for monitoring plant health and dissecting the underlying genetic traits. In recent years, high-throughput phenotyping technology has benefitted immensely from both remote sensing and machine learning. Simultaneous use of multiple sensors (e.g., high-resolution RGB, multispectral, hyperspectral, chlorophyll fluorescence, and light detection and ranging (LiDAR)) allows a range of spatial and spectral resolutions depending on the trait in question. Meanwhile, computer vision and machine learning methodology have emerged as powerful tools for extracting useful biological information from image data. Together, these tools allow the evaluation of various morphological, structural, biophysical, and biochemical traits. In this review, we focus on the recent development of phenomics approaches in strawberry farming, particularly those utilizing remote sensing and machine learning, with an eye toward future prospects for strawberries in precision agriculture. The research discussed is broadly categorized according to strawberry traits related to (1) fruit/flower detection, fruit maturity, fruit quality, internal fruit attributes, fruit shape, and yield prediction; (2) leaf and canopy attributes; (3) water stress; and (4) pest and disease detection. Finally, we present a synthesis of the potential research opportunities and directions that could further promote the use of remote sensing and machine learning in strawberry farming.


2021 ◽  
Vol 37 (3) ◽  
pp. 585-617
Author(s):  
Teresa Bono ◽  
Karen Croxson ◽  
Adam Giles

Abstract The use of machine learning as an input into decision-making is on the rise, owing to its ability to uncover hidden patterns in large data and improve prediction accuracy. Questions have been raised, however, about the potential distributional impacts of these technologies, with one concern being that they may perpetuate or even amplify human biases from the past. Exploiting detailed credit file data for 800,000 UK borrowers, we simulate a switch from a traditional (logit) credit scoring model to ensemble machine-learning methods. We confirm that machine-learning models are more accurate overall. We also find that they do as well as the simpler traditional model on relevant fairness criteria, where these criteria pertain to overall accuracy and error rates for population subgroups defined along protected or sensitive lines (gender, race, health status, and deprivation). We do observe some differences in the way credit-scoring models perform for different subgroups, but these manifest under a traditional modelling approach and switching to machine learning neither exacerbates nor eliminates these issues. The paper discusses some of the mechanical and data factors that may contribute to statistical fairness issues in the context of credit scoring.
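The subgroup fairness criteria described above, error rates compared across protected groups, can be sketched briefly. The borrower records, group labels, and predictions below are fabricated for illustration and do not come from the study's credit-file data:

```python
# Sketch of a subgroup error-rate comparison: compute false-positive
# and false-negative rates separately for each population subgroup.
# All records below are invented illustrations.

def error_rates(records):
    """False-positive and false-negative rates for a set of records."""
    fp = sum(1 for r in records if r["pred"] == 1 and r["default"] == 0)
    fn = sum(1 for r in records if r["pred"] == 0 and r["default"] == 1)
    neg = sum(1 for r in records if r["default"] == 0)
    pos = sum(1 for r in records if r["default"] == 1)
    return fp / neg if neg else 0.0, fn / pos if pos else 0.0

borrowers = [
    {"group": "A", "default": 0, "pred": 0},
    {"group": "A", "default": 0, "pred": 1},
    {"group": "A", "default": 1, "pred": 1},
    {"group": "A", "default": 1, "pred": 1},
    {"group": "B", "default": 0, "pred": 0},
    {"group": "B", "default": 0, "pred": 0},
    {"group": "B", "default": 1, "pred": 0},
    {"group": "B", "default": 1, "pred": 1},
]

by_group = {}
for g in {r["group"] for r in borrowers}:
    subset = [r for r in borrowers if r["group"] == g]
    by_group[g] = error_rates(subset)

for g in sorted(by_group):
    fpr, fnr = by_group[g]
    print(g, "FPR:", fpr, "FNR:", fnr)
```

A gap in FPR or FNR between groups, as in this toy example, is the kind of disparity such an audit flags, whether the scores come from a logit model or an ensemble.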


Author(s):  
Ihor Ponomarenko ◽  
Oleksandra Lubkovska

The subject of the research is the application of data science methods in the field of health care for integrated data processing and analysis in order to optimize economic and specialized processes. The purpose of this article is to address issues related to the specifics of using Data Science methods in the field of health care on the basis of comprehensive information obtained from various sources. Methodology. The research methodology comprises system-structural and comparative analyses (to study the application of BI systems in the process of working with large data sets); the monographic method (the study of various software solutions in the business intelligence market); and economic analysis (when assessing the possibility of using business intelligence systems to strengthen the competitive position of companies). The scientific novelty lies in identifying the main sources of data on key processes in the medical field. Examples of innovative methods of collecting information in the field of health care, which are becoming widespread in the context of digitalization, are presented. The main sources of health care data used in Data Science are revealed. The specifics of applying machine learning methods in health care under conditions of increasing competition between market participants and growing demand for relevant products from the population are presented. Conclusions. The intensification of the integration of Data Science into the medical field is due to the increase in digitized data (statistics, textual information, visualizations, etc.). Through the use of machine learning methods, doctors and other health professionals have new opportunities to improve the efficiency of the health care system as a whole. Key words: Data science, efficiency, information, machine learning, medicine, Python, healthcare.


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Lei Li ◽  
Desheng Wu

Purpose: The infraction of securities regulations (ISRs) by listed firms in their day-to-day operations and management has become a common problem. This paper proposes several machine learning approaches to forecast the risk of infractions by listed corporates, addressing the lack of effectiveness and precision in current supervision.
Design/methodology/approach: The overall research framework designed for forecasting infractions (ISRs) includes data collection and cleaning, feature engineering, data splitting, application of prediction approaches, and model performance evaluation. We select Logistic Regression, Naïve Bayes, Random Forest, Support Vector Machines, Artificial Neural Networks and Long Short-Term Memory networks (LSTMs) as ISR prediction models.
Findings: The results show that the prediction performance of the proposed models with prior-infraction features provides a significant improvement over those without, especially for large sample sets. The results also indicate that, when judging whether a company has committed infractions, attention should be paid to novel artificial intelligence methods, the company's previous infractions, and large datasets.
Originality/value: The findings can be used to help identify listed corporates' ISRs to a certain degree. Overall, the results elucidate the value of prior infractions of securities regulations (ISRs). This shows the importance of including more data sources when constructing distress models rather than only building increasingly complex models on the same data. This is also beneficial to the regulatory authorities.


2011 ◽  
Vol 16 (9) ◽  
pp. 1059-1067 ◽  
Author(s):  
Peter Horvath ◽  
Thomas Wild ◽  
Ulrike Kutay ◽  
Gabor Csucs

Imaging-based high-content screens often rely on single cell-based evaluation of phenotypes in large data sets of microscopic images. Traditionally, these screens are analyzed by extracting a few image-related parameters and using their ratios (linear single- or multiparametric separation) to classify the cells into various phenotypic classes. In this study, the authors show how machine learning-based classification of individual cells outperforms those classical ratio-based techniques. Using fluorescence intensity and morphological and texture features, they evaluated how the performance of data analysis increases with increasing feature numbers. Their findings are based on a case study involving an siRNA screen monitoring nucleoplasmic and nucleolar accumulation of a fluorescently tagged reporter protein. For the analysis, they developed a complete analysis workflow incorporating image segmentation, feature extraction, cell classification, hit detection, and visualization of the results. For the classification task, the authors established a new graphical framework, the Advanced Cell Classifier, which provides very accurate high-content screen analysis with minimal user interaction, offering access to a variety of advanced machine learning methods.


2021 ◽  
Author(s):  
Yuki KATAOKA

Rationale: Currently available machine learning models for diagnosing COVID-19 based on computed tomography (CT) images are limited by concerns regarding methodological flaws or underlying biases in the evaluation process.
Objectives: We aimed to develop and externally validate a novel machine learning model that can classify CT image findings as positive or negative for SARS-CoV-2 reverse transcription polymerase chain reaction (RT-PCR).
Methods: We used 3128 images from a wide variety of two-gate data sources for the development and ablation study of the machine learning model. A total of 633 COVID-19 cases and 2295 non-COVID-19 cases were included in the study. We randomly divided cases into a development set and an ablation set at a ratio of 8:2. For the ablation study, we used another dataset including 150 cases of interstitial pneumonia among the non-COVID-19 images. For external validation, we used 893 images from 740 consecutive patients suspected of having COVID-19 at the time of diagnosis at 11 acute care hospitals. This dataset included 343 COVID-19 patients. The reference standard was RT-PCR.
Results: In the ablation study using interstitial pneumonia images, the specificity of the model was 0.986 for the usual interstitial pneumonia pattern, 0.820 for the non-specific interstitial pneumonia pattern, and 0.400 for the organizing pneumonia pattern. In the external validation study, the sensitivity and specificity of the model were 0.869 and 0.432, respectively, at the low-level cutoff, and 0.724 and 0.721, respectively, at the high-level cutoff.
Conclusions: Our machine learning model exhibited high sensitivity on the external validation dataset and may assist physicians in ruling out COVID-19 in a timely manner. Further studies are warranted to improve the model's specificity.
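The two operating points reported above (a sensitive low-level cutoff for rule-out, a more specific high-level cutoff for rule-in) can be illustrated with a short sketch. The model scores, labels, and cutoff values below are invented; only the metric definitions follow the abstract:

```python
# Sketch: sensitivity and specificity of a classifier at two cutoffs.
# Scores and labels are hypothetical illustrations, not study data.

def sens_spec(scores, labels, cutoff):
    """Sensitivity and specificity when score >= cutoff means 'positive'."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= cutoff and y == 1)
    fn = sum(1 for s, y in zip(scores, labels) if s < cutoff and y == 1)
    tn = sum(1 for s, y in zip(scores, labels) if s < cutoff and y == 0)
    fp = sum(1 for s, y in zip(scores, labels) if s >= cutoff and y == 0)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical model outputs; 1 = RT-PCR positive (reference standard)
scores = [0.95, 0.80, 0.65, 0.40, 0.30, 0.85, 0.55, 0.20, 0.10, 0.70]
labels = [1,    1,    1,    1,    0,    0,    0,    0,    0,    0]

low_sens, low_spec = sens_spec(scores, labels, cutoff=0.3)    # rule-out
high_sens, high_spec = sens_spec(scores, labels, cutoff=0.7)  # rule-in

print(low_sens, low_spec)   # lower cutoff: higher sensitivity, lower specificity
print(high_sens, high_spec)
```

Lowering the cutoff trades specificity for sensitivity, which is why a high-sensitivity operating point suits ruling a diagnosis out while a high-specificity one suits ruling it in.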

