DESIGN OF BIOMETRIC PROTECTION AUTHENTIFICATION SYSTEM BASED ON K-AVERAGE METHOD

2021 ◽  
Vol 12 (4) ◽  
pp. 85-95
Author(s):  
Yaroslav Voznyi ◽  
Mariia Nazarkevych ◽  
Volodymyr Hrytsyk ◽  
Nataliia Lotoshynska ◽  
Bohdana Havrysh

The paper considers a method of biometric identification designed to protect confidential information and proposes a machine-learning approach to classifying biometric prints. One variant of solving the biometric image identification problem, based on the k-means algorithm, is given. Labeled data samples were created for the training and testing processes. Fingerprint data, drawn from the freely available NIST Special Database 302, were used to establish identity: a new fingerprint scan belonging to a particular person is compared with the data stored for that person, and if the measurements match, the person is considered identified. Machine learning was performed on a number of samples from this known biometric database, and verification/testing was performed with samples from the same database that were not included in the training data set; the learning outcomes were shown. Experimental results indicate that k-means is a promising approach to fingerprint classification. The development of biometrics leads to security systems with higher recognition rates and fewer errors than security systems based on traditional media. The machine-learning system is built on a modular basis, by forming combinations of individual modules of the scikit-learn library in a Python environment.
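The clustering-and-identification step described above can be sketched with scikit-learn. The feature vectors below are random placeholders standing in for fingerprint descriptors extracted from NIST SD 302; the paper's actual feature extraction is not reproduced.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical stand-ins for fingerprint feature vectors of two enrolled
# people; the real system derives such vectors from NIST SD 302 images.
rng = np.random.default_rng(0)
person_a = rng.normal(loc=0.0, scale=0.5, size=(20, 16))
person_b = rng.normal(loc=3.0, scale=0.5, size=(20, 16))
features = np.vstack([person_a, person_b])

# Cluster the prints into k groups, one per enrolled identity.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(features)

# Identification: a new scan is assigned to the nearest cluster centroid.
# If it lands in the cluster stored for the claimed person, the
# "person has been identified" statement holds.
new_scan = rng.normal(loc=3.0, scale=0.5, size=(1, 16))
predicted_cluster = int(kmeans.predict(new_scan)[0])
```

In a real deployment the number of clusters and the distance threshold for accepting a match would be tuned on the labeled training split.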

Entropy ◽  
2020 ◽  
Vol 22 (12) ◽  
pp. 1436
Author(s):  
Hanna Loch-Olszewska ◽  
Janusz Szwabiński

The growing interest in machine learning methods has raised the need for a careful study of their application to experimental single-particle tracking data. In this paper, we present the differences in the classification of fractional anomalous diffusion trajectories that arise from the selection of the features used in random forest and gradient boosting algorithms. Comparing two recently used sets of human-engineered attributes with a new one, tailor-made for the problem, we show the importance of a thoughtful choice of features and parameters. We also analyse the influence of alterations of the synthetic training data set on the classification results. The trained classifiers are tested on real trajectories of G proteins and their receptors on a plasma membrane.
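The feature-based classification pipeline can be illustrated as follows. The two attributes and the synthetic trajectories below (Brownian vs. drifting) are simplified assumptions for illustration; they are not the paper's tailor-made feature set or fractional-diffusion generators.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

def make_trajectory(directed, n_steps=100):
    """Synthetic 2-D trajectory: Brownian motion, optionally with drift
    (the drift stands in for superdiffusive behaviour)."""
    steps = rng.normal(size=(n_steps, 2))
    if directed:
        steps += 0.5
    return np.cumsum(steps, axis=0)

def features(traj):
    """Two simple human-engineered attributes used as classifier input."""
    net = np.linalg.norm(traj[-1] - traj[0])                       # net displacement
    path = np.sum(np.linalg.norm(np.diff(traj, axis=0), axis=1))   # path length
    msd = np.mean(np.sum((traj - traj[0]) ** 2, axis=1))           # mean sq. displacement
    return [net / path, msd]

X, y = [], []
for label in (0, 1):
    for _ in range(100):
        X.append(features(make_trajectory(directed=bool(label))))
        y.append(label)
X, y = np.array(X), np.array(y)

# Train on every other trajectory, test on the rest.
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X[::2], y[::2])
accuracy = clf.score(X[1::2], y[1::2])
```

Swapping this feature function for a different attribute set, while keeping the classifier fixed, is exactly the kind of comparison the paper performs.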


2021 ◽  
pp. 1-4
Author(s):  
Gusheng Tang ◽  
Xinyan Fu ◽  
Zhen Wang ◽  
Mingyi Chen

Morphological analysis of the bone marrow is an essential step in the diagnosis of hematological disease. Conventional analysis of bone marrow smears is performed under a manual microscope, which is labor-intensive and subject to interobserver variability, and the morphological differential diagnosis of abnormal lymphocytes from normal lymphocytes remains challenging. Digital pathology methods integrated with advances in machine learning enable new diagnostic features/algorithms from digital bone marrow cell images in order to optimize classification, thus providing a robust and faster screening diagnostic tool. We have developed a machine learning system, Morphogo, based on algorithms to discriminate abnormal from normal lymphocytes using digital imaging analysis. We retrospectively reviewed 347 cases of bone marrow digital images. Among them, 53 cases had a clinical history and a diagnosis of marrow involvement with lymphoma confirmed either by morphology or by flow cytometry. We split the 53 cases into training and testing groups of 43 and 10 cases, respectively. From the 43 patients whose diagnosis was confirmed by complementary tests, 15,353 cell images were selected and reviewed by pathologists based on morphological visual appearance. To expand the range and precision with which automated digital microscopy systems recognize lymphoid cells in the marrow, we developed an algorithm that incorporated color and texture, in addition to geometrical cytological features, of the variable lymphocyte images applied as the training data set. The selected images from the 10 remaining patients were analyzed by the trained artificial intelligence-based recognition system and compared with the final diagnosis rendered by pathologists. The positive predictive value for identifying the categories of reactive/normal lymphocytes and abnormal lymphoid cells was 99.04%.
It seems likely that further training and improvement of the algorithms will facilitate further subclassification of specific lineage subset pathology, e.g., diffuse large B-cell lymphoma from chronic lymphocytic leukemia/small lymphocytic lymphoma, follicular lymphoma, mantle cell lymphoma or even hairy cell leukemia in cases of abnormal malignant lymphocyte classes in the future. This research demonstrated the feasibility of digital pathology and emerging machine learning approaches to automatically diagnose lymphoma cells in the bone marrow based on cytological-histological analyses.
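The color, texture, and geometry descriptors mentioned above can be illustrated on a toy cell patch. The specific descriptors below (mean RGB, gradient-magnitude variance, pixel area, bounding box) and the synthetic image are assumptions for illustration; Morphogo's actual feature set is not public.

```python
import numpy as np

def cell_features(rgb, mask):
    """Illustrative color, texture, and geometry descriptors for one cell.

    rgb  : H x W x 3 float image patch
    mask : H x W boolean segmentation of the cell (assumed given here).
    """
    pixels = rgb[mask]
    color = pixels.mean(axis=0)                      # mean R, G, B inside the cell
    gray = rgb.mean(axis=2)
    gy, gx = np.gradient(gray)
    texture = float(np.var(np.hypot(gx, gy)[mask]))  # gradient-magnitude variance
    area = int(mask.sum())                           # geometric size in pixels
    ys, xs = np.nonzero(mask)
    extent = (ys.ptp() + 1, xs.ptp() + 1)            # bounding-box height, width
    return {"color": color, "texture": texture, "area": area, "extent": extent}

# Toy patch: a uniformly colored circular "cell" on a dark background.
h = w = 32
yy, xx = np.mgrid[:h, :w]
mask = (yy - 16) ** 2 + (xx - 16) ** 2 < 100
img = np.zeros((h, w, 3))
img[mask] = [0.6, 0.2, 0.7]
feats = cell_features(img, mask)
```

Vectors of such descriptors, one per reviewed cell image, would then serve as the training data for the classifier.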


2021 ◽  
Vol 1 ◽  
pp. 1183-1192
Author(s):  
Sebastian Bickel ◽  
Benjamin Schleich ◽  
Sandro Wartzack

Data-driven methods from the field of Artificial Intelligence and Machine Learning are increasingly applied in mechanical engineering. This reflects the development of digital engineering in recent years, which aims to bring these methods into practice in order to realize cost and time savings. A necessary step towards implementing such methods, however, is the utilization of existing data: the mere availability of data does not automatically imply data usability. Therefore, this paper presents a method to automatically recognize symbols from principle sketches, which allows the generation of training data for machine learning algorithms. In this approach, the symbols are created randomly and their illustration varies with each generation. A deep learning network from the field of computer vision is used to test the generated data set and thus to recognize symbols on principle sketches. This type of drawing is especially interesting because the cost-saving potential is very high due to the application in the early phases of the product development process.
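Randomly generating symbols so that every illustration varies can be sketched as below. The two shapes are hypothetical stand-ins for principle-sketch symbols (the paper's actual symbol set and renderer are not reproduced); position and scale are re-drawn on every call.

```python
import numpy as np

rng = np.random.default_rng(2)

def render_symbol(kind, size=32):
    """Draw one randomly varied symbol onto a blank canvas.

    'rect' and 'circle' stand in for real principle-sketch symbols;
    the random position and radius ensure the illustration differs
    with each generation, as in the paper's approach.
    """
    img = np.zeros((size, size))
    cy, cx = rng.integers(10, size - 10, size=2)
    r = int(rng.integers(4, 8))
    yy, xx = np.mgrid[:size, :size]
    if kind == "circle":
        img[(yy - cy) ** 2 + (xx - cx) ** 2 <= r ** 2] = 1.0
    else:
        img[cy - r:cy + r, cx - r:cx + r] = 1.0
    return img

# Generate a labeled training set on the fly, no manual annotation needed.
labels = ["rect", "circle"]
dataset = [(render_symbol(k), i) for i, k in enumerate(labels) for _ in range(50)]
```

A detection network would then be trained on these synthetic images and evaluated on real principle sketches.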


Author(s):  
Ritu Khandelwal ◽  
Hemlata Goyal ◽  
Rajveer Singh Shekhawat

Introduction: Machine learning is an intelligent technology that works as a bridge between business and data science. With the involvement of data science, the business goal focuses on findings that yield valuable insights from available data. A large part of Indian cinema is Bollywood, a multi-million-dollar industry. This paper attempts to predict whether an upcoming Bollywood movie will be a Blockbuster, Superhit, Hit, Average or Flop, applying machine learning techniques for classification and prediction. To build a classifier or prediction model, the first step is the learning stage, in which the training data set is used to train the model with a chosen technique or algorithm; the rules generated in this stage form a model that can predict future trends in different types of organizations. Methods: Techniques related to classification and prediction, namely Support Vector Machine (SVM), Random Forest, Decision Tree, Naïve Bayes, Logistic Regression, AdaBoost, and KNN, are applied in order to find efficient and effective results. All these functionalities can be applied with GUI-based workflows available in categories such as Data, Visualize, Model, and Evaluate. Result: A comparative analysis is performed based on parameters such as accuracy and the confusion matrix to identify the best possible model for predicting movie success. Conclusion: Using the predicted success rate, production houses can plan advertisement propaganda and the best time to release the movie in order to gain higher benefits. 
Discussion: Data mining is the process of discovering patterns in large data sets, from which relationships are discovered that solve business problems and help predict forthcoming trends. This prediction can help production houses with advertisement propaganda and cost planning, and by assuring these factors they can make the movie more profitable.
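The comparative analysis across the seven listed classifiers can be sketched with scikit-learn. The synthetic data below merely stands in for real movie attributes; the five classes mirror Blockbuster/Superhit/Hit/Average/Flop.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for movie features (budget, cast rating, genre, ...).
X, y = make_classification(n_samples=500, n_features=10, n_informative=6,
                           n_classes=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "SVM": SVC(),
    "Random Forest": RandomForestClassifier(random_state=0),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "Naive Bayes": GaussianNB(),
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "AdaBoost": AdaBoostClassifier(random_state=0),
    "KNN": KNeighborsClassifier(),
}

# Fit every classifier and record test-set accuracy.
scores = {}
for name, model in models.items():
    y_pred = model.fit(X_tr, y_tr).predict(X_te)
    scores[name] = accuracy_score(y_te, y_pred)

# The best model, plus its confusion matrix over the 5 success classes.
best = max(scores, key=scores.get)
cm = confusion_matrix(y_te, models[best].predict(X_te))
```

With real movie data, the same loop would identify which of the seven models predicts success categories most reliably.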


2019 ◽  
Vol 9 (6) ◽  
pp. 1128 ◽  
Author(s):  
Yundong Li ◽  
Wei Hu ◽  
Han Dong ◽  
Xueyan Zhang

Using aerial cameras, satellite remote sensing, or unmanned aerial vehicles (UAVs) equipped with cameras can facilitate search and rescue tasks after disasters. The traditional manual interpretation of huge aerial images is inefficient and could be replaced by machine learning-based methods combined with image processing techniques. Given the development of machine learning, researchers find that convolutional neural networks can effectively extract features from images. Some target detection methods based on deep learning, such as the single-shot multibox detector (SSD) algorithm, can achieve better results than traditional methods. However, the impressive performance of machine learning-based methods relies on numerous labeled samples, and given the complexity of post-disaster scenarios, obtaining many samples in the aftermath of disasters is difficult. To address this issue, a damaged building assessment method using SSD with pretraining and data augmentation is proposed in the current study, highlighting the following aspects. (1) Objects can be detected and classified into undamaged buildings, damaged buildings, and ruins. (2) A convolutional auto-encoder (CAE) based on VGG16 is constructed and trained using unlabeled post-disaster images. As a transfer learning strategy, the weights of the SSD model are initialized using the weights of the CAE counterpart. (3) Data augmentation strategies, such as image mirroring, rotation, Gaussian blur, and Gaussian noise processing, are utilized to augment the training data set. As a case study, aerial images of Hurricane Sandy in 2012 were used to validate the proposed method's effectiveness. Experiments show that the pretraining strategy can improve overall accuracy by 10% compared with SSD trained from scratch, and that the data augmentation strategies can improve mAP and mF1 by 72% and 20%, respectively. 
Finally, the method is further verified on another dataset, from Hurricane Irma, confirming that the proposed method is feasible.
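The augmentation strategies listed in point (3) can be sketched directly. The kernel width and noise level below are illustrative choices, not the paper's settings.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(3)

def augment(image):
    """Yield augmented copies of one aerial image patch: mirrors,
    rotations, Gaussian blur, and Gaussian noise, as in the paper's
    augmentation pipeline (parameter values are illustrative)."""
    yield np.fliplr(image)                                   # horizontal mirror
    yield np.flipud(image)                                   # vertical mirror
    for angle in (90, 180, 270):
        yield ndimage.rotate(image, angle, reshape=False)    # rotation
    yield ndimage.gaussian_filter(image, sigma=1.0)          # Gaussian blur
    yield image + rng.normal(scale=0.05, size=image.shape)   # Gaussian noise

patch = rng.random((16, 16))
augmented = list(augment(patch))   # 7 extra training samples per patch
```

Each augmented copy inherits the label of the original patch, multiplying the effective size of the scarce post-disaster training set.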


Sensors ◽  
2021 ◽  
Vol 21 (7) ◽  
pp. 2503
Author(s):  
Taro Suzuki ◽  
Yoshiharu Amano

This paper proposes a method for detecting non-line-of-sight (NLOS) multipath, which causes large positioning errors in a global navigation satellite system (GNSS). We use the GNSS signal correlation output, the most primitive GNSS signal processing output, to detect NLOS multipath based on machine learning. The shape of the multi-correlator output is distorted by NLOS multipath, and features of this shape are used to discriminate NLOS signals. We implement two supervised learning methods, a support vector machine (SVM) and a neural network (NN), and compare their performance. In addition, we propose an automated method of collecting machine-learning training data for LOS and NLOS signals. Evaluation of the proposed NLOS detection method in an urban environment confirmed that the NN outperformed the SVM, correctly discriminating 97.7% of NLOS signals.
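The SVM-vs-NN comparison on correlator-shape features can be sketched as follows. The toy correlator model (a clean triangle for LOS, plus a delayed echo for NLOS) and all parameter values are illustrative assumptions, not the paper's signal model.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(4)
lags = np.linspace(-1, 1, 21)   # multi-correlator spacing in chips (illustrative)

def correlator_output(nlos):
    """Toy multi-correlator shape: a symmetric triangle for LOS; a delayed
    secondary peak distorts the shape when NLOS multipath is present."""
    shape = np.clip(1 - np.abs(lags), 0, None)
    if nlos:
        shape += 0.4 * np.clip(1 - np.abs(lags - 0.5), 0, None)  # echo
    return shape + rng.normal(scale=0.02, size=lags.size)

# Labeled samples alternate LOS (0) and NLOS (1).
X = np.array([correlator_output(nlos=i % 2 == 1) for i in range(200)])
y = np.arange(200) % 2

# Train both supervised models on the same split and compare test accuracy.
svm = SVC().fit(X[:150], y[:150])
nn = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                   random_state=0).fit(X[:150], y[:150])
svm_acc = svm.score(X[150:], y[150:])
nn_acc = nn.score(X[150:], y[150:])
```

In practice the features would be derived from real correlator outputs, with labels collected by the paper's automated LOS/NLOS data-collection method.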


Author(s):  
Jonas Austerjost ◽  
Robert Söldner ◽  
Christoffer Edlund ◽  
Johan Trygg ◽  
David Pollard ◽  
...  

Machine vision is a powerful technology that has become increasingly popular and accurate during the last decade due to rapid advances in the field of machine learning. The majority of machine vision applications are currently found in consumer electronics, automotive applications, and quality control, yet the potential for bioprocessing applications is tremendous. For instance, detecting and controlling foam emergence is important for all upstream bioprocesses, but the lack of robust foam sensing often leads to batch failures from foam-outs or overaddition of antifoam agents. Here, we report a new low-cost, flexible, and reliable foam sensor concept for bioreactor applications. The concept applies convolutional neural networks (CNNs), a state-of-the-art machine learning system for image processing. The implemented method shows high accuracy for both binary foam detection (foam/no foam) and fine-grained classification of foam levels.
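A full CNN is beyond a short sketch, but its core mechanism, a convolutional filter whose pooled response separates textured foam from a smooth liquid surface, can be shown in plain NumPy. The Laplacian kernel and the synthetic images below are assumptions for illustration, not the paper's learned filters or bioreactor data.

```python
import numpy as np

rng = np.random.default_rng(5)

# A hand-set Laplacian kernel stands in for the CNN's learned filters;
# it responds strongly to bubble-like, high-frequency texture.
kernel = np.array([[0,  1, 0],
                   [1, -4, 1],
                   [0,  1, 0]], dtype=float)

def conv2d(img, k):
    """Valid-mode 2-D convolution (no padding), as in a CNN layer."""
    kh, kw = k.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

def foam_score(img):
    """Global-average-pool the absolute filter response; high means foamy."""
    return float(np.mean(np.abs(conv2d(img, kernel))))

calm = np.full((32, 32), 0.5)                       # smooth liquid surface
foamy = 0.5 + 0.3 * rng.standard_normal((32, 32))   # bubbly, high-frequency
is_foam = foam_score(foamy) > foam_score(calm) + 0.1  # binary foam/no-foam
```

A trained CNN learns many such filters jointly with a classifier head, which is what enables the fine-grained foam-level classification reported above.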


2021 ◽  
Author(s):  
Eva van der Kooij ◽  
Marc Schleiss ◽  
Riccardo Taormina ◽  
Francesco Fioranelli ◽  
Dorien Lugt ◽  
...  

Accurate short-term forecasts, also known as nowcasts, of heavy precipitation are desirable for creating early warning systems for extreme weather and its consequences, e.g. urban flooding. In this research, we explore the use of machine learning for short-term prediction of heavy rainfall showers in the Netherlands.

We assess the performance of a recurrent, convolutional neural network (TrajGRU) with lead times of 0 to 2 hours. The network is trained on a 13-year archive of radar images with 5-min temporal and 1-km spatial resolution from the precipitation radars of the Royal Netherlands Meteorological Institute (KNMI). We aim to train the model to predict the formation and dissipation of dynamic, heavy, localized rain events, a task for which traditional Lagrangian nowcasting methods still come up short.

We report on different ways to optimize predictive performance for heavy rainfall intensities through several experiments. The large dataset available provides many possible configurations for training. To focus on heavy rainfall intensities, we use different subsets of this dataset by applying different conditions for event selection, varying the ratio of light and heavy precipitation events present in the training data set, and changing the loss function used to train the model.

To assess the performance of the model, we compare our method to a current state-of-the-art Lagrangian nowcasting system from the pySTEPS library, S-PROG, a deterministic approximation of an ensemble mean forecast. The results of the experiments are used to discuss the pros and cons of machine-learning-based methods for precipitation nowcasting and possible ways to further increase performance.
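One of the levers mentioned above, changing the loss function to emphasize heavy rainfall, can be sketched as a weighted MSE. The thresholds and weights below are illustrative assumptions, not the values used in the study.

```python
import numpy as np

def weighted_mse(pred, target, thresholds=(1.0, 10.0), weights=(1.0, 2.0, 5.0)):
    """MSE with per-pixel weights that grow with observed rain rate (mm/h).

    Threshold/weight values are illustrative; the idea is to penalize errors
    on rare, heavy, localized showers more strongly than errors on the
    abundant light-rain pixels.
    """
    w = np.full(target.shape, weights[0], dtype=float)
    w[target >= thresholds[0]] = weights[1]
    w[target >= thresholds[1]] = weights[2]
    return float(np.mean(w * (pred - target) ** 2))

# A heavy-rain miss costs more than an equal-sized light-rain miss.
target = np.array([0.5, 15.0])
miss_light = weighted_mse(np.array([1.5, 15.0]), target)  # 1 mm/h error, light pixel
miss_heavy = weighted_mse(np.array([0.5, 16.0]), target)  # 1 mm/h error, heavy pixel
```

Training TrajGRU under such a loss pushes the network to spend capacity on the heavy, localized events that standard MSE tends to smooth away.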


Author(s):  
Yanxiang Yu ◽  
Chicheng Xu ◽  
Siddharth Misra ◽  
Weichang Li ◽  
...  

Compressional and shear sonic traveltime logs (DTC and DTS, respectively) are crucial for subsurface characterization and seismic-well tie. However, these two logs are often missing or incomplete in many oil and gas wells. Therefore, many petrophysical and geophysical workflows include sonic log synthetization or pseudo-log generation based on multivariate regression or rock physics relations. From March 1, 2020 to May 7, 2020, the SPWLA PDDA SIG hosted a contest aiming to predict the DTC and DTS logs from seven “easy-to-acquire” conventional logs using machine-learning methods (GitHub, 2020). In the contest, a total of 20,525 data points with half-foot resolution from three wells were collected to train regression models using machine-learning techniques. Each data point had seven features, the conventional “easy-to-acquire” logs: caliper, neutron porosity, gamma ray (GR), deep resistivity, medium resistivity, photoelectric factor, and bulk density, as well as the two sonic logs (DTC and DTS) as the target. A separate data set of 11,089 samples from a fourth well was then used as the blind test data set. The prediction performance of the model was evaluated using root mean square error (RMSE) as the metric:

RMSE = \sqrt{ \frac{1}{2m} \sum_{i=1}^{m} \left[ \left( DTC_{pred}^{i} - DTC_{true}^{i} \right)^{2} + \left( DTS_{pred}^{i} - DTS_{true}^{i} \right)^{2} \right] }

In the benchmark model (Yu et al., 2020), we used a Random Forest regressor with minimal preprocessing of the training data set; an RMSE score of 17.93 was achieved on the test data set. The top five models from the contest, on average, beat our benchmark model by 27% in RMSE. In this paper, we review these five solutions, covering preprocessing techniques and different machine-learning models, including neural networks, long short-term memory (LSTM), and ensemble trees. 
We found that data cleaning and clustering were critical for improving performance in all models.
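The contest's joint RMSE over the two target logs can be implemented directly from its definition; this helper assumes equal-length arrays for the four log vectors.

```python
import numpy as np

def contest_rmse(dtc_pred, dtc_true, dts_pred, dts_true):
    """Joint RMSE over the DTC and DTS logs, matching the contest metric:
    the squared errors of both logs are averaged together (over 2m terms)
    before taking the root."""
    dtc_pred, dtc_true = np.asarray(dtc_pred), np.asarray(dtc_true)
    dts_pred, dts_true = np.asarray(dts_pred), np.asarray(dts_true)
    m = len(dtc_true)
    sq = (dtc_pred - dtc_true) ** 2 + (dts_pred - dts_true) ** 2
    return float(np.sqrt(sq.sum() / (2 * m)))
```

Submissions would be scored by calling this on the blind-test well's predicted and measured logs.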

