Particle Classification through the Analysis of the Forward Scattered Signal in Optical Tweezers

Sensors ◽  
2021 ◽  
Vol 21 (18) ◽  
pp. 6181
Author(s):  
Inês Alves Carvalho ◽  
Nuno Azevedo Silva ◽  
Carla C. Rosa ◽  
Luís C. C. Coelho ◽  
Pedro A. S. Jorge

The ability to select, isolate, and manipulate micron-sized particles or small clusters has made optical tweezers one of the emergent tools for modern biotechnology. In conventional setups, the classification of the trapped specimen is usually achieved through the acquired image, the scattered signal, or additional information such as Raman spectroscopy. In this work, we propose a solution that uses the temporal data signal from the scattering process of the trapping laser, acquired with a quadrant photodetector. Our methodology rests on a pre-processing strategy that combines the Fourier transform and principal component analysis to reduce the dimension of the data and extract relevant features. Testing a wide range of standard machine learning algorithms, we show that this methodology achieves accuracies of around 90%, validating the concept of using the temporal dynamics of the scattering signal for the classification task. Achieved with 500 ms signals and methods with a low computational footprint, these results pave the way for the deployment of alternative and faster classification methodologies in optical trapping technologies.
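For readers less familiar with the pre-processing strategy described above, the following is a minimal sketch of that kind of pipeline (FFT-based spectra, PCA for dimensionality reduction, then a standard classifier) in Python with scikit-learn. The data, sampling rate, and choice of classifier are placeholder assumptions, not details taken from the paper.

```python
# Sketch of the described pipeline: FFT-based spectra, PCA, then a classifier.
# All names and numbers (X_time, sampling length, n_components) are illustrative.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Placeholder data: 200 signals of 500 ms sampled at 10 kHz (5000 samples each).
X_time = rng.standard_normal((200, 5000))
y = rng.integers(0, 2, size=200)          # particle class labels

# Feature extraction: magnitude spectrum of each temporal signal.
X_freq = np.abs(np.fft.rfft(X_time, axis=1))

# PCA + classifier; the paper tests many standard algorithms, SVC is one example.
model = make_pipeline(StandardScaler(), PCA(n_components=20), SVC())
scores = cross_val_score(model, X_freq, y, cv=5)
print("cross-validated accuracy:", scores.mean())
```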

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Sakthi Kumar Arul Prakash ◽  
Conrad Tucker

Abstract This work investigates the ability to classify misinformation in online social media networks in a manner that avoids the need for ground truth labels. Rather than approach the classification problem as a task for humans or machine learning algorithms, this work leverages user–user and user–media (i.e., media likes) interactions to infer the type of information (fake vs. authentic) being spread, without needing to know the actual details of the information itself. To study the inception and evolution of user–user and user–media interactions over time, we create an experimental platform that mimics the functionality of real-world social media networks. We develop a graphical model that considers the evolution of this network topology to model the uncertainty (entropy) propagation when fake and authentic media disseminate across the network. The creation of a real-world social media network enables a wide range of hypotheses to be tested pertaining to users, their interactions with other users, and with media content. The discovery that the entropy of user–user and user–media interactions approximates fake and authentic media likes enables us to classify fake media in an unsupervised learning manner.
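As a toy illustration of the entropy idea only (not the authors' graphical model), the sketch below computes the Shannon entropy of each media item's user-interaction distribution from a synthetic likes matrix and splits the items into two groups without labels; all data and parameters are assumptions.

```python
# Toy illustration: Shannon entropy of per-item interaction distributions,
# followed by an unsupervised two-group split. Data are synthetic placeholders.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Synthetic "likes" matrix: rows = media items, columns = users.
likes = rng.poisson(lam=rng.uniform(0.1, 2.0, size=(50, 1)), size=(50, 300))

# Normalize each row into a probability distribution over users, then take entropy.
row_sums = likes.sum(axis=1, keepdims=True) + 1e-12
p = likes / row_sums
H = -(p * np.log(p + 1e-12)).sum(axis=1)   # Shannon entropy per media item

# Unsupervised split into two groups by interaction entropy.
groups = KMeans(n_clusters=2, n_init=10).fit_predict(H.reshape(-1, 1))
print(groups)
```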


PLoS ONE ◽  
2021 ◽  
Vol 16 (7) ◽  
pp. e0254181
Author(s):  
Kamila Lis ◽  
Mateusz Koryciński ◽  
Konrad A. Ciecierski

Data classification is one of the most commonly used applications of machine learning. There are many well-developed algorithms that perform this task with excellence, working in various environments and for different data distributions. Classification algorithms, just like other machine learning algorithms, have one thing in common: in order to operate on data, they must see the data. In the present world, where concerns about privacy, the GDPR (General Data Protection Regulation), business confidentiality, and security keep growing, this requirement to work directly on the original data can become a burden in some situations. In this paper, an approach to the classification of images that cannot be directly accessed during training is presented. It has been shown that one can train a deep neural network to create such a representation of the original data that i) without additional information, the original data cannot be restored, and ii) this representation, called a masked form, can still be used for classification purposes. Moreover, it has been shown that classification of the masked data can be done using both classical and neural network-based classifiers.
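A minimal sketch of the general idea follows: classifiers are trained on a transformed ("masked") representation rather than on the raw images. Here a fixed random projection merely stands in for the learned masking network described in the paper, and the dataset and classifiers are illustrative assumptions.

```python
# Sketch of the idea only: classify data through a transformed ("masked")
# representation instead of the raw input. A fixed random projection stands in
# for the paper's learned masking network.
from sklearn.datasets import load_digits
from sklearn.random_projection import GaussianRandomProjection
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
masker = GaussianRandomProjection(n_components=32, random_state=0)
X_masked = masker.fit_transform(X)        # classifiers never see the raw images

X_tr, X_te, y_tr, y_te = train_test_split(X_masked, y, random_state=0)
for clf in (LogisticRegression(max_iter=2000), MLPClassifier(max_iter=1000)):
    clf.fit(X_tr, y_tr)
    print(type(clf).__name__, clf.score(X_te, y_te))
```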


Drones ◽  
2021 ◽  
Vol 5 (4) ◽  
pp. 104
Author(s):  
Zaide Duran ◽  
Kubra Ozcan ◽  
Muhammed Enes Atik

With the development of photogrammetry technologies, point clouds have found a wide range of uses in academic and commercial areas. This situation has made it essential to extract information from point clouds. In particular, artificial intelligence applications have been used to extract information from point clouds of complex structures, and point cloud classification is one of the leading areas where these applications are used. In this study, the classification of point clouds obtained by aerial photogrammetry and Light Detection and Ranging (LiDAR) technology for the same region is performed using machine learning. For this purpose, nine popular machine learning methods have been used. Geometric features obtained from the point clouds were used to build the feature spaces for classification; color information was additionally included for the photogrammetric point cloud. For the LiDAR point cloud, the highest overall accuracy, 0.96, was obtained with the Multilayer Perceptron (MLP) method, and the lowest, 0.50, with the AdaBoost method. For the photogrammetric point cloud, the highest overall accuracy was again achieved with the MLP method (0.90), and the lowest with the Gaussian Naïve Bayes (GNB) method (0.25).
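The sketch below illustrates this kind of workflow: simple eigenvalue-based geometric features (linearity, planarity, sphericity) are computed from local neighborhoods of a synthetic point cloud and fed to an MLP classifier. The point cloud, labels, neighborhood size, and feature set are assumptions, not the study's data or exact features.

```python
# Illustrative sketch: eigenvalue-based geometric features from local
# neighborhoods of a point cloud, classified with an MLP. Synthetic data only.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
points = rng.uniform(0, 10, size=(2000, 3))           # x, y, z coordinates
labels = (points[:, 2] > 5).astype(int)               # placeholder classes

# Local covariance eigenvalues per point (k nearest neighbours).
k = 15
nn = NearestNeighbors(n_neighbors=k).fit(points)
_, idx = nn.kneighbors(points)
feats = []
for neighbourhood in points[idx]:                     # shape (k, 3)
    cov = np.cov(neighbourhood.T)
    w = np.sort(np.linalg.eigvalsh(cov))[::-1]        # eigenvalues, descending
    w = w / w.sum()
    feats.append([w[0], w[1], w[2],                   # normalised eigenvalues
                  (w[0] - w[1]) / w[0],               # linearity
                  (w[1] - w[2]) / w[0],               # planarity
                  w[2] / w[0]])                       # sphericity
X = np.array(feats)

X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500).fit(X_tr, y_tr)
print("overall accuracy:", clf.score(X_te, y_te))
```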


1999 ◽  
Vol 73 (6) ◽  
pp. 1015-1028 ◽  
Author(s):  
Dhirendra K. Pandey ◽  
Christopher A. McRoberts ◽  
Manoj K. Pandit

The current classification of scleractinian corals, based upon gross morphological features, has been found unsatisfactory in light of additional information from skeletal microarchitecture and microstructure. It is necessary to investigate microstructural details and the limits of morphologic variation within and between different coral clades before a revised classification is constructed. Variations in morphologic characters and microstructural details from a population of Dimorpharaea de Fromentel, 1861 (Family Microsolenidae) from Upper Bathonian (Jumara Dome) strata in Kachchh are described. The data used include the diameter (D) and height (H) of the corallum, number of corallites in the colony (NC), number of septa in the mother corallite at the center of the colony (NS), minimum distance between the centers of the central corallite and a corallite of the inner ring (C1), minimum distance between corallite centers of the outer ring (C2), septal density (DS), and trabecular density (DT). The principal components analysis reveals that most of the variation is explained by "size"-related characters (D and H), while corallite density (NC and C1) and septal structures (DS and DT) contribute to the second and third principal component axes, respectively. The microarchitecture and distribution of characters observed in the Kachchh Dimorpharaea require a re-evaluation of familial- and species-level concepts and suggest that the population belongs to a single species, Dimorpharaea stellans Gregory, 1900, rather than four nominal species (D. stellans, D. distincta, D. continua and D. orbica) as has been assumed.
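A minimal sketch of the principal components analysis described above is given below, assuming a made-up character matrix; only the character abbreviations (D, H, NC, NS, C1, C2, DS, DT) come from the abstract.

```python
# Minimal PCA sketch on a made-up morphometric character matrix; only the
# column abbreviations are taken from the abstract above.
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
chars = pd.DataFrame(rng.normal(size=(30, 8)),
                     columns=["D", "H", "NC", "NS", "C1", "C2", "DS", "DT"])

pca = PCA(n_components=3)
scores = pca.fit_transform(StandardScaler().fit_transform(chars))
# Loadings show which characters dominate each axis (e.g., "size" on PC1).
loadings = pd.DataFrame(pca.components_.T, index=chars.columns,
                        columns=["PC1", "PC2", "PC3"])
print(pca.explained_variance_ratio_)
print(loadings)
```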


2020 ◽  
Vol 70 (2) ◽  
pp. 199-206
Author(s):  
Zh.T. Aituganova ◽  
B.A. Talpakova ◽  
B.K. Zhussipbek ◽  
...  

In the era of rapid technological development, the issue of information security is highly relevant. The article discusses issues related to information security in computer systems, which arise because information can be copied simply and quickly through communication channels. The problem of information security covers a wide range of issues, from legislation to specific technical devices. Software developers point to the need for technical means of protection that provide a high level of information security. Experience has shown, however, that software alone cannot guarantee information security. Therefore, any software must be supplemented by organizational measures that determine the rules for access to and storage of information. In practice, users face additional challenges when using security tools. This article presents a classification of guarantees and methods of self-protection, as well as methods of protection based on requesting additional information.


Author(s):  
Anupam Agrawal ◽  

The paper describes a method of intrusion detection based on machine learning algorithms. The experiments have been conducted on the KDD'99 Cup dataset, which is imbalanced; as a consequence, the recall of some classes drops drastically because there are not enough instances of them. For preprocessing, one-hot encoding and label encoding were applied to make the dataset machine readable. The dimensionality of the dataset has been reduced using Principal Component Analysis, and classification into the attack and normal classes is performed with a Naïve Bayes classifier. Due to the imbalanced nature of the data, the focus was shifted to per-class and overall recall, and the results were compared with other models that achieve high accuracy. Based on the results, using a self-optimizing loop, the model achieved a better geometric mean accuracy.
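A sketch of the described pipeline (categorical encoding, PCA, Naïve Bayes, recall-focused evaluation) is given below; synthetic, imbalanced data stands in for the KDD'99 records, and all column names and parameters are assumptions.

```python
# Sketch of the described pipeline: encode categorical features, reduce
# dimensionality with PCA, classify with Naive Bayes, and report recall.
# Synthetic, imbalanced data stands in for the KDD'99 records.
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.decomposition import PCA
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)
n = 5000
X = pd.DataFrame(rng.standard_normal((n, 20)),
                 columns=[f"f{i}" for i in range(20)])
X["protocol"] = rng.choice(["tcp", "udp", "icmp"], size=n)   # categorical feature
y = rng.choice(["normal", "attack"], size=n, p=[0.9, 0.1])   # imbalanced labels

pre = ColumnTransformer(
    [("num", StandardScaler(), [f"f{i}" for i in range(20)]),
     ("cat", OneHotEncoder(), ["protocol"])],
    sparse_threshold=0,                    # keep output dense so PCA can consume it
)
model = make_pipeline(pre, PCA(n_components=10), GaussianNB())

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
model.fit(X_tr, y_tr)
print("per-class recall:",
      recall_score(y_te, model.predict(X_te),
                   average=None, labels=["normal", "attack"]))
```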


2017 ◽  
Vol 24 (2) ◽  
pp. 265-276 ◽  
Author(s):  
Marek Wodziński ◽  
Aleksandra Krzyżanowska

Abstract This paper presents an alternative approach to sequential data classification, based on traditional machine learning algorithms (neural networks, principal component analysis, a multivariate Gaussian anomaly detector) and finding the shortest path in a directed acyclic graph using the A* algorithm with a regression-based heuristic. Palm gestures were used as an example of sequential data, and a quadrocopter was the controlled object. The study includes the creation of a conceptual model and the practical construction of a system using the GPU to ensure real-time operation. The results present the classification accuracy of the chosen gestures and a comparison of the computation time between the CPU- and GPU-based solutions.
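For illustration, the following is a compact, generic A* shortest-path search over a directed acyclic graph; the regression-based heuristic from the paper is replaced here by a trivial placeholder, and the example graph is made up.

```python
# Generic A* shortest-path search over a directed acyclic graph; the
# regression-based heuristic from the paper is replaced by a placeholder.
import heapq

def a_star(graph, start, goal, heuristic):
    """graph: dict node -> list of (neighbour, edge_cost)."""
    frontier = [(heuristic(start), 0.0, start, [start])]
    best_cost = {start: 0.0}
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        for nxt, w in graph.get(node, []):
            new_cost = cost + w
            if new_cost < best_cost.get(nxt, float("inf")):
                best_cost[nxt] = new_cost
                heapq.heappush(frontier,
                               (new_cost + heuristic(nxt), new_cost, nxt, path + [nxt]))
    return float("inf"), []

# Small example DAG; in the paper the nodes would encode gesture-sequence states.
dag = {"s": [("a", 1.0), ("b", 4.0)], "a": [("b", 1.0), ("g", 5.0)],
       "b": [("g", 1.0)], "g": []}
print(a_star(dag, "s", "g", heuristic=lambda n: 0.0))   # (3.0, ['s', 'a', 'b', 'g'])
```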


2019 ◽  
Vol 4 (2) ◽  
pp. 17-22 ◽  
Author(s):  
Jameela Ali Alkrimi ◽  
Sherna Aziz Tome ◽  
Loay E. George

Principal component analysis (PCA) is a feature-reduction technique that removes correlations between features. In this research, a novel approach is proposed by applying the PCA technique to various morphologies of red blood cells (RBCs). According to hematologists, this method successfully classified 40 different types of abnormal RBCs. The classification of RBCs into various distinct subtypes using three machine learning algorithms is important in clinical and laboratory tests for detecting blood diseases. The most common abnormal RBCs are considered anemic. The RBC features are sufficient to identify the type of anemia and the disease that caused it. We found that several features extracted from RBCs in the blood smear images are not significant for classification when observed independently but are significant when combined with other features. The number of features was reduced from 271 to 8, which decreased the training time, while the classification accuracy increased to 98%.
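A minimal sketch of the feature-reduction step described above, assuming synthetic data: 271-dimensional feature vectors are projected onto 8 principal components before classification, and the explained variance of the retained components is inspected. The classifier and data are placeholders.

```python
# Sketch of the feature-reduction step: project 271-dimensional feature vectors
# onto 8 principal components before classification. Data are synthetic.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((400, 271))     # 271 extracted RBC features (placeholder)
y = rng.integers(0, 4, size=400)        # placeholder RBC subtype labels

model = make_pipeline(StandardScaler(), PCA(n_components=8), KNeighborsClassifier())
print("accuracy:", cross_val_score(model, X, y, cv=5).mean())

# Inspect how much variance the 8 retained components explain.
pca = PCA(n_components=8).fit(StandardScaler().fit_transform(X))
print("variance explained by 8 components:", pca.explained_variance_ratio_.sum())
```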

