Recent Trends in Compressive Raman Spectroscopy Using DMD-Based Binary Detection

2018 ◽  
Vol 5 (1) ◽  
pp. 1 ◽  
Author(s):  
Derya Cebeci ◽  
Bharat Mankani ◽  
Dor Ben-Amotz

The collection of high-dimensional hyperspectral data is often the slowest step in hyperspectral Raman imaging: acquiring chemical images with conventional array-based Raman spectroscopy can take hours or even days. To increase Raman collection speeds, a number of compressive detection (CD) strategies, which simultaneously sense and compress the spectral signal, have recently been demonstrated. In contrast to conventional hyperspectral imaging, where full spectra are measured prior to post-processing and imaging, CD increases the speed of data collection by making measurements in a low-dimensional space containing only the information of interest, thus enabling real-time imaging. The key advantage of the CD strategy is that single-channel detectors can be used together with optical filter functions to obtain component intensities directly. In other words, the filter functions are simply optimized patterns of wavelength combinations characteristic of each component in the sample, and the intensity transmitted through each filter is a direct measure of the associated score value. Essentially, compressive hyperspectral images consist of 'score' pixels instead of 'spectral' pixels. This paper presents an overview of recent advances in compressive Raman detection designs and performance validations using a DMD-based binary detection strategy.
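
The score measurement described above can be sketched in a few lines: a binary filter (the DMD pattern) selects wavelength channels, and a single detector reading integrates the transmitted intensity. This is a toy NumPy illustration with made-up component spectra and naive threshold filters, not the authors' optimized filter design:

```python
import numpy as np

rng = np.random.default_rng(0)
n_channels = 256

# Two made-up component spectra (Gaussian bands at different positions).
wavelengths = np.arange(n_channels)
comp_a = np.exp(-0.5 * ((wavelengths - 80) / 10) ** 2)
comp_b = np.exp(-0.5 * ((wavelengths - 170) / 12) ** 2)

# Binary DMD filters: transmit the channels where one component dominates.
filt_a = (comp_a > comp_b).astype(float)
filt_b = (comp_b > comp_a).astype(float)

# A mixed sample spectrum with known concentrations.
spectrum = 0.7 * comp_a + 0.3 * comp_b

# Each "score" is the total intensity transmitted through one filter --
# a single detector reading per filter, not a full spectral readout.
score_a = filt_a @ spectrum
score_b = filt_b @ spectrum
print(score_a > score_b)  # component A dominates this pixel
```

With more components, the filters would be optimized jointly and the raw scores unmixed with a small linear transform, but the per-pixel measurement stays a handful of detector readings rather than a full spectrum.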

Author(s):  
Samuel Melton ◽  
Sharad Ramanathan

Abstract

Motivation: Recent technological advances produce a wealth of high-dimensional descriptions of biological processes, yet extracting meaningful insight and mechanistic understanding from these data remains challenging. For example, in developmental biology, the dynamics of differentiation can now be mapped quantitatively using single-cell RNA sequencing, yet it is difficult to infer the molecular regulators of developmental transitions. Here, we show that discovering informative features in the data is crucial for statistical analysis as well as for making experimental predictions.

Results: We identify features based on their ability to discriminate between clusters of the data points. We define a class of problems in which linear separability of clusters is hidden in a low-dimensional space. We propose an unsupervised method to identify the subset of features that define a low-dimensional subspace in which clustering can be conducted. This is achieved by averaging over discriminators trained on an ensemble of proposed cluster configurations. We then apply our method to single-cell RNA-seq data from mouse gastrulation and identify 27 key transcription factors (out of 409 total), 18 of which are known to define cell states through their expression levels. In this inferred subspace, we find clear signatures of known cell types that eluded classification prior to discovery of the correct low-dimensional subspace.

Availability and implementation: https://github.com/smelton/SMD

Supplementary information: Supplementary data are available at Bioinformatics online.


Sensors ◽  
2020 ◽  
Vol 20 (4) ◽  
pp. 1217 ◽  
Author(s):  
Yuhua Li ◽  
Fengjie Wang ◽  
Ye Sun ◽  
Yingxu Wang

Accurate, rapid and non-destructive disease identification in the early stage of infection is essential to ensure the safe and efficient production of greenhouse cucumbers. Nevertheless, the effectiveness of most existing methods relies on the disease already exhibiting obvious symptoms in the middle to late stages of infection. Therefore, this paper presents an early identification method for cucumber diseases based on hyperspectral imaging and machine learning, which consists of two procedures. First, reconstruction fidelity terms and graph constraints are constructed based on the decision criterion of the collaborative representation classifier and the desired spatial distribution of the spectral curves (391 to 1044 nm), respectively. The former constrains the same-class and different-class reconstruction residuals, while the latter constrains the weighted distances between spectral curves. They are then fused to steer the design of an offline algorithm. The algorithm trains a linear discriminative projection that transforms the original spectral curves into a low-dimensional space, where the projected spectral curves of different diseases exhibit better separation. Then, the collaborative representation classifier is used for online early diagnosis. Five experiments were performed on hyperspectral data collected in the early infection stage of cucumber anthracnose and Corynespora cassiicola diseases. The experimental results demonstrate that the proposed method is feasible and effective, providing a maximal identification accuracy of 98.2% and an average online identification time of 0.65 ms. The proposed method has a promising future in practical production due to its high diagnostic accuracy and short diagnosis time.
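
The online stage relies on the collaborative representation classifier (CRC). A minimal NumPy sketch of the standard CRC decision rule, with toy "spectra" in place of real hyperspectral curves, might look like this (the regularisation value and band layout are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

def crc_predict(D, labels, x, lam=0.01):
    """D: (n_features, n_train) dictionary of training spectra (columns)."""
    # Collaborative coding: one ridge solve over all classes jointly.
    alpha = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ x)
    # Assign to the class whose samples reconstruct x with least residual.
    residuals = {}
    for c in np.unique(labels):
        mask = labels == c
        residuals[c] = np.linalg.norm(x - D[:, mask] @ alpha[mask])
    return min(residuals, key=residuals.get)

# Toy "spectra": two classes with different band positions plus noise.
n_bands = 60
t = np.linspace(0, 1, n_bands)
def sample(c):
    centre = 0.3 if c == 0 else 0.7
    return np.exp(-0.5 * ((t - centre) / 0.05) ** 2) + 0.05 * rng.normal(size=n_bands)

labels = np.array([0] * 10 + [1] * 10)
D = np.stack([sample(c) for c in labels], axis=1)
print(crc_predict(D, labels, sample(0)))  # expected: class 0
```

The single regularised least-squares solve is what makes the online step fast, consistent with the sub-millisecond identification times reported above.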


2015 ◽  
Vol 26 (09) ◽  
pp. 1550103
Author(s):  
Yifang Ma ◽  
Zhiming Zheng

The evolution of networks or dynamical systems is controlled by many parameters in a high-dimensional space, and it is crucial to extract the reduced, dominant ones in a low-dimensional space. Here we consider the network ensemble, introduce a matrix resolvent scale function, and apply it in a spectral approach to obtain the similarity relations between each pair of networks. The concept of Diffusion Maps is used to extract the principal parameters, and we point out that the reduced-dimensional principal parameters are captured by the low-order eigenvectors of the diffusion matrix of the network ensemble. We validate our results using two classical network ensembles and one dynamical network sequence, via a cooperative Achlioptas growth process, where an abrupt transition of the structures is captured by our method. Our method provides a potential avenue for uncovering the invisible control parameters of complex systems.
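
The Diffusion Maps step can be sketched as follows, with a generic Gaussian similarity kernel standing in for the resolvent-based comparison (a simplifying assumption): build the row-normalised diffusion matrix over the ensemble and read the hidden parameter off the second eigenvector:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 30

# Networks laid out along one hidden control parameter theta in [0, 1];
# their pairwise similarities here are synthetic stand-ins.
theta = np.sort(rng.random(n))
dist = np.abs(theta[:, None] - theta[None, :])
K = np.exp(-(dist / 0.2) ** 2)          # Gaussian similarity kernel

P = K / K.sum(axis=1, keepdims=True)    # row-normalised diffusion matrix
eigvals, eigvecs = np.linalg.eig(P)
order = np.argsort(-eigvals.real)
# The first eigenvector is trivial (constant, eigenvalue 1); the second
# low-order eigenvector provides the reduced coordinate.
psi1 = eigvecs[:, order[1]].real

# The recovered coordinate varies monotonically with the hidden parameter.
corr = np.corrcoef(psi1, theta)[0, 1]
print(abs(corr))
```

Up to sign and a smooth reparametrisation, the second eigenvector recovers the one-dimensional control parameter, which is the sense in which "reduced principal parameters are captured by low-order eigenvectors".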


2009 ◽  
Vol 2009 ◽  
pp. 1-8 ◽  
Author(s):  
Eimad E. Abusham ◽  
E. K. Wong

A novel method based on local nonlinear mapping is presented in this research. The method, called Locally Linear Discriminate Embedding (LLDE), preserves the local linear structure of a high-dimensional space and obtains a data representation in the (low-dimensional) embedding space that is as compact and accurate as possible before recognition. For computational simplicity and fast processing, a Radial Basis Function (RBF) classifier is integrated with LLDE. The RBF classifier is applied to the low-dimensional embedding with reference to the variance of the data. To validate the proposed method, the CMU-PIE database was used; the experiments conducted in this research demonstrate the efficiency of the proposed method in face recognition compared to linear and nonlinear approaches.
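
The classifier stage can be illustrated with a minimal RBF scheme on a toy 2-D embedding. This is a sketch assuming a simple kernel-weighted vote with the kernel width tied to the data variance, not the paper's exact RBF network, and the LLDE embedding itself is replaced by synthetic 2-D points:

```python
import numpy as np

rng = np.random.default_rng(4)

def rbf_fit_predict(X_train, y_train, X_test):
    # Kernel width chosen with reference to the variance of the embedded data.
    sigma2 = X_train.var()
    K = np.exp(-((X_test[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
               / (2 * sigma2))
    # Kernel-weighted vote over training labels (one unit per training point).
    classes = np.unique(y_train)
    votes = np.stack([K[:, y_train == c].sum(axis=1) for c in classes], axis=1)
    return classes[votes.argmax(axis=1)]

# Toy "embedded faces": two well-separated classes in the embedding space.
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(2, 0.3, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
pred = rbf_fit_predict(X, y, np.array([[0.1, -0.1], [2.1, 1.9]]))
print(pred)  # expected: [0 1]
```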


2021 ◽  
Vol 12 ◽  
Author(s):  
Jianping Zhao ◽  
Na Wang ◽  
Haiyun Wang ◽  
Chunhou Zheng ◽  
Yansen Su

Dimensionality reduction of high-dimensional data is crucial for single-cell RNA sequencing (scRNA-seq) visualization and clustering. One prominent challenge in scRNA-seq studies comes from dropout events, which lead to zero-inflated data. To address this issue, we propose in this paper a scRNA-seq data dimensionality reduction algorithm based on a hierarchical autoencoder, termed SCDRHA. The proposed SCDRHA consists of two core modules: the first is a deep count autoencoder (DCA) used to denoise the data, and the second is a graph autoencoder that projects the data into a low-dimensional space. Experimental results demonstrate that SCDRHA outperforms existing state-of-the-art algorithms at dimensionality reduction and noise reduction on five real scRNA-seq datasets. In addition, SCDRHA can dramatically improve the performance of data visualization and cell clustering.


Author(s):  
Jing Wang ◽  
Jinglin Zhou ◽  
Xiaolu Chen

Abstract

Industrial data variables exhibit obvious high dimensionality and strong nonlinear correlation. Traditional multivariate statistical monitoring methods, such as PCA, PLS, CCA, and FDA, are only suitable for processing high-dimensional data with linear correlations. The kernel mapping method is the most common technique for dealing with nonlinearity: it projects the original data from the low-dimensional space into a high-dimensional space through appropriate kernel functions, so as to achieve linear separability in the new space. However, projecting from the low dimension to the high dimension runs counter to the practical requirement of reducing the dimensionality of the data, so kernel-based methods inevitably increase the complexity of data processing.
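
The idea can be made concrete with the simplest explicit feature map behind a polynomial kernel (a toy stand-in for the kernel functions mentioned above): two concentric rings are not separable by any line in the original 2-D space, but after lifting (x1, x2) to (x1, x2, x1² + x2²) a single linear threshold on the new coordinate separates them:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 40
angles = rng.uniform(0, 2 * np.pi, 2 * n)
radii = (np.concatenate([np.full(n, 1.0), np.full(n, 3.0)])
         + 0.05 * rng.normal(size=2 * n))
X = np.stack([radii * np.cos(angles), radii * np.sin(angles)], axis=1)
y = np.array([0] * n + [1] * n)   # inner ring = 0, outer ring = 1

# Lift to 3-D: append the squared norm as an extra feature.
lifted = np.column_stack([X, (X ** 2).sum(axis=1)])

# In the lifted space a threshold on the last coordinate (between the
# squared radii 1 and 9) is a linear separator.
pred = (lifted[:, 2] > 5.0).astype(int)
print((pred == y).mean())  # expected: 1.0
```

The price is exactly the one noted in the abstract: the working space grows (here from 2-D to 3-D; with RBF kernels, implicitly to infinite dimension), which is at odds with the goal of dimensionality reduction.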


Author(s):  
Michael Elmegaard ◽  
Jan Rübel ◽  
Mizuho Inagaki ◽  
Atsushi Kawamoto ◽  
Jens Starke

Mechanical systems are typically described with finite element models, resulting in high-dimensional dynamical systems. The high dimensionality excludes the application of certain investigation methods, such as numerical continuation and bifurcation analysis, for studying the dynamical behaviour and its parameter dependence. Nevertheless, the dynamical behaviour usually lives on a low-dimensional manifold, although typically no closed equations are available for the macroscopic quantities of interest. Therefore, an equation-free approach is suggested here to analyse and investigate the vibration behaviour of nonlinear rotating machinery. This then allows, as a next step, optimization of the rotor design specifications to reduce unbalance vibrations of a rotor-bearing system with nonlinear factors such as the oil film dynamics. As an example, we provide a simple model of a passenger car turbocharger, for which we investigate how the maximal vibration amplitude of the rotor depends on the viscosity of the oil used in the bearings.


Author(s):  
Jianhua Su ◽  
Rui Li ◽  
Hong Qiao ◽  
Jing Xu ◽  
Qinglin Ai ◽  
...  

Purpose: The purpose of this paper is to develop a dual peg-in-hole insertion strategy. Dual peg-in-hole insertion is a common task in manufacturing. Most previous work develops the insertion strategy in a two- or three-dimensional space, supposing the initial yaw angle is zero and considering only the roll and pitch angles. However, in some cases the yaw angle cannot be ignored, due to the pose uncertainty of the peg on the gripper. Therefore, there is a need to design the insertion strategy in a higher-dimensional configuration space.

Design/methodology/approach: In this paper, the authors handle the insertion problem by converting it into several sub-problems based on the attractive region formed by the constraints. The existence of the attractive region in the high-dimensional configuration space is first discussed. Then, the construction of the high-dimensional attractive region, with its sub-attractive regions in the low-dimensional space, is proposed. The robotic insertion strategy can therefore be designed in the subspace to eliminate some uncertainties between the dual pegs and dual holes.

Findings: Dual peg-in-hole insertion is realized without the use of force sensors. The proposed strategy is also used to demonstrate precision dual peg-in-hole insertion, where the clearance between the dual pegs and dual holes is about 0.02 mm.

Practical implications: The sensor-less insertion strategy does not increase the cost of the assembly system and can also be used in dual peg-in-hole insertion.

Originality/value: Theoretical and experimental analyses of dual peg-in-hole insertion are presented without the use of force sensors.


2004 ◽  
Vol 3 (2) ◽  
pp. 109-122 ◽  
Author(s):  
Alistair Morrison ◽  
Matthew Chalmers

The problem of exploring or visualising data of high dimensionality is central to many tools for information visualisation. By representing a data set in terms of inter-object proximities, multidimensional scaling may be employed to generate a configuration of objects in low-dimensional space in such a way as to preserve high-dimensional relationships. An algorithm is presented here for a heuristic hybrid model for the generation of such configurations. Building on a model introduced in 2002, the algorithm functions by means of sampling, spring-model and interpolation phases. The most computationally complex stage of the original algorithm involved the execution of a series of nearest-neighbour searches. In this paper, we describe how the complexity of this phase has been reduced by treating all high-dimensional relationships as a set of discretised distances to a constant number of randomly selected items: pivots. In improving this computational bottleneck, the algorithmic complexity is reduced from O(N√N) to O(N^(5/4)). As well as documenting this improvement, the paper describes an evaluation with a data set of 108,000 13-dimensional items and a set of 23,141 17-dimensional items. The results illustrate that the reduction in complexity is reflected in significantly improved run times and that no negative impact is made upon the quality of the layout produced.
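
The pivot technique can be sketched as follows; the bucket width, pivot count, and candidate-list size are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(6)
n, d, n_pivots = 1000, 13, 8

X = rng.normal(size=(n, d))
pivots = X[rng.choice(n, n_pivots, replace=False)]

# Discretised distance profile: bucketed distance from every item to each
# of a constant number of randomly selected pivots.
bucket = 0.5
profiles = (np.linalg.norm(X[:, None, :] - pivots[None, :, :], axis=2)
            / bucket).astype(int)

def approx_nearest(i, n_candidates=20):
    # Items with similar profiles are plausible neighbours (by the triangle
    # inequality, a large profile gap implies a large true distance).
    mismatch = np.abs(profiles - profiles[i]).sum(axis=1)
    mismatch[i] = mismatch.max() + 1          # exclude the query itself
    candidates = np.argsort(mismatch)[:n_candidates]
    # Verify only the shortlisted candidates with exact distances.
    exact = np.linalg.norm(X[candidates] - X[i], axis=1)
    return candidates[exact.argmin()]

j = approx_nearest(0)
true_j = np.argsort(np.linalg.norm(X - X[0], axis=1))[1]
print(j, true_j)  # often equal; pivots give an approximation, not a guarantee
```

Because each query compares a constant-length integer profile instead of full 13- or 17-dimensional vectors, the nearest-neighbour phase no longer dominates the layout algorithm's run time.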


Author(s):  
MIAO CHENG ◽  
BIN FANG ◽  
YUAN YAN TANG ◽  
HENGXIN CHEN

Many problems in pattern classification and feature extraction involve dimensionality reduction as a necessary processing step. Traditional manifold learning algorithms, such as ISOMAP, LLE, and Laplacian Eigenmaps, seek the low-dimensional manifold in an unsupervised way, while local discriminant analysis methods identify the underlying supervised submanifold structures. In addition, it is well known that the intraclass null subspace contains the most discriminative information if the original data lie in a high-dimensional space. In this paper, we seek the local null space in accordance with the null space LDA (NLDA) approach and reveal that its computational expense mainly depends on the number of connected edges in the graphs, which may still be unacceptable if a large number of samples is involved. To address this limitation, an improved local null space algorithm is proposed that employs the penalty subspace to approximate the local discriminant subspace. Compared with the traditional approach, the proposed method is more efficient, avoiding the overload problem, at the theoretical cost of a slight loss of discriminant power. A comparative study on classification shows that the performance of the approximative algorithm is quite close to that of the genuine one.
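
The intraclass null subspace mentioned above can be computed directly from the within-class scatter matrix. The following is a minimal NumPy sketch on toy data (not the paper's local NLDA pipeline): with few samples in a high-dimensional space, the within-class scatter is rank-deficient, and its null space keeps each class collapsed to a single point while the classes can still differ, which is why it is so discriminative:

```python
import numpy as np

rng = np.random.default_rng(7)
d, per_class = 20, 3
means = {0: rng.normal(size=d), 1: rng.normal(size=d)}
X, y = [], []
for c, mu in means.items():
    X.append(mu + 0.1 * rng.normal(size=(per_class, d)))
    y += [c] * per_class
X = np.vstack(X); y = np.array(y)

# Within-class scatter Sw; with 3 samples per class in 20-D its rank is
# at most 2 * (3 - 1) = 4, so a large null space exists.
Sw = np.zeros((d, d))
for c in means:
    Xc = X[y == c]
    diff = Xc - Xc.mean(axis=0)
    Sw += diff.T @ diff

eigvals, eigvecs = np.linalg.eigh(Sw)
null_basis = eigvecs[:, eigvals < 1e-10]  # zero within-class scatter directions
print(null_basis.shape[1])  # expected: 16 (= 20 - rank 4)
```

Projecting the data onto `null_basis` maps every sample of a class to the same point, so any remaining spread is purely between-class: the ideal situation for a discriminant criterion, and the property the NLDA-style methods above exploit.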

