Harnessing deep neural networks to solve inverse problems in quantum dynamics: machine-learned predictions of time-dependent optimal control fields

2020
Vol 22 (40)
pp. 22889-22899
Author(s):
Xian Wang
Anshuman Kumar
Christian R. Shelton
Bryan M. Wong

Deep neural networks are a cost-effective machine-learning approach for solving the inverse problem of constructing electromagnetic fields that enable desired transitions in quantum systems.

2020
Author(s):
Xian Wang
Anshuman Kumar
Christian Shelton
Bryan Wong

Inverse problems continue to garner immense interest in the physical sciences, particularly in the context of controlling desired phenomena in non-equilibrium systems. In this work, we utilize a series of deep neural networks for predicting time-dependent optimal control fields, E(t), that enable desired electronic transitions in reduced-dimensional quantum dynamical systems. To solve this inverse problem, we investigated two independent machine learning approaches: (1) a feedforward neural network for predicting the frequency and amplitude content of the power spectrum in the frequency domain (i.e., the Fourier transform of E(t)), and (2) a cross-correlation neural network approach for directly predicting E(t) in the time domain. Both of these machine learning methods give complementary approaches for probing the underlying quantum dynamics and also exhibit impressive performance in accurately predicting both the frequency and strength of the optimal control field. We provide detailed architectures and hyperparameters for these deep neural networks as well as performance metrics for each of our machine-learned models. From these results, we show that machine learning approaches, particularly deep neural networks, can be employed as a cost-effective statistical approach for designing electromagnetic fields to enable desired transitions in these quantum dynamical systems.
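
As a concrete illustration of the first (frequency-domain) approach, the sketch below shows a small feedforward network that maps a fixed-length descriptor of the desired transition to a discretized power spectrum of E(t). The layer sizes, input features, loss, and training loop are illustrative assumptions, not the published architecture or hyperparameters.

```python
# Minimal sketch (not the authors' code): a feedforward network that maps a
# descriptor of the quantum system / desired transition to the discretized
# power spectrum (per-frequency amplitudes) of the optimal control field E(t).
import torch
import torch.nn as nn

class SpectrumNet(nn.Module):
    def __init__(self, n_features: int = 8, n_freq_bins: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, n_freq_bins),  # predicted spectral amplitudes
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# Toy training loop on synthetic data, only to show the intended I/O shapes.
model = SpectrumNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
x = torch.randn(64, 8)    # hypothetical system/transition descriptors
y = torch.rand(64, 256)   # hypothetical target power spectra of E(t)
for _ in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
```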


2018
Author(s):
Gary H. Chang
David T. Felson
Shangran Qiu
Terence D. Capellini
Vijaya B. Kolachalama

Background and objective: It remains difficult to characterize pain in knee joints with osteoarthritis solely by radiographic findings. We sought to understand how advanced machine learning methods such as deep neural networks can be used to analyze raw MRI scans and predict bilateral knee pain, independent of other risk factors.
Methods: We developed a deep learning framework to associate information from MRI slices taken from the left and right knees of subjects from the Osteoarthritis Initiative with bilateral knee pain. Model training was performed by first extracting features from two-dimensional (2D) sagittal intermediate-weighted turbo spin echo slices. The extracted features from all the 2D slices were subsequently combined in a fused deep neural network and directly associated with the output of interest as a binary classification problem.
Results: The deep learning model predicted bilateral knee pain on test data with 70.1% mean accuracy, 51.3% mean sensitivity, and 81.6% mean specificity. Systematic analysis of the predictions on the test data revealed that model performance was consistent across subjects of different Kellgren-Lawrence grades.
Conclusion: The study demonstrates a proof of principle that a machine learning approach can be applied to associate MR images with bilateral knee pain.
Significance and innovation: Knee pain is typically considered an early indicator of osteoarthritis (OA) risk. Emerging evidence suggests that MRI changes are linked to pre-clinical OA, underscoring the need for image-based models to predict knee pain. We leveraged a state-of-the-art machine learning approach to associate raw MR images with bilateral knee pain, independent of other risk factors.
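
To make the fused architecture concrete, the sketch below shows one plausible arrangement: a shared 2D convolutional encoder applied to every sagittal slice, with the per-slice features concatenated and fed to a binary classification head. The encoder depth, feature dimension, slice count, and image size are illustrative assumptions, not the study's actual model.

```python
# Minimal sketch (not the study's code) of a fused slice-feature classifier.
import torch
import torch.nn as nn

class FusedSliceClassifier(nn.Module):
    """Encode each 2D MRI slice with a shared CNN, then fuse the per-slice
    features to predict bilateral knee pain as a binary outcome."""

    def __init__(self, n_slices: int = 36, feat_dim: int = 64):
        super().__init__()
        self.slice_encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim), nn.ReLU(),
        )
        self.fusion_head = nn.Sequential(
            nn.Linear(n_slices * feat_dim, 128), nn.ReLU(),
            nn.Linear(128, 1),  # logit for pain vs. no pain
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_slices, H, W) sagittal intermediate-weighted slices
        b, s, h, w = x.shape
        feats = self.slice_encoder(x.reshape(b * s, 1, h, w)).reshape(b, -1)
        return self.fusion_head(feats)

# Toy forward pass with synthetic data to show the expected shapes.
logits = FusedSliceClassifier()(torch.randn(2, 36, 128, 128))  # -> (2, 1)
```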


2021
Vol 0 (0)
Author(s):
Idris Kharroubi
Thomas Lim
Xavier Warin

We study the approximation of backward stochastic differential equations (BSDEs for short) with a constraint on the gains process. We first discretize the constraint by applying a so-called facelift operator at the times of a grid. We show that this discretely constrained BSDE converges to the continuously constrained one as the mesh of the grid goes to zero. We then focus on the approximation of the discretely constrained BSDE. For that we adopt a machine learning approach. We show that the facelift can be approximated by an optimization problem over a class of neural networks under constraints on the neural network and its derivative. We then derive an algorithm converging to the discretely constrained BSDE as the number of neurons goes to infinity. We conclude with numerical experiments.
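
For orientation, a commonly used generic form of a BSDE whose gains process is constrained, together with the facelift operator that appears in this setting, is sketched below; the notation is illustrative and the paper's precise assumptions (constraint set, driver, and discretization) may differ.

```latex
% Generic BSDE with gains process Z constrained to a closed convex set C,
% and the associated facelift operator (illustrative notation):
\begin{aligned}
Y_t &= g(X_T) + \int_t^T f(s, X_s, Y_s, Z_s)\,\mathrm{d}s
      - \int_t^T Z_s\,\mathrm{d}W_s + K_T - K_t,
      \qquad Z_s \in C,\\
F_C[\varphi](x) &= \sup_{u}\,\bigl[\varphi(x + u) - \delta_C(u)\bigr],
      \qquad \delta_C(u) = \sup_{z \in C} \langle u, z \rangle,
\end{aligned}
```

Here K is a nondecreasing process enforcing the constraint (the minimal such solution is sought), and the facelift F_C is the operator applied at the grid times of the time discretization described in the abstract.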


Author(s):  
Eisha Akanksha

An abnormal level of stress is a key indicator of adverse impact on heart health, and stress levels are closely related to heart rate. A review of the existing literature shows that considerable work has been carried out on monitoring heart rate with internet-of-things (IoT) systems. However, existing systems do not offer an instantaneous solution in which feedback is delivered to the user in real time through a wearable to help control the stress condition. Therefore, this paper introduces a novel framework in which sampled heart rates of patients are captured by IoT devices. The aggregated data are forwarded to a cloud analytics system that uses correlation to extract the appropriate message. After applying a machine learning approach, the system further extracts the best outcome and forwards the contextual data to the user. Analytical modelling shows that the proposed system offers better accuracy and reduced processing time compared with other machine learning approaches, making it a cost-effective solution for IoT systems in a medical case study.
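
As an illustration of the correlation-based analytics step described above, the sketch below compares a window of heart-rate samples against a per-user resting baseline and maps the result to a coarse stress message that would be pushed back to the wearable. The thresholds, window length, and labels are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch (assumptions, not the paper's implementation) of the cloud
# analytics step: correlate a window of heart-rate samples with a resting
# baseline and return a coarse stress message for the user's wearable.
import numpy as np

def stress_message(window: np.ndarray, baseline: np.ndarray,
                   corr_threshold: float = 0.6,
                   hr_threshold: float = 100.0) -> str:
    """Return a coarse stress label for one window of heart-rate samples (bpm)."""
    # Pearson correlation between the current window and the resting baseline.
    corr = np.corrcoef(window, baseline)[0, 1]
    if window.mean() > hr_threshold and corr < corr_threshold:
        return "high stress: prompt a breathing exercise"
    if corr < corr_threshold:
        return "elevated stress: continue monitoring"
    return "normal"

# Toy usage with synthetic one-minute windows (60 samples each).
baseline = 70 + 3 * np.random.randn(60)   # resting heart-rate profile
window = 105 + 8 * np.random.randn(60)    # current window from the IoT device
print(stress_message(window, baseline))
```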

