Miniaturizing neural networks for charge state autotuning in quantum dots

2021, Vol 3 (1), pp. 015001
Author(s): Stefanie Czischek, Victor Yon, Marc-Antoine Genest, Marc-Antoine Roux, Sophie Rochette, ...

Abstract: A key challenge in scaling quantum computers is the calibration and control of multiple qubits. In solid-state quantum dots (QDs), the gate voltages required to stabilize quantized charges are unique for each individual qubit, resulting in a high-dimensional control parameter space that must be tuned automatically. Machine learning techniques are capable of processing high-dimensional data—provided that an appropriate training set is available—and have been successfully used for autotuning in the past. In this paper, we develop extremely small feed-forward neural networks that can be used to detect charge-state transitions in QD stability diagrams. We demonstrate that these neural networks can be trained on synthetic data produced by computer simulations, and robustly transferred to the task of tuning an experimental device into a desired charge state. The neural networks required for this task are sufficiently small as to enable an implementation in existing memristor crossbar arrays in the near future. This opens up the possibility of miniaturizing powerful control elements on low-power hardware, a significant step towards on-chip autotuning in future QD computers.
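Below is a minimal PyTorch sketch of the kind of tiny feed-forward detector the abstract describes: a single narrow hidden layer classifying small stability-diagram patches as containing a charge transition or not, trained purely on synthetic data. The patch size, layer widths, and the toy synthetic generator are assumptions for illustration, not the authors' implementation.

```python
# A minimal sketch (not the authors' code): a deliberately small MLP,
# sized so it could plausibly map onto a memristor crossbar array,
# trained only on simulated stability-diagram patches.
import torch
import torch.nn as nn

class TinyTransitionDetector(nn.Module):
    def __init__(self, patch_pixels: int = 18 * 18):  # patch size assumed
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(patch_pixels, 16),  # single narrow hidden layer
            nn.ReLU(),
            nn.Linear(16, 2),             # logits: [no transition, transition]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x.flatten(start_dim=1))

def synthetic_patch(has_transition: bool, n: int = 18) -> torch.Tensor:
    """Toy stand-in for a simulated stability-diagram patch: noisy
    background, plus a bright diagonal line when a charge transition
    crosses the patch."""
    patch = 0.1 * torch.rand(n, n)
    if has_transition:
        idx = torch.arange(n)
        cols = (idx + torch.randint(-2, 3, (1,))).clamp(0, n - 1)
        patch[idx, cols] += 1.0
    return patch

# Training on purely synthetic data, mirroring the sim-to-real strategy
# described in the abstract.
model = TinyTransitionDetector()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for step in range(2000):
    labels = torch.randint(0, 2, (32,))
    batch = torch.stack([synthetic_patch(bool(l)) for l in labels])
    loss = loss_fn(model(batch), labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
```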

2017, Vol 66, pp. 31-40
Author(s): Raqibul Hasan, Tarek M. Taha, Chris Yakopcic

Author(s): Hoseok Choi, Seokbeen Lim, Kyeongran Min, Kyoung-ha Ahn, Kyoung-Min Lee, ...

Abstract: Objective: With developments in the field of neural networks, Explainable AI (XAI) is being studied to ensure that artificial intelligence models can be explained. There have been attempts to apply neural networks to neuroscientific studies, explaining neurophysiological information while achieving high machine learning performance. However, most of those studies have simply visualized the features extracted by XAI and lack an active neuroscientific interpretation of those features. In this study, we actively interpret the high-dimensional learning features contained in the neurophysiological information extracted by XAI, comparing them with previously reported neuroscientific results. Approach: We designed a deep neural network classifier using 3D information (3D DNN) and a 3D class activation map (3D CAM) to visualize high-dimensional classification features. We used these tools to classify monkey electrocorticogram (ECoG) data obtained from a unimanual and bimanual movement experiment. Main results: The 3D DNN showed better classification accuracy than other machine learning techniques, such as a 2D DNN. Unexpectedly, the activation weight in the 3D CAM analysis was high in the ipsilateral motor and somatosensory cortex regions, whereas the gamma-band power was activated in the contralateral areas during unimanual movement, which suggests that the brain signal acquired from the motor cortex contains information about both contralateral and ipsilateral movement. Moreover, the hand-movement classification system used critical temporal information at movement onset and offset when classifying bimanual movements. Significance: To our knowledge, this is the first study to use high-dimensional neurophysiological information (spatial, spectral, and temporal) with a deep learning method, reconstruct those features, and explain how the neural network works. We expect that our methods can be widely applied in neuroscience and electrophysiology research, from the point of view of the explainability of XAI as well as its performance.
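As a rough illustration of the 3D DNN plus 3D CAM pipeline, here is a hedged PyTorch sketch: a small 3D convolutional classifier ending in global average pooling, so the classic CAM construction (class-weighted sum of the final feature maps) applies directly in three dimensions. The input shape, channel counts, and number of classes are assumptions; the authors' actual architecture is not reproduced here.

```python
# A minimal sketch of a 3D-CNN classifier with a 3D class activation map.
# Input is an assumed spatial x spectral x temporal tensor derived from ECoG.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CAM3DNet(nn.Module):
    def __init__(self, n_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
        )
        # Global average pooling + one linear layer is what makes the
        # classic CAM construction applicable.
        self.classifier = nn.Linear(16, n_classes)

    def forward(self, x):
        fmap = self.features(x)             # (B, 16, D, H, W)
        pooled = fmap.mean(dim=(2, 3, 4))   # global average pool
        return self.classifier(pooled), fmap

def class_activation_map(model, x, target_class):
    """3D CAM: weight each final feature map by the classifier weight
    of the target class and sum over channels."""
    logits, fmap = model(x)
    w = model.classifier.weight[target_class]      # (16,)
    cam = torch.einsum('c,bcdhw->bdhw', w, fmap)   # (B, D, H, W)
    cam = F.relu(cam)
    return cam / (cam.amax() + 1e-8)               # normalize to [0, 1]

# Usage on a dummy volume (dimensions are placeholders):
model = CAM3DNet()
x = torch.randn(1, 1, 8, 16, 32)   # electrodes x frequency bands x time bins
cam = class_activation_map(model, x, target_class=0)
```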


2018
Author(s): Bin Xie, YanHua Cheng, Xingjian Yu, Bofeng Shang, Kai Wang, ...

Nano Letters, 2020, Vol 20 (9), pp. 6357-6363
Author(s): Łukasz Dusanowski, Dominik Köck, Eunso Shin, Soon-Hong Kwon, Christian Schneider, ...

2021, pp. 1-12
Author(s): Jian Zheng, Jianfeng Wang, Yanping Chen, Shuping Chen, Jingjin Chen, ...

Neural networks can approximate data because they comprise many compact non-linear layers. In high-dimensional space, however, the curse of dimensionality makes the data distribution sparse, so the data alone cannot provide sufficient information; approximating data in high-dimensional space therefore becomes even harder for neural networks. To address this issue, two deviations are derived from the Lipschitz condition: the deviation of neural networks trained using high-dimensional functions, and the deviation of high-dimensional functions approximating data. The purpose is to improve the ability of neural networks to approximate high-dimensional space. Experimental results show that neural networks trained using high-dimensional functions outperform those trained using data at approximating data in high-dimensional space. We find that neural networks trained using high-dimensional functions are more suitable for high-dimensional space than those trained using data, so there is no need to retain large amounts of data for neural network training. Our findings suggest that in high-dimensional space, tuning the hidden layers of a neural network has little positive effect on the precision of data approximation.
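The following Python sketch (my construction, not the authors' code) contrasts the two regimes compared above: fitting a fixed, sparse dataset in a 100-dimensional space versus drawing fresh samples of a known Lipschitz target function at every training step. The target function, dimension, and network width are illustrative assumptions.

```python
# Sketch: data-trained vs. function-trained networks in high dimension.
import torch
import torch.nn as nn

DIM = 100                                # high-dimensional input space

def target(x: torch.Tensor) -> torch.Tensor:
    # A 1-Lipschitz target: gradient norm is |cos(.)| <= 1.
    return torch.sin(x.sum(dim=1, keepdim=True) / DIM**0.5)

def make_mlp() -> nn.Module:
    return nn.Sequential(nn.Linear(DIM, 64), nn.ReLU(), nn.Linear(64, 1))

def train(model, sample_fn, steps=3000):
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(steps):
        x, y = sample_fn()
        loss = nn.functional.mse_loss(model(x), y)
        opt.zero_grad(); loss.backward(); opt.step()
    return model

# Regime 1: a fixed, sparse dataset (the curse of dimensionality bites).
x_fixed = torch.rand(512, DIM)
y_fixed = target(x_fixed)
data_trained = train(make_mlp(), lambda: (x_fixed, y_fixed))

# Regime 2: the known function is queried for fresh samples every step.
def fresh():
    x = torch.rand(256, DIM)
    return x, target(x)
func_trained = train(make_mlp(), fresh)

# Compare generalization on held-out points.
x_test = torch.rand(4096, DIM)
for name, m in [("data-trained", data_trained),
                ("function-trained", func_trained)]:
    err = nn.functional.mse_loss(m(x_test), target(x_test))
    print(f"{name}: test MSE = {err.item():.5f}")
```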


IEEE Access, 2021, pp. 1-1
Author(s): Faisal Shehzad, Muhammad Rashid, Mohammed H Sinky, Saud S Alotaibi, Muhammad Yousuf Irfan Zia

2021, Vol 11 (4), pp. 1581
Author(s): Jimy Oblitas, Jezreel Mejia, Miguel De-la-Torre, Himer Avila-George, Lucía Seguí Gil, ...

Although knowledge of the microstructure of food of vegetal origin helps us understand the behavior of food materials, the variability of the microstructural elements complicates this analysis. In this regard, building learning models that represent the actual microstructures of the tissue is important for extracting relevant information and advancing the comprehension of such behavior. Consequently, the objective of this research is to compare two machine learning techniques, Convolutional Neural Networks (CNN) and Radial Basis Neural Networks (RBNN), when used to enhance the microstructural analysis of vegetal tissue. Two main contributions can be highlighted from this research. First, a method is proposed to automatically analyze the microstructural elements of vegetal tissue; second, a comparison was conducted to select a classifier that discriminates between tissue structures. For the comparison, a database of images of microstructural elements was obtained from pumpkin (Cucurbita pepo L.) micrographs. Two classifiers were implemented using CNN and RBNN, and statistical performance metrics were computed under a 5-fold cross-validation scheme. This process was repeated one hundred times with a random selection of images in each repetition. The comparison showed that the CNN-based classifiers produced a better fit, obtaining an average F1-score of 89.42%, compared with 83.83% for RBNN. In this study, the performance of CNN-based classifiers was significantly higher than that of RBNN-based classifiers in discriminating the microstructural elements of vegetable foods.
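A hedged sketch of the evaluation protocol, using scikit-learn: two classifiers compared by macro-F1 under 5-fold cross-validation, repeated with a fresh random split each time. Since scikit-learn ships neither the authors' CNN nor an RBNN, an MLP and an RBF-kernel SVM stand in for them here, and the feature matrix is a random placeholder for the micrograph data.

```python
# Sketch of repeated 5-fold CV scored by macro-F1 (not the authors' code).
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 64))          # placeholder micrograph features
y = rng.integers(0, 3, size=600)        # placeholder tissue-class labels

scores = {"mlp (CNN stand-in)": [], "rbf-svm (RBNN stand-in)": []}
for rep in range(10):                   # the paper repeats 100 times
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=rep)
    for name, clf in [("mlp (CNN stand-in)", MLPClassifier(max_iter=500)),
                      ("rbf-svm (RBNN stand-in)", SVC(kernel="rbf"))]:
        f1 = cross_val_score(clf, X, y, cv=cv, scoring="f1_macro")
        scores[name].append(f1.mean())

for name, vals in scores.items():
    print(f"{name}: mean macro-F1 over repetitions = {np.mean(vals):.4f}")
```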


2021, Vol 47 (1)
Author(s): Fabian Laakmann, Philipp Petersen

Abstract: We demonstrate that deep neural networks with the ReLU activation function can efficiently approximate the solutions of various types of parametric linear transport equations. For non-smooth initial conditions, the solutions of these PDEs are high-dimensional and non-smooth; their approximation therefore suffers from the curse of dimensionality. We demonstrate that, through their inherent compositionality, deep neural networks can resolve the characteristic flow underlying the transport equations and thereby achieve approximation rates independent of the parameter dimension.
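For concreteness, here is a standard form of the parametric linear transport equation and its method-of-characteristics solution; this is one representative variant, while the paper treats several.

```latex
% One representative parametric linear transport problem, for a parameter
% vector \eta and parameter-dependent velocity a(\eta):
\[
  \partial_t u(x,t;\eta) + a(\eta)\cdot\nabla_x u(x,t;\eta) = 0,
  \qquad u(x,0;\eta) = u_0(x;\eta),
\]
% whose method-of-characteristics solution is the composition
\[
  u(x,t;\eta) = u_0\bigl(x - a(\eta)\,t;\ \eta\bigr),
\]
% i.e. the initial condition composed with an affine characteristic flow --
% precisely the compositional structure that a deep ReLU network can
% emulate at rates independent of the dimension of \eta.
```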

