Neuromorphic Computation With a Single Magnetic Domain Wall

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Razvan V. Ababei ◽  
Matthew O. A. Ellis ◽  
Ian T. Vidamour ◽  
Dhilan S. Devadasan ◽  
Dan A. Allwood ◽  
...  

Abstract: Machine learning techniques are commonly used to model complex relationships, but implementations on digital hardware are relatively inefficient due to the poor match between conventional computer architectures and the structures of the algorithms they are required to simulate. Neuromorphic devices, and in particular reservoir computing architectures, utilize the inherent properties of physical systems to implement machine learning algorithms and so have the potential to be much more efficient. In this work, we demonstrate that the dynamics of individual domain walls in magnetic nanowires are suitable for implementing the reservoir computing paradigm in hardware. We modelled the dynamics of a domain wall placed between two anti-notches in a nickel nanowire using both a 1D collective coordinates model and micromagnetic simulations. When driven by an oscillating magnetic field, the domain wall exhibits non-linear dynamics within the potential well created by the anti-notches that are analogous to those of the Duffing oscillator. We exploit the domain wall dynamics for reservoir computing by modulating the amplitude of the applied magnetic field to inject time-multiplexed input signals into the reservoir, and show how this allows us to perform machine learning tasks including the classification of (1) sine and square waves; (2) spoken digits; and (3) non-temporal 2D toy data and handwritten digits. Our work lays the foundation for the creation of nanoscale neuromorphic devices in which individual magnetic domain walls are used to perform complex data analysis tasks.
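As an illustration of the time-multiplexed reservoir scheme described above, the following minimal Python sketch uses a Duffing-type oscillator as a stand-in for the field-driven domain-wall dynamics. The random input mask, all parameter values, and the ridge-regression readout are assumptions for demonstration only, not the authors' implementation.

# Hedged sketch: time-multiplexed reservoir computing with a Duffing-type
# oscillator standing in for the driven domain-wall dynamics (illustrative
# parameters only).
import numpy as np

def duffing_step(x, v, drive, dt=0.05, delta=0.3, alpha=-1.0, beta=1.0):
    # One explicit-Euler step of  x'' + delta*x' + alpha*x + beta*x^3 = drive
    a = drive - delta * v - alpha * x - beta * x**3
    return x + dt * v, v + dt * a

def reservoir_states(u, n_virtual=20, gain=0.5, seed=0):
    # Time multiplexing: each input sample u[k] modulates the drive over
    # n_virtual sub-steps via a fixed random mask; the oscillator position at
    # each sub-step is recorded as one virtual node.
    rng = np.random.default_rng(seed)
    mask = rng.uniform(-1.0, 1.0, n_virtual)
    x = v = 0.0
    states = np.zeros((len(u), n_virtual))
    for k, uk in enumerate(u):
        for j in range(n_virtual):
            x, v = duffing_step(x, v, gain * uk * mask[j])
            states[k, j] = x
    return states

# Linear readout trained by ridge regression on a sine-vs-square task.
t = np.arange(0.0, 40.0, 0.1)
u = np.concatenate([np.sin(t), np.sign(np.sin(t))])
y = np.concatenate([np.zeros(t.size), np.ones(t.size)])
X = reservoir_states(u)
W = np.linalg.solve(X.T @ X + 1e-3 * np.eye(X.shape[1]), X.T @ y)
print("training accuracy:", ((X @ W > 0.5) == y).mean())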


2018 ◽  
Vol 7 (2.8) ◽  
pp. 684 ◽  
Author(s):  
V V. Ramalingam ◽  
Ayantan Dandapath ◽  
M Karthik Raja

Heart-related diseases, or Cardiovascular Diseases (CVDs), have been the main cause of a huge number of deaths worldwide over the last few decades and have emerged as the most life-threatening diseases, not only in India but in the whole world. There is therefore a need for a reliable, accurate and feasible system to diagnose such diseases in time for proper treatment. Machine learning algorithms and techniques have been applied to various medical datasets to automate the analysis of large and complex data. Many researchers have, in recent times, been using several machine learning techniques to help the health care industry and its professionals in the diagnosis of heart-related diseases. This paper presents a survey of various models based on such algorithms and techniques and analyses their performance. Models based on supervised learning algorithms such as Support Vector Machines (SVM), K-Nearest Neighbour (KNN), Naïve Bayes, Decision Trees (DT), Random Forest (RF) and ensemble models are found to be very popular among researchers.
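A minimal sketch of the kind of classifier comparison surveyed above is given below, using scikit-learn and a synthetic stand-in dataset; the real studies use clinical records (e.g. UCI heart-disease data), so the dataset and settings here are assumptions for illustration.

# Illustrative comparison of the supervised classifiers named in the survey,
# on a synthetic stand-in for a cardiovascular dataset.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=13, random_state=0)
models = {
    "SVM": SVC(),
    "KNN": KNeighborsClassifier(),
    "Naive Bayes": GaussianNB(),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "Random Forest": RandomForestClassifier(random_state=0),
}
for name, model in models.items():
    # 5-fold cross-validated accuracy, the usual headline metric in such surveys
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: {scores.mean():.3f}")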


Author(s):  
Dr. E. Baraneetharan

Machine learning is capable of providing real-time solutions that maximize the utilization of resources in the network, thereby increasing the lifetime of the network. It is able to process data automatically without being externally programmed, making the process easier, more efficient, cost-effective, and reliable. Machine learning algorithms can handle complex data quickly and accurately, and machine learning is used to enhance the capabilities of the Wireless Sensor Network (WSN) environment. A WSN is a combination of several networks and is decentralized and distributed in nature; it consists of sensor nodes and sink nodes that have self-organizing and self-healing properties. WSNs are used in applications such as biodiversity and ecosystem protection, surveillance, climate change tracking, and military applications. Nowadays, WSNs are developing rapidly thanks to advances in electronics and wireless communication technologies, yet they suffer from drawbacks such as low computational capacity, small memory, limited energy resources and physical vulnerability, which call for security measures in which privacy plays a key role. WSNs are used to monitor dynamic environments, and to adapt to such situations sensor networks need machine learning techniques to avoid unnecessary redesign. This survey of machine learning techniques for WSNs covers a wide range of applications in which security is given top priority. To secure data from attackers, a WSN system should be able to delete instructions if hackers or attackers attempt to steal data.


Diagnostics ◽  
2020 ◽  
Vol 10 (11) ◽  
pp. 958
Author(s):  
Alex Novaes Santana ◽  
Charles Novaes de Santana ◽  
Pedro Montoya

In the last decade, machine learning has been widely used in different fields, especially because of its capacity to work with complex data. With the support of machine learning techniques, different studies have used data-driven approaches to better understand syndromes such as mild cognitive impairment, Alzheimer’s disease, schizophrenia, and chronic pain. Chronic pain is a complex disease that can recurrently be misdiagnosed due to its comorbidities with other syndromes with which it shares symptoms. Within that context, several studies have suggested different machine learning algorithms to classify or predict chronic pain conditions. Those algorithms were fed with a diversity of data types, from self-report data based on questionnaires to the most advanced brain imaging techniques. In this study, we assessed the sensitivity of different algorithms and datasets in classifying chronic pain syndromes. Together with this assessment, we highlighted important methodological steps that should be taken into account when an experiment using machine learning is conducted. The best results were obtained by ensemble-based algorithms and by the dataset containing the greatest diversity of information, resulting in area under the receiver operating characteristic curve (AUC) values of around 0.85. In addition, the performance of the algorithms is strongly related to their hyper-parameters, so a good strategy for hyper-parameter optimization should be used to extract the most from the algorithm. These findings support the notion that machine learning can be a powerful tool to better understand chronic pain conditions.
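The emphasis above on ensemble models, AUC, and hyper-parameter optimization can be illustrated with the following sketch; it is not the study's pipeline, and the synthetic dataset and parameter grid are assumptions for demonstration.

# Tuning an ensemble classifier's hyper-parameters with cross-validated AUC,
# the evaluation metric reported above, on a synthetic stand-in dataset.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=400, n_features=30, n_informative=10,
                           random_state=1)
grid = GridSearchCV(
    GradientBoostingClassifier(random_state=1),
    param_grid={"n_estimators": [100, 300],
                "learning_rate": [0.05, 0.1],
                "max_depth": [2, 3]},
    scoring="roc_auc",   # area under the ROC curve
    cv=5,
)
grid.fit(X, y)
print("best AUC:", round(grid.best_score_, 3), "with", grid.best_params_)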


2021 ◽  
Author(s):  
Randa Natras ◽  
Michael Schmidt

The accuracy and reliability of Global Navigation Satellite System (GNSS) applications are affected by the state of the Earth's ionosphere, especially when using single-frequency observations, which are employed mostly in mass-market GNSS receivers. In addition, space weather can cause strong sudden disturbances in the ionosphere, representing a major risk for GNSS performance and reliability. Accurate corrections of ionospheric effects and early warning information in the presence of space weather are therefore crucial for GNSS applications. This correction information can be obtained by employing a model that describes the complex relation of space weather processes with the non-linear spatial and temporal variability of the Vertical Total Electron Content (VTEC) within the ionosphere and includes a forecast component considering space weather events to provide an early warning system. Developing such a model is a challenging but important task and of high interest for the GNSS community.

To model the impact of space weather, a complex chain of physical dynamical processes between the Sun, the interplanetary magnetic field, the Earth's magnetic field and the ionosphere needs to be taken into account. Machine learning techniques are suitable for finding patterns and relationships in historical data to solve problems that are too complex for a traditional approach requiring an extensive set of rules (equations), or for which no acceptable solution is available yet.

The main objective of this study is to develop a model for forecasting the ionospheric VTEC, taking into account physical processes and utilizing state-of-the-art machine learning techniques to learn complex non-linear relationships from the data. In this work, supervised learning is applied to forecast VTEC. This means that the model is provided with a set of (input) variables that have some influence on the VTEC forecast (output). To be more specific, data on solar activity, solar wind, the interplanetary and geomagnetic field and other information connected to VTEC variability are used as input to predict VTEC values in the future. Different machine learning algorithms are applied, such as decision tree regression, random forest regression and gradient boosting. Decision trees are the simplest and easiest-to-interpret machine learning algorithms, but the forecasted VTEC lacks smoothness. On the other hand, random forest and gradient boosting use a combination of multiple regression trees, which leads to improvements in prediction accuracy and smoothness. However, the results show that the overall performance of the algorithms, measured by the root mean square error, does not differ much between methods and improves when the data are well prepared, i.e. cleaned and transformed to remove trends. Preliminary results of this study will be presented, including the methodology, goals, challenges and perspectives of developing the machine learning model.
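The supervised set-up described above (space-weather drivers in, future VTEC out, tree-based regressors scored by root mean square error) can be sketched as follows; the feature names and the synthetic toy data are placeholders, not the study's data.

# Hedged sketch: tree-based regression of VTEC from space-weather drivers,
# evaluated by RMSE. Features and data are illustrative stand-ins only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(42)
n = 2000
# Stand-ins for solar flux, solar-wind speed, and a geomagnetic index.
f107 = rng.uniform(70, 250, n)
vsw = rng.uniform(300, 800, n)
kp = rng.uniform(0, 9, n)
vtec = 0.1 * f107 + 0.01 * vsw + 2.0 * kp + rng.normal(0, 2, n)  # toy target (TECU)

X = np.column_stack([f107, vsw, kp])
X_tr, X_te, y_tr, y_te = train_test_split(X, vtec, random_state=0)
for model in (RandomForestRegressor(random_state=0),
              GradientBoostingRegressor(random_state=0)):
    model.fit(X_tr, y_tr)
    rmse = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
    print(type(model).__name__, "RMSE:", round(rmse, 2))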


MRS Advances ◽  
2016 ◽  
Vol 1 (3) ◽  
pp. 241-246 ◽  
Author(s):  
Toshimasa Suzuki ◽  
Koichi Kawahara ◽  
Masaya Suzuki ◽  
Kenta Takagi ◽  
Kimihiro Ozaki

ABSTRACT: We conducted in-situ observations of magnetic domain structure changes in Nd-Fe-B magnets at high temperature by transmission electron microscopy (TEM) / Lorentz microscopy while applying an external magnetic field. Prior to observation, a thin foil was magnetized to near saturation by an external magnetic field of 2.0 T, then the magnetic domain structures were observed in the Fresnel mode with in-situ heating. At 225°C, reverse magnetic domains were found to form in the thin-foil sample without an applied external magnetic field. When we applied a magnetic field in the same direction as the pre-magnetization at 225°C, one magnetic domain wall was pinned by a grain boundary while the other magnetic domain wall moved. As a result, the reverse magnetic domain shrank and then annihilated. When we removed the applied magnetic field, the reverse magnetic domain formed again at almost the same location. On the other hand, when we applied a magnetic field to the foil in the opposite direction, the reverse domain started to grow, i.e., the magnetic domain walls started to move. These observations of the shrinkage or growth of the reverse domain showed that the pinning effect of the grain boundary against domain wall motion differs depending on the direction of the applied magnetic field. Moreover, domain walls were observed to be pinned by grain boundaries at elevated temperature, so the coercivity of Nd-Fe-B magnets would arise from a pinning mechanism.


Author(s):  
J.N. Chapman ◽  
P.E. Batson ◽  
E.M. Waddell ◽  
R.P. Ferrier

By far the most commonly used mode of Lorentz microscopy in the examination of ferromagnetic thin films is the Fresnel or defocus mode. Use of this mode in the conventional transmission electron microscope (CTEM) is straightforward and immediately reveals the existence of all domain walls present. However, if such quantitative information as the domain wall profile is required, the technique suffers from several disadvantages. These include the inability to directly observe fine image detail on the viewing screen because of the stringent illumination coherence requirements, the difficulty of accurately translating part of a photographic plate into quantitative electron intensity data, and, perhaps most severe, the difficulty of interpreting this data. One solution to the first-named problem is to use a CTEM equipped with a field emission gun (FEG) (Inoue, Harada and Yamamoto 1977) whilst a second is to use the equivalent mode of image formation in a scanning transmission electron microscope (STEM) (Chapman, Batson, Waddell, Ferrier and Craven 1977), a technique which largely overcomes the second-named problem as well.


2020 ◽  
Vol 12 (2) ◽  
pp. 84-99
Author(s):  
Li-Pang Chen

In this paper, we investigate the analysis and prediction of time-dependent data. We focus our attention on four different stocks selected from the Yahoo Finance historical database. To build models and predict future stock prices, we consider three different machine learning techniques: Long Short-Term Memory (LSTM), Convolutional Neural Networks (CNN) and Support Vector Regression (SVR). By treating close price, open price, daily low, daily high, adjusted close price, and volume of trades as predictors in the machine learning methods, it can be shown that the prediction accuracy is improved.
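As a small illustration of one of the three methods compared above (SVR), the sketch below predicts the next close price from a sliding window of past prices and volumes; the synthetic series stands in for the Yahoo Finance data, and the window length and SVR settings are assumptions.

# Illustrative SVR next-day price prediction on a synthetic series.
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
close = np.cumsum(rng.normal(0, 1, 600)) + 100.0   # synthetic close prices
volume = rng.uniform(1e5, 1e6, 600)                # synthetic trade volumes

# Predict the next day's close from a 5-day window of close prices and volumes.
window = 5
X = np.array([np.concatenate([close[i:i + window], volume[i:i + window]])
              for i in range(len(close) - window)])
y = close[window:]
split = 500
model = SVR(kernel="rbf", C=100.0)
model.fit(X[:split], y[:split])
rmse = mean_squared_error(y[split:], model.predict(X[split:])) ** 0.5
print("test RMSE:", round(rmse, 2))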


Author(s):  
Anantvir Singh Romana

Accurate diagnostic detection of disease in a patient is critical, as it may alter the subsequent treatment and increase the chances of survival. Machine learning techniques have been instrumental in disease detection and are currently being used in various classification problems due to their accurate prediction performance. Different techniques may provide different accuracies, and it is therefore imperative to use the most suitable method to obtain the best results. This research provides a comparative analysis of Support Vector Machine, Naïve Bayes, J48 Decision Tree and neural network classifiers on breast cancer and diabetes datasets.
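A minimal version of such a comparison is sketched below on scikit-learn's built-in breast cancer dataset; note that DecisionTreeClassifier is used as a stand-in for Weka's J48 (C4.5) and MLPClassifier for the neural network, and all settings are illustrative rather than those of the study.

# Comparing the four classifier families named above on the breast cancer data.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier

X, y = load_breast_cancer(return_X_y=True)
models = {
    "SVM": make_pipeline(StandardScaler(), SVC()),
    "Naive Bayes": GaussianNB(),
    "Decision Tree (J48 analogue)": DecisionTreeClassifier(random_state=0),
    "Neural network": make_pipeline(StandardScaler(),
                                    MLPClassifier(max_iter=1000, random_state=0)),
}
for name, model in models.items():
    # 5-fold cross-validated accuracy for each classifier
    print(name, round(cross_val_score(model, X, y, cv=5).mean(), 3))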

