Development of a scientific base for creating high-precision microsystem mechanisms with elements of artificial intelligence for medicine

2018 ◽  
pp. 118-125
Author(s):  
R.M. Glashev ◽  
R.I. Zakirov ◽  
S.A. Sheptunov ◽  
...  
Author(s):  
Kai-Uwe Demasius ◽  
Aron Kirschen ◽  
Stuart Parkin

Data-intensive computing operations, such as training neural networks, are essential for applications in artificial intelligence but are energy intensive. One solution is to develop specialized hardware onto which neural networks can be directly mapped, and arrays of memristive devices can, for example, be trained to enable parallel multiply–accumulate operations. Here we show that memcapacitive devices that exploit the principle of charge shielding can offer a highly energy-efficient approach for implementing parallel multiply–accumulate operations. We fabricate a crossbar array of 156 microscale memcapacitor devices and use it to train a neural network that could distinguish the letters ‘M’, ‘P’ and ‘I’. Modelling these arrays suggests that this approach could offer an energy efficiency of 29,600 tera-operations per second per watt, while ensuring high precision (6–8 bits). Simulations also show that the devices could potentially be scaled down to a lateral size of around 45 nm.
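The parallel multiply–accumulate that such a crossbar array performs in a single step can be sketched as a matrix–vector product: each output column accumulates the charge contributed by every row, i.e. the sum of device value times input voltage. The sketch below is a minimal numerical illustration of that operation, not a model of the fabricated device's physics; all names and values are ours.

```python
# Sketch of the parallel multiply-accumulate (MAC) a crossbar performs:
# each column j accumulates Q_j = sum_i C[i][j] * V[i], which is exactly
# a matrix-vector product computed in one physical step.

def crossbar_mac(capacitances, voltages):
    """capacitances: rows x cols matrix of device values (the weights);
    voltages: input vector applied along the rows."""
    cols = len(capacitances[0])
    return [sum(capacitances[i][j] * voltages[i]
                for i in range(len(voltages)))
            for j in range(cols)]

# Illustrative 3x2 array of normalized device values and binary inputs.
C = [[0.2, 0.8],
     [0.5, 0.1],
     [0.9, 0.4]]
V = [1.0, 0.0, 1.0]
charges = crossbar_mac(C, V)  # one MAC per output column
```

In the physical array all columns accumulate simultaneously, which is where the energy efficiency over a sequential digital implementation comes from.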


2022 ◽  
Vol 15 ◽  
Author(s):  
Vivek Parmar ◽  
Bogdan Penkovsky ◽  
Damien Querlioz ◽  
Manan Suri

With recent advances in the field of artificial intelligence (AI) such as binarized neural networks (BNNs), a wide variety of vision applications with energy-optimized implementations have become possible at the edge. Such networks have their first layer implemented with high precision, which poses a challenge in deploying a uniform hardware mapping for the network implementation. Stochastic computing allows the conversion of such high-precision computations to a sequence of binarized operations while maintaining equivalent accuracy. In this work, we propose a fully binarized hardware-friendly computation engine based on stochastic computing as a proof of concept for vision applications involving multi-channel inputs. Stochastic sampling is performed by sampling from a non-uniform (normal) distribution based on analog hardware sources. We first validate the benefits of the proposed pipeline on the CIFAR-10 dataset. To further demonstrate its application for real-world scenarios, we present a case study of microscopy image diagnostics for pathogen detection. We then evaluate the benefits of implementing such a pipeline using OxRAM-based circuits for stochastic sampling as well as in-memory computing-based binarized multiplication. The proposed implementation is about 1,000 times more energy efficient compared to conventional floating-precision-based digital implementations, with memory savings of a factor of 45.
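The core idea of converting a high-precision input into binarized operations can be sketched in software: compare the input against random thresholds drawn from a normal distribution, yielding a bit-stream whose mean approximates the normal CDF of the input. This is a minimal illustration of the stochastic-sampling principle, not the authors' OxRAM circuit; the stream length and distribution parameters are ours.

```python
# Sketch of stochastic binarization: a high-precision value x becomes a
# bit-stream by comparison against normally distributed random
# thresholds. The mean of the stream approximates P(threshold < x),
# i.e. the normal CDF at x, so arithmetic on x reduces to cheap
# bitwise operations on the stream.
import random

def to_bitstream(x, n_bits=1024, mu=0.0, sigma=1.0, rng=random):
    return [1 if x > rng.gauss(mu, sigma) else 0 for _ in range(n_bits)]

def stream_mean(bits):
    return sum(bits) / len(bits)

random.seed(0)
bits = to_bitstream(0.0, n_bits=10000)
m = stream_mean(bits)  # close to CDF(0) = 0.5 for a long stream
```

With two independent streams, multiplication of the encoded values reduces to a bitwise AND, which is what makes the representation hardware-friendly.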


Author(s):  
Serhii Mykolaiovych Boiko ◽  
Yurii Shmelev ◽  
Viktoriia Chorna ◽  
Marina Nozhnova

The power supply system of airports and airfields is subject to high reliability requirements, owing to the large number of factors that affect their operation. The control systems for these complexes must therefore reach decisions that satisfy the reliability and quality criteria as quickly as possible. This considerably complicates the structure of the power supply complex and necessitates modern, reliable, high-precision technologies for managing it. One such technology is artificial intelligence, which makes it possible to reach decisions in non-standard situations and to give the operator recommendations for action based on analysis of diagnostic data.


Sensors ◽  
2020 ◽  
Vol 20 (22) ◽  
pp. 6476
Author(s):  
Yuanhong Li ◽  
Zuoxi Zhao ◽  
Yangfan Luo ◽  
Zhi Qiu

Artificial intelligence (AI) is widely used in pattern recognition and positioning. Most geological exploration applications need to locate and identify underground objects from the electromagnetic wave signatures in ground-penetrating radar (GPR) images. Currently, few robust AI approaches can detect targets in GPR images in real time with high precision or full automation. This paper proposes an approach, based on you only look once (YOLO) v3, for identifying parabolic targets of different sizes and voids in underground soil or concrete structures. Using TensorFlow 1.13.0, developed by Google, we construct a YOLO v3 neural network to realize real-time pattern recognition of GPR images. We propose a specific coding method for the GPR image samples in YOLO v3 to improve the prediction accuracy of the bounding boxes. At the same time, the K-means algorithm is applied to select anchor boxes, improving the positioning accuracy of the hyperbolic vertices. In some instances vacillating electromagnetic signals may occur: multiple parabolic electromagnetic waves formed by strongly conductive objects in the soil, or overlapping waveforms. To handle these, this paper introduces a vacillating-signal-similarity intersection over union (V-IoU) method. Experimental results show that V-IoU combined with non-maximum suppression (NMS) can accurately frame targets in GPR images and reduce misidentified boxes. Compared with the single shot multi-box detector (SSD), YOLO v2, and Faster-RCNN, the V-IoU YOLO v3 shows superior performance even when implemented on a CPU, meeting real-time output requirements at an average detection speed of 12 fps. In summary, this paper proposes a simple, high-precision, real-time pattern recognition method for GPR imagery and promotes the application of artificial intelligence and deep learning in the geophysical sciences.
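V-IoU builds on the standard intersection-over-union measure and NMS post-processing that YOLO-family detectors use. The sketch below shows plain IoU and greedy NMS, the baseline the paper's V-IoU variant extends; the `(x1, y1, x2, y2)` box format is an assumption for illustration, and this is not the authors' V-IoU itself.

```python
# Standard intersection over union (IoU) between two axis-aligned
# boxes, and greedy non-maximum suppression (NMS) that keeps the
# highest-scoring box and discards overlapping lower-scoring ones.

def iou(a, b):
    """Boxes as (x1, y1, x2, y2) with x1 < x2, y1 < y2 (our convention)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def nms(boxes, scores, thresh=0.5):
    """Return indices of kept boxes, highest score first."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        # drop remaining boxes that overlap the kept one too much
        order = [j for j in order if iou(boxes[i], boxes[j]) < thresh]
    return keep
```

In the paper, the overlap criterion fed into suppression is the vacillating-signal-similarity variant rather than plain IoU, which is what lets it discard boxes triggered by echo-like duplicate parabolas.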


2020 ◽  
Vol 42 (4-5) ◽  
pp. 191-202 ◽  
Author(s):  
Xuesheng Zhang ◽  
Xiaona Lin ◽  
Zihao Zhang ◽  
Licong Dong ◽  
Xinlong Sun ◽  
...  

Breast cancer ranks first among cancers affecting women’s health. Our work aims to bring intelligence to medical ultrasound equipment with limited computational capability, which is used for the assisted detection of breast lesions. We embed a computationally demanding deep learning algorithm into such equipment using two techniques: (1) a lightweight neural network: considering the limited computational capability of ultrasound equipment, we design a lightweight neural network that greatly reduces the amount of calculation, and we use knowledge distillation to train this low-precision network with the help of a high-precision network; (2) asynchronous calculation: four frames of ultrasound images are treated as a group; the first frame of each group is used as the network input, and the result is fused with each of the fourth to seventh frames. The proposed lightweight neural network requires about 30 GFLO per frame, roughly 1/6 of that of the large high-precision network. After being trained from scratch with the knowledge distillation technique, the detection performance of the lightweight network (sensitivity = 89.25%, specificity = 96.33%, average precision [AP] = 0.85) is close to that of the high-precision network (sensitivity = 98.3%, specificity = 88.33%, AP = 0.91). With asynchronous calculation, we achieve real-time automatic detection at 24 fps (frames per second) on the ultrasound equipment. Our work thus proposes a method for bringing intelligence to low-computation-power ultrasonic equipment and achieves real-time assisted detection of breast lesions.
The significance of the study is as follows: (1) the proposed method is of practical significance in assisting doctors to detect breast lesions; (2) our method provides practical and theoretical support for the development and engineering of intelligent equipment based on artificial intelligence algorithms.
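The knowledge-distillation idea used to train the lightweight network can be sketched as fitting the student against a blend of the hard labels and the teacher's temperature-softened outputs. This is a generic distillation loss for illustration, not the paper's exact recipe; the temperature, blending weight, and function names are ours.

```python
# Sketch of a knowledge-distillation loss: the student network is
# trained on a weighted mix of (a) cross-entropy against the teacher's
# temperature-softened predictions and (b) ordinary cross-entropy
# against the ground-truth label.
import math

def softmax(logits, temperature=1.0):
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, label, T=4.0, alpha=0.7):
    soft_teacher = softmax(teacher_logits, T)
    soft_student = softmax(student_logits, T)
    # cross-entropy of the student against the teacher's soft targets
    soft_loss = -sum(t * math.log(s)
                     for t, s in zip(soft_teacher, soft_student))
    # ordinary cross-entropy against the hard ground-truth label
    hard_loss = -math.log(softmax(student_logits)[label])
    return alpha * soft_loss + (1 - alpha) * hard_loss
```

Softening with a temperature T > 1 exposes the teacher's relative confidences across classes, which is the extra signal that lets a low-precision student approach the teacher's accuracy.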

