Research and Analysis of Electromagnetic Trojan Detection Based on Deep Learning

2020 ◽  
Vol 2020 ◽  
pp. 1-13
Author(s):  
Jiazhong Lu ◽  
Xiaolei Liu ◽  
Shibin Zhang ◽  
Yan Chang

An electromagnetic Trojan attack can breach physically isolated systems, and because the covert channel it creates does not use the system's network resources, traditional firewalls and other intrusion detection devices cannot effectively block it. Building on existing research, this paper proposes an electromagnetic Trojan detection method based on deep learning, which makes electromagnetic Trojan analysis more intelligent. First, the electromagnetic signal is captured using software-defined radio technology and initially filtered using a whitelist, the demodulated signal, and the rate of change of signal intensity. Second, the frequency-domain signal is divided into blocks using a time-window scheme, and the electromagnetic signals are represented by features such as time, information content, and energy. Finally, the serialized signal feature vectors are processed with an LSTM to identify the electromagnetic Trojan. The method is tested on the electromagnetic Trojan data published by Ben-Gurion University. The results show that it can effectively detect electromagnetic Trojans, increase the degree of automation in electromagnetic Trojan detection, and reduce the cost of manual testing.
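The windowed feature extraction described above (time, information content, and energy per block) can be sketched as follows; the window length, hop size, and spectral-entropy formulation are illustrative assumptions, not the paper's exact parameters:

```python
import numpy as np

def window_features(signal, window=256, hop=256):
    """Split a captured signal into time windows and compute per-window
    features: start time index, spectral entropy (a proxy for
    "information content"), and energy."""
    feats = []
    for start in range(0, len(signal) - window + 1, hop):
        seg = signal[start:start + window]
        energy = float(np.sum(seg ** 2))
        # Normalized magnitude spectrum treated as a probability distribution
        spec = np.abs(np.fft.rfft(seg))
        p = spec / (spec.sum() + 1e-12)
        entropy = float(-np.sum(p * np.log2(p + 1e-12)))
        feats.append((start, entropy, energy))
    return np.array(feats)

rng = np.random.default_rng(0)
sig = rng.standard_normal(1024)
f = window_features(sig)
print(f.shape)  # (4, 3): four windows, three features each
```

Sequences of such feature vectors would then be fed to the LSTM classifier.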

Processes ◽  
2021 ◽  
Vol 9 (11) ◽  
pp. 2000
Author(s):  
Jin-Hwan Lee ◽  
Woo-Jung Kim ◽  
Sang-Yong Jung

This paper proposes a robust optimization algorithm customized for the optimal design of electric machines. The proposed algorithm, termed "robust explorative particle swarm optimization" (RePSO), is a hybrid algorithm that affords high accuracy and a high search speed when determining robust optimal solutions. To ensure the robustness of the determined optimal solution, RePSO employs the rate of change of the cost function: when this rate is high, the cost function forms a steep curve, indicating low robustness; when it is low, the cost function forms a gradual curve, indicating high robustness. For verification, the performance of the proposed algorithm was compared with those of conventional robust particle swarm optimization and explorative particle swarm optimization on a Gaussian basis test function. The target performance of the traction motor for the optimal design was derived from a simulation of vehicle driving performance; based on the simulation results, the traction motor requires a maximum torque of 294 Nm and a maximum power of 88 kW. The base model, an 8-pole, 72-slot permanent magnet synchronous machine, was designed to meet this target performance, and an optimal design was then realized using the proposed algorithm. The cost function for the optimal design was selected such that the torque ripple, the total harmonic distortion of the back-electromotive force, and the cogging torque were minimized. Finally, experiments were performed on the manufactured optimal model, and the robustness and effectiveness of the proposed algorithm were validated by comparing the analytical and experimental results.
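The robustness criterion, penalizing a high local rate of change of the cost function, can be illustrated with a one-dimensional sketch; the cost function, finite-difference step, and penalty weight here are hypothetical stand-ins, not RePSO's actual formulation:

```python
import numpy as np

def cost(x):
    # Hypothetical cost function (stand-in for the machine-design cost)
    return np.sin(3 * x) + 0.1 * x ** 2

def robust_cost(x, delta=0.05, weight=1.0):
    """Penalize candidates where the cost changes steeply nearby:
    a high local rate of change signals a non-robust optimum."""
    rate = abs(cost(x + delta) - cost(x - delta)) / (2 * delta)
    return cost(x) + weight * rate

# A candidate in a flat region scores better than an equally low
# but steep one, because the rate-of-change penalty is non-negative.
print(robust_cost(0.5) >= cost(0.5))
```

A swarm optimizer minimizing `robust_cost` instead of `cost` is thus steered toward gradual, robust regions of the design space.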


2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Bangtong Huang ◽  
Hongquan Zhang ◽  
Zihong Chen ◽  
Lingling Li ◽  
Lihua Shi

Deep learning algorithms face limitations in virtual reality applications because of memory cost, computational cost, and real-time constraints. Models with strong performance often carry enormous numbers of parameters and large-scale structures, which makes them hard to port to embedded devices. In this paper, inspired by GhostNet, we propose an efficient structure, ShuffleGhost, that exploits the redundancy in feature maps to reduce computation while addressing several drawbacks of GhostNet. GhostNet suffers from the high computational cost of the convolutions in its Ghost module and shortcut, and its downsampling restrictions make it difficult to apply the Ghost module and Ghost bottleneck to other backbones. This paper proposes three new ShuffleGhost structures to address these drawbacks. The ShuffleGhost module and ShuffleGhost bottlenecks use the shuffle layer and group convolution from ShuffleNet, and they are designed to redistribute the feature maps concatenated from the Ghost feature maps and the primary feature maps, eliminating the gap between them while extracting features. An SENet layer is then adopted to reduce the computational cost of the group convolution and to weight the concatenated Ghost and primary feature maps according to their importance. Experiments show that ShuffleGhostV3 has fewer trainable parameters and lower FLOPs while preserving accuracy, and with proper design it can be more efficient on both GPU and CPU.
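The channel shuffle operation borrowed from ShuffleNet, which the ShuffleGhost module uses to redistribute the concatenated feature maps across convolution groups, can be sketched in NumPy; the tensor layout and group count are illustrative:

```python
import numpy as np

def channel_shuffle(x, groups):
    """Channel shuffle from ShuffleNet: reshape the (N, C, H, W) channel
    axis into (groups, C // groups), transpose, and flatten, so that
    information mixes across the groups of a grouped convolution."""
    n, c, h, w = x.shape
    assert c % groups == 0, "channel count must divide evenly into groups"
    x = x.reshape(n, groups, c // groups, h, w)
    x = x.transpose(0, 2, 1, 3, 4)
    return x.reshape(n, c, h, w)

# Six channels in two groups: [0,1,2 | 3,4,5] interleave to [0,3,1,4,2,5]
x = np.arange(2 * 6 * 1 * 1).reshape(2, 6, 1, 1)
y = channel_shuffle(x, groups=2)
print(y[0, :, 0, 0])  # [0 3 1 4 2 5]
```

Applied after concatenating Ghost and primary feature maps, this interleaving lets subsequent group convolutions see channels from both sources.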


Dengue has become endemic in Malaysia, and the cost of operations to exterminate mosquito habitats is high. Effective operations depend on information from the community, but without knowing the characteristics of Aedes larvae it is hard to recognize the larvae without guidance from an expert. Deep learning for image classification and recognition is therefore crucial to tackling this problem. The purpose of this project is to study the characteristics of Aedes larvae and determine the best convolutional neural network model for classifying mosquito larvae. Three performance evaluation metrics (accuracy, log-loss, and AUC-ROC) are used to measure each model's individual performance. Performance categories consisting of an accuracy score, loss score, file-size score, and training-time score are then used to evaluate which model is best suited for deployment in a web or mobile application. From the scores collected for each model, ResNet50 proved to be the best model for classifying mosquito larva species.


2021 ◽  
Author(s):  
Indrajeet Kumar ◽  
Jyoti Rawat

Abstract The manual diagnostic tests performed in laboratories for a pandemic disease such as COVID-19 are time-consuming and require the skill and expertise of the performer to yield accurate results. They are also cost-ineffective, as test kits are expensive and well-equipped labs are required to conduct them. Thus, other means of diagnosing patients infected with SARS-CoV-2 (the virus responsible for COVID-19) must be explored. Radiography, such as chest CT imaging, is one such means that can be utilized for the diagnosis of COVID-19. The radiographic changes observed in CT images of COVID-19 patients help in developing a deep learning-based method that extracts graphical features, which are then used for automated diagnosis of the disease ahead of laboratory-based testing. The proposed work presents an artificial intelligence (AI) based technique for the rapid diagnosis of COVID-19 from volumetric CT images of a patient's chest by extracting visual features and feeding them to a deep learning module. The proposed convolutional neural network is deployed to classify subjects as SARS-CoV-2 infectious or non-infectious. The network uses 746 chest CT scan images, of which 349 belong to COVID-19 positive cases and the remaining 397 to negative cases. The experiments achieved an accuracy of 98.4%, a sensitivity of 98.5%, a specificity of 98.3%, a precision of 97.1%, and an F1-score of 97.8%. The obtained results show outstanding performance in classifying infectious and non-infectious COVID-19 cases.
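The reported metrics follow from a binary confusion matrix in the standard way; the counts in the usage example below are illustrative only, not the paper's actual confusion matrix:

```python
def binary_metrics(tp, tn, fp, fn):
    """Standard binary classification metrics, as reported for the
    COVID-19 CT classifier, computed from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)   # recall on positive (infectious) cases
    specificity = tn / (tn + fp)   # recall on negative cases
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, sensitivity, specificity, precision, f1

# Hypothetical counts over 349 positive and 397 negative scans
acc, sen, spe, pre, f1 = binary_metrics(tp=344, tn=390, fp=7, fn=5)
print(round(acc, 3), round(sen, 3), round(spe, 3))
```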


2018 ◽  
Vol 1 (1) ◽  
pp. 41
Author(s):  
Liang Chen ◽  
Xingwei Wang ◽  
Jinwen Shi

Existing logistics distribution methods do not consider customer demand; their goal is to maximize vehicle capacity, which leads to excessively long total vehicle distances, a need for large numbers of vehicles, and high transportation costs. To address these problems, this paper proposes a multi-objective clustering method for logistics distribution routing based on a hybrid ant colony algorithm. Before the distribution routes are chosen, customers are clustered according to their attributes so as to reduce the scale of the solution. A discrete point-location model is applied to the logistics distribution area to reduce transportation cost. A mathematical model of the multi-objective logistics distribution routing problem is built with constraints on capacity, transportation distance, and time windows, and a hybrid ant colony algorithm is used to solve it. Experimental results show that the optimized routes are more desirable: they save transportation cost, reduce time lost in circulation, and effectively improve the quality of logistics distribution service.
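The pre-clustering step, grouping customers before any routes are built, can be sketched with a simple k-means pass over customer coordinates; the clustering method, coordinates, and cluster count are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def cluster_customers(points, k=2, iters=20, seed=0):
    """Pre-cluster customers by location so that each subsequent ant
    colony run solves a smaller routing problem."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # Assign each customer to its nearest center, then recenter
        d = np.linalg.norm(points[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return labels, centers

# Two spatially separated customer groups
pts = np.array([[0., 0.], [1., 0.], [0., 1.], [10., 10.], [11., 10.]])
labels, _ = cluster_customers(pts)
print(labels)
```

Routing then runs per cluster, with capacity and time-window constraints handled inside the ant colony search.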


2022 ◽  
Vol 25 (3) ◽  
pp. 28-33
Author(s):  
Francesco Restuccia ◽  
Tommaso Melodia

Wireless systems such as the Internet of Things (IoT) are changing the way we interact with the cyber and the physical world. As IoT systems become more and more pervasive, it is imperative to design wireless protocols that can effectively and efficiently support IoT devices and operations. Today's IoT wireless systems, however, are based on inflexible designs, which makes them inefficient and prone to a variety of wireless attacks. In this paper, we introduce the new notion of a deep learning-based polymorphic IoT receiver, able to autonomously reconfigure its waveform demodulation strategy in real time based on inferred waveform parameters. Our key innovation is a novel embedded deep learning architecture that solves waveform inference problems, integrated into a generalized hardware/software architecture with radio components and signal processing. Our polymorphic wireless receiver is prototyped on a custom-made software-defined radio platform. We show through extensive over-the-air experiments that the system achieves throughput within 87% of a perfect-knowledge Oracle system, thus demonstrating for the first time that polymorphic receivers are feasible.


Author(s):  
Vladislav V. Fomin ◽  
Hanah Zoo ◽  
Heejin Lee

This chapter is aimed at developing a document content analysis method to be applied in studies of standardization and technology development. The proposed method integrates two theoretical frameworks: the co-evolutionary technology development framework and the "D-N-S" (design, negotiation, sense-making) framework for anticipatory standardization. Against the backdrop of the complex and diversified landscape of science and R&D efforts in the technology domain, and the repeated criticism of the weak link between R&D initiatives and standardization, the authors argue that the method offered in this chapter helps to better understand the internal dynamics of the technology development process at the early stage of standardization or pre-standardization, which, in turn, can help mobilize and direct R&D initiatives. To demonstrate the practical usefulness of the proposed method, they conduct a content analysis of the research contributions presented in the COST Action IC0905 "Techno-Economic Regulatory Framework for Radio Spectrum Access for Cognitive Radio/Software Defined Radio" (COST-TERRA).


Author(s):  
Robin Hanson

As we will discuss in Chapter 18 , Cities section, em cities are likely to be big, dense, highly cost-effective concentrations of computer and communication hardware. How might such cities interact with their surroundings? Today, computer and communication hardware is known for being especially temperamental about its environment. Rooms and buildings designed to house such hardware tend to be climate-controlled to ensure stable and low values of temperature, humidity, vibration, dust, and electromagnetic field intensity. Such equipment housing protects it especially well from fire, flood, and security breaches. The simple assumption is that, compared with our cities today, em cities will also be more climate-controlled to ensure stable and low values of temperature, humidity, vibrations, dust, and electromagnetic signals. These controls may in fact become city level utilities. Large sections of cities, and perhaps entire cities, may be covered, perhaps even domed, to control humidity, dust, and vibration, with city utilities working to absorb remaining pollutants. Emissions within cities may also be strictly controlled. However, an em city may contain temperatures, pressures, vibrations, and chemical concentrations that are toxic to ordinary humans. If so, ordinary humans are excluded from most places in em cities for safety reasons. In addition, we will see in Chapter 18 , Transport section, that many em city transport facilities are unlikely to be well matched to the needs of ordinary humans. Higher prices to rent volume near city centers should push such centers to extend both higher into the sky and deeper into the ground, as happens in human cities today. It should also push computers in city centers to be made from denser physical devices, that is, supporting more computing operations per volume, even if such devices are proportionally more expensive than less dense variants. 
City centers are also less likely to use deterministic computing devices, if such devices require more volume and cooling. It may be possible to make computing devices that use less mass per computing speed supported, even if they cost more per operation computed. Such lighter devices are more likely to be used at higher city elevations, because they reduce the cost of the physical structures needed to hold them at these heights.


IEEE Access ◽  
2019 ◽  
Vol 7 ◽  
pp. 145784-145797 ◽  
Author(s):  
Erick Schmidt ◽  
Devasena Inupakutika ◽  
Rahul Mundlamuri ◽  
David Akopian
