Experimental Result: Recently Published Documents

Tamilarasi Suresh ◽  
Tsehay Admassu Assegie ◽  
Subhashni Rajkumar ◽  
Napa Komal Kumar

Heart disease is one of the most widespread and deadliest diseases in the world. In this study, we propose a hybrid model for heart disease prediction that combines a random forest and a support vector machine. The random forest performs iterative feature elimination to select the heart disease features that improve the predictive performance of the support vector machine. Experiments on a held-out test set show that the proposed hybrid model outperforms the individual random forest and support vector machine models. Overall, we have developed a more accurate and computationally efficient model for heart disease prediction, with an accuracy of 98.3%. In addition, we analyze the effect of the regularization parameter (C) and gamma on the performance of the support vector machine; the results show that the support vector machine is highly sensitive to both C and gamma.
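The pipeline described above can be sketched with scikit-learn: random-forest-driven recursive feature elimination feeding an SVM. This is a hypothetical reconstruction, not the authors' code; the dataset is a stand-in and all hyperparameters are placeholders.

```python
# Hedged sketch of a hybrid RF-feature-elimination + SVM classifier.
# The dataset, feature count, and C/gamma values are illustrative only.
from sklearn.datasets import load_breast_cancer  # stand-in for a heart disease dataset
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = Pipeline([
    ("scale", StandardScaler()),
    # iteratively drop the weakest features as ranked by random-forest importance
    ("select", RFE(RandomForestClassifier(n_estimators=100, random_state=0),
                   n_features_to_select=10)),
    # C and gamma strongly affect SVM performance, as the abstract notes
    ("svm", SVC(C=1.0, gamma="scale")),
])
model.fit(X_tr, y_tr)
print(round(model.score(X_te, y_te), 3))
```

Sweeping `C` and `gamma` over a grid here would reproduce the kind of sensitivity analysis the abstract reports.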

2022 ◽  
Vol 27 (3) ◽  
pp. 1-31
Yukui Luo ◽  
Shijin Duan ◽  
Xiaolin Xu

With the emerging development of cloud computing, FPGAs are being integrated into cloud servers for higher performance. Recently, it has been explored how to let multiple users share the hardware resources of a remote FPGA, i.e., execute their own applications simultaneously. Although promising, multi-tenant FPGAs unfortunately bring unique security concerns. It has been demonstrated that the capacitive crosstalk between FPGA long-wires can serve as a side-channel to extract secret information, giving adversaries the opportunity to mount crosstalk-based side-channel attacks. Moreover, recent work reveals that medium-wires and multiplexers in the configurable logic block (CLB) are also vulnerable to crosstalk-based information leakage. In this work, we propose FPGAPRO: a defense framework leveraging Placement, Routing, and Obfuscation to mitigate secret leakage on FPGA components, including long-wires, medium-wires, and logic elements in the CLB. As a user-friendly defense strategy, FPGAPRO focuses on protecting security-sensitive instances while taking critical-path delay into account to maintain performance. As a proof of concept, the experimental results demonstrate that FPGAPRO can reduce crosstalk-induced side-channel leakage by a factor of 138. Moreover, the performance analysis shows that the strategy keeps the protected design free of timing violations at its maximum frequency.

2022 ◽  
Vol 31 (1) ◽  
pp. 1-46
Chao Liu ◽  
Cuiyun Gao ◽  
Xin Xia ◽  
David Lo ◽  
John Grundy ◽  

Context: Deep learning (DL) techniques have gained significant popularity among software engineering (SE) researchers in recent years. This is because they can often solve many SE challenges without enormous manual feature engineering effort and complex domain knowledge. Objective: Although many DL studies have reported substantial advantages over other state-of-the-art models on effectiveness, they often ignore two factors: (1) reproducibility —whether the reported experimental results can be obtained by other researchers using authors’ artifacts (i.e., source code and datasets) with the same experimental setup; and (2) replicability —whether the reported experimental result can be obtained by other researchers using their re-implemented artifacts with a different experimental setup. We observed that DL studies commonly overlook these two factors and declare them as minor threats or leave them for future work. This is mainly due to high model complexity with many manually set parameters and the time-consuming optimization process, unlike classical supervised machine learning (ML) methods (e.g., random forest). This study aims to investigate the urgency and importance of reproducibility and replicability for DL studies on SE tasks. Method: In this study, we conducted a literature review on 147 DL studies recently published in 20 SE venues and 20 AI (Artificial Intelligence) venues to investigate these issues. We also re-ran four representative DL models in SE to investigate important factors that may strongly affect the reproducibility and replicability of a study. Results: Our statistics show the urgency of investigating these two factors in SE, where only 10.2% of the studies investigate any research question to show that their models can address at least one issue of replicability and/or reproducibility. More than 62.6% of the studies do not even share high-quality source code or complete data to support the reproducibility of their complex models. 
Meanwhile, our experimental results show the importance of reproducibility and replicability: the reported performance of a DL model could not be reproduced because of an unstable optimization process, and replicability could be substantially compromised if the model training does not converge or if performance is sensitive to the sizes of the vocabulary and the testing data. Conclusion: It is urgent for the SE community to provide long-lasting links to high-quality reproduction packages, enhance the stability and convergence of DL-based solutions, and avoid performance sensitivity to differently sampled data.
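The role of an unstable optimization process in reproducibility failures can be illustrated with a toy example (not from the study): fixing all random seeds makes two "training" runs identical, whereas an unseeded run generally differs.

```python
# Minimal illustration of seed control for reproducibility.
# The "training" loop is a toy noisy gradient descent, not a real DL model.
import random
import numpy as np

def train(seed=None):
    if seed is not None:
        random.seed(seed)
        np.random.seed(seed)
    w = np.random.randn(5)                      # random initialization
    for _ in range(100):                        # noisy gradient steps toward w = 1
        w -= 0.1 * (w - 1.0) + 0.01 * np.random.randn(5)
    return w

run_a, run_b = train(seed=42), train(seed=42)   # seeded: identical results
run_c = train()                                 # unseeded: generally differs
print(np.allclose(run_a, run_b))  # → True
```

A real reproduction package would additionally pin library versions and any framework-specific seeds, which this sketch omits.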

2022 ◽  
Vol 2022 ◽  
pp. 1-9
Songshang Zou ◽  
Wenshu Chen ◽  
Hao Chen

Image saliency object detection can rapidly extract useful information from image scenes for further analysis. At present, traditional saliency detection techniques still fail to preserve the edges of salient objects well. A convolutional neural network (CNN) can extract highly general deep features from images and effectively express their essential feature information. This paper designs a model that applies a CNN to deep saliency object detection. Through multilayer continuous feature extraction, refinement of layered boundaries, and fusion with initial saliency features, it can efficiently optimize the edges of foreground objects and achieve highly efficient image saliency detection. The experimental results show that the proposed method achieves more robust saliency detection and adapts itself to complex background environments.
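Two of the ingredients the abstract names, convolutional feature extraction and fusion of coarse saliency with edge-refinement features, can be sketched in plain NumPy. This is an assumed toy illustration, not the paper's model; the kernels and fusion weights are arbitrary.

```python
# Toy sketch: convolve an image with a smoothing kernel (coarse saliency
# features) and an edge kernel (boundary refinement), then fuse the two maps.
import numpy as np

def conv2d(img, kernel):
    """Naive 'valid' 2-D convolution."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

img = np.zeros((8, 8)); img[2:6, 2:6] = 1.0   # bright square = "salient object"
edge_k = np.array([[1., 0., -1.]] * 3)         # crude vertical-edge kernel
smooth_k = np.ones((3, 3)) / 9.0               # crude smoothing kernel

edges = np.abs(conv2d(img, edge_k))            # boundary-refinement features
coarse = conv2d(img, smooth_k)                 # coarse saliency features
fused = 0.5 * coarse + 0.5 * edges             # simple feature fusion
print(fused.shape)  # → (6, 6)
```

A real CNN would learn these kernels and stack many such layers; the fusion step stands in for the paper's combination of layered boundary refinement with initial saliency features.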

2022 ◽  
Mikhail Ivantsov

Abstract It is shown that the well-known problem of the single-electron atom admits its own solution for the fine-structure constant. Moreover, this approach may relate electron transitions directly to the proton structure, which is specifically associated with hyperfine structure such as the Lamb shift of the hydrogen atom. This result was then extended to multiple-charge states beyond the existing classification of the Standard Model. A certain prediction becomes possible for the mass values of meson- and boson-type particles. In particular, the mass value of the Higgs boson has been modeled closely enough to the experimental result. In this way, a high-energy sequence of exotic subatomic particles like the Higgs boson may be further revealed.

S Dhayaneethi ◽  
J Anburaj ◽  
S Arivazhagan

High Chromium White Cast Iron (HCWCI) plays a major role in the manufacture of wear-resistant components. Owing to its unique wear resistance, attributed to additions of carbide-forming elements, it has been used for mill liner applications. By varying the wt% of alloying elements such as Cr, Ti, and Mo, the wear resistance and impact strength of High Chromium Cast Iron (HCCI) can be increased. To enhance the wear resistance, 16 samples were fabricated according to a Central Composite Design (CCD) by varying the wt% of the alloying elements. Furan sand molds were prepared and used for the casting process; the properties of furan sand molds enhance the mechanical properties and reduce the mold rejection rate, production time, etc. To attain the optimum Wear Rate (WR) and Impact Strength (IS) values without one objective dominating the other, optimization techniques such as Response Surface Methodology (RSM) and Particle Swarm Optimization (PSO) were employed to solve the multi-objective problem. The optimum solutions predicted by RSM and PSO were compared using the Weighted Aggregated Sum Product Assessment (WASPAS) ranking method. The WASPAS result revealed that, compared with the RSM result, the PSO-predicted optimal chemical composition (22 wt% Cr, 3 wt% Ti, and 2.99 wt% Mo) gives the better WR value (53 mm3/min) and IS value (3.77 J). To validate the PSO result, samples with the predicted wt% of alloying elements were cast and tested. The difference between the PSO-predicted and experimental results is less than 5%, which clearly shows that PSO is an effective method for solving this multi-objective problem.
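A basic particle swarm optimizer on a weighted-sum objective, loosely mirroring the WR/IS trade-off above, can be sketched as follows. The bounds, inertia/acceleration coefficients, and the toy objective are illustrative assumptions, not the paper's model.

```python
# Hedged sketch of PSO minimizing a weighted sum of two competing objectives.
import numpy as np

rng = np.random.default_rng(0)

def objective(x):
    # toy stand-in for the multi-objective problem: two competing quadratics
    f1 = np.sum((x - 1.0) ** 2, axis=-1)   # e.g. "wear rate"
    f2 = np.sum((x + 1.0) ** 2, axis=-1)   # e.g. negated "impact strength"
    return 0.5 * f1 + 0.5 * f2             # equal weights, so neither dominates

n, dim, iters = 30, 3, 100
pos = rng.uniform(-5, 5, (n, dim))         # particle positions
vel = np.zeros((n, dim))                   # particle velocities
pbest = pos.copy(); pbest_val = objective(pos)
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(iters):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    # standard velocity update: inertia + cognitive + social terms
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    val = objective(pos)
    better = val < pbest_val
    pbest[better], pbest_val[better] = pos[better], val[better]
    gbest = pbest[np.argmin(pbest_val)].copy()

print(np.round(gbest, 2))  # the weighted-sum optimum of this toy problem is the origin
```

In the paper's setting, `x` would be the (Cr, Ti, Mo) wt% vector and the objective would come from the RSM response surfaces, with WASPAS used afterwards to rank the candidate optima.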

2022 ◽  
Vol 12 (1) ◽  
Fengchang Bu ◽  
Lei Xue ◽  
Mengyang Zhai ◽  
Xiaolin Huang ◽  
Jinyu Dong ◽  

Abstract Acoustic emission (AE) characterization is an effective technique for indirectly capturing the failure process of quasi-brittle rock. In previous studies, both experiments and numerical simulations were adopted to investigate the AE characteristics of rocks. However, the moment tensor model (MTM), the most popular numerical model, cannot be constrained by experimental results because there are gaps between the MTM and experiments in principle, signal processing, and energy analysis. In this paper, we developed a particle-velocity-based model (PVBM) that enables direct monitoring and analysis of particle velocities in the numerical model and has good robustness. The PVBM imitates the actual experiment and can fill the gaps between the experiment and the MTM. AE experiments on marine shale under uniaxial compression were carried out, and the results were simulated by the MTM. In general, the MTM reproduced the trend of the experimental results; nevertheless, the magnitudes of the AE parameters it produced differed from the experimental ones by several orders of magnitude. We then used the PVBM as a proxy to analyse these discrepancies and to systematically evaluate the AE characterization of rocks from experiment to numerical simulation, considering the influence of wave reflection, geometric diffusion of energy, viscous attenuation, particle size, and the progressive deterioration of the rock material. By combining their respective advantages, the MTM and PVBM together can reasonably and accurately reproduce the AE characteristics of actual rock AE experiments.
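Two of the effects listed above, geometric diffusion of energy and viscous attenuation, are commonly corrected for with a standard body-wave amplitude model; the formula below is that textbook form (amplitude ∝ exp(-αr)/r), assumed here for illustration and not taken from the paper.

```python
# Illustrative source-amplitude recovery from a sensor reading, undoing
# 1/r geometric spreading and exp(-alpha * r) viscous attenuation.
import math

def source_amplitude(a_received, r, alpha):
    """Recover the source amplitude from a reading at distance r."""
    return a_received * r * math.exp(alpha * r)

# a sensor at r = 2.0 (arbitrary units) records amplitude 0.1,
# with an assumed attenuation coefficient alpha = 0.05
a0 = source_amplitude(0.1, r=2.0, alpha=0.05)
print(round(a0, 4))
```

In a PVBM-style analysis, corrections of this kind would be applied to the monitored particle velocities before comparing simulated AE energies with experimental ones.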

Geofluids ◽  
2022 ◽  
Vol 2022 ◽  
pp. 1-9
Jingkui Mi ◽  
Kun He ◽  
Yanhuan Shuai ◽  
Jinhao Guo

In this study, a methane (CH4) cracking experiment over the temperature range 425–800°C is presented. The experimental results show that some alkanes and alkenes are generated during CH4 cracking, in addition to hydrogen (H2). Moreover, the hydrocarbon gas displays carbon isotopic reversal (δ¹³C₁ > δ¹³C₂) below 700°C, while solid carbon appears on the inner wall of the gold tube above 700°C. The variation in experimental products (gas and solid carbon) with increasing temperature suggests that CH4 does not crack into carbon and H2 directly, but first cracks into methyl (CH3·) and proton (H+) groups. CH3· is depleted in ¹³C because ¹²C–H bonds cleave preferentially over ¹³C–H bonds. Recombination of CH3· leads to depletion of ¹³C in the heavy gas and thereby causes the carbon isotopic reversal (δ¹³C₁ > δ¹³C₂) of the hydrocarbon gas. Geological analysis of the experimental data indicates that, at Ro < 2.5%, the amount of ¹³C-depleted heavy gas formed by recombination of CH3· from early CH4 cracking is so small that it is masked by the bulk ¹³C-enriched heavy gas from organic matter (OM). Thus, natural gas shows a normal isotope distribution (δ¹³C₁ < δ¹³C₂) at this maturity stage. CH3· recombination (or CH4 polymerization) intensifies as gas generation from OM is exhausted at Ro > 2.5%. Therefore, the carbon isotopic reversal of natural gas appears at the overmature stage, and CH4 polymerization is a possible mechanism for the carbon isotopic reversal of overmature natural gas. The experimental results also indicate that although CH4 might start cracking at Ro > 2.5%, it cracks substantially only above Ro = 6.0% in actual geological settings.

2022 ◽  
Vol 12 (2) ◽  
pp. 633
Chunyu Xu ◽  
Hong Wang

This paper presents a convolution kernel initialization method based on the local binary patterns (LBP) algorithm and a sparse autoencoder, applicable to initializing the convolution kernels in a convolutional neural network (CNN). The main function of a convolution kernel is to extract the local pattern of the image by template matching as the target feature for subsequent image recognition. In general, the Xavier and He initialization methods are used to initialize convolution kernels. In this paper, some typical sample images were first selected from the training set, and the LBP algorithm was applied to extract their texture information. The texture information was then divided into several small blocks, which were input into the sparse autoencoder (SAE) for pre-training. After training, the weight values of the sparse autoencoder, which fit the statistical features of the data set, were used as the initial values of the convolution kernels in the CNN. The experimental results indicate that the proposed method can speed up convergence during network training and improve the recognition rate of the network to some extent.
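The LBP texture-extraction step can be sketched as a basic 3×3, 8-neighbour operator. This is a simplified unweighted variant for illustration; the paper's exact LBP radius, neighbour count, and block sizes are not specified here.

```python
# Minimal 3x3 local binary pattern (LBP) operator: each interior pixel gets an
# 8-bit code, one bit per neighbour that is >= the centre pixel.
import numpy as np

def lbp_3x3(img):
    """Basic 8-neighbour LBP code for each interior pixel."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    # neighbour offsets, clockwise from top-left
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            c = img[i, j]
            code = 0
            for bit, (di, dj) in enumerate(offs):
                if img[i + di, j + dj] >= c:
                    code |= 1 << bit
            out[i - 1, j - 1] = code
    return out

img = np.array([[1, 2, 3],
                [4, 5, 6],
                [7, 8, 9]], dtype=float)
print(lbp_3x3(img))  # → [[120]]
```

In the method above, maps like this would be cut into small blocks and fed to the SAE for pre-training, whose learned weights then initialize the CNN kernels.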

2022 ◽  
Vol 2022 ◽  
pp. 1-17
Gopi Kasinathan ◽  
Selvakumar Jayakumar

Artificial intelligence (AI), the Internet of Things (IoT), and cloud computing have recently become widely used in the healthcare sector, where they aid radiologists in better decision-making. Positron emission tomography (PET) imaging is one of the most reliable approaches for a radiologist to diagnose many cancers, including lung tumors. In this work, we address stage classification of lung tumors, a particularly challenging task in computer-aided diagnosis; such a system can reduce heavy workloads and provide a second opinion to radiologists. We present a strategy for classifying and validating different stages of lung tumor progression, together with a deep neural model and cloud-based data collection for categorizing the phases of pulmonary illness. The proposed system, a Cloud-based Lung Tumor Detector and Stage Classifier (Cloud-LTDSC), is a hybrid technique for PET/CT images. Cloud-LTDSC first develops an active contour model for lung tumor segmentation; a multilayer convolutional neural network (M-CNN) for classifying the different stages of lung cancer was then modelled and validated on standard benchmark images. The performance of the presented technique is evaluated on the benchmark LIDC-IDRI dataset of 50 low-dose lung CT DICOM images. Compared with existing techniques in the literature, our proposed method achieves good results for the accuracy, recall, and precision metrics, producing superior outcomes on all of the applied dataset images under numerous aspects. Furthermore, the experimental results achieve a lung tumor stage classification accuracy of 97%–99.1% (98.6% on average), which is significantly higher than that of the other existing techniques.
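The three evaluation metrics named above (accuracy, recall, precision) can be computed with scikit-learn for a multi-class staging task. The stage labels below are toy values for illustration, not Cloud-LTDSC outputs.

```python
# Hedged sketch of multi-class metric computation for a staging classifier.
# y_true / y_pred are hypothetical tumor-stage labels (0, 1, 2).
from sklearn.metrics import accuracy_score, precision_score, recall_score

y_true = [0, 1, 2, 2, 1, 0, 2, 1]   # ground-truth stages (toy labels)
y_pred = [0, 1, 2, 1, 1, 0, 2, 1]   # model predictions (toy labels)

acc = accuracy_score(y_true, y_pred)
prec = precision_score(y_true, y_pred, average="macro")  # unweighted class mean
rec = recall_score(y_true, y_pred, average="macro")
print(round(acc, 3), round(prec, 3), round(rec, 3))
```

With more than two stages, `average="macro"` treats each stage equally; `average="weighted"` would instead weight stages by their frequency in the test set.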
