Novel Approach
Recently Published Documents


TOTAL DOCUMENTS: 37223 (FIVE YEARS: 16383)

H-INDEX: 152 (FIVE YEARS: 51)

Author(s):  
Ahmed Elaraby ◽  
Ayman Taha

<p><span>A novel approach for multimodal liver image contrast enhancement is put forward in this paper. The proposed approach utilizes a magnetic resonance imaging (MRI) scan of the liver as a guide to enhance the structures of a computed tomography (CT) liver image. The enhancement process consists of two phases. In the first phase, the MRI and CT modalities are transformed to the same intensity range, and the histogram of the CT liver image is adjusted to match the histogram of the MRI. In the second phase, an adaptive histogram equalization technique is presented that splits the CT histogram into two sub-histograms and replaces their cumulative distribution functions with two smooth sigmoid functions. The subjective and objective assessments of the experimental results indicate that the proposed approach yields better results: the image contrast is effectively enhanced while the mean brightness and details are well preserved.</span></p>
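The first-phase histogram matching described above can be sketched as follows. This is a minimal NumPy reconstruction under stated assumptions (the function name, 8-bit intensity range, and binning are mine, not the authors'):

```python
import numpy as np

def match_histogram(source, reference, levels=256):
    # Phase-1 sketch: remap source (CT) intensities so that their
    # cumulative distribution follows the reference (MRI) one.
    s_hist, _ = np.histogram(source, bins=levels, range=(0, levels))
    r_hist, _ = np.histogram(reference, bins=levels, range=(0, levels))
    s_cdf = np.cumsum(s_hist) / source.size
    r_cdf = np.cumsum(r_hist) / reference.size
    # For each source level, pick the reference level with the nearest CDF value.
    mapping = np.searchsorted(r_cdf, s_cdf, side="left").clip(0, levels - 1)
    return mapping[source.astype(np.int64)]
```

Matching an image against itself yields the identity mapping, which is a quick sanity check on the CDF lookup.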


Author(s):  
Sameh El-Sharo ◽  
Amani Al-Ghraibah ◽  
Jamal Al-Nabulsi ◽  
Mustafa Muhammad Matalgah

<p>The use of pulse wave analysis may assist cardiologists in diagnosing patients with vascular diseases. However, it is not common in clinical practice to interpret and analyze pulse wave data and use them to detect abnormalities in the signal. This paper presents a novel approach to the clinical application of pulse waveform analysis using the wavelet technique, decomposing normal and pathological signals into several levels. The discrete wavelet transform (DWT) decomposes the carotid arterial pulse wave (CAPW) signal, and the continuous wavelet transform (CWT) creates images of the decomposed signal. The wavelet analysis technique in this work aims to strengthen the medical benefits of the pulse wave. The obtained results show a clear difference between the signals and images of the arterial pathologies and those of normal cases. The distinctions achieved are promising, but further improvement may be required in the future.</p>
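A multilevel DWT of the kind used on the CAPW signal can be illustrated with the Haar wavelet. This is a generic sketch (the abstract does not state which mother wavelet was used), implemented directly in NumPy rather than with a wavelet library:

```python
import numpy as np

def haar_dwt(signal):
    # One level of a Haar discrete wavelet transform: approximation
    # (low-pass) and detail (high-pass) coefficients from pairwise samples.
    x = np.asarray(signal, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def multilevel_dwt(signal, levels):
    # Decompose into several levels by re-transforming the approximation,
    # analogous to decomposing the pulse wave "into many levels".
    details = []
    approx = np.asarray(signal, dtype=float)
    for _ in range(levels):
        approx, d = haar_dwt(approx)
        details.append(d)
    return approx, details
```

For a constant signal every detail band is zero, so any non-zero detail coefficients reflect genuine signal variation, which is what makes the decomposition useful for spotting pathological features.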


2022 ◽  
Vol 227 ◽  
pp. 107111
Author(s):  
A.Ya. Pak ◽  
K.B. Larionov ◽  
E.N. Kolobova ◽  
K.V. Slyusarskiy ◽  
J. Bolatova ◽  
...  

Semantic Web technology is not as new as most of us assume; it has evolved over the years. Linked Data is the terminology recently applied to the Semantic Web. The Semantic Web is a continuation of Web 2.0 and is intended to replace existing technologies. It is built on natural language processing and provides solutions to many prevailing issues. Web 3.0 is the version of the Semantic Web that caters to the information needs of half of the population on earth. This paper links two important current concerns, the security of information and the online education enforced by COVID-19, with the Semantic Web. The steganography requirement for the Semantic Web is discussed in detail, since encryption alone, even when applied, is inadequate to provide protection. Web 2.0 issues concerning online education and Semantic Web solutions are discussed. An extensive literature survey has been conducted on the architecture of Web 3.0, the history of online education, and security architecture. Finally, the Semantic Web is here to stay, and data hiding combined with encryption makes it robust.
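As an illustration of the data hiding the abstract argues for, here is a minimal least-significant-bit (LSB) steganography sketch in NumPy. It is a generic textbook technique, not the specific scheme proposed in the paper, and the function names are mine:

```python
import numpy as np

def embed_lsb(cover, bits):
    # Hide one bit per pixel in the least-significant bit of the cover image.
    stego = cover.copy().ravel()
    for i, b in enumerate(bits):
        stego[i] = (stego[i] & 0xFE) | b
    return stego.reshape(cover.shape)

def extract_lsb(stego, n_bits):
    # Recover the first n_bits hidden bits from the stego image.
    return [int(v & 1) for v in stego.ravel()[:n_bits]]
```

Because only the lowest bit of each pixel changes, every pixel value moves by at most 1, which is why LSB embedding is visually imperceptible; in practice the hidden bits would be ciphertext, giving the "data hiding along with encryption" combination the abstract recommends.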


Author(s):  
Deepang Raval ◽  
Vyom Pathak ◽  
Muktan Patel ◽  
Brijesh Bhatt

We present a novel approach for improving the performance of an end-to-end speech recognition system for the Gujarati language. We follow a deep learning-based approach that includes a Convolutional Neural Network, Bi-directional Long Short Term Memory layers, Dense layers, and Connectionist Temporal Classification (CTC) as a loss function. To improve the performance of the system given the limited size of the dataset, we present a combined language model (word-level and character-level)-based prefix decoding technique and a Bidirectional Encoder Representations from Transformers-based post-processing technique. To gain key insights from our Automatic Speech Recognition (ASR) system, we use the inferences from the system and propose different analysis methods. These insights help us understand and improve the ASR system, and they provide intuition into the language used for the ASR system. We trained the model on the Microsoft Speech Corpus, and we observe a 5.87% decrease in Word Error Rate (WER) with respect to the base-model WER.
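For intuition, CTC best-path (greedy) decoding, the simpler baseline that the paper's LM-guided prefix decoding improves on, collapses repeated frame labels and then removes blanks. A minimal sketch (label 0 as the blank symbol is an assumption for illustration):

```python
def ctc_greedy_decode(frame_labels, blank=0):
    # Standard CTC best-path decoding: collapse runs of identical labels,
    # then drop the blank symbol. A blank between two identical labels
    # keeps them as two separate output symbols.
    out = []
    prev = None
    for lab in frame_labels:
        if lab != prev and lab != blank:
            out.append(lab)
        prev = lab
    return out
```

Prefix decoding generalizes this by keeping multiple candidate prefixes and rescoring them with the word- and character-level language models instead of committing to the single best label per frame.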


2022 ◽  
Vol 13 (2) ◽  
pp. 1-21
Author(s):  
Bo Sun ◽  
Takeshi Takahashi ◽  
Tao Ban ◽  
Daisuke Inoue

To relieve the burden on security analysts, Android malware detection and its family classification need to be automated. Many previous works focus on using machine (or deep) learning technology to tackle these two important issues, but as the number of mobile applications has increased in recent years, developing a scalable and precise solution is a new challenge that needs to be addressed in the security field. Accordingly, in this article, we propose a novel approach that not only enhances the performance of both Android malware detection and family classification but also reduces the running time of the analysis process. Using large-scale datasets obtained from different sources, we demonstrate that our method achieves a high F-measure of 99.71% with a low FPR of 0.37%. Meanwhile, the computation time for processing a 300K dataset is reduced to nearly 3.3 hours. In addition, in the classification evaluation, we demonstrate that the F-measure, precision, and recall are 97.5%, 96.55%, and 98.64%, respectively, when classifying 28 malware families. Finally, we compare our method with previous studies in both detection and classification evaluations, and we observe that our method produces better performance in terms of effectiveness and efficiency.
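As a quick sanity check on the reported numbers, the F-measure is the harmonic mean of precision and recall; plugging in the paper's classification precision (96.55%) and recall (98.64%) reproduces the reported ~97.5%:

```python
def f_measure(precision, recall):
    # F1 score: harmonic mean of precision and recall.
    return 2 * precision * recall / (precision + recall)
```

With the abstract's figures, `f_measure(0.9655, 0.9864)` works out to roughly 0.9758, consistent with the 97.5% the authors report.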


2022 ◽  
Vol 46 ◽  
pp. 102551
Author(s):  
Mohammad T. Alresheedi ◽  
Seyedeh Laleh D. Kenari ◽  
Benoit Barbeau ◽  
Onita D. Basu

2022 ◽  
Vol 25 (1) ◽  
pp. 1-26
Author(s):  
Fabio Pagani ◽  
Davide Balzarotti

Despite the considerable number of approaches that have been proposed to protect computer systems, cyber-criminal activity is on the rise, and forensic analysis of compromised machines and seized devices is becoming essential in computer security. This article focuses on memory forensics, a branch of digital forensics that extracts artifacts from volatile memory. In particular, it looks at a key ingredient required by memory forensics frameworks: a precise model of the OS kernel under analysis, also known as a profile. By using the information stored in the profile, memory forensics tools are able to bridge the semantic gap and interpret raw bytes to extract evidence from a memory dump. A major problem with profile-based solutions is that a custom profile must be created for each and every system under analysis. This is especially problematic for Linux systems, because profiles are not generic: they are strictly tied to a specific kernel version and to the configuration used to build the kernel. Failing to create a valid profile means that an analyst cannot unleash the true power of memory forensics and is limited to primitive carving strategies. For this reason, in this article we present a novel approach that combines source code and binary analysis techniques to automatically generate a profile from a memory dump, without relying on any non-public information. Our experiments show that this is a viable solution and that profiles reconstructed by our framework can be used to run many plugins, which are essential for a successful forensic investigation.
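To illustrate why a profile is needed at all, the sketch below reads fields out of a raw dump using a hypothetical mini-profile of structure offsets. The structure name, fields, and offsets here are invented for illustration; real kernel structure layouts vary with version and build configuration, which is exactly the problem the paper's automatic profile generation addresses:

```python
import struct

# Hypothetical mini-profile mapping structure fields to byte offsets.
# A real profile would cover hundreds of kernel structures.
PROFILE = {"task_struct": {"pid": 0x0, "comm": 0x8}}

def read_task(dump, base, profile=PROFILE):
    # Bridge the semantic gap: interpret raw dump bytes at a known base
    # address using the field offsets recorded in the profile.
    off = profile["task_struct"]
    pid = struct.unpack_from("<I", dump, base + off["pid"])[0]
    raw_comm = dump[base + off["comm"]: base + off["comm"] + 16]
    return pid, raw_comm.split(b"\x00")[0].decode()
```

With a wrong profile the same bytes would be decoded at the wrong offsets, yielding garbage, which is why an analyst without a valid profile falls back to carving.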

