Artificial Learning
Recently Published Documents


TOTAL DOCUMENTS: 56 (FIVE YEARS: 26)

H-INDEX: 2 (FIVE YEARS: 1)

2021 ◽  
Vol 15 (3) ◽  
pp. 265-290
Author(s):  
Saleh Abdulaziz Habtor ◽  
Ahmed Haidarah Hasan Dahah

The spread of ransomware has risen exponentially over the past decade, causing huge financial damage to multiple organizations. Various anti-ransomware firms have suggested methods for preventing malware threats, but the growing pace, scale and sophistication of malware pose further challenges to the anti-malware industry. Recent literature indicates that academics and anti-virus organizations have begun to use artificial learning as well as fundamental modeling techniques for the research and identification of malware. Conventional signature-based anti-virus programs struggle to identify unfamiliar malware and to track new forms of malware. In this study, a malware evaluation framework based on machine learning was adopted, consisting of several modules: dataset compilation into two separate classes (malicious and benign software), file disassembly, data processing, decision making, and updated malware identification. The data processing module uses grey-scale images, import functions and Opcode n-grams to extract malware features. The decision-making module detects malware and recognizes suspected malware. Different classifiers were considered in the research methodology for the detection and classification of malware. The framework's effectiveness was validated on the basis of the accuracy of the complete process.
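
As a rough illustration of the Opcode n-gram step described above, the sketch below turns opcode sequences into 2-gram frequency vectors and trains a classifier with scikit-learn. The opcode strings, labels and choice of random forest are illustrative assumptions, not the authors' pipeline.

```python
# Minimal sketch of Opcode n-gram features for malware classification.
# Assumes opcode sequences were already produced by a disassembler;
# data and classifier choice are illustrative only.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Each sample is one binary's opcode sequence, space separated.
opcode_sequences = [
    "push mov call pop ret",   # toy benign sample
    "xor mov jmp call int",    # toy malicious sample
    "push push mov add ret",   # toy benign sample
    "xor xor jmp int int",     # toy malicious sample
]
labels = [0, 1, 0, 1]  # 0 = benign, 1 = malicious

# Build 2-gram opcode frequency vectors.
vectorizer = CountVectorizer(analyzer="word", ngram_range=(2, 2))
X = vectorizer.fit_transform(opcode_sequences)

X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.5, random_state=0
)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```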


Author(s):  
C. Guney ◽  
O. Akinci ◽  
K. Çamoğlu

Abstract. Technological developments have paved the way for courses, training and assessments in education and employment to be delivered online and remotely. Over the years, the demand for online distance learning has increased rapidly; it has proved to be a necessity and became widespread during the COVID-19 pandemic. As the example of massive open online courses shows, although learning can be carried out remotely online, perhaps the most important problem is how to assess the relevant courses safely and reliably. Thus, remote online proctoring is becoming increasingly popular and necessary. In this study, the issue of conducting examinations remotely online via internet-connected video and audio communication is evaluated. Furthermore, a solution called vProctor was developed, using artificial learning, to help eliminate deficiencies in remote online proctoring. Overall, it was observed that the proposed solution was able to detect inappropriate behaviours such as cheating in online assessments.
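
As one hedged illustration of the kind of video check such a proctoring system might perform (not the vProctor implementation itself), the sketch below flags webcam frames in which no face or more than one face is visible, using OpenCV's bundled Haar cascade; the thresholds and camera handling are assumptions.

```python
# Illustrative proctoring check: flag frames with zero or multiple faces.
# Uses OpenCV's bundled frontal-face Haar cascade; parameters are assumptions.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def frame_is_suspicious(frame) -> bool:
    """Return True when the frame shows zero faces or more than one face."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return len(faces) != 1

if __name__ == "__main__":
    cap = cv2.VideoCapture(0)   # default webcam
    ok, frame = cap.read()
    if ok:
        print("suspicious frame" if frame_is_suspicious(frame) else "frame looks normal")
    cap.release()
```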


2021 ◽  
Author(s):  
◽  
Syed Naqvi

For a diverse range of applications in machine vision, from social media searches to robotic home care providers, it is important to replicate the mechanism by which the human brain selects the most important visual information while suppressing the remaining non-usable information. Many computational methods attempt to model this process by following the traditional model of visual attention, which involves feature extraction, conditioning and combination to capture this behaviour of human visual attention. Consequently, the model has inherent design choices at its various stages: selecting parameters for the feature computation process, setting a conditioning approach, assigning feature importance and setting a combination approach. Despite rapid research and substantial improvements in benchmark performance, the performance of many models depends upon tuning these design choices in an ad hoc fashion. These design choices are also heuristic in nature, resulting in good performance only in certain settings. Consequently, many such models exhibit low robustness to difficult stimuli and the complexities of real-world imagery.

Machine learning and optimisation techniques have long been used to increase the generalisability of a system to unseen data. Surprisingly, artificial learning techniques have not been investigated to their full potential to improve the generalisation of visual attention methods. The proposed thesis is that artificial learning can increase the generalisability of the traditional model of visual attention by effective selection and optimal combination of features. The following new techniques have been introduced at various stages of the traditional model of visual attention to improve its generalisation performance, specifically on challenging cases of saliency detection:

1. Joint optimisation of feature-related parameters and feature importance weights is introduced for the first time to improve the generalisation of the traditional model of visual attention. To evaluate the joint learning hypothesis, a new method, GAOVSM, is introduced for the task of eye fixation prediction. By finding the relationships between feature-related parameters and feature importance, the developed method improves the generalisation performance of baseline methods that employ human-encoded parameters.

2. Spectral-matting-based figure-ground segregation (FGS) is introduced to overcome the artifacts encountered by region-based salient object detection approaches. By suppressing unwanted background information and assigning saliency to object parts in a uniform manner, the developed FGS approach overcomes the limitations of region-based approaches.

3. Joint optimisation of feature computation parameters and feature importance weights is introduced for the first time for the optimal combination of FGS with complementary features for salient object detection. By learning feature-related parameters and their respective importance at multiple segmentation thresholds, and by considering the performance gaps among features, the developed FGSopt method improves the object detection performance of the FGS technique and also improves upon several state-of-the-art salient object detection models.

4. The introduction of multiple combination schemes/rules further extends the generalisability of the traditional attention model beyond joint-optimisation-based single rules. The introduction of feature-composition-based grouping of images enables the developed IGA method to autonomously identify an appropriate combination strategy for an unseen image. The results of a pair-wise rank-sum test confirm that the IGA method is significantly better than the deterministic and classification-based benchmark methods at the 99% confidence level. Extending this line of research, a novel relative encoding approach enables the adapted XCSCA method to group images having similar saliency prediction ability. By keeping track of previous inputs, the introduced action part of the XCSCA approach enables learning of generalised feature importance rules. Through more accurate grouping of images compared with IGA, generalised learnt rules and appropriate application of feature importance rules, the XCSCA approach improves upon the generalisation performance of the IGA method.

5. The introduced uniform saliency assignment and segmentation quality cues enable label-free evaluation of a feature/saliency map. By accurate ranking and effective clustering, the developed DFS method solves, for the first time in saliency detection, the complex problem of finding appropriate features for combination on an image-by-image basis. The DFS method enables ground-truth-free evaluation of saliency methods and advances the state of the art in data-driven saliency aggregation by detecting and deselecting redundant information.

The final contribution is that the developed methods are formed into a complete system, where analysis shows the effects of their interactions on the system. Based on the saliency prediction accuracy versus computational time trade-off, specialised variants of the proposed methods are presented, along with recommendations for further use by other saliency detection systems. This research has shown that artificial learning can increase the generalisation of the traditional model of attention by effective selection and optimal combination of features. Overall, this thesis has shown that it is the ability to autonomously segregate images based on their types, and the subsequent learning of appropriate combinations, that aid generalisation on difficult unseen stimuli.
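
As a rough, self-contained illustration of the feature-combination stage that the contributions above optimise, the sketch below normalises a few synthetic feature maps and blends them with importance weights; the maps, weights and normalisation are illustrative assumptions, not the thesis's learnt parameters or methods (GAOVSM, FGSopt, IGA, XCSCA, DFS).

```python
# Minimal sketch of weighted combination of feature maps into a saliency map.
# Feature maps and weights are synthetic stand-ins; the thesis learns both
# the feature parameters and the importance weights jointly.
import numpy as np

def normalise(fmap: np.ndarray) -> np.ndarray:
    """Scale a feature map to the [0, 1] range."""
    span = fmap.max() - fmap.min()
    return (fmap - fmap.min()) / span if span > 0 else np.zeros_like(fmap)

def combine(feature_maps, weights):
    """Weighted linear combination of normalised feature maps."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()            # importance weights sum to 1
    stacked = np.stack([normalise(f) for f in feature_maps])
    return np.tensordot(weights, stacked, axes=1)

rng = np.random.default_rng(0)
# Stand-ins for e.g. colour, intensity and orientation feature maps.
maps = [rng.random((48, 64)) for _ in range(3)]
saliency = combine(maps, weights=[0.5, 0.3, 0.2])
print(saliency.shape, float(saliency.min()), float(saliency.max()))
```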


2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Javier Ferney Castillo García ◽  
Jesús Hamilton Ortiz ◽  
Osamah Ibrahim Khalaf ◽  
Adrián David Valencia Hernández ◽  
Luis Carlos Rodríguez Timaná

The present work demonstrates the design and implementation of a human-safe, portable, noninvasive device capable of predicting type 2 diabetes, using electrical bioimpedance and biometric features to train an artificial learning machine with an active learning algorithm based on population selection. In addition, an API with a graphical interface allows prediction and storage of data when a person's characteristics are submitted. The results obtained show an accuracy higher than 90% with statistical significance (p < 0.05). The Kappa coefficient values were higher than 0.9, showing that the device has good predictive capacity and could support screening for type 2 diabetes. This development contributes to preventive medicine and makes it possible to determine, at low cost, comfortably, without medical preparation, and in less than 2 minutes, whether a person has type 2 diabetes.
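
A minimal sketch of the kind of evaluation reported above (accuracy and the Kappa coefficient), computed with scikit-learn on a synthetic stand-in for the bioimpedance and biometric features; the classifier and data are assumptions, and the active-learning scheme is not reproduced.

```python
# Sketch of reporting accuracy and Cohen's kappa for a binary screening model.
# Synthetic features stand in for bioimpedance + biometric measurements.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score, cohen_kappa_score

X, y = make_classification(n_samples=500, n_features=8, n_informative=5, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1)

clf = GradientBoostingClassifier(random_state=1).fit(X_train, y_train)
pred = clf.predict(X_test)
print("accuracy:", accuracy_score(y_test, pred))
print("kappa:   ", cohen_kappa_score(y_test, pred))
```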


2021 ◽  
Vol 4 (2) ◽  
pp. 221-230
Author(s):  
Zeliha Cagla Kuyumcu ◽  
Suhrab Ahadi ◽  
Hakan Aslan

The lives of approximately 1.3 million people are cut short every year as a result of road traffic crashes. Between 20 and 50 million people suffer non-fatal injuries, with many incurring a disability as a result of their injury. The risk of dying in a road traffic crash is more than 3 times higher in low-income countries than in high-income countries [1]. In Turkey, 18% of traffic accidents on urban roads in 2020 were related to pedestrian-vehicle collisions. In addition, pedestrians accounted for 20% of the deaths caused by accidents in 2020 [2]. This study applies several classifiers to forecast the number of injuries resulting from traffic accidents. The classifiers' performance ratios were also examined.
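
A hedged sketch of comparing several classifiers' performance, as the study does, on synthetic stand-in accident data; the actual features, injury classes and classifiers used in the paper are not specified here.

```python
# Compare a few classifiers with cross-validation on synthetic accident data.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB

# Stand-in accident features (speed, light condition, road type, ...) and
# injury-severity classes.
X, y = make_classification(n_samples=600, n_features=10, n_classes=3,
                           n_informative=6, random_state=2)

for name, clf in [("decision tree", DecisionTreeClassifier(random_state=2)),
                  ("k-NN", KNeighborsClassifier()),
                  ("naive Bayes", GaussianNB())]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```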


2021 ◽  
Vol 11 ◽  
pp. 103-112
Author(s):  
Cesar D. Lopez ◽  
Anastasia Gazgalis ◽  
Venkat Boddapati ◽  
Roshan P. Shah ◽  
H. John Cooper ◽  
...  

Author(s):  
Shamik Tiwari

Epiluminescence microscopy, more simply dermatoscopy, is an imaging-based process for examining skin lesions. Various sorts of skin ailments, for example melanoma, may be differentiated from these skin images. Because malignant melanoma can be fatal, an early diagnosis can affect the survival, length and quality of life of the affected person. Image recognition-based detection of different tissue classes is significant for implementing computer-aided diagnosis via histological images. Conventional image recognition requires handcrafted feature extraction before the application of machine learning. Today, with the progression of artificial learning, deep learning offers significant alternatives that overcome the complications of handcrafted feature extraction methods. A deep learning-based approach for the recognition of melanoma via a Capsule network is proposed here. The novel approach is compared with a multi-layer perceptron and a convolutional network, with the Capsule network model yielding a classification accuracy of 98.9%.
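
A full Capsule network is too long to sketch here, but the margin loss commonly used to train such networks (Sabour et al., 2017) is shown below in NumPy as a small, hedged illustration; the constants follow that paper and the capsule lengths are synthetic, so this is not the article's model.

```python
# Capsule-network margin loss in plain NumPy (constants from Sabour et al., 2017).
import numpy as np

def margin_loss(capsule_lengths, targets, m_plus=0.9, m_minus=0.1, lam=0.5):
    """Margin loss for class capsules.

    capsule_lengths: (batch, classes) lengths ||v_k|| of the output capsules.
    targets:         (batch, classes) one-hot class labels.
    """
    present = targets * np.maximum(0.0, m_plus - capsule_lengths) ** 2
    absent = lam * (1 - targets) * np.maximum(0.0, capsule_lengths - m_minus) ** 2
    return (present + absent).sum(axis=1).mean()

# Synthetic capsule lengths for two classes, e.g. [benign, melanoma].
lengths = np.array([[0.95, 0.05], [0.20, 0.80]])
onehot = np.array([[1, 0], [0, 1]])
print("loss:", margin_loss(lengths, onehot))
```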


2021 ◽  
Vol 30 ◽  
pp. 100522
Author(s):  
Ramesh Sekaran ◽  
Manikandan Ramachandran ◽  
Rizwan Patan ◽  
Fadi Al-Turjman

Author(s):  
Sameer Quazi ◽  
Rohit Jangi

Artificial learning and machine learning are playing a pivotal role in society, especially in the fields of medicinal chemistry and drug discovery. In particular, its algorithms, such as neural networks and other recurrent networks, drive this area. In this review, we consider the diverse uses of AI across the pharmaceutical industry, including drug discovery, drug repurposing, pharmaceutical drug development and clinical trials. In addition, the efficiency of these artificial or machine learning programs in reaching target drugs in a short time period, along with accurate dosing and cost-effectiveness, is also discussed. Numerous applications of AI in property prediction, such as ADMET, demonstrate the strength of this technology in QSAR. In the case of de novo synthesis, it results in the generation of novel drug molecules with unique designs, making this a promising field for drug design. Moreover, its involvement in synthesis planning, ease of synthesis and much more will contribute to automated drug discovery in the near future.
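
As a hedged sketch of the QSAR-style property prediction mentioned above, the snippet below fits a regressor on synthetic molecular descriptors to predict a property value; real pipelines would compute descriptors with a cheminformatics toolkit (e.g. RDKit), and none of this reflects the review's data or methods.

```python
# QSAR-style regression on synthetic molecular descriptors.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(3)
descriptors = rng.random((200, 6))                        # stand-ins for MW, logP, TPSA, ...
target = descriptors @ rng.random(6) + 0.1 * rng.standard_normal(200)  # synthetic property

X_train, X_test, y_train, y_test = train_test_split(descriptors, target, random_state=3)
model = RandomForestRegressor(n_estimators=200, random_state=3).fit(X_train, y_train)
print("R^2 on held-out molecules:", round(r2_score(y_test, model.predict(X_test)), 3))
```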

