Advances in Machine Learning & Artificial Intelligence
Latest Publications


TOTAL DOCUMENTS: 17 (FIVE YEARS: 17)

H-INDEX: 0 (FIVE YEARS: 0)

Published by Opast Group LLC

ISSN: 2769-545X

The electrocardiogram (ECG) is an important method for diagnosing abnormalities of the human heart. The large number of cardiac patients increases the workload of physicians, and an automatic computer-based detection system is needed to reduce it. In this study, a computer system for classifying ECG signals is presented. The MIT-BIH ECG arrhythmia database is used for the analysis. After the ECG signal is denoised in the preprocessing stage, features are extracted from the data. In the feature-extraction step, a decision tree is used, and a support vector machine (SVM) is constructed to classify the ECG signal into two categories, normal or abnormal. The results show that the system classifies the given ECG signal with 90% sensitivity.
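
A minimal sketch of the described pipeline is shown below, assuming feature vectors have already been extracted from denoised ECG beats; the synthetic data, feature dimensions and parameter choices are illustrative placeholders rather than the study's actual MIT-BIH setup.

```python
# Sketch only: placeholder features stand in for ECG-derived features.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))      # placeholder ECG-derived feature vectors
y = rng.integers(0, 2, size=500)    # 0 = normal, 1 = abnormal (placeholder labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# A decision tree ranks the features; SelectFromModel keeps the most informative
# ones, and the SVM performs the final normal/abnormal classification.
clf = make_pipeline(
    StandardScaler(),
    SelectFromModel(DecisionTreeClassifier(random_state=0)),
    SVC(kernel="rbf"),
)
clf.fit(X_train, y_train)
print("Sensitivity (recall on abnormal class):",
      recall_score(y_test, clf.predict(X_test), pos_label=1))
```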


Over time, an enormous quantity of data is being generated, which requires a shrewd technique for handling such a large database and smoothing the processes of data storage and dissemination. Storing and exploiting such big data volumes requires sufficiently capable systems with a proactive mechanism to meet the associated technological challenges. The traditional Distributed File System (DFS) falls short when handling dynamic variations and requires an undefined settling time. Therefore, to address these data-handling challenges, a proactive grid-based data management approach is proposed that arranges the data into many tiny chunks, called grids, and places them according to the currently available slots. Data durability and computation speed are aligned by designing data-dissemination and data-eligibility replacement algorithms. This approach substantially enhances the durability of data access and the writing speed. The performance was tested on numerous grid datasets: chunks were analysed over several iterations by fixing the initial chunk statistics, making a predefined chunk suggestion, and then relocating the chunks after substantial iterations. The chunks were found to be in an optimal node from the first iteration of replacement, covering more than 21% of the working clusters compared with the traditional approach.
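
As a purely illustrative sketch of the grid idea (the paper's actual dissemination and eligibility-replacement algorithms are not specified here), the snippet below splits a byte stream into fixed-size chunks and greedily places each chunk on the node with the most free slots; all names, sizes and data are hypothetical.

```python
# Hypothetical grid-style chunk placement, not the paper's algorithm.
from dataclasses import dataclass, field

CHUNK_SIZE = 4  # bytes per chunk (tiny, for illustration)

@dataclass
class Node:
    name: str
    capacity: int                       # number of chunk slots on this node
    chunks: list = field(default_factory=list)

    @property
    def free_slots(self) -> int:
        return self.capacity - len(self.chunks)

def split_into_chunks(data: bytes, size: int = CHUNK_SIZE):
    return [data[i:i + size] for i in range(0, len(data), size)]

def place_chunks(chunks, nodes):
    """Greedy placement: always pick the node with the most free slots."""
    for chunk in chunks:
        target = max(nodes, key=lambda n: n.free_slots)
        if target.free_slots == 0:
            raise RuntimeError("cluster is full")
        target.chunks.append(chunk)

nodes = [Node("node-1", 4), Node("node-2", 4), Node("node-3", 4)]
place_chunks(split_into_chunks(b"an example data block to spread"), nodes)
for n in nodes:
    print(n.name, [c.decode() for c in n.chunks])
```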


Time delays in systems have become an important phenomenon nowadays with regard to safety issues. The continuous delayed system proposed by A. Uçar is considered in this work. The detailed work concentrates on finding the behaviour of this continuous delayed system with respect to different system parameters. Self-written code is used to observe the behaviour of the system; it gives the flexibility to examine that behaviour in more depth. The system behaviour is observed over a very large range of parameters, and a comparison is made with other works. The results indicate that for a certain range of parameter values the system shows predictable behaviour, but beyond that range it goes into unpredictable chaotic behaviour. In addition, a parametric relation is shown for the same type of chaotic behaviour. It is expected that this finding will increase the understanding of the complex phenomena involved in delayed dynamical systems when safety is of prime importance.
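
The sketch below shows the kind of self-written simulation such a study relies on, assuming Uçar's prototype delayed equation dx/dt = δ·x(t−τ) − ε·x(t−τ)³ and simple forward-Euler integration with a history buffer; the equation form and the parameter values are assumptions for illustration, not those examined in the paper.

```python
# Assumed model: dx/dt = delta*x(t - tau) - eps*x(t - tau)**3 (Uçar prototype).
import numpy as np
import matplotlib.pyplot as plt

delta, eps, tau = 1.0, 1.0, 1.5       # illustrative parameter values
dt, t_end = 0.001, 200.0
n_steps = int(t_end / dt)
n_delay = int(tau / dt)               # how many steps back x(t - tau) lies

x = np.empty(n_steps + 1)
x[:n_delay + 1] = 0.5                 # constant initial history on [-tau, 0]

for k in range(n_delay, n_steps):
    x_del = x[k - n_delay]            # delayed state x(t - tau)
    x[k + 1] = x[k] + dt * (delta * x_del - eps * x_del**3)

t = np.arange(n_steps + 1) * dt
plt.plot(t[-50000:], x[-50000:])      # inspect late-time behaviour for chaos
plt.xlabel("t"); plt.ylabel("x(t)")
plt.show()
```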


The Exponentiated Gumbel (EG) distribution has been proposed to capture aspects of data that the Gumbel distribution fails to specify; it has an increasing hazard rate. The Exponentiated Gumbel distribution has applications in hydrology, meteorology, climatology, insurance, finance and geology, among many others. In this paper, the mathematical and statistical characteristics of the Gumbel and Exponentiated Gumbel distributions are first presented, and the applications of these distributions are then studied using real data sets. The first moment about the origin and the moments about the mean have been obtained, and expressions for skewness and kurtosis have been given. Estimation of the parameters has been discussed using the method of maximum likelihood. Finally, two applications of the Gumbel and Exponentiated Gumbel distributions have been discussed with two real lifetime data sets. The results also confirm the suitability of the Exponentiated Gumbel distribution for real data.
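
A brief sketch of maximum-likelihood fitting for an Exponentiated Gumbel model follows, assuming the common form F(x) = 1 − {1 − exp(−e^(−(x−μ)/σ))}^α; the paper's exact parameterization may differ, and the simulated data here are placeholders rather than the real lifetime data sets.

```python
# Assumed EG parameterization; data are simulated placeholders.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import gumbel_r

def eg_neg_loglik(params, x):
    mu, log_sigma, log_alpha = params
    sigma, alpha = np.exp(log_sigma), np.exp(log_alpha)   # keep sigma, alpha > 0
    z = (x - mu) / sigma
    g = np.exp(-np.exp(-z))                                # Gumbel CDF
    log_pdf = (np.log(alpha) - np.log(sigma) - z - np.exp(-z)
               + (alpha - 1.0) * np.log1p(-g))
    return -np.sum(log_pdf)

rng = np.random.default_rng(1)
data = gumbel_r.rvs(loc=2.0, scale=1.5, size=300, random_state=rng)  # placeholder

res = minimize(eg_neg_loglik, x0=np.array([np.median(data), 0.0, 0.0]),
               args=(data,), method="Nelder-Mead")
mu_hat, sigma_hat, alpha_hat = res.x[0], np.exp(res.x[1]), np.exp(res.x[2])
print(f"mu={mu_hat:.3f}, sigma={sigma_hat:.3f}, alpha={alpha_hat:.3f}, "
      f"negloglik={res.fun:.2f}")
```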


Software defect prediction is a significant activity in every software firm. It helps in producing quality software through reliable defect prediction, defect elimination, and identification of modules that are susceptible to defects. Several researchers have proposed different software defect prediction approaches in the past; however, these conventional approaches suffer from low classification accuracy and are time-consuming and laborious. This paper aims to develop a novel multi-model ensemble machine-learning approach for software defect prediction. The ensemble technique can reduce inconsistency between training and test datasets and eliminate bias in the training and testing phases of the model, thereby overcoming the shortcomings that have characterized existing techniques for software defect prediction. To address these shortcomings, this paper proposes a new ensemble machine-learning model for software defect prediction using k-Nearest Neighbour (kNN), a Generalized Linear Model with Elastic Net Regularization (GLMNet), and Linear Discriminant Analysis (LDA), with Random Forest as the base learner. Experiments were conducted with the proposed model on the CM1, JM1, KC3, and PC3 datasets from the NASA PROMISE repository using the RStudio tool. The ensemble technique achieved 87.69% for the CM1 dataset, 81.11% for JM1, 90.70% for PC3, and 94.74% for KC3. The performance of the proposed system was compared with that of other existing techniques in the literature in terms of AUC: the ensemble technique achieved 87%, which is better than the seven state-of-the-art techniques under consideration. On average, the proposed model achieved an overall prediction accuracy of 88.56% across all datasets used in the experiments. The results demonstrate that the ensemble model effectively predicts defects in the PROMISE datasets, which are notorious for their noisy features and high dimensionality. This shows that ensemble machine learning is promising and is the future of software defect prediction.
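
An analogous stacked ensemble can be sketched in Python as below (the study itself was run in RStudio on the NASA PROMISE data, which are not loaded here); reading "Random Forest as the base learner" as the model that combines the kNN, elastic-net (GLMNet-style) and LDA predictions is an assumption, and the generated data are placeholders for defect metrics.

```python
# Hedged Python analogue of the described ensemble, not the authors' R code.
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Placeholder data standing in for PROMISE-style software metrics.
X, y = make_classification(n_samples=1000, n_features=21, weights=[0.8, 0.2],
                           random_state=0)

ensemble = StackingClassifier(
    estimators=[
        ("knn", make_pipeline(StandardScaler(), KNeighborsClassifier())),
        ("glmnet", make_pipeline(StandardScaler(),
                                 LogisticRegression(penalty="elasticnet",
                                                    solver="saga", l1_ratio=0.5,
                                                    max_iter=5000))),
        ("lda", LinearDiscriminantAnalysis()),
    ],
    final_estimator=RandomForestClassifier(random_state=0),
    cv=5,
)

print("Mean ROC AUC:",
      cross_val_score(ensemble, X, y, cv=5, scoring="roc_auc").mean())
```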


Companies involved in providing legal research services to lawyers, such as LexisNexis or Westlaw, have rapidly incorporated natural language processing (NLP) into their database systems to deal with the massive amounts of legal text contained within them. These NLP techniques, which analyse natural language texts by taking advantage of methods developed in the fields of computational linguistics and artificial intelligence, have potential applications ranging from text summarization all the way to the prediction of court judgments. However, a potential concern with the use of this technology is that professionals will come to depend on systems, over which they have little control or understanding, as a source of knowledge. While recent strides in AI and deep learning have increased the effectiveness of NLP techniques, the decision-making processes of these algorithms have progressively become less intuitive for humans to understand. Concerns about the interpretability of proprietary legal services such as LexisNexis are more pertinent than ever. The following survey of current NLP techniques shows that one potential avenue for making NLP algorithms more explainable is to incorporate symbol-based methods that take advantage of knowledge models generated for specific domains. An example of this can be seen in NLP techniques developed to facilitate the retrieval of inventive information from patent applications.



A dynamics experiment indicated that Newton's third law is wrong. So where does the mistake lie? This demands the deepest analysis of action and reaction, of their meaning and their principle. The result is shocking: Newton's third law really is wrong!


In this paper, from a communication-channel-coding perspective, we present both a theoretical and a practical discussion of AI's uncertainty, capacity and evolution for pattern classification, based on the classical Rademacher complexity and Shannon entropy. First, AI capacity is defined as in communication channels. It is shown qualitatively that the classical Rademacher complexity and the Shannon rate in communication theory are closely related by their definitions. Secondly, based on Shannon's mathematical theory of communication coding, we derive several sufficient and necessary conditions for an AI's error rate approaching zero in classification problems. A 1/2 criterion on Shannon entropy is derived so that the error rate can approach zero, or is zero, for AI pattern-classification problems. Last but not least, we illustrate our analysis and theory with examples of AI pattern classification in which the error rate approaches zero or is zero. Impact Statement: Error-rate control of AI pattern classification is crucial in many life-critical AI applications. AI uncertainty, capacity and evolution are investigated in this paper. Sufficient and necessary conditions for an AI's error rate approaching zero are derived based on Shannon's communication coding theory. Zero-error-rate and zero-error-rate-approaching AI design methodologies for pattern classification are illustrated using Shannon's coding theory. Our method shows how to control the error rate of an AI, how to measure the capacity of an AI, and how to evolve AI to higher levels. Index Terms: Rademacher complexity, Shannon theory, Shannon entropy, Vapnik-Chervonenkis (VC) dimension.
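
For reference, the classical empirical Rademacher complexity that the abstract relates to Shannon's rate can be estimated by Monte Carlo, as sketched below; the finite class of 1-D threshold classifiers and the sample are purely illustrative assumptions, and this is not the paper's 1/2 entropy criterion.

```python
# Monte Carlo estimate of R_hat(H) = E_sigma[ sup_{h in H} (1/n) sum_i sigma_i h(x_i) ].
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=50)               # sample points (illustrative)
thresholds = np.linspace(0.0, 1.0, 21)           # finite class of 1-D thresholds

# Precompute h_t(x_i) = +1 if x_i >= t else -1, one row per hypothesis.
H = np.where(x[None, :] >= thresholds[:, None], 1.0, -1.0)

def empirical_rademacher(H, n_draws=2000, rng=rng):
    n = H.shape[1]
    total = 0.0
    for _ in range(n_draws):
        sigma = rng.choice([-1.0, 1.0], size=n)  # Rademacher signs
        total += np.max(H @ sigma) / n           # sup over the hypothesis class
    return total / n_draws

print("Estimated empirical Rademacher complexity:", empirical_rademacher(H))
```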


Newton's third law has been proved to be wrong: there is experimental evidence on video and a rigorous proof in a strong paper. Building on this, it is further obtained that Newton's second law is also wrong. Therefore, in correcting Newton's laws, a new second law of motion and a new third law of motion are to be produced. Together with Newton's first law, these new three laws of motion will become more accurate and more efficient mechanical principles, guiding the derivation and establishment of a new mechanical system. No one would expect Newton's second law and Newton's third law to be wrong, but a surprising discovery was produced in a simple mechanics experiment. The earliest experiments showed that when two objects interact, the acting force and the reaction force are not of the same magnitude; therefore, Newton's third law appears to be wrong. Using conventional methods, and considering that objects with different masses also have different inertia, a seemingly reasonable explanation can be given for the unequal action and reaction forces. But when it was further discovered that Newton's second law was also wrong, the introduction of the new second law made the establishment of the new third law complete. A series of extremely important new discoveries were successively produced and realized.


The COVID-19 pandemic has been causing a massive strain in different sectors around the globe, especially in the health care systems of many countries. Artificial Intelligence has found its way into health care by helping to find a cure or vaccine, screening out medicines that could be promising for a cure, and also by containing the virus, predicting highly affected areas, and limiting its spread. Many AI-based use cases have been successful in monitoring the spread and locking down areas that AI algorithms predicted to be at high risk. Broadly speaking, AI involves "the ability of machines to emulate human thinking, reasoning and decision-making".

