Analysis and application of martial arts video image based on fuzzy clustering algorithm

Author(s):  
Chao Zhao ◽  
Hongling Yang ◽  
Xiaoqian Li ◽  
Rui Li ◽  
ShouCun Zheng

The intelligent scheduling algorithm for hierarchical data migration is a key issue in data management. On mass media content platforms, discovering the usage patterns of content objects is the basis for scheduling data migration. We introduce QPop, a dimensionality-reduced representation of media content usage logs, as the content-object feature used for discovering usage patterns. On this basis, a QPop clustering algorithm with finer time segmentation is proposed to improve mining performance. We employed the standard C-means algorithm as the clustering core and ran a segmented experimental mining process to collect the QPop increments arising in practical applications. The results show that the improved algorithm is robust on cluster cohesion and other indicators, performing slightly better than the basic model.
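
A minimal sketch of the general idea, not the authors' implementation: the segment length, the PCA-based reduction standing in for QPop, and KMeans standing in for the C-means core are all assumptions made purely for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def segment_and_cluster(usage_counts, segment_len=24, n_components=3, n_clusters=4):
    """usage_counts: (n_objects, n_hours) per-object access counts over time."""
    n_objects, n_hours = usage_counts.shape
    labels_per_segment = []
    for start in range(0, n_hours - segment_len + 1, segment_len):
        segment = usage_counts[:, start:start + segment_len]
        # Reduce each object's usage in this window to a few components ("QPop"-style).
        reduced = PCA(n_components=n_components).fit_transform(segment)
        # Cluster content objects by their reduced usage pattern in this segment.
        labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(reduced)
        labels_per_segment.append(labels)
    return np.stack(labels_per_segment)          # (n_segments, n_objects)

# Example: 200 content objects observed hourly over one week.
rng = np.random.default_rng(0)
demo_log = rng.poisson(lam=2.0, size=(200, 7 * 24))
print(segment_and_cluster(demo_log).shape)       # (7, 200)
```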

Author(s):  
Yang Xindi ◽  
Du Huanran



SLEEP ◽  
2021 ◽  
Vol 44 (Supplement_2) ◽  
pp. A182-A182
Author(s):  
Yoav Nygate ◽  
Sam Rusk ◽  
Chris Fernandez ◽  
Nick Glattard ◽  
Nathaniel Watson ◽  
...  

Abstract Introduction Improving positive airway pressure (PAP) adherence is crucial to obstructive sleep apnea (OSA) treatment success. We have previously shown the potential of utilizing Deep Neural Network (DNN) models to accurately predict future PAP usage, based on predefined compliance phenotypes, to enable early patient outreach and interventions. These phenotypes were limited, based solely on usage patterns. We propose an unsupervised learning methodology for redefining these adherence phenotypes in order to assist with the creation of more precise and personalized patient categorization. Methods We trained a DNN model to predict PAP compliance based on daily usage patterns, where compliance was defined as the requirement for 4 hours of PAP usage a night on over 70% of the recorded nights. The DNN model was trained on N=14,000 patients with 455 days of daily PAP usage data. The latent dimension of the trained DNN model was used as a feature vector containing rich usage pattern information content associated with overall PAP compliance. Along with the 455 days of daily PAP usage data, our dataset included additional patient demographics such as age, sex, apnea-hypopnea index, and BMI. These parameters, along with the extracted usage patterns, were applied together as inputs to an unsupervised clustering algorithm. The clusters that emerged from the algorithm were then used as indicators for new PAP compliance phenotypes. Results Two main clusters emerged: highly compliant and highly non-compliant. Furthermore, in the transition between the two main clusters, a sparse cluster of struggling patients emerged. This method allows for the continuous monitoring of patients as they transition from one cluster to the other. Conclusion In this research, we have shown that by utilizing historical PAP usage patterns along with additional patient information we can identify PAP specific adherence phenotypes. Clinically, this allows focus of PAP adherence program resources to be targeted early on to patients susceptible to treatment non-adherence. Furthermore, the transition between the two main phenotypes can also indicate when personalized intervention is necessary to maximize treatment success and outcomes. Lastly, providers can transition patients in the highly non-compliant group more quickly to alternative therapies. Support (if any):
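
A loose illustrative sketch of the described pipeline (the abstract does not name the clustering algorithm): a learned usage embedding is concatenated with demographics and clustered into candidate adherence phenotypes. The embedding source, the feature layout, and the choice of three clusters are assumptions.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

def adherence_phenotypes(latent_usage, demographics, n_phenotypes=3):
    """latent_usage: (n_patients, d) latent vectors from a usage-prediction DNN.
    demographics: (n_patients, 4) columns such as age, sex, AHI and BMI."""
    features = np.hstack([latent_usage, demographics])
    features = StandardScaler().fit_transform(features)    # put every input on one scale
    model = KMeans(n_clusters=n_phenotypes, n_init=10, random_state=0)
    return model.fit_predict(features)                     # phenotype label per patient

# Toy example with random stand-in data.
rng = np.random.default_rng(1)
labels = adherence_phenotypes(rng.normal(size=(500, 32)), rng.normal(size=(500, 4)))
print(np.bincount(labels))                                 # cluster sizes
```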


2019 ◽  
Vol 5 (11) ◽  
pp. 85 ◽  
Author(s):  
Ayan Chatterjee ◽  
Peter W. T. Yuen

This paper proposes a simple yet effective method for improving the efficiency of sparse coding dictionary learning (DL), with the implication of enhancing the practical usefulness of compressive sensing (CS) technology in applications such as hyperspectral imaging (HSI) scene reconstruction. CS is a technique that allows sparse signals to be decomposed into a sparse representation "a" over a dictionary D. The quality of the learnt dictionary has a direct impact on the quality of the end results, e.g., the HSI scene reconstructions. This paper proposes constructing a concise yet comprehensive dictionary from the cluster centres of the input dataset, after which a greedy approach is adopted to learn all elements within this dictionary. The proposed method consists of an unsupervised clustering algorithm (K-Means) coupled with a sparse coding dictionary (SCD) method, namely the greedy orthogonal matching pursuit (OMP) algorithm, for the dictionary learning. The effectiveness of the proposed K-Means Sparse Coding Dictionary (KMSCD) is illustrated through reconstructions of several publicly available HSI scenes. The results show that the proposed KMSCD achieves ~40% greater accuracy, 5 times faster convergence, and twice the robustness of the classic Sparse Coding Dictionary (C-SCD) method, which adopts random sampling of the data for dictionary learning. Over the five data sets employed in this study, the proposed KMSCD reconstructs these scenes with mean accuracies approximately 20–500% better than all competing algorithms adopted in this work. Furthermore, the reconstruction of trace materials in the scene has been assessed: the KMSCD recovers them ~12% better than the C-SCD. These results suggest that building the dictionary with a simple clustering method substantially enhances scene reconstruction. When the proposed KMSCD is combined with fast non-negative orthogonal matching pursuit (FNNOMP) to constrain the maximum number of materials coexisting in a pixel to four, experiments show that it performs approximately ten times better than when constrained by the widely employed TMM algorithm. This suggests that the proposed DL method using KMSCD together with FNNOMP would be more suitable as the material allocation module of HSI scene simulators such as the CameoSim package.
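
A hedged sketch of the KMSCD idea as summarized above, not the authors' code: K-Means cluster centres of the training spectra form the dictionary, and OMP sparse-codes new pixels against it. The band count, dictionary size, and sparsity level are placeholders.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import sparse_encode
from sklearn.preprocessing import normalize

def kmscd_fit(pixels, n_atoms=64):
    """pixels: (n_samples, n_bands) training spectra; returns a unit-norm dictionary."""
    centres = KMeans(n_clusters=n_atoms, n_init=10, random_state=0).fit(pixels).cluster_centers_
    return normalize(centres)                              # (n_atoms, n_bands)

def kmscd_reconstruct(pixels, dictionary, n_nonzero=4):
    # OMP performs the greedy sparse-coding step; 4 atoms per pixel mirrors the
    # "at most four materials per pixel" constraint mentioned in the abstract.
    codes = sparse_encode(pixels, dictionary, algorithm="omp", n_nonzero_coefs=n_nonzero)
    return codes @ dictionary

rng = np.random.default_rng(2)
train = rng.random((1000, 100))                            # 1000 spectra, 100 bands
D = kmscd_fit(train)
print(kmscd_reconstruct(train[:10], D).shape)              # (10, 100)
```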


Algorithms ◽  
2020 ◽  
Vol 13 (7) ◽  
pp. 158
Author(s):  
Tran Dinh Khang ◽  
Nguyen Duc Vuong ◽  
Manh-Kien Tran ◽  
Michael Fowler

Clustering is an unsupervised machine learning technique with many practical applications that has gathered extensive research interest. Aside from deterministic or probabilistic techniques, fuzzy C-means clustering (FCM) is also a common clustering technique. Since the advent of the FCM method, many improvements have been made to increase clustering efficiency. These improvements focus on adjusting the membership representation of elements in the clusters, or on fuzzifying and defuzzifying techniques, as well as the distance function between elements. This study proposes a novel fuzzy clustering algorithm using multiple different fuzzification coefficients depending on the characteristics of each data sample. The proposed fuzzy clustering method has similar calculation steps to FCM with some modifications. The formulas are derived to ensure convergence. The main contribution of this approach is the utilization of multiple fuzzification coefficients as opposed to only one coefficient in the original FCM algorithm. The new algorithm is then evaluated with experiments on several common datasets and the results show that the proposed algorithm is more efficient compared to the original FCM as well as other clustering methods.
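
A minimal NumPy sketch of the idea, assuming the simplest reading in which each sample's coefficient m_k directly replaces the single coefficient m in the classic FCM updates; the exact update rules that guarantee convergence are those derived in the paper.

```python
import numpy as np

def fcm_multi_m(X, n_clusters, m_per_sample, n_iter=100, eps=1e-9):
    """X: (n, d) data; m_per_sample: (n,) fuzzification coefficients, all > 1."""
    n = X.shape[0]
    rng = np.random.default_rng(0)
    centers = X[rng.choice(n, n_clusters, replace=False)]
    for _ in range(n_iter):
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + eps   # (n, c)
        # Membership update: u_ik = 1 / sum_j (d_ik / d_jk)^(2 / (m_k - 1)).
        power = (2.0 / (m_per_sample - 1.0))[:, None, None]
        ratios = (dist[:, :, None] / dist[:, None, :]) ** power                    # (n, c, c)
        u = 1.0 / ratios.sum(axis=2)
        # Centre update, weighting each sample by u_ik raised to its own m_k.
        w = u ** m_per_sample[:, None]
        centers = (w.T @ X) / w.sum(axis=0)[:, None]
    return u, centers

# Example: two blobs, with every sample given its own (here identical) coefficient.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
memberships, centres = fcm_multi_m(X, n_clusters=2, m_per_sample=np.full(100, 2.0))
print(np.round(centres, 2))
```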


2020 ◽  
Vol 17 (5) ◽  
pp. 2024-2029
Author(s):  
E. Bijolin Edwin ◽  
M. Roshni Thanka

The evolution of information systems implies new applications and the need to migrate data from a previous application to a new one. At the same time, some organizations may need to replicate data from one technology to another in order to have backup systems and flexible load-balancing strategies. Distributing the load as uniformly as possible across a larger number of simpler, closer nodes makes big data and large workloads easier to manage and serve. The ultimate goal is to balance the load across the cloud and make the internet less cloud-dependent by keeping data available closer to the user end. One of the most challenging steps in deploying an application infrastructure in the cloud involves the physics of moving data into and out of the cloud. Amazon Web Services (AWS) provides a number of services for moving data, and each solution offers different levels of speed, security, cost, and performance. A further difficulty stems from the fact that almost all typical distributed storage systems only provide data-amount-oriented balancing mechanisms without considering the differing access load of the data. To eliminate system bottlenecks and optimize resource utilization, such distributed storage systems need to employ a workload-balancing and adaptive resource management framework. We propose an enhanced replication scheduling framework that balances the replicated data and handles overload through a data migration mechanism, giving greater data efficiency and improved performance while replicated data is migrated. To carry out the data migration itself, we propose an ant colony algorithm that migrates data safely from one end to the other. This improves efficiency and cost, and reduces the time needed for the data to be migrated and evenly balanced.
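
A loose, hypothetical sketch of an ant-colony-style search for a balanced migration plan; the block sizes, pheromone layout, heuristic, and parameters are all assumptions for illustration, not the paper's algorithm.

```python
import numpy as np

def aco_migrate(block_sizes, n_nodes, n_ants=20, n_iter=50, alpha=1.0, beta=2.0, rho=0.1):
    """Assign data blocks to storage nodes while keeping per-node load balanced."""
    n_blocks = len(block_sizes)
    pheromone = np.ones((n_blocks, n_nodes))
    best_assign, best_cost = None, np.inf
    rng = np.random.default_rng(0)
    for _ in range(n_iter):
        for _ant in range(n_ants):
            load = np.zeros(n_nodes)
            assign = np.empty(n_blocks, dtype=int)
            for b in np.argsort(block_sizes)[::-1]:       # place large blocks first
                heuristic = 1.0 / (1.0 + load)            # prefer lightly loaded nodes
                prob = (pheromone[b] ** alpha) * (heuristic ** beta)
                prob /= prob.sum()
                node = rng.choice(n_nodes, p=prob)
                assign[b] = node
                load[node] += block_sizes[b]
            cost = load.max() - load.min()                # imbalance of this plan
            if cost < best_cost:
                best_assign, best_cost = assign.copy(), cost
        pheromone *= (1.0 - rho)                          # evaporation
        pheromone[np.arange(n_blocks), best_assign] += 1.0 / (1.0 + best_cost)  # reinforce best plan
    return best_assign, best_cost

sizes = np.random.default_rng(4).integers(1, 100, size=40)
plan, imbalance = aco_migrate(sizes, n_nodes=5)
print(imbalance)
```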


Algorithms ◽  
2021 ◽  
Vol 14 (9) ◽  
pp. 258
Author(s):  
Tran Dinh Khang ◽  
Manh-Kien Tran ◽  
Michael Fowler

Clustering is an unsupervised machine learning method with many practical applications that has gathered extensive research interest. It is a technique for dividing data elements into clusters such that elements in the same cluster are similar. Clustering belongs to the group of unsupervised machine learning techniques, meaning that there is no information about the labels of the elements. However, when some knowledge about the data points is available in advance, it is beneficial to use a semi-supervised algorithm. Among the many clustering techniques available, fuzzy C-means clustering (FCM) is a common one. To make the FCM algorithm a semi-supervised method, it was proposed in the literature to use an auxiliary matrix to adjust the membership grade of the elements and force them into certain clusters during the computation. In this study, instead of using the auxiliary matrix, we propose to use multiple fuzzification coefficients to implement the semi-supervision component. After deriving the proposed semi-supervised fuzzy C-means clustering algorithm with multiple fuzzification coefficients (sSMC-FCM), we demonstrate the convergence of the algorithm and validate the efficiency of the method through a numerical example.
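
A loose illustration of how the semi-supervision signal could be packaged as data under such a scheme, assuming labelled elements receive their own fuzzification coefficient and seed the initial cluster centres; the actual update rules and coefficient choices are those derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(5)
X = np.vstack([rng.normal(0, 1, (60, 2)), rng.normal(5, 1, (60, 2))])
labels = np.full(len(X), -1)               # -1 marks an unlabelled element
labels[:5], labels[60:65] = 0, 1           # a handful of elements with known clusters

m_unsup, m_sup = 2.0, 1.5                  # assumed distinct fuzzification coefficients
m_per_sample = np.where(labels >= 0, m_sup, m_unsup)

# Labelled class means can anchor the initial cluster centres.
init_centres = np.vstack([X[labels == c].mean(axis=0) for c in (0, 1)])
print(m_per_sample[:8], init_centres, sep="\n")
```

These arrays would then drive an FCM-style iteration in which each element's own coefficient replaces the single coefficient of the original algorithm.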


2012 ◽  
Vol 39 (6) ◽  
pp. 1211-1224 ◽  
Author(s):  
Piotr Kulczycki ◽  
Malgorzata Charytanowicz ◽  
Piotr A. Kowalski ◽  
Szymon Lukasik

2018 ◽  
Vol 29 (1) ◽  
pp. 753-772 ◽  
Author(s):  
Omar A. Bari ◽  
Arvin Agah

Abstract Event studies in finance have focused on traditional news headlines to assess the impact an event has on a traded company. The increased proliferation of news and information produced by social media content has disrupted this trend. Although researchers have begun to identify trading opportunities from social media platforms, such as Twitter, almost all techniques use a general sentiment from large collections of tweets. Though useful, general sentiment does not identify the specific events likely to affect stock prices. This work presents an event clustering algorithm, utilizing natural language processing techniques to generate newsworthy events from Twitter, which have the potential to influence stock prices in the same manner as traditional news headlines. The event clustering method addresses the effects of pre-news and lagged news, two peculiarities that appear when connecting trading and news, regardless of the medium. Pre-news refers to cases where stock prices move in advance of a news release. Lagged news refers to follow-up or late-arriving news, which adds redundancy when making trading decisions. For events generated by the proposed clustering algorithm, we incorporate event studies and machine learning to produce an actionable system that can guide trading decisions. The recommended prediction algorithms provide investing strategies with profitable risk-adjusted returns. The suggested language models present annualized Sharpe ratios (risk-adjusted returns) in the 5–11 range, while time-series models produce ratios in the 2–3 range (without transaction costs). The distribution of returns confirms the encouraging Sharpe ratios by identifying most outliers as positive gains. Additionally, machine learning metrics of precision, recall, and accuracy are discussed alongside financial metrics in hopes of bridging the gap between academia and industry in the field of computational finance.
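
A rough, hypothetical sketch of the event-clustering step (not the authors' pipeline): tweets are embedded with TF-IDF and grouped so that messages about the same happening share a cluster. The vectorizer settings, cluster count, and example tweets are placeholders.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import AgglomerativeClustering

tweets = [
    "Company X announces record quarterly earnings",
    "Record earnings reported by Company X this quarter",
    "Company Y recalls flagship product over safety issue",
    "Safety recall hits Company Y flagship device",
]
vectors = TfidfVectorizer(stop_words="english").fit_transform(tweets).toarray()
event_ids = AgglomerativeClustering(n_clusters=2).fit_predict(vectors)
print(event_ids)   # tweets describing the same event should share an id
```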


2017 ◽  
Vol 43 (1) ◽  
pp. 181-200 ◽  
Author(s):  
Giovanni Stilo ◽  
Paola Velardi

Hashtags are creative labels used in micro-blogs to characterize the topic of a message/discussion. Regardless of the use for which they were originally intended, hashtags cannot be used as a means to cluster messages with similar content. First, because hashtags are created in a spontaneous and highly dynamic way by users in multiple languages, the same topic can be associated with different hashtags, and conversely, the same hashtag may refer to different topics in different time periods. Second, contrary to common words, hashtag disambiguation is complicated by the fact that no sense catalogs (e.g., Wikipedia or WordNet) are available; and, furthermore, hashtag labels are difficult to analyze, as they often consist of acronyms, concatenated words, and so forth. A common way to determine the meaning of hashtags has been to analyze their context, but, as we have just pointed out, hashtags can have multiple and variable meanings. In this article, we propose a temporal sense clustering algorithm based on the idea that semantically related hashtags have similar and synchronous usage patterns.
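
A toy illustration of the underlying intuition, not the paper's algorithm: each hashtag is represented by its daily usage series, and hashtags whose series are synchronous (small correlation distance) end up in the same group. The hashtags and series are invented for the example.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(6)
days = 60
burst_a = np.abs(np.sin(np.linspace(0, 6, days)))          # usage profile of one topic
burst_b = np.abs(np.cos(np.linspace(0, 6, days)))          # usage profile of another topic
series = {
    "#worldcup": burst_a + 0.1 * rng.random(days),
    "#wc":       burst_a + 0.1 * rng.random(days),
    "#election": burst_b + 0.1 * rng.random(days),
    "#vote":     burst_b + 0.1 * rng.random(days),
}
X = np.array(list(series.values()))
dist = pdist(X, metric="correlation")                       # 1 - Pearson correlation
groups = fcluster(linkage(dist, method="average"), t=2, criterion="maxclust")
print(dict(zip(series, groups)))                            # synchronous hashtags share a group
```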


2018 ◽  
Vol 72 ◽  
pp. 01006
Author(s):  
Sheng-Ta Chen ◽  
Chi-Lun Liu ◽  
Ming-Hung Lee ◽  
Min Fung ◽  
Wei-Guang Teng

In the electricity market, the real-time balance of electricity generation and consumption is a central task. In view of this, power providers usually sign contracts with their critical consumers (i.e., usually large-scale industrial companies) for managing their capacity demands. On the other hand, aggregators group commercial and residential consumers, and integrate their demands to negotiate with power providers. With a proper grouping of numerous electricity consumers, aggregators help to ensure a stable electricity supply and reduce the burden of managing many consumers. In this work, we thus propose a novel data clustering approach to group complementary consumers based on their usage patterns (i.e., daily electricity consumption curves). Furthermore, we incorporate the discrete wavelet transform to speed up the clustering process. Specifically, approximations reconstructed from only a few wavelet coefficients may precisely capture the shape of the original usage patterns. Experimental results based on a real dataset show that our approach is promising in practical applications.
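
A hedged sketch of the described approach: keep only the coarse DWT approximation coefficients of each consumption curve, then cluster those much shorter vectors. The wavelet ('db2'), decomposition level, and cluster count are assumptions.

```python
import numpy as np
import pywt
from sklearn.cluster import KMeans

def cluster_load_curves(curves, level=3, n_clusters=4):
    """curves: (n_consumers, n_samples) daily consumption curves."""
    # Approximation coefficients capture the coarse shape with far fewer points.
    approx = np.array([pywt.wavedec(c, "db2", level=level)[0] for c in curves])
    return KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(approx)

# Toy example: 50 consumers, 96 quarter-hour readings per day.
rng = np.random.default_rng(7)
t = np.linspace(0, 2 * np.pi, 96)
curves = np.vstack([np.sin(t + phase) + 0.1 * rng.random(96)
                    for phase in rng.uniform(0, np.pi, size=50)])
print(np.bincount(cluster_load_curves(curves)))
```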

