On the complexity of edge-colored subgraph partitioning problems in network optimization

2016 ◽  
Vol. 17 no. 3 (Discrete Algorithms) ◽  
Author(s):  
Xiaoyan Zhang ◽  
Zan-Bo Zhang ◽  
Hajo Broersma ◽  
Xuelian Wen

Network models allow one to deal with massive data sets using standard concepts from graph theory. Understanding and investigating the structural properties of a given data set is a crucial task in many practical applications of network optimization. Recently, labeled network optimization over colored graphs has been extensively studied. Given a (not necessarily properly) edge-colored graph $G=(V,E)$, a subgraph $H$ is said to be <i>monochromatic</i> if all its edges have the same color, and <i>multicolored</i> if all its edges have distinct colors. The monochromatic clique and multicolored cycle partition problems have important applications in network optimization problems arising in information science and operations research. We investigate the computational complexity of determining the minimum number of monochromatic cliques or multicolored cycles that, respectively, partition $V(G)$. We show that the minimum monochromatic clique partition problem is APX-hard on monochromatic-diamond-free graphs, and APX-complete on monochromatic-diamond-free graphs in which the size of a maximum monochromatic clique is bounded by a constant. We also show that the minimum multicolored cycle partition problem is NP-complete, even if the input graph $G$ is triangle-free. Moreover, for the weighted version of the minimum monochromatic clique partition problem on monochromatic-diamond-free graphs, we derive an approximation algorithm with (tight) approximation guarantee $\ln |V(G)| + 1$.
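The two subgraph notions above can be illustrated with a short sketch (the helper names are hypothetical, and edge colors are supplied as a mapping from edge to color label):

```python
def is_monochromatic(edges, color):
    """True if every edge in `edges` has the same color.

    `edges` is an iterable of (u, v) pairs; `color` maps each
    frozenset edge to its color label.
    """
    colors = {color[frozenset(e)] for e in edges}
    return len(colors) <= 1

def is_multicolored(edges, color):
    """True if all edges in `edges` have pairwise distinct colors."""
    colors = [color[frozenset(e)] for e in edges]
    return len(set(colors)) == len(colors)

# Toy edge-colored graph: a triangle with two red edges and one blue.
color = {
    frozenset((1, 2)): "red",
    frozenset((2, 3)): "red",
    frozenset((1, 3)): "blue",
}
print(is_monochromatic([(1, 2), (2, 3)], color))  # the red path -> True
print(is_multicolored([(2, 3), (1, 3)], color))   # red + blue -> True
```

Note that a single-edge subgraph is both monochromatic and multicolored under these definitions.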

Electronics ◽  
2021 ◽  
Vol 10 (5) ◽  
pp. 558
Author(s):  
Anping Song ◽  
Xiaokang Xu ◽  
Xinyi Zhai

Rotation-invariant face detection has been widely used in practical applications; however, adjusting for the rotation-in-plane (RIP) angle of the human face remains a problem. Recently, several methods based on neural networks have been proposed to solve the RIP angle problem. However, these methods have various limitations, including low detection speed, large model size, and limited detection accuracy. To address these problems, we propose a new network, called the Searching Architecture Calibration Network (SACN), which utilizes architecture search, a fully convolutional network (FCN) and bounding box center clustering (CC). SACN was tested on the challenging Multi-Oriented Face Detection Data Set and Benchmark (MOFDDB) and achieved higher detection accuracy at almost the same speed as existing detectors. Moreover, the average angle error is reduced from the current 12.6° to 10.5°.


Author(s):  
Shaoqiang Wang ◽  
Shudong Wang ◽  
Song Zhang ◽  
Yifan Wang

Abstract The aim of this work is to automatically detect dynamic EEG signals so as to reduce the time cost of epilepsy diagnosis. In electroencephalogram (EEG) signal recognition for epilepsy, traditional machine learning and statistical methods require manual feature engineering in order to show excellent results on a single data set, and the manually selected features may carry a bias whose validity and generalizability on real-world data cannot be guaranteed. In practical applications, deep learning methods can free people from feature engineering to a certain extent: as long as the focus is on expanding data quality and quantity, the model can learn automatically and keep improving. In addition, deep learning can extract many features that are difficult for humans to perceive, making the algorithm more robust. Based on the design idea of the ResNeXt deep neural network, this paper designs Time-ResNeXt, a network structure suitable for time-series EEG epilepsy detection. The accuracy of Time-ResNeXt in EEG epilepsy detection reaches 91.50%. Time-ResNeXt produces state-of-the-art performance on the benchmark Bern-Barcelona dataset and has great potential for improving clinical practice.
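The ResNeXt design idea underlying Time-ResNeXt is the aggregated transformation: the input passes through several parallel low-dimensional branches whose outputs are summed and added to a residual connection. A framework-free sketch of that split-transform-merge pattern (toy scaling functions stand in for the learned grouped convolutions; this is not the paper's actual architecture):

```python
def resnext_block(x, branches):
    """ResNeXt-style aggregated transformation with a residual
    connection: y = x + sum_i T_i(x), where each T_i is one branch.

    `x` is a list of floats; each branch maps a list to a list of
    the same length. Real branches would be grouped convolutions.
    """
    out = list(x)  # start from the identity (residual) path
    for t in branches:
        bx = t(x)
        out = [o + b for o, b in zip(out, bx)]
    return out

# Toy branches: elementwise scaling stands in for a learned transform.
scale_half = lambda v: [0.5 * e for e in v]
negate     = lambda v: [-e for e in v]

print(resnext_block([1.0, 2.0], [scale_half, negate]))
# 1.0 + 0.5 - 1.0 = 0.5 and 2.0 + 1.0 - 2.0 = 1.0
```

Increasing the number of branches (the "cardinality") rather than the width or depth is the distinctive ResNeXt design choice.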


Circulation ◽  
2019 ◽  
Vol 140 (Suppl_2) ◽  
Author(s):  
Alyssa Vermeulen ◽  
Marina Del Rios ◽  
Teri L Campbell ◽  
Hai Nguyen ◽  
Hoang H Nguyen

Introduction: The interactions of the various variables affecting out-of-hospital cardiac arrest (OHCA) outcomes in the young (1-35 years old) are complex. Network models have emerged as a way to abstract complex systems and gain insights into relational patterns among observed variables. Hypothesis: Network analysis helps provide qualitative and quantitative insights into how various variables interact with each other and affect outcomes in OHCA in the young. Methods: A mixed graphical network analysis was performed using variables collected by CARES. The network allows the visualization and quantification of each unique interaction between two variables that cannot be explained away by other variables in the data set. The strength of the underlying interaction is proportional to the thickness of the connections (edges) between the variables (nodes). We used the mgm package in R. Results: Figure 1 shows the network of the OHCA in the young cases in Chicago from 2013 to 2017. There are apparent clusters. Sustained return of spontaneous circulation and hypothermia are strongly correlated with survival and neurological outcomes. This cluster is in turn connected to the rest of the network by survival to emergency room. The interaction between any two variables can also be quantified. For example, American Indian cases occur more often in disadvantaged locations when compared to White cases (OR 4.5). The network also predicts how much one node can be explained by adjacent nodes. Only 20% of survival to emergency room is explained by its adjacent nodes; the remaining 80% is attributed to variables not represented in this network. This suggests that improving this node through intervention will be difficult unless further data are available. Conclusion: Network analysis provides both a qualitative and quantitative evaluation of the complex system governing OHCA in the young. The network's predictive capability could help in identifying the most effective interventions to improve outcomes.
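The nodewise predictability reported above (e.g. 20% of survival to emergency room explained by its neighbors) corresponds, for a continuous node, to an R² computed from predictions based on the adjacent nodes. A minimal sketch of that quantity (hypothetical data; the mgm package itself uses different predictability measures for categorical nodes):

```python
def explained_share(y, y_hat):
    """Share of a node's variance explained by predictions from its
    adjacent nodes: R^2 = 1 - SS_res / SS_tot.
    """
    mean_y = sum(y) / len(y)
    ss_tot = sum((v - mean_y) ** 2 for v in y)
    ss_res = sum((v - p) ** 2 for v, p in zip(y, y_hat))
    return 1 - ss_res / ss_tot

# Perfect predictions from neighbors explain 100% of the node.
print(explained_share([1, 2, 3, 4], [1, 2, 3, 4]))   # 1.0
# Predicting the mean everywhere explains nothing.
print(explained_share([1, 2, 3, 4], [2.5] * 4))      # 0.0
```

A value near 0, as for the survival-to-emergency-room node, indicates that most of the node's variation is driven by variables outside the network.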


2009 ◽  
Vol 13 (6) ◽  
pp. 833-845 ◽  
Author(s):  
Z. Su ◽  
W. J. Timmermans ◽  
C. van der Tol ◽  
R. Dost ◽  
R. Bianchi ◽  
...  

Abstract. EAGLE2006 – an intensive field campaign for the advancement of land surface hydrometeorological process studies – was carried out in the Netherlands from 8th to 18th June 2006, involving 16 institutions with in total 67 people from 16 different countries. In addition to the acquisition of multi-angle and multi-sensor satellite data, several airborne instruments – an optical imaging sensor, an imaging microwave radiometer, and a flux airplane – were deployed, and extensive ground measurements were conducted over one grassland site at Cabauw and two forest sites at Loobos and Speulderbos in the central part of the Netherlands. The generated data set is both unique and urgently needed for the development and validation of models and inversion algorithms for quantitative land surface parameter estimation and land surface hydrometeorological process studies. EAGLE2006 was led by the Department of Water Resources of the International Institute for Geo-Information Science and Earth Observation (ITC) and originated from the combination of a number of initiatives supported by different funding agencies. The objectives of the EAGLE2006 campaign were closely related to the objectives of other European Space Agency (ESA) campaign activities (SPARC2004, SEN2FLEX2005 and especially AGRISAR2006). However, one important objective of the EAGLE2006 campaign was to build up a database for the investigation and validation of the retrieval of bio-geophysical parameters, obtained at different radar frequencies (X-, C- and L-Band) and at hyperspectral optical and thermal bands acquired simultaneously over contrasting vegetated fields (forest and grassland). As such, all activities were related to algorithm development for future satellite missions such as the Sentinels and to validation of retrievals of land surface parameters with optical, thermal and microwave sensors onboard current and future satellite missions.
This contribution describes the campaign objectives and provides an overview of the airborne and field campaign dataset. This dataset is available for scientific investigations and can be accessed on the ESA Principal Investigator Portal http://eopi.esa.int/.


Author(s):  
Nátalia NAKANO ◽  
Talita Cristina da SILVA ◽  
Maria José Vicentini JORENTE ◽  
José Eduardo SANTARÉM SEGUNDO

In 2001, Tim Berners-Lee revealed to the world his vision for the future of the Web: humans and machines working together to carry out complex tasks, with the Web leveraging the way human knowledge is acquired. Since then, researchers from different fields of knowledge have engaged in scientific and empirical research to make this vision come true. In this context, the research problem of this article is established: What is the current situation of Semantic Web research within Information Science in Brazil? Who are the researchers on this theme in the country? Which institutions support these studies? The present study aimed at listing the most productive authors, the institutions that support their research, and the specific issues of their investigations. We conducted a literature review in the Base de Dados Referencial de Artigos de Periódicos em Ciência da Informação (BRAPCI). We retrieved 41 articles and excluded five that were not by Brazilian authors and institutions. From the analysis of this corpus, we realized the need to include additional keywords for a better understanding of the specific studies encompassed by the theme; thus, we included the keywords SPARQL, SKOS, RDF and ontology. We conclude that studies on the Semantic Web under the aegis of Information Science are mostly theoretical and philosophical, while computer science professionals seek practical applications of the topic. We also conclude that a study including other databases could reveal other authors and institutions relevant to this subject.


2012 ◽  
Vol. 14 no. 2 (Graph Theory) ◽  
Author(s):  
Laurent Gourvès ◽  
Adria Lyra ◽  
Carlos A. Martinhon ◽  
Jérôme Monnot

In this paper we deal from an algorithmic perspective with different questions regarding properly edge-colored (PEC) paths, trails and closed trails. Given a c-edge-colored graph G(c), we show how to determine in polynomial time a PEC closed trail subgraph, if one exists, whose number of visits at each vertex is specified beforehand. As a consequence, we solve a number of interesting related problems. For instance, given a subset S of vertices in G(c), we show how to maximize in polynomial time the number of S-restricted vertex- (resp., edge-) disjoint PEC paths (resp., trails) in G(c) with endpoints in S. Further, if G(c) contains no PEC closed trails, we show that the problem of finding a PEC s-t trail visiting a given subset of vertices can be solved in polynomial time, and prove that it becomes NP-complete if we are restricted to graphs with no PEC cycles. We also deal with graphs G(c) containing no (almost) PEC cycles or closed trails through s or t. We prove that finding 2 PEC s-t paths (resp., trails) with length at most L > 0 is NP-complete in the strong sense, even for graphs with maximum degree equal to 3, and present an approximation algorithm for computing k vertex- (resp., edge-) disjoint PEC s-t paths (resp., trails) so that the maximum path (resp., trail) length is no more than k times the PEC path (resp., trail) length in an optimal solution. Further, we prove that finding 2 vertex-disjoint s-t paths, exactly one of which is a PEC s-t path, is NP-complete. This result is interesting since, as proved in Abouelaoualim et al. (2008), the determination of two or more vertex-disjoint PEC s-t paths can be done in polynomial time. Finally, if G(c) is an arbitrary c-edge-colored graph with maximum vertex degree equal to four, we prove that finding two monochromatic vertex-disjoint s-t paths with different colors is NP-complete. We also propose some related problems.
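A trail is properly edge-colored exactly when consecutive edges always differ in color. A minimal sketch of that check (the helper name is hypothetical; the trail's edge colors are given in traversal order):

```python
def is_properly_edge_colored(trail_colors):
    """True if no two consecutive edges of a trail share a color.

    `trail_colors` lists the colors of the trail's edges in the
    order they are traversed.
    """
    return all(a != b for a, b in zip(trail_colors, trail_colors[1:]))

print(is_properly_edge_colored(["red", "blue", "red", "green"]))  # True
print(is_properly_edge_colored(["red", "red", "blue"]))           # False
```

The algorithmic difficulty in the paper comes not from this local check but from searching over exponentially many trails subject to it.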


Author(s):  
Aditya Rajbongshi ◽  
Thaharim Khan ◽  
Md. Mahbubur Rahman ◽  
Anik Pramanik ◽  
Shah Md Tanvir Siddiquee ◽  
...  

<p>The recognition of plant diseases plays an indispensable part in taking infection-prevention measures to improve the quality and quantity of harvest yield. Automating plant disease detection is highly advantageous, as it greatly reduces the monitoring work in large cultivated areas where mango is planted. Leaves being the food source for plants, early and precise recognition of leaf diseases is significant. This work focuses on classifying and distinguishing diseases of mango leaves using convolutional neural networks (CNNs). The CNN models DenseNet201, InceptionResNetV2, InceptionV3, ResNet50, ResNet152V2, and Xception, combined with transfer learning techniques, are used here to get better accuracy on the targeted data set. Image acquisition, image segmentation, and feature extraction are the steps involved in disease detection. Four kinds of leaf diseases, anthracnose, gall machi, powdery mildew, and red rust, are considered as classes for this work; the dataset consists of 1500 images of diseased mango leaves, and healthy leaf images are added as a further class. We have also evaluated the overall performance metrics and found that DenseNet201 outperforms the other models, obtaining the highest accuracy of 98.00%.</p>


Data Mining ◽  
2013 ◽  
pp. 142-158
Author(s):  
Baoying Wang ◽  
Aijuan Dong

Clustering and outlier detection are important data mining areas. Online clustering and outlier detection generally work with continuous data streams generated at a rapid rate and have many practical applications, such as network intrusion detection and online fraud detection. This chapter first reviews the related background of online clustering and outlier detection. Then, an incremental clustering and outlier detection method for market-basket data is proposed and presented in detail. The proposed method consists of two phases: weighted affinity measure clustering (WC clustering) and outlier detection. Specifically, given a data set, the WC clustering phase analyzes the data set and groups data items into clusters. Then, the outlier detection phase examines each newly arrived transaction against the item clusters formed in the WC clustering phase, and determines whether the new transaction is an outlier. Periodically, the newly collected transactions are analyzed using WC clustering to produce an updated set of clusters, against which transactions arriving afterwards are examined. The process is carried out continuously and incrementally. Finally, future research trends in online data mining are explored at the end of the chapter.
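The examine-against-clusters step can be sketched as follows. This is a simplified stand-in that uses Jaccard similarity in place of the chapter's weighted affinity measure; the function names and the threshold value are hypothetical:

```python
def jaccard(a, b):
    """Set similarity: |A ∩ B| / |A ∪ B|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def is_outlier(transaction, clusters, threshold=0.3):
    """Flag a newly arrived transaction as an outlier when its best
    affinity to any existing item cluster falls below `threshold`.

    `clusters` is a list of item sets produced by the clustering phase.
    """
    best = max((jaccard(transaction, c) for c in clusters), default=0.0)
    return best < threshold

clusters = [{"milk", "bread", "butter"}, {"beer", "chips"}]
print(is_outlier({"milk", "bread"}, clusters))    # affinity 2/3 -> False
print(is_outlier({"lamp", "charger"}, clusters))  # affinity 0 -> True
```

In the incremental setting, `clusters` would be rebuilt periodically from the accumulated transactions, as the chapter describes.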


Healthcare ◽  
2020 ◽  
Vol 8 (2) ◽  
pp. 181 ◽  
Author(s):  
Patricia Melin ◽  
Julio Cesar Monica ◽  
Daniela Sanchez ◽  
Oscar Castillo

In this paper, a multiple ensemble neural network model with fuzzy response aggregation for the COVID-19 time series is presented. Ensemble neural networks are composed of a set of modules, which are used to produce several predictions under different conditions. The modules are simple neural networks. Fuzzy logic is then used to aggregate the responses of the predictor modules, in this way improving the final prediction by combining the outputs of the modules in an intelligent way. Fuzzy logic handles the uncertainty in the process of making a final decision about the prediction. The complete model was tested on the problem of predicting the COVID-19 time series in Mexico, at the level of the states and of the whole country. The simulation results of the multiple ensemble neural network models with fuzzy response integration show very good predicted values on the validation data set. In fact, the prediction errors of the multiple ensemble neural networks are significantly lower than those of traditional monolithic neural networks, thus showing the advantages of the proposed approach.
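The aggregation step can be sketched as a confidence-weighted combination of module outputs. This is a simplified stand-in: in the paper a fuzzy inference system produces the weights, whereas here plain confidence scores play the role of the fuzzy memberships:

```python
def aggregate(predictions, confidences):
    """Combine ensemble module predictions by normalized confidence
    weights: y = sum_i w_i * p_i with w_i = c_i / sum(c).
    """
    total = sum(confidences)
    weights = [c / total for c in confidences]
    return sum(w * p for w, p in zip(weights, predictions))

# Three modules predict tomorrow's case count with differing confidence.
print(aggregate([100.0, 120.0, 90.0], [0.5, 0.3, 0.2]))  # 104.0
```

The point of the fuzzy layer is that modules judged more reliable under the current conditions dominate the final prediction, rather than a fixed average.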


2019 ◽  
Vol 35 (3) ◽  
pp. 1373-1392 ◽  
Author(s):  
Dong Ding ◽  
Axel Gandy ◽  
Georg Hahn

Abstract We consider a statistical test whose p value can only be approximated using Monte Carlo simulations. We are interested in deciding whether the p value for an observed data set lies above or below a given threshold such as 5%. We want to ensure that the resampling risk, the probability of the (Monte Carlo) decision being different from the true decision, is uniformly bounded. This article introduces a simple open-ended method with this property, the confidence sequence method (CSM). We compare our approach to another algorithm, SIMCTEST, which also guarantees an (asymptotic) uniform bound on the resampling risk, as well as to other Monte Carlo procedures without a uniform bound. CSM is free of tuning parameters and conservative. It has the same theoretical guarantee as SIMCTEST and, in many settings, similar stopping boundaries. As it is much simpler than other methods, CSM is a useful method for practical applications.
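The sequential decision the article describes can be sketched as follows. This is a simplified illustration, not CSM's actual boundaries: a Hoeffding bound with a crude union correction stands in for the exact confidence sequence, and all names are hypothetical:

```python
import math
import random

def mc_decision(sample_exceed, alpha=0.05, delta=0.01, max_n=100_000):
    """Decide whether a Monte Carlo p value lies above or below `alpha`,
    stopping once a confidence sequence for p excludes the threshold.

    `sample_exceed()` returns True when one resampled statistic is at
    least as extreme as the observed one. The radius `eps` is a
    Hoeffding-plus-union stand-in for CSM's exact boundaries; the
    union correction keeps the error uniformly below `delta`.
    """
    hits = 0
    for n in range(1, max_n + 1):
        hits += sample_exceed()
        p_hat = hits / n
        eps = math.sqrt(math.log(n * (n + 1) / delta) / (2 * n))
        if p_hat - eps > alpha:
            return "p > alpha", n
        if p_hat + eps < alpha:
            return "p < alpha", n
    return "undecided", max_n

random.seed(0)
# Simulated resampling with true p value 0.20, well above 5%.
decision, steps = mc_decision(lambda: random.random() < 0.20)
print(decision)
```

Because the confidence sequence holds uniformly over all sample sizes, the procedure is open-ended: it may stop at any point without inflating the resampling risk, which is the property CSM shares with SIMCTEST.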

