COMPUTING THE RUPTURE DEGREE IN COMPOSITE GRAPHS

2010 ◽  
Vol 21 (03) ◽  
pp. 311-319 ◽  
Author(s):  
AYSUN AYTAC ◽  
ZEYNEP NIHAN ODABAS

The rupture degree of an incomplete connected graph G is defined by r(G) = max{w(G - S) - |S| - m(G - S) : S ⊂ V(G), w(G - S) > 1}, where w(G - S) is the number of components of G - S and m(G - S) is the order of a largest component of G - S. For the complete graph K_n, the rupture degree is defined as 1 - n. This parameter measures the vulnerability of a graph: it can reflect vulnerability better than, or independently of, other parameters, and to some extent it captures a trade-off between the amount of work done to damage the network and how badly the network is damaged. Computing the rupture degree of a graph is NP-complete. In this paper, we give formulas for the rupture degree of the composition of some special graphs and consider the relationships between the rupture degree and other vulnerability parameters.
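
As an illustration of the definition only (not the paper's method, which derives closed-form formulas), the parameter can be computed by brute force for very small graphs by enumerating every disconnecting vertex set S. The sketch below assumes the networkx library and the r(G) form given above.

```python
# Brute-force rupture degree for small graphs -- an illustrative sketch,
# not the paper's method (which gives closed-form formulas for compositions).
from itertools import combinations
import networkx as nx

def rupture_degree(G: nx.Graph):
    """Brute-force r(G) = max{w(G-S) - |S| - m(G-S)} over disconnecting sets S."""
    nodes = list(G.nodes)
    n = len(nodes)
    if G.number_of_edges() == n * (n - 1) // 2:   # complete graph K_n
        return 1 - n
    best = None
    # Enumerate every candidate cut set S (exponential: small graphs only).
    for k in range(1, n):
        for S in combinations(nodes, k):
            H = G.copy()
            H.remove_nodes_from(S)
            components = list(nx.connected_components(H))
            if len(components) <= 1:              # S must disconnect G
                continue
            value = len(components) - len(S) - max(len(c) for c in components)
            best = value if best is None else max(best, value)
    return best

if __name__ == "__main__":
    # Example: the path P5 has rupture degree 0 (take S = {1, 3}).
    print(rupture_degree(nx.path_graph(5)))
```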

2017 ◽  
Vol 60 ◽  
pp. 687-716 ◽  
Author(s):  
Piotr Skowron ◽  
Piotr Faliszewski

We consider the problem of winner determination under Chamberlin-Courant's multiwinner voting rule with approval utilities. This problem is equivalent to the well-known NP-complete MaxCover problem, so the best polynomial-time approximation algorithm for it has approximation ratio 1 - 1/e. We give exponential-time/FPT approximation algorithms that, on the one hand, achieve arbitrarily good approximation ratios and, on the other hand, have running times much better than those of known exact algorithms. We focus on the cases where the voters have to approve of at most/at least a given number of candidates.
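
For context only (this is not the paper's FPT scheme), the 1 - 1/e ratio mentioned above is attained by the standard greedy algorithm for MaxCover, which repeatedly picks the set covering the most still-uncovered elements. A minimal sketch:

```python
# Standard greedy algorithm for MaxCover -- sketched only to illustrate the
# 1 - 1/e polynomial-time baseline the abstract refers to.
from typing import List, Set

def greedy_max_cover(sets: List[Set[int]], k: int) -> List[int]:
    """Pick k set indices that greedily maximize the number of covered elements."""
    covered: Set[int] = set()
    chosen: List[int] = []
    for _ in range(k):
        # Choose the set that adds the most new elements.
        best_i = max(range(len(sets)), key=lambda i: len(sets[i] - covered))
        if not sets[best_i] - covered:
            break                      # nothing left to gain
        chosen.append(best_i)
        covered |= sets[best_i]
    return chosen

if __name__ == "__main__":
    # In the Chamberlin-Courant reading: each candidate corresponds to the set
    # of voters who approve of that candidate; k is the committee size.
    approvals = [{1, 2, 3}, {3, 4}, {4, 5, 6}, {1, 6}]
    print(greedy_max_cover(approvals, k=2))   # e.g. [0, 2]
```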


2016 ◽  
Vol 21 (3) ◽  
pp. 403-426 ◽  
Author(s):  
Sungmook Choi

Research to date suggests that textual enhancement may positively affect the learning of multiword combinations known as collocations, but may impair recall of unenhanced text. However, the attentional mechanisms underlying such effects remain unclear. In this study, 38 undergraduate students were divided into two groups: one read a text containing typographically enhanced collocations (the ET group) and the other read the same text with unenhanced collocations (the baseline text, or BT group). While they read, participants’ eye movements were recorded with an eye-tracker. Results showed that the ET group spent significantly more time processing the target collocations and performed better than the BT group on a post-reading collocation test. However, apart from the enhanced collocations, the ET group recalled significantly less of the unenhanced text than the BT group. Further analysis of the eye-fixation data showed that the ET group spent substantially more time than the BT group processing collocations with which, according to a pretest, they were unfamiliar, whereas the two groups did not differ significantly in their processing of familiar collocations. Collectively, the results suggest that the trade-off between collocation learning and recall of unenhanced text is due to additional cognitive resources being allocated to enhanced collocations that are new to the reader.


2016 ◽  
Vol 27 (04) ◽  
pp. 501-509
Author(s):  
Zongtian Wei ◽  
Nannan Qi ◽  
Xiaokui Yue

Let G be a connected graph. A set of vertices S ⊆ V(G) is said to be subverted from G if every vertex in S, together with all of its neighbors in G, is deleted from G. By G/S we denote the survival subgraph that remains after S is subverted from G. A vertex set S is called a cut-strategy of G if G/S is disconnected, a clique, or the empty graph. The vertex-neighbor-scattering number of G is defined by VNS(G) = max{ω(G/S) - |S|}, where S is any cut-strategy of G and ω(G/S) is the number of components of G/S. It is known that this parameter can be used to measure the vulnerability of spy networks and that computing it is NP-complete. In this paper, we discuss the vertex-neighbor-scattering number of bipartite graphs. The NP-completeness of computing this parameter is proven, and some upper and lower bounds on the parameter are also given.
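
To make the definition concrete (an illustration, not taken from the paper), the parameter can be checked by brute force on tiny graphs by enumerating candidate cut-strategies directly. The sketch below assumes the VNS(G) = max{ω(G/S) - |S|} form stated above and uses networkx.

```python
# Brute-force vertex-neighbor-scattering number for tiny graphs -- an
# illustrative sketch of the definition above, not the paper's approach
# (computing the parameter is NP-complete in general).
from itertools import combinations
import networkx as nx

def is_clique(H: nx.Graph) -> bool:
    n = H.number_of_nodes()
    return H.number_of_edges() == n * (n - 1) // 2

def vns(G: nx.Graph):
    best = None
    nodes = list(G.nodes)
    for k in range(1, len(nodes) + 1):
        for S in combinations(nodes, k):
            # Subvert S: delete S together with all neighbors of S.
            closed = set(S)
            for v in S:
                closed |= set(G.neighbors(v))
            H = G.copy()
            H.remove_nodes_from(closed)
            # S is a cut-strategy if G/S is disconnected, a clique, or empty.
            if H.number_of_nodes() == 0 or is_clique(H) or not nx.is_connected(H):
                value = nx.number_connected_components(H) - len(S)
                best = value if best is None else max(best, value)
    return best

if __name__ == "__main__":
    print(vns(nx.path_graph(6)))   # small example; runtime is exponential in |V(G)|
```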


Author(s):  
A. V. Crewe

We have become accustomed to differentiating between the scanning microscope and the conventional transmission microscope according to the resolving power the two instruments offer. The conventional microscope is capable of a point resolution of a few angstroms and of line resolutions of periodic objects of about 1 Å. The scanning microscope, on the other hand, in its normal form is not ordinarily capable of a point resolution better than 100 Å. When the reasons for the 100 Å limitation are examined, it becomes clear that the limit rests more on tradition than on reason; in particular, it is a condition imposed upon the microscope by adherence to thermal sources of electrons.


Author(s):  
Maxim B. Demchenko

The sphere of the unknown, the supernatural and the miraculous is one of the most popular subjects of everyday discussion in Ayodhya – the last of the provinces of the Mughal Empire, which entered the British Raj in 1859, and in the distant past the setting of many legendary and mythological events. Mostly these discussions concern encounters with inhabitants of the “other world” – spirits, ghosts, jinns – as well as miraculous healings that follow magic rituals or meetings with so-called saints of different religions (Hindu sadhus, Sufi dervishes), and incomprehensible and frightening natural phenomena. According to the author’s observations, ideas of the unknown are codified and structured better in Avadh than in other parts of India: local people can clearly state whether they have witnessed a bhut or a jinn, and whether a disease was caused by witchcraft or by other factors. Perhaps this is due to the presence in the holy town of a persistent tradition of katha, the public presentation of plots from the Ramayana epic in narrative, poetic and performative forms. But are the events and phenomena in question a miracle for the Avadhvasis, the residents of Ayodhya and its environs, or are they so commonplace that they no longer surprise or fascinate? That is exactly the subject of this essay, written on the basis of materials collected by the author in Ayodhya between 2010 and 2019. The author would like to express his appreciation to Mr. Alok Sharma (Faizabad) for his advice and cooperation.


HortScience ◽  
1998 ◽  
Vol 33 (3) ◽  
pp. 452c-452 ◽  
Author(s):  
Schuyler D. Seeley ◽  
Raymundo Rojas-Martinez ◽  
James Frisby

Mature peach trees in pots were treated with nighttime temperatures of –3, 6, 12, and 18 °C for 16 h and a daytime temperature of 20 °C for 8 h until the leaves abscised in the colder treatments. The trees were then chilled at 6 °C for 40 to 70 days. Trees were removed from chilling at 40, 50, 60, and 70 days and placed in a 20 °C greenhouse under increasing-daylength (spring) conditions. Anthesis was faster and shoot length increased with longer chilling treatments. Trees exposed to the –3 °C pretreatment flowered and grew best with 40 days of chilling, but they did not flower faster or grow better than trees in the other treatments at longer chilling times. There was no difference in flowering or growth between the 6 and 12 °C pretreatments. The 18 °C pretreatment resulted in slower flowering and very little growth after 40 and 50 days of chilling, but growth was comparable to the other treatments after 70 days of chilling.


2020 ◽  
Vol 27 (3) ◽  
pp. 178-186 ◽  
Author(s):  
Ganesan Pugalenthi ◽  
Varadharaju Nithya ◽  
Kuo-Chen Chou ◽  
Govindaraju Archunan

Background: N-glycosylation is one of the most important post-translational mechanisms in eukaryotes. It predominantly occurs at the N-X-[S/T] sequon, where X is any amino acid other than proline. However, not all N-X-[S/T] sequons in proteins are glycosylated, so accurate prediction of N-glycosylation sites is essential to understand the N-glycosylation mechanism.
Objective: Our motivation is to develop a computational method to predict N-glycosylation sites in eukaryotic protein sequences.
Methods: We report a random forest method, Nglyc, that predicts N-glycosylation sites from protein sequence using 315 sequence features. The method was trained on a dataset of 600 N-glycosylation sites and 600 non-glycosylation sites and tested on a dataset containing 295 N-glycosylation sites and 253 non-glycosylation sites. Nglyc was compared with the NetNGlyc, EnsembleGly and GPP methods, and its performance was further evaluated on human and mouse N-glycosylation sites.
Results: Nglyc achieved an overall training accuracy of 0.8033 with all 315 features. Comparison with NetNGlyc, EnsembleGly and GPP shows that Nglyc performs better than the other methods, with high sensitivity and specificity.
Conclusion: Our method achieved an overall accuracy of 0.8248, with 0.8305 sensitivity and 0.8182 specificity, and outperformed the other methods in the comparison study. Its applicability was further confirmed on human and mouse N-glycosylation sites. Nglyc is freely available at https://github.com/bioinformaticsML/Ngly.
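
As a rough illustration of this kind of pipeline (the 315 Nglyc features are not reproduced here; the one-hot window encoding and the toy labels below are assumptions for demonstration only), candidate N-X-[S/T] sequons can be located with a regular expression and a random forest trained on features around each site:

```python
# Illustrative sketch only: locate N-X-[S/T] sequons (X != proline) and train a
# random forest on simple one-hot window features. The real Nglyc method uses
# 315 curated sequence features; this encoding is a placeholder assumption.
import re
import numpy as np
from sklearn.ensemble import RandomForestClassifier

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
SEQUON = re.compile(r"(?=N[^P][ST])")      # lookahead so overlapping sequons are found

def find_sequons(seq: str):
    """Return 0-based positions of the N in each N-X-[S/T] sequon."""
    return [m.start() for m in SEQUON.finditer(seq)]

def window_features(seq: str, pos: int, flank: int = 7) -> np.ndarray:
    """One-hot encode a window of residues centred on the candidate asparagine."""
    padded = "X" * flank + seq + "X" * flank
    window = padded[pos: pos + 2 * flank + 1]   # centre shifts by +flank due to padding
    vec = np.zeros((2 * flank + 1, len(AMINO_ACIDS)))
    for i, aa in enumerate(window):
        if aa in AMINO_ACIDS:
            vec[i, AMINO_ACIDS.index(aa)] = 1.0
    return vec.ravel()

if __name__ == "__main__":
    # Toy training data: (sequence, sequon position, label) -- purely invented.
    examples = [
        ("MKNVSLLAAANGTQW", 2, 1),
        ("MKNVSLLAAANGTQW", 10, 0),
        ("AANRSDEFGHNMTKL", 2, 1),
        ("AANRSDEFGHNMTKL", 10, 0),
    ]
    X = np.array([window_features(seq, p) for seq, p, _ in examples])
    y = np.array([label for _, _, label in examples])
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    print(find_sequons("MKNVSLLAAANGTQW"), clf.predict(X))
```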


2019 ◽  
Vol 15 (5) ◽  
pp. 472-485 ◽  
Author(s):  
Kuo-Chen Chou ◽  
Xiang Cheng ◽  
Xuan Xiao

Background/Objective: Information on protein subcellular localization is crucially important for both basic research and drug development. With the explosive growth of protein sequences discovered in the post-genomic age, powerful bioinformatics tools are in high demand for timely and effective identification of subcellular localization based on sequence information alone. Recently, a predictor called “pLoc-mEuk” was developed for identifying the subcellular localization of eukaryotic proteins. Its performance is overwhelmingly better than that of other predictors for the same purpose, particularly in dealing with multi-label systems in which many proteins, called “multiplex proteins”, may occur simultaneously in two or more subcellular locations. Although it is a very powerful predictor, further improvement is needed, because pLoc-mEuk was trained on an extremely skewed dataset in which one subset was about 200 times the size of the other subsets, and so it cannot avoid the bias caused by such an uneven training dataset.
Methods: To alleviate this bias, we have developed a new predictor called pLoc_bal-mEuk by quasi-balancing the training dataset. Cross-validation tests on exactly the same experiment-confirmed dataset indicate that the proposed predictor is remarkably superior to pLoc-mEuk, the existing state-of-the-art predictor, in identifying the subcellular localization of eukaryotic proteins. It has not escaped our notice that the quasi-balancing treatment can also be applied to many other biological systems.
Results: To maximize convenience for experimental scientists, a user-friendly web server for the new predictor has been established at http://www.jci-bioinfo.cn/pLoc_bal-mEuk/.
Conclusion: It is anticipated that the pLoc_bal-mEuk predictor holds very high potential to become a useful high-throughput tool for identifying the subcellular localization of eukaryotic proteins, particularly for finding multi-target drugs, which is currently a very hot trend in drug development.
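
The abstract does not spell out the quasi-balancing procedure. As one loose illustration of the general idea (an assumed stand-in, not the authors' algorithm), a heavily skewed training set can be brought closer to balance by capping the largest classes, for example through random undersampling:

```python
# Loose illustration of quasi-balancing a skewed training set by capping the
# largest classes via random undersampling. This is an assumed stand-in, not
# the actual procedure used to build pLoc_bal-mEuk.
import random
from collections import defaultdict
from typing import Dict, List, Tuple

def quasi_balance(samples: List[Tuple[str, str]],
                  cap_ratio: float = 3.0,
                  seed: int = 0) -> List[Tuple[str, str]]:
    """Keep at most cap_ratio * (size of the smallest class) samples per class."""
    rng = random.Random(seed)
    by_label: Dict[str, List[Tuple[str, str]]] = defaultdict(list)
    for sample in samples:
        by_label[sample[1]].append(sample)
    cap = int(cap_ratio * min(len(v) for v in by_label.values()))
    balanced: List[Tuple[str, str]] = []
    for label, items in by_label.items():
        if len(items) > cap:
            items = rng.sample(items, cap)   # randomly drop surplus samples
        balanced.extend(items)
    rng.shuffle(balanced)
    return balanced

if __name__ == "__main__":
    # Toy data: (sequence_id, location_label); "cytoplasm" is heavily over-represented.
    data = ([(f"seq{i}", "cytoplasm") for i in range(200)]
            + [(f"seq{i}", "nucleus") for i in range(200, 215)]
            + [(f"seq{i}", "peroxisome") for i in range(215, 220)])
    balanced = quasi_balance(data)
    print({lab: sum(1 for _, l in balanced if l == lab) for lab in {l for _, l in balanced}})
```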


Author(s):  
Bahador Bahrami

Evidence for and against the idea that “two heads are better than one” is abundant. This chapter considers the contextual conditions and social norms that predict madness or wisdom of crowds, in order to identify the adaptive value of collective decision-making beyond increased accuracy. Similarity of competence among members of a collective impacts collective accuracy, but interacting individuals often seem to operate under the assumption that they are equally competent, even when direct evidence suggests the opposite and dyadic performance suffers. Cross-cultural data from Iran, China, and Denmark support this assumption of similarity (i.e., equality bias) as a sensible heuristic that works most of the time and simplifies social interaction. Crowds often trade off accuracy for other collective benefits such as diffusion of responsibility and reduction of regret. Consequently, two heads are sometimes better than one, but no one holds the collective accountable, not even for the most disastrous of outcomes.


2020 ◽  
Vol 12 (7) ◽  
pp. 2767 ◽  
Author(s):  
Víctor Yepes ◽  
José V. Martí ◽  
José García

The optimization of the cost and CO2 emissions of earth-retaining walls is relevant, since these structures are widely used in civil engineering. Cost optimization is essential for the competitiveness of the construction company, while emissions optimization matters for the environmental impact of construction. To address the optimization, a black hole metaheuristic was used, along with a discretization mechanism based on min–max normalization. The stability of the algorithm was evaluated with respect to the solutions obtained, and the steel and concrete quantities obtained in both optimizations were analyzed. Additionally, the geometric variables of the structure were compared, and the results were compared with those of another algorithm that has solved the problem. The results show that there is a trade-off between the use of steel and concrete: the solutions that minimize CO2 emissions favor the use of concrete more than those that optimize cost do. When comparing the geometric variables, most remain similar in both optimizations, except for the distance between buttresses. Compared with the other algorithm, the black hole algorithm shows good optimization performance.
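
For readers unfamiliar with the metaheuristic, the sketch below outlines the generic black hole algorithm (the best candidate attracts the rest; candidates crossing the event horizon are re-initialized) on a placeholder objective. The paper's wall cost and CO2 objective functions and its min–max normalization discretization are not reproduced here.

```python
# Generic black hole algorithm on a placeholder objective. The paper's cost/CO2
# objectives and its min-max normalization discretization are not reproduced;
# this only illustrates the metaheuristic's main loop.
import numpy as np

def black_hole_optimize(objective, bounds, n_stars=30, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    low, high = bounds
    dim = low.size
    stars = rng.uniform(low, high, size=(n_stars, dim))    # candidate solutions
    fitness = np.array([objective(s) for s in stars])

    for _ in range(n_iter):
        bh = np.argmin(fitness)                             # best star is the black hole
        # Move every star toward the black hole.
        stars += rng.random((n_stars, 1)) * (stars[bh] - stars)
        stars = np.clip(stars, low, high)
        fitness = np.array([objective(s) for s in stars])
        bh = np.argmin(fitness)
        # Event horizon: stars that drift too close are swallowed and replaced.
        radius = fitness[bh] / np.sum(fitness)
        dist = np.linalg.norm(stars - stars[bh], axis=1)
        absorbed = (dist < radius) & (np.arange(n_stars) != bh)
        stars[absorbed] = rng.uniform(low, high, size=(absorbed.sum(), dim))
        fitness[absorbed] = [objective(s) for s in stars[absorbed]]
    best = np.argmin(fitness)
    return stars[best], fitness[best]

if __name__ == "__main__":
    # Placeholder objective (sphere function) standing in for wall cost or emissions.
    sphere = lambda x: float(np.sum(x ** 2))
    bounds = (np.full(5, -10.0), np.full(5, 10.0))
    x_best, f_best = black_hole_optimize(sphere, bounds)
    print(x_best, f_best)
```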

