theoretical computer
Recently Published Documents


TOTAL DOCUMENTS: 409 (five years: 70)

H-INDEX: 17 (five years: 2)

2021 ◽  
Vol 27 (4) ◽  
pp. 55-70
Author(s):  
P. K. Sharma ◽  
Chandni

Category theory deals with mathematical structures and the relationships between them. Categories now appear in most branches of mathematics and in some areas of theoretical computer science and mathematical physics, acting as a unifying notion. In this paper, we study the relationship between the category of groups and the category of intuitionistic fuzzy groups. We prove that the category of groups is a subcategory of the category of intuitionistic fuzzy groups and that the latter is not an Abelian category. We define a function β : Hom(A, B) → [0, 1] × [0, 1] on the set of all intuitionistic fuzzy homomorphisms between intuitionistic fuzzy groups A and B of groups G and H, respectively. We prove that β gives rise to a covariant functor from the category of groups to the category of intuitionistic fuzzy groups. Further, we show that the category of intuitionistic fuzzy groups is a top category by establishing a contravariant functor from the category of intuitionistic fuzzy groups to the lattices of all intuitionistic fuzzy groups.


2021 ◽  
Author(s):  
Padmanabhan Krishnan

Vedanta is one of the oldest philosophical systems. While there are many detailed commentaries on Vedanta, there are very few mathematical descriptions of the concepts developed there. This article shows how ideas from theoretical computer science can be used to explain Vedanta. The standard notions of transition systems and modal logic are used to develop a formal description of the different ideas in Vedanta. The generality of the formalism is illustrated through a number of examples, including saṃsāra, Patañjali's yoga sutras, karma, the three avasthas from the Mandukya Upanishad, and the key difference between Advaita and Dvaita in relation to moksha.


2021 ◽  
Vol 68 (5) ◽  
pp. 1-43
Author(s):  
Mark Zhandry

Pseudorandom functions (PRFs) are one of the foundational concepts in theoretical computer science, with numerous applications in complexity theory and cryptography. In this work, we study the security of PRFs when they are evaluated on quantum superpositions of inputs. The classical techniques for arguing the security of PRFs do not carry over to this setting, even if the underlying building blocks are quantum resistant. We therefore develop a new proof technique to show that many of the classical PRF constructions remain secure when evaluated on superpositions.


2021 ◽  
Vol 52 (3) ◽  
pp. 11-13
Author(s):  
Michael Cadilhac

At its core, communication complexity is the study of the amount of information two parties need to exchange in order to compute a function. For instance, Alice receives a string of characters, Bob receives another, and they must decide whether the two strings are equal with as little communication as possible. Multiple settings are conceivable, for instance with more than two parties or with randomness. Upper and lower bounds for communication problems rely on a wealth of mathematical tools, from probability theory to Ramsey theory, making this a rich and exciting topic. Further, communication complexity finds applications in different areas of theoretical computer science, including circuit complexity and data structures. This usually requires taking a "communication" view of a problem, which in itself can be an eye-opening vantage point.
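The Alice-and-Bob equality example above can be made concrete with the classic shared-randomness fingerprinting protocol. The sketch below is our own illustration (the function names and the choice of prime are assumptions, not from the article): instead of sending her whole n-character string, Alice sends a single number below a fixed prime, i.e. O(log n) bits.

```python
import random

def equality_protocol(x: str, y: str, seed: int) -> bool:
    """One-round randomized protocol for EQUALITY with shared randomness.

    Alice holds x, Bob holds y, and both know the seed. Alice sends only
    a fingerprint of x (one number below p); Bob accepts iff it matches
    the fingerprint of y. If x == y the protocol always accepts; if
    x != y it wrongly accepts with probability at most len(x) / p,
    since a collision requires the shared random point r to be a root
    of a nonzero polynomial of degree below len(x) modulo p.
    """
    p = 2**61 - 1                             # fixed Mersenne prime modulus
    r = random.Random(seed).randrange(1, p)   # shared random evaluation point

    def fingerprint(s: str) -> int:
        # evaluate the polynomial whose coefficients are ord(ch), at r mod p
        h = 0
        for ch in s:
            h = (h * r + ord(ch)) % p
        return h

    return fingerprint(x) == fingerprint(y)   # Bob's single comparison
```

Given the seed, the protocol is deterministic; the randomness only matters for the worst-case error bound over adversarial inputs.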


2021 ◽  
Vol 29 (3) ◽  
pp. 141-151
Author(s):  
Hiroshi Fujiwara ◽  
Ryota Adachi ◽  
Hiroaki Yamamoto

Summary. The bin packing problem is a fundamental and important optimization problem in theoretical computer science [4], [6]. An instance is a sequence of items, each of positive size at most one. The task is to place all the items into bins so that the total size of the items in each bin is at most one and the number of bins containing at least one item is minimum. Approximation algorithms have been studied intensively. Algorithm NextFit is perhaps the simplest one. It repeatedly does the following: if the first unprocessed item in the sequence fits, in terms of size, into the bin into which the algorithm last placed an item, place the item into that bin; otherwise place it into an empty bin. Johnson [5] proved that the number of bins produced by algorithm NextFit is less than twice the number of the fewest bins needed to contain all items. In this article, we formalize the bin packing problem in Mizar [1], [2] as follows: an instance is a sequence of positive real numbers, each at most one. The task is to find a function that maps the indices of the sequence to positive integers such that the sum of the subsequence over each inverse image is at most one and the size of the image is minimum. We then formalize algorithm NextFit, its feasibility, its approximation guarantee, and the tightness of that guarantee.
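The NextFit rule described above is short enough to sketch directly. The following Python version is our own illustration (it is not the Mizar formalization): it packs a list of sizes and returns the bins it opened.

```python
def next_fit(items):
    """Pack item sizes (each in (0, 1]) into bins with the NextFit rule:
    keep one bin "open" at a time; if the next item fits in the open bin,
    place it there, otherwise close that bin and open a fresh one."""
    bins = []
    load = 1.0  # sentinel: the first item never "fits", so it opens a bin
    for size in items:
        if load + size <= 1.0:
            bins[-1].append(size)   # item fits in the currently open bin
            load += size
        else:
            bins.append([size])     # open a new bin for this item
            load = size
    return bins
```

For example, `next_fit([0.5, 0.5, 0.75, 0.5, 0.25])` opens three bins, `[[0.5, 0.5], [0.75], [0.5, 0.25]]`, which here is optimal since the sizes sum to 2.5; Johnson's bound guarantees NextFit never uses twice the optimal number of bins or more.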


2021 ◽  
Author(s):  
Deep Bhattacharjee ◽  
Sanjeevan Singha Roy

If highly intelligent machines come to control the world in the future, what would be the advantages and disadvantages? Will artificial-intelligence-powered superintelligent machines become an anathema for humanity, or will they ease human work by guiding people through complicated tasks, thereby extending a helping hand that makes human work comfortable? Recent studies in theoretical computer science, especially in artificial intelligence, predict a 'technological singularity' or 'intelligence explosion'. If this happens, a further stage may follow, blending machine intelligence with actual intelligence, in which machines, being immensely powerful and possessing a cognitive capacity greater than that of humans for solving immensely complicated tasks, take over humans, and are in turn taken over by even more intelligent machines of superhuman intelligence. It is therefore troubling and worrying to consider the case in which machines turn against humans in pursuit of dominance over this planet. Can humans avoid this by bypassing the otherwise inevitable 'hard singularity' through a series of 'soft singularities'? This paper discusses these questions in detail, along with calculations showing humanity how to avoid the hard singularity when the progress of intelligence is inevitable.


2021 ◽  
Vol 52 (2) ◽  
pp. 46-70
Author(s):  
A. Knop ◽  
S. Lovett ◽  
S. McGuire ◽  
W. Yuan

Communication complexity studies the amount of communication necessary to compute a function whose value depends on information distributed among several entities. Yao [Yao79] initiated the study of communication complexity more than 40 years ago, and it has since become a central field in theoretical computer science, with many applications in diverse areas such as data structures, streaming algorithms, property testing, approximation algorithms, coding theory, and machine learning. The textbooks [KN06, RY20] provide excellent overviews of the theory and its applications.


2021 ◽  
Vol 52 (2) ◽  
pp. 7-9
Author(s):  
Erick Galinkin

Computability theory forms the foundation for much of theoretical computer science. Many of our great unsolved questions stem from the need to understand which problems can even be solved. The greatest question of computer science, P vs. NP, sidesteps this entirely, asking instead how efficiently we can find solutions to the problems we know are solvable. For many students, both at the undergraduate and graduate level, a first exposure to computability theory follows a standard sequence on data structures and algorithms, and students often marvel at the first results they see on undecidability: how could we possibly prove that we can never solve a problem? This book, in contrast with other books often used as first exposures to computability, finite automata, Turing machines, and the like, focuses very specifically on the notion of what is computable and on how computability theory, as a science unto itself, fits into the grander scheme. The book is appropriate for advanced undergraduates and beginning graduate students in computer science or mathematics who are interested in theoretical computer science. Robič sidesteps the standard theoretical computer science progression (understanding finite automata and pushdown automata before moving on to Turing machines) by setting the stage with Hilbert's program and the mathematical prerequisites, introducing the Turing machine without the usual preliminaries, and then presenting advanced topics often absent from introductory texts. Most chapters are relatively short and contain problem sets, making the book suitable both as a classroom text and for self-study.


Symmetry ◽  
2021 ◽  
Vol 13 (6) ◽  
pp. 1036
Author(s):  
Abel Cabrera Martínez ◽  
Alejandro Estrada-Moreno ◽  
Juan Alberto Rodríguez-Velázquez

This paper is devoted to the study of the quasi-total strong differential of a graph, and it is a contribution to the Special Issue “Theoretical Computer Science and Discrete Mathematics” of Symmetry. Given a vertex x ∈ V(G) of a graph G, the neighbourhood of x is denoted by N(x). The neighbourhood of a set X ⊆ V(G) is defined to be N(X) = ⋃x∈X N(x), while the external neighbourhood of X is defined to be Ne(X) = N(X) ∖ X. Now, for every set X ⊆ V(G) and every vertex x ∈ X, the external private neighbourhood of x with respect to X is defined as the set Pe(x, X) = {y ∈ V(G) ∖ X : N(y) ∩ X = {x}}. Let Xw = {x ∈ X : Pe(x, X) ≠ ∅}. The strong differential of X is defined to be ∂s(X) = |Ne(X)| − |Xw|, while the quasi-total strong differential of G is defined to be ∂s*(G) = max{∂s(X) : X ⊆ V(G) and Xw ⊆ N(X)}. We show that the quasi-total strong differential is closely related to several graph parameters, including the domination number, the total domination number, the 2-domination number, the vertex cover number, the semitotal domination number, the strong differential, and the quasi-total Italian domination number. As a consequence of this study, we show that the problem of finding the quasi-total strong differential of a graph is NP-hard.
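Since every set in the definitions above is finite, ∂s*(G) can be checked directly by exhaustive search on small graphs. The sketch below is our own illustration of those definitions, not an algorithm from the paper; as the problem is NP-hard, this enumeration over all subsets is necessarily exponential-time. The graph is given as a dictionary of open neighbourhoods.

```python
from itertools import combinations

def quasi_total_strong_differential(adj):
    """Exhaustively compute the quasi-total strong differential of a
    small graph, following the definitions in the abstract above.
    `adj[v]` is the open neighbourhood N(v) of vertex v, as a set."""
    V = set(adj)
    best = None
    for k in range(len(V) + 1):
        for X in map(set, combinations(sorted(V), k)):
            NX = set().union(*(adj[x] for x in X)) if X else set()
            Ne = NX - X  # external neighbourhood Ne(X) = N(X) \ X
            # Xw: vertices of X with a nonempty external private neighbourhood,
            # i.e. some y outside X whose only neighbour in X is x
            Xw = {x for x in X if any(adj[y] & X == {x} for y in V - X)}
            if Xw <= NX:  # quasi-total condition Xw ⊆ N(X)
                d = len(Ne) - len(Xw)  # strong differential of X
                best = d if best is None else max(best, d)
    return best
```

For the path P4 with vertices 1–2–3–4, this enumeration returns 1, attained for instance by X = {1, 2, 4}, whose external neighbourhood is {3} and whose Xw is empty.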

