Lower Bounds in Computational Complexity from Information Theory, Algebra and Combinatorics

Lower Bounds in Computational Complexity from Information Theory, Algebra and Combinatorics PDF Author: Sivaramakrishnan Natarajan Ramamoorthy
Publisher:
ISBN:
Category : Algebra
Languages : en
Pages : 133

Book Description
In this thesis, we study basic lower bound questions in communication complexity, data structures and depth-2 threshold circuits, and prove lower bounds in these models by devising new techniques in information theory, algebra and combinatorics.

Communication Complexity: A central open problem in communication complexity is to determine whether the messages exchanged by two parties can be compressed if we know that the amount of information revealed by the parties about their inputs is small. We consider the compression question when the information revealed by one of the parties is much less than the information revealed by the other. In this setting, we prove two new improved compression schemes.

Data Structures: Our contribution to data structure lower bounds is threefold: (a) Consider the Vector-Matrix-Vector problem, in which the data structure stores a √n × √n bit matrix M and provides an algorithm to compute uMv (mod 2) for √n-bit vectors u, v. We prove new static data structure lower bounds for this problem, which improve upon the previous work of Chattopadhyay, Koucký, Loff, and Mukhopadhyay by a factor of log n. Our proof uses a new technique that combines the discrepancy method from communication complexity with a modification of cell sampling. This technique turns out to be more general, and can be used to prove strong lower bounds for data structures that err and have a binary query output. (b) We show new connections between systematic linear data structures, linear data structures and matrix rigidity. Specifically, we prove the equivalence between systematic linear data structures and set rigidity, a relaxation of matrix rigidity defined by Alon, Panigrahy and Yekhanin. This equivalence not only sheds light on the difficulty of proving strong lower bounds against data structures but also suggests candidate rigid sets from data structures. We also use this equivalence to relate linear data structures and rigidity. (c) We study data structures that maintain a set from {1, 2, ..., n}, allow insertion of new elements, and report the median, minimum or predecessors of the set. In particular, we prove that if one of the operations of the data structure is non-adaptive and each cell in memory stores O(log n) bits, then some operation must take time Ω(log n / log log n). This bound nearly matches the guarantees of binary search trees, whose insertion and predecessor operations can be made non-adaptive. Our lower bounds are obtained via the sunflower lemma from combinatorics.

Balancing Sets and Depth-2 Threshold Circuits: Majority and threshold circuits are important subclasses of Boolean circuits. Kulikov and Podolskii asked the question of finding the minimum fan-in required to compute the majority of n bits using a depth-2 majority circuit. We identify a connection between this circuit question and Galvin's balancing sets problem from combinatorics, a well-studied discrepancy-type question initiated by the work of Frankl and Rödl. We use this finding to prove tight bounds for both the circuit question and Galvin's problem. The proofs use polynomials over finite fields.
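The Vector-Matrix-Vector query described in the abstract above is simple to state. As a minimal sketch (this is only the query semantics, not the thesis's data structure; all names here are illustrative), the function any such data structure must answer is:

```python
def vmv_mod2(u, M, v):
    """Compute u^T M v (mod 2) for a 0/1 matrix M and 0/1 vectors u, v."""
    total = 0
    for i, ui in enumerate(u):
        if ui:  # rows not selected by u contribute nothing
            # inner product of row i of M with v, reduced mod 2
            total ^= sum(mij & vj for mij, vj in zip(M[i], v)) & 1
    return total

M = [[1, 0],
     [1, 1]]                        # a 2 x 2 bit matrix (so n = 4)
print(vmv_mod2([1, 1], M, [0, 1]))  # -> 1
```

The thesis asks how much space and query time any structure answering such queries must use; the naive evaluation above takes time linear in the matrix size.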

Combinatorics, Computing and Complexity

Combinatorics, Computing and Complexity PDF Author: Dingzhu Du
Publisher: Springer
ISBN:
Category : Computers
Languages : en
Pages : 256

Book Description
'Et moi, ..., si j'avait su comment en revenir, je n'y serais point allé.' (Jules Verne)

'One service mathematics has rendered the human race. It has put common sense back where it belongs, on the topmost shelf next to the dusty canister labelled "discarded nonsense".' (Eric T. Bell)

'The series is divergent; therefore we may be able to do something with it.' (O. Heaviside)

Mathematics is a tool for thought. A highly necessary tool in a world where both feedback and nonlinearities abound. Similarly, all kinds of parts of mathematics serve as tools for other parts and for other sciences. Applying a simple rewriting rule to the quotes above one finds such statements as: 'One service topology has rendered mathematical physics ...'; 'One service logic has rendered computer science ...'; 'One service category theory has rendered mathematics ...'. All arguably true. And all statements obtainable this way form part of the raison d'être of this series.

Algebraic Complexity Theory

Algebraic Complexity Theory PDF Author: Peter Bürgisser
Publisher: Springer Science & Business Media
ISBN: 3662033380
Category : Mathematics
Languages : en
Pages : 630

Book Description
The algorithmic solution of problems has always been one of the major concerns of mathematics. For a long time such solutions were based on an intuitive notion of algorithm. It is only in this century that metamathematical problems have led to the intensive search for a precise and sufficiently general formalization of the notions of computability and algorithm. In the 1930s, a number of quite different concepts for this purpose were proposed, such as Turing machines, WHILE-programs, recursive functions, Markov algorithms, and Thue systems. All these concepts turned out to be equivalent, a fact summarized in Church's thesis, which says that the resulting definitions form an adequate formalization of the intuitive notion of computability. This had and continues to have an enormous effect. First of all, with these notions it has been possible to prove that various problems are algorithmically unsolvable. Among these undecidable problems are the halting problem, the word problem of group theory, the Post correspondence problem, and Hilbert's tenth problem. Secondly, concepts like Turing machines and WHILE-programs had a strong influence on the development of the first computers and programming languages. In the era of digital computers, the question of finding efficient solutions to algorithmically solvable problems has become increasingly important. In addition, the fact that some problems can be solved very efficiently, while others seem to defy all attempts to find an efficient solution, has called for a deeper understanding of the intrinsic computational difficulty of problems.

Advances in Computational Complexity Theory

Advances in Computational Complexity Theory PDF Author: Jin-yi Cai
Publisher: American Mathematical Soc.
ISBN: 9780821885758
Category : Mathematics
Languages : en
Pages : 234

Book Description
* Recent papers on computational complexity theory * Contributions by some of the leading experts in the field This book will prove to be of lasting value in this fast-moving field as it provides expositions not found elsewhere. The book touches on some of the major topics in complexity theory and thus sheds light on this burgeoning area of research.

Lower Bounds in Communication Complexity

Lower Bounds in Communication Complexity PDF Author: Troy Lee
Publisher: Now Publishers Inc
ISBN: 1601982585
Category : Computers
Languages : en
Pages : 152

Book Description
The communication complexity of a function f(x, y) measures the number of bits that two players, one who knows x and the other who knows y, must exchange to determine the value f(x, y). Communication complexity is a fundamental measure of complexity of functions. Lower bounds on this measure lead to lower bounds on many other measures of computational complexity. This monograph surveys lower bounds in the field of communication complexity. Our focus is on lower bounds that work by first representing the communication complexity measure in Euclidean space. That is to say, the first step in these lower bound techniques is to find a geometric complexity measure, such as rank or trace norm, that serves as a lower bound to the underlying communication complexity measure. Lower bounds on this geometric complexity measure are then found using algebraic and geometric tools.
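As a concrete instance of this approach (a sketch for illustration, not taken from the monograph): the classic log-rank lower bound states that the deterministic communication complexity of f is at least log2 of the rank of the communication matrix M_f[x][y] = f(x, y), and the bound holds for rank over any field. Rank over GF(2) can be computed with a few lines of Gaussian elimination:

```python
from math import log2

def gf2_rank(matrix):
    """Rank of a 0/1 matrix over GF(2), by Gaussian elimination on
    rows packed into Python ints (bit i of a row int = column i)."""
    rows = [int("".join(map(str, r)), 2) for r in matrix]
    rank = 0
    while rows:
        pivot = max(rows)           # row with the highest leading bit
        if pivot == 0:
            break                   # all remaining rows eliminated
        rank += 1
        hi = pivot.bit_length() - 1  # pivot column
        rows.remove(pivot)
        # clear the pivot column from every other row
        rows = [r ^ pivot if (r >> hi) & 1 else r for r in rows]
    return rank

# Equality on 2-bit inputs: M_f is the 4x4 identity matrix, rank 4,
# so any deterministic protocol must exchange at least log2(4) = 2 bits.
eq = [[1 if i == j else 0 for j in range(4)] for i in range(4)]
print(log2(gf2_rank(eq)))  # -> 2.0
```

The rank here plays exactly the role described above: a geometric complexity measure of the matrix that lower-bounds the combinatorial communication measure.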

Computational Complexity

Computational Complexity PDF Author: Sanjeev Arora
Publisher: Cambridge University Press
ISBN: 0521424267
Category : Computers
Languages : en
Pages : 609

Book Description
New and classical results in computational complexity, including interactive proofs, PCP, derandomization, and quantum computation. Ideal for graduate students.

Numbers, Information and Complexity

Numbers, Information and Complexity PDF Author: Ingo Althöfer
Publisher: Springer Science & Business Media
ISBN: 9780792377658
Category : Technology & Engineering
Languages : en
Pages : 676

Book Description
Numbers, Information and Complexity is a collection of about 50 articles in honour of Rudolf Ahlswede. His main areas of research are represented in the three sections, `Numbers and Combinations', `Information Theory (Channels and Networks, Combinatorial and Algebraic Coding, Cryptology, with the related fields Data Compression, Entropy Theory, Symbolic Dynamics, Probability and Statistics)', and `Complexity'. Special attention was paid to the interplay between the fields. Surveys on topics of current interest are included as well as new research results. The book features surveys on Combinatorics about topics such as intersection theorems, which are not yet covered in textbooks, several contributions by leading experts in data compression, and relations to Natural Sciences are discussed.

Complexity and Approximation

Complexity and Approximation PDF Author: Ding-Zhu Du
Publisher: Springer Nature
ISBN: 3030416720
Category : Computers
Languages : en
Pages : 298

Book Description
This Festschrift is in honor of Ker-I Ko, Professor at Stony Brook University, USA. Ker-I Ko was one of the founding fathers of computational complexity over real numbers and analysis. He and Harvey Friedman devised a theoretical model for real number computations by extending the computation of Turing machines. He contributed significantly to advancing the theory of structural complexity, especially on polynomial-time isomorphism, instance complexity, and the relativization of the polynomial-time hierarchy. Ker-I also made many contributions to the approximation algorithm theory of combinatorial optimization problems. This volume contains 17 contributions in the area of complexity and approximation, authored by researchers from around the world, including North America, Europe and Asia. Most of them are co-authors, colleagues, friends, and students of Ker-I Ko.

Feasible Mathematics II

Feasible Mathematics II PDF Author: Peter Clote
Publisher: Springer Science & Business Media
ISBN: 1461225663
Category : Computers
Languages : en
Pages : 456

Book Description
'Perspicuity is part of proof. If the process by means of which I get a result were not surveyable, I might indeed make a note that this number is what comes out - but what fact is this supposed to confirm for me? I don't know what is supposed to come out ...' (L. Wittgenstein)

A feasible computation uses small resources on an abstract computation device, such as a Turing machine or boolean circuit. Feasible mathematics concerns the study of feasible computations, using combinatorics and logic, as well as the study of feasibly presented mathematical structures such as groups, algebras, and so on. This volume contains contributions to feasible mathematics in three areas: computational complexity theory, proof theory and algebra, with substantial overlap between different fields. In computational complexity theory, the polynomial time hierarchy is characterized without the introduction of runtime bounds by the closure of certain initial functions under safe composition, predicative recursion on notation, and unbounded minimization (S. Bellantoni); an alternative way of looking at NP problems is introduced which focuses on which parameters of the problem are the cause of its computational complexity, and completeness, density and separation/collapse results are given for a structure theory for parametrized problems (R. Downey and M. Fellows); new characterizations of PTIME and LINEAR SPACE are given using predicative recurrence over all finite tiers of certain stratified free algebras (D.

Complexity in Information Theory

Complexity in Information Theory PDF Author: Yaser S. Abu-Mostafa
Publisher: Springer
ISBN: 9780387966007
Category : Computers
Languages : en
Pages : 131

Book Description
The means and ends of information theory and computational complexity have grown significantly closer over the past decade. Common analytic tools, such as combinatorial mathematics and information flow arguments, have been the cornerstone of VLSI complexity and cooperative computation. The basic assumption of limited computing resources is the premise for cryptography, where the distinction is made between available information and accessible information. Numerous other examples of common goals and tools between the two disciplines have shaped a new research category of 'information and complexity theory'. This volume is intended to expose to the research community some of the recent significant topics along this theme. The contributions selected here are all very basic, presently active, fairly well-established, and stimulating for substantial follow-ups. This is not an encyclopedia on the subject; it is concerned only with timely contributions of sufficient coherence and promise. The styles of the six chapters cover a wide spectrum from specific mathematical results to surveys of large areas. It is hoped that the technical content and theme of this volume will help establish this general research area. I would like to thank the authors of the chapters for contributing to this volume. I also would like to thank Ed Posner for his initiative to address this subject systematically, and Andy Fyfe and Ruth Erlanson for proofreading some of the chapters.