8:15 AM | Breakfast |
9:00 AM |
Strong locally testable codes with relaxed local decoders (abstract)
Oded Goldreich (Weizmann Institute), Tom Gur (Weizmann Institute), Ilan Komargodski (Weizmann Institute)
Locally testable codes (LTCs) are error-correcting codes that admit very efficient codeword tests. An LTC is said to be strong if it has a proximity-oblivious tester; that is, a tester that makes only a constant number of queries and rejects non-codewords with probability that depends solely on their distance from the code. Locally decodable codes (LDCs) are complementary to LTCs. While the latter allow for highly efficient rejection of strings that are far from being codewords, LDCs allow for highly efficient recovery of individual bits of the information that is encoded in strings that are close to being codewords.
While there are known constructions of strong-LTCs with nearly-linear length, the existence of a constant-query LDC with polynomial length is a major open problem. In an attempt to bypass this barrier, Ben-Sasson et al. (SICOMP 2006) introduced a natural relaxation of local decodability, called relaxed-LDCs. This notion requires local recovery of nearly all individual information-bits, yet allows for recovery-failure (but not error) on the rest. Ben-Sasson et al. constructed a constant-query relaxed-LDC with nearly-linear length (i.e., length k^(1 + alpha) for an arbitrarily small constant alpha>0, where k is the dimension of the code).
This work focuses on obtaining strong testability and relaxed decodability simultaneously. We construct a family of binary linear codes of nearly-linear length that are both strong-LTCs (with one-sided error) and constant-query relaxed-LDCs. This improves upon the previously known constructions, which obtain either weak LTCs or require polynomial length.
Our construction heavily relies on tensor codes and PCPs. In particular, we provide strong canonical PCPs of proximity for membership in any linear code with constant rate and relative distance. Loosely speaking, these are PCPs of proximity wherein the verifier is proximity oblivious (similarly to strong-LTCs) and every valid statement has a unique canonical proof. Furthermore, the verifier is required to reject non-canonical proofs (even for valid statements).
As an application, we improve the best known separation result between the complexity of decision and verification in the setting of property testing.
|
9:30 AM |
An entropy sumset inequality and polynomially fast convergence to Shannon capacity over all alphabets (abstract)
Venkatesan Guruswami (Carnegie Mellon University), Ameya Velingker (Carnegie Mellon University)
We prove a lower estimate on the increase in entropy when two copies of a conditional random variable X | Y, with X supported on Z_q={0,1,...,q-1} for prime q, are summed modulo q. Specifically, given two i.i.d. copies (X_1,Y_1) and (X_2,Y_2) of a pair of random variables (X,Y), with X taking values in Z_q, we show
H(X_1 + X_2 | Y_1, Y_2) - H(X|Y) >= alpha(q) * H(X|Y) * (1 - H(X|Y))
for some alpha(q) > 0, where H(.) is the normalized (by factor log_2(q)) entropy. In particular, if X | Y is not close to being fully random or fully deterministic, i.e., H(X|Y) \in (gamma, 1-gamma), then the entropy of the sum increases by Omega_q(gamma). Our motivation is an effective analysis of the finite-length behavior of polar codes, for which the linear dependence on gamma is quantitatively important. The assumption of q being prime is necessary: for X supported uniformly on a proper subgroup of Z_q we have H(X+X)=H(X). For X supported on infinite groups without finite subgroups (the torsion-free case), and with no conditioning, a sumset inequality for the absolute increase in (unnormalized) entropy was shown by Tao in [Tao, CP&R 2010].
We use our sumset inequality to analyze Arikan's construction of polar codes and prove that for any q-ary source X, where q is any fixed prime, and any epsilon > 0, polar codes allow efficient data compression of N i.i.d. copies of X into (H(X)+epsilon)N q-ary symbols, as soon as N is polynomially large in 1/epsilon. We can get capacity-achieving source codes with similar guarantees for composite alphabets by factoring the alphabet size into primes and combining polar codes for each prime in the factorization.
A consequence of our result for noisy channel coding is that for all discrete memoryless channels, there are explicit codes enabling reliable communication within epsilon > 0 of the symmetric Shannon capacity with a block length and decoding complexity bounded by a polynomial in 1/epsilon. This was previously shown for the special case of binary-input channels [Guruswami/Xia, FOCS'13; Hassani/Alishahi/Urbanke, CoRR 2013]; our work extends the result to channels over any alphabet.
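As a quick illustration of the unconditional special case of the inequality, the following Python sketch (ours, not the authors'; it checks the phenomenon numerically but does not compute alpha(q)) convolves a distribution on Z_q with itself and compares normalized entropies:

```python
import numpy as np

def normalized_entropy(p, q):
    """Shannon entropy of a distribution p over Z_q, normalized by log2(q)."""
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum() / np.log2(q))

def sum_distribution(p, q):
    """Distribution of X_1 + X_2 (mod q) for i.i.d. X_1, X_2 ~ p (cyclic convolution)."""
    return np.array([sum(p[a] * p[(s - a) % q] for a in range(q)) for s in range(q)])

q = 5                                    # prime, as the theorem requires
p = np.array([0.6, 0.1, 0.1, 0.1, 0.1])  # neither deterministic nor uniform
print(normalized_entropy(p, q), normalized_entropy(sum_distribution(p, q), q))
# The second value is strictly larger: the mod-q sum is more random.
# For composite q the inequality can fail: with q = 4 and X uniform on the
# subgroup {0, 2}, the sum X_1 + X_2 is again uniform on {0, 2}.
```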
|
10:00 AM |
The list-decoding size of Fourier-sparse Boolean functions (abstract)
Ishay Haviv (Academic College of Tel Aviv-Yaffo), Oded Regev (New York University)
A function defined on the Boolean hypercube is k-Fourier-sparse if it has at most k nonzero Fourier coefficients. For a function f: F_2^n -> R and parameters k and d, we prove a strong upper bound on the number of k-Fourier-sparse Boolean functions that disagree with f on at most d inputs. Our bound implies that the number of uniform and independent random samples needed for learning the class of k-Fourier-sparse Boolean functions on n variables exactly is at most O(n * k * log(k)).
As an application, we prove an upper bound on the query complexity of testing Booleanity of Fourier-sparse functions. Our bound is tight up to a logarithmic factor and quadratically improves on a result due to Gur and Tamuz [Chicago J. Theor. Comput. Sci., 2013].
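For concreteness, the Fourier sparsity of a function given as a truth table can be computed with a fast Walsh-Hadamard transform; here is a minimal Python sketch (illustrative only, not taken from the paper):

```python
import numpy as np

def fourier_coefficients(f_vals):
    """Fourier coefficients of f: F_2^n -> R, given as a length-2^n truth
    table; hat(f)(S) = E_x[f(x) * (-1)^(<S,x>)], computed by an in-place
    fast Walsh-Hadamard transform."""
    coeffs = np.array(f_vals, dtype=float)
    n = int(np.log2(len(coeffs)))
    for i in range(n):
        step = 1 << i
        for j in range(0, len(coeffs), 2 * step):
            for k in range(j, j + step):
                a, b = coeffs[k], coeffs[k + step]
                coeffs[k], coeffs[k + step] = a + b, a - b
    return coeffs / len(coeffs)

# The character x -> (-1)^(x_0) is 1-Fourier-sparse:
f = [(-1) ** (x & 1) for x in range(2 ** 3)]
print(np.count_nonzero(fourier_coefficients(f)))  # 1
```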
|
10:30 AM |
Nonclassical polynomials as a barrier to polynomial lower bounds (abstract)
Abhishek Bhowmick (UT Austin), Shachar Lovett (UC San Diego)
The problem of constructing explicit functions which cannot be approximated by low degree polynomials has been extensively studied in computational complexity, motivated by applications in circuit lower bounds, pseudo-randomness, constructions of Ramsey graphs and locally decodable codes. Still, most of the known lower bounds become trivial for polynomials of super-logarithmic degree. Here, we suggest a new barrier explaining this phenomenon. We show that many of the existing lower bound proof techniques extend to nonclassical polynomials, an extension of classical polynomials which arose in higher order Fourier analysis. Moreover, these techniques are tight for nonclassical polynomials of logarithmic degree.
|
11:00 AM | Coffee break |
11:20 AM |
FCRC keynote lecture
Don Syme (Microsoft Research Cambridge)
|
12:30 PM | Lunch |
2:00 PM |
Simplified lower bounds on the multiparty communication complexity of disjointness (abstract)
Anup Rao (University of Washington), Amir Yehudayoff (Technion)
We show that the deterministic number-on-forehead communication complexity of set disjointness for k parties on a universe of size n is Omega(n/4^k). This gives the first lower bound that is linear in n, nearly matching Grolmusz's upper bound of O(log^2(n) + k^2n/2^k). We also simplify the proof of Sherstov's Omega(sqrt(n)/(k2^k)) lower bound for the randomized communication complexity of set disjointness.
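For reference, the function being lower-bounded, as a definitional Python sketch (ours); in the number-on-forehead model the subtlety is the input access pattern, not the function itself:

```python
def disjointness(sets):
    """k-party set disjointness over a universe [n]: accept iff the k sets
    have empty common intersection. In the number-on-forehead model,
    party i sees every set except sets[i] (the one "on its forehead")."""
    return int(not set.intersection(*sets))

print(disjointness([{1, 2}, {2, 3}, {2, 4}]))  # 0: element 2 is common
print(disjointness([{1, 2}, {2, 3}, {4}]))     # 1: empty intersection
```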
|
2:30 PM |
How to compress asymmetric communication (abstract)
Sivaramakrishnan Natarajan Ramamoorthy (University of Washington), Anup Rao (University of Washington)
We study the relationship between communication and information in 2-party communication protocols when the information is asymmetric. If I^A denotes the number of bits of information revealed by the first party, I^B denotes the information revealed by the second party, and C is the number of bits of communication in the protocol, we show that (i) one can simulate the protocol using order I^A + (C^3 * I^B)^(1/4) * log(C) + (C * I^B)^(1/2) * log(C) bits of communication, and (ii) one can simulate the protocol using order I^A * 2^(O(I^B)) bits of communication. The first result gives the best known bound on the complexity of a simulation when I^A >> I^B, C^(3/4). The second gives the best known bound when I^B << log(C). In addition, we show that if a function is computed by a protocol with asymmetric information complexity, then the inputs must have a large, nearly monochromatic rectangle of the right dimensions, a fact that is useful for proving lower bounds on lopsided communication problems.
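The two simulation bounds, restated in display form (same content as above; C_sim is our shorthand for the communication of the simulation):

```latex
C_{\mathrm{sim}} = O\!\left( I^A + (C^3 I^B)^{1/4}\log C + (C\, I^B)^{1/2}\log C \right),
\qquad
C_{\mathrm{sim}} = I^A \cdot 2^{O(I^B)}.
```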
|
3:00 PM |
Majority is incompressible by AC^0[p] circuits (abstract)
Igor C. Oliveira (Columbia University), Rahul Santhanam (University of Edinburgh)
We consider C-compression games, a hybrid model between computational and communication complexity. A C-compression game for a function f:{0,1}^n -> {0,1} is a two-party communication game, where the first party Alice knows the entire input x but is restricted to use strategies computed by C-circuits, while the second party Bob initially has no information about the input, but is computationally unbounded. The parties implement an interactive communication protocol to decide the value of f(x), and the communication cost of the protocol is the maximum number of bits sent by Alice as a function of n = |x|.
We show that any AC_d[p]-compression protocol to compute Majority_n requires communication n / (log(n))^(2d + O(1)), where p is prime, and AC_d[p] denotes polynomial size unbounded fan-in depth-d Boolean circuits extended with modulo p gates. This bound is essentially optimal, and settles a question of Chattopadhyay and Santhanam (2012). This result has a number of consequences, and yields a tight lower bound on the total fan-in of oracle gates in constant-depth oracle circuits computing Majority_n.
We define multiparty compression games, where Alice interacts in parallel with a polynomial number of players that are not allowed to communicate with each other, and communication cost is defined as the sum of the lengths of the longest messages sent by Alice during each round. In this setting, we prove that the randomized r-round AC^0[p]-compression cost of Majority_n is n^(Theta(1/r)). This result implies almost tight lower bounds on the maximum individual fan-in of oracle gates in certain restricted bounded-depth oracle circuits computing Majority_n. Stronger lower bounds for functions in NP would separate NP from NC^1.
Finally, we consider the round separation question for two-party AC-compression games, and significantly improve known separations between r-round and (r+1)-round protocols, for any constant r.
|
3:30 PM | Coffee break |
4:00 PM |
Lower bounds for depth three arithmetic circuits with small bottom fanin (abstract, slides)
Neeraj Kayal (Microsoft Research India), Chandan Saha (Indian Institute of Science)
Shpilka and Wigderson (CCC 99) posed the problem of proving exponential lower bounds for (nonhomogeneous) depth three arithmetic circuits with bounded bottom fanin over a field F of characteristic zero. We resolve this problem by proving an N^(Omega(d/t)) lower bound for (nonhomogeneous) depth three arithmetic circuits with bottom fanin at most t computing an explicit N-variate polynomial of degree d over F.
Meanwhile, Nisan and Wigderson (CC 97) posed the problem of proving superpolynomial lower bounds for homogeneous depth five arithmetic circuits. Over fields of characteristic zero, we show a lower bound of N^(Omega(sqrt(d))) for homogeneous depth five circuits (and also for depth three circuits) with bottom fanin at most N^(u), for any fixed u < 1. This resolves the problem posed by Nisan and Wigderson only partially, because of the added restriction on the bottom fanin (a general homogeneous depth five circuit has bottom fanin at most N).
|
4:30 PM |
A depth-five lower bound for iterated matrix multiplication (abstract)
Suman K. Bera (Dartmouth College), Amit Chakrabarti (Dartmouth College)
We prove that certain instances of the iterated matrix multiplication (IMM) family of polynomials with N variables and degree n require N^(Omega(sqrt(n))) gates when expressed as a homogeneous depth-five Sigma Pi Sigma Pi Sigma arithmetic circuit with the bottom fan-in bounded by N^(1/2-epsilon). By a depth-reduction result of Tavenas, this size lower bound is optimal and can be achieved by the weaker class of homogeneous depth-four Sigma Pi Sigma Pi circuits.
Our result extends a recent result of Kumar and Saraf, who gave the same N^(Omega(sqrt(n))) lower bound for homogeneous depth-four Sigma Pi Sigma Pi circuits computing IMM. It is analogous to a recent result of Kayal and Saha, who gave the same lower bound for homogeneous Sigma Pi Sigma Pi Sigma circuits (over characteristic zero) with bottom fan-in at most N^(1-epsilon), for the harder problem of computing certain polynomials defined by Nisan-Wigderson designs.
|
5:00 PM |
Best student paper award winner
Factors of low individual degree polynomials (abstract)
Rafael Oliveira (Princeton University)
In [Kaltofen, 1989], Kaltofen proved the remarkable fact that multivariate polynomial factorization can be done efficiently, in randomized polynomial time. Still, more than twenty years after Kaltofen's work, many questions remain unanswered regarding the complexity aspects of polynomial factorization, such as the question of whether factors of polynomials efficiently computed by arithmetic formulas also have small arithmetic formulas, asked in [Kopparty/Saraf/Shpilka,CCC'14], and the question of bounding the depth of the circuits computing the factors of a polynomial.
We are able to answer these questions in the affirmative for the interesting class of polynomials of bounded individual degrees, which contains polynomials such as the determinant and the permanent. We show that if P(x_1, ..., x_n) is a polynomial with individual degrees bounded by r that can be computed by a formula of size s and depth d, then any factor f(x_1, ..., x_n) of P(x_1, ..., x_n) can be computed by a formula of size poly((rn)^r, s) and depth d+5. This partially answers the question above posed in [Kopparty/Saraf/Shpilka, CCC'14], which asked whether this result holds without the exponential dependence on r. Our work generalizes the main factorization theorem from Dvir et al. [Dvir/Shpilka/Yehudayoff, SIAM J. Comp., 2009], who proved it for the special case when the factors are of the form f(x_1, ..., x_n) = x_n - g(x_1, ..., x_(n-1)). Along the way, we introduce several new technical ideas that could be of independent interest when studying arithmetic circuits (or formulas).
|
5:45 PM | Dinner (on your own)
|
8:30 PM | Business meeting
|
8:15 AM | Breakfast |
9:00 AM |
Verifiable stream computation and Arthur-Merlin communication (abstract, slides)
Amit Chakrabarti (Dartmouth College), Graham Cormode (University of Warwick), Andrew McGregor (UMass Amherst), Justin Thaler (Yahoo Labs), Suresh Venkatasubramanian (University of Utah)
In the setting of streaming interactive proofs (SIPs), a client (verifier) needs to compute a given function on a massive stream of data, arriving online, but is unable to store even a small fraction of the data. It outsources the processing to a third party service (prover), but is unwilling to blindly trust answers returned by this service. Thus, the service cannot simply supply the desired answer; it must convince the verifier of its correctness via a short interaction after the stream has been seen.
In this work we study "barely interactive" SIPs. Specifically, we show that two or three rounds of interaction suffice to solve several query problems - including Index, Median, Nearest Neighbor Search, Pattern Matching, and Range Counting - with polylogarithmic space and communication costs. Such efficiency with O(1) rounds of interaction was thought to be impossible based on previous work.
On the other hand, we initiate a formal study of the limitations of constant-round SIPs by introducing a new hierarchy of communication models called Online Interactive Proofs (OIPs). The online nature of these models is analogous to the streaming restriction placed upon the verifier in an SIP. We give upper and lower bounds that (1) characterize, up to quadratic blowups, every finite level of the OIP hierarchy in terms of other well-known communication complexity classes, (2) separate the first four levels of the hierarchy, and (3) reveal that the hierarchy collapses to the fourth level. Our study of OIPs reveals marked contrasts and some parallels with the classic Turing Machine theory of interactive proofs, establishes limits on the power of existing techniques for developing constant-round SIPs, and provides a new characterization of (non-online) Arthur-Merlin communication in terms of an online model.
|
9:30 AM |
Identifying an honest EXP^NP oracle among many (abstract, slides)
Shuichi Hirahara (The University of Tokyo)
We provide a general framework to remove short advice by formulating the following computational task for a function f: given two oracles at least one of which is honest (i.e. correctly computes f on all inputs) as well as an input, the task is to compute f on the input with the help of the oracles by a probabilistic polynomial-time machine, which we shall call a selector. We characterize the languages for which short advice can be removed by the notion of selector: a paddable language has a selector if and only if short advice of a probabilistic machine that accepts the language can be removed under any relativized world.
Previously, instance checkers have served as a useful tool to remove short advice of probabilistic computation. We observe that the existence of instance checkers is a stronger property than the removability of short advice: although no instance checker for EXP^NP-complete languages exists unless EXP^NP = NEXP, we prove that there exists a selector for any EXP^NP-complete language, by building on the proof of MIP = NEXP by Babai, Fortnow, and Lund (1991).
|
10:00 AM |
Adaptivity helps for testing juntas (abstract)
Rocco A. Servedio (Columbia University), Li-Yang Tan (Columbia University / Simons Institute, UC Berkeley), John Wright (Carnegie Mellon University)
We give a new lower bound on the query complexity of any non-adaptive algorithm for testing whether an unknown Boolean function is a k-junta versus epsilon-far from every k-junta. Our lower bound is that any non-adaptive algorithm must make Omega((k * log(k)) / (epsilon^c * log(log(k)/epsilon^c))) queries for this testing problem, where c is any absolute constant < 1. For suitable values of epsilon this is asymptotically larger than the O(k * log(k) + k/epsilon) query complexity of the best known adaptive algorithm [Blais, STOC'09] for testing juntas, and thus the new lower bound shows that adaptive algorithms are more powerful than non-adaptive algorithms for the junta testing problem.
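For reference, the property being tested: f is a k-junta if it depends on at most k of its n coordinates. A brute-force truth-table check in Python (definitional sketch; the algorithms in question are query-efficient, this deliberately is not):

```python
from itertools import combinations

def is_junta_on(f_vals, n, coords):
    """True iff f: {0,1}^n -> {0,1} (given as a truth table) depends
    only on the coordinates in `coords`."""
    seen = {}
    for x in range(2 ** n):
        key = tuple((x >> i) & 1 for i in coords)
        if seen.setdefault(key, f_vals[x]) != f_vals[x]:
            return False
    return True

def is_k_junta(f_vals, n, k):
    """True iff some set of k coordinates determines f."""
    return any(is_junta_on(f_vals, n, c) for c in combinations(range(n), k))

# x0 AND x1 on 4 variables is a 2-junta:
f = [((x & 1) and ((x >> 1) & 1)) for x in range(16)]
print(is_k_junta(f, 4, 2))  # True
```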
|
10:30 AM |
A characterization of hard-to-cover CSPs (abstract)
Amey Bhangale (Rutgers University), Prahladh Harsha (Tata Institute of Fundamental Research), Girish Varma (Tata Institute of Fundamental Research)
We continue the study of the covering complexity of constraint satisfaction problems (CSPs) initiated by Guruswami, Håstad and Sudan [SIAM J. Computing, 31(6):1663--1686, 2002] and Dinur and Kol [in Proc. 28th IEEE Conference on Computational Complexity, 2013]. The covering number of a CSP instance Phi, denoted by nu(Phi), is the smallest number of assignments to the variables of Phi such that each constraint of Phi is satisfied by at least one of the assignments. We show the following results regarding how well efficient algorithms can approximate the covering number of a given CSP instance.
1. Assuming a covering unique games conjecture, introduced by Dinur and Kol, we show that for every non-odd predicate P over any constant-sized alphabet and every integer K, it is NP-hard to distinguish between P-CSP instances (i.e., CSP instances where all the constraints are of type P) which are coverable by a constant number of assignments and those whose covering number is at least K. Previously, Dinur and Kol, using the same covering unique games conjecture, had shown a similar hardness result for every non-odd predicate over the Boolean alphabet that supports a pairwise independent distribution. Our generalization yields a complete characterization of CSPs over a constant-sized alphabet Sigma that are hard to cover, since CSPs over odd predicates are trivially coverable with |Sigma| assignments.
2. For a large class of predicates that are contained in the 2k-LIN predicate, we show that it is quasi-NP-hard to distinguish between instances which have covering number at most two and covering number at least Omega(log(log(n))). This generalizes the 4-LIN result of Dinur and Kol that states it is quasi-NP-hard to distinguish between 4-LIN-CSP instances which have covering number at most two and covering number at least Omega(log(log(log(n)))).
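The covering number defined above is easy to compute by brute force on toy instances; a definitional Python sketch (ours; exponential time, tiny instances only):

```python
from itertools import product, combinations

def covering_number(n_vars, constraints):
    """nu(Phi): the least number of assignments such that every constraint
    is satisfied by at least one of them. Constraints are predicates on a
    full 0/1 assignment."""
    assignments = list(product([0, 1], repeat=n_vars))
    for nu in range(1, len(assignments) + 1):
        for cover in combinations(assignments, nu):
            if all(any(c(a) for a in cover) for c in constraints):
                return nu

# Two contradictory 2-LIN (XOR) constraints: jointly unsatisfiable,
# yet coverable by two assignments.
phi = [lambda a: (a[0] ^ a[1]) == 1, lambda a: (a[0] ^ a[1]) == 0]
print(covering_number(2, phi))  # 2
```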
|
11:00 AM | Coffee break |
11:20 AM |
FCRC keynote lecture
Kathy Yelick (UC Berkeley) |
12:30 PM | Lunch |
2:00 PM |
Subexponential size hitting sets for bounded depth multilinear formulas (abstract)
Rafael Oliveira (Princeton University), Amir Shpilka (Tel-Aviv University), Ben Lee Volk (Tel-Aviv University)
In this paper we give subexponential size hitting sets for bounded depth multilinear arithmetic formulas. Using the known relation between black-box PIT and lower bounds, we obtain lower bounds for these models.
For depth-3 multilinear formulas, of size exp(n^delta), we give a hitting set of size exp(~O(n^(2/3 + 2*delta/3))). This implies a lower bound of exp(~Omega(n^(1/2))) for depth-3 multilinear formulas, for some explicit polynomial.
For depth-4 multilinear formulas, of size exp(n^delta), we give a hitting set of size exp(~O(n^(2/3 + 4*delta/3))). This implies a lower bound of exp(~Omega(n^(1/4))) for depth-4 multilinear formulas, for some explicit polynomial.
A regular formula consists of alternating layers of +, * gates, where all gates at layer i have the same fan-in. We give a hitting set of size (roughly) exp(n^(1-delta)), for regular depth-d multilinear formulas of size exp(n^delta), where delta = O(1/sqrt(5)^d). This result implies a lower bound of roughly exp(~Omega(n^(1/sqrt(5)^d))) for such formulas.
We note that better lower bounds are known for these models, but also that none of those bounds was achieved via the construction of a hitting set. Moreover, no lower bound that implies such PIT results, even in the white-box model, is currently known.
Our results are combinatorial in nature and rely on reducing the underlying formula, first to a depth-4 formula, and then to a read-once algebraic branching program (from depth-3 formulas we go straight to read-once algebraic branching programs).
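For reference, the object being constructed: a set H of points is a hitting set for a class if every nonzero polynomial in the class evaluates to a nonzero value somewhere on H. A definitional Python check (illustrative; polynomials passed as callables):

```python
def is_hitting_set(points, nonzero_polys):
    """True iff every (nonzero) polynomial in the list is nonzero on some
    point of `points` -- the black-box PIT guarantee."""
    return all(any(p(*pt) != 0 for pt in points) for p in nonzero_polys)

# The multilinear polynomial x*y - x is nonzero, and the grid {0,1}^2
# hits it (at the point (1, 0)):
print(is_hitting_set([(0, 0), (0, 1), (1, 0), (1, 1)],
                     [lambda x, y: x * y - x]))  # True
```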
|
2:30 PM |
Deterministic identity testing for sum of read-once oblivious arithmetic branching programs (abstract, slides)
Rohit Gurjar (IIT Kanpur), Arpita Korwar (IIT Kanpur), Nitin Saxena (IIT Kanpur), Thomas Thierauf (Hochschule Aalen)
A read-once oblivious arithmetic branching program (ROABP) is an arithmetic branching program (ABP) where each variable occurs in at most one layer. We give the first polynomial time whitebox identity test for a polynomial computed by a sum of constantly many ROABPs. We also give a corresponding blackbox algorithm with quasi-polynomial time complexity n^(O(log(n))). In both cases, our time complexity is double exponential in the number of ROABPs.
ROABPs are a generalization of set-multilinear depth-3 circuits. The prior results for the sum of constantly many set-multilinear depth-3 circuits were only slightly better than brute-force, i.e. exponential-time.
Our techniques are a new interplay of three concepts for ROABP: low evaluation dimension, basis isolating weight assignment and low-support rank concentration. We relate basis isolation to rank concentration and extend it to a sum of two ROABPs using evaluation dimension (or partial derivatives).
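To fix the model: an ROABP outputs an entry of a product of matrices, where the matrix in each layer depends only on the single variable read there. A minimal Python evaluation sketch (ours; matrix layers passed as callables, not the paper's algorithm):

```python
import numpy as np

def roabp_eval(layers, x):
    """Evaluate an ROABP at the point x: the output is the (0, 0) entry of
    M_1(x_1) @ ... @ M_n(x_n), each M_i a matrix-valued function of x_i."""
    M = layers[0](x[0])
    for layer, xi in zip(layers[1:], x[1:]):
        M = M @ layer(xi)
    return M[0, 0]

# A width-2 ROABP computing the polynomial x1 + x2:
layers = [lambda t: np.array([[1, t], [0, 1]]),
          lambda t: np.array([[t], [1]])]
print(roabp_eval(layers, [3, 4]))  # 7
```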
|
3:00 PM |
Kolmogorov width of discrete linear spaces: an approach to matrix rigidity (abstract, slides)
Alex Samorodnitsky (Hebrew University), Ilya Shkredov (Steklov Mathematical Institute and IITP RAS), Sergey Yekhanin (Microsoft)
A square matrix V is called rigid if every matrix V' obtained by altering a small number of entries of V has sufficiently high rank. While random matrices are rigid with high probability, no explicit constructions of rigid matrices are known to date. Obtaining such explicit matrices would have major implications in computational complexity theory. One approach to establishing rigidity of a matrix V is to come up with a property that is satisfied by any collection of vectors arising from a low-dimensional space, but is not satisfied by the rows of V even after alterations. In this paper we propose such a candidate property that has the potential of establishing rigidity of combinatorial design matrices over the field F_2.
Stated informally, we conjecture that under a suitable embedding of F_2^n into R^n, vectors arising from a low dimensional F_2-linear space always have somewhat small Kolmogorov width, i.e., admit a non-trivial simultaneous approximation by a low dimensional Euclidean space. This implies rigidity of combinatorial designs, as their rows do not admit such an approximation even after alterations. Our main technical contribution is a collection of results establishing weaker forms and special cases of the conjecture above.
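The notion of rigidity itself is elementary to state computationally. A brute-force Python sketch over F_2 (ours; purely definitional and exponential time, usable only on tiny matrices):

```python
import itertools
import numpy as np

def rank_f2(M):
    """Rank of a 0/1 matrix over F_2 by Gaussian elimination."""
    M = (M % 2).copy()
    r = 0
    for c in range(M.shape[1]):
        piv = next((i for i in range(r, M.shape[0]) if M[i, c]), None)
        if piv is None:
            continue
        M[[r, piv]] = M[[piv, r]]        # swap the pivot row into place
        for i in range(M.shape[0]):
            if i != r and M[i, c]:
                M[i] ^= M[r]             # eliminate column c elsewhere
        r += 1
    return r

def min_rank_after_flips(M, s):
    """min rank over all matrices differing from M in at most s entries;
    M is 'rigid' when this stays large even for fairly large s."""
    cells = list(itertools.product(range(M.shape[0]), range(M.shape[1])))
    best = rank_f2(M)
    for k in range(1, s + 1):
        for flips in itertools.combinations(cells, k):
            M2 = M.copy()
            for (i, j) in flips:
                M2[i, j] ^= 1
            best = min(best, rank_f2(M2))
    return best

I = np.eye(3, dtype=int)
print(rank_f2(I), min_rank_after_flips(I, 1))  # 3 2
```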
|
3:30 PM | Coffee break |
4:00 PM |
On the (non) NP-hardness of computing circuit complexity (abstract)
Cody Murray (Stanford), Ryan Williams (Stanford)
The Minimum Circuit Size Problem (MCSP) is: given the truth table of a Boolean function f and a size parameter k, is the circuit complexity of f at most k? This is the definitive problem of circuit synthesis, and it has been studied since the 1950s. Unlike many problems of its kind, MCSP is not known to be NP-hard, yet an efficient algorithm for this problem also seems very unlikely: for example, MCSP in P would imply there are no pseudorandom functions.
Although most NP-complete problems are complete under strong "local" reduction notions such as poly-logarithmic time projections, we show that MCSP is provably not NP-hard under O(n^(1/2-epsilon))-time projections, for every epsilon > 0. We prove that the NP-hardness of MCSP under (logtime-uniform) AC0 reductions would imply extremely strong lower bounds: NP \not\subset P/poly and E \not\subset i.o.-SIZE(2^(delta * n)) for some delta > 0 (hence P = BPP also follows). We show that even the NP-hardness of MCSP under general polynomial-time reductions would separate complexity classes: EXP != NP \cap P/poly, which implies EXP != ZPP. These results help explain why it has been so difficult to prove that MCSP is NP-hard.
We also consider the nondeterministic generalization of MCSP: the Nondeterministic Minimum Circuit Size Problem (NMCSP), where one wishes to compute the nondeterministic circuit complexity of a given function. We prove that the Sigma_2 P-hardness of NMCSP, even under arbitrary polynomial-time reductions, would imply EXP \not\subset P/poly.
|
4:30 PM |
Circuits with medium fan-in (abstract)
Pavel Hrubes (Czech Academy of Sciences), Anup Rao (University of Washington)
We consider boolean circuits in which every gate may compute an arbitrary boolean function of k other gates, for a parameter k. We give an explicit function f:{0,1}^n -> {0,1} that requires at least Omega(log^2(n)) non-input gates when k = 2n/3. When the circuit is restricted to being layered and of depth 2, we prove a lower bound of n^(Omega(1)) on the number of non-input gates. When the circuit is a formula with gates of fan-in k, we give a lower bound of Omega(n^2/(k * log(n))) on the total number of gates.
Our model is connected to some well known approaches to proving lower bounds in complexity theory. Optimal lower bounds for the Number-On-Forehead model in communication complexity, or for bounded depth circuits in AC^0, or extractors for varieties over small fields would imply strong lower bounds in our model. On the other hand, new lower bounds for our model would prove new time-space tradeoffs for branching programs and impossibility results for (fan-in 2) circuits with linear size and logarithmic depth. In particular, our lower bound gives a different proof for a known time-space tradeoff for oblivious branching programs.
|
5:00 PM |
Best paper award winner
Correlation bounds against monotone NC1 (abstract)
Benjamin Rossman (NII, Tokyo and Simons Institute, UC Berkeley)
This paper gives the first correlation bounds under product distributions, including the uniform distribution, against the class mNC^1 of polynomial-size O(log(n))-depth monotone circuits. Our main theorem, proved using the pathset complexity framework introduced in [Rossman, arXiv:1312.0355], shows that the average-case k-CYCLE problem (on Erdős-Rényi random graphs with an appropriate edge density) is 1/2 + 1/poly(n) hard for mNC^1. Combining this result with O'Donnell's hardness amplification theorem [O'Donnell, 2002], we obtain an explicit monotone function of n variables (in the class mSAC^1) which is 1/2 + n^(-1/2+epsilon) hard for mNC^1 under the uniform distribution for any desired constant epsilon > 0. This bound is nearly best possible, since every monotone function has agreement 1/2 + Omega(log(n)/sqrt(n)) with some function in mNC^1 [O'Donnell/Wimmer, FOCS'09].
Our correlation bounds against mNC^1 extend smoothly to non-monotone NC^1 circuits with a bounded number of negation gates. Using Holley's monotone coupling theorem [Holley, Comm. Math. Physics, 1974], we prove the following lemma: with respect to any product distribution, if a balanced monotone function f is 1/2 + delta hard for monotone circuits of a given size and depth, then f is 1/2 + (2^(t+1)-1)*delta hard for (non-monotone) circuits of the same size and depth with at most t negation gates. We thus achieve a lower bound against NC^1 circuits with (1/2-epsilon)*log(n) negation gates, improving the previous record of 1/6*log(log(n)) [Amano/Maruoka, SIAM J. Comp., 2005]. Our bound on negations is "half" optimal, since \lceil log(n+1) \rceil negation gates are known to be fully powerful for NC^1 [Ajtai/Komlos/Szemeredi, STOC'83; Fischer, GI'75].
|
5:45 PM | Dinner (on your own) |
8:30 PM | Rump session
|
8:15 AM | Breakfast |
9:00 AM |
Non-commutative formulas and Frege lower bounds: a new characterization of propositional proofs (abstract)
Fu Li (Tsinghua University), Iddo Tzameret (Royal Holloway, University of London), Zhengyu Wang (Harvard University)
Does every Boolean tautology have a short propositional-calculus proof? Here, a propositional-calculus (i.e. Frege) proof is any proof starting from a set of axioms and deriving new Boolean formulas using a fixed set of sound derivation rules. Establishing any super-polynomial size lower bound on Frege proofs (in terms of the size of the formula proved) is a major open problem in proof complexity, and among the handful of fundamental hardness questions in complexity theory at large. Non-commutative arithmetic formulas, on the other hand, constitute a quite weak computational model, for which exponential-size lower bounds were shown already back in 1991 by Nisan [STOC 1991], using a particularly transparent argument.
In this work we show that Frege lower bounds in fact follow from corresponding size lower bounds on non-commutative formulas computing certain polynomials (and that such lower bounds on non-commutative formulas must exist, unless NP=coNP). More precisely, we demonstrate a natural association between tautologies T and non-commutative polynomials p, such that:
(*) if T has a polynomial-size Frege proof then p has a polynomial-size non-commutative arithmetic formula; and conversely, when T is a DNF, if p has a polynomial-size non-commutative arithmetic formula over GF(2) then T has a Frege proof of quasi-polynomial size.
The argument is a characterization of Frege proofs as non-commutative formulas: we show that the Frege system is (quasi-)polynomially equivalent to a non-commutative Ideal Proof System (IPS), following the recent work of Grochow and Pitassi [FOCS 2014] that introduced a propositional proof system in which proofs are arithmetic circuits, and the work in [Tzameret 2011] that considered adding the commutator as an axiom in algebraic propositional proof systems. This gives a characterization of propositional Frege proofs in terms of (non-commutative) arithmetic formulas that is tighter than the formula version of IPS in Grochow and Pitassi [FOCS 2014], in the following sense:
(i) The non-commutative IPS is polynomial-time checkable - whereas the original IPS was checkable in probabilistic polynomial-time; and
(ii) Frege proofs unconditionally quasi-polynomially simulate the non-commutative IPS - whereas Frege was shown to efficiently simulate IPS only assuming that the decidability of PIT for (commutative) arithmetic formulas by polynomial-size circuits is efficiently provable in Frege.
|
9:30 AM |
The space complexity of cutting planes refutations (abstract, slides)
Nicola Galesi (Sapienza University of Rome), Pavel Pudlák (Czech Academy of Sciences), Neil Thapen (Czech Academy of Sciences)
We study the space complexity of the cutting planes proof system, in which the lines in a proof are integral linear inequalities. We measure the space used by a refutation as the number of linear inequalities that need to be kept on a blackboard while verifying it. We show that any unsatisfiable set of linear inequalities has a cutting planes refutation in space five. This is in contrast to the weaker resolution proof system, for which the analogous space measure has been well-studied and many optimal linear lower bounds are known.
Motivated by this result we consider a natural restriction of cutting planes, in which all coefficients have size bounded by a constant. We show that there is a CNF which requires super-constant space to refute in this system. The system nevertheless already has an exponential speed-up over resolution with respect to size, and we additionally show that it is stronger than resolution with respect to space, by constructing constant-space cutting planes proofs, with coefficients bounded by two, of the pigeonhole principle.
We also consider variable instance space for cutting planes, where we count the number of instances of variables on the blackboard, and total space, where we count the total number of symbols.
|
10:00 AM |
Tight size-degree lower bounds for sums-of-squares proofs (abstract, slides)
Massimo Lauria (KTH Royal Institute of Technology), Jakob Nordström (KTH Royal Institute of Technology)
We exhibit families of 4-CNF formulas over n variables that have sums-of-squares (SOS) proofs of unsatisfiability of degree (a.k.a. rank) d but require SOS proofs of size n^(Omega(d)) for values of d = d(n) from constant all the way up to n^(delta) for some universal constant delta. This shows that the n^(O(d)) running time obtained by using the Lasserre semidefinite programming relaxations to find degree-d SOS proofs is optimal up to constant factors in the exponent. We establish this result by combining NP-reductions expressible as low-degree SOS derivations with the idea of relativizing CNF formulas in [Krajicek '04] and [Dantchev and Riis '03], and then applying a restriction argument as in [Atserias, Müller, and Oliva '13] and [Atserias, Lauria, and Nordström '14]. This yields a generic method of amplifying SOS degree lower bounds to size lower bounds, and also generalizes the approach in [ALN14] to obtain size lower bounds for the proof systems resolution, polynomial calculus, and Sherali-Adams from lower bounds on width, degree, and rank, respectively.
|
10:30 AM |
A generalized method for proving polynomial calculus degree lower bounds (abstract, slides)
Mladen Miksa (KTH Royal Institute of Technology), Jakob Nordstrom (KTH Royal Institute of Technology)
We study the problem of establishing lower bounds for polynomial calculus (PC) and polynomial calculus resolution (PCR) on proof degree, and hence by [Impagliazzo et al. '99] also on proof size. [Alekhnovich and Razborov '03] established that if the clause-variable incidence graph of a CNF formula F is a good enough expander, then proving that F is unsatisfiable requires high PC/PCR degree. We further develop the techniques in [AR03] to show that if one can "cluster" clauses and variables in a way that "respects the structure" of the formula in a certain sense, then it is sufficient that the incidence graph of this clustered version is an expander. As a corollary, we prove that the functional pigeonhole principle (FPHP) formulas require high PC/PCR degree when restricted to constant-degree expander graphs. This answers an open question in [Razborov '02], and also implies that the standard CNF encoding of the FPHP formulas requires exponential proof size in polynomial calculus resolution.
|
11:00 AM | Coffee break |
11:20 AM |
FCRC keynote lecture
Balaji Prabhakar (Stanford) |
12:30 PM | Lunch |
2:00 PM |
Generalized quantum Arthur-Merlin games (abstract, slides)
Hirotada Kobayashi (National Institute of Informatics), François Le Gall (University of Tokyo), Harumichi Nishimura (Nagoya University)
This paper investigates the role of interaction and coins in quantum Arthur-Merlin games (also called public-coin quantum interactive proof systems). While the existing model restricts the messages from the verifier to be classical even in the quantum setting, the present work introduces a generalized version of quantum Arthur-Merlin games where the messages from the verifier can be quantum as well: the verifier can send not only random bits, but also halves of EPR pairs. This generalization turns out to provide several novel characterizations of quantum interactive proof systems with a constant number of turns. First, it is proved that the complexity class corresponding to two-turn quantum Arthur-Merlin games where both of the two messages are quantum, denoted qq-QAM in this paper, does not change by adding a constant number of turns of classical interaction prior to the communications of qq-QAM proof systems. This can be viewed as a quantum analogue of the celebrated collapse theorem for AM due to Babai. To prove this collapse theorem, this paper presents a natural complete problem for qq-QAM: deciding whether the output of a given quantum circuit is close to a totally mixed state. This complete problem is on the very line of the previous studies investigating the hardness of checking properties related to quantum circuits, and thus, qq-QAM may provide a good measure in computational complexity theory. It is further proved that the class qq-QAM_1, the perfect-completeness variant of qq-QAM, gives new bounds for standard well-studied classes of two-turn quantum interactive proof systems. Finally, the collapse theorem above is extended to comprehensively classify the role of classical and quantum interactions in quantum Arthur-Merlin games: it is proved that, for any constant m >= 2, the class of problems having m-turn quantum Arthur-Merlin proof systems is either equal to PSPACE or equal to the class of problems having two-turn quantum Arthur-Merlin proof systems of a specific type, which provides a complete set of quantum analogues of Babai's collapse theorem.
|
2:30 PM |
Parallel repetition for entangled k-player games via fast quantum search (abstract)
Kai-Min Chung (Academia Sinica), Xiaodi Wu (MIT), Henry Yuen (MIT)
We present two parallel repetition theorems for the entangled value of multi-player, one-round free games (games where the inputs come from a product distribution). Our first theorem shows that for a k-player free game G with entangled value val^*(G) = 1 - epsilon, the n-fold repetition of G has entangled value val^*(G^(\otimes n)) at most (1 - epsilon^(3/2))^(Omega(n/(s * k^4))), where s is the answer length of any player. In contrast, the best known parallel repetition theorem for the classical value of two-player free games is val(G^(\otimes n)) <= (1 - epsilon^2)^(Omega(n/s)), due to Barak et al. (RANDOM 2009). This suggests the possibility of a separation between the behavior of entangled and classical free games under parallel repetition.
Our second theorem handles the broader class of free games G where the players can output (possibly entangled) quantum states. For such games, the repeated entangled value is upper bounded by (1 - epsilon^2)^(Omega(n/(s * k^2))). We also show that the dependence of the exponent on k is necessary: we exhibit a k-player free game G and n >= 1 such that val^*(G^(\otimes n)) >= val^*(G)^(n/k).
Our analysis exploits a novel connection between communication protocols and quantum parallel repetition, first explored by Chailloux and Scarpa (ICALP 2014). We demonstrate that better communication protocols yield better parallel repetition theorems: in particular, our first theorem crucially uses a quantum search protocol by Aaronson and Ambainis, which gives a quadratic Grover speed-up for distributed search problems. Finally, our results apply to a broader class of games than were previously considered: in particular, we obtain the first parallel repetition theorem for entangled games involving more than two players, and for games involving quantum outputs.
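The two repetition bounds, restated in display form (s is the answer length of any player, k the number of players, and val^*(G) = 1 - epsilon):

```latex
\mathrm{val}^*\!\left(G^{\otimes n}\right) \le \left(1 - \epsilon^{3/2}\right)^{\Omega\left(n/(s k^4)\right)}
\qquad\text{and, for quantum outputs,}\qquad
\mathrm{val}^*\!\left(G^{\otimes n}\right) \le \left(1 - \epsilon^{2}\right)^{\Omega\left(n/(s k^2)\right)}.
```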
|
3:00 PM |
Upper bounds on quantum query complexity inspired by the Elitzur-Vaidman bomb tester (abstract, slides)
Cedric Yen-Yu Lin (MIT), Han-Hsuan Lin (MIT)
Inspired by the Elitzur-Vaidman bomb testing problem [Elitzur/Vaidman 1993], we introduce a new query complexity model, which we call bomb query complexity B(f). We investigate its relationship with the usual quantum query complexity Q(f), and show that B(f)=Theta(Q(f)^2).
This result gives a new method to upper bound the quantum query complexity: we show how to derive bomb query algorithms from classical algorithms, which then provide nonconstructive upper bounds on Q(f) = Theta(sqrt(B(f))). We subsequently give explicit quantum algorithms matching our upper bound method. We apply this method to the single-source shortest paths problem on unweighted graphs, obtaining an algorithm with O(n^(1.5)) quantum query complexity, improving on the best known algorithm of O(n^(1.5) * sqrt(log(n))) [Furrow, 2008]. Applying this method to the maximum bipartite matching problem gives an O(n^(1.75)) algorithm, improving on the trivial O(n^2) upper bound, previously the best known.
|
3:30 PM | Coffee break |
4:00 PM |
A polylogarithmic PRG for degree 2 threshold functions in the Gaussian setting (abstract)
Daniel Kane (University of California, San Diego)
We construct and analyze a new pseudorandom generator for degree 2 polynomial threshold functions with respect to the Gaussian measure. In particular, we obtain one whose seed length is polylogarithmic in both the dimension and the desired error, a substantial improvement over existing constructions.
Our generator is obtained as an appropriate weighted average of pseudorandom generators against read once branching programs. The analysis requires a number of ideas including a hybrid argument and a structural result that allows us to treat our degree 2 threshold function as a function of a number of linear polynomials and one approximately linear polynomial.
|
4:30 PM |
Incompressible functions, relative-error extractors, and the power of nondeterministic reductions (abstract)
Benny Applebaum (Tel-Aviv University), Sergei Artemenko (University of Haifa), Ronen Shaltiel (University of Haifa), Guang Yang (Tsinghua University)
A circuit C compresses a function f:{0,1}^n -> {0,1}^m if given an input x in {0,1}^n the circuit C can shrink x to a shorter l-bit string x' such that later, a computationally-unbounded solver D will be able to compute f(x) based on x'. In this paper we study the existence of functions which are incompressible by circuits of some fixed polynomial size s=n^c. Motivated by cryptographic applications, we focus on average-case (l,epsilon) incompressibility, which guarantees that on a random input x in {0,1}^n, for every size s circuit C:{0,1}^n -> {0,1}^l and any unbounded solver D, the success probability Pr_x[D(C(x))=f(x)] is upper-bounded by 2^(-m)+epsilon. While this notion of incompressibility appeared in several works (e.g., Dubrov and Ishai, STOC 06), so far no explicit constructions of efficiently computable incompressible functions were known. In this work we present the following results:
1. Assuming that E is hard for exponential size nondeterministic circuits, we construct a polynomial time computable boolean function f:{0,1}^n -> {0,1} which is incompressible by size n^c circuits with communication l=(1-o(1)) * n and error epsilon=n^(-c). Our technique generalizes to the case of PRGs against nonboolean circuits, improving and simplifying the previous construction of Shaltiel and Artemenko (STOC 14).
2. We show that it is possible to achieve negligible error parameter epsilon=n^(-omega(1)) for nonboolean functions. Specifically, assuming that E is hard for exponential size Sigma_3-circuits, we construct a nonboolean function f:{0,1}^n -> {0,1}^m which is incompressible by size n^c circuits with l=Omega(n) and extremely small epsilon=n^(-c) * 2^(-m). Our construction combines the techniques of Trevisan and Vadhan (FOCS 00) with a new notion of relative error deterministic extractor which may be of independent interest.
3. We show that the task of constructing an incompressible boolean function f:{0,1}^n -> {0,1} with negligible error parameter epsilon cannot be achieved by "existing proof techniques". Namely, nondeterministic reductions (or even Sigma_i reductions) cannot get epsilon=n^(-omega(1)) for boolean incompressible functions. Our results also apply to constructions of standard Nisan-Wigderson type PRGs and (standard) boolean functions that are hard on average, explaining, in retrospective, the limitations of existing constructions. Our impossibility result builds on an approach of Shaltiel and Viola (SIAM J. Comp., 2010).
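For reference, the average-case (l, epsilon)-incompressibility requirement from the first paragraph, in display form:

```latex
\Pr_{x \in \{0,1\}^n}\!\big[ D(C(x)) = f(x) \big] \;\le\; 2^{-m} + \epsilon
\quad \text{for every size-}s \text{ circuit } C\colon \{0,1\}^n \to \{0,1\}^{\ell} \text{ and every unbounded solver } D.
```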
|
5:00 PM |
On randomness extraction in AC0 (abstract)
Oded Goldreich (Weizmann Institute), Emanuele Viola (Northeastern University and Harvard University), Avi Wigderson (Institute for Advanced Study)
We consider randomness extraction by AC0 circuits. The main parameter, n, is the length of the source, and all other parameters are functions of it. The additional extraction parameters are the min-entropy bound k=k(n), the seed length r=r(n), the output length m=m(n), and the (output) deviation bound epsilon=epsilon(n).
For k <= n/log^(omega(1))(n), we show that AC0-extraction is possible if and only if m/r <= 1 + poly(log(n)) * k/n; that is, the extraction rate m/r exceeds the trivial rate (of one) by an additive amount that is proportional to the min-entropy rate k/n. In particular, non-trivial AC0-extraction (i.e., m >= r+1) is possible if and only if k * r > n/poly(log(n)). For k >= n/log^(O(1))(n), we show that AC0-extraction of r+Omega(r) bits is possible when r=O(log(n)), but leave open the question of whether more bits can be extracted in this case.
The impossibility result is for constant epsilon, and the possibility result supports epsilon=1/poly(n). The impossibility result is for (possibly) non-uniform AC0, whereas the possibility result holds for uniform AC0. All our impossibility results hold even for the model of bit-fixing sources, where k coincides with the number of non-fixed (i.e., random) bits.
We also consider deterministic AC0 extraction from various classes of restricted sources. In particular, for any constant delta > 0, we give explicit AC0 extractors for poly(1/delta) independent sources that are each of min-entropy rate delta; and four sources suffice for delta=0.99. Also, we give non-explicit AC0 extractors for bit-fixing sources of entropy rate 1/poly(log(n)) (i.e., having n/poly(log(n)) unfixed bits). This shows that the known analysis of the "restriction method" (for making a circuit constant by fixing as few variables as possible) is tight for AC0 even if the restriction is picked deterministically depending on the circuit.
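The main characterization, restated in display form (for min-entropy k <= n/log^(omega(1))(n)):

```latex
\text{AC}^0\text{-extraction is possible} \iff \frac{m}{r} \;\le\; 1 + \mathrm{poly}(\log n)\cdot \frac{k}{n}.
```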
|
5:30 PM | End of the conference |