## CCC 2013 Accepted papers and abstracts

### (In order of submission)

• The Correct Exponent for the Gotsman-Linial Conjecture
Author(s): Daniel Kane (Stanford University)
Abstract: We prove new bounds on the average sensitivity of polynomial threshold functions. In particular, we show that for $f$ a degree-$d$ polynomial threshold function in $n$ variables that

$\mathrm{as}(f) \leq \sqrt{n} \cdot (\log n)^{O(d \log d)} \cdot 2^{O(d^2 \log d)}.$

This bound amounts to a significant improvement over previous bounds, and in particular, for fixed $d$ gives the same asymptotic exponent of $n$ as the one predicted by the Gotsman-Linial Conjecture.
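
To make the quantity concrete, here is a brute-force computation of average sensitivity for small $n$; this is a sketch of the standard definition only, and the example PTF is our own, not one from the paper.

```python
from itertools import product

def avg_sensitivity(f, n):
    """Average sensitivity of f: {-1,1}^n -> {-1,1}: the expected number
    of coordinates whose flip changes f(x), for x chosen uniformly."""
    total = 0
    for x in product((-1, 1), repeat=n):
        for i in range(n):
            y = list(x)
            y[i] = -y[i]  # flip coordinate i
            if f(x) != f(tuple(y)):
                total += 1
    return total / 2 ** n

# A degree-2 polynomial threshold function on 3 variables (illustrative).
ptf = lambda x: 1 if x[0] * x[1] + x[1] * x[2] > 0 else -1
```

Parity has average sensitivity exactly $n$ (every flip changes it), while majority on 3 bits, a degree-1 PTF, has average sensitivity 1.5.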

• On Rigid Matrices and U-Polynomials
Author(s): Noga Alon (Tel-Aviv University), Gil Cohen (Weizmann Institute of Science)
Abstract: We introduce a class of polynomials, which we call U-polynomials, and show that the problem of explicitly constructing a rigid matrix can be reduced to the problem of explicitly constructing a small hitting set for this class. We prove that small-bias sets are hitting sets for the class of U-polynomials, though their size is larger than desired. Furthermore, we give two alternative proofs for the fact that small-bias sets induce rigid matrices.

Finally, we construct rigid matrices from unbalanced expanders, with essentially the same size as the construction via small-bias sets.

• Formulas are exponentially stronger than monotone circuits in non-commutative setting
Author(s): Pavel Hrubes (University of Washington, Seattle), Amir Yehudayoff (Technion-IIT, Israel)
Abstract: We give an example of a non-commutative monotone polynomial f which can be computed by a polynomial-size non-commutative formula, but every monotone non-commutative circuit computing f must have exponential size. In the non-commutative setting this gives, a fortiori, an exponential separation between monotone and general formulas, monotone and general branching programs, and monotone and general circuits. This answers some questions raised by Nisan.

• Just a Pebble Game
Author(s): Siu Man Chan (UC Berkeley)
Abstract: The two-player pebble game of Dymond-Tompa is identified as a barrier for existing techniques to save space or to speed up parallel algorithms for evaluation problems.

Many combinatorial lower bounds to study L versus NL and NC versus P under different restricted settings scale in the same way as the pebbling algorithm of Dymond-Tompa. These lower bounds include:

* the monotone separation of m-L from m-NL by studying the size of monotone switching networks in Potechin '10;

* a new semantic separation of NC from P and of NC^i from NC^{i+1} by studying circuit depth, based on the techniques developed for the semantic separation of NC^1 from NC^2 by the universal composition relation in Edmonds-Impagliazzo-Rudich-Sgall '01 and in Håstad-Wigderson '97; and

* the monotone separation of m-NC from m-P and of m-NC^i from m-NC^{i+1} by studying (1) the depth of monotone circuits in Raz-McKenzie '99; and (2) the size of monotone switching networks in Chan-Potechin '12.

This supports the attempt to separate NC from P by focusing on depth complexity, and suggests the study of combinatorial invariants shaped by pebbling for proving lower bounds. An application to proof complexity gives tight bounds for the size and the depth of some refinements of resolution refutations.

• Quantum XOR Games
Author(s): Oded Regev (Courant Institute, New York University), Thomas Vidick (Massachusetts Institute of Technology)
Abstract: We introduce quantum XOR games, a model of two-player one-round games that extends the model of XOR games by allowing the referee's questions to the players to be quantum states. We give examples showing that quantum XOR games exhibit a wide range of behaviors that are known not to exist for standard XOR games, such as cases in which the use of entanglement leads to an arbitrarily large advantage over the use of no entanglement. By invoking two deep extensions of Grothendieck's inequality, we present an efficient algorithm that gives a constant-factor approximation to the best performance players can obtain in a given game, both in case they have no shared entanglement and in case they share unlimited entanglement. As a byproduct of the algorithm we prove some additional interesting properties of quantum XOR games, such as the fact that sharing a maximally entangled state of arbitrary dimension gives only a small advantage over having no entanglement at all.

• Strong LTCs with inverse polylogarithmic rate and soundness
Author(s): Michael Viderman (Technion)
Abstract: An error-correcting code $C \subseteq \mathbb{F}^n$ is called a $(q,\epsilon)$-strong locally testable code (LTC) if there exists a randomized algorithm (tester) that makes at most $q$ queries to the input word. This algorithm accepts all codewords with probability 1 and rejects every non-codeword $x\notin C$ with probability at least $\epsilon \cdot \delta(x,C)$, where $\delta(x,C)$ denotes the relative Hamming distance between the word $x$ and the code $C$. The parameter $q$ is called the query complexity and the parameter $\epsilon$ is called the soundness.

A well-known open question in the area of LTCs (Goldreich and Sudan, J. ACM 2006) asks whether there exist strong LTCs with constant query complexity, constant soundness and inverse polylogarithmic rate.

In this paper, we construct strong LTCs with query complexity 3, inverse polylogarithmic soundness and inverse polylogarithmic rate.
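
For intuition about the definition (not the construction of this paper), the canonical 3-query strong tester is the BLR linearity test for the Hadamard code, whose rate is poor but whose accept/reject behavior matches the definition above; a sketch:

```python
import random

def blr_round(word, n, rng):
    """One round of the 3-query BLR test on a word of length 2^n,
    viewed as the truth table of f: {0,1}^n -> {0,1}: query f(x),
    f(y), f(x XOR y) and accept iff f(x) + f(y) = f(x XOR y) mod 2."""
    x = rng.randrange(2 ** n)
    y = rng.randrange(2 ** n)
    return (word[x] ^ word[y]) == word[x ^ y]
```

Truth tables of GF(2)-linear functions (Hadamard codewords) pass every round; non-codewords are rejected with probability growing with their distance from the code.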

• Two-message quantum interactive proofs and the quantum separability problem
Author(s): Patrick Hayden (School of Computer Science, McGill University), Kevin Milner (School of Computer Science, McGill University), Mark M. Wilde (School of Computer Science, McGill University)
Abstract: Suppose that a polynomial-time mixed-state quantum circuit, described as a sequence of local unitary interactions followed by a partial trace, generates a quantum state shared between two parties. One might then wonder, does this quantum circuit produce a state that is separable or entangled? Here, we give evidence that it is computationally hard to decide the answer to this question, even if one has access to the power of quantum computation. We begin by exhibiting a two-message quantum interactive proof system that can decide the answer to a promise version of the question. We then prove that the promise problem is hard for the class of promise problems with "quantum statistical zero knowledge" (QSZK) proof systems by demonstrating a polynomial-time Karp reduction from the QSZK-complete promise problem "quantum state distinguishability" to our quantum separability problem. By exploiting Knill's efficient encoding of a matrix description of a state into a description of a circuit to generate the state, we can show that our promise problem is NP-hard. Thus, the quantum separability problem (as phrased above) constitutes the first nontrivial promise problem decidable by a two-message quantum interactive proof system while being hard for both NP and QSZK. We consider a variant of the problem, in which a given polynomial-time mixed-state quantum circuit accepts a quantum state as input, and the question is to decide if there is an input to this circuit which makes its output separable across some bipartite cut. We prove that this problem is a complete promise problem for the class QIP of problems decidable by quantum interactive proof systems. Finally, we show that a two-message quantum interactive proof system can also decide a multipartite generalization of the quantum separability problem.

• How Low Can Approximate Degree and Quantum Query Complexity be for Total Boolean Functions?
Author(s): Andris Ambainis (University of Latvia, Riga), Ronald de Wolf (CWI and University of Amsterdam)
Abstract: It has long been known that any Boolean function that depends on $n$ input variables has both \emph{degree} and \emph{exact quantum query complexity} of $\Omega(\log n)$, and that this bound is achieved for some functions. In this paper we study the case of \emph{approximate degree} and \emph{bounded-error} quantum query complexity. We show that for these measures the correct lower bound is $\Omega(\log n / \log\log n)$, and we exhibit quantum algorithms for two functions where this bound is achieved.

• Approaching the chasm at depth four
Author(s): Ankit Gupta (Microsoft Research, India), Pritish Kamath (Microsoft Research, India), Neeraj Kayal (Microsoft Research, India), Ramprasad Saptharishi (Chennai Mathematical Institute)
Abstract: Agrawal-Vinay (FOCS 2008) and Koiran (TCC 2012) have recently shown that an $\exp(\omega(\sqrt{n}(\log n)^2))$ lower bound for depth four homogeneous circuits computing the permanent with bottom layer of multiplication gates having fanin bounded by $\sqrt{n}$ translates to a super-polynomial lower bound for general arithmetic circuits computing the permanent. Motivated by this, we examine the complexity of computing the permanent and determinant via such homogeneous depth four circuits with bounded bottom fanin. We show here that any homogeneous depth four arithmetic circuit with bottom fanin bounded by $\sqrt{n}$ computing the permanent (or the determinant) must have size $\exp(\Omega(\sqrt{n}))$.

• Shared Randomness and Quantum Communication in the Multi-Party Model
Author(s): Dmitry Gavinsky (NEC Laboratories America), Tsuyoshi Ito (NEC Laboratories America), Guoming Wang (NEC Laboratories America; Computer Science Division, UC Berkeley)
Abstract: We study shared randomness in the context of multi-party number-in-hand communication protocols in the simultaneous message passing model. We show that with three or more players, shared randomness exhibits new interesting properties that have no direct analogues in the two-party case.

First, we demonstrate a hierarchy of modes of shared randomness, with the usual shared randomness where all parties access the same random string as the strongest form in the hierarchy. We show exponential separations between its levels, and some of our bounds may be of independent interest (e.g., we show that the equality function can be solved by a protocol of constant length using the weakest form of shared randomness, which we call "XOR-shared randomness").
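
As a point of reference for the strongest mode, the classical constant-length simultaneous-message equality test with fully shared randomness sends a few GF(2) inner products; the sketch below is our illustration of that folklore protocol, not the paper's XOR-shared one.

```python
def equality_smp(x, y, shared_r):
    """Simultaneous message passing with shared randomness: each player
    sends the GF(2) inner products of its input with the shared random
    strings, and the referee accepts iff the two messages agree.
    With k truly random strings, a false accept has probability 2^-k."""
    msg_x = [sum(a & b for a, b in zip(x, r)) % 2 for r in shared_r]
    msg_y = [sum(a & b for a, b in zip(y, r)) % 2 for r in shared_r]
    return msg_x == msg_y
```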

Second, we show that quantum communication cannot replace shared randomness in the multi-party case. We demonstrate a promise function GP_k that can be computed by a classical protocol of constant length when (the strongest form of) shared randomness is available, but any quantum protocol without shared randomness must send n^Omega(1) qubits to compute it. Moreover, the quantum complexity of GP_k remains n^Omega(1) even if the "second strongest" mode of shared randomness is available. While a somewhat similar separation was already known in the two-party case, in the multi-party case our statement is qualitatively stronger:

* In the two-party case only a relational communication problem with similar properties is known.

* In the two-party case the gap between the two complexities of a problem can be at most exponential, as it is known that 2^(O(c)) log n qubits can always replace shared randomness in any c-bit protocol. Our bounds imply that with quantum communication alone, in general, it is not possible to simulate efficiently even a three-bit three-party classical protocol that uses shared randomness.

• On the Power of Non-Adaptive Learning Graphs
Author(s): Aleksandrs Belovs (University of Latvia), Ansis Rosmanis (David R. Cheriton School of Computer Science and Institute for Quantum Computing, University of Waterloo.)
Abstract: We introduce a notion of the quantum query complexity of a certificate structure. This is a formalisation of a well-known observation that many quantum query algorithms only require the knowledge of the disposition of possible certificates in the input string, not the precise values therein.

Next, we derive a dual formulation of the complexity of a non-adaptive learning graph, and use it to show that non-adaptive learning graphs are tight for all certificate structures. By this, we mean that there exists a function possessing the certificate structure and such that a learning graph gives an optimal quantum query algorithm for it.

For a special case of certificate structures generated by certificates of bounded size, we construct a relatively general class of functions having this property. The construction is based on orthogonal arrays, and generalizes the quantum query lower bound for the $k$-sum problem derived recently in arXiv:1206.6528.

Finally, we use these results to show that the learning graph for the triangle problem from arXiv:1210.1014 is almost optimal in these settings. This also gives a quantum query lower bound for the triangle-sum problem.

• Collapse of the Hierarchy of Constant-Depth Exact Quantum Circuits
Author(s): Yasuhiro Takahashi (NTT Communication Science Laboratories), Seiichiro Tani (NTT Communication Science Laboratories)
Abstract: We study the quantum complexity class QNC^0_f of quantum operations implementable exactly by constant-depth polynomial-size quantum circuits with unbounded fan-out gates. Our main result is that the quantum OR operation is in QNC^0_f, which is an affirmative answer to the question of Hoyer and Spalek. In sharp contrast to the strict hierarchy of the classical complexity classes NC^0 \subsetneq AC^0 \subsetneq TC^0, our result, combined with Hoyer and Spalek's, implies the collapse of the hierarchy of the corresponding quantum classes: QNC^0_f = QAC^0_f = QTC^0_f. Then, we show that there exists a constant-depth subquadratic-size quantum circuit for the quantum threshold operation. This allows us to obtain a better bound on the size difference between the QNC^0_f and QTC^0_f circuits for implementing the same operation. Lastly, we show that, if the quantum Fourier transform modulo a prime is in QNC^0_f, then there exists a polynomial-time exact classical algorithm for a discrete logarithm problem using a QNC^0_f oracle. This implies that, under a plausible assumption, there exists a classically hard problem that is solvable exactly by a QNC^0_f circuit with gates for the quantum Fourier transform.

• A Derandomized Switching Lemma and an improved Derandomization of AC0
Author(s): Luca Trevisan (Stanford University), TongKe Xue (Stanford University)
Abstract: We describe a new pseudorandom generator for AC0. Our generator $\epsilon$-fools circuits of depth $d$ and size $M$ and uses a seed of length $\tilde O( \log^{d+4} M/\epsilon)$. The previous best construction for $d \geq 3$ was due to Nisan, and had seed length $O(\log^{2d+6} M/\epsilon)$. A seed length of $O(\log^{2d + \Omega(1)} M)$ is best possible given Nisan-type generators and the current state of circuit lower bounds; a seed length of $\Omega(\log^d M/\epsilon)$ is a barrier for any pseudorandom generator construction given the current state of circuit lower bounds. For $d=2$, a pseudorandom generator of seed length $\tilde O(\log^2 M/\epsilon)$ was known.

Our generator is based on a "pseudorandom restriction" generator which outputs restrictions that satisfy the conclusions of the Håstad Switching Lemma and that uses a seed of polylogarithmic length.

• Random Arithmetic Formulas can be Reconstructed Efficiently
Author(s): Ankit Gupta (Microsoft Research India), Neeraj Kayal (Microsoft Research India), Youming Qiao (Centre for Quantum Technologies, National University of Singapore)
Abstract: Informally stated, we present here a randomized algorithm that given blackbox access to the polynomial $f$ computed by an unknown/hidden arithmetic formula $\phi$ reconstructs, on the average, an equivalent or smaller formula $\psi$ in time polynomial in the size of its output $\psi$.

Specifically, we consider arithmetic formulas wherein the underlying tree is a complete binary tree, the leaf nodes are labelled by affine forms (i.e. degree one polynomials) over the input variables and where the internal nodes consist of alternating layers of addition and multiplication gates. We call these alternating normal form (ANF) formulas. If a polynomial $f$ can be computed by an arithmetic formula of size $s$, it can also be computed by an ANF formula, possibly of slightly larger size $s^{O(1)}$. Our algorithm gets as input blackbox access to the output polynomial $f$ (i.e. for any point $x$ in the domain, it can query the blackbox and obtain $f(x)$ in one step) of a random ANF formula of size $s$ (wherein the coefficients of the affine forms in the leaf nodes are chosen independently and uniformly at random from a large enough subset of the underlying field). With high probability (over the choice of coefficients in the leaf nodes), the algorithm efficiently (i.e. in time $s^{O(1)}$) computes an ANF formula of size $s$ computing $f$. This then is the strongest model of arithmetic computation for which a reconstruction algorithm is presently known, albeit efficient in a distributional sense rather than in the worst case.
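
A minimal sketch of the input model, with our own naming: a random ANF formula as a blackbox that can only be evaluated point by point (here over the integers rather than a general field).

```python
import random

def random_anf(depth, n_vars, rng):
    """Build a random ANF formula as a blackbox function: a complete
    binary tree of the given depth with alternating addition and
    multiplication layers, and random affine forms
    c0 + c1*x1 + ... + cn*xn at the leaves."""
    if depth == 0:
        c = [rng.randrange(1, 100) for _ in range(n_vars + 1)]
        return lambda x: c[0] + sum(ci * xi for ci, xi in zip(c[1:], x))
    left = random_anf(depth - 1, n_vars, rng)
    right = random_anf(depth - 1, n_vars, rng)
    if depth % 2 == 1:  # alternate layers: odd depths add ...
        return lambda x: left(x) + right(x)
    return lambda x: left(x) * right(x)  # ... even depths multiply
```

The reconstruction algorithm sees only such evaluations $f(x)$, one query per chosen point $x$.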

• Optimal Inapproximability for Scheduling Problems via Structural Hardness for Hypergraph Vertex Cover
Author(s): Sushant Sachdeva (Princeton University), Rishi Saket (IBM T. J. Watson Research Center)
Abstract: This work studies the inapproximability of two well known scheduling problems: Concurrent Open Shop and the Assembly Line problem. For both these problems, Bansal and Khot [BK10] obtained tight (2-\eps)-factor inapproximability, assuming the Unique Games Conjecture (UGC). In this paper, we prove optimal (2-\eps)-factor NP-hardness of approximation for both these problems unconditionally, i.e., without assuming UGC.

Our results for the scheduling problems follow from a structural hardness for Minimum Vertex Cover on hypergraphs -- an unconditional but weaker analog of a similar result of Bansal and Khot [BK10] which, however, is based on UGC. Formally, we prove that for any \eps > 0, and positive integers q, T > 1, the following is NP-hard: Given a qT-uniform hypergraph G(V, E), decide between the following cases:

Yes Case: There is a partition of V into V_1,...,V_q, X with |V_1| = ... = |V_q| >= (1-\eps)/q * |V| such that the vertices of any hyperedge e can be partitioned into T blocks of q vertices each so that at least (T-1) of the blocks each contain at most one vertex from V_j, for any j \in [q]. Since T > 1, this implies that for any j \in [q], (V_j \cup X) is a vertex cover of size at most (1/q+\eps) |V|.

No Case: Every vertex cover in G has size at least (1-\eps)|V|.
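
Both cases above are statements about hypergraph vertex covers; purely as an illustration of that object (not of the reduction), a brute-force check:

```python
from itertools import combinations

def is_vertex_cover(cover, hyperedges):
    """A vertex set is a cover iff every hyperedge contains
    at least one vertex of the set."""
    cover = set(cover)
    return all(any(v in cover for v in e) for e in hyperedges)

def min_vertex_cover_size(vertices, hyperedges):
    """Exhaustive search for the minimum vertex cover size."""
    for k in range(len(vertices) + 1):
        for cand in combinations(vertices, k):
            if is_vertex_cover(cand, hyperedges):
                return k
```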

The above hardness result suffices to prove optimal inapproximability for the scheduling problems mentioned above, via black-box hardness reductions. Using this result, we also prove a super-constant hardness factor for Two Stage Stochastic Vehicle Routing, for which a similar inapproximability was shown by Görtz, Nagarajan, and Saket [GNS12] assuming the UGC and based on the result of [BK10].

• Lower Bounds for DNF-Refutations of a Relativized Weak Pigeonhole Principle
Author(s): Albert Atserias (UPC, Barcelona), Moritz Müller (Kurt Gödel Research Center, Vienna), Sergi Oliva (UPC, Barcelona)
Abstract: The relativized weak pigeonhole principle states that if at least $2n$ out of $n^2$ pigeons fly into $n$ holes, then some hole must be doubly occupied. We prove that every DNF-refutation of the CNF encoding of this principle requires size $2^{(\log n)^{3/2-\epsilon}}$ for every $\epsilon > 0$ and every sufficiently large~$n$. For its proof we need to discuss the existence of unbalanced low-degree bipartite expanders satisfying a certain robustness condition.

• Superlinear lower bounds for multipass graph processing
Author(s): Venkatesan Guruswami (CMU), Krzysztof Onak (IBM Research)
Abstract: We prove n^(1+Omega(1/p))/p^O(1) lower bounds for the space complexity of p-pass streaming algorithms solving the following problems on n-vertex graphs:

* testing if an undirected graph has a perfect matching (this implies lower bounds for computing a maximum matching or even just the maximum matching size),

* testing if two specific vertices are at distance at most 2(p+1) in an undirected graph,

* testing if there is a directed path from s to t for two specific vertices s and t in a directed graph.

Before our result, it was known that these problems require Omega(n^2) space in one pass, but no n^(1+Omega(1)) lower bound was known for any p >= 2.

These streaming results follow from a communication complexity lower bound for a communication game in which the players hold two graphs on the same set of vertices. The task of the players is to find out whether the sets of vertices reachable from a specific vertex in exactly p+1 steps intersect. The game requires a significant amount of communication only if the players are forced to speak in a specific difficult order. This is reminiscent of lower bounds for communication problems such as indexing and pointer chasing. Among other things, our line of attack requires proving an information cost lower bound for a decision version of the classic pointer chasing problem and a direct sum type theorem for the disjunction of several instances of this problem.

• On the Parameterized and Approximation Hardness of Metric Dimension
Author(s): Sepp Hartung (Institut für Softwaretechnik und Theoretische Informatik, TU Berlin, Berlin, Germany), André Nichterlein (Institut für Softwaretechnik und Theoretische Informatik, TU Berlin, Berlin, Germany)
Abstract: The NP-hard Metric Dimension problem is to decide for a given graph G and a positive integer k whether there is a vertex subset of size at most k that separates all vertex pairs in G. Herein, a vertex v separates a pair {u,w} if the distance (length of a shortest path) between v and u is different from the distance between v and w. We give a polynomial-time computable reduction from the Bipartite Dominating Set problem to Metric Dimension on maximum degree three graphs such that there is a one-to-one correspondence between the solution sets of both problems. There are two main consequences of this: First, it proves that Metric Dimension on maximum degree three graphs is W[2]-hard with respect to the parameter k. This answers an open question concerning the parameterized complexity of Metric Dimension posed by Lokshtanov [Dagstuhl seminar, 2009] and also by D\'{\i}az et al. [ESA'12]. Additionally, it implies that the trivial n^{O(k)}-time algorithm cannot be improved to an n^{o(k)}-time algorithm, unless the assumption FPT \neq W[1] fails. Second, as Bipartite Dominating Set is inapproximable within o(log n), it follows that Metric Dimension on maximum degree three graphs is also inapproximable by a factor of o(log n), unless NP=P. This strengthens the result of Hauptmann et al. [JDA'12] who proved APX-hardness on bounded-degree graphs.
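
The separation condition in the problem definition can be checked directly on small graphs; a brute-force sketch (ours; exponential time, for illustration only):

```python
from itertools import combinations

def bfs_distances(v, adj):
    """Distances from v in an unweighted graph given as adjacency lists."""
    dist, frontier, d = {v: 0}, [v], 0
    while frontier:
        d += 1
        nxt = []
        for u in frontier:
            for w in adj[u]:
                if w not in dist:
                    dist[w] = d
                    nxt.append(w)
        frontier = nxt
    return dist

def metric_dimension(adj):
    """Smallest k such that some k vertices separate all vertex pairs:
    a pair {u, w} is separated if some chosen vertex has different
    distances to u and to w."""
    vs = list(adj)
    dist = {v: bfs_distances(v, adj) for v in vs}
    for k in range(1, len(vs) + 1):
        for cand in combinations(vs, k):
            if all(any(dist[c][u] != dist[c][w] for c in cand)
                   for u, w in combinations(vs, 2)):
                return k
```

An endpoint resolves a path, so paths have metric dimension 1, while a cycle needs two landmarks.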

• Covering CSPs
Author(s): Irit Dinur (Weizmann Institute), Gillat Kol (Weizmann Institute)
Abstract: We study the covering complexity of constraint satisfaction problems (CSPs). The covering number of a CSP instance C, denoted v(C), is the smallest number of assignments to the variables, such that each constraint is satisfied by at least one of the assignments. This covering notion describes situations in which we must satisfy all the constraints, and are willing to use more than one assignment to do so. At the same time, we want to minimize the number of assignments.
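
A brute-force rendering of v(C) on tiny instances, with constraints encoded (our choice) as predicates over full 0/1 assignments:

```python
from itertools import combinations, product

def covering_number(n_vars, constraints):
    """Smallest number of assignments such that every constraint is
    satisfied by at least one of them."""
    assignments = list(product((0, 1), repeat=n_vars))
    for k in range(1, len(assignments) + 1):
        for cover in combinations(assignments, k):
            if all(any(c(a) for a in cover) for c in constraints):
                return k
```

A satisfiable instance has v(C) = 1, and for an odd predicate such as 3LIN an assignment plus its negation always suffices, so v(C) <= 2.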

We study the covering problem for different constraint predicates. We first observe that if the predicate contains an odd predicate, then any instance is covered by an arbitrary assignment together with its negation. In particular, 3CNF and 3LIN, which are hard in the max-CSP sense, are easy to cover. However, the covering problem is hard for predicates that do not contain an odd predicate:

1. For the 4LIN predicate, it is NP-hard to decide if a given instance C has v(C)<=2 or v(C)>=omega(1).

2. (a) We propose a framework of covering dictatorship tests. We design and analyze such a dictatorship test for every predicate that supports a pairwise independent distribution.

(b) We introduce a covering unique games conjecture, and use it to convert the covering dictatorship tests into conditional hardness results.

3. Finally, we study a hypothesis about the hardness of covering random instances that is similar to Feige's R3SAT hypothesis. We show the following somewhat surprising implication: If our hypothesis holds for dense enough instances, then it is hard to color an O(1)-colorable hypergraph with a polynomial number of colors.

• The distinguishability of product distributions by read-once branching programs
Author(s): John Steinberger (Tsinghua University IIIS)
Abstract: We improve the main result of Brody and Verbin from FOCS 2010 on the power of constant-width branching programs to distinguish product distributions. Specifically, we show that a 0-1 coin must have bias at least $\Omega(1/\log(n)^{w-2})$ to be distinguishable from a fair coin by a width-w, length-n read-once branching program (for each constant w), which is a tight bound. The proof uses two new techniques, "interwoven hybrids" and "program randomization". Using similar techniques, we also give tight upper bounds on the maximum influence of monotone functions computable by width-w read-once branching programs.

• An $O({n^{{1\over 2}+\epsilon}})$ Space Algorithm for Directed Planar Reachability with Polynomial Running Time
Author(s): T. Imai (Tokyo Institute of Technology), K. Nakagawa (Tokyo Institute of Technology), A. Pavan (Iowa State University), N. V. Vinodchandran (University of Nebraska-Lincoln), O. Watanabe (Tokyo Institute of Technology)
Abstract: We show that the reachability problem over directed planar graphs can be solved simultaneously in space $O(n^{{1\over 2} + \epsilon})$ and polynomial time, for any constant $\epsilon> 0$, where $n$ is the number of vertices of the input graph. In contrast, for the reachability problem on general directed graphs the best space bound known with polynomial running time is $O(n/2^{\sqrt{\log n}})$.

• Short lists with short programs in short time
Author(s): Bruno Bauwens (Université de Lorraine), Anton Makhlin (Moscow State University), Nikolay Vereshchagin (Moscow State University), Marius Zimand (Towson University)
Abstract: Given a machine U, a c-short program for x is a string p such that U(p)=x and the length of p is bounded by c + (the length of a shortest program for x). We show that for any universal machine, it is possible to compute in polynomial time on input x a list of polynomial size guaranteed to contain an O(log |x|)-short program for x. We also show that there exist computable functions that map every x to a list of size O(|x|^2) containing an O(1)-short program for x, and this is essentially optimal because we prove that such a list must have size \Omega(|x|^2). Finally we show that for some machines, computable lists containing a shortest program must have length \Omega(2^|x|).

• Constructing hard functions from learning algorithms
Author(s): Adam Klivans (University of Texas at Austin), Pravesh Kothari (University of Texas at Austin), Igor C. Oliveira (Columbia University)
Abstract: Fortnow and Klivans proved the following relationship between efficient learning algorithms and circuit lower bounds: if a class of Boolean circuits C contained in P/poly is exactly learnable with membership and equivalence queries in polynomial-time, then EXP^NP is not contained in C (the class EXP^NP was subsequently improved to EXP by Hitchcock and Harkins). In this paper, we improve on these results and show

* If C is exactly learnable with membership and equivalence queries in polynomial-time, then DTIME(n^{\omega(1)}) is not contained in C. We obtain even stronger consequences if C is learnable in the mistake-bounded model, in which case we prove an average-case hardness result against C.

* If C is learnable in polynomial time in the PAC model then PSPACE is not contained in C, unless PSPACE is contained in BPP. Removing this extra assumption from the statement of the theorem would provide an unconditional separation of PSPACE and BPP.

* If C is efficiently learnable in the Correlational Statistical Query (CSQ) model, we show that there exists an explicit function f that is average-case hard for circuits in C. This result provides stronger average-case hardness guarantees than those obtained by SQ-dimension arguments (Blum et al. 1993). We also obtain a non-constructive extension of this result to the stronger Statistical Query (SQ) model.

Similar results hold in the case where the learning algorithm runs in subexponential time.

Our proofs regarding exact and mistake-bounded learning are simple and self-contained, yield explicit hard functions, and show how to use mistake-bounded learners to "diagonalize" over families of polynomial-size circuits. Our consequences for PAC learning lead to new proofs of Karp-Lipton-style collapse results, and the lower bounds from SQ learning make use of recent work relating combinatorial discrepancy to the existence of hard-on-average functions.

• Approximating Boolean functions with depth-2 circuits
Author(s): Eric Blais (MIT), Li-Yang Tan (Columbia University)
Abstract: We study the complexity of approximating Boolean functions with DNFs and other depth-2 circuits, exploring two main directions: universal bounds on the approximability of all Boolean functions, and the approximability of the parity function. In the first direction, our main positive results are the first non-trivial universal upper bounds on approximability by DNFs:

* Every Boolean function can be approximated by a DNF of size O(2^n/\log n).

* Every Boolean function can be approximated by a DNF of width cn, where c < 1.

Our techniques extend broadly to give strong universal upper bounds on approximability by various depth-2 circuits that generalize DNFs, including the intersection of halfspaces, low-degree PTFs, and unate functions. We show that the parameters of our constructions come close to matching the information-theoretic inapproximability of a random function.

In the second direction our main positive result is the construction of an explicit DNF that approximates the parity function:

* PAR_n can be $\eps$-approximated by a DNF of size $2^{(1-2\eps)n}$ and width $(1-2\eps)n$.

Using Fourier analytic tools we show that our construction is essentially optimal not just within the class of DNFs, but also within the far more expressive classes of the intersection of halfspaces and intersection of unate functions.

• Towards a Reverse Newman's Theorem in Interactive Information Complexity
Author(s): Joshua Brody (Aarhus University), Harry Buhrman (CWI Amsterdam and University of Amsterdam), Michal Koucky (Mathematical Institute of Czech Academy of Sciences, Prague), Bruno Loff (CWI Amsterdam), Florian Speelman (CWI Amsterdam), Nikolay Vereshchagin (Moscow State University)
Abstract: Newman's theorem states that we can take any public-coin communication protocol and convert it into one that uses only private randomness with only a little increase in communication complexity. We consider a reversed scenario in the context of information complexity: can we take a protocol that uses private randomness and convert it into one that only uses public randomness while preserving the information revealed to each player? We prove that the answer is yes, at least for protocols that use a bounded number of rounds. As an application, we prove new direct sum theorems through the compression of interactive communication in the bounded-round setting. Furthermore, we show that if a Reverse Newman's Theorem can be proven in full generality, then full compression of interactive communication and fully-general direct-sum theorems will result.

• LS+ Lower Bounds from Pairwise Independence
Author(s): Madhur Tulsiani (TTI Chicago), Pratik Worah (University of Chicago)
Abstract: We consider the complexity of LS+ refutations of unsatisfiable instances of Constraint Satisfaction Problems (CSPs) when the underlying predicate supports a pairwise independent distribution on its satisfying assignments. This is the most general condition on the predicates under which the corresponding MAX-CSP problem is known to be approximation resistant.

We show that for random instances of such CSPs on n variables, even after $\Omega(n)$ rounds of the LS+ hierarchy, the integrality gap remains equal to the approximation ratio achieved by a random assignment. In particular, this also shows that LS+ refutations for such instances require rank $\Omega(n)$. We also show the stronger result that refutations for such instances in the static LS+ proof system require size $\exp(\Omega(n))$.

• On Medium-Uniformity and Circuit Lower Bounds
Author(s): Rahul Santhanam (University of Edinburgh), Ryan Williams (Stanford University)
Abstract: We explore relationships between circuit complexity, the complexity of generating circuits, and algorithms for analyzing circuits. Our results can be roughly divided into two parts:

(1) {Lower Bounds Against Medium-Uniform Circuits.} Informally, a circuit class is ``medium-uniform'' if it can be generated by an algorithmic process that is somewhat complex (stronger than LOGTIME) but not infeasible. We prove several new unconditional lower bounds against medium-uniform circuit classes, including:

(a) For every $k$, $\P$ is not contained in $\P\uniform \SIZE(n^k)$. That is, for all $k$ there is a language $L_k \in \P$ that does not have $O(n^k)$-size circuits constructible in polynomial time. This improves Kannan's lower bound from 1982 that $\NP$ is not in $\P\uniform \SIZE(n^k)$ for any fixed $k$.

(b) For every $k$, $\NP$ is not in $\P^{\NP}_{||}\uniform \SIZE(n^k)$. This also improves Kannan's theorem, but in a different way: the uniformity condition on the circuits is stronger than that on the language itself.

(c) For every $k$, $\L$ does not have $\L\uniform$ branching programs of size $n^{k}$.

These lower bounds use a new kind of indirect diagonalization argument in which the opposite assumption is applied multiple times: to a given language $L$, and to another language defining circuits for $L$. This allows us to translate medium-uniform classes into low-uniform classes with small advice, leading to a contradiction.

(2) {Eliminating Non-Uniformity and (Non-Uniform) Circuit Lower Bounds.} We complement these results by showing how to convert any simulation of LOGTIME-uniform $\NC^1$ in $\ACC^0/\poly$ or $\TC^0/\poly$ into a medium-uniform simulation using small advice. This lemma can be used to simplify the proof that faster SAT algorithms imply $\NEXP$ circuit lower bounds, and leads to the following new connection between improving exhaustive search and (non-uniform) circuit lower bounds:

Consider the promise problem: Given a $\TC^0$ circuit $C$ of $n^{O(1)}$ size which is promised to either be unsatisfiable or have at least $2^{n-1}$ satisfying assignments, determine which is the case. Clearly, this problem can be solved efficiently using randomness. If this problem can be solved deterministically in $2^{n-\omega(\log n)}$ time, then $\NEXP \not\subset \TC^0/\poly$.
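The randomized algorithm alluded to is just random sampling: under the promise, a handful of random assignments distinguishes the two cases with overwhelming probability. A minimal sketch, with a plain Python predicate standing in for the $\TC^0$ circuit (all names here are ours):

```python
import random

def randomized_gap_sat(circuit, n, trials=64, seed=0):
    """Promise problem: `circuit` is either unsatisfiable or has at least
    2^(n-1) satisfying assignments. Any hit certifies satisfiability; under
    the promise, missing on all trials in the satisfiable case happens with
    probability at most 2^-trials."""
    rng = random.Random(seed)
    for _ in range(trials):
        x = [rng.randint(0, 1) for _ in range(n)]
        if circuit(x):
            return "satisfiable"
    return "unsatisfiable"

n = 10
unsat = lambda x: False       # zero satisfying assignments
dense = lambda x: x[0] == 1   # exactly 2^(n-1) satisfying assignments
print(randomized_gap_sat(unsat, n))   # unsatisfiable
print(randomized_gap_sat(dense, n))   # satisfiable
```

The paper's point is that beating the trivial $2^n$ *deterministic* running time by a superpolynomial factor on this easy-looking problem already forces $\NEXP \not\subset \TC^0/\poly$.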

The lemma can also be used to derandomize randomized $\TC^0$ simulations of $\NC^1$ on almost all inputs: Suppose $\NC^1 \subseteq \BP\TC^0$. Then for every $\eps > 0$ and every language $L$ in $\NC^1$, there is a (uniform) $\TC^0$ circuit family of polynomial size recognizing a language $L'$ such that $L$ and $L'$ differ on at most $2^{n^{\eps}}$ inputs of length $n$, for all $n$.

• Composition limits and separating examples for some Boolean function complexity measures
Author(s): Justin Gilmer (Department of Mathematics, Rutgers University), Michael Saks (Department of Mathematics, Rutgers University), Srikanth Srinivasan (Department of Mathematics, IIT Bombay)
Abstract: Block sensitivity, certificate complexity and fractional certificate complexity are three fundamental combinatorial measures of complexity for boolean functions. For a boolean function f, let bs(f), C(f), and C*(f) denote these measures for f. It has long been known that bs(f) <= C*(f) <= C(f) = O(bs(f)^2), but it was not known how separated these measures can be. We provide an infinite family of examples for which C(f) grows quadratically in C*(f) (and also bs(f)) giving optimal separations between these measures. Previously the biggest separation known was C(f)=C*(f)^{log_{4.5}5}. We also give a family of examples for which bs(f) = ~O(C*(f)^{2/3}).

These examples are obtained by composing functions in various ways. Here the composition f o g of boolean function f with boolean function g is the function obtained by substituting for each variable of f a copy of g on disjoint sets of variables. To construct and analyze these examples we systematically investigate the behavior under function composition of these measures and also that of the sensitivity measure s(f). In particular, we study the composition limit of various measures, where the composition limit of measure m at function f, m^lim(f), is the limit as k grows of m(f^(k))^{1/k}, where f^(k) is the iterated composition of f with itself k times. The measures s(f), C(f) and C*(f) are nicely behaved under composition: for example, it was known that they are submultiplicative (where measure m is submultiplicative if m(f o g) <= m(f)m(g)) with equality holding under some fairly general conditions. We formally define a general class of complexity measures called well-behaved measures which includes these three measures and give a characterization of the composition limit of such a measure on a function f as a minimax of eigenvalues of certain sets of two-by-two matrices associated with the measure and the function.

We also investigate the measure bs(f) under composition. It is not well-behaved; in particular it is not submultiplicative, and we correct some errors that appeared in previous papers. We prove that the composition limit of bs(f) is equal to the composition limit of C*(f). (These results for block sensitivity were independently discovered by Tal [Avishay Tal: Properties and Applications of Boolean Function Composition. ECCC 19: 163 (2012)].)
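Both bs(f) and C(f) are easy to compute by brute force on tiny functions, which makes the compositional claims concrete. The sketch below (our own illustration, not an example from the paper) computes both measures for OR on 2 bits and for the composition OR2 o AND2 on 4 bits, and lets one check submultiplicativity of C on this instance:

```python
from itertools import product, combinations

def blocks(n):
    # all nonempty subsets of variable indices, as frozensets
    return [frozenset(c) for k in range(1, n + 1)
            for c in combinations(range(n), k)]

def flip(x, B):
    return tuple(1 - b if i in B else b for i, b in enumerate(x))

def block_sensitivity(f, n):
    bs = 0
    for x in product([0, 1], repeat=n):
        sens = [B for B in blocks(n) if f(flip(x, B)) != f(x)]
        # brute-force maximum packing of pairwise-disjoint sensitive blocks
        best = 0
        for k in range(len(sens), 0, -1):
            if any(all(a.isdisjoint(b) for a, b in combinations(c, 2))
                   for c in combinations(sens, k)):
                best = k
                break
        bs = max(bs, best)
    return bs

def certificate_complexity(f, n):
    C = 0
    for x in product([0, 1], repeat=n):
        for k in range(n + 1):
            # does some size-k set of fixed coordinates force the value f(x)?
            if any(all(f(y) == f(x)
                       for y in product([0, 1], repeat=n)
                       if all(y[i] == x[i] for i in S))
                   for S in combinations(range(n), k)):
                C = max(C, k)
                break
    return C

OR2 = lambda x: int(x[0] or x[1])
AND2 = lambda x: int(x[0] and x[1])
comp = lambda x: OR2((AND2(x[:2]), AND2(x[2:])))  # OR2 o AND2 on 4 bits

print(block_sensitivity(OR2, 2), certificate_complexity(OR2, 2))    # 2 2
print(block_sensitivity(comp, 4), certificate_complexity(comp, 4))  # 2 2
```

Here C(OR2 o AND2) = 2 <= C(OR2) * C(AND2) = 4, consistent with submultiplicativity; the paper's interest is precisely in functions where such inequalities are far from tight, or (for bs) fail outright.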

• On the Lattice Smoothing Parameter Problem
Author(s): Kai-Min Chung (Cornell University), Daniel Dadush (New York University), Feng-Hao Liu (Brown University), Chris Peikert (Georgia Institute of Technology)
Abstract: The \emph{smoothing parameter} $\eta_{\epsilon}(\lat)$ of a Euclidean lattice $\lat$, introduced by Micciancio and Regev (FOCS'04; SICOMP'07), is (informally) the smallest amount of Gaussian noise that ``smooths out'' the discrete structure of $\lat$ (up to error $\epsilon$). It plays a central role in the best known worst-case/average-case reductions for lattice problems, a wealth of lattice-based cryptographic constructions, and (implicitly) the tightest known transference theorems for fundamental lattice quantities.
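Concretely, Micciancio and Regev define the smoothing parameter via the dual lattice $\lat^*$: it is the least Gaussian width at which the mass of the nonzero dual vectors drops below $\epsilon$:

```latex
\eta_{\epsilon}(\lat) \;=\; \min\Bigl\{\, s > 0 \;:\;
  \rho_{1/s}(\lat^* \setminus \{0\})
  = \sum_{\mathbf{w} \in \lat^* \setminus \{0\}} e^{-\pi s^2 \|\mathbf{w}\|^2}
  \,\le\, \epsilon \,\Bigr\}.
```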

In this work we initiate a study of the complexity of approximating the smoothing parameter to within a factor $\gamma$, denoted $\gamma$-$\gapsp$. We show that (for $\epsilon = 1/\poly(n)$):

* $(2+o(1))$-$\gapsp \in \AM$, via a Gaussian analogue of the classic Goldreich-Goldwasser protocol (STOC'98);

* $(1+o(1))$-$\gapsp \in \coAM$, via a careful application of the Goldwasser-Sipser (STOC'86) set size lower bound protocol to thin shells in $\R^{n}$;

* $(2+o(1))$-$\gapsp \in \SZK \subseteq \AM \cap \coAM$ (where $\SZK$ is the class of problems having statistical zero-knowledge proofs), by constructing a suitable instance-dependent commitment scheme (for a slightly worse $o(1)$-term);

* $(1+o(1))$-$\gapsp$ can be solved in deterministic $2^{O(n)} \polylog(1/\epsilon)$ time and $2^{O(n)}$ space.

As an application, we demonstrate a tighter worst-case to average-case reduction for basing cryptography on the worst-case hardness of the $\gapsp$ problem, with a $\tilde{O}(\sqrt{n})$ smaller approximation factor than for the $\gapsvp$ problem. Central to our results are two novel, and nearly tight, characterizations of the magnitude of discrete Gaussian sums over $\lat$: the first relates these directly to the Gaussian measure of the Voronoi cell of $\lat$, and the second to the fraction of overlap between Euclidean balls centered around points of $\lat$.