**Schedule** Sunday and Monday have four sessions:
9-10:30, 11-12:30, 14:30-16, and 16:30-17:15 (Sunday) or 16:30-17:30 (Monday).
Tuesday and Wednesday have two sessions:
9-10:30, and 11-13. All slots are 30 minutes (including 5 minutes for
questions and changing speakers) except those for the best paper
award winner and the invited talk. The invited talk is on Tuesday at 16:30.
There is a business meeting Sunday at 17:45,
and a rump session Monday at 17:45.

**Workshops** Satellite workshops are held the day before the
conference (May 28) in Tokyo, and for two days after the
conference (June 2-3) in Kyoto.
See the local
arrangements page for details.

**Online Proceedings**
Click the title of a paper in the program to view the full paper, and
the word "abstract"
to show or hide the abstract.
You may also download
the complete Proceedings of CCC'16,
as published with open access by the Leibniz International Proceedings in Informatics (LIPIcs).

**Slides**
Click the word "slides" in the program to see the presentations
that the authors agreed to post on this website.

18:00-20:00 | Reception |

9:00 | Average-case lower bounds and satisfiability algorithms for small threshold circuits (abstract, slides) Ruiwen Chen (University of Edinburgh), Rahul Santhanam (University of Edinburgh and University of Oxford), Srikanth Srinivasan (IIT Bombay) We show average-case lower bounds for explicit Boolean functions against bounded-depth threshold circuits with a superlinear number of wires. We show that for each integer $d > 1$, there is $\varepsilon_d > 0$ such that Parity has correlation at most $1/n^{\Omega(1)}$ with depth-$d$ threshold circuits which have at most $n^{1+\varepsilon_d}$ wires, and the Generalized Andreev Function has correlation at most $1/2^{n^{\Omega(1)}}$ with depth-$d$ threshold circuits which have at most $n^{1+\varepsilon_d}$ wires. Previously, only worst-case lower bounds in this setting were known. We use our ideas to make progress on several related questions. We give satisfiability algorithms beating brute-force search for depth-$d$ threshold circuits with a superlinear number of wires. These are the first such algorithms for depth greater than 2. We also show that Parity cannot be computed by polynomial-size $AC^0$ circuits with $n^{o(1)}$ *general* threshold gates. Previously, no lower bound for Parity in this setting could handle more than $\log(n)$ gates. This result also implies subexponential-time learning algorithms for $AC^0$ with $n^{o(1)}$ threshold gates under the uniform distribution. In addition, we give almost optimal bounds for the number of gates in a depth-$d$ threshold circuit computing Parity on average, and show average-case lower bounds and non-trivial satisfiability algorithms for threshold formulas of *any* depth. Our techniques include adaptive random restrictions, anti-concentration and the structural theory of linear threshold functions, and bounded-read Chernoff bounds. |

9:30 | Strong ETH breaks with Merlin and Arthur: short non-interactive proofs of batch evaluation (abstract, slides) Ryan Williams (Stanford University) We present an efficient proof system for Multipoint Arithmetic Circuit Evaluation: for any arithmetic circuit $C(x_1,\ldots,x_n)$ of size $s$ and degree $d$ over a field $F$, and any inputs $a_1,\ldots,a_K \in F^n$, the Prover sends the Verifier the values $C(a_1), \ldots, C(a_K) \in F$ and a proof of $\tilde{O}(K \cdot d)$ length, and the Verifier tosses $\mathrm{poly}(\log(dK|F|/\varepsilon))$ coins and can check the proof in about $\tilde{O}(K \cdot(n + d) + s)$ time, with probability of error less than $\varepsilon$. For small degree $d$, this "Merlin-Arthur" proof system (a.k.a. MA-proof system) runs in nearly-linear time, and has many applications. For example, we obtain MA-proof systems that run in $c^{n}$ time (for various $c < 2$) for the Permanent, $\#$Circuit-SAT for all sublinear-depth circuits, counting Hamiltonian cycles, and infeasibility of $0$-$1$ linear programs. In general, the value of any polynomial in Valiant's class $\mathsf{VP}$ can be certified faster than "exhaustive summation" over all possible assignments. These results strongly refute a Merlin-Arthur Strong ETH and an Arthur-Merlin Strong ETH posed by Russell Impagliazzo and others. We also give a three-round (AMA) proof system for quantified Boolean formulas running in $2^{2n/3+o(n)}$ time, nearly-linear-time MA-proof systems for counting orthogonal vectors in a collection and finding Closest Pairs in the Hamming metric, and an MA-proof system running in $n^{k/2+O(1)}$ time for counting $k$-cliques in graphs. We point to some potential future directions for refuting the Nondeterministic Strong ETH. |

10:00 | Toward the KRW composition conjecture: cubic formula lower bounds via communication complexity (abstract, slides) Irit Dinur (Weizmann Institute of Science), Or Meir (Haifa University) One of the major challenges of research in circuit complexity is proving super-polynomial lower bounds for de Morgan formulas. Karchmer, Raz, and Wigderson suggested approaching this problem by proving a conjecture that formula complexity behaves "as expected" with respect to the composition of functions $f\circ g^{m}$. They showed that this conjecture, if proved, would imply super-polynomial formula lower bounds. The first step toward proving the KRW conjecture was made by Edmonds, Rudich, Impagliazzo and Sgall, who proved an analogue of the conjecture for the composition of "universal relations". In this work, we extend their argument further to $f\circ g^{m}$ for any function $f$ but where $g=\oplus$; that is, we prove the KRW conjecture for the special case where $g$ is the parity function and $f$ is an arbitrary function. While this case was already proved implicitly in Håstad's work on random restrictions, our proof uses an entirely different approach based on communication complexity, and seems more likely to generalize to other cases of the conjecture. In addition, our proof gives a new structural result, which roughly says that the naive way of computing $f\circ g^{m}$ is the *only* optimal way. Along the way, we obtain a new proof of Håstad's state-of-the-art formula lower bound of $n^{3-o(1)}$. |

10:30 | Coffee break |

11:00 | Nearly optimal separations between communication (or query) complexity and partitions (abstract, slides) Andris Ambainis (University of Latvia), Mārtiņš Kokainis (University of Latvia), Robin Kothari (MIT) We show nearly quadratic separations between two pairs of complexity measures: (1) we show that there is a Boolean function $f$ with $D(f)=\Omega((D^{sc}(f))^{2-o(1)})$, where $D(f)$ is the deterministic query complexity of $f$ and $D^{sc}(f)$ is the subcube partition complexity of $f$; (2) as a consequence, we obtain that there is $f(x, y)$ such that $D^{cc}(f)=\Omega(\log^{2-o(1)}\chi(f))$, where $D^{cc}(f)$ is the deterministic 2-party communication complexity of $f$ (in the standard 2-party model of communication) and $\chi(f)$ is the partition number of $f$. Both of these separations are nearly optimal: it is well known that $D(f)=O((D^{sc}(f))^{2})$ and $D^{cc}(f)=O(\log^2\chi(f))$. |

11:30 | A composition theorem for conical juntas (abstract, slides) Mika Göös (University of Toronto), T.S. Jayram (IBM Almaden) We describe a general method of proving degree lower bounds for conical juntas (nonnegative combinations of conjunctions) that compute recursively defined Boolean functions. Such lower bounds are known to carry over to communication complexity. We give two applications. AND-OR trees: we show a near-optimal $\Omega(n^{0.753...})$ randomised communication lower bound for the recursive NAND function (a.k.a. AND-OR tree), answering an open question posed by Beame and Lawry (1992) and Lawry (1993). Majority trees: we show an $\Omega(2.59^k)$ randomised communication lower bound for the 3-majority tree of height $k$, improving over the state of the art already in the context of randomised decision tree complexity. |

12:00 | Tight bounds for communication assisted agreement distillation (abstract, slides) Venkatesan Guruswami (Carnegie Mellon University), Jaikumar Radhakrishnan (Tata Institute of Fundamental Research) Suppose Alice holds a uniformly random string $X \in \{0,1\}^N$ and Bob holds a noisy version $Y$ of $X$ where each bit of $X$ is flipped independently with probability $\varepsilon \in [0,\tfrac{1}{2}]$. Alice and Bob would like to extract a common random string of min-entropy at least $k$. In this work, we establish the communication versus success probability trade-off for this problem by giving a protocol and a matching lower bound (under the restriction that the string to be agreed upon is determined by Alice's input $X$). Specifically, we prove that in order for Alice and Bob to agree on a common string with probability $2^{-\gamma k}$ ($\gamma k \geq 1$), the optimal communication (up to $o(k)$ terms, and achievable for large $N$) is precisely $(C (1-\gamma) - 2 \sqrt{ C(1-C)\gamma})k$, where $C := 4\varepsilon(1-\varepsilon)$. In particular, the optimal communication to achieve $\Omega(1)$ agreement probability approaches $4\varepsilon(1-\varepsilon) k$. We also consider the case when $Y$ is the output of the binary erasure channel on $X$, where each bit of $Y$ equals the corresponding bit of $X$ with probability $1-\varepsilon$ and is otherwise erased (that is, replaced by a '?'). In this case, the communication required becomes $(\varepsilon(1-\gamma) - 2 \sqrt{\varepsilon(1-\varepsilon)\gamma}) k$. In particular, the optimal communication to achieve $\Omega(1)$ agreement probability approaches $\varepsilon k$, and with no communication the optimal agreement probability approaches $2^{- \frac{1-\sqrt{1-\varepsilon}}{1+\sqrt{1-\varepsilon}} k}$. Our protocols are based on covering codes and extend the approach of (Bogdanov and Mossel, 2011) for the zero-communication case. Our lower bounds rely on hypercontractive inequalities. For the model of bit-flips, our argument extends the approach of (Bogdanov and Mossel, 2011) by allowing communication; for the erasure model, to the best of our knowledge the needed hypercontractivity statement was not studied before, and it was established (given our application) by (Nair, 2015). We also obtain information complexity lower bounds for these tasks, and together with our protocol, they shed light on the recently popular "most informative Boolean function" conjecture of Courtade and Kumar. |
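The closed-form trade-off above is easy to evaluate numerically. Below is a quick sanity check (the helper names are ours, not from the paper) that, as $\gamma \to 0$, the bit-flip bound approaches $4\varepsilon(1-\varepsilon)k$ and the erasure bound approaches $\varepsilon k$, as the abstract states.

```python
import math

def bsc_comm_rate(eps: float, gamma: float) -> float:
    # Communication per unit of k for the bit-flip model:
    # C(1-gamma) - 2*sqrt(C(1-C)*gamma), with C = 4*eps*(1-eps).
    C = 4 * eps * (1 - eps)
    return C * (1 - gamma) - 2 * math.sqrt(C * (1 - C) * gamma)

def bec_comm_rate(eps: float, gamma: float) -> float:
    # Communication per unit of k for the erasure model:
    # eps*(1-gamma) - 2*sqrt(eps*(1-eps)*gamma).
    return eps * (1 - gamma) - 2 * math.sqrt(eps * (1 - eps) * gamma)

# At gamma = 0 (constant agreement probability) the rates collapse to
# 4*eps*(1-eps) and eps respectively, e.g. for eps = 0.25:
print(bsc_comm_rate(0.25, 0.0))  # 0.75
print(bec_comm_rate(0.25, 0.0))  # 0.25
```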

12:30 | Lunch (on your own) |

14:30 | Pseudorandomness when the odds are against you (abstract, slides) Sergei Artemenko (University of Haifa), Russell Impagliazzo (UC San Diego), Valentine Kabanets (Simon Fraser University), Ronen Shaltiel (University of Haifa) Impagliazzo and Wigderson [IW97] showed that if E is hard for exponential size circuits (meaning that there exists $L \in \text{E}=\text{DTIME}(2^{O(n)})$ and a constant $\beta>0$ such that for every sufficiently large $n$, circuits of size $2^{\beta \cdot n}$ cannot compute the characteristic function of $L$ on inputs of length $n$) then every time-$T$ randomized algorithm with constant error probability can be simulated in deterministic time $\mathrm{poly}(T)$, and BPP=P. There are many randomized algorithms in the literature for which a polynomial slowdown is a deal breaker. For example, randomized algorithms solving NP-complete problems typically run in time $2^{\alpha n}$ for some constant $\alpha$ slightly smaller than one. Paturi and Pudlák [PP] observed that many such algorithms are based on one-sided error randomized algorithms which they call OPP algorithms: these are randomized polynomial-time algorithms with success probability $\varepsilon(n)=2^{-\alpha \cdot n}$. OPP algorithms can be repeated $1/\varepsilon(n)$ times to yield a time $\mathrm{poly}(n)/\varepsilon(n)=\mathrm{poly}(n) \cdot 2^{\alpha \cdot n}$ randomized algorithm with constant success probability. We show that if E is hard for exponential size nondeterministic circuits, then there is a $\mathrm{poly}(n)$-time computable $\varepsilon(n)$-HSG $H:\{0,1\}^{O(\log n) + \log(1/\varepsilon)} \to \{0,1\}^n$. Hitting set generators with these parameters derandomize OPP algorithms in time $\mathrm{poly}(n)/\varepsilon(n)=\mathrm{poly}(n) \cdot 2^{\alpha \cdot n}$, which matches the running time of the randomized algorithm obtained from the OPP algorithm. As a consequence we obtain (for example) that for $k\ge 4$, the fastest known algorithm for k-SAT by Paturi et al. [PPSZ] can be made deterministic with essentially the same time bound. This is the first hardness versus randomness tradeoff that applies to algorithms for NP-complete problems. We address the necessity of our assumption by showing that HSGs with very low error imply hardness for nondeterministic circuits with "few" nondeterministic bits. Applebaum et al. [AASY15] showed that "black-box techniques" cannot achieve $\mathrm{poly}(n)$-time computable $\varepsilon$-PRGs for $\varepsilon=n^{-\omega(1)}$ even if we allow the circuits in the hardness assumption to have oracle access to an arbitrary language in the polynomial-time hierarchy. We introduce weaker variants of PRGs with *relative error* that do follow from the latter hardness assumptions. Specifically, we say that a function $G:\{0,1\}^r \to \{0,1\}^n$ is an $(\varepsilon,\delta)$-re-PRG for a circuit $C$ if \[(1-\varepsilon) \cdot \Pr[C(U_n)=1] - \delta \le \Pr[C(G(U_r))=1] \le (1+\varepsilon) \cdot \Pr[C(U_n)=1] + \delta .\] We construct $\mathrm{poly}(n)$-time computable $(\varepsilon,\delta)$-re-PRGs with arbitrary polynomial stretch, $\varepsilon=n^{-O(1)}$ and $\delta=2^{-n^{\Omega(1)}}$. We also construct PRGs with relative error that fool *non-boolean distinguishers* (in the sense introduced by Dubrov and Ishai [DI06]). Our techniques use ideas from [PP, TV, AASY15]. Common themes in our proofs are "composing" a PRG/HSG with a combinatorial object such as dispersers and extractors, and the use of nondeterministic reductions in the spirit of Feige and Lund [FeigeLund]. |

15:00 | New non-uniform lower bounds for uniform classes (abstract, slides) Lance Fortnow (Georgia Institute of Technology), Rahul Santhanam (University of Edinburgh and University of Oxford) We strengthen the hierarchy theorem of Cook, Seiferas-Fischer-Meyer, and Zak for nondeterministic polynomial time to show that the lower bound holds against sub-linear advice. More formally, we show that for any constants d and d' such that 1 < d < d', and for any time-constructible bound t = o(n^d), there is a language in NTIME(n^d) which is not in NTIME(t)/n^{1/d'}. The best known earlier separation could only handle o(log(n)) bits of advice in the lower bound, and was not tight with respect to the time bounds. We generalize our hierarchy theorem to work for other syntactic complexity measures between polynomial time and polynomial space, including alternating polynomial time with any fixed number of alternations. We also use our technique to derive an almost-everywhere hierarchy theorem for non-deterministic classes which use a sub-linear amount of non-determinism, i.e., the lower bound holds on all but finitely many input lengths rather than just on infinitely many. As one application of our main result, we derive a new lower bound for NP against NP-uniform non-deterministic circuits of size O(n^k) for any fixed k. This result is a significant strengthening of a result of Kannan, which states that not all of NP can be solved with P-uniform circuits of size O(n^k) for any fixed k. As another application, we show strong non-uniform lower bounds for the complexity class RE of languages decidable in randomized linear exponential time with one-sided error. |

15:30 | Coffee break |

16:30 | Best paper award winner: Learning algorithms from natural proofs
Marco Carmosino (UC San Diego), Russell Impagliazzo (UC San Diego), Valentine Kabanets (Simon Fraser University) and Antonina Kolokolova (Memorial University of Newfoundland) Circuit analysis algorithms such as learning, SAT, minimum circuit size, and compression imply circuit lower bounds. We show a generic implication in the opposite direction: natural properties (in the sense of Razborov and Rudich) imply randomized learning and compression algorithms. This is the first such implication outside of the derandomization setting. As an application, we use known natural lower bounds for $AC^0[p]$ circuits (due to Razborov and Smolensky) to get the first quasi-polynomial time algorithm for learning $AC^0[p]$ functions, in the PAC model over the uniform distribution, with membership queries. |

17:15 | Break |

17:45 | Business meeting |

9:00 | Decoding Reed-Muller codes over product sets (abstract, slides) John Kim (Rutgers University), Swastik Kopparty (Rutgers University) We give a polynomial time algorithm to decode multivariate polynomial codes of degree $d$ up to half their minimum distance, when the evaluation points are an arbitrary product set $S^m$, for every $d < |S|$. Previously known algorithms could achieve this only if the set $S$ has some very special algebraic structure, or if the degree $d$ is significantly smaller than $|S|$. We also give a near-linear time algorithm, which is based on tools from list-decoding, to decode these codes from nearly half their minimum distance, provided $d < (1-\varepsilon)|S|$ for constant $\varepsilon > 0$. Our result gives an $m$-dimensional generalization of the well-known decoding algorithms for Reed-Solomon codes, and can be viewed as giving an algorithmic version of the Schwartz-Zippel lemma. |
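Since the abstract casts the decoder as an algorithmic version of the Schwartz-Zippel lemma, a tiny numeric illustration of the lemma itself may help (the polynomial and evaluation set below are our own toy choices): a nonzero polynomial of total degree $d$ vanishes on at most a $d/|S|$ fraction of any grid $S^m$.

```python
from itertools import product

# Toy check of the Schwartz-Zippel bound: a nonzero polynomial of
# total degree d vanishes on at most a d/|S| fraction of the grid S^m.
S = range(11)                          # evaluation set, |S| = 11
d = 2                                  # total degree of p below
p = lambda x, y: (x - 1) * (y - 2)     # zero exactly when x == 1 or y == 2

zeros = sum(1 for x, y in product(S, S) if p(x, y) == 0)
bound = d / len(S) * len(S) ** 2       # (2/11) of the 121 grid points = 22.0
print(zeros, bound)                    # 21 22.0
assert zeros <= bound
```

Here the zero set is the union of the lines $x=1$ and $y=2$ (21 of the 121 points), just under the Schwartz-Zippel bound of 22.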

9:30 | Lower bounds for constant query affine-invariant LCCs and LTCs (abstract, slides) Arnab Bhattacharyya (Indian Institute of Science), Sivakanth Gopi (Princeton University) Affine-invariant codes are codes whose coordinates form a vector space over a finite field and which are invariant under affine transformations of the coordinate space. They form a natural, well-studied class of codes; they include popular codes such as Reed-Muller and Reed-Solomon. A particularly appealing feature of affine-invariant codes is that they seem well-suited to admit local correctors and testers. In this work, we give lower bounds on the length of locally correctable and locally testable affine-invariant codes with constant query complexity. We show that if a code $C \subset \Sigma^{\mathbb{K}^n}$ is an $r$-query locally correctable code (LCC), where $\mathbb{K}$ is a finite field and $\Sigma$ is a finite alphabet, then the number of codewords in $C$ is at most $\exp(O_{\mathbb{K}, r, |\Sigma|}(n^{r-1}))$. Also, we show that if $C \subset \Sigma^{\mathbb{K}^n}$ is an $r$-query locally testable code (LTC), then the number of codewords in $C$ is at most $\exp(O_{\mathbb{K}, r, |\Sigma|}(n^{r-2}))$. The dependence on $n$ in these bounds is tight for constant-query LCCs/LTCs, since Guo, Kopparty and Sudan (ITCS '13) construct affine-invariant codes via lifting that have the same asymptotic tradeoffs. Note that our result holds for non-linear codes, whereas previously, Ben-Sasson and Sudan (RANDOM '11) assumed linearity to derive similar results. Our analysis uses higher-order Fourier analysis. In particular, we show that the codewords corresponding to an affine-invariant LCC/LTC must be far from each other with respect to the Gowers norm of an appropriate order. This then allows us to bound the number of codewords, using known decomposition theorems which approximate any bounded function in terms of a finite number of low-degree non-classical polynomials, up to a small error in the Gowers norm. |

10:00 | Degree and sensitivity: tails of two distributions (abstract, slides) Parikshit Gopalan (Microsoft Research), Rocco Servedio (Columbia University), Avi Wigderson (Institute for Advanced Study) The sensitivity of a Boolean function $f$ is the maximum, over all inputs $x$, of the number of sensitive coordinates of $x$ (namely the number of Hamming neighbors of $x$ with different $f$-value). The well-known sensitivity conjecture of Nisan (see also Nisan and Szegedy) states that every sensitivity-$s$ Boolean function can be computed by a polynomial over the reals of degree $\mathrm{poly}(s)$. The best known upper bounds on degree, however, are exponential rather than polynomial in $s$. Our main result is an approximate version of the conjecture: every Boolean function with sensitivity $s$ can be $\varepsilon$-approximated (in $\ell_2$) by a polynomial whose degree is $s \cdot \mathrm{polylog}(1/\varepsilon)$. This is the first improvement on the folklore bound of $s/\varepsilon$. We prove this via a new switching lemma for low-sensitivity functions which establishes that a random restriction of a low-sensitivity function is very likely to have low decision tree depth. This is analogous to the well-known switching lemma for $AC^0$ circuits. Our proof analyzes the combinatorial structure of the graph $G_f$ of sensitive edges of a Boolean function $f$. Understanding the structure of this graph is of independent interest as a means of understanding Boolean functions. We propose several new complexity measures for Boolean functions based on this graph, including tree sensitivity and component dimension, which may be viewed as relaxations of worst-case sensitivity, and we introduce some new techniques, such as proper walks and shifting, to analyze these measures. We use these notions to show that the graph of a function of full degree must be sufficiently complex, and that random restrictions of low-sensitivity functions are unlikely to lead to such complex graphs.
We postulate a robust analogue of the sensitivity conjecture: if most inputs to a Boolean function $f$ have low sensitivity, then most of the Fourier mass of $f$ is concentrated on small subsets. We prove a lower bound on tree sensitivity in terms of decision tree depth, and show that a polynomial strengthening of this lower bound implies the robust conjecture. We feel that studying the graph $G_f$ is interesting in its own right, and we hope that some of the notions and techniques we introduce in this work will be of use in its further study. |
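The definition of sensitivity quoted in the abstract is easy to compute by brute force for small $n$. The sketch below (the helper is our own) checks that Parity and AND both attain the maximal sensitivity $n$, consistent with Parity also having full degree $n$.

```python
from itertools import product

def sensitivity(f, n: int) -> int:
    # Max over inputs x of the number of Hamming neighbors of x with a
    # different f-value (the definition from the abstract).
    best = 0
    for x in product([0, 1], repeat=n):
        flips = sum(
            f(x) != f(x[:i] + (1 - x[i],) + x[i + 1:])
            for i in range(n)
        )
        best = max(best, flips)
    return best

n = 4
parity = lambda x: sum(x) % 2
and_fn = lambda x: int(all(x))
print(sensitivity(parity, n))  # 4: every bit flip changes parity
print(sensitivity(and_fn, n))  # 4: attained at the all-ones input
```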

10:30 | Coffee break |

11:00 | New hardness results for graph and hypergraph colorings (abstract, slides) Joshua Brakensiek (Carnegie Mellon University), Venkatesan Guruswami (Carnegie Mellon University) Finding a proper coloring of a $t$-colorable graph $G$ with $t$ colors is a classic NP-hard problem when $t \ge 3$. In this work, we investigate the approximate coloring problem, in which the objective is to find a proper $c$-coloring of $G$ where $c \ge t$. We show that for all $t \ge 3$, it is NP-hard to find a $c$-coloring when $c \le 2t-2$. In the regime where $t$ is small, this improves on the previously best known hardness result of $c \le \max\{2t-5,\, t + 2\lfloor t/3 \rfloor - 1\}$ (Garey and Johnson 1976; Khanna, Linial, Safra 1993; Guruswami, Khanna 2000). For example, we show that 6-coloring a 4-colorable graph is NP-hard, improving on the NP-hardness of 5-coloring a 4-colorable graph. We also generalize this to related problems on the strong coloring of hypergraphs. A $k$-uniform hypergraph $H$ is $t$-strongly colorable (where $t \ge k$) if there is a $t$-coloring of the vertices such that no two vertices in each hyperedge of $H$ have the same color. We show that if $t = \lceil 3k/2 \rceil$, then it is NP-hard to find a 2-coloring of the vertices of $H$ such that no hyperedge is monochromatic. We establish the NP-hardness of these problems by reducing from the hardness of the Label Cover problem, via a "dictatorship test" gadget graph. By combinatorially classifying all possible colorings of this graph, we can infer labels to provide to the Label Cover problem. This approach generalizes the "weak polymorphism" framework of (Austrin, Guruswami, Håstad, 2014), though interestingly our results are "PCP-free" in that they do not require any approximation gap in the starting Label Cover instance. |
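A quick comparison of the two hardness thresholds quoted above (the helper names are ours) shows where the new bound $c \le 2t-2$ overtakes the old one; at $t=4$ it moves from 5-coloring to 6-coloring, matching the example in the abstract.

```python
# Compare the new hardness threshold c <= 2t-2 with the previously best
# known max{2t-5, t + 2*floor(t/3) - 1}, both taken from the abstract.
def old_bound(t: int) -> int:
    return max(2 * t - 5, t + 2 * (t // 3) - 1)

def new_bound(t: int) -> int:
    return 2 * t - 2

for t in range(3, 8):
    print(t, old_bound(t), new_bound(t))
# At t = 4: old bound 5, new bound 6 -- i.e. hardness improves from
# 5-coloring to 6-coloring a 4-colorable graph.
```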

11:30 | Invariance principle on the slice (abstract) Yuval Filmus (Technion), Guy Kindler (Hebrew University of Jerusalem), Elchanan Mossel (University of Pennsylvania and UC Berkeley), Karl Wimmer (Duquesne University) The non-linear invariance principle of Mossel, O'Donnell and Oleszkiewicz establishes that if f(x_1,...,x_n) is a multilinear low-degree polynomial with low influences then the distribution of f(B_1,...,B_n) is close (in various senses) to the distribution of f(G_1,...,G_n), where B_i in {-1,1} are independent Bernoulli random variables and G_i ~ N(0,1) are independent standard Gaussians. The invariance principle has seen many applications in theoretical computer science, including a proof of the *Majority is Stablest* conjecture, which shows that the Goemans-Williamson algorithm for MAX-CUT is optimal under the Unique Games Conjecture. More generally, MOO's invariance principle works for any two vectors of hypercontractive random variables (X_1,...,X_n),(Y_1,...,Y_n) such that (i) *Matching moments*: X_i and Y_i have matching first and second moments, and (ii) *Independence*: the variables X_1,...,X_n are independent, as are Y_1,...,Y_n. The independence condition is crucial to the proof of the theorem, yet in some cases we would like to use distributions X_1,...,X_n in which the individual coordinates are not independent. A common example is the uniform distribution on the *slice* C([n],k), which consists of all vectors (x_1,...,x_n) in {0,1}^n with Hamming weight k. The slice shows up in theoretical computer science (hardness amplification, direct sum testing), extremal combinatorics (Erdős-Ko-Rado theorems) and coding theory (in the guise of the Johnson association scheme). Our main result is an invariance principle in which (X_1,...,X_n) is the uniform distribution on a slice C([n],pn) and (Y_1,...,Y_n) consists either of n independent Ber(p) random variables, or of n independent N(p,p(1-p)) random variables. As applications, we prove a version of *Majority is Stablest* for functions on the slice, a version of Bourgain's tail theorem, a version of the Kindler-Safra structural theorem, and a stability version of the t-intersecting Erdős-Ko-Rado theorem, combining techniques of Wilson and Friedgut. Our proof relies on a combination of ideas from analysis and probability, algebra and combinatorics. In particular, we make essential use of recent work of the first author which describes an explicit Fourier basis for the slice. and Harmonicity and Invariance on Slices of the Boolean Cube (abstract, slides) Yuval Filmus (Technion), Elchanan Mossel (University of Pennsylvania and UC Berkeley) In a recent work with Kindler and Wimmer we proved an invariance principle for the slice for low-influence, low-degree functions. Here we provide an alternative proof for *general* low-degree functions, with no constraints on the influences. We show that any real-valued function on the slice, whose degree when written as a harmonic multi-linear polynomial is o(\sqrt{n}), has approximately the same distribution under the slice and cube measures. Our proof is based on a novel decomposition of random increasing paths in the cube in terms of martingales and reverse martingales. While such decompositions have been used in the past for stationary reversible Markov chains, our decomposition is applied in a non-reversible, non-stationary setup. We also provide simple proofs for some known and some new properties of harmonic functions which are crucial for the proof. Finally, we provide independent simple proofs for the facts that (1) one cannot distinguish between the slice and the cube based on functions of o(n) coordinates and (2) Boolean symmetric functions on the cube cannot be approximated under the uniform measure by functions whose sum of influences is o(\sqrt{n}). |

12:00 | On the sum-of-squares degree of symmetric quadratic functions (abstract, slides) Troy Lee (Nanyang Technological University), Anupam Prakash (Nanyang Technological University), Ronald de Wolf (CWI and University of Amsterdam), Henry Yuen (MIT) We study how well functions over the Boolean hypercube of the form $f_k(x)=(|x|-k)(|x|-k-1)$ can be approximated by sums of squares of low-degree polynomials, obtaining good bounds for the case of approximation in $\ell_{\infty}$-norm as well as in $\ell_1$-norm. We describe three complexity-theoretic applications: (1) a proof that the recent breakthrough lower bound of Lee, Raghavendra, and Steurer on the positive semidefinite extension complexity of the correlation and TSP polytopes cannot be improved further by showing better sum-of-squares degree lower bounds on $\ell_1$-approximation of $f_k$; (2) a proof that Grigoriev's lower bound on the degree of Positivstellensatz refutations for the knapsack problem is optimal, answering an open question from that work; (3) bounds on the query complexity of quantum algorithms whose expected output approximates such functions. |
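A small check (our own, for intuition) of why approximating $f_k$ by sums of squares is a natural question: $f_k$ depends only on the integer Hamming weight $|x|$, and $m(m-1) \ge 0$ for every integer $m$, so $f_k$ is nonnegative on the entire hypercube even though it dips between the roots $k$ and $k+1$.

```python
from itertools import product

# f_k(x) = (|x|-k)(|x|-k-1) depends only on the Hamming weight |x|;
# since |x| is an integer, (m)(m-1) >= 0 for every integer m = |x|-k,
# so f_k is nonnegative on the whole Boolean hypercube.
n, k = 6, 3
f_k = lambda x: (sum(x) - k) * (sum(x) - k - 1)

values = [f_k(x) for x in product([0, 1], repeat=n)]
assert all(v >= 0 for v in values)
print(min(values))  # 0, attained exactly when |x| is k or k+1
```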

12:30 | Lunch (on your own) |

14:30 | Limits of minimum circuit size problem as oracle (abstract, slides) Shuichi Hirahara (University of Tokyo), Osamu Watanabe (Tokyo Institute of Technology) The Minimum Circuit Size Problem (MCSP) is known to be hard for statistical zero knowledge via a BPP-reduction (Allender and Das, 2014), whereas establishing NP-hardness of MCSP via a polynomial-time many-one reduction is difficult (Murray and Williams, 2015) in the sense that it implies ZPP $\neq$ EXP, which is a major open problem in computational complexity. In this paper, we provide strong evidence that current techniques cannot establish NP-hardness of MCSP, even under polynomial-time Turing reductions or randomized reductions. Specifically, we introduce the notion of a black-box reduction to MCSP, which captures all the currently known reductions. We say that a reduction to MCSP is black-box if the reduction can be generalized to a reduction to MCSP^A for any oracle A, where MCSP^A denotes an oracle version of MCSP. We prove that essentially no language is reducible to MCSP via a polynomial-time Turing reduction in a black-box way. We also show that the class of languages reducible to MCSP via a black-box randomized reduction that makes at most one query is contained in AM and coAM. Thus, NP-hardness of MCSP cannot be established via such black-box reductions unless the polynomial hierarchy collapses. We also extend the previous results to the case of more general reductions: we prove that establishing NP-hardness of MCSP via a polynomial-time nonadaptive reduction implies ZPP $\neq$ EXP, and that establishing NP-hardness of approximating circuit complexity via a polynomial-time Turing reduction also implies ZPP $\neq$ EXP. Along the way, we prove that approximating Levin's Kolmogorov complexity is provably not EXP-hard under polynomial-time Turing reductions, which is of independent interest. |

15:00 | New extractors for interleaved sources (abstract) Eshan Chattopadhyay (UT Austin), David Zuckerman (UT Austin) We study how to extract randomness from a $C$-interleaved source, that is, a source comprised of $C$ independent sources whose bits or symbols are interleaved. We describe a simple approach for constructing such extractors that yields: (1) For some $\delta>0, c > 0$, explicit extractors for $2$-interleaved sources on $\{0,1\}^{2n}$ when one source has min-entropy at least $(1-\delta)n$ and the other has min-entropy at least $c\log n$. The best previous construction, by Raz and Yehudayoff [RY08], worked only when both sources had entropy rate $1-\delta$. (2) For some $c>0$ and any large enough prime $p$, explicit extractors for $2$-interleaved sources on $[p]^{2n}$ when one source has min-entropy rate at least $.51$ and the other source has min-entropy rate at least $(c\log n)/n$. We use these to obtain the following applications: (1) We introduce the class of any-order-small-space sources, generalizing the class of small-space sources studied by Kamp et al. [KRVZ06]. We construct extractors for such sources with min-entropy rate close to $1/2$. Using the Raz-Yehudayoff construction would require entropy rate close to $1$. (2) For any large enough prime $p$, we exhibit an explicit function $f:[p]^{2n} \rightarrow \{0,1\}$ such that the randomized best-partition communication complexity of $f$ with error $1/2-2^{-\Omega(n)}$ is at least $.24 n \log p$. Previously this was known only for a tiny constant instead of $.24$, for $p=2$ [RY08]. (3) We introduce non-malleable extractors in the interleaved model. For any large enough prime $p$, we give an explicit construction of a weak-seeded non-malleable extractor for sources over $[p]^n$ with min-entropy rate $.51$. Nothing was known previously, even for almost full min-entropy. In subsequent work, Chattopadhyay and Li [CL15] constructed extractors for $C$-interleaved sources at polylogarithmic min-entropy, for some large enough constant $C$. |

15:30 | New characterizations in turnstile streams with applications (abstract, slides) Yuqing Ai (Tsinghua University), Wei Hu (Tsinghua University), Yi Li (Facebook Inc.), David P. Woodruff (IBM Almaden) Recently, [Li, Nguyen, Woodruff, 2014] showed any $1$-pass constant probability streaming algorithm for computing a relation $f$ on a vector $x \in \{-m, -(m-1), \cdots, m\}^n$ presented in the turnstile data stream model can be implemented by maintaining a linear sketch $A \cdot x \bmod q$, where $A$ is an $r \times n$ integer matrix and $q = (q_1, \cdots, q_r)$ is a vector of positive integers. The space complexity of maintaining $A \cdot x \bmod q$, not including the random bits used for sampling $A$ and $q$, matches the space of the optimal algorithm. (Note the [LNW14] reduction does not lose a $\log m$ factor in space as they claim if it maintains $A \cdot x \bmod q$ rather than $A \cdot x$ over the integers.) We give multiple strengthenings of this reduction, together with new applications. In particular, we show how to remove the following shortcomings of their reduction: 1. The Box Constraint. Their reduction applies only to algorithms that must be correct even if $\|x\|_{\infty} = \max_{i \in [n]} |x_i|$ is allowed to be much larger than $m$ at intermediate points in the stream, provided that $x \in \{-m, -(m-1), \cdots, m\}^n$ at the end of the stream. We give a condition under which the optimal algorithm is a linear sketch even if it works only when promised that $x \in \{-m, -(m-1), \cdots, m\}^n$ at all points in the stream. Using this, we show the first super-constant, $\Omega(\log m)$ bit lower bound for the problem of maintaining a counter up to an additive $m/4$ error in a turnstile stream. Previous lower bounds are based on communication complexity and are only for relative error approximation; interestingly, we do not know how to prove our result using communication complexity. 
This implies the first super-constant, optimal lower bounds for additive approximation of $\ell_p$-norms. 2. Negative Coordinates. Their reduction allows $x_i$ to be negative while processing the stream. We show an equivalence of $1$-pass algorithms and linear sketches $A \cdot x \bmod q$ in dynamic graph streams, or more generally, the strict turnstile model, in which for all $i$, $x_i \geq 0$ at all points in the stream. Combined with [Assadi, Khanna, Li, Yaroslavtsev, 2015], this resolves the $1$-pass space complexity of approximating the maximum matching in a dynamic graph stream, answering a question in that work. 3. $1$-Pass Restriction. Their reduction only applies to $1$-pass data stream algorithms in the turnstile model, while there exist algorithms for heavy hitters and for low rank approximation which provably do better with multiple passes. We extend the reduction to algorithms which make any number of passes, showing the optimal algorithm is to choose a new linear sketch at the beginning of each pass, based on the output of previous passes. |
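The linear-sketch characterization above is concrete enough to illustrate directly. The following is a minimal sketch (not the [LNW14] construction itself, and with hypothetical helper names): maintaining $A \cdot x \bmod q$ under turnstile updates $(i, \Delta)$ costs one row-wise update per stream item, and the final sketch state equals $A \cdot x \bmod q$ for the final frequency vector.

```python
import random

def make_sketch(r, n, q):
    """Maintain A @ x mod q under turnstile updates, starting from x = 0.

    A is a random r x n integer matrix and q = (q_1, ..., q_r) holds one
    modulus per row, as in the [LNW14]-style characterization. Illustrative
    only: a real algorithm would choose A and q to suit the relation f.
    """
    A = [[random.randrange(qi) for _ in range(n)] for qi in q]
    state = [0] * r

    def update(i, delta):
        # One turnstile stream item: x_i += delta (delta may be negative).
        for row in range(r):
            state[row] = (state[row] + A[row][i] * delta) % q[row]

    return A, state, update
```

After processing any stream, `state` agrees with `A @ x mod q` computed from scratch on the final vector, which is the point of the reduction: the algorithm's memory need only be this residue vector, not `x` itself.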

16:00 | Coffee break |

16:30 | Invited talk: Nisheeth Vishnoi (EPFL) |

17:30 | Non-malleable extractors - new tools and improved constructions (abstract, slides) Gil Cohen (California Institute of Technology) A non-malleable extractor is a seeded extractor with a very strong guarantee: the output of a non-malleable extractor obtained using a typical seed is close to uniform even conditioned on the output obtained using any other seed. The first contribution of this paper consists of two new and improved constructions of non-malleable extractors: * First, we construct a non-malleable extractor with seed length $O(\log{n} \cdot \log\log{n})$ that works for entropy $\Omega(\log{n})$. This improves upon a recent exciting construction by Chattopadhyay, Goyal, and Li (ECCC'15) that has seed length $O(\log^{2}{n})$ and requires entropy $\Omega(\log^{2}{n})$. * Second, we construct a non-malleable extractor with optimal seed length $O(\log{n})$ for entropy $n/\mathrm{polylog}(n)$. Prior to this construction, non-malleable extractors with a logarithmic seed length, due to Li (FOCS'12), required entropy $0.49n$. Even non-malleable condensers with seed length $O(\log{n})$, by Li (STOC'12), could only support linear entropy. We further devise several tools for enhancing a given non-malleable extractor in a black-box manner. One such tool is an algorithm that reduces the entropy requirement of a non-malleable extractor at the expense of a slightly longer seed. A second algorithm increases the output length of a non-malleable extractor from constant to linear in the entropy of the source. We also devise an algorithm that transforms a non-malleable extractor to the so-called $t$-non-malleable extractor for any desired $t$. Besides being useful building blocks for our constructions, we consider these modular tools to be of independent interest. |

18:00 | Break |

18:30 | Rump session |

9:00 | Tight SoS-degree bounds for approximate Nash equilibria (abstract) Aram Harrow (MIT), Anand Natarajan (MIT), Xiaodi Wu (University of Oregon) Nash equilibria always exist, but finding one is widely conjectured to require time exponential in the number of strategies, even for two-player games. By contrast, a simple quasi-polynomial time algorithm, due to Lipton, Markakis and Mehta (LMM), can find approximate Nash equilibria, in which no player can improve their utility by more than epsilon by changing their strategy. The LMM algorithm can also be used to find an approximate Nash equilibrium with near-maximal total welfare. Matching hardness results for this optimization problem were found assuming the hardness of the planted-clique problem (by Hazan and Krauthgamer) and assuming the Exponential Time Hypothesis (by Braverman, Ko and Weinstein). In this paper we consider the application of the sum-of-squares (SoS) algorithm from convex optimization to the problem of optimizing over Nash equilibria. We show the first unconditional lower bounds on the number of levels of SoS needed to achieve a constant factor approximation to this problem. While it may seem that Nash equilibria do not naturally lend themselves to convex optimization, we also describe a simple LP (linear programming) hierarchy that can find an approximate Nash equilibrium in time comparable to that of the LMM algorithm, although neither algorithm is obviously a generalization of the other. This LP can be viewed as arising from the SoS algorithm at log(n) levels -- matching our lower bounds. The lower bounds involve a modification of the Braverman-Ko-Weinstein embedding of CSPs into strategic games and techniques from sum-of-squares proof systems. The upper bound (i.e., analysis of the LP) uses information-theory techniques that have been recently applied to other linear- and semidefinite-programming hierarchies. |

9:30 | Understanding PPA-completeness (abstract) Xiaotie Deng (Shanghai Jiao Tong University), Jack Edmonds, Zhe Feng (Shanghai Jiao Tong University), Zhengyang Liu (Shanghai Jiao Tong University), Qi Qi (Hong Kong University of Science and Technology), Zeying Xu (Shanghai Jiao Tong University) We consider the problem of finding a fully colored base triangle on the 2-dimensional Möbius band under the standard boundary condition, proving it to be PPA-complete. The proof is based on a construction for the discrete zero point problem, that of finding a zero point under a discrete version of the continuity condition. It further derives PPA-completeness for versions on the Möbius band of other related discrete fixed point type problems, and for a special version of the Tucker problem: finding an edge such that if the value of one end vertex is $x$, the other is $-x$, given a special anti-symmetry boundary condition. More generally, this applies to other non-orientable spaces, including the projective plane and the Klein bottle. However, since those models have a closed boundary, we rely on a version of PPA that asks, given one fixed point, to find another fixed point. This model also makes the presentation simple for an extension to a high-dimensional discrete fixed point problem on a non-orientable (nearly) hyper-grid with a constant side length. |

10:00 | Polynomial bounds for decoupling, with applications (abstract, slides) Ryan O'Donnell (Carnegie Mellon University), Yu Zhao (Carnegie Mellon University) Let f(x) = f(x_1, ..., x_n) = \sum_{|S| \leq k} a_S \prod_{i \in S} x_i be a real n-variate multilinear polynomial of degree at most k, where S \subseteq {1, 2, ..., n}. For its "one-block decoupled" version, f~(y,z) = \sum_{|S| \leq k} a_S \sum_{i \in S} y_i \prod_{j \in S \setminus \{i\}} z_j, we show tail-bound comparisons of the form Pr[|f~(y,z)| > C_k t] \leq D_k Pr[|f(x)| > t]. Our constants C_k, D_k are significantly better than those known for "full decoupling". For example, when x, y, z are independent Gaussians we obtain C_k = D_k = O(k); when x, y, z are +/-1 random variables we obtain C_k = O(k^2), D_k = k^{O(k)}. By contrast, for full decoupling only C_k = D_k = k^{O(k)} is known in these settings. We describe consequences of these results for query complexity (related to conjectures of Aaronson and Ambainis) and for analysis of Boolean functions (including an optimal sharpening of the DFKO Inequality). |
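The one-block decoupled polynomial is easy to compute directly from the coefficient list. A small illustrative sketch (hypothetical helper names; coefficients keyed by the index set S): one useful sanity check on the definition is that for a homogeneous degree-k multilinear f, setting y = z = x gives f~(x,x) = k * f(x), since each monomial is counted once per choice of the y-variable.

```python
from math import prod

def f(coeffs, x):
    # coeffs maps a tuple of indices S to the coefficient a_S;
    # evaluates sum_S a_S * prod_{i in S} x_i.
    return sum(a * prod(x[i] for i in S) for S, a in coeffs.items())

def f_decoupled(coeffs, y, z):
    # One-block decoupling: in each monomial, one x_i is replaced by y_i
    # and the rest by z_j, summed over the choice of i in S.
    return sum(a * sum(y[i] * prod(z[j] for j in S if j != i) for i in S)
               for S, a in coeffs.items())
```

For example, with coeffs = {(0, 1): 2, (1, 2): 3} (homogeneous of degree 2) and x = [1, 2, 3], f evaluates to 22 and f_decoupled(coeffs, x, x) to 44, matching the k * f(x) identity for k = 2.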

10:30 | Coffee break |

11:00 | Polynomials, quantum query complexity, and Grothendieck's inequality (abstract, slides) Scott Aaronson (MIT), Andris Ambainis (University of Latvia), Jānis Iraids (University of Latvia), Mārtiņš Kokainis (University of Latvia), Juris Smotrovs (University of Latvia) We show an equivalence between 1-query quantum algorithms and representations by degree-2 polynomials. Namely, a partial Boolean function f is computable by a 1-query quantum algorithm with error bounded by \epsilon<1/2 iff f can be approximated by a degree-2 polynomial with error bounded by \epsilon'<1/2. This result holds for two different notions of approximation by a polynomial: the standard definition of Nisan and Szegedy and the approximation by block-multilinear polynomials recently introduced by Aaronson and Ambainis (STOC'2015). We also show two results for polynomials of higher degree. First, there is a total Boolean function which requires \tilde{\Omega}(n) quantum queries but can be represented by a block-multilinear polynomial of degree \tilde{O}(\sqrt{n}). Thus, in the general case (for an arbitrary number of queries), block-multilinear polynomials are not equivalent to quantum algorithms. Second, for any constant degree k, the two notions of approximation by a polynomial (the standard and the block-multilinear) are equivalent. As a consequence, we solve an open problem by Aaronson and Ambainis (STOC'2015), showing that one can estimate the value of any bounded degree-k polynomial p:{0, 1}^n \rightarrow [-1, 1] with O(n^{1-\frac{1}{2k}}) queries. |

11:30 | Sculpting quantum speedups (abstract) Scott Aaronson (MIT), Shalev Ben-David (MIT) Given a problem which is intractable for both quantum and classical algorithms, can we find a sub-problem for which quantum algorithms provide an exponential advantage? For example, quantum algorithms are believed to be unable to quickly evaluate 3SAT instances; but is there a restricted class of 3SAT instances which is particularly quantum-friendly? We refer to this problem as the "sculpting problem." In this work, we give a full characterization of sculptable functions in the query complexity setting. We show that a total function f can be restricted to a promise P such that Q(f|_P)=O(log N) and R(f|_P)=N^{Omega(1)}, if and only if f has a large number of inputs with large certificate complexity. The proof uses some interesting techniques: for one direction, we introduce new relationships between randomized and quantum query complexity in various settings, and for the other direction, we make use of a recent result from communication complexity due to Klartag and Regev. We also characterize sculpting for other query complexity measures, such as R(f) vs. R_0(f) and R_0(f) vs. D(f). Along the way, we prove some new relationships for quantum query complexity: for example, a nearly quadratic relationship between Q(f) and D(f) whenever the promise of f is small. This contrasts the recent super-quadratic query complexity separations, showing that the maximum gap between classical and quantum query complexities is indeed quadratic in various settings - just not for total functions! Lastly, we investigate sculpting in the Turing machine model. We show that if there is any BPP-bi-immune language in BQP, then *every* language outside BPP can be restricted to a promise which places it in PromiseBQP but not in PromiseBPP. Under a weaker assumption, that some problem in BQP is hard on average for P/poly, we show that every *paddable* language outside BPP is sculptable in this way. |

12:00 | A linear time algorithm for quantum 2-SAT (abstract, slides) Niel de Beaudrap (University of Oxford), Sevag Gharibian (Virginia Commonwealth University) The Boolean constraint satisfaction problem 3-SAT is arguably the canonical NP-complete problem. In contrast, 2-SAT can not only be decided in polynomial time, but in fact in deterministic linear time. In 2006, Bravyi proposed a physically motivated generalization of k-SAT to the quantum setting, defining the problem "quantum k-SAT". He showed that quantum 2-SAT is also solvable in polynomial time on a classical computer, in particular in deterministic time O(n^4), assuming unit-cost arithmetic over a field extension of the rational numbers, where n is the number of variables. In this paper, we present an algorithm for quantum 2-SAT which runs in linear time, i.e., deterministic time O(n+m) for n and m the number of variables and clauses, respectively. Our approach exploits the transfer matrix techniques of Laumann et al. [QIC, 2010] used in the study of phase transitions for random quantum 2-SAT, and bears similarities with both the linear time 2-SAT algorithms of Even, Itai, and Shamir (based on backtracking) [SICOMP, 1976] and Aspvall, Plass, and Tarjan (based on strongly connected components) [IPL, 1979]. |
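For contrast with the quantum case, the classical linear-time baseline mentioned above is easy to sketch. The following is an illustrative implementation of the Aspvall-Plass-Tarjan approach (not the paper's quantum algorithm): each clause (a OR b) contributes implications not-a -> b and not-b -> a, and the formula is unsatisfiable exactly when some variable lies in the same strongly connected component as its negation.

```python
def two_sat(n, clauses):
    """Decide 2-SAT via the implication graph (Aspvall-Plass-Tarjan).

    Literals are nonzero ints: +i means x_i, -i means NOT x_i (1-indexed).
    Runs in time linear in n + len(clauses), using Kosaraju's SCC algorithm.
    """
    N = 2 * n
    idx = lambda l: 2 * (abs(l) - 1) + (l < 0)  # node index of a literal
    g = [[] for _ in range(N)]   # implication graph
    gr = [[] for _ in range(N)]  # reversed graph
    for a, b in clauses:
        g[idx(-a)].append(idx(b)); gr[idx(b)].append(idx(-a))  # not a -> b
        g[idx(-b)].append(idx(a)); gr[idx(a)].append(idx(-b))  # not b -> a
    # Pass 1: iterative DFS on g, recording finish order.
    order, seen = [], [False] * N
    for s in range(N):
        if seen[s]:
            continue
        seen[s] = True
        stack = [(s, iter(g[s]))]
        while stack:
            v, it = stack[-1]
            for w in it:
                if not seen[w]:
                    seen[w] = True
                    stack.append((w, iter(g[w])))
                    break
            else:
                order.append(v)
                stack.pop()
    # Pass 2: sweep reversed graph in reverse finish order, labeling SCCs.
    comp, c = [-1] * N, 0
    for s in reversed(order):
        if comp[s] != -1:
            continue
        comp[s] = c
        stack = [s]
        while stack:
            v = stack.pop()
            for w in gr[v]:
                if comp[w] == -1:
                    comp[w] = c
                    stack.append(w)
        c += 1
    # Satisfiable iff no variable shares an SCC with its negation.
    return all(comp[2 * i] != comp[2 * i + 1] for i in range(n))
```

For example, two_sat(2, [(1, 2), (-1, 2)]) is satisfiable (set x_2 true), while two_sat(1, [(1, 1), (-1, -1)]) demands both x_1 and its negation and is not.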

12:30 | Complexity classification of two-qubit commuting hamiltonians (abstract, slides) Adam Bouland (MIT), Laura Mancinska (National University of Singapore), Xue Zhang (MIT) We classify two-qubit commuting Hamiltonians in terms of their computational complexity. Suppose one has a two-qubit commuting Hamiltonian H which one can apply to any pair of qubits, starting in a computational basis state. We prove a dichotomy theorem: either this model is efficiently classically simulable or it allows us to sample from probability distributions which cannot be sampled from classically unless the polynomial hierarchy collapses. Furthermore, the only simulable Hamiltonians are those which fail to generate entanglement. This shows that generic two-qubit commuting Hamiltonians can be used to perform computational tasks which are intractable for classical computers under plausible assumptions. Our proof makes use of a structure theorem for commuting Hamiltonians, new postselection gadgets, and Lie theory. |

13:00 | Lunch and Excursion |

18:00 | Banquet |

9:00 | Identity testing for constant-width, and commutative, read-once oblivious ABPs (abstract, slides) Rohit Gurjar (IIT Kanpur), Arpita Korwar (IIT Kanpur), Nitin Saxena (IIT Kanpur) We give improved hitting-sets for two special cases of Read-once Oblivious Arithmetic Branching Programs (ROABPs). First is the case of an ROABP with known variable order. The best hitting-set known for this case had cost (nw)^{O(log n)}, where n is the number of variables and w is the width of the ROABP. Even for a constant-width ROABP, nothing better than a quasi-polynomial bound was known. We improve the hitting-set complexity for the known-order case to n^{O(log w)}. In particular, this gives the first polynomial time hitting-set for constant-width ROABPs (known-order). However, our hitting-set works only over those fields whose characteristic is zero or quasi-polynomially large. To construct the hitting-set, we use the rank of the partial derivative matrix. Unlike previous approaches whose basic building block is a monomial map, we use a polynomial map. The second case we consider is that of commutative ROABPs. The best known hitting-set for this case had cost d^{O(log w)}(nw)^{O(log log w)}, where d is the individual degree. We improve this hitting-set complexity to (ndw)^{O(log log w)}. We get this by achieving rank concentration more efficiently. |
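For context, a hitting set is a derandomization of the standard randomized blackbox identity test. A minimal sketch of that randomized baseline (the Schwartz-Zippel test, not this paper's construction; `poly` is an assumed blackbox evaluator over a prime field): a nonzero polynomial of total degree d vanishes at a uniformly random point of F_p^n with probability at most d/p, so a handful of random evaluations detects nonzeroness with high probability.

```python
import random

def blackbox_pit(poly, n, p, trials=20):
    """Randomized blackbox PIT over the prime field F_p (Schwartz-Zippel).

    `poly` is a blackbox taking a list of n field elements and returning a
    value mod p. Returns True iff every random evaluation was zero, i.e.
    the polynomial is probably identically zero (one-sided error).
    """
    for _ in range(trials):
        point = [random.randrange(p) for _ in range(n)]
        if poly(point) % p != 0:
            return False  # found a witness of nonzeroness
    return True
```

A hitting set replaces the random points by a fixed, explicitly constructed set of points guaranteed to contain a nonzero witness for every nonzero circuit in the class; the results above shrink that set for ROABPs.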

9:30 | Identity testing and lower bounds for read-k oblivious algebraic branching programs (abstract, slides) Matthew Anderson (Union College), Michael A. Forbes (Princeton University), Ramprasad Saptharishi (Tel Aviv University), Amir Shpilka (Tel Aviv University), Ben Lee Volk (Tel Aviv University) Read-$k$ oblivious algebraic branching programs are a natural generalization of the well-studied model of read-once oblivious algebraic branching program (ROABPs). In this work, we give an exponential lower bound of $\exp(n/k^{O(k)})$ on the width of any read-$k$ oblivious ABP computing some explicit multilinear polynomial $f$ that is computed by a polynomial size depth-$3$ circuit. We also study the polynomial identity testing (PIT) problem for this model and obtain a white-box subexponential-time PIT algorithm. The algorithm runs in time $2^{\tilde{O}(n^{1-1/2^{k-1}})}$ and needs white box access only to know the order in which the variables appear in the ABP. |

10:00 | Reconstruction of real depth-3 circuits with top fan-in 2 (abstract) Gaurav Sinha (California Institute of Technology) Reconstruction of arithmetic circuits has been heavily studied in the past few years and has connections to proving lower bounds and deterministic identity testing. In this paper we present a polynomial time randomized algorithm for reconstructing ΣΠΣ(2) circuits over R, i.e. depth-3 circuits with fan-in 2 at the top addition gate and having real coefficients. The algorithm needs only a blackbox query access to the polynomial f \in R[x_1,...,x_n] of degree d, computable by a ΣΠΣ(2) circuit C. In addition, we assume that the "simple rank" of this polynomial (essential number of variables after removing the gcd of the two multiplication gates) is bigger than a fixed constant. Our algorithm runs in time poly(n,d) and returns an equivalent ΣΠΣ(2) circuit (with high probability). The problem of reconstructing ΣΠΣ(2) circuits over finite fields was first proposed by Shpilka [24]. The generalization to ΣΠΣ(k) circuits, k = O(1) (over finite fields) was addressed by Karnin and Shpilka in [15]. The techniques in these previous works involve iterating over all objects of certain kinds over the ambient field and thus the running time depends on the size of the field F. Their reconstruction algorithm uses lower bounds on the lengths of Linear Locally Decodable Codes with 2 queries. In our setting, such ideas immediately pose a problem and we need new ideas to handle the case of the field R. Our main techniques are based on the use of quantitative Sylvester-Gallai theorems from the work of Barak et al. [3] to find a small collection of "nice" subspaces to project onto. The heart of our paper lies in subtle applications of the quantitative Sylvester-Gallai theorems to prove why projections w.r.t. the "nice" subspaces can be "glued". 
We also use Brill's Equations from [8] to construct a small set of candidate linear forms (containing linear forms from both gates). Another important technique which comes very handy is the polynomial time randomized algorithm for factoring multivariate polynomials given by Kaltofen [14]. |

10:30 | Coffee break |

11:00 | Proof complexity lower bounds from algebraic circuit complexity (abstract, slides) Michael A. Forbes (Princeton University), Amir Shpilka (Tel Aviv University), Iddo Tzameret (Royal Holloway, University of London), Avi Wigderson (Institute for Advanced Study) We give upper and lower bounds on the power of subsystems of the Ideal Proof System (IPS), the algebraic proof system recently proposed by Grochow and Pitassi~\cite{GrochowPitassi14}, where the circuits comprising the proof come from various restricted algebraic circuit classes. This mimics an established research direction in the boolean setting for subsystems of Frege proofs whose lines are circuits from restricted boolean circuit classes. Essentially all of the subsystems considered in this paper can simulate the well-studied Nullstellensatz proof system, and prior to this work there were no known lower bounds when measuring proof size by the algebraic complexity of the polynomials (except with respect to degree, or to sparsity). Our main contributions are two general methods of converting certain algebraic lower bounds into proof complexity ones. Both require stronger arithmetic lower bounds than is common, which should hold not for a specific polynomial but for a whole family defined by it. These may be likened to some of the methods by which Boolean circuit lower bounds are turned into related proof-complexity ones, especially the ``feasible interpolation'' technique. We establish algebraic lower bounds of these forms for several explicit polynomials, against a variety of classes, and infer the relevant proof complexity bounds. These yield separations between IPS subsystems, which we complement by simulations to create a partial structure theory for IPS systems. 
Our first method is a \emph{functional lower bound}, a notion of Grigoriev and Razborov~\cite{GrigorievRazborov00}, which is a function $\hat{f}:\{0,1\}^n\to\F$ such that any polynomial $f$ agreeing with $\hat{f}$ on the boolean cube requires large algebraic circuit complexity. We develop functional lower bounds for a variety of circuit classes (sparse polynomials, depth-3 powering formulas, roABPs and multilinear formulas) where $\hat{f}(\vx)$ equals $1/p(\vx)$ for a constant-degree polynomial $p$ depending on the relevant circuit class. We believe these lower bounds are of independent interest in algebraic complexity, and show that they also imply lower bounds for the size of the corresponding IPS refutations for proving that the relevant polynomial $p$ is non-zero over the boolean cube. In particular, we show super-polynomial lower bounds for refuting variants of the subset-sum axioms in these IPS subsystems. Our second method is to give \emph{lower bounds for multiples}, that is, to give explicit polynomials all of whose (non-zero) multiples require large algebraic circuit complexity. By extending known techniques, we give lower bounds for multiples for various restricted circuit classes such as sparse polynomials, sums of powers of low-degree polynomials, and roABPs. These results are of independent interest, as we argue that lower bounds for multiples is the correct notion for instantiating the algebraic hardness versus randomness paradigm of Kabanets and Impagliazzo~\cite{KabanetsImpagliazzo04}. Further, we show how such lower bounds for multiples extend to lower bounds for refutations in the corresponding IPS subsystem. |

11:30 | Functional lower bounds for arithmetic circuits and connections to boolean circuit complexity (abstract, slides) Michael A. Forbes (Princeton University), Mrinal Kumar (Rutgers University), Ramprasad Saptharishi (Tel Aviv University) We say that a circuit $C$ over a field $\F$ \emph{functionally} computes an $n$-variate polynomial $P \in \F[x_1, x_2, \ldots, x_n]$ if for every $\vecx \in \{0,1\}^n$ we have that $C(x) = P(x)$. This is in contrast to \emph{syntactically} computing $P$, when $C \equiv P$ as formal polynomials. In this paper, we study the question of proving lower bounds for homogeneous depth-$3$ and depth-$4$ arithmetic circuits for functional computation. We prove the following results : \begin{itemize} \item Exponential lower bounds for homogeneous depth-$3$ arithmetic circuits for a polynomial in $\VNP$. \item Exponential lower bounds for homogeneous depth-$4$ arithmetic circuits with bounded individual degree for a polynomial in $\VNP$. \end{itemize} Our main motivation for this line of research comes from our observation that strong enough functional lower bounds for even very special depth-$4$ arithmetic circuits for the Permanent imply a separation between $\#P$ and $ACC^0$. Thus, improving the second result to get rid of the \emph{bounded individual degree} condition could lead to substantial progress in boolean circuit complexity. Besides, it is known from a recent result of Kumar and Saptharishi~\cite{KumarSaptharishi15} that over constant sized finite fields, strong enough \emph{average case} functional lower bounds for homogeneous depth-$4$ circuits imply superpolynomial lower bounds for homogeneous depth-$5$ circuits. Our proofs are based on a family of new complexity measures called \emph{shifted evaluation dimension}, and might be of independent interest. |

12:00 | Arithmetic circuits with locally low algebraic rank (abstract, slides) Mrinal Kumar (Rutgers University), Shubhangi Saraf (Rutgers University) In recent years there has been a flurry of activity proving lower bounds for homogeneous depth-4 arithmetic circuits~\cite{GKKS12, FLMS13, KLSS14, KS-full}, which has brought us very close to statements that are known to imply VP $\neq$ VNP. It is a big question to go beyond homogeneity, and in this paper we make progress towards this by considering depth-4 circuits of {\it low algebraic rank}, which are a natural extension of homogeneous depth-4 arithmetic circuits. A depth-4 circuit is a representation of an $N$-variate, degree $n$ polynomial $P$ as \[ P = \sum_{i = 1}^T Q_{i1} \cdot Q_{i2} \cdots Q_{it} \] where the $Q_{ij}$ are given by their monomial expansion. Homogeneity adds the constraint that for every $i \in [T]$, $\sum_{j} \text{degree}(Q_{ij}) = n$. We study an extension where, for every $i \in [T]$, the {\it algebraic rank} of the set of polynomials $\{Q_{i1}, Q_{i2}, \ldots ,Q_{it}\}$ is at most some parameter $k$. We call this the class of $\spnew$ circuits. Already for $k = n$, these circuits are a strong generalization of the class of homogeneous depth-4 circuits, where in particular $t \leq n$ (and hence $k \leq n$). We study lower bounds and polynomial identity tests for such circuits and prove the following results. \begin{enumerate} \item {\bf Lower bounds:} We give an explicit family of polynomials $\{P_n\}$ of degree $n$ in $N = n^{O(1)}$ variables in $\VNP$, such that any $\spnewn$ circuit computing $P_n$ has size at least $\exp{(\Omega(\sqrt{n}\log N))}$. This strengthens and unifies two lines of work: it generalizes the recent exponential lower bounds for {\it homogeneous} depth-4 circuits~\cite{KLSS14, KS-full} as well as the Jacobian based lower bounds of Agrawal et al.~\cite{ASSS12} which worked for $\spnew$ circuits in the restricted setting where $T\cdot k \leq n$. 
\item {\bf Hitting sets:} Let $\spnewbounded$ be the class of $\spnew$ circuits with bottom fan-in at most $d$. We show that if $d$ and $k$ are at most $\text{poly}(\log N)$, then there is an explicit hitting set for $\spnewbounded$ circuits of size quasipolynomial in $N$ and the size of the circuit. This strengthens a result of Forbes~\cite{Forbes-personal} which showed such quasipolynomial sized hitting sets in the setting where $d$ and $t$ are at most $\text{poly}(\log N)$. \end{enumerate} A key technical ingredient of the proofs is a result which states that over any field of characteristic zero (or sufficiently large characteristic), up to a translation, every polynomial in a set of algebraically dependent polynomials can be written as a function of the polynomials in the transcendence basis. We believe this may be of independent interest. We combine this with shifted partial derivative based methods to obtain our final results. |

12:30 | Sums of products of polynomials in few variables: lower bounds and polynomial identity testing (abstract, slides) Mrinal Kumar (Rutgers University), Shubhangi Saraf (Rutgers University) We study the complexity of representing polynomials as a sum of products of polynomials in few variables. More precisely, we study representations of the form $$P = \sum_{i = 1}^T \prod_{j = 1}^d Q_{ij}$$ such that each $Q_{ij}$ is an arbitrary polynomial that depends on at most $s$ variables. We prove the following results. \begin{itemize} \item Over fields of characteristic zero, for every constant $\mu$ such that $0 \leq \mu < 1$, we give an explicit family of polynomials $\{P_{N}\}$, where $P_{N}$ is of degree $n$ in $N = n^{O(1)}$ variables, such that any representation of the above type for $P_{N}$ with $s = N^{\mu}$ requires $Td \geq n^{\Omega(\sqrt{n})}$. This strengthens a recent result of Kayal and Saha~\cite{KayalSaha14} which showed similar lower bounds for the model of sums of products of linear forms in few variables. It is known that any asymptotic improvement in the exponent of the lower bounds (even for $s = \sqrt{n}$) would separate $\VP$ and $\VNP$~\cite{KayalSaha14}. \item We obtain a deterministic subexponential time blackbox polynomial identity testing (PIT) algorithm for circuits computed by the above model when $T$ and the individual degree of each variable in $P$ are at most $\log^{O(1)} N$ and $s \leq N^{\mu}$ for any constant $\mu < 1/2$. We get quasipolynomial running time when $s < \log^{O(1)} N$. The PIT algorithm is obtained by combining our lower bounds with the hardness-randomness tradeoffs developed in~\cite{DSY09, KI04}. 
To the best of our knowledge, this is the first nontrivial PIT algorithm for this model (even for the case $s=2$), and the first nontrivial PIT algorithm obtained from lower bounds for small depth circuits.~\footnote{In a recent independent work, Forbes~\cite{Forbes-personal} does blackbox identity testing for another subclass of depth four circuits using shifted partial derivative based methods. To the best of our understanding, the results in these two papers are incomparable even though both rely on similar techniques.} \end{itemize} |

13:00 | End of the conference |