The no-free-lunch theorems promote a skeptical conclusion that all possible machine learning algorithms equally lack justification. But how could this leave room for a learning theory that shows that some algorithms are better than others? Drawing parallels to the philosophy of induction, we point out that the no-free-lunch results presuppose a conception of learning algorithms as purely data-driven. On this conception, every algorithm must have an inherent inductive bias that wants justification. We argue that many standard learning algorithms should rather be understood as model-dependent: in each application they also require as input a model, representing a bias. Generic algorithms themselves, they can be given a model-relative justification.
Researchers prove that the $k$-point bound, an approximation method for the independence number of topological packing graphs, converges to the true independence number as $k$ increases. The work also establishes a relationship between this bound and the copositive hierarchy, showing that the $k$-point bound is exact when $k$ is at least two greater than the independence number.
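To make the two claims concrete: writing $\alpha(G)$ for the independence number of a topological packing graph $G$ and $\Delta_k(G)$ for the $k$-point bound ($\Delta_k$ is shorthand chosen here, not notation taken from the paper), the statements above amount to
$$\lim_{k\to\infty}\Delta_k(G)=\alpha(G)\qquad\text{and}\qquad \Delta_k(G)=\alpha(G)\ \text{ whenever }\ k\ge\alpha(G)+2.$$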
For a subfamily $F\subseteq 2^{[n]}$ of the Boolean lattice, consider the graph $G_{F}$ on $F$ based on the pairwise inclusion relations among its members. Given a positive integer $t$, how large can $F$ be before $G_{F}$ must contain some component of order greater than $t$? For $t=1$, this question was answered exactly almost a century ago by Sperner: the size of a middle layer of the Boolean lattice. For $t=2^n$, this question is trivial. We are interested in what happens between these two extremes. For $t=2^{g}$ with $g=g(n)$ being any integer function that satisfies $g(n)=o(n/\log n)$ as $n\to\infty$, we give an asymptotically sharp answer to the above question: not much larger than the size of a middle layer. This constitutes a nontrivial generalisation of Sperner's theorem. We do so by a reduction to a Turán-type problem for rainbow cycles in properly edge-coloured graphs. Among other results, we also give a sharp answer to the question: how large can $F$ be before $G_{F}$ must be connected?
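For orientation, the $t=1$ case is the classical antichain statement: if every component of $G_{F}$ is a single set, then no member of $F$ contains another, and Sperner's theorem caps $|F|$ at a middle binomial coefficient. A hedged restatement of the two regimes discussed above, with the precise asymptotic form assumed rather than quoted from the paper:
$$\max|F|=\binom{n}{\lfloor n/2\rfloor}\ \ (t=1),\qquad \max|F|=(1+o(1))\binom{n}{\lfloor n/2\rfloor}\ \ \bigl(t=2^{g(n)},\ g(n)=o(n/\log n)\bigr).$$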
Witsenhausen's problem asks for the maximum fraction $\alpha_n$ of the $n$-dimensional unit sphere that can be covered by a measurable set containing no pairs of orthogonal points. The best upper bounds for $\alpha_n$ are given by extensions of the Lovász theta number. In this paper, optimization hierarchies based on the Lovász theta number, like the Lasserre hierarchy, are extended to Witsenhausen's problem and similar problems. These hierarchies are shown to converge and are used to compute the best upper bounds for $\alpha_n$ in low dimensions.
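For reference, the quantity being bounded can be written as an optimization over orthogonality-avoiding sets; here $\omega$ denotes the normalized surface measure, and the indexing convention $S^{n-1}\subseteq\mathbb{R}^n$ is an assumption rather than notation taken from the paper:
$$\alpha_n=\sup\bigl\{\omega(A): A\subseteq S^{n-1}\ \text{measurable},\ \langle x,y\rangle\neq 0\ \text{for all } x,y\in A\bigr\}.$$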