INRIA Rocquencourt
Mochon's proof [Moc07] of the existence of quantum weak coin flipping with arbitrarily small bias is a fundamental result in quantum cryptography, but at the same time one of the least understood. Though used several times as a black box in important follow-up results [Gan09, CK09, AS10, CK11, KZ13], the result has not been peer-reviewed, its novel techniques (and in particular Kitaev's point game formalism) have not been applied anywhere else, and an explicit protocol is missing. We believe that truly understanding the existence proof and the novel techniques it relies on would constitute a major step in quantum information theory, leading to a deeper understanding of entanglement and of quantum protocols in general. In this work, we make a first step in this direction. We simplify parts of Mochon's construction considerably, making about 20 pages of analysis in the original proof superfluous, clarifying some other parts of the proof along the way, and presenting the proof in a way which is conceptually easier to grasp. We believe the resulting proof of existence is easier to understand, more readable, and certainly verifiable. Moreover, we analyze the resources needed to achieve a bias $\epsilon$ and show that the number of qubits is $O(\log 1/\epsilon)$, while the number of rounds is $(1/\epsilon)^{O(1/\epsilon)}$. A true understanding of the proof, including Kitaev's point game techniques and their applicability, as well as completing the task of constructing an explicit (and also simpler and more efficient) protocol, are left to future work.
From the free-surface Navier-Stokes system, we derive the non-hydrostatic Saint-Venant system for shallow water, including friction and viscosity. The derivation leads to two formulations of growing complexity, depending on the level of approximation chosen for the fluid pressure. The obtained models are compared with Boussinesq models.
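For reference, the classical hydrostatic Saint-Venant system that these non-hydrostatic formulations extend can be written in one dimension as follows (with $h$ the water depth, $u$ the depth-averaged velocity, $g$ the gravity constant and $z_b$ the bottom topography; friction and viscous terms omitted):

```latex
\begin{aligned}
\partial_t h + \partial_x (hu) &= 0,\\
\partial_t (hu) + \partial_x \Bigl(hu^2 + \frac{g h^2}{2}\Bigr) &= -\,g h\,\partial_x z_b.
\end{aligned}
```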
In some real-world situations, linear models are not sufficient to represent accurately the complex relations between the input and output variables of a studied system. Multilayer Perceptrons are one of the most successful non-linear regression tools, but they are unfortunately restricted to inputs and outputs that belong to a normed vector space. In this chapter, we propose a general recoding method that allows symbolic data to be used both as inputs and outputs to Multilayer Perceptrons. The recoding is quite simple to implement and yet provides a flexible framework that can deal with almost all practical cases. The proposed method is illustrated on a real-world data set.
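In the simplest special case (our minimal illustration, not the chapter's general recoding method), turning a categorical variable into something a Multilayer Perceptron can consume amounts to a one-hot numeric encoding:

```python
# Minimal sketch: recode a categorical value as a one-hot numeric vector,
# usable both on the input and the output side of a perceptron.
def one_hot(value, categories):
    """Map a categorical value to a one-hot vector over `categories`."""
    vec = [0.0] * len(categories)
    vec[categories.index(value)] = 1.0
    return vec

colors = ["red", "green", "blue"]
print(one_hot("green", colors))   # [0.0, 1.0, 0.0]
```

On the output side, the network's predicted vector is decoded back by taking the category with the largest coordinate.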
We investigate a decentralised approach to committing transactions in a replicated database under partial replication. Previous protocols either re-execute transactions entirely and/or compute a total order of transactions. In contrast, ours applies update values, and orders only conflicting transactions. As a result, transactions execute faster, and distributed databases commit in small committees. Both effects help preserve scalability as the number of databases and transactions increases. Our algorithm ensures serializability, and is live and safe in spite of faults.
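The conflict relation that drives the partial ordering can be sketched as follows (our toy model, not the paper's protocol; representing transactions by read/write key sets is an illustrative assumption):

```python
# Toy sketch: two transactions must be ordered only if they conflict,
# i.e. one writes a key that the other reads or writes.
def conflicts(t1, t2):
    """Read-write or write-write intersection between two transactions."""
    return bool(t1["writes"] & (t2["reads"] | t2["writes"]) or
                t2["writes"] & (t1["reads"] | t1["writes"]))

a = {"reads": {"x"}, "writes": {"y"}}
b = {"reads": {"y"}, "writes": {"z"}}
c = {"reads": {"u"}, "writes": {"v"}}
print(conflicts(a, b))  # True: b reads key y, which a writes
print(conflicts(a, c))  # False: disjoint key sets, no ordering needed
```

Non-conflicting transactions, like `a` and `c` above, can commit concurrently in disjoint committees.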
A systematic numerical approach to approximating high-dimensional Lindblad equations is described. It is based on a deterministic rank-m approximation of the density operator, the rank m being the only parameter to adjust. From a known initial value, this rank-m approximation gives at each time step an estimate of the m largest eigenvalues of the density operator, together with their eigenvectors. A numerical scheme is proposed. Its numerical efficiency in the case of a rank-12 approximation is demonstrated for oscillation revivals of 50 atoms interacting resonantly with a slightly damped coherent quantized field of 200 photons. The approach may be employed for other similar equations. In particular, we show how to adapt such low-rank approximations to the Riccati differential equations appearing in Kalman filtering.
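As a rough illustration of the idea (our own toy sketch, not the paper's scheme; the model, step size and rank below are arbitrary choices), one can evolve a small Lindblad equation with explicit Euler steps and, after each step, project the density matrix onto its best rank-m approximation:

```python
import numpy as np

def lindblad_rhs(rho, H, Ls):
    """Right-hand side of the Lindblad master equation (hbar = 1)."""
    out = -1j * (H @ rho - rho @ H)
    for L in Ls:
        LdL = L.conj().T @ L
        out += L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL)
    return out

def rank_m_project(rho, m):
    """Keep the m largest eigenvalues/eigenvectors, renormalize the trace."""
    w, v = np.linalg.eigh(rho)             # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:m]          # indices of the m largest
    w, v = np.clip(w[idx], 0.0, None), v[:, idx]
    rho_m = (v * w) @ v.conj().T           # best rank-m Hermitian approximation
    return rho_m / np.trace(rho_m).real

# Toy model: damped two-level atom, H = sigma_z/2, decay sqrt(gamma)*sigma_-.
H = np.diag([0.5, -0.5]).astype(complex)
Ls = [np.sqrt(0.3) * np.array([[0, 1], [0, 0]], complex)]
rho = np.full((2, 2), 0.5, complex)        # pure |+><+| initial state

dt, m = 0.01, 1
for _ in range(500):
    rho = rank_m_project(rho + dt * lindblad_rhs(rho, H, Ls), m)

print(np.trace(rho).real)                  # projection keeps trace = 1
```

Here the rank-1 projection is only an approximation of the true (mixed) decaying state; the paper's point is that a moderate rank m can track the dominant eigenspace of a very high-dimensional density operator.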
Deduction modulo is a way to express a theory using computation rules instead of axioms. We present in this paper an extension of deduction modulo, called polarized deduction modulo, where some rules may be used only at positive occurrences, while others may be used only at negative ones. We show that all theories in propositional calculus can be expressed in this framework and that cuts can always be eliminated in such theories.
This paper presents a proof that the existence of a polynomial Lyapunov function is necessary and sufficient for exponential stability of sufficiently smooth nonlinear ordinary differential equations on bounded sets. The main result states that if there exists an n-times continuously differentiable Lyapunov function which proves exponential stability on a bounded subset of $\mathbb{R}^n$, then there exists a polynomial Lyapunov function which proves exponential stability on the same region. Such a Lyapunov function will exist if, for example, the right-hand side of the differential equation is polynomial or at least n-times continuously differentiable. The proof is based on a generalization of the Weierstrass approximation theorem to differentiable functions in several variables. Specifically, we show how to use polynomials to approximate a differentiable function in the Sobolev norm $W^{1,\infty}$ to any desired accuracy. We combine this approximation result with the second-order Taylor series expansion to find that polynomial Lyapunov functions can approximate continuous Lyapunov functions arbitrarily well on bounded sets. Our investigation is motivated by the use of polynomial optimization algorithms to construct polynomial Lyapunov functions.
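The approximation step can be illustrated numerically (a toy sketch of ours, not the paper's construction; the function V and the polynomial degree are arbitrary choices): fit a polynomial to a smooth Lyapunov-like function on a bounded interval and check that both the function and its derivative are approximated uniformly, i.e. in the $W^{1,\infty}$ sense:

```python
import numpy as np

# Smooth, Lyapunov-like test function on [-1, 1] and its exact derivative.
V = lambda x: x**2 * (1.0 + 0.3 * np.sin(x))
dV = lambda x: 2*x*(1.0 + 0.3*np.sin(x)) + 0.3 * x**2 * np.cos(x)

x = np.linspace(-1.0, 1.0, 2001)
# Degree-12 Chebyshev least-squares fit of V on the sampled grid.
cheb = np.polynomial.chebyshev.Chebyshev.fit(x, V(x), deg=12)

err0 = np.max(np.abs(cheb(x) - V(x)))          # sup-norm error of V itself
err1 = np.max(np.abs(cheb.deriv()(x) - dV(x))) # sup-norm error of V'
print(err0, err1)
```

Both errors are tiny on the grid, consistent with simultaneous uniform approximation of a smooth function and its derivative by polynomials on a bounded set.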
Consider a tree network $T$, where each edge acts as an independent copy of a given channel $M$, and information is propagated from the root. For which $T$ and $M$ does the configuration obtained at level $n$ of $T$ typically contain significant information on the root variable? This problem arose independently in biology, information theory and statistical physics. For all $b$, we construct a channel for which the variable at the root of the $b$-ary tree is independent of the configuration at level 2 of that tree, yet for sufficiently large $B > b$, the mutual information between the configuration at level $n$ of the $B$-ary tree and the root variable is bounded away from zero. This is related to certain secret-sharing protocols. We improve the upper bounds on information flow for asymmetric binary channels (which correspond to the Ising model with an external field) and for symmetric $q$-ary channels (which correspond to Potts models). Let $\lambda_2(M)$ denote the second largest eigenvalue of $M$, in absolute value. A CLT of Kesten and Stigum (1966) implies that if $b\,|\lambda_2(M)|^2 > 1$, then the \emph{census} of the variables at any level of the $b$-ary tree contains significant information on the root variable. We establish a converse: if $b\,|\lambda_2(M)|^2 < 1$, then the census of the variables at level $n$ of the $b$-ary tree is asymptotically independent of the root variable. This contrasts with examples where $b\,|\lambda_2(M)|^2 < 1$, yet the \emph{configuration} at level $n$ is not asymptotically independent of the root variable.
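The Kesten-Stigum quantity $b\,|\lambda_2(M)|^2$ is easy to evaluate for a concrete channel; the sketch below (our illustration) computes it for a binary symmetric channel, where $\lambda_2(M) = 1 - 2p$ for flip probability $p$:

```python
import numpy as np

def second_eigenvalue(M):
    """Second largest eigenvalue of the channel matrix M, in absolute value."""
    return sorted(np.abs(np.linalg.eigvals(M)), reverse=True)[1]

def kesten_stigum(b, M):
    """The quantity b * |lambda_2(M)|^2 governing the census threshold."""
    return b * second_eigenvalue(M)**2

p = 0.2
M = np.array([[1 - p, p], [p, 1 - p]])   # lambda_2 = 1 - 2p = 0.6
print(kesten_stigum(2, M))  # 0.72 < 1: census asymptotically uninformative
print(kesten_stigum(3, M))  # 1.08 > 1: census carries root information
```

With $p = 0.2$, a binary tree is below the census threshold while a ternary tree is above it.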
In porous media physics, calibrating model parameters through experiments is a challenge. This process is plagued with errors that come from modelling, measurement and computation of the macroscopic observables through random homogenization -- the forward problem -- as well as errors coming from the parameter-fitting procedure -- the inverse problem. In this work, we address these issues by considering a least-squares formulation to identify parameters of the microscopic model on the basis of macroscopic observables. In particular, we discuss the selection of the macroscopic observables which we need to know in order to uniquely determine these parameters. To gain a better intuition and explore the problem without too high a computational load, we mostly focus on the one-dimensional case. We show that the Newton algorithm can be used to efficiently and robustly determine optimal parameters, even if some small statistical noise is present in the system.
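The fitting step can be sketched as follows (our toy illustration, not the paper's homogenization setup: the forward model and parameters are invented, and we use the Gauss-Newton variant of the Newton iteration commonly applied to least-squares problems):

```python
import numpy as np

def gauss_newton(residual, theta0, iters=20, h=1e-6):
    """Gauss-Newton least-squares fit with finite-difference Jacobians."""
    theta = np.asarray(theta0, float)
    for _ in range(iters):
        r = residual(theta)
        # Forward-difference Jacobian, one column per parameter.
        J = np.column_stack([(residual(theta + h * e) - r) / h
                             for e in np.eye(theta.size)])
        # Linearize r(theta + d) ~ r + J d and solve for the step d.
        theta = theta - np.linalg.lstsq(J, r, rcond=None)[0]
    return theta

# Toy forward model: observable y = exp(-a*x) + b, parameters (a, b).
x_obs = np.linspace(0.0, 2.0, 30)
true = np.array([1.5, 0.3])
y_obs = np.exp(-true[0] * x_obs) + true[1]

res = lambda th: np.exp(-th[0] * x_obs) + th[1] - y_obs
theta_fit = gauss_newton(res, [1.0, 0.0])
print(theta_fit)   # converges to approximately (1.5, 0.3)
```

Uniqueness of the recovered parameters depends, as in the paper, on the observables being rich enough to distinguish them.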
In many applications, input data are sampled functions taking their values in infinite-dimensional spaces rather than standard vectors. This fact has complex consequences for data analysis algorithms that motivate modifications of them. In fact, most of the traditional data analysis tools for regression, classification and clustering have been adapted to functional inputs under the general name of Functional Data Analysis (FDA). In this paper, we investigate the use of Support Vector Machines (SVMs) for functional data analysis and we focus on the problem of curve discrimination. SVMs are large-margin classifiers based on implicit non-linear mappings of the considered data into high-dimensional spaces thanks to kernels. We show how to define simple kernels that take into account the functional nature of the data and lead to consistent classification. Experiments conducted on real-world data emphasize the benefit of taking into account some functional aspects of the problems.
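One way to build such a kernel (a minimal sketch of ours, not the paper's exact construction) is to compare sampled curves through a functional feature such as their derivatives, then plug the induced squared distance into a Gaussian kernel:

```python
import numpy as np

def deriv_kernel(f, g, t, gamma=1.0):
    """Gaussian kernel on the L2 distance between curve derivatives."""
    df = np.gradient(f, t)                # finite-difference derivative of f
    dg = np.gradient(g, t)
    dt = t[1] - t[0]
    d2 = np.sum((df - dg) ** 2) * dt      # Riemann sum for ||f' - g'||_L2^2
    return np.exp(-gamma * d2)

t = np.linspace(0.0, 1.0, 200)
f = np.sin(2 * np.pi * t)
g = np.sin(2 * np.pi * t) + 0.5          # same shape, vertical shift
h = np.cos(2 * np.pi * t)                # genuinely different dynamics

print(deriv_kernel(f, g, t))  # ~1.0: a derivative kernel ignores level shifts
print(deriv_kernel(f, h, t))  # near 0: the curves' dynamics differ
```

Such a kernel encodes the functional prior that curve shape, not absolute level, matters for discrimination; it can be passed to any kernelized SVM implementation.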