Université d’Artois
We develop the theory of Milnor-Witt motives and motivic cohomology. Compared to Voevodsky's theory of motives and his motivic cohomology, the first difference appears in our definition of Milnor-Witt finite correspondences, where our cycles come equipped with quadratic forms. This yields a weaker notion of transfers and a derived category of motives that is closer to the stable homotopy theory of schemes. We prove a cancellation theorem when tensoring with the Tate object, compare the diagonal part of our Milnor-Witt motivic cohomology to Milnor-Witt K-theory, and provide spectra representing various versions of motivic cohomology in the $\mathbb{A}^1$-derived category or the stable homotopy category of schemes.
A comprehensive picture of three Bethe-Kikuchi variational principles, including their relationship to belief propagation (BP) algorithms on hypergraphs, is given. The structure of BP equations is generalized to define continuous-time diffusions, solving localized versions of the max-entropy principle (A), the variational free energy principle (B), and a less usual equilibrium free energy principle (C), Legendre dual to A. Both critical points of Bethe-Kikuchi functionals and stationary beliefs are shown to lie at the non-linear intersection of two constraint surfaces, enforcing energy conservation and marginal consistency respectively. The hypersurface of singular beliefs, across which equilibria become unstable as the constraint surfaces meet tangentially, is described by polynomial equations in the convex polytope of consistent beliefs. For graphs of binary variables, this polynomial admits a loop series expansion.
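For orientation, the single-level (Bethe) form of these functionals for factor graphs is the standard Yedidia-Freeman-Weiss expression, recalled here rather than taken from the paper (the paper works with the more general Kikuchi cluster functionals on hypergraphs):
\[
F_{\mathrm{Bethe}}(b) \;=\; \sum_{a}\sum_{x_a} b_a(x_a)\,\bigl[E_a(x_a) + \ln b_a(x_a)\bigr] \;-\; \sum_{i} (d_i - 1) \sum_{x_i} b_i(x_i)\,\ln b_i(x_i),
\]
minimized subject to the marginal-consistency constraints $\sum_{x_a \setminus x_i} b_a(x_a) = b_i(x_i)$ and normalization $\sum_{x_i} b_i(x_i) = 1$, where $d_i$ is the number of factors containing variable $i$; BP fixed points correspond to critical points of this constrained functional.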
In this study, we propose an imitation learning framework designed to enhance the Benders decomposition method. Our primary focus is addressing degeneracy in subproblems with multiple dual optima, among which the Magnanti-Wong technique identifies a non-dominated solution. We develop two policies. In the first policy, we replicate the Magnanti-Wong method and learn from each iteration. In the second policy, our objective is to determine a trajectory that reaches the final subproblem dual solution more quickly. We train and assess these two policies through extensive computational experiments on a network design problem with a flow subproblem, confirming that the learned policies significantly enhance the efficiency of the decomposition process.
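As a rough structural sketch of where such a policy plugs into the method (not the authors' implementation; `solve_master`, `solve_dual_subproblem`, `add_cut`, and `policy` are hypothetical placeholders passed in by the caller), a learned model can stand in for the auxiliary Magnanti-Wong step that chooses among alternative dual optima:

```python
# Hypothetical sketch: Benders decomposition with a learned dual-selection policy.
def benders_with_learned_duals(solve_master, solve_dual_subproblem, add_cut, policy,
                               max_iters=100, tol=1e-6):
    incumbent = None
    for _ in range(max_iters):
        x, lower_bound = solve_master()                   # trial first-stage decision
        duals, upper_bound = solve_dual_subproblem(x)     # degenerate subproblem: many dual optima
        if upper_bound - lower_bound <= tol:
            return x
        # Instead of solving the auxiliary Magnanti-Wong problem, ask the learned
        # policy which dual solution (and hence which optimality cut) to add.
        chosen_duals = policy(x, duals)
        add_cut(chosen_duals)
        incumbent = x
    return incumbent
```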
In this paper, we address the Bounded Cardinality Hub Location Routing with Route Capacity wherein each hub acts as a transshipment node for one directed route. The number of hubs lies between a minimum and a maximum and the hub-level network is a complete subgraph. The transshipment operations take place at the hub nodes and flow transfer time from a hub-level transporter to a spoke-level vehicle influences spoke-to-hub allocations. We propose a mathematical model and a branch-and-cut algorithm based on Benders decomposition to solve the problem. To accelerate convergence, our solution framework embeds an efficient heuristic producing high-quality solutions in short computation times. In addition, we show how symmetry can be exploited to accelerate and improve the performance of our method.
The analysis of trail-running performance is complex, and cardio-respiratory and muscular factors may have a variable importance depending on the inclination. Our study aims to determine the role of these parameters in performance. Thirteen subjects with heterogeneous levels participated in the study. They carried out 7 visits including 3 maximal aerobic speed (MAS) tests at 1, 10 and 25% slope on a treadmill, 3 endurance tests at 100% of the MAS reached at 1, 10 and 25%, and an evaluation on an isokinetic ergometer at different speeds (60, 180 and 240°/s). Gas exchange was measured during the incremental tests. We were able to identify 2 groups, a performance group and a recreational group. We observe a difference in VO2max, MAS at 1 and 10%, and maximal aerobic ascensional speed (MAaS) at 25% between the 2 groups, but no difference in VO2max and exhaustion time at 100% MAS between the different conditions (1-10-25%). Interestingly, at the ventilatory thresholds the metabolic parameters, expressed as absolute or relative values, are similar between conditions (10-25%) while the ascensional speeds are different. This study suggests that the measurement of ascensional speed is not as relevant as heart rate for controlling intensity, given the variety of slope gradients in trail-running races.
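For readers unfamiliar with the term, ascensional (vertical) speed can be recovered from treadmill belt speed and grade by a simple geometric relation (our clarifying formula, not one quoted from the study); with grade defined as rise over run,
\[
v_{\mathrm{asc}} \;=\; v_{\mathrm{belt}}\,\sin\!\bigl(\arctan(\mathrm{grade})\bigr) \;=\; \frac{v_{\mathrm{belt}}\cdot \mathrm{grade}}{\sqrt{1+\mathrm{grade}^2}} \;\approx\; v_{\mathrm{belt}}\cdot \mathrm{grade} \quad \text{for small grades},
\]
so, for example, a belt speed of 6 km/h at a 25% grade corresponds to roughly 1455 m/h of vertical gain.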
In the paper, we consider the competitive facility location problem with limited choice rule (CFLPLCR), which seeks to open a subset of facilities to maximize the net profit of a newcomer company, in a setting where customers patronize only a limited number of open facilities and an outside option. We propose an efficient branch-and-cut (B&C) approach for the CFLPLCR based on newly proposed mixed integer linear programming (MILP) formulations. Specifically, by establishing the submodularity of the probability function, we develop an MILP formulation for the CFLPLCR using the submodular inequalities. For the special case where each customer patronizes at most one open facility and the outside option, we show that the submodular inequalities characterize the convex hull of the considered set and provide a compact MILP formulation. Moreover, for the general case, we strengthen the submodular inequalities by sequential lifting, resulting in a class of facet-defining inequalities. The proposed lifted submodular inequalities are shown to be stronger than the classic submodular inequalities, yielding another MILP formulation with a tighter linear programming (LP) relaxation. Through extensive numerical experiments, we show that the proposed B&C approach outperforms the state-of-the-art generalized Benders decomposition approach by at least one order of magnitude. Furthermore, it can solve CFLPLCR instances with 10000 customers and 2000 facilities.
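For context, the classic submodular inequalities typically refer to the Nemhauser-Wolsey cuts (standard material, not the paper's own notation): for a submodular set function $f$ on ground set $N$ with marginal values $\rho_j(S) = f(S \cup \{j\}) - f(S)$, modelling $\max\{f(\{j : y_j = 1\}) : y \in \{0,1\}^N\}$ with a hypograph variable $z$, every $S \subseteq N$ gives the valid inequality
\[
z \;\le\; f(S) \;+\; \sum_{j \in N \setminus S} \rho_j(S)\, y_j \;-\; \sum_{j \in S} \rho_j(N \setminus \{j\})\,(1 - y_j),
\]
where $y_j = 1$ indicates that facility $j$ is open; the lifting mentioned above strengthens the coefficients of these cuts.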
Decision trees have long been recognized as models of choice in sensitive applications where interpretability is of paramount importance. In this paper, we examine the computational ability of Boolean decision trees in deriving, minimizing, and counting sufficient reasons and contrastive explanations. We prove that the set of all sufficient reasons of minimal size for an instance given a decision tree can be exponentially larger than the size of the input (the instance and the decision tree). Therefore, generating the full set of sufficient reasons can be out of reach. In addition, computing a single sufficient reason is generally not enough; indeed, two sufficient reasons for the same instance may differ on many features. To deal with this issue and generate synthetic views of the set of all sufficient reasons, we introduce the notions of relevant features and of necessary features, which characterize the (possibly negated) features appearing in at least one or in every sufficient reason, and we show that they can be computed in polynomial time. We also introduce the notion of explanatory importance, which indicates how frequently each (possibly negated) feature occurs in the set of all sufficient reasons. We show how the explanatory importance of a feature and the number of sufficient reasons can be obtained via a model counting operation, which turns out to be practical in many cases. We also explain how to enumerate sufficient reasons of minimal size. We finally show that, unlike sufficient reasons, the set of all contrastive explanations for an instance given a decision tree can be derived, minimized and counted in polynomial time.
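To make the notion concrete, here is a minimal sketch (our illustration, with a toy tree encoding rather than any representation used in the paper) of the polynomial-time check that a partial assignment is a sufficient reason, i.e. an implicant of the class predicted for the instance: every completion of the fixed features must reach a leaf with that class.

```python
# A decision tree node is either a class label (leaf) or a tuple
# (feature, left_subtree, right_subtree): go left if the feature is 0, right if 1.
def implies_class(tree, partial, cls):
    """True iff every completion of `partial` (a dict feature -> 0/1) reaches class `cls`."""
    if not isinstance(tree, tuple):            # leaf
        return tree == cls
    feature, left, right = tree
    if feature in partial:                     # feature fixed: follow the chosen branch
        branch = left if partial[feature] == 0 else right
        return implies_class(branch, partial, cls)
    # feature free: both branches must yield the class
    return implies_class(left, partial, cls) and implies_class(right, partial, cls)

# Toy example: the tree classifies as 1 iff x1 = 1 and x2 = 1.
tree = ("x1", 0, ("x2", 0, 1))
print(implies_class(tree, {"x1": 1, "x2": 1}, 1))  # True: {x1=1, x2=1} is a sufficient reason
print(implies_class(tree, {"x1": 1}, 1))           # False: leaving x2 free can flip the class
```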
The adoption of human oversight measures makes it possible to regulate, to varying degrees and in different ways, the decision-making process of Artificial Intelligence (AI) systems, for example by placing a human being in charge of supervising the system and, upstream, by developing the AI system to enable such supervision. Within the global governance of AI, the requirement for human oversight is embodied in several regulatory formats, within a diversity of normative sources. On the one hand, it reinforces the accountability of AI systems' users (for example, by requiring them to carry out certain checks) and, on the other hand, it better protects the individuals affected by the AI-based decision (for example, by allowing them to request a review of the decision). In the European context, the AI Act imposes obligations on providers of high-risk AI systems (and to some extent also on professional users of these systems, known as deployers), including the introduction of human oversight tools throughout the life cycle of AI systems, including by design (and their implementation by deployers). The EU legislator is therefore going much further than in the past in "spelling out" the legal requirement for human oversight. But it does not intend to provide for all implementation details; it calls on standardisation to technically flesh out this requirement (and more broadly all the requirements of section 2 of chapter III) on the basis of article 40 of the AI Act. In this multi-level regulatory context, the question of the place of humans in the AI decision-making process should be given particular attention. Indeed, depending on whether it is the law or the technical standard that sets the contours of human oversight, the "regulatory governance" of AI is not the same: its nature, content and scope are different. This analysis is at the heart of the contribution made (or to be made) by legal experts to the central reflection on the most appropriate regulatory governance -- in terms of both its institutional format and its substance -- to ensure the effectiveness of human oversight and AI trustworthiness.
Recent advances in time series analysis, which dispense with deterministic and stochastic modelling as well as with the storage and analysis of big data, permit a new approach to short-term traffic flow forecasting and to its reliability, i.e., to traffic volatility. Several convincing computer simulations, which use concrete data, are presented and discussed.
We present an elementary branch-and-bound algorithm with a simple analysis of why it achieves worst-case optimality for join queries on classes of databases defined respectively by cardinality or acyclic degree constraints. We then show that, given a reasonable way to recursively estimate upper bounds on the number of answers of join queries, our algorithm can be turned into an algorithm for uniformly sampling answers with expected running time $O(UP/OUT)$, where $UP$ is the upper bound, $OUT$ is the actual number of answers, and $O(\cdot)$ ignores polylogarithmic factors. Our approach recovers recent results on worst-case optimal join algorithms and sampling in a modular, clean and elementary way.
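A minimal sketch of the sampling idea (our illustration under the assumption of monotone upper bounds, not the paper's pseudocode): descend the branch-and-bound tree choosing each child with probability proportional to its upper bound over the parent's bound, reject on the slack mass or at dead ends, and accept at answer leaves. The probabilities telescope, so each answer is hit with probability $1/UP$ per trial, giving the $O(UP/OUT)$ expected cost.

```python
import random

def sample_answer(node, bound, children):
    """One trial of bound-guided sampling.

    `bound(node)` is an upper bound on the number of answers below `node`,
    assumed >= the sum of the children's bounds, equal to 1 at answer leaves
    and 0 at dead-end leaves; `children(node)` lists the child nodes.
    Returns an answer leaf, or None if this trial is rejected.
    """
    while True:
        kids = children(node)
        if not kids:                       # leaf: an answer (bound 1) or a dead end (bound 0)
            return node if bound(node) == 1 else None
        r = random.uniform(0, bound(node))
        acc = 0.0
        for child in kids:
            acc += bound(child)
            if r < acc:
                node = child
                break
        else:                              # landed in the slack between bounds: reject
            return None

def uniform_sample(root, bound, children):
    while True:                            # expected number of trials is UP/OUT
        leaf = sample_answer(root, bound, children)
        if leaf is not None:
            return leaf
```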
We introduce a novel language for reasoning about agents' cognitive attitudes of both epistemic and motivational type. We interpret it by means of a computationally grounded semantics using belief bases. Our language includes five types of modal operators for implicit belief, complete attraction, complete repulsion, realistic attraction and realistic repulsion. We give an axiomatization and show that our operators are not mutually expressible and that they can be combined to represent a large variety of psychological concepts including ambivalence, indifference, being motivated, being demotivated and preference. We present a dynamic extension of the language that supports reasoning about the effects of belief change operations. Finally, we provide a succinct formulation of model checking for our languages and a PSPACE model checking algorithm relying on a reduction to TQBF. We present experimental results on the computation time of the implemented algorithm in a concrete example.
"Fluffy" hydrogenated amorphouscarbon(a-C:H)wassynthesizedusingadielectric barrier discharge plasma, driven by nanosec ond high voltage pulses at 1 kHz frequency in a helium-butane mixture. The a-C:H samples were characterized by scanning and transmission electron microscopy, laser-assisted and secondary ion mass spectrometry, and Raman and Fourier-transform infrared spectroscopy. We find that a-C:H samples exhibit infrared absorption features in good agreement with those observed for carbonaceous dust in IRAS 08572+3915 galaxy. We discuss their nano to microscale structure and derive their hydrogen to carbon (H/C) ratios from the results obtained by three distinct experimental characterization techniques. Relying on the average H/C value determined by mass spectrometry and Raman spectroscopy, we can then constrain the absorption strengths values to those best corresponding to our dust analogue, and calculate the H/C ratio from the infrared spectra. Altogether, we find that our dust analogue consists of a dominant hydrogen-rich aliphatic network, with small, isolated, aromatic regions. The a-C:H dust analogue was then irradiated with 3 MeV H+ and subsequently analyzed ex situ. Morphological and chemical changes, including the evolution of H/C, CH2/CH3, and sp2/sp3 ratios, were observed with increasing proton fluence, indicating dehydrogenation and graphitization. Proton bombardment shifted the initial location of a-C:H in the hydrocarbon ternary phase diagram toward the central region defined by IRAS 08572+3915 observations. The decay of the 3.4 {\mu}m band with proton fluence was used to calculate CH destruction cross-sections, results consistent with a direct effect of cosmic rays on the disappearance of the 3.4 {\mu}m band.
Within the formal setting of the Lockean thesis, an agent's belief set is defined in terms of degrees of confidence, and these are described in probabilistic terms. This approach is of established interest, notwithstanding some limitations that make its use troublesome in some contexts, like, for instance, in belief change theory. Precisely, Lockean belief sets are not generally closed under (classical) logical deduction. The aim of the present paper is twofold: on the one hand, we provide two characterizations of those belief sets that are closed under classical logical deduction, and on the other, we propose an approach to probabilistic update that allows for a minimal revision of those beliefs, i.e., a revision obtained by making the fewest possible changes to the existing belief set while still accommodating the new information. In particular, we show how we can deductively close a belief set via a minimal revision.
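A tiny worked example (ours, not the paper's) shows why a Lockean belief set with threshold 0.8 need not be deductively closed: two propositions can each be believed while their conjunction falls below the threshold.

```python
from itertools import product

# Probability distribution over the four truth assignments to atoms p and q.
worlds = list(product([False, True], repeat=2))   # (p, q)
prob = {(True, True): 0.70, (True, False): 0.15, (False, True): 0.15, (False, False): 0.00}

def probability(formula):
    """Probability of an event given as a Boolean function of (p, q)."""
    return sum(prob[w] for w in worlds if formula(*w))

threshold = 0.8  # Lockean threshold: believe exactly what has probability >= 0.8
print(probability(lambda p, q: p) >= threshold)         # True:  p is believed (0.85)
print(probability(lambda p, q: q) >= threshold)         # True:  q is believed (0.85)
print(probability(lambda p, q: p and q) >= threshold)   # False: p and q is not believed (0.70)
# Yet p and q is a classical consequence of {p, q}: the belief set is not deductively closed.
```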
We extend description logics (DLs) with non-monotonic reasoning features. We start by investigating a notion of defeasible subsumption in the spirit of defeasible conditionals as studied by Kraus, Lehmann and Magidor in the propositional case. In particular, we consider a natural and intuitive semantics for defeasible subsumption, and investigate KLM-style syntactic properties for both preferential and rational subsumption. Our contribution includes two representation results linking our semantic constructions to the set of preferential and rational properties considered. Besides showing that our semantics is appropriate, these results pave the way for more effective decision procedures for defeasible reasoning in DLs. Indeed, we also analyse the problem of non-monotonic reasoning in DLs at the level of entailment and present an algorithm for the computation of rational closure of a defeasible ontology. Importantly, our algorithm relies completely on classical entailment and shows that the computational complexity of reasoning over defeasible ontologies is no worse than that of reasoning in the underlying classical DL ALC.
To build new generalisations of Multiple Zeta Values, we define new spaces of formal series and formal integrals. We show that they are tridendriform and dendriform algebras. This allows us to reinterpret the fact that Multiple Zeta Values are algebra morphisms for shuffles of words in terms of finer tridendriform and dendriform structures. Applying universal properties of Schroeder trees we obtain generalisations of Multiple Zeta Values that are algebra morphisms for associative products. Hence we find new properties of Arborified Zeta Values and state how this enables the computation of some Shintani Zeta Values.
In this paper we introduce the notion of cyclic $(f(t),\sigma,\delta)$-codes for $f(t)$ in an Ore extension. These codes generalize the $\theta$-codes introduced by D. Boucher, F. Ulmer and W. Geiselmann [BGU]. We construct generic and control matrices for these codes. As a particular case, the $(\sigma,\delta)$-$W$-codes associated to Wedderburn polynomials are defined, and we show that their control matrices are given by generalized Vandermonde matrices. All the Wedderburn polynomials of $\mathbb{F}_q[t;\theta]$ are described and their control matrices are presented. A key role is played by pseudo-linear transformations.
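For readers unfamiliar with the term, a $(\sigma,\delta)$ pseudo-linear transformation (standard in the Ore extension literature, recalled here rather than quoted from the paper) is an additive map $T\colon V \to V$ on a left vector space $V$ over a field $K$ equipped with an endomorphism $\sigma$ and a $\sigma$-derivation $\delta$ such that
\[
T(\alpha v) \;=\; \sigma(\alpha)\,T(v) + \delta(\alpha)\,v \qquad \text{for all } \alpha \in K,\ v \in V;
\]
evaluating skew polynomials $f(t) \in K[t;\sigma,\delta]$ at such a $T$ is what links cyclic $(f(t),\sigma,\delta)$-codes to their control matrices.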
In this paper, we prove that the orbit space B and the Euler class of an action of the circle S^1 on X determine both the equivariant intersection cohomology of the pseudomanifold X and its localization. We also construct a spectral sequence converging to the equivariant intersection cohomology of X whose third term is described in terms of the intersection cohomology of B.
In this note we study morphisms from P2 to Gr(2,C4) from the point of view of the cohomology class they represent in the Grassmannian. This leads to some new results about projections of the d-uple embedding of P2 into P5.
This work reviews how database theory uses tractable circuit classes from knowledge compilation. We present relevant query evaluation tasks and notions of tractable circuits. We then show how these tractable circuits can be used to address database tasks. We first focus on Boolean provenance and its applications for aggregation tasks, in particular probabilistic query evaluation. We study these for Monadic Second Order (MSO) queries on trees, and for safe Conjunctive Queries (CQs) and Unions of Conjunctive Queries (UCQs). We also study circuit representations of query answers, and their applications to enumeration tasks: both in the Boolean setting (for MSO) and the multivalued setting (for CQs and UCQs).
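As a small illustration of why such circuit classes make probabilistic query evaluation tractable (our toy sketch, not code from the survey): on a deterministic and decomposable circuit representing the Boolean provenance, the probability can be computed bottom-up, multiplying at decomposable AND gates and summing at deterministic OR gates.

```python
# Circuit nodes: ("var", name), ("not", child), ("and", [children]), ("or", [children]).
# Assumes decomposability (AND children on disjoint variables) and determinism
# (OR children mutually exclusive), as in d-DNNF circuits.
def circuit_probability(node, p):
    """Probability that the circuit is true when each variable v is independently true with prob p[v]."""
    kind = node[0]
    if kind == "var":
        return p[node[1]]
    if kind == "not":
        return 1.0 - circuit_probability(node[1], p)
    if kind == "and":
        result = 1.0
        for child in node[1]:
            result *= circuit_probability(child, p)
        return result
    if kind == "or":
        return sum(circuit_probability(child, p) for child in node[1])
    raise ValueError(f"unknown node kind {kind!r}")

# Toy provenance (a AND b) OR c, rewritten deterministically as (a AND b) OR (NOT(a AND b) AND c).
a, b, c = ("var", "a"), ("var", "b"), ("var", "c")
ab = ("and", [a, b])
circuit = ("or", [ab, ("and", [("not", ab), c])])
print(circuit_probability(circuit, {"a": 0.5, "b": 0.5, "c": 0.2}))  # 0.25 + 0.75 * 0.2 = 0.40
```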