On Monday, at 2 pm - UPEC CMC - Room P2-131
It seems pretty obvious to add random choice to your
preferred higher-order functional language.
One can then give it an operational semantics in the form
of an infinite Markov chain whose states are
machine configurations, and whose transition probabilities are easily understandable.
Denotational semantics provide helpful invariants to
reason about programs, and Jones and Plotkin’s
probabilistic powerdomain (1990) models random choice.
One can then interpret higher-order probabilistic
programs in the category of dcpos (directed-complete
partial orders), and that works perfectly well…
except that some dcpos are rather pathological,
and that prevents us from proving all the theorems
we would like to have. As a case in point, it
is unknown whether Fubini’s theorem holds on all
dcpos, which means that drawing x then y at random
is not known to be equivalent to drawing y then x.
Such problems do not occur with so-called continuous
dcpos, but then we must face the Jung-Tix problem (1998):
we do not know of any category of continuous dcpos
that can handle both higher-order features and
the probabilistic powerdomain.
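The Fubini-style property in question can be made concrete in the finite, discrete setting, where it does hold: drawing x then y yields the same joint distribution as drawing y then x. The sketch below (my own illustration, not from the talk) checks this with exact rational arithmetic; the open question is whether the analogous exchange of integrals is valid over arbitrary dcpos.

```python
from fractions import Fraction
from itertools import product

def seq_distribution(dist_first, dist_second, f):
    """Distribution of f(a, b) when a is drawn first, then b,
    both given as finite probability tables {value: probability}."""
    out = {}
    for (a, pa), (b, pb) in product(dist_first.items(), dist_second.items()):
        out[f(a, b)] = out.get(f(a, b), Fraction(0)) + pa * pb
    return out

# Two independent biased coins, encoded as finite probability tables.
coin_a = {0: Fraction(1, 3), 1: Fraction(2, 3)}
coin_b = {0: Fraction(1, 2), 1: Fraction(1, 2)}

# Draw from coin_a first, then coin_b -- and in the opposite order.
xy = seq_distribution(coin_a, coin_b, lambda x, y: (x, y))
yx = seq_distribution(coin_b, coin_a, lambda y, x: (x, y))

# In the finite discrete case the two orders agree (Fubini holds).
assert xy == yx
print(xy)
```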
We will show that there is a simple way of getting
around the Jung-Tix problem, relying on a variant
of Levy’s call-by-push-value paradigm (2003), and provided
we also include a form of demonic non-deterministic
choice (related to must-termination, operationally).
We will argue that the language satisfies adequacy:
on programs of base type (int), the denotational semantics
computes a subprobability distribution whose
mass at any given number n is exactly the minimal probability
that the output of the program will be n.
If we have time, we will then study full abstraction,
namely the relation between denotational (in)equality
and the observational preorder. Our language is
not fully abstract. One reason is expected: the absence
of parallel conditionals. Another has surfaced
more recently, and that is the absence of so-called
statistical termination testers. With both added,
however, our language is fully abstract.
In 1941, Claude Shannon introduced a continuous-time analog model of
computation, namely the General Purpose Analog Computer (GPAC). The
GPAC is a physically feasible model in the sense that it can be
implemented in practice through the use of analog electronics or
mechanical devices. It can be proved that the functions computed by a
GPAC are precisely the solutions of a special class of differential
equations where the right-hand side is a polynomial. Analog computers
have since been replaced by their digital counterparts. Nevertheless, one can
wonder how the GPAC compares to Turing machines.
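The characterization above — GPAC-computable functions are exactly the solutions of polynomial ordinary differential equations — can be illustrated with a standard example: sine and cosine solve the polynomial system y1' = y2, y2' = -y1. The sketch below (my own illustration, with a crude Euler integrator standing in for the continuous-time machine) is not from the talk.

```python
import math

def integrate(rhs, y0, t_end, dt=1e-4):
    """Forward-Euler integration of y' = rhs(y): a crude digital
    stand-in for the continuous-time evolution of a GPAC circuit."""
    y = list(y0)
    t = 0.0
    while t < t_end:
        dy = rhs(y)
        y = [yi + dt * dyi for yi, dyi in zip(y, dy)]
        t += dt
    return y

# sin and cos are GPAC-generable: they solve the polynomial system
#   y1' = y2,   y2' = -y1,   with y1(0) = 0, y2(0) = 1.
y1, y2 = integrate(lambda y: [y[1], -y[0]], [0.0, 1.0], t_end=1.0)
print(y1, math.sin(1.0))  # the two values agree up to the step size
```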
A few years ago, it was shown that Turing-based paradigms and
the GPAC have the same computational power. However, this result did
not shed any light on what happens at a computational complexity
level. In other words, analog computers make no difference as to
what can be computed; but perhaps they could compute faster than a
digital computer. A fundamental difficulty with continuous-time models is
to define a proper notion of complexity. Indeed, a troubling issue is
that many models exhibit the so-called Zeno phenomenon: infinitely many
computation steps can be packed into a finite amount of time.
In this talk, I will present results from my thesis that make several
fundamental contributions to these questions. We show that the GPAC has
the same computational power as the Turing machine, even at the complexity
level. As a side effect, we also obtain a purely analog,
machine-independent characterization of P and of Computable Analysis.
I will present some recent work on the universality of polynomial
differential equations. We show that when we impose no restrictions at
all on the system, it is possible to build a fixed equation that
is universal in the sense that it can approximate arbitrarily well any
continuous curve over R, simply by changing the initial condition of
the equation.
If time allows, I will also mention a recent application of this
work to show that chemical reaction networks are strongly Turing
complete with the differential semantics.
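The differential semantics of a chemical reaction network is the standard mass-action one: each reaction contributes a polynomial term to the concentration ODEs — again a polynomial differential equation. A minimal sketch, assuming a single reaction A + B -> C (the example and names are mine, not the talk's construction):

```python
def mass_action_step(conc, dt=1e-3, k=1.0):
    """One Euler step of the differential (mass-action) semantics of
    the single chemical reaction A + B -> C with rate constant k."""
    a, b, c = conc
    flux = k * a * b          # reaction rate under mass-action kinetics
    return (a - dt * flux, b - dt * flux, c + dt * flux)

conc = (1.0, 1.0, 0.0)        # initial concentrations of A, B, C
for _ in range(10_000):       # integrate up to time t = 10
    conc = mass_action_step(conc)
print(conc)  # A and B are consumed, C accumulates
```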
Nivat's conjecture states that every configuration (coloring of Z^2) of low complexity (meaning that the number of patterns appearing in it is "small") is necessarily periodic.
In 2015, Michal Szabados and Jarkko Kari published their first paper using a new approach to attack this conjecture: an algebraic one.
Their idea is to represent a configuration as a formal power series; by studying the structure of certain objects attached to it (such as polynomial ideals), they manage to bring theorems from algebra to bear on Nivat's conjecture.
In this talk, I will present the work I carried out with Jarkko Kari in continuation of Michal Szabados's thesis. I will present two theorems that use these algebraic tools to get closer to Nivat's conjecture, in two different directions: the first shows that Nivat's conjecture holds for a certain class of subshifts, while the second proves the decidability of the domino problem for low-complexity subshifts (a result that Nivat's conjecture would imply almost immediately).
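The complexity notion in Nivat's conjecture counts distinct m x n rectangular patterns: if some m, n admit at most m*n patterns, the configuration should be periodic. A toy illustration (the helper and sample configuration are mine, not from the paper):

```python
def patterns(config, m, n, period_x, period_y):
    """Distinct m x n patterns of a Z^2 configuration given by a
    function config(x, y), assumed periodic with the given periods
    so that a finite scan sees every pattern."""
    seen = set()
    for x in range(period_x):
        for y in range(period_y):
            seen.add(tuple(config(x + i, y + j)
                           for i in range(m) for j in range(n)))
    return seen

# A 2-coloring of Z^2 with periods (2, 3).
config = lambda x, y: (x % 2 + y % 3) % 2

# This periodic configuration has exactly 4 <= 2*2 patterns of shape
# 2 x 2, consistent with the low-complexity side of the conjecture.
print(len(patterns(config, 2, 2, 2, 3)))
```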
We study the computational complexity of solving mean payoff games. This class of games can be seen as an extension of parity games, and they have a similar complexity status: in both cases solving them is in NP and in coNP, and not known to be in P. In a breakthrough result, Calude, Jain, Khoussainov, Li, and Stephan constructed in 2017 a quasipolynomial-time algorithm for solving parity games, which was quickly followed by two other algorithms with the same complexity. Our objective is to investigate how these techniques can be extended to the study of mean payoff games. We construct two new algorithms for solving mean payoff games. Our first algorithm depends on the largest weight N (in absolute value) appearing in the graph and runs in time sublinear in N, improving over the previously known linear dependence on N. Our second algorithm runs in polynomial time for a fixed number k of weights.
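For background, mean payoff games can be solved approximately by classical finite-horizon value iteration in the style of Zwick and Paterson: v_t(u), the optimal total payoff over t steps from u, satisfies v_t(u)/t -> value(u). The sketch below is this classical scheme on a two-node example, not the new algorithms of the abstract.

```python
def value_iteration(edges, max_nodes, k):
    """Zwick-Paterson style value iteration for a mean payoff game.
    `edges` maps each node to a list of (successor, weight) pairs;
    nodes in `max_nodes` belong to the maximizer, the rest to the
    minimizer.  Returns v_k(u) / k, an approximation of the value."""
    pick = {u: (max if u in max_nodes else min) for u in edges}
    v = {u: 0 for u in edges}
    for _ in range(k):
        v = {u: pick[u](w + v[s] for s, w in edges[u]) for u in edges}
    return {u: v[u] / k for u in edges}

# Max owns node 0, Min owns node 1; self-loops let either player stay.
edges = {0: [(1, 3), (0, 1)], 1: [(0, -1), (1, 0)]}
vals = value_iteration(edges, max_nodes={0}, k=1000)
print(vals)  # node 0 tends to value 1 (Max self-loops), node 1 to 0
```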
In this talk I will present a probabilistic study of Sturmian words. Sturmian words come up
naturally as discrete codings of irrational lines, and the study of their (finite) factors is of key interest.
In particular, the recurrence function measures the gaps between occurrences of these
factors. During the talk we will give a brief overview of the fundamental facts about Sturmian
words, the classical extremal results for their recurrence function, and finally
our study under a natural probabilistic model.
Based on joint work with Brigitte Vallée (CNRS, Univ. Caen).
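The discrete coding mentioned above can be written down directly: the Sturmian word of slope alpha has letters s_n = floor((n+1)*alpha) - floor(n*alpha), and its hallmark is having exactly n + 1 factors of each length n. A small sketch checking this on a finite prefix (my own illustration, not the probabilistic model of the talk):

```python
import math

def sturmian_prefix(alpha, length):
    """Prefix of the Sturmian word coding the line of slope alpha:
    s_n = floor((n+1)*alpha) - floor(n*alpha), alpha irrational."""
    return [math.floor((n + 1) * alpha) - math.floor(n * alpha)
            for n in range(length)]

def factors(word, n):
    """Set of length-n factors (blocks) occurring in a finite word."""
    return {tuple(word[i:i + n]) for i in range(len(word) - n + 1)}

w = sturmian_prefix(math.sqrt(2) - 1, 2000)
# Sturmian words have exactly n + 1 factors of each length n;
# a sufficiently long prefix already exhibits all of them.
for n in range(1, 10):
    assert len(factors(w, n)) == n + 1
print(w[:20])
```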
The problem of ontology-mediated query answering (OMQA) has gained significant interest in recent years. One popular ontology language for OMQA is OWL 2 QL, a W3C standardized language based upon the DL-Lite description logic. This language has the desirable property that OMQA can be reduced to database query evaluation by means of query rewriting. In this talk, I will consider two fundamental questions about OMQA with OWL 2 QL ontologies: 1) How does the worst-case complexity of OMQA vary depending on the structure of the ontology-mediated query (OMQ)? In particular, under what conditions can we guarantee tractable query answering? 2) Is it possible to devise query rewriting algorithms that produce polynomial-size rewritings? More generally, how does the succinctness of rewritings depend on OMQ structure and the chosen format of the rewritings? After classifying OMQs according to the shape of their conjunctive queries (treewidth, the number of leaves) and the existential depth of their ontologies, we will determine, for each class, the combined complexity of OMQ answering, and whether all OMQs in the class have polynomial-size first-order, positive existential and nonrecursive datalog rewritings. We obtain the succinctness results using hypergraph programs, a new computational model for Boolean functions, which makes it possible to connect the size of OMQ rewritings and circuit complexity.
This talk is based upon a recent JACM paper jointly authored with Stanislav Kikot, Roman Kontchakov, Vladimir Podolskii, and Michael Zakharyaschev.
We provide a finite set of axioms for identity-free Kleene lattices, which we prove sound and
complete for the equational theory of their relational models. This equational theory was previously
proved to coincide with that of language models and to be ExpSpace-complete; expressions of
the corresponding syntax moreover make it possible to denote precisely those languages of graphs
that can be accepted by Petri automata. Finite axiomatisability was missing to obtain the same
picture as for Kleene algebra, regular expressions, and (word) automata.
Our proof builds on the completeness theorem for Kleene algebra, and on a novel automata
construction that makes it possible to extract axiomatic proofs using a Kleene-like algorithm.