On Monday, at 2pm - UPEC CMC - Room P2-131
Enumeration algorithms are algorithms whose goal is to
output the set of all solutions to a given problem. There exist
different measures of the quality of such algorithms, whose relevance
depends on what the user wants to do with the set of solutions.
If the goal of the user is to explore a subset of the solutions, or to
transform the solutions as they are output in a stream-like
fashion, a relevant measure of complexity is the
delay, that is, the maximal time between the output of two distinct
solutions. Following this line of thought, significant efforts have
been made by the community to design polynomial delay algorithms, that
is, algorithms whose delay between the output of two new solutions is
polynomial in the size of the input.
While this measure is interesting, a bound on the delay is not always
strictly necessary: it is often enough to ask for the guarantee that
running the algorithm for time O(t poly(n)) produces at least t
solutions. While algorithms having this property can be transformed
into polynomial delay algorithms, the naive transformation may result
in a blow-up in the space used by the enumerator.
In this talk, we will present a new technique that allows such
algorithms to be transformed into polynomial delay algorithms with
only a polynomial overhead in space.
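As a minimal illustration of the delay measure (a textbook example, not the transformation technique of the talk), consider enumerating all subsets of {0, ..., n-1} by backtracking: every solution is produced within O(n) steps of the previous one, using only O(n) working space.

```python
# Minimal illustration of a polynomial-delay enumerator (a textbook
# example, not the transformation technique of the talk): depth-first
# backtracking over inclusion/exclusion of each element. Between two
# consecutive outputs the algorithm performs O(n) work, so the delay
# is polynomial in the input size n, and the space used is O(n).
def enumerate_subsets(n):
    current = []

    def rec(i):
        if i == n:
            yield list(current)   # output one solution
            return
        yield from rec(i + 1)     # branch: exclude element i
        current.append(i)         # branch: include element i
        yield from rec(i + 1)
        current.pop()

    yield from rec(0)

subsets = list(enumerate_subsets(3))  # 2^3 = 8 solutions
```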
One important research endeavour at the intersection of circuit complexity theory, algebraic automata theory and logic is the classification of regular languages according to their localisation within the internal structure of NC1, the class of languages decided by Boolean circuits of polynomial size, logarithmic depth and with gates of constant fan-in. In some sense, the search for such a classification concentrates most of the open questions we have about the relationship between NC1 and its well-studied subclasses.
While many questions are still open, one of the greatest successes of this research endeavour has been the characterisation of the regular languages in AC0, the subclass of NC1 corresponding to Boolean circuits of polynomial size, constant depth and with gates of unbounded fan-in. This characterisation takes the form of a triple languages-algebra-logic correspondence: a regular language is in AC0 if, and only if, its syntactic morphism is quasi-aperiodic if, and only if, it is definable in first-order logic over words with linear order and modular predicates.
It is natural to try to extend such results to classes of formal languages larger than the class of regular languages. A well-studied and robust such class is that of visibly pushdown languages (VPLs): languages recognised by pushdown automata where the stack-height behaviour only depends on the letters read from the input. Over the past decade, a series of works concentrated on the fine complexity of VPLs, with several achievements: one of those was a characterisation of the class of visibly counter languages (basically VPLs recognised by visibly pushdown automata with only one stack symbol) in AC0 by Krebs, Lange and Ludwig. However, the characterisation of the VPLs in AC0 still remains open.
In this talk, I shall present a conjectural characterisation of the VPLs in AC0 obtained with Stefan Göller at the Universität Kassel. It is inspired by the conjectural characterisation given by Ludwig in his Ph.D. thesis as a generalisation of the characterisation for visibly counter languages, a conjecture that actually turns out to be false. In fact, we give a more precise and more general conjectural characterisation that builds upon recognisability by morphisms into Ext-algebras, an extension of recognisability by monoid-morphisms proposed by Czarnetzki, Krebs and Lange to suit the case of VPLs. This characterisation classifies the VPLs into three categories according to precise conditions on the Ext-algebra-morphisms that recognise them:
- those that are TC0-hard;
- those that are in AC0;
- those that correspond to a well-identified class of “intermediate languages” that we believe to be neither in AC0 nor TC0-hard.
Parity games are well-known infinite two-player games played on finitely colored graphs. We define an extension of parity games, called counter games, where each player can, at every move, increment or reset a finite number of counters.
We examine several types of winning conditions that are linked to the boundedness of the counters of the game, and to parity conditions. For every type of winning condition examined, we give algorithmic complexity results, and results on the size of the memory needed to win. For example, we show that deciding whether Eve can force one of the counters to be bounded can be done in polynomial time, and that, if Eve wins, then Eve has a memoryless winning strategy.
The results in this talk are joint work with Emmanuel Filiot.
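Polynomial-time results of this kind typically rest on fixed-point computations over the game graph. As a generic illustration (a standard attractor construction, not the counter-game algorithm of the talk), here is how one computes the set of vertices from which Eve can force a visit to a target set; the strategy this induces is memoryless.

```python
# Standard attractor computation on a two-player game graph (a generic
# building block, not the specific counter-game algorithm of the talk).
# owner[v] is "Eve" or "Adam"; edges[v] lists the successors of v.
# Returns the set of vertices from which Eve can force reaching
# `target`; the induced winning strategy (move to any successor already
# in the attractor) is memoryless.
def eve_attractor(vertices, edges, owner, target):
    attr = set(target)
    changed = True
    while changed:
        changed = False
        for v in vertices:
            if v in attr:
                continue
            succ_in = sum(1 for u in edges[v] if u in attr)
            if owner[v] == "Eve" and succ_in > 0:
                attr.add(v)        # Eve picks a successor in attr
                changed = True
            elif owner[v] == "Adam" and succ_in == len(edges[v]) > 0:
                attr.add(v)        # all of Adam's moves lead into attr
                changed = True
    return attr

# Tiny example: Eve owns 0 and can move to the target 2; Adam owns 1,
# but all his moves lead into the attractor.
edges = {0: [1, 2], 1: [2], 2: [2]}
owner = {0: "Eve", 1: "Adam", 2: "Eve"}
won = eve_attractor([0, 1, 2], edges, owner, {2})
```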
In this talk I will present ongoing work on Markov decision processes.
Given an MDP and a formula phi, the strategy synthesis problem asks if there exists a strategy for the MDP such that the resulting Markov chain satisfies phi.
This challenging problem is known to be undecidable for the probabilistic temporal logic PCTL.
We study a class of formulae that can be seen as a fragment of PCTL where a local, bounded-horizon property is enforced globally. Moreover, we allow for linear expressions in the probabilistic inequalities.
We show that this class of formulae is at the frontier of decidability, depending on the type of strategies considered. In particular, strategy synthesis becomes decidable when strategies are either deterministic or memoryless, while the general problem remains undecidable.
We compare our results with those known for full PCTL and consider applications to the PCTL satisfiability problem.
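To make the memoryless case concrete (a toy sketch with an invented two-state MDP, not a construction from the talk): fixing a memoryless strategy collapses the MDP into a finite Markov chain, on which a probabilistic property can then be checked, for instance by value iteration.

```python
# Toy sketch (invented example, not a construction from the talk):
# fixing a memoryless strategy in an MDP yields a Markov chain, whose
# reachability probabilities can be computed by value iteration.
# States: 0 (initial), 1 (goal, absorbing), 2 (sink, absorbing).
# In state 0, action "a" reaches the goal with probability 0.5 and the
# sink with probability 0.5; action "b" stays in 0 with probability 0.9
# and reaches the goal with probability 0.1.
mdp = {
    (0, "a"): {1: 0.5, 2: 0.5},
    (0, "b"): {0: 0.9, 1: 0.1},
}

def reach_goal_prob(strategy, iters=2000):
    # strategy is memoryless: one fixed action per state
    p = {0: 0.0, 1: 1.0, 2: 0.0}  # prob. of eventually reaching state 1
    for _ in range(iters):
        dist = mdp[(0, strategy[0])]
        p[0] = sum(q * p[s] for s, q in dist.items())
    return p[0]

# Under "b": p satisfies p = 0.9 p + 0.1, so p = 1; under "a": p = 0.5.
prob_b = reach_goal_prob({0: "b"})
prob_a = reach_goal_prob({0: "a"})
```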
Consider a concurrent discrete-event system. Its executions are naturally given as partial orders, rather than as chains of events. From a probabilistic point of view, the connection with classical probabilistic systems (Markov chains or probabilistic automata, for instance) is not immediate. Yet a well-founded probabilistic theory is necessary if one is interested in the random generation of trajectories in concurrent systems, in particular for the purpose of probabilistic verification.
These motivations lead me to study, in a first step, the notion of Bernoulli measure for trace monoids, of which the uniform measure is a special case. The aim is to generalise the "infinitely repeated dice roll" to the setting of a monoid in which some letters commute, instead of a free monoid. In a second step, we extend this notion to more realistic concurrent systems, such as Petri nets. The talk will show the connection between the combinatorics of trace monoids, which goes back to the work of Cartier and Foata (1969) and of Viennot (1986), and the recent construction of probability measures in joint work with J. Mairesse (2015).
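The combinatorics in question revolves around the Möbius polynomial of the trace monoid, whose coefficient of t^k is (-1)^k times the number of k-cliques of the independence graph, and whose inverse 1/mu(t) is the generating series counting traces by length. A small illustrative sketch (helper code written for this note, not from the talk):

```python
from itertools import combinations

# Möbius polynomial of a trace monoid M(A, I): the coefficient of t^k
# is (-1)^k times the number of k-cliques of the independence graph.
# Its inverse 1/mu(t) is the generating series counting traces by
# length (Cartier-Foata). Illustrative sketch, not code from the talk.
def mobius_poly(letters, independent):
    # independent: set of frozensets {a, b} of commuting letter pairs
    coeffs = [0] * (len(letters) + 1)
    for k in range(len(letters) + 1):
        for subset in combinations(letters, k):
            if all(frozenset((a, b)) in independent
                   for a, b in combinations(subset, 2)):
                coeffs[k] += (-1) ** k
    return coeffs

# Two commuting letters a, b: mu(t) = 1 - 2t + t^2 = (1 - t)^2, whose
# inverse 1/(1-t)^2 counts the n+1 traces of length n (multisets).
mu = mobius_poly(["a", "b"], {frozenset(("a", "b"))})
```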
Herbrand's theorem is a cornerstone result for giving a computational meaning to first-order classical formulas. The validity of a formula is equivalent to the existence of a finite procedure able to choose witnesses that satisfy the formula in any model. These witnesses are generally called Herbrand witnesses.
Herbrand witnesses can be extracted from sequent calculus proofs but, unlike proofs, they cannot be composed. This opens the way for investigations: what kind of missing ingredient should be added to the witnesses in order to make them composable? Can we avoid the bureaucracy of the sequent calculus while doing so?
In this talk I will propose a semantic answer to these questions: I will show how Herbrand witnesses can be viewed as term-strategies in an extended model of causal games, hence inheriting the composition procedure already embedded in this model.
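As a classical illustration of Herbrand witnesses (the "drinker" formula, a standard textbook example rather than one from the talk):

```latex
% Standard textbook example (not from the talk): Herbrand witnesses
% for the "drinker" formula. Herbrandizing the valid formula
%   \exists x\, (D(x) \to \forall y\, D(y))
% replaces the universal quantifier by a fresh function symbol f:
%   \exists x\, (D(x) \to D(f(x))).
% The two witnesses t_1 = c and t_2 = f(c) make the expansion
\[
  \bigl(D(c) \to D(f(c))\bigr) \;\lor\; \bigl(D(f(c)) \to D(f(f(c)))\bigr)
\]
% a propositional tautology: if D(f(c)) holds, the first disjunct is
% true; otherwise the second one is, its antecedent being false.
```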
Turing machines and register machines have been used for
decades in theoretical computer science as abstract models of
computation. However, they are not well-suited for modelling
higher-order computations, because accessing the address of a machine
is not a built-in operation, hence machines computing functions cannot
easily be passed around by reference. We study a class of abstract
machines that we call “addressing machines” since they are only able
to manipulate memory addresses of other machines. The operations
performed by these machines are very elementary: load an address in a
register, apply a machine to another one via their addresses, and call
the address of another machine. We endow addressing machines with an
operational semantics based on leftmost reduction and study their
behaviour. We show that they can be used, for instance, to construct
models of the pure, untyped lambda-calculus. Subsequently, we extend
the syntax of their programs by adding instructions for executing
arithmetic operations on natural numbers, and introduce a reflection
principle allowing certain machines to access their own address and
perform recursive calls. We show that the resulting extended
addressing machines naturally model a weak call-by-name PCF with
explicit substitutions. Finally, we show that they are also well
suited for representing regular PCF programs (closed terms) computing
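A toy rendering of the idea (hypothetical names and heavy simplifications, not the authors' formal definition): machines live in a table and manipulate one another only through their addresses.

```python
# Toy rendering of the addressing-machine idea (hypothetical names and
# simplifications, not the authors' formal definition): machines live
# in a global table and refer to one another only through addresses.
table = []  # address -> machine

def alloc(machine):
    table.append(machine)
    return len(table) - 1  # the machine's address

class Identity:
    # applied to an address, returns that address unchanged
    def apply(self, addr):
        return addr

class Compose:
    # stores the *addresses* of two machines and chains their
    # behaviour, dereferencing the table at application time
    def __init__(self, f_addr, g_addr):
        self.f_addr, self.g_addr = f_addr, g_addr

    def apply(self, addr):
        return table[self.f_addr].apply(table[self.g_addr].apply(addr))

i = alloc(Identity())
c = alloc(Compose(i, i))
result = table[c].apply(i)  # composing two identities gives back i
```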
Concurrent Kleene algebras (CKA) provide an algebraic framework for reasoning about concurrent programs. Over the past few years, we have studied various ways of enriching this model in order to capture a wider class of verification problems.
In this talk, I will first present the basics of CKA, with its axiomatic and combinatorial presentations, and the corresponding decidability results. I will then give a brief overview of some extensions of the model, considering aspects such as control flow, memory-access consistency, and mutual exclusion.
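A central ingredient of the axiomatic presentation (a standard CKA axiom, recalled here for context) is the exchange law relating sequential and parallel composition:

```latex
% The (weak) exchange law of concurrent Kleene algebra, linking
% sequential composition (;) and parallel composition (\parallel):
\[
  (a \parallel b) \mathbin{;} (c \parallel d)
    \;\le\; (a \mathbin{;} c) \parallel (b \mathbin{;} d)
\]
% Intuitively: running a,b in parallel and then c,d in parallel is one
% particular way of running a;c in parallel with b;d.
```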
This work is the result of collaborations with Tobias Kappé, Jana Wagemaker, Simon Docherty, Fabio Zanasi, Jurriaan Rot, Alexandra Silva, David Pym, Damien Pous, and Georg Struth.