Two verification problems are very close in spirit: _model_ checking and _module_ checking. Model checking typically assumes a "closed" system, in the sense that all the possible events are included in the model of the system. In contrast, module checking was designed for verification of "open" systems, i.e., ones that are coupled with an external environment whose behavior cannot be fully predicted.

When reasoning about multi-agent strategic interaction is allowed, the difference seems to be mostly a matter of presentation. In particular, model checking of the strategic logic ATL appears to be a natural extension of module checking for the temporal logic CTL. In fact, it was commonly believed that the former subsumes the latter in a straightforward way.

We show that, on the contrary, the two problems are fundamentally different. The way in which the behavior of the environment is understood in module checking cannot be equivalently characterized in ATL. Moreover, if one wants to embed module checking in ATL, then its semantics must be extended with two essential features, namely nondeterministic strategies and long-term commitment to strategies.

Finally, we study a variant of module checking where the specification of objectives is given in the richer language of ATL. We prove that the resulting "reactive" semantics of ATL increases the expressive power of the logic. On the other hand, it admits so-called non-behavioral strategies, which is usually considered counterintuitive.

RNA is a chain of ribonucleotides of four kinds (denoted respectively by the letters A, C, G, U). While being synthesized sequentially from its template DNA (transcription), it folds upon itself into intricate higher-dimensional structures in such a way that the free energy is minimized; that is, the more hydrogen bonds between ribonucleotides or the larger entropy a structure has, the more likely it is to be chosen, and furthermore the minimization is done locally. This phenomenon is called cotranscriptional folding (CF). Experiments have shown that it plays significant roles in in-vivo computation, and it has recently even been proven programmable artificially, so as to self-assemble a specific RNA rectangular tile structure in vitro. The next step is to program a computation onto DNA in such a way that the computation is carried out by cotranscriptional folding. In this novel paradigm of computation, what programmers can do is only twofold: design a template DNA and set environmental parameters. Oritatami is an introductory "toy" model of this paradigm of computation. In this model, programmers are also allowed to employ an arbitrarily large finite alphabet as well as an arbitrarily complex rule set for binding. We shall present known architectures for computing in the oritatami model, from a simple half-adder to a Turing machine, along with several programming techniques of use, in the hope that they will inspire in-vivo architectures of CF-driven self-assemblable computers, which could even be heritable.

In 1941, Claude Shannon introduced a continuous-time analog model of computation, namely the General Purpose Analog Computer (GPAC). The GPAC is a physically feasible model in the sense that it can be implemented in practice through the use of analog electronics or mechanical devices. It can be proved that the functions computed by a GPAC are precisely the solutions of a special class of differential equations where the right-hand side is a polynomial. Analog computers have since been replaced by their digital counterparts. Nevertheless, one can wonder how the GPAC compares to Turing machines.

A few years ago, it was shown that Turing-based paradigms and the GPAC have the same computational power. However, this result did not shed any light on what happens at the computational complexity level. In other words, analog computers make no difference with respect to what can be computed; but maybe they could compute faster than a digital computer. A fundamental difficulty with continuous-time models is to define a proper notion of complexity. Indeed, a troubling problem is that many models exhibit the so-called Zeno phenomenon, also known as space-time contraction.

In this talk, I will present results from my thesis that make several fundamental contributions to these questions. We show that the GPAC has the same computational power as the Turing machine, at the complexity level. We also provide, as a side effect, a purely analog, machine-independent characterization of P and of Computable Analysis.

I will present some recent work on the universality of polynomial differential equations. We show that when we impose no restrictions at all on the system, it is possible to build a fixed equation that is universal, in the sense that it can approximate arbitrarily well any continuous curve over R, simply by changing the initial condition of the system.

If time allows, I will also mention some recent applications of this work showing that chemical reaction networks are strongly Turing complete with the differential semantics.

Graph Edit Distance (GED) is a flexible and powerful technique to measure the amount of dissimilarity between two graphs. The Exact GED problem consists in finding the best set of edit operations to transform one graph into the other. The allowed standard operations are substitution, deletion, and insertion, which are applied to both vertices and edges.

The power of the GED approach is twofold. First, it can be used for both exact and inexact graph matching, where some errors are tolerated in the process of matching the two graphs. Second, GED can easily integrate domain-specific information about object similarity by means of a specific edit cost function.

Indeed, Exact GED is used in various applications, especially in areas related to pattern recognition, such as handwriting recognition, person identification and authentication (e.g., fingerprint recognition), document analysis, and graph database search. Computing the Exact GED can also be part of machine learning, nearest-neighbor classification, and data mining.

This problem is known to be NP-hard, and its complexity grows exponentially with the size of the compared graphs. To compute the Exact GED, A* and Branch-and-Bound (B&B) algorithms are often used.
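
The exponential blow-up is easy to see in a brute-force formulation. The sketch below (our own illustration, not the solvers from the talk) enumerates all vertex mappings for tiny graphs with unit costs; A* and B&B explore the same search space but prune it with heuristics.

```python
from itertools import permutations

# Brute-force Exact GED for tiny graphs, with unit costs (1 per vertex or
# edge substitution/deletion/insertion). Vertices carry labels; edges are
# undirected, unlabeled, without self-loops.

def ged(labels1, edges1, labels2, edges2):
    n1, n2 = len(labels1), len(labels2)
    e1 = {frozenset(e) for e in edges1}
    e2 = {frozenset(e) for e in edges2}
    # Pad the target vertex set with dummies: mapping a vertex of graph 1
    # to None deletes it; unused vertices of graph 2 are insertions.
    targets = list(range(n2)) + [None] * n1
    best = float("inf")
    for perm in permutations(targets, n1):
        cost = 0
        for u, v in enumerate(perm):
            if v is None:
                cost += 1                        # vertex deletion
            elif labels1[u] != labels2[v]:
                cost += 1                        # vertex substitution
        cost += n2 - sum(v is not None for v in perm)   # vertex insertions
        # Edge operations induced by the vertex mapping
        cost += sum(1 for a, b in e1 if perm[a] is None or perm[b] is None)
        mapped = {frozenset({perm[a], perm[b]}) for a, b in e1
                  if perm[a] is not None and perm[b] is not None}
        cost += len(mapped ^ e2)                 # edge deletions/insertions
        best = min(best, cost)
    return best
```

For example, turning a triangle into a path on the same three labeled vertices costs 1 (one edge deletion); the number of mappings examined grows factorially with graph size, which is exactly what A* and B&B try to tame.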

Two-player win/lose games of infinite duration arise in several disciplines, including computer science and logic. If such a game has winning strategies, one may ask how simple they can be. The answer may help with actual implementation, or with winning despite incomplete information, or with concealing sensitive information.

Given a game, this article considers equivalence relations over sequences of player actions. A classical restriction used here is that equivalent sequences have equal length. A sufficient condition on the equivalence relation is given such that if a player has a winning strategy, she has one that prescribes the same action at equivalent histories. The proof has three advantages: it is uniform, it preserves memory finiteness, and a counterexample shows how tight the result is.

It seems pretty obvious to add random choice to your preferred higher-order functional language. One can then give it an operational semantics in the form of an infinite Markov chain whose states are machine configurations, and with easily understandable rules.

Denotational semantics provide helpful invariants to reason about programs, and Jones and Plotkin's probabilistic powerdomain (1990) models random choice elegantly. One can then interpret higher-order probabilistic programs in the category of dcpos (directed-complete partial orders), and that works perfectly well… except that some dcpos are rather pathological, and that prevents us from proving all the theorems we would like to have. As a case in point, it is unknown whether Fubini's theorem holds on all dcpos, which means that drawing x then y at random is not known to be equivalent to drawing y then x. Such problems do not occur with so-called continuous dcpos, but then we must face the Jung-Tix problem (1998): we do not know of any category of continuous dcpos that can handle both higher-order features and the probabilistic powerdomain.
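
For discrete distributions the exchange in question holds by a direct computation; the difficulty lies entirely in the dcpo-based semantics. A toy Python sketch of the discrete case (our own, purely illustrative):

```python
from itertools import product

# Discrete distributions represented as dicts from value to probability.
# Drawing x then y, or y then x, yields the same joint distribution --
# the discrete analogue of the Fubini-style exchange whose dcpo version
# is open.

def joint(dist_x, dist_y, swap=False):
    """Joint distribution of (x, y), sampling in either order."""
    first, second = (dist_y, dist_x) if swap else (dist_x, dist_y)
    out = {}
    for (a, p), (b, q) in product(first.items(), second.items()):
        key = (b, a) if swap else (a, b)
        out[key] = out.get(key, 0.0) + p * q
    return out

coin = {0: 0.5, 1: 0.5}
die3 = {1: 1/3, 2: 1/3, 3: 1/3}
# joint(coin, die3) and joint(coin, die3, swap=True) coincide.
```
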

We will show that there is a simple way of getting around the Jung-Tix problem, relying on a variant of Levy's call-by-push-value paradigm (2003), and provided we also include a form of demonic non-deterministic choice (related to must-termination, operationally). We will argue that the language satisfies adequacy: on programs of base type (int), the denotational semantics computes a subprobability distribution whose mass at any given number n is exactly the minimal probability that the output of the program will be n.

If we have time, we will then study full abstraction, namely the relation between denotational (in)equality and the observational preorder. Our language is not fully abstract. One reason is expected: the absence of parallel conditionals. Another has surfaced more recently, and that is the absence of so-called statistical termination testers. With both added, however, our language is fully abstract.

Nivat's conjecture states that every configuration (coloring of Z^2) of low complexity (one in which the number of patterns appearing is "small") is necessarily periodic.

In 2015, Michal Szabados and Jarkko Kari published their first article using a new approach to attack this conjecture: an algebraic one.

Their idea is to represent a configuration as a formal power series; by studying the structure of certain objects attached to it (such as polynomial ideals), they manage to apply theorems from algebra to make progress towards Nivat's conjecture.

In this talk, I will present the work I carried out with Jarkko Kari in continuation of Michal Szabados's thesis. I will present two theorems that use these algebraic tools to get closer to Nivat's conjecture, in two different directions: the first shows that Nivat's conjecture holds for a certain class of subshifts, while the second proves the decidability of the domino problem for low-complexity subshifts (a result that Nivat's conjecture would imply almost immediately).

We study the computational complexity of solving mean payoff games. This class of games can be seen as an extension of parity games, and they have a similar complexity status: in both cases solving them is in NP and coNP and not known to be in P. In a breakthrough result, Calude, Jain, Khoussainov, Li, and Stephan constructed in 2017 a quasipolynomial-time algorithm for solving parity games, which was quickly followed by two other algorithms with the same complexity. Our objective is to investigate how these techniques can be extended to the study of mean payoff games. We construct two new algorithms for solving mean payoff games. Our first algorithm depends on the largest weight N (in absolute value) appearing in the graph and runs in time sublinear in N, improving over the previously known linear dependence in N. Our second algorithm runs in polynomial time for a fixed number k of weights.

In this talk I will present a probabilistic study of Sturmian words. Sturmian words come up naturally as discrete codings of irrational lines, and the study of their (finite) factors is of key interest. In particular, the recurrence function measures the gaps between occurrences of these factors. During the talk we will give a brief overview of the fundamental facts about Sturmian words, the classical extreme-case results for their recurrence function, and finally our study under a natural probabilistic model.

Based on joint work with Brigitte Vallée (CNRS, Univ. Caen).
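
As a concrete illustration (our own sketch, not part of the talk), the characteristic Sturmian word of slope alpha can be generated directly from the coding of the line y = alpha * x, and one can check the defining property that it has exactly n + 1 distinct factors of each length n.

```python
import math

# The characteristic Sturmian word of irrational slope 0 < alpha < 1 codes
# the line y = alpha * x letter by letter:
#   s(n) = floor((n + 1) * alpha) - floor(n * alpha)  in {0, 1}.

def sturmian_prefix(alpha, length):
    """Return the first `length` letters of the characteristic word of slope alpha."""
    return [math.floor((n + 1) * alpha) - math.floor(n * alpha)
            for n in range(length)]

# Golden-ratio slope: the classical extremal case for the recurrence function.
alpha = (math.sqrt(5) - 1) / 2
word = sturmian_prefix(alpha, 200)
# A Sturmian word has exactly n + 1 distinct factors of each length n;
# this 200-letter prefix already contains all 6 factors of length 5.
```

The recurrence function bounds how long a window one must read to see every factor of a given length; for the golden-ratio slope these windows are as short as Sturmian words allow.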

The problem of ontology-mediated query answering (OMQA) has gained significant interest in recent years. One popular ontology language for OMQA is OWL 2 QL, a W3C-standardized language based upon the DL-Lite description logic. This language has the desirable property that OMQA can be reduced to database query evaluation by means of query rewriting. In this talk, I will consider two fundamental questions about OMQA with OWL 2 QL ontologies: 1) How does the worst-case complexity of OMQA vary depending on the structure of the ontology-mediated query (OMQ)? In particular, under what conditions can we guarantee tractable query answering? 2) Is it possible to devise query rewriting algorithms that produce polynomial-size rewritings? More generally, how does the succinctness of rewritings depend on OMQ structure and the chosen format of the rewritings?

After classifying OMQs according to the shape of their conjunctive queries (treewidth, the number of leaves) and the existential depth of their ontologies, we will determine, for each class, the combined complexity of OMQ answering, and whether all OMQs in the class have polynomial-size first-order, positive existential, and nonrecursive datalog rewritings. We obtain the succinctness results using hypergraph programs, a new computational model for Boolean functions, which makes it possible to connect the size of OMQ rewritings and circuit complexity.

This talk is based upon a recent JACM paper jointly authored with Stanislav Kikot, Roman Kontchakov, Vladimir Podolskii, and Michael Zakharyaschev.