We’ll present a simple algorithm for checking language equivalence of non-deterministic finite automata. This algorithm can be exponentially faster than the pre-existing ones, and it exploits a technique from concurrency theory: ‘bisimulations up to congruence’. We’ll then present recent developments on the abstract theory underlying such a technique.
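To give a feel for the idea, here is a minimal Python sketch in the spirit of this technique (not necessarily the speaker’s exact algorithm): equivalence of sets of NFA states is checked on the fly, and a candidate pair is skipped whenever it already lies in the congruence closure of the pairs assumed so far. The NFA encoding and function names are our own illustrative assumptions.

```python
from collections import deque

def closure(s, rel):
    """Saturate set s under the rewrite rules induced by the pairs in rel:
    whenever one side of a pair is contained in s, add the other side.
    Two sets are congruent w.r.t. rel iff they saturate to the same set."""
    s = set(s)
    changed = True
    while changed:
        changed = False
        for x, y in rel:
            if x <= s and not y <= s:
                s |= y; changed = True
            if y <= s and not x <= s:
                s |= x; changed = True
    return s

def nfa_equiv(delta, accept, init1, init2, alphabet):
    """Check L(init1) == L(init2) for an NFA with transition map
    delta[state][letter] -> set of states and accepting set `accept`."""
    rel = []                                  # pairs assumed equivalent so far
    todo = deque([(frozenset(init1), frozenset(init2))])
    while todo:
        x, y = todo.popleft()
        # the 'up to congruence' step: skip pairs already in the
        # congruence closure of rel
        if closure(x, rel) == closure(y, rel):
            continue
        if bool(x & accept) != bool(y & accept):
            return False                      # counterexample found
        rel.append((x, y))
        for a in alphabet:
            x2 = frozenset(q for p in x for q in delta.get(p, {}).get(a, ()))
            y2 = frozenset(q for p in y for q in delta.get(p, {}).get(a, ()))
            todo.append((x2, y2))
    return True
```

The congruence check is what can make this exponentially faster than plain on-the-fly determinization: whole families of subset pairs are discharged without ever being explored.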


In discrete and computational geometry, the complexity of a set family is often studied through its shatter function. I will introduce these functions and discuss how their asymptotic growth rate can be controlled by a single one of their values, in the spirit of the theory of the “Vapnik–Chervonenkis dimension” of hypergraphs. In particular, I will describe a probabilistic construction that refutes a conjecture of Bondy and Hajnal. This is joint work with Boris Bukh (https://arxiv.org/abs/1701.06632). The talk will assume no particular background.

We introduce a new automata model, hybrid set-vector automata, designed to accept weighted languages over a field more efficiently than Schützenberger’s weighted automata. The state space of these automata is not a vector space, but rather a union of vector spaces “glued” together along subspaces. We call them hybrid automata, since they naturally embed both deterministic finite-state automata and finite automata weighted over a field. We prove that these automata can be minimized. However, proving the existence of minimal automata “by hand” is rather complicated. Instead, we adopt a more systematic approach and discuss the properties of the category of “glued” vector spaces that make minimization possible.

This is joint work with Thomas Colcombet. The talk can be given in either English or French, depending on the audience’s preference.

The schedulability analysis of real-time systems (even on a single processor) is a very difficult task, which becomes even more complex (or undecidable) when periods or deadlines are uncertain.

In this work, we propose a unified formalism to model and formally verify uniprocessor schedulability problems with several types of tasks (periodic, sporadic, or more complex) and most types of schedulers (including EDF, FPS, and SJF), with or without preemption, in the presence of uncertain timing constants.

Although the general case is undecidable, we exhibit a large decidable subclass.

We demonstrate the expressive power of our formalism on several examples, also allowing for robust schedulability.
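The abstract does not fix a concrete analysis, but the classical Liu–Layland utilization test gives a feel for the simplest case: for independent periodic tasks with implicit deadlines on one processor, EDF succeeds exactly when total utilization is at most 1. The sketch below (task parameters are illustrative, and it does not model the uncertain timing constants the talk addresses) shows that test.

```python
def edf_schedulable(tasks):
    """Liu-Layland test for EDF on one processor: tasks is a list of
    (wcet, period) pairs with deadline == period; the set is schedulable
    iff the total utilization sum(C_i / T_i) does not exceed 1."""
    return sum(c / t for c, t in tasks) <= 1.0

# Utilization 1/4 + 2/6 + 1/8 ~ 0.71 <= 1: schedulable under EDF.
print(edf_schedulable([(1, 4), (2, 6), (1, 8)]))
```

Uncertainty in periods or deadlines turns such a single inequality into a constraint over parameter valuations, which is where a unified parametric formalism becomes necessary.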


We are interested in 2-dimensional cellular automata, and more precisely in the recognition of languages in small time. The time complexity we consider is called real time and is rather classic in the study of cellular automata: it is the smallest amount of time needed to read the whole input. It has been shown that the resulting set of languages depends on the neighborhood of the automaton; for example, the two most commonly used neighborhoods (Moore and von Neumann) differ with respect to this complexity. Our study deals with more general families of neighborhoods: round and diamond neighborhoods. We prove that all diamond neighborhoods recognize the same languages in real time, and that round neighborhoods recognize strictly fewer languages than diamond ones.
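To make the neighborhood notion concrete, here is a small Python sketch (the grid representation and rule signature are our own assumptions) of one synchronous update step of a 2-D cellular automaton parameterized by its neighborhood. The von Neumann neighborhood is the radius-1 diamond, and the Moore neighborhood the radius-1 square.

```python
def von_neumann(r):
    """Diamond-shaped neighborhood of radius r: offsets with |dx|+|dy| <= r."""
    return [(dx, dy) for dx in range(-r, r + 1) for dy in range(-r, r + 1)
            if abs(dx) + abs(dy) <= r]

def moore(r):
    """Square neighborhood of radius r: offsets with max(|dx|,|dy|) <= r."""
    return [(dx, dy) for dx in range(-r, r + 1) for dy in range(-r, r + 1)]

def step(grid, rule, neighborhood, quiescent=0):
    """Apply `rule` simultaneously to every cell; each cell reads the
    states of its neighbors, and cells outside the grid are read as the
    quiescent state."""
    h, w = len(grid), len(grid[0])
    def read(i, j):
        return grid[i][j] if 0 <= i < h and 0 <= j < w else quiescent
    return [[rule([read(i + dx, j + dy) for dx, dy in neighborhood])
             for j in range(w)] for i in range(h)]
```

With `rule=max`, a single marked cell spreads to its whole neighborhood in one step; the shape of that spread after t steps is exactly what governs how fast information can cross the input, and hence what is recognizable in real time.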