We establish a formal correspondence between resource calculi and appropriate linear multicategories. We consider the cases of (symmetric) representable, symmetric closed, and autonomous multicategories. For all these structures, we prove that morphisms of the corresponding free constructions can be presented by means of typed resource terms, up to a reduction relation and a structural equivalence. Thanks to the linearity of the calculi, we can prove strong normalization of the reduction by combinatorial methods, defining appropriate decreasing measures. From this, we obtain a general coherence result: two morphisms of the free multicategorical structures are equal whenever the normal forms of the associated terms are equal. As a further application, we obtain syntactic proofs of Mac Lane’s coherence theorems for (symmetric) monoidal categories.

This talk is about *query evaluation*, where we want to efficiently compute the answers to a query on a database. Since the number of results can be very large, we want to design an efficient *enumeration algorithm*: such an algorithm first preprocesses the data and then efficiently enumerates all answers, with a small delay between any two consecutive answers. This talk will review results on enumerating query results in multiple settings: from conjunctive queries on relational databases to monadic second-order queries over trees. A common theme in such algorithms is the use of circuit classes from knowledge compilation to serve as compressed representations of the answers to enumerate. We will also present extensions, e.g., how to enumerate results in a specific order, or how to maintain enumeration structures under updates to the underlying data.
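To make the preprocessing/enumeration split concrete, here is a minimal Python sketch (not from the talk) for a toy two-relation join; the relation names `R` and `S` are invented for the example. Note that for a true constant-delay bound one would additionally filter out dangling tuples, as in Yannakakis's algorithm; we omit that here.

```python
from collections import defaultdict

def preprocess(S):
    """Preprocessing phase: index S by its join attribute, once."""
    index = defaultdict(list)
    for b, c in S:
        index[b].append(c)
    return index

def enumerate_join(R, index):
    """Enumeration phase: yield the answers of R(a,b) ⋈ S(b,c) one
    by one, using the index so each answer is produced directly."""
    for a, b in R:
        for c in index.get(b, ()):
            yield (a, b, c)

R = [(1, 10), (2, 20)]
S = [(10, 100), (10, 101), (20, 200)]
answers = list(enumerate_join(R, preprocess(S)))
```

The point of the split is that the (possibly expensive) indexing is paid once, before the first answer is emitted.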

Verification of infinite-state Markov chains remains a challenge despite several fruitful numerical and statistical approaches. For decisive Markov chains, a simple numerical algorithm brackets the reachability probability as accurately as required (though with unknown complexity). On the other hand, when applicable, statistical model checking is in most cases very efficient. Here we study the relation between these two approaches, showing first that decisiveness is a necessary and sufficient condition for almost sure termination of statistical model checking. We then develop an approach, applicable to both methods, that substitutes for a non-decisive Markov chain a decisive Markov chain with the same reachability probability. This approach combines two key ingredients: abstraction and importance sampling (a technique formerly used for efficiency). We develop this approach on a generic formalism called layered Markov chains (LMC). Afterwards we perform an empirical study on probabilistic pushdown automata (an instance of LMC) to understand the complexity factors of the statistical and numerical algorithms. To the best of our knowledge, our prototype is the first implementation of the deterministic algorithm for decisive Markov chains, and it required solving several qualitative and numerical issues.
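As an illustration of the statistical side, here is a hedged Monte Carlo sketch (not from the talk): the simulation loop below terminates almost surely exactly when every run eventually hits the goal or a state from which the goal is unreachable, which is what decisiveness guarantees. The toy chain and all names are invented for the example.

```python
import random

def sample_reach(step, start, goal, avoid, rng):
    """Simulate one run until it hits `goal` (success) or `avoid`
    (failure). Almost-sure termination of this loop is exactly what
    decisiveness of the chain guarantees."""
    s = start
    while True:
        if s in goal:
            return 1
        if s in avoid:
            return 0
        s = step(s, rng)

def estimate(step, start, goal, avoid, n=100_000, seed=0):
    """Crude Monte Carlo estimator of the reachability probability."""
    rng = random.Random(seed)
    hits = sum(sample_reach(step, start, goal, avoid, rng) for _ in range(n))
    return hits / n

# Toy chain: from the initial state, move to GOOD w.p. 0.3, else to BAD.
def step(s, rng):
    return "GOOD" if rng.random() < 0.3 else "BAD"

p = estimate(step, "init", {"GOOD"}, {"BAD"})
```

On a non-decisive chain, some runs would never settle and the loop would not terminate; replacing the chain by a decisive one with the same reachability probability, as in the talk's approach, restores termination.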

Self-stabilization is a paradigm well suited to distributed systems, which are particularly prone to transient faults. Such faults can be corruptions of memory or of messages, or the failure of a communication link, and they can drive the system into an inconsistent state. A protocol is self-stabilizing if, whatever the initial state of the system, it guarantees a return to normal behaviour in finite time.
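A classic concrete instance (not from the talk) is Dijkstra's self-stabilizing token ring: whatever the corrupted initial state, the system converges to a configuration in which exactly one process holds the privilege (the "token"). A minimal Python simulation, assuming a central daemon that fires one privileged process at a time:

```python
import random

def privileged(x, i):
    """Process 0 is privileged when it equals its left neighbour;
    every other process is privileged when it differs from it."""
    return x[0] == x[-1] if i == 0 else x[i] != x[i - 1]

def move(x, i, K):
    """Firing a privileged process: 0 increments mod K, others copy."""
    x[i] = (x[0] + 1) % K if i == 0 else x[i - 1]

def run(x, K, steps, rng):
    """Central daemon: repeatedly fire one privileged process at
    random (Dijkstra's theorem: at least one always exists)."""
    for _ in range(steps):
        privs = [i for i in range(len(x)) if privileged(x, i)]
        move(x, rng.choice(privs), K)
    return x

rng = random.Random(1)
n, K = 5, 6                                 # K > n ensures convergence
x = [rng.randrange(K) for _ in range(n)]    # arbitrary "corrupted" start
run(x, K, steps=300, rng=rng)
tokens = sum(privileged(x, i) for i in range(n))
```

After stabilization, exactly one process is privileged at any time, i.e. a single token circulates: the protocol has recovered from the transient fault without any external intervention.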

With the development of networks of connected objects, which are expected to be autonomous, designing fault-tolerant algorithms with a low energy cost becomes crucial. One way to tackle these problems is to try to reduce the size of the messages exchanged between the nodes of the network.

We will present negative results, both impossibility results and lower bounds, for a family of problems in distributed computing.

The topological properties of a set have a strong impact on its computability properties. A striking illustration of this idea is given by spheres, closed manifolds and finite graphs without endpoints: if a set X is homeomorphic to a sphere, a closed manifold or such a graph, then any algorithm that semicomputes X in some sense can be converted into an algorithm that fully computes X. In other words, the topological properties of X enable one to derive full information about X from partial information about X. In that case, we say that X has computable type. These results have been obtained by Miller, Iljazović and others in recent years. We give a characterization of finite simplicial complexes having computable type, using a local property of stars of vertices. We argue that the stronger, relativized version of computable type is better behaved. Consequently, we obtain characterizations of strong computable type, related to the descriptive complexity of topological invariants. This leads us to investigate the expressive power of low-complexity invariants in the Borel hierarchy and their ability to differentiate between spaces. Notably, we show that they are sufficient for distinguishing finite topological graphs.

What is a model of computation? What is a program, an algorithm? While theoretical computer science was founded on computability theory, the latter does not answer these questions. Indeed, it is a mathematical theory of computable functions, and does not account for computation itself. A symptomatic consequence is the notion of Turing-completeness. This standard (sole?) equivalence between models of computation is purely extensional: it only cares about what is computed and not how, blind to complexity aspects and to the question of algorithmic completeness. More importantly, the theory of computation keeps growing further apart from how actual machines compute.

I will present a proposal for alternative foundations that are more faithful to the practice and interests of computer science. This mathematisation of computer science is grounded in the theory of dynamical systems, focussing on *how* computation is performed rather than only on *what* is computed. I will argue that it generalises computability theory while still allowing one to recover standard results.

If time permits, I can then explain how this theory can be used to (I will let the audience decide which items will be discussed):

• provide a uniform account of several lower bounds from algebraic complexity and strengthen them

• define static analysis methods which can be implemented in usable tools

• define families of linear realisability models (realisability models for linear logic)

• lead to a semantic approach of implicit computational complexity

• propose a formal definition of the notion of algorithm

The plan of the talk will be to first give an overview of the Taylor expansion of the lambda-calculus, and then to survey some aspects of quantitative categorical semantics.

I will in particular highlight some results, which appeared in a recent CSL paper, obtained for a semantics based on the tropical semiring, where programs can be seen as locally Lipschitz functions and the Taylor expansion of the lambda-calculus gives local approximants of their Lipschitz constants.
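For readers unfamiliar with the tropical semiring, here is a small illustrative sketch (not from the paper): tropical addition is `min` and tropical multiplication is ordinary `+`, so tropical polynomials are piecewise-affine, hence locally Lipschitz, maps whose slopes bound the local Lipschitz constants.

```python
def trop_add(a, b):
    """Tropical 'sum' is the minimum."""
    return min(a, b)

def trop_mul(a, b):
    """Tropical 'product' is ordinary addition."""
    return a + b

def trop_poly(coeffs, x):
    """Evaluate the tropical polynomial  min_i (c_i + i*x):
    a concave, piecewise-affine map of x."""
    val = coeffs[0]
    for i, c in enumerate(coeffs[1:], start=1):
        val = trop_add(val, trop_mul(c, i * x))
    return val

# p(x) = min(3, 1 + x, 0 + 2x): the slopes 0, 1, 2 of the affine
# pieces bound the local Lipschitz constant of p at each point.
vals = [trop_poly([3, 1, 0], x) for x in (0, 1, 2)]
```

Which affine piece is active depends on where x lies, which is the sense in which the approximants of Lipschitz constants mentioned above are *local*.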

Depending on the interest and the common knowledge of the audience, I will spend more time on one topic or another.

A classical result in the study of formal languages is the Myhill-Nerode theorem, which gives necessary and sufficient conditions, in terms of residual languages, for a language to be regular. In this talk, we will try to show how this tool has been adapted to the study of tiling spaces, where configurations are no longer finite words but infinite multidimensional colourings. In particular, we will study *extension entropy*, introduced by R. Pavlov and T. French, which measures the growth rate of this analogue of residual languages. We will present several characterizations of this entropy, obtained through computability theory, for several classes of one- and multidimensional tilings.
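To illustrate the residual-language viewpoint on a toy example (not from the talk): for a regular language, the Myhill-Nerode theorem predicts finitely many residuals, which we can observe with a finite approximation. The language below, with even numbers of `a`'s, is invented for the example.

```python
from itertools import product

def words(alphabet, max_len):
    """All words over `alphabet` of length at most max_len."""
    for n in range(max_len + 1):
        yield from ("".join(p) for p in product(alphabet, repeat=n))

def residual(L, w, alphabet, max_len):
    """Finite approximation of the residual w^{-1}L:
    the suffixes u (up to max_len) such that w+u is in L."""
    return frozenset(u for u in words(alphabet, max_len) if L(w + u))

# L = words over {a,b} with an even number of a's: a regular language,
# so Myhill-Nerode predicts finitely many residuals (here, two).
L = lambda w: w.count("a") % 2 == 0
residuals = {residual(L, w, "ab", 4) for w in words("ab", 4)}
n_classes = len(residuals)
```

Extension entropy, in the talk's setting, replaces this finite count by the growth rate of the analogous classes for multidimensional configurations, where finiteness typically fails.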

The Bulk Synchronous Parallel (BSP) model is a cost model for parallel computation, which algorithm designers can use to estimate how much time their parallel algorithm will take when using multiple processors simultaneously. Indirectly, it therefore also aids in the design of parallel algorithms. The implementation of such a BSP algorithm may be much more complicated, however, because strange interactions inside middleware and hardware may unexpectedly ruin the carefully proven complexity bounds. For that reason, various BSP libraries have been proposed and developed, which programmers can use to implement BSP algorithms. This talk presents some of the techniques that such BSP libraries employ in order to present an ordinary computer as a highly reliable and efficient BSP computer.
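To give an idea of the cost model (the standard BSP formula, not specific to this talk): a superstep that performs local work w, communicates an h-relation, and ends with a barrier costs w + h·g + l, where g (cost per communicated word) and l (barrier latency) are machine parameters. A small sketch with invented numbers:

```python
def superstep_cost(work_per_proc, h, g, l):
    """BSP cost of one superstep: the slowest local computation,
    plus the h-relation (h words at g time units each), plus the
    barrier synchronisation latency l."""
    return max(work_per_proc) + h * g + l

def bsp_cost(supersteps, g, l):
    """Total predicted running time: the sum of the superstep costs."""
    return sum(superstep_cost(w, h, g, l) for (w, h) in supersteps)

# Hypothetical machine: g = 4 time units per word, l = 100 per barrier.
# A toy algorithm running two supersteps on 4 processors:
supersteps = [([1000, 900, 1000, 950], 50),   # compute, then 50-word h-relation
              ([500, 500, 480, 520], 20)]
t = bsp_cost(supersteps, g=4, l=100)
```

It is precisely these predicted bounds that unexpected middleware and hardware interactions can ruin in a naive implementation, and that BSP libraries work to preserve.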