# Seminars 2023-24

Contact: seminar@lacl.fr
# On Mondays at 2pm - UPEC CMC - Room P2-131

### March 18, 2024

*TBA*

**Thomas Seiller**
(CNRS, LIPN)

### March 11, 2024

*TBA*

**Davide Barbarossa**
(University of Bath)

### March 4, 2024

*TBA*

**Léo Paviet Salomon**
(GREYC, Université Caen Normandie)

### February 26, 2024

*Ingredients for a BSP library*

**Wijnand Suijlen**
(Huawei Technologies)

### February 5, 2024

*Performance Paradox for Matching models with greedy disciplines*

**Jean Michel Fourneau**
(DAVID, UVSQ)

### January 29, 2024

*Analog characterization of complexity classes*

**Riccardo Gozzi**
(LACL)

### January 15, 2024

*Model checking supported by over- and under-approximations*

**Yann Thierry-Mieg**
(LIP6)

### January 8, 2024

*Formal verification for quantum programs, challenges and the Qbricks solution*

**Christophe Chareton**
(CEA LIST)

### December 18, 2023

*Parametrization and analysis of natural systems using statistical verification*

**Benoît Delahaye**
(LS2N)

### December 11, 2023

*Computational Content of Set Theory*

**Richard Matthews**
(LACL)


The Bulk Synchronous Parallel (BSP) model is a cost model for parallel computation, which algorithm designers can use to estimate how much time their parallel algorithm will take when run on multiple processors simultaneously. Indirectly, it therefore also aids the design of parallel algorithms. The implementation of such BSP algorithms may be much more complicated, however, because unexpected interactions inside middleware and hardware can ruin the carefully proven complexity bounds. For that reason, various BSP libraries have been proposed and developed, which programmers can use to implement BSP algorithms. This talk presents some of the techniques that such BSP libraries employ in order to present an ordinary computer as a highly reliable and efficient BSP computer.
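For context, the BSP cost model charges each superstep for local computation, communication, and synchronization; a minimal sketch, with illustrative parameter values not taken from the talk:

```python
# Minimal sketch of the BSP cost model (illustrative values, not from the talk).
# A BSP algorithm runs in supersteps; one superstep costs
#   T = w + h * g + l
# where w is the maximum local work on any processor, h the maximum number of
# words any processor sends or receives, g the per-word communication cost,
# and l the synchronization latency.

def superstep_cost(w: int, h: int, g: float, l: float) -> float:
    """Cost of one superstep under the BSP model."""
    return w + h * g + l

def bsp_cost(supersteps, g: float, l: float) -> float:
    """Total cost of an algorithm: sum of its per-superstep costs."""
    return sum(superstep_cost(w, h, g, l) for (w, h) in supersteps)

# Example: two supersteps on a hypothetical machine with g = 4 and l = 100.
total = bsp_cost([(1000, 50), (500, 10)], g=4.0, l=100.0)
print(total)  # (1000 + 200 + 100) + (500 + 40 + 100) = 1940.0
```

Proving a bound on an algorithm then amounts to bounding `w` and `h` per superstep; the libraries discussed in the talk aim to make a real machine actually behave according to this cost formula.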

A matching model is a triple based on a compatibility graph, a set of Poisson processes, and a matching discipline. Each node of the graph is associated with a type of object, and the compatibility graph shows which objects interact. The interaction is the immediate deletion of some objects. If an arriving object does not interact, it enters the system and waits until it can interact with someone. One possible application of matching models is the kidney exchange system organized in many countries. In this talk I will show a performance paradox: adding more flexibility to the compatibility graph (i.e., adding new edges) will, for some graphs and arrival rates, lead to an increase in the total average sojourn time in the system. This is proved for any greedy discipline. We show that this property holds for some families of graphs and lifts to some modular constructions of graphs. As this result is mostly based on strong aggregation of Markov chains, I will begin with a short introduction to this property, which is generally used to decrease the size of models.
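As a hypothetical illustration of one greedy discipline (first-come-first-served; the function names and the tiny example graph are my own, not from the talk):

```python
# Sketch of a greedy FCFS matching discipline on a compatibility graph:
# an arriving item is matched immediately with the oldest waiting compatible
# item; otherwise it joins the queue and waits.
from collections import deque

def greedy_fcfs(arrivals, edges):
    """arrivals: sequence of node types; edges: set of frozensets {u, v}.
    Returns (list of matches, leftover queue contents)."""
    compatible = lambda a, b: frozenset((a, b)) in edges
    queue: deque = deque()
    matches = []
    for item in arrivals:
        for waiting in queue:            # scan in arrival order (FCFS)
            if compatible(item, waiting):
                queue.remove(waiting)    # match with the oldest compatible item
                matches.append((waiting, item))
                break
        else:
            queue.append(item)           # no compatible item: wait
    return matches, list(queue)

# Path graph A - B - C: types A and C are compatible only with B.
edges = {frozenset("AB"), frozenset("BC")}
m, q = greedy_fcfs("AABCB", edges)
print(m, q)  # [('A', 'B'), ('A', 'B')] ['C']
```

The paradox concerns exactly such greedy rules: adding an edge (say A–C) can, for some arrival rates, make the long-run average sojourn time worse rather than better.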

The purpose of this talk is to characterize complexity classes from standard computational complexity theory using systems of ordinary differential equations. We start by recalling concepts related to the general research field of analog computing at the physical, theoretical, and abstract levels. We then provide historical context for the main model of the talk, C. Shannon's GPAC, which describes the behavior of the analog machine called the differential analyser. We then explain how the model evolved in the literature over the years and present the details of the analog characterization of the classes P and FP, which connects the GPAC model with the discrete model of Turing machines. We explain how this equivalence should be interpreted and what its main consequences are. Finally, we briefly discuss some of the more recent developments on the subject, mentioning how to extend the characterization to the class FEXPTIME, to each level of the Grzegorczyk hierarchy, and to polynomial space complexity classes such as FPSPACE and PSPACE.
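As a classic textbook illustration of what the GPAC generates (not taken from the talk itself), the sine function is the solution of a polynomial initial-value problem:

$$y_1' = y_2, \qquad y_2' = -y_1, \qquad y_1(0) = 0, \quad y_2(0) = 1,$$

whose solution is $y_1(t) = \sin t$, $y_2(t) = \cos t$. Systems of ODEs with polynomial right-hand sides like this one are exactly what a differential analyser can be wired to integrate, and they are the starting point for the complexity characterizations discussed in the talk.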

The talk will focus on model checking (reachability and LTL), with strategies that mix over-approximations (based on SMT), under-approximations (such as random or directed explorations), and abstractions that are precise with respect to the property (such as structural reductions). The concrete implementation of these strategies for the verification of Petri nets enables the tool ITS-Tools to win the MCC model-checking competition.
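As a toy illustration of how over- and under-approximations complement each other for reachability (not the ITS-Tools implementation; all names are illustrative):

```python
# Toy system: a counter starting at 0 whose only transition adds 2.
# Question: is the value 5 reachable?

def under_approx_explore(steps: int) -> bool:
    """Under-approximation: explicit exploration of a bounded run prefix.
    Finding the target PROVES reachability; not finding it proves nothing."""
    x = 0
    for _ in range(steps):
        if x == 5:
            return True
        x += 2
    return False

def over_approx_invariant() -> bool:
    """Over-approximation: the invariant 'x is even' holds initially and is
    preserved by the transition x -> x + 2, so it holds in every reachable
    state. A target violating the invariant is PROVEN unreachable."""
    target = 5
    return target % 2 == 0  # False: 5 is odd, outside the over-approximation

print(under_approx_explore(100))  # False: exploration alone is inconclusive
print(over_approx_invariant())    # False: the invariant proves unreachability
```

Exploration can only confirm reachability, and the invariant-style over-approximation can only refute it; running both in parallel, as the talk's strategies do at scale, lets whichever side concludes first answer the query.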

While recent progress in quantum hardware opens the door to significant speedups in certain key areas (cryptography, biology, chemistry, optimization, machine learning, etc.), quantum algorithms are still hard to implement correctly, and the validation of such quantum programs is a challenge. Moreover, importing the testing and debugging practices in use in classical programming is extremely difficult in the quantum case, due to the destructive nature of quantum measurement. As an alternative strategy, formal methods are poised to play a decisive role in the emerging field of quantum software. We review the resulting challenges for an effective use of formal methods in quantum computing and introduce our solution, Qbricks, along with ongoing efforts.

In this talk, I will revisit a series of recent works aimed at the modeling and analysis of complex natural systems using (in part) statistical model checking. The particularity of the systems considered is that they consist, at least in part, of continuous processes modeled by differential equations. First, I will present results that provide probabilistic guarantees on the statistical verification of these systems despite the errors introduced by the numerical integration of the differential equations, and I will present applications to the parametrization of these models and the analysis of their stability. Second, I will present an abstraction of these systems that allows them to be combined with discrete control processes. Finally, I will briefly present two concrete case studies of the parametrization of such models: the growth of the jellyfish Pelagia noctiluca in the Mediterranean and the regrowth of plots of Amazonian forest subjected to extreme events.
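As a hedged sketch of the statistical approach on an illustrative toy model (not one of the talk's actual systems): estimate, by Monte Carlo sampling over an uncertain parameter, the probability that a numerically integrated ODE satisfies a property.

```python
# Statistical model checking sketch: estimate P(x(T) > threshold) for
# logistic growth x' = r * x * (1 - x/K) with a random growth rate r,
# integrating with explicit Euler. All names and values are illustrative.
import random

def simulate(r: float, x0: float = 0.1, K: float = 1.0,
             T: float = 10.0, dt: float = 0.01) -> float:
    """Euler integration of x' = r * x * (1 - x/K) up to time T."""
    x = x0
    for _ in range(int(T / dt)):
        x += dt * r * x * (1 - x / K)
    return x

def estimate_probability(n_runs: int, threshold: float = 0.9,
                         seed: int = 0) -> float:
    """Monte Carlo estimate of P(x(T) > threshold) when the growth rate r
    is drawn uniformly from [0.1, 1.0]."""
    rng = random.Random(seed)
    hits = sum(simulate(rng.uniform(0.1, 1.0)) > threshold
               for _ in range(n_runs))
    return hits / n_runs

p = estimate_probability(200)
print(p)  # an estimate in [0, 1], reproducible for a fixed seed
```

The talk's guarantees concern precisely the gap this sketch glosses over: the Euler step introduces integration error, so the event being sampled is only an approximation of the true property, and the probabilistic bounds must account for that.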

The method of realizability was first developed by Kleene and is seen as a way to extract computational content from mathematical proofs via the Curry-Howard Correspondence. The Curry-Howard Correspondence is a way to associate with each mathematical proof a computer program. From a theorem, one can then extract computational content by analysing the programs associated with the proofs of the statement. Traditionally, this method was restricted to producing models satisfying intuitionistic logic; however, it was later extended by Krivine to produce models satisfying full classical logic and even Zermelo-Fraenkel set theory with choice. In this talk we will discuss Krivine’s method for constructing realizability models of ZF and what this reveals about the computational content of set theory with respect to the Curry-Howard Correspondence. We will then present recent results concerning ordinals and cardinals in these realizability models. This is joint work with Laura Fontanella and Guillaume Geoffroy.
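As a standard textbook illustration of the Curry-Howard Correspondence (the examples below are generic, not from the talk): propositions can be read as types and proofs as programs, here written with Python type hints.

```python
# Propositions as types, proofs as programs (Curry-Howard, toy examples).
from typing import Callable, TypeVar

A = TypeVar("A")
B = TypeVar("B")
C = TypeVar("C")

# A proof of the tautology A -> A is the identity program.
def identity(a: A) -> A:
    return a

# Modus ponens: from proofs of A -> B and of A, a proof of B,
# computationally just function application.
def modus_ponens(f: Callable[[A], B], a: A) -> B:
    return f(a)

# Transitivity of implication, (A -> B) -> (B -> C) -> (A -> C),
# is proved by function composition.
def compose(f: Callable[[A], B], g: Callable[[B], C]) -> Callable[[A], C]:
    return lambda a: g(f(a))

print(modus_ponens(identity, 42))       # 42
print(compose(len, identity)("proof"))  # 5
```

Kleene's realizability makes this reading precise for intuitionistic logic; Krivine's extension, the subject of the talk, pushes it to classical logic and ZF, where the realizers are considerably more elaborate than these toy terms.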