Mondays, from 2 p.m. - UPEC CMC - Room P2-131
It is well known that one may faithfully approximate the expected value of a random variable by taking the average of many independent observations of that variable. The average quickly converges to the expected value with good probability. This fact is commonly used when computing statistics in data mining. However, in this setting, the assumption that the observations are independent is usually wrong, and we no longer have guarantees on the convergence of the average toward the expected value. One way of circumventing this problem would be to use only an independent subset of the observations. Finding a large independent subset is, however, a hard problem, and this method tends to discard precious data. Another method, proposed by Wang and Ramon, starts by assigning weights to each observation. If the weights are a solution to a certain linear program, then we have the guarantee that the weighted average converges toward the expected value. In practice, however, when the number of dependency factors is large, the size of such a linear program can be huge. In this talk, we will present a way of generating an equivalent compressed linear program when the dependencies are succinctly represented. This is joint work with Nicolas Crosetti, Jan Ramon and Joachim Niehren.
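As a toy illustration of the weighting idea (not the actual linear program of Wang and Ramon — the setup below is invented), consider observations that are duplicated copies of a smaller set of independent draws. Giving each dependency class a total weight of 1 recovers the estimate one would get from the independent draws alone, whereas the naive average overweights heavily duplicated draws:

```python
import random

random.seed(1)

# Hypothetical setting: each independent draw is observed several times
# (dependent copies), and the number of copies varies per draw.
draws = [random.gauss(0.0, 1.0) for _ in range(200)]
copies = [random.choice([1, 5, 20]) for _ in draws]
observations = [(x, k) for x, k in zip(draws, copies) for _ in range(k)]

# Naive average over all (dependent) observations overweights the
# heavily duplicated draws.
naive = sum(x for x, _ in observations) / len(observations)

# Weight 1/k per copy, so each dependency class has total weight 1:
# a trivial instance of the weighting idea sketched above.
weighted = (sum(x / k for x, k in observations)
            / sum(1 / k for _, k in observations))

independent_mean = sum(draws) / len(draws)
assert abs(weighted - independent_mean) < 1e-9
```

Here the weights are obvious because the dependency structure is a disjoint partition into duplicate classes; the linear program of the talk handles the general, overlapping case.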
In order to avoid the inherent complexity of differential equation systems, Kauffman (1969) and Thomas (1973) laid the foundations of a discrete modeling framework relying on graphs, whose dynamics is a restriction of synchronous automata networks, while ensuring consistency with the biological systems modeled. New problems then arise, such as automating the creation of such discrete or hybrid models, and analyzing their dynamics. In this talk, after a reminder of the classical Thomas modeling, which is the basis of my work, I will present the two main aspects of my research.
* First, model completion consists in inferring the missing topology or parameters of a model.
– A first approach consisted in using a modified Hoare logic to find the value of some dynamical parameters in a hybrid context; for this, a dynamical biological behavior found in the literature can be expressed as an imperative program and submitted to a weakest-precondition calculus, à la Dijkstra.
– Another, more recent and experimental, approach consists in reconstructing the topology of a network (especially all the influences between components) by directly exploiting expression data, contrary to existing methods that require a prior discretization phase.
* The second aspect is the analysis of the dynamics of such complex systems.
– Analysis with Answer Set Programming (ASP), a recent logic programming paradigm for which powerful solvers exist: they show good performance on several classical problems: state reachability, fixed-point enumeration and complex attractor enumeration.
– Analysis with µ-calculus, an extension of the classical temporal logic CTL*: being more expressive, it allows, for instance, searching for attractors under disruptions, or checking bisimulation of models.
– Static analysis with abstract interpretation: this approach, on the other hand, focuses on a single property (here, reachability) but considerably reduces the complexity, allowing results to be obtained on large models in very little time.
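To give a concrete flavor of one of the analyses listed above, fixed points of a small Boolean network in the spirit of Thomas modeling can be enumerated by brute force; the three-component update functions below are invented for the example (an ASP solver would handle the same question symbolically on much larger models):

```python
from itertools import product

# Toy synchronous Boolean network over components (a, b, c);
# the update functions are hypothetical, chosen only for illustration.
def update(state):
    a, b, c = state
    return (b, a and not c, a or b)

# A fixed point is a state mapped to itself by the synchronous update.
fixed_points = [s for s in product([False, True], repeat=3)
                if update(s) == s]

assert fixed_points == [(False, False, False)]
```

Brute force is exponential in the number of components, which is precisely why the talk turns to ASP solvers and abstract interpretation for large models.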
This work will appear in Fundamenta Informaticae as an extended version of Petri nets 2017.
Liveness preserves the possibility to fire any transition after a finite number of other firings. Non-deadlockability, also known as deadlock-freeness, is weaker than liveness and states the absence of reachable deadlocks, i.e. markings that enable no transition. Reversibility ensures that the initial marking can be reached from any reachable marking, inducing the strong connectedness of the reachability graph.
In this work, such structural conditions are further developed to analyze (non-)deadlockability, liveness, reversibility and their monotonicity in larger subclasses of weighted Petri nets. Some of these new checking methods are the first ones to operate in polynomial time. This study also delineates more sharply the structural frontier between monotonicity and non-monotonicity for the three behavioral properties.
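On small nets, the behavioral properties above can be checked by exploring the reachability graph directly — the polynomial-time structural methods of the paper exist precisely to avoid this enumeration. The tiny weighted net below is an invented example, with transitions given as pre/post token vectors over two places:

```python
# Hypothetical weighted Petri net over places (p0, p1):
# t1 consumes 1 token from p0 and produces 2 in p1;
# t2 consumes 2 tokens from p1 and produces 1 in p0.
transitions = {
    "t1": ((1, 0), (0, 2)),
    "t2": ((0, 2), (1, 0)),
}

def enabled(m, pre):
    return all(m[i] >= pre[i] for i in range(len(m)))

def fire(m, pre, post):
    return tuple(m[i] - pre[i] + post[i] for i in range(len(m)))

def reachable_deadlocks(initial):
    """Explore the reachability graph; collect markings enabling nothing."""
    seen, stack, dead = set(), [initial], []
    while stack:
        m = stack.pop()
        if m in seen:
            continue
        seen.add(m)
        succ = [fire(m, pre, post) for pre, post in transitions.values()
                if enabled(m, pre)]
        if not succ:
            dead.append(m)
        stack.extend(succ)
    return dead

# From marking (1, 0) the net alternates (1,0) -> (0,2) -> (1,0):
# no reachable deadlock, and the initial marking is always recoverable.
assert reachable_deadlocks((1, 0)) == []
```

This particular net is deadlock-free and reversible; the exploration is exponential in general, hence the interest of the structural conditions studied in the work.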
In this talk I present a logic, called LT, to express properties of transductions, i.e. binary relations from input to output (finite) words. I argue that LT is a suitable candidate as a specification language for the verification of non-reactive systems, extending the successful approach of verifying synchronous systems via Mealy machines and MSO.
In LT, the input/output dependencies are modelled via an origin function which associates to any position of the output word the input position from which it originates. LT is well suited to express relations (which are not necessarily functional), and can express all regular functional transductions, i.e. transductions definable, for instance, by deterministic two-way transducers.
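As a toy illustration of this origin semantics, the regular function w -> ww (definable by a deterministic two-way transducer, which reads its input twice) can be represented with each output position annotated by the input position it originates from:

```python
def duplicate_with_origin(w):
    """Regular function w -> ww with origin information: each output
    letter is paired with the input position it comes from.
    (A toy illustration of origin semantics, not part of the talk's
    formal development.)"""
    one_copy = [(c, i) for i, c in enumerate(w)]
    return one_copy + one_copy

# Both copies of the output originate from the same input positions.
assert duplicate_with_origin("ab") == [("a", 0), ("b", 1), ("a", 0), ("b", 1)]
```

LT formulas talk about exactly this kind of annotated output: they can quantify over output positions and follow the origin function back to the input.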
Despite its high expressive power, LT has a decidable satisfiability problem. The main contribution is a synthesis result: it is always possible to synthesize a regular function which satisfies the specification.
Finally, I make explicit a correspondence between transductions and data words. As a side result, we obtain a new decidable logic for data words.
This talk will present SyTeCi, the first general automated tool to prove contextual equivalence for programs written in a higher-order language with references (i.e. local mutable state).
After introducing the notion of contextual equivalence, we will see through some examples why it is hard to prove such equivalences (reentrant calls, private state).
As we will see, such examples can be found in many different programming languages. In fact, contextual equivalence is undecidable, even in a finitary setting (finite datatypes, no recursion).
Then, we will introduce SyTeCi, a sound tool to automatically prove such equivalences. This tool is based on a reduction of the problem of contextual equivalence of two programs to the problem of non-reachability in transition systems of memory configurations.
This reduction is based on the generation, from two programs P1, P2, of a “characteristic formula” F, written in a temporal logic. We have that F is satisfiable w.r.t. such a transition system iff P1, P2 are contextually equivalent.
Since contextual equivalence is undecidable, so is the non-reachability problem. However, one can apply model-checking techniques (predicate abstraction, summarization of pushdown systems) to check non-reachability via approximations. This allows us to automatically prove many non-trivial examples from the literature that could previously only be proved by hand.
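The target of the reduction can be pictured as a plain reachability question on a transition system: is any "bad" configuration (one witnessing inequivalence) reachable from the initial one? The sketch below uses an invented toy system, not SyTeCi's actual memory configurations, and checks non-reachability by exhaustive exploration:

```python
def reachable(initial, step, bad):
    """Exhaustively explore a (finite) transition system and report
    whether some configuration satisfying `bad` is reachable."""
    seen, frontier = set(), [initial]
    while frontier:
        c = frontier.pop()
        if c in seen:
            continue
        seen.add(c)
        if bad(c):
            return True
        frontier.extend(step(c))
    return False

# Hypothetical toy system: a counter saturating at 3; the "bad" value 5
# is unreachable, which in the analogy would mean the two programs
# are contextually equivalent.
assert not reachable(0, lambda c: [min(c + 1, 3)], lambda c: c == 5)
```

In SyTeCi the state space is infinite in general, which is where predicate abstraction and pushdown summarization come in to approximate this exploration.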
When it comes to reasoning on quantum evolutions, it is convenient to turn to quantum circuits, for they provide a graphical and compact visualisation of said evolutions. However, two different circuits might represent the same evolution -- or matrix: for instance, two consecutive Hadamard gates or CNOT gates result in the identity, respectively on one and two qubits. One might then want to equip circuits with a set of transformation rules that change the circuit, but not the corresponding matrix.

The ZX-Calculus is another diagrammatic language well fitted for quantum computation. Even though it is closely related to circuits, it is laxer, which simplifies the search for transformation rules. Indeed, some non-unitary operations are allowed, and the wires can be bent at will, which greatly changes our conception of the mathematical objects we handle, called here diagrams. The ZX-Calculus can be used for reasoning on quantum circuits: these can easily be transformed into ZX-diagrams, and there exists a method that characterises ``circuit-like'' diagrams. The ZX-Calculus can hence be used to simplify a circuit, or to check whether two circuits are equivalent. The applications of the language are not limited to circuitry: for instance, the generators of the ZX-Calculus match exactly the operations of lattice surgery, a promising model designed for error correction.

A restriction of the language -- which we call a fragment -- is complete when two semantically equivalent diagrams of this fragment can be turned into one another using only the rules of the ZX-Calculus. Completeness guarantees that the language captures all the computational power of quantum computation. Some fragments were proven to be complete, but none of them is universal: they can be efficiently simulated with a classical computer. We propose the first complete axiomatisation of the ZX-Calculus for an approximately universal fragment, namely Clifford+T.
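The circuit identities quoted above — two consecutive Hadamard gates, or two consecutive CNOT gates, compose to the identity — can be checked numerically; this sketch uses plain Python matrices rather than any quantum library:

```python
import math

def matmul(A, B):
    """Naive matrix product, sufficient for 2x2 and 4x4 gates."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

h = 1 / math.sqrt(2)
H = [[h, h],
     [h, -h]]                 # Hadamard gate (one qubit)
CNOT = [[1, 0, 0, 0],
        [0, 1, 0, 0],
        [0, 0, 0, 1],
        [0, 0, 1, 0]]         # controlled-NOT gate (two qubits)

def is_identity(M, tol=1e-12):
    return all(abs(M[i][j] - (i == j)) < tol
               for i in range(len(M)) for j in range(len(M)))

# H . H = I on one qubit; CNOT . CNOT = I on two qubits.
assert is_identity(matmul(H, H))
assert is_identity(matmul(CNOT, CNOT))
```

These are exactly the kinds of semantic equalities that circuit rewrite rules, and the ZX-Calculus rules, are designed to capture syntactically.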
Mixed "electronic-instrumental" music brings together, in concert, instrumentalists and a computer that calculates in real time a sound response to the actions of the human musicians. For a genuine human-machine musical exchange to take place, the machine must perceive the musical relations that are built in the heat of the concert, and be able to respond to them and insert its own interpretation.
Musical synchronization between a human and a machine is a complex phenomenon that induces feedback loops which are difficult to anticipate, the human adapting to the machine which tries to adapt to the human. Moreover, the temporal relations at play escape a simple chronometric description: they form a set of event-based and duration-based timelines, whose progress is built in a distributed manner across all the actors of the performance, in relation to one another.
In this presentation, we will describe the temporal notions developed in Antescofo, a score-following system coupled with a real-time programming language, to address these issues.
Information Centric Networking (ICN) aims to be the next Internet architecture. It brings many features, such as network scalability and in-network caching, by moving the networking paradigm from the current host-centric model to a content-centric one, where what matters is the resources, not their location. In order to benefit from ICN's properties, Big Data architectures need to be adapted to comply with this new Internet architecture.
In this talk, we present our current effort to define a Big Data architecture operating on Named Data Networking (NDN) and capitalizing on its properties. We present NDFS (NDN Distributed File System), a fully distributed, resilient, secure and adaptable file system which constitutes the first layer (Data Layer) of the Big Data stack. To perform computations on the data replicated by a distributed file system, a Big Data architecture must also include a Compute Layer. Based on NDN, N-MapReduce is a new way to process large datasets; it has been designed to leverage the features of the data layer.