Verifying the resistance of embedded implementations of Java Card bytecode verifiers against attacks is a complex task. Since current methods are not efficient enough, only manual test generation is possible. To automate this process, we propose a method called VTG ("Vulnerability Test Generation"). Based on a formal representation of the functional behaviours of the system under test, a set of intrusion tests is generated. The method draws on mutation testing and model-based testing techniques. First, the model is mutated according to rules we have defined, in order to represent potential attacks. Tests are then extracted from the mutant models. Two Event-B models have been proposed. The first represents the structural constraints of Java Card application files; the VTG generates hundreds of abstract tests from it in a few seconds. The second model consists of 66 events representing 61 Java Card instructions. Mutation takes a few seconds, and test extraction produces 223 tests in 45 minutes. Each test checks one precondition, or a combination of preconditions, of an instruction. This method allowed us to test several implementation mechanisms of Java Card bytecode verifiers. Although developed for our case study, the proposed method is generic and has been applied to other case studies.
Mondays, from 2 p.m. - UPEC CMC - Room P2-131
For a fixed base b, any integer can be encoded as a finite word over the alphabet of digits. In dimension d>0, a vector of d integers is encoded as a word over the alphabet of vectors of d digits. A set of integer vectors is thus encoded as a language over this alphabet, and an automaton over the alphabet of digit vectors recognizes a set of integer vectors. Similarly, a Büchi automaton recognizes a set of vectors of reals.
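As a concrete illustration (a toy sketch, not taken from the talk), take b = 2 and dimension 1: the set of multiples of 3 is recognized by a three-state automaton that tracks the value of the word read so far modulo 3, reading the most-significant digit first.

```python
def encode(n):
    """Base-2 encoding of a natural number as a word of digits, MSB first."""
    return [int(d) for d in bin(n)[2:]] if n else [0]

def accepts_multiple_of_3(word):
    """Three-state DFA over the digit alphabet {0, 1}: the state is the
    value of the word read so far, modulo 3; accept in state 0."""
    state = 0
    for digit in word:
        state = (2 * state + digit) % 3
    return state == 0

# The automaton recognizes exactly the encodings of multiples of 3:
print(all(accepts_multiple_of_3(encode(n)) == (n % 3 == 0)
          for n in range(100)))  # → True
```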
It is then natural to consider algorithms which decide whether the set of vectors of numbers accepted by a finite automaton admits some properties. For example, Honkala proved in 1986 that it is decidable whether an automaton recognizes an FO[N,+,<]-definable set of integers. Muchnik proved a similar result for automata recognizing sets of vectors of reals. A polynomial-time algorithm was then given by Leroux in 2006, and a quasi-linear-time algorithm for the case of dimension 1 was given in 2013 by Marsault and Sakarovitch.
We show that it is decidable in linear time:
-whether a set of reals recognized by a given finite minimal weak Büchi automaton is FO[R,+,<]-definable.
-whether a set of vectors recognized by a minimal finite deterministic automaton can be defined in some logics less expressive than FO[N,+], such as FO[N,<,mod].
Furthermore, formulas which define those sets can be computed in linear time and cubic time, respectively.
Furthermore, it is shown that it is decidable whether a set of vectors of reals or of integers accepted by a (weak Büchi) automaton:
-is definable in a logic which admits quantifier elimination: for example, FO[R,+,<], FO[R,Z,+,<,mod,floor] (where mod denotes the modular predicates), FO[N,<,mod] or FO[<].
-satisfies a first-order formula in some formalism: for example, whether the set is a submonoid/subsemigroup of (R^d,+).
In this talk, I intend to:
-introduce automata recognizing sets of vectors of numbers,
-characterize the sets of numbers which are FO[N,<,mod]- and FO[R,+,<]-definable,
-introduce and generalize the methods used by Honkala, Muchnik and Marsault-Sakarovitch,
-explain how those methods can be applied to the above-mentioned problems.
In this talk, we present infinite time Turing machines (ITTMs), from
the original definition of the model to some new infinite-time
computability results.
We will present algorithmic techniques that highlight some
properties of the ITTM-computable ordinals. In particular, we will
study gaps in ordinal computation times, that is to say, ordinal times
at which no infinite time program halts.
In this talk we will consider membrane (P) systems working with
multisets with integer multiplicities. We will focus on a model in
which rule applicability is not influenced by the contents of the
membrane. We show that this variant is closely related to blind
register machines and integer vector addition systems. Furthermore, we
describe the computational power of these models in terms of linear and
semilinear sets of integer vectors.
We introduce a new setting in which a population of agents, each modelled by a finite-state system, is controlled uniformly: the controller applies the same action to every agent. The framework is largely inspired by the control of a biological system, namely a population of yeasts, where the controller may only change the environment common to all cells. In this talk, we will describe a sure synchronization problem for such populations: no matter how individual agents react to the actions of the controller, the controller aims at driving all agents synchronously to a goal set of states. The agents are naturally represented by a non-deterministic finite-state automaton, the same for every agent, and the whole system is encoded as a 2-player game. The first player chooses actions, and the second player resolves non-determinism for each agent. The game with m agents is called the m-population game. A natural parameterized control problem, given the automaton representing the agents, is whether player 1 wins the m-population game for any population size m. We show that if the answer is negative, there exists a cut-off, that is, a population size m0 such that there exists a winning controller for populations of size m < m0, and there is none for populations of size m ≥ m0. Surprisingly, we show that this cut-off can be doubly exponential in the number of states of the NFA. While this suggests a high complexity for the parameterized control problem, we actually show that it can be solved in EXPTIME and is PSPACE-hard.
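To make the game concrete, here is a brute-force sketch for a fixed population size m (the toy NFA and all names are illustrative, not from the talk): configurations are multisets of agent states, player 2 resolves each agent's nondeterminism independently, and sure reachability of the all-goal configuration is computed by a backward attractor fixpoint.

```python
from itertools import product

# Illustrative toy agent NFA (not from the talk):
STATES = ['s', 't']
ACTIONS = ['a', 'b']
GOAL = {'t'}
DELTA = {
    ('s', 'a'): {'s', 't'},   # nondeterministic: the agent may stay or move
    ('t', 'a'): {'t'},
    ('s', 'b'): {'t'},        # 'b' deterministically sends 's' to 't'
    ('t', 'b'): {'t'},
}

def successors(config, action):
    """All configurations player 2 can produce by resolving each
    agent's nondeterminism independently; None if the action blocks."""
    per_agent = []
    for q in config:
        nxt = DELTA.get((q, action))
        if not nxt:
            return None
        per_agent.append(nxt)
    # configurations are order-insensitive, so keep them sorted
    return {tuple(sorted(c)) for c in product(*per_agent)}

def player1_wins(m):
    """Backward attractor on the m-population game: a configuration is
    winning if all agents are in GOAL, or if some action leads only to
    winning configurations, whatever player 2 does."""
    configs = {tuple(sorted(c)) for c in product(STATES, repeat=m)}
    winning = {c for c in configs if set(c) <= GOAL}
    changed = True
    while changed:
        changed = False
        for c in configs - winning:
            for a in ACTIONS:
                succ = successors(c, a)
                if succ is not None and succ <= winning:
                    winning.add(c)
                    changed = True
                    break
    return tuple(['s'] * m) in winning

print([player1_wins(m) for m in (1, 2, 3)])  # → [True, True, True]
```

In this toy NFA the controller wins for every m by playing 'b'; the doubly-exponential cut-off phenomenon from the talk requires far more intricate automata.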
To achieve scalability, modern Internet services often rely on
distributed databases with consistency models for transactions
weaker than serializability.
At present, application programmers often lack techniques to ensure
that the weakness of these consistency models does not violate
application correctness.
In this talk I will present criteria to check whether applications
that rely on a database providing only weak consistency are robust,
i.e., behave as if they used a database providing serializability,
and I will focus on a consistency model called Parallel Snapshot Isolation.
The results I will outline handle systematically and uniformly several
recently proposed weak consistency models, as well as a mechanism for
strengthening consistency in parts of an application.
We focus on stochastic games,
which can model interaction with an adverse environment,
as well as probabilistic behaviour arising from uncertainties.
Our contribution is twofold.
First, we study long-run specifications
expressed as quantitative multi-dimensional
mean-payoff and ratio objectives.
We then develop an algorithm to synthesise epsilon-optimal strategies for conjunctions of almost sure satisfaction for mean payoffs
and ratio rewards (in general games) and Boolean combinations of expected
mean-payoffs (in controllable multi-chain games).
Second, we propose a compositional framework, together with assume-guarantee rules,
which enables winning strategies synthesised for individual components to be composed into a winning strategy for the composed game.
The framework applies to a broad class of properties, which also include expected total rewards,
and has been implemented in the software tool PRISM-games.
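As a small illustration of the multi-dimensional mean-payoff objectives mentioned above (a toy sketch, not PRISM-games code): along an ultimately periodic run, the mean payoff in each dimension is just the average reward on the repeated cycle, since the finite prefix vanishes in the limit average.

```python
def mean_payoff(cycle):
    """Limit-average of a d-dimensional reward sequence along the cycle
    of an ultimately periodic run; the finite prefix does not matter."""
    d = len(cycle[0])
    n = len(cycle)
    return tuple(sum(step[i] for step in cycle) / n for i in range(d))

# Illustrative 2-dimensional rewards repeated forever:
print(mean_payoff([(1, 0), (3, 2)]))  # → (2.0, 1.0)
```

A conjunction of mean-payoff objectives then asks that every coordinate of this vector meet its threshold.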
It seems that all statisticians know what the concept of p-value is, but none is willing to explain it. The only reference we have been able to find is the 1974 book "Theoretical Statistics" by Cox and Hinkley, where the authors define the concept, sort of, but do not use the term "p-value."
We explain the concept presupposing only rudimentary probability theory; we examine it and discuss its use.
For simplicity, our explanation focuses on the discrete case with no outcomes of zero probability, but we will say a few words on the general case as well.
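In that discrete setting, the definition can be made concrete with a toy sketch (illustrative only, not from the talk): given the null distribution over a finite outcome space and a test statistic, the p-value of an observed outcome is the null probability of a statistic at least as extreme as the one observed.

```python
from fractions import Fraction
from math import comb

def p_value(null_probs, statistic, observed):
    """Discrete p-value: the null probability that the test statistic
    is at least as extreme (here: >=) as its observed value."""
    t_obs = statistic(observed)
    return sum(p for x, p in null_probs.items() if statistic(x) >= t_obs)

# Toy example: 10 fair coin flips under the null hypothesis,
# test statistic = number of heads, observed outcome = 8 heads.
null_probs = {k: Fraction(comb(10, k), 2**10) for k in range(11)}
print(p_value(null_probs, lambda k: k, 8))  # → 7/128
```

Exact rationals make the arithmetic transparent: here the p-value is (45 + 10 + 1)/1024 = 7/128, the chance of 8 or more heads under the null.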
The talk builds on joint work with Vladimir Vovk of the University of London.