Critical Embedded Systems Seminar

The LTCI seminar on critical embedded systems is jointly organized by the ACES, SSH and LabSoC teams of the LTCI. It features talks on embedded systems security, real-time systems and model-based design.

The seminar is held entirely in English.


March 2019 Seminar

March 7th 2019 at 10 a.m. in room B603.

The presentations will be given by Julien De Antoni and Hana Mkaouar. You can find the detailed program with abstracts below.

Program:
10h00 – 10h45: Presentation of Julien De Antoni
10h45 – 11h00: Coffee break
11h00 – 11h45: Presentation of Hana Mkaouar

Towards Formal System Engineering

Julien De Antoni, Associate Professor at Polytech’Nice and researcher in the I3S/INRIA KAIROS team

The complexity of modern systems implies a development process involving different stakeholders. These stakeholders usually use Domain Specific Languages (DSLs), tailored both syntactically and semantically to their needs.

In order to embrace such a development process, we propose to make explicit the operational semantics of DSLs as well as the semantics of their interactions. More precisely, we developed formal metalanguages from which an interpreter, a model checker and a compiler can be synthesized automatically.

The talk will present some of the challenges on the way towards formal system engineering and the (partial) solutions we proposed. It will also include a short demo of the GEMOC Studio, an open-source software platform hosted by the Eclipse consortium to foster research on (heterogeneous) language engineering.
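As a rough illustration of what an explicit operational semantics looks like, the hypothetical sketch below (unrelated to the actual GEMOC Studio APIs or metalanguages) defines a tiny counter DSL through a small-step `step` function; the interpreter is then obtained by simply iterating that function.

```python
# Hypothetical toy example: the operational semantics of a tiny counter DSL,
# written as an explicit small-step function. All names are illustrative and
# not part of the GEMOC Studio or any real metalanguage.

from dataclasses import dataclass

@dataclass(frozen=True)
class State:
    counter: int
    pc: int  # index of the next instruction to execute

def step(program, state):
    """One small-step transition of the operational semantics."""
    if state.pc >= len(program):
        return None  # terminal configuration: no rule applies
    op, arg = program[state.pc]
    if op == "inc":
        return State(state.counter + arg, state.pc + 1)
    if op == "reset":
        return State(0, state.pc + 1)
    raise ValueError(f"unknown construct: {op}")

def run(program):
    """The interpreter is just the repeated application of `step`."""
    state = State(counter=0, pc=0)
    while (nxt := step(program, state)) is not None:
        state = nxt
    return state

if __name__ == "__main__":
    prog = [("inc", 2), ("inc", 3), ("reset", None), ("inc", 1)]
    print(run(prog))  # State(counter=1, pc=4)
```

The same explicit `step` relation is what a model checker would explore exhaustively, which gives the intuition behind synthesizing several tools from one semantic definition.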

A Formal Approach for Real-time Systems Engineering

Hana Mkaouar, Post-doc at TELECOM ParisTech/LTCI, ACES team

The verification of real-time systems is a complex and delicate task, especially when they are used in safety-critical domains. In this context, formal methods have become one of the advocated techniques in safety-critical software engineering. Yet they apply to specific formalisms, such as Petri nets and process algebras, which have a formal semantics and can be analyzed by dedicated tools. Indeed, formal verification techniques cannot be applied directly to design languages such as AADL, since these lack a formal semantics. For this reason, a common approach is to transform design models into formal specifications that are then verified by analysis tools. In this thesis, we aim at integrating formal methods into an AADL model-based development process. To do so, we provide a mapping from the AADL model into a formal model: we have defined and implemented a model transformation that enables the model-checking of a set of structural and behavioral properties of real-time systems.
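To give a flavor of what such a transformation does, here is a hypothetical, heavily simplified sketch that maps a periodic AADL-like thread onto a small labelled transition system; the actual mapping presented in the talk covers far more of AADL (ports, connections, behavior, scheduling) and targets a real analysis toolset.

```python
# Hypothetical sketch of a model transformation: a periodic AADL-like thread
# is mapped to a labelled transition system that a model checker could take
# as input. AadlThread and the state/label names are illustrative only.

from dataclasses import dataclass

@dataclass
class AadlThread:
    name: str
    period_ms: int
    deadline_ms: int
    wcet_ms: int

def to_transition_system(thread: AadlThread):
    """Translate one periodic thread into a labelled transition system."""
    idle, ready, running, done = "idle", "ready", "running", "done"
    return {
        "name": thread.name,
        "initial": idle,
        "transitions": [
            (idle, f"dispatch every {thread.period_ms} ms", ready),
            (ready, "start", running),
            (running, f"complete within {thread.wcet_ms} ms", done),
            (done, f"deadline check at {thread.deadline_ms} ms", idle),
        ],
    }

if __name__ == "__main__":
    ts = to_transition_system(AadlThread("sensor_filter", period_ms=20,
                                         deadline_ms=20, wcet_ms=5))
    for src, label, dst in ts["transitions"]:
        print(f"{src} --[{label}]--> {dst}")
```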


July 2018 Seminar

July 5th 2018 at 10 a.m. in room B603.

The presentations will be given by Alexander Schaub and Roberto Medina. You can find the detailed program with abstracts below.

Program:
10h00 – 10h45: Presentation of Alexander Schaub
10h45 – 11h00: Coffee break
11h00 – 11h45: Presentation of Roberto Medina

Reliability and Entropy of Delay PUFs: A Theoretical Analysis

Alexander Schaub (joint work with Jean-Luc Danger, Sylvain Guilley, and Olivier Rioul)

Abstract: Silicon physically unclonable functions (PUFs) are used in various applications requiring robust authentication. These systems exploit unpredictable process variations in electronic circuits. These process variations uniquely identify the produced hardware, which exhibits distinct properties in terms, for example, of delay propagation inside the circuit. By measuring and exploiting these properties, one can determine a "fingerprint" of the circuit, which cannot be physically replicated. This fingerprint can then be used, for instance, to produce a cryptographic key. The advantage is that this key does not need to be explicitly stored, which reduces the security risk. Other applications include challenge-response protocols, where the responses are determined from the physical properties of the circuit.

The reliability of the PUF is crucial because the cryptographic key or identifier generated by the PUF should remain steady over its lifetime. So far, reliability has been assessed empirically for all silicon PUFs and is relatively poor, with bit error rates (BER) greater than 4%. It is therefore necessary to enhance reliability through a post-processing stage using error-correcting codes. However, there was no predictive model to characterize the raw reliability level of PUFs. Such formal knowledge would be particularly useful for the designer to calibrate the post-processing complexity and to compare different PUF architectures without having recourse to a costly silicon implementation.

In this work, we develop a predictive framework which enables us to derive closed-form expressions of both entropy and reliability for several families of delay PUFs: the RO PUF, the RO sum PUF and the Loop PUF. By improving these delay PUFs with bit filtering, we can provide an explicit trade-off between complexity, reliability and entropy. Error rates of about 10⁻⁹ or even lower can be achieved by this method.
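As a back-of-the-envelope illustration of the reliability question (not the closed-form analysis presented in the talk), the Monte Carlo sketch below models each delay-PUF bit as the sign of a Gaussian delay difference that is re-measured with additive noise, and shows how bit filtering trades away a fraction of the bits (hence entropy) for a much lower BER. All numerical values are arbitrary.

```python
# Hedged Monte Carlo sketch: each PUF bit is the sign of a static,
# process-dependent delay difference; every evaluation adds measurement noise.
# Filtering out bits whose delay difference is small improves reliability.
# The sigma values and the threshold are illustration values only.

import numpy as np

rng = np.random.default_rng(0)
n_bits = 200_000
sigma_process = 1.0   # spread of the static delay differences (process variation)
sigma_noise = 0.1     # measurement noise added at each evaluation

delta = rng.normal(0.0, sigma_process, n_bits)           # one difference per bit
enrolled = delta > 0                                      # reference response
remeasured = (delta + rng.normal(0.0, sigma_noise, n_bits)) > 0

print(f"raw BER ~ {np.mean(enrolled != remeasured):.4%}")

# Bit filtering: keep only bits whose delay difference is well above the noise.
keep = np.abs(delta) > 3 * sigma_noise
ber_filtered = np.mean(enrolled[keep] != remeasured[keep])
print(f"kept {keep.mean():.1%} of the bits, filtered BER ~ {ber_filtered:.4%}")
```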

Deployment of Mixed-Criticality Data-Driven Applications on Multi-core Architectures

Roberto Medina

Abstract: In safety-critical systems, due to safety requirements, only functionalities with the same level of criticality should share resources. However, this practice often leads to a waste of computation power, even more so when multi-core architectures are considered. Mixed-criticality systems propose a solution to this problem: by defining execution modes for the system, critical and non-critical tasks can share a common execution platform. Many contributions in this domain have been proposed in the literature; nonetheless, most of them only consider independent task-set models. At an industrial level, independent task sets are restrictive: the methods used to develop reactive safety-critical systems often model such systems as data-dependent graphs. We consider mixed-criticality applications modeled as Directed Acyclic Graphs (DAGs) representing data dependencies among tasks. Our work has led to scheduling techniques that outperform the state of the art. In addition, since the execution of non-critical tasks ensures the system’s quality of service, we have also proposed availability analyses for this type of task.
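As an illustration of the task model only (not of the scheduling techniques from the talk), the sketch below encodes a small mixed-criticality DAG whose vertices carry LO and HI worst-case execution times, and computes the workload that remains once a switch to HI mode drops the non-critical vertices. All names and numbers are invented.

```python
# Illustrative mixed-criticality DAG task model (made-up example, not the
# scheduling algorithm from the talk): vertices have LO/HI WCETs, edges are
# data dependencies, and a mode switch to HI discards LO-criticality vertices.

from dataclasses import dataclass, field

@dataclass
class Vertex:
    name: str
    crit: str                 # "LO" or "HI"
    wcet_lo: int
    wcet_hi: int = 0          # only meaningful for HI-criticality vertices
    succs: list = field(default_factory=list)  # data-dependency successors

def active_vertices(dag, mode):
    """In HI mode only HI-criticality vertices keep executing."""
    return [v for v in dag if mode == "LO" or v.crit == "HI"]

def workload(dag, mode):
    """Total execution demand of the graph under the budgets of a given mode."""
    return sum(v.wcet_hi if mode == "HI" else v.wcet_lo
               for v in active_vertices(dag, mode))

if __name__ == "__main__":
    sense = Vertex("sense", "HI", wcet_lo=2, wcet_hi=4)
    log = Vertex("log", "LO", wcet_lo=3)
    act = Vertex("actuate", "HI", wcet_lo=2, wcet_hi=3)
    sense.succs = [log, act]
    log.succs = [act]
    dag = [sense, log, act]
    print("LO-mode demand:", workload(dag, "LO"))
    print("HI-mode demand:", workload(dag, "HI"))  # 'log' is dropped
```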


February 2018 Seminar

February 22nd, 2018 at 10 a.m. in room B115.

The presentations will be given by Jean Leneutre and Benjamin Dauphin. You can find the detailed program with abstracts below.

Program:
10h00 – 10h45: Presentation of Jean Leneutre
10h45 – 11h00: Coffee break
11h00 – 11h45: Presentation of Benjamin Dauphin

Towards a Strategic Approach to Security Based on Game Theory

Jean Leneutre

Abstract: In recent years, a large number of works have proposed to use game theory as a decision-support framework for security. Problems tackled in these works include:
– how to optimize the allocation of security resources?
– how to configure protection or monitoring resources in an optimal way?
– how to select the best security response to an incident?

Along this line of research, we will present in this talk a recent contribution related to the integrity and availability of outsourced data. A Service Level Agreement (SLA) is usually signed between the Cloud Provider and the customer. For redundancy purposes, it is important to verify the Cloud Provider’s compliance with the data backup requirements of the SLA. A number of security mechanisms exist to check the integrity and availability of outsourced data. This task can be performed by the customer or delegated to an independent entity that we will refer to as the Verifier. However, checking the availability of data introduces extra costs, which can discourage the customer from performing data verification too often. The interaction between the Verifier and the Cloud Provider can be captured using game theory in order to find an optimal data verification strategy. We formulate this problem as a two-player, non-cooperative, one-shot game. We consider the case in which each type of data is replicated a number of times that can depend on a set of parameters including, among others, its size and sensitivity. We analyze the strategies of the Cloud Provider and the Verifier at the Nash equilibrium and derive the expected behavior of both players. We also consider a second formulation of the problem as a Stackelberg game, in which there is a leader and a follower.

While providing valuable insights for decision making in security, these game-theoretic models are not without limitations, in particular regarding the high level of abstraction they imply. In the second part of the talk, we will discuss these limitations and propose future research directions to improve the applicability of game-theoretic approaches to security.
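As a toy illustration of such a verification game (with made-up payoffs, not the exact model from the talk), the sketch below computes the mixed-strategy Nash equilibrium of a 2x2 inspection game between the Verifier and the Cloud Provider, by making each player indifferent between its two actions.

```python
# Toy 2x2 inspection game (illustrative payoffs only). The Verifier audits or
# not; the Cloud Provider keeps the promised replicas or cheats. At the mixed
# Nash equilibrium each player randomizes so that the other is indifferent.

def mixed_equilibrium(audit_cost, loss_if_undetected, fine, cheating_gain):
    """Return (p_audit, q_cheat) at the mixed-strategy Nash equilibrium."""
    # Provider indifferent between cheating and complying:
    #   p*(gain - fine) + (1 - p)*gain = 0   =>   p = gain / fine
    p_audit = cheating_gain / fine
    # Verifier indifferent between auditing and not auditing:
    #   -cost + q*fine = -q*loss             =>   q = cost / (fine + loss)
    q_cheat = audit_cost / (fine + loss_if_undetected)
    assert 0 <= p_audit <= 1 and 0 <= q_cheat <= 1, \
        "these parameters yield a pure-strategy equilibrium instead"
    return p_audit, q_cheat

if __name__ == "__main__":
    p, q = mixed_equilibrium(audit_cost=1.0, loss_if_undetected=20.0,
                             fine=10.0, cheating_gain=4.0)
    print(f"Verifier audits with probability {p:.2f}; "
          f"provider cheats with probability {q:.2f}")
```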

Automating the Measurements and Identifying Power MOSFET Devices

Benjamin Dauphin (LabSoC team)

Power MOSFET devices are a primary component of power converters and are therefore widely used in electronic systems. This widespread usage makes the study of these devices an important domain. Understanding how these devices function makes it possible to design higher-performance systems. Moreover, due to their extensive use, ensuring the security of an embedded system involves ensuring the authenticity of the power MOSFET devices it contains. Authenticity must be ensured because a refurbished or counterfeit device could affect the performance and/or security of critical systems such as automobiles.

To achieve device authentication, we:
– Designed an automatic measurement system to measure the devices more easily, and
– Used machine learning on the measurement curves obtained in order to identify the devices (a minimal sketch of this step is given below).
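As a rough sketch of the identification step (with synthetic curves standing in for real power MOSFET measurements, and an off-the-shelf classifier that may differ from the one actually used in this work), the example below treats each measurement curve as a feature vector and learns to recognize which device family it came from.

```python
# Hedged sketch: classify measurement curves with an off-the-shelf model.
# The synthetic tanh "I-V" curves below only stand in for real measurements.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_devices, curves_per_device, n_points = 5, 40, 64

# One template curve per device family; measurements are noisy copies of it.
v = np.linspace(0.0, 1.0, n_points)
templates = [np.tanh(8 * (v - 0.3 - 0.05 * k)) for k in range(n_devices)]
X = np.vstack([t + rng.normal(0.0, 0.05, (curves_per_device, n_points))
               for t in templates])
y = np.repeat(np.arange(n_devices), curves_per_device)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=0, stratify=y)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"identification accuracy on held-out curves: {clf.score(X_te, y_te):.2%}")
```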