Benjamin Rouxel

Tightening contention delays while scheduling parallel applications on multi-core architectures

EMSOFT'2017 pdf pdf (slides) pdf (poster)

Abstract BibTeX

Multi-core systems are increasingly interesting candidates for executing parallel real-time applications, in the avionics, space or automotive industries, as they provide both computing capabilities and power efficiency. However, ensuring that timing constraints are met on such platforms is challenging, because some hardware resources are shared between cores.
Assuming worst-case contentions when analyzing the schedulability of applications may result in systems mistakenly declared unschedulable, although the worst-case level of contentions can never occur in practice. In this paper, we present two contention-aware scheduling strategies that produce a time-triggered schedule of the application’s tasks. Based on knowledge of the application’s structure, our scheduling strategies precisely estimate the effective contentions, in order to minimize the overall makespan of the schedule. An Integer Linear Programming (ILP) solution of the scheduling problem is presented, as well as a heuristic solution that generates schedules very close to those of the ILP (5% longer on average), with a much lower time complexity. Our heuristic improves the overall makespan of the resulting schedules by 19% compared to a worst-case contention baseline.
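
For intuition only, a minimal time-triggered scheduling ILP in the same spirit can be sketched as follows. This is a generic textbook-style formulation, not the one from the paper; in particular the contention term c_i is shown here as a given bound, whereas the paper derives it precisely from the tasks that actually overlap in the schedule.

\begin{align*}
\min\ & M \\
\text{s.t. } & s_i + w_i + c_i \le M && \forall i \quad \text{(makespan)} \\
& s_j \ge s_i + w_i + c_i && \forall (i,j) \in E \quad \text{(precedence)} \\
& s_i + w_i + c_i \le s_j + B\,(1 - y_{ij}) && \forall \{i,j\} \text{ on the same core} \\
& s_j + w_j + c_j \le s_i + B\, y_{ij} && \forall \{i,j\} \text{ on the same core}
\end{align*}

where s_i is the start time of task i, w_i its WCET in isolation, c_i its contention delay, y_{ij} a binary variable ordering two tasks mapped to the same core, B a large constant, and M the makespan being minimized.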

@article{Rouxel:2017:TCD:3145508.3126496,
author = {Rouxel, Benjamin and Derrien, Steven and Puaut, Isabelle},
title = {Tightening Contention Delays While Scheduling Parallel Applications on Multi-core Architectures},
journal = {ACM Trans. Embed. Comput. Syst.},
issue_date = {October 2017},
volume = {16},
number = {5s},
month = sep,
year = {2017},
issn = {1539-9087},
pages = {164:1--164:20},
articleno = {164},
numpages = {20},
url = {http://doi.acm.org/10.1145/3126496},
doi = {10.1145/3126496},
acmid = {3126496},
publisher = {ACM},
address = {New York, NY, USA},
keywords = {Real-time system, contention-aware scheduling},
}

STR2RTS: Refactored StreamIT benchmarks into statically analyzable parallel benchmarks for WCET estimation & real-time scheduling

WCET Workshop in ECRTS'2017 pdf pdf (slides)

Abstract BibTeX

We all had quite a time finding non-proprietary, architecture-independent, exploitable parallel benchmarks for Worst-Case Execution Time (WCET) estimation and real-time scheduling. However, there is no consensus on a parallel benchmark suite, unlike in the single-core era with the Mälardalen benchmark suite [12]. This document bridges part of this gap by presenting a collection of benchmarks with the following good properties: (i) easily analyzable by static WCET estimation tools (written in structured C language, in particular with neither goto statements nor dynamic memory allocation, and containing flow information such as loop bounds); (ii) independent from any particular run-time system (MPI, OpenMP) or real-time operating system. Each benchmark is composed of the C source code of its tasks and an XML description of the structure of the application (tasks and amount of data exchanged between them, when applicable). Each benchmark can be integrated in a full end-to-end empirical method validation protocol on multi-core architectures. The proposed collection of benchmarks is derived from the well-known StreamIT [21] benchmark suite and will be integrated in the TACleBench suite [11] in the near future. All these benchmarks are available at https://gitlab.inria.fr/brouxel/STR2RTS.
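
For illustration only, a refactored task boils down to a plain C work function with statically known loop bounds. The function name, buffer sizes and comment-style loop-bound annotations below are assumptions made for this sketch, not taken from the suite or its XML schema.

    /* Hypothetical STR2RTS-style task: consumes IN integers and produces
     * OUT integers per activation; no goto, no dynamic memory allocation. */
    #define IN  64
    #define OUT 64

    void lowpass_work(const int in[IN], int out[OUT])
    {
        for (int i = 0; i < OUT; i++) {      /* loop bound: 64 */
            int acc = 0;
            for (int j = 0; j < 4; j++)      /* loop bound: 4 */
                acc += in[(i + j) % IN];
            out[i] = acc / 4;
        }
    }

The companion XML file would describe this task and the amount of data it exchanges with its neighbours in the task graph.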

@InProceedings{rouxel_et_al:OASIcs:2017:7304,
author ={Benjamin Rouxel and Isabelle Puaut},
title ={{STR2RTS: Refactored StreamIT Benchmarks into Statically Analyzable Parallel Benchmarks for WCET Estimation & Real-Time Scheduling}},
booktitle ={17th International Workshop on Worst-Case Execution Time Analysis (WCET 2017)},
pages ={1:1--1:12},
series ={OpenAccess Series in Informatics (OASIcs)},
ISBN ={978-3-95977-057-6},
ISSN ={2190-6807},
year ={2017},
volume ={57},
editor ={Jan Reineke},
publisher ={Schloss Dagstuhl--Leibniz-Zentrum fuer Informatik},
address ={Dagstuhl, Germany},
URL ={http://drops.dagstuhl.de/opus/volltexte/2017/7304},
URN ={urn:nbn:de:0030-drops-73047},
doi ={10.4230/OASIcs.WCET.2017.1},
annote ={Keywords: Parallel benchmarks, Tasks scheduling, Worst-Case Execution Time estimation}
}

The Heptane Static Worst-Case Execution Time Estimation Tool

WCET Workshop'2017 pdf pdf (slides)

Abstract BibTeX

Estimation of worst-case execution times (WCETs) is required to validate the temporal behavior of hard real-time systems. Heptane is an open-source software program that estimates upper bounds of execution times on MIPS and ARM v7 architectures, offered to the WCET estimation community to experiment with new WCET estimation techniques. The software architecture of Heptane was designed to be as modular and extensible as possible, to facilitate the integration of new approaches. This paper is devoted to a description of Heptane, and includes information on the analyses it implements, how to use it, and how to extend it.
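
The keywords mention the Implicit Path Enumeration Technique (IPET). As a generic reminder (a textbook formulation, not Heptane's exact implementation), IPET bounds the WCET by maximizing the cost of an execution path expressed with basic-block execution counts:

\begin{align*}
\text{WCET} \;=\; \max\ & \sum_{b} c_b\, x_b \\
\text{s.t. } & x_b \;=\; \sum_{e \in \mathrm{in}(b)} f_e \;=\; \sum_{e \in \mathrm{out}(b)} f_e && \text{(flow conservation)} \\
& x_{\text{entry}} = 1, \qquad x_{\text{body}} \le N \cdot f_{\text{enter}} && \text{(loop bound } N\text{)}
\end{align*}

where x_b is the execution count of basic block b, c_b its worst-case cost, f_e the traversal count of CFG edge e, and f_enter the count of the edge entering a loop. The resulting integer linear program is then handed to an ILP solver.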

@InProceedings{hardy_et_al:OASIcs:2017:7303,
author ={Damien Hardy and Benjamin Rouxel and Isabelle Puaut},
title ={{The Heptane Static Worst-Case Execution Time Estimation Tool}},
booktitle ={17th International Workshop on Worst-Case Execution Time Analysis (WCET 2017)},
pages ={8:1--8:12},
series ={OpenAccess Series in Informatics (OASIcs)},
ISBN ={978-3-95977-057-6},
ISSN ={2190-6807},
year ={2017},
volume ={57},
editor ={Jan Reineke},
publisher ={Schloss Dagstuhl--Leibniz-Zentrum fuer Informatik},
address ={Dagstuhl, Germany},
URL ={http://drops.dagstuhl.de/opus/volltexte/2017/7303},
URN ={urn:nbn:de:0030-drops-73033},
doi ={10.4230/OASIcs.WCET.2017.8},
annote ={Keywords: Worst-Case Execution Time Estimation, Static Analysis, WCET Estimation Tool, Implicit Path Enumeration Technique}
}

Resource-aware task graph scheduling using ILP on multi-core

ACACES'2016 pdf pdf (poster)

Abstract

Multi-core usage has increased in real-time embedded systems. Despite the variety of existing multi-core architectures, there is a real need for mapping and scheduling applications on these architectures. Many techniques already exist to achieve this, but all of them assume the worst-case latency when dealing with shared resources. We propose here to study the impact of synchronisation on the global WCET. By adding synchronisation to tasks that use a shared resource, we are able to decrease the effect of contention on shared resources.
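
As a minimal sketch of the idea only (the primitive, names and buffer below are assumptions, not the mechanism of the abstract), serialising accesses to a shared resource with explicit synchronisation bounds the interference a task can suffer by the length of the other tasks' critical sections, instead of a blanket worst-case contention delay:

    #include <pthread.h>

    /* Hypothetical region backed by a shared resource (e.g. a shared bus). */
    static int shared_buffer[256];
    static pthread_mutex_t resource_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Tasks touch the shared resource only inside a critical section, so at
     * most one task accesses it at a time and the access time stays predictable. */
    void task_write(const int *src, int n, int offset)
    {
        pthread_mutex_lock(&resource_lock);
        for (int i = 0; i < n; i++)
            shared_buffer[offset + i] = src[i];
        pthread_mutex_unlock(&resource_lock);
    }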

Symbolic Evaluation and Disassembling x86 Low-Level Code

Master Thesis 2015 pdf

Abstract

Malware has existed since the beginning of computer science. Nowadays, malware authors use more and more sophisticated obfuscations to hide their code. Analyzing malware is not an easy task, as their authors use their imagination to find obfuscations that defeat standard disassemblers.
The major tool they use is a packer, which wraps the malicious code in a series of self-modifying steps. The key tools to study or detect malware are unpackers and disassemblers. Most of the time, those tools are studied independently and are thus not compatible with each other. In addition, we nowadays have to process a huge amount of suspicious software; in this context, existing solutions are not efficient.
This document focuses on x86 malware and how to build a disassembler in conjunction with an unpacker that can handle self-modifying code and other obfuscation techniques. It uses dynamic introspection and static analysis to reconstruct the program at a high level of abstraction. The proposed solution implements the work of [1, 2, 3]. The method is evaluated as sound and efficient. Results are encouraging, but further work will show whether the accuracy of the disassembler can be improved.

Symbolic Evaluation and x86 Disassembling

Master thesis, bibliography study 2015 pdf

Abstract

Cyberterrorists do not only use Denial of Service attacks. Recent years have seen the emergence of many highly evolved viruses, such as Duqu’s driver or Stuxnet. In order to facilitate the reverse-engineering process of such programs, researchers need powerful binary analysis platforms, as the source code is not available. The first step of such a platform is disassembling the binary file, which consists in translating low-level instruction code to a higher level of abstraction. However, viruses are not easy to disassemble. Indeed, developers use techniques to obfuscate their code in order to make the reverse-engineering process much harder. They also use self-modifying code, which makes most binary analysis tools inefficient. This document presents existing approaches and tools to reconstruct the program structure, called the Control Flow Graph, from binary code, and gives a hint on their usefulness when dealing with code obfuscation and self-modifying code. This work is part of the ANR BINSEC project, which aims to build an efficient binary analysis platform able to cope with such programming techniques. At first this platform will support x86 assembly code; however, the final goal is to be architecture-independent, in order to support all kinds of binary code with the help of an Intermediate Language.

Informations Traceability in Control Flow to Compute WCET with Compiler Optimizations

First year of Master 2014 pdf

Abstract

Real-time systems are omnipresent in our lives. Such systems require particular attention, as human lives depend on them. Designers need to compute the Worst-Case Execution Time (WCET) in order to guarantee that timing constraints are respected. Many WCET estimation techniques exist; the safest ones are based on static analysis. This method analyzes the code structure and models the target architecture to compute the WCET. To compute a WCET close to reality, the analysis must be done at the binary level (target platform modeling is easier at this level). Flow information is also required to improve the precision of the WCET; infeasible-path properties are one example. They may come from mutual exclusion between conditional branches in the execution flow of a program. Those properties are extracted at the high-level design stage. As most real-time applications are compiled to C code, flow information must be propagated through the compiler steps down to WCET estimation tools. But compilers optimize the code’s structure, and most of the flow information depends on that structure, so infeasible-path properties become invalid when the compiler optimizes the code. To reconcile real-time system developers with compiler optimizations, we created a framework to trace infeasible-path properties through the compiler. It is designed to be fully optimization-independent (if an optimization is added, removed or modified, our framework keeps working properly). The resulting WCET estimates are very encouraging: our traceability improved the WCET by around 30% in our test case.
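
A tiny hypothetical example of such a flow property (not taken from the report): the two branches below are guarded by contradictory conditions, so any path executing both then-parts is infeasible and its combined cost can be excluded from the WCET computation.

    int expensive_a(int x);
    int expensive_b(int x);

    int f(int x)
    {
        int r = 0;
        if (x > 0)              /* branch A */
            r = expensive_a(x);
        if (x < 0)              /* branch B: mutually exclusive with A */
            r = expensive_b(x);
        /* flow fact: A and B cannot both be taken in the same run, so a
         * WCET tool aware of this will not add both branch costs. */
        return r;
    }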

Embedded-Scilab : A Scilab Compiler Designed To Be Used With Embedded System

First year of Master 2014 pdf (french)

Abstract

Scilab is a very popular language for application prototyping. It is, however, impossible to use these prototypes on embedded architectures: they cannot embed a Scilab interpreter, notably because interpretation is generally less efficient than executing code compiled to machine language. One solution is therefore to compile Scilab to C beforehand. Since Scilab is a dynamically typed language, one of the crucial steps of the compilation is inferring the types of the variables. This document introduces the necessary compilation concepts, then presents solutions for compiling Scilab to C with the goal of obtaining efficient code, both in execution time and in memory footprint.
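
As a hedged sketch of what that type-inference step enables (the Scilab fragment and the generated function are purely illustrative, not output of the compiler described here), a dynamically typed Scilab prototype can be lowered to statically typed C with fixed-size arrays and no interpreter once the variable types are known:

    /* Scilab prototype (dynamically typed):
     *     x = 1:100;          // row vector of doubles
     *     s = sum(x .* x);    // scalar double
     * A possible statically typed C lowering after type inference: */
    double sum_of_squares(void)
    {
        double x[100];
        double s = 0.0;

        for (int i = 0; i < 100; i++)
            x[i] = (double)(i + 1);      /* x = 1:100 */
        for (int i = 0; i < 100; i++)
            s += x[i] * x[i];            /* s = sum(x .* x) */
        return s;
    }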