Synthèse des prospectives 2019
Introduction
Computing devices have become ubiquitous in everyday life, and this has led to growing concerns about their reliability. Because errors or malfunctions in these devices can have dramatic consequences, they are expected to be guaranteed safe and trustworthy, and to behave in an explainable way. The goal of the teams in the "Formal methods, models and languages" research area is to develop tools that support every step from the rigorous design of computing devices to the formal proof that they behave as expected.
Societal challenges
Adapting digital technology to the ecological transition
Digital technology must be adapted to the major challenge of the ecological transition:
- Methods to better understand the environmental and societal impact of digital technology
- A greater need for resiliency
- A need to rebuild systems and infrastructures so that they require less energy, fewer resources, less maintenance, etc.
Confidentiality
An increasing number of applications, especially those used in the construction of AI systems, are data-driven, and it is necessary to ensure that the data used by these applications does not compromise the privacy of individuals.
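As a concrete illustration, here is a minimal OCaml sketch of one standard privacy-preserving technique, differential privacy, which is not named in the original text: a counting query is released with Laplace noise instead of the exact value. The query, the epsilon value and the function names are purely illustrative.

(* Draw from a Laplace distribution with mean 0 and scale b,
   using inverse-CDF sampling. *)
let laplace_sample b =
  let u = Random.float 1.0 -. 0.5 in   (* u uniform on [-0.5, 0.5) *)
  let sign = if u < 0.0 then -1.0 else 1.0 in
  -. (b *. sign *. log (1.0 -. 2.0 *. Float.abs u))

(* Release the number of records matching [pred] with epsilon-differential
   privacy: a counting query has sensitivity 1, so the noise scale is 1/epsilon. *)
let private_count ~epsilon pred data =
  let exact = float_of_int (List.length (List.filter pred data)) in
  exact +. laplace_sample (1.0 /. epsilon)

let () =
  Random.self_init ();
  let ages = [23; 35; 41; 29; 52; 61] in   (* hypothetical data *)
  Printf.printf "noisy count of ages over 40: %f\n"
    (private_count ~epsilon:0.5 (fun a -> a > 40) ages)

The noisy answer is useful in aggregate while limiting what can be inferred about any single individual in the data.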
System (software, hardware and cyber-physical) safety
Challenges (Problems to solve)
Certified construction of systems
Error correction
Explainability and accountability of embedded and cyber-physical systems
The increasing complexity and autonomy of CPS have made accountability and explainability requirements more crucial: in order to be socially acceptable, the behaviors of CPS must be explainable. This is particularly the case for systems in which decision making is based on AI, but it is not limited to them. Along with safety, the respect, and the violation, of properties such as security and privacy must be explainable. This explainability should in particular make it possible to understand where the blame lies, from a legal point of view, when a CPS malfunctions.
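One simple way to support such accountability, sketched below in OCaml with hypothetical sensor fields and control rules, is for a controller to record, alongside each decision, the inputs it saw and the rule that fired, so that after an incident the trace shows which rule produced which behavior.

type inputs = { speed : float; obstacle_dist : float }
type decision = Brake | Coast | Accelerate

(* One audit-log entry per decision: inputs, the rule that fired, the outcome. *)
type trace_entry = { at : inputs; rule : string; chose : decision }

let audit_log : trace_entry list ref = ref []

(* Each branch names the rule that justifies the decision it returns. *)
let decide (i : inputs) : decision =
  let rule, d =
    if i.obstacle_dist < 10.0 then ("emergency-braking", Brake)
    else if i.speed > 30.0 then ("speed-limit", Coast)
    else ("default", Accelerate)
  in
  audit_log := { at = i; rule; chose = d } :: !audit_log;
  d

(* After an incident, replay the trace to explain each decision. *)
let explain () =
  List.iter
    (fun e ->
       Printf.printf "rule %s fired at speed=%.1f dist=%.1f\n"
         e.rule e.at.speed e.at.obstacle_dist)
    (List.rev !audit_log)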
Research themes
Concepts, languages and tools for modelling and verifying complex systems
- Graph rewriting
- Automated reasoning (see the sketch after this list)
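As a toy illustration of automated reasoning in its simplest form, and not drawn from any particular team's tools, here is an OCaml sketch: propositional formulas over named variables and a brute-force validity check. Production tools (SAT and SMT solvers, first-order provers) rely on far more refined algorithms.

type formula =
  | Var of string
  | Not of formula
  | And of formula * formula
  | Or of formula * formula
  | Implies of formula * formula

(* Variables occurring in a formula. *)
let rec vars = function
  | Var x -> [ x ]
  | Not f -> vars f
  | And (f, g) | Or (f, g) | Implies (f, g) -> vars f @ vars g

(* Truth value of a formula under an assignment [env]. *)
let rec eval env = function
  | Var x -> List.assoc x env
  | Not f -> not (eval env f)
  | And (f, g) -> eval env f && eval env g
  | Or (f, g) -> eval env f || eval env g
  | Implies (f, g) -> (not (eval env f)) || eval env g

(* A formula is valid if it holds under every assignment of its variables. *)
let valid f =
  let xs = List.sort_uniq compare (vars f) in
  let rec check env = function
    | [] -> eval env f
    | x :: rest -> check ((x, true) :: env) rest && check ((x, false) :: env) rest
  in
  check [] xs

let () =
  (* Peirce's law: ((p -> q) -> p) -> p *)
  let p = Var "p" and q = Var "q" in
  let peirce = Implies (Implies (Implies (p, q), p), p) in
  Printf.printf "valid: %b\n" (valid peirce)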
Embedded AI
"Embedded IA" and more specifically "embedded machine learning". There are now FPGA and ASIC implementations for ML and CNN (convolutional neural networks), as well as multicore deployments.
Proof formalisation in applied mathematics and theoretical computer science
E.g.:
- Development of libraries of certified definitions, theorems and algorithms for financial mathematics, logic, etc. (a toy example follows)
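As a toy illustration of what an entry in such a certified library looks like, and not taken from any actual library, here is a short Lean 4 sketch: a definition packaged with a machine-checked specification, where the proof is discharged by Lean's omega tactic for linear arithmetic.

-- A toy certified-library entry: a definition together with its specification.
def double (n : Nat) : Nat := n + n

-- The specification is a theorem that Lean checks once and for all.
theorem double_add (m n : Nat) : double (m + n) = double m + double n := by
  unfold double
  omega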
Quantum information science
Typed high-level languages for data science
The preparation of data (clean-up, queries, distributed handling, etc.) is of paramount importance in the construction of AI applications. This theme covers the construction of type systems for the analysis and handling of data, as well as analysis methods and optimized compilation for these languages, in particular for queries.
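A minimal OCaml sketch of the idea, with a hypothetical record type and field names: the types distinguish raw rows, which may contain missing values, from cleaned rows, so that queries can only be written against data that has already been cleaned up.

(* Raw rows may have missing values. *)
type reading = { sensor : string; value : float option; ts : int }

(* Clean-up: drop rows with missing values. The result type guarantees
   that downstream code only ever sees a plain float. *)
let clean (rows : reading list) : (string * float * int) list =
  List.filter_map
    (fun r -> match r.value with
       | Some v -> Some (r.sensor, v, r.ts)
       | None -> None)
    rows

(* A typed query: average value reported by a given sensor, if any. *)
let average_for (name : string) (rows : (string * float * int) list) : float option =
  let vs = List.filter_map (fun (s, v, _) -> if s = name then Some v else None) rows in
  match vs with
  | [] -> None
  | _ -> Some (List.fold_left ( +. ) 0.0 vs /. float_of_int (List.length vs))

Because clean-up is reflected in the types, forgetting it is a compile-time error rather than a runtime failure in the middle of a pipeline.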