Tools

Reference Traces

REASSURE produced a series of datasets obtained by measuring power traces on various physical platforms. These datasets are freely available and can be used to test attacks, to evaluate leakage detection methods, or for any other purpose. Our only requirement is that the sources be acknowledged when communicating about subsequent activities (see the license files in the repositories).

Software Tools

REASSURE developed various tools to help develop or assess side-channel-resistant implementations. These tools are released under public licenses (see the license files in the repositories). Here too, our main requirement is that the sources be acknowledged when communicating about subsequent activities.

Publications

Reducing a Masked Implementation’s Effective Security Order with Setup Manipulations And an Explanation Based on Externally-Amplified Couplings

By Itamar Levi, Davide Bellizia and François-Xavier Standaert, published in IACR Transactions on Cryptographic Hardware and Embedded Systems, 2019(2)

Couplings are a type of physical default that can violate the independence assumption needed for the secure implementation of the masking countermeasure. Two recent works by De Cnudde et al. put forward qualitatively that couplings can cause information leakages of lower order than theoretically expected. However, the (quantitative) amplitude of these lower-order leakages (e.g., measured as the amplitude of a detection metric such as Welch's t statistic) was usually lower than that of the (theoretically expected) d-th order leakages, so the actual security level of these implementations remained unaffected. In addition, in order to make the couplings visible, the authors sometimes needed to amplify them internally (e.g., by tweaking the placement and routing or by iterating linear operations on the shares). In this paper, we first show that the amplitude of low-order leakages in masked implementations can be amplified externally, by tweaking side-channel measurement setups in a way that is under the control of a power analysis adversary.

Our experiments put forward that the “effective security order” of both hardware (FPGA) and software (ARM-32) implementations can be reduced, leading to concrete reductions of their security level. For this purpose, we move from the detection-based analyses of previous works to attack-based evaluations, allowing us to confirm the exploitability of the lower-order leakages that we amplify. We also provide a tentative explanation for these effects based on couplings, and describe a model that can be used to predict them as a function of the measurement setup's external resistor and the implementation's supply voltage. We posit that the effective security orders observed are mainly due to “externally-amplified couplings” that can be systematically exploited by actual adversaries.

Study of Deep Learning Techniques for Side-Channel Analysis and Introduction to ASCAD Database

By Ryad Benadjila, Emmanuel Prouff, Rémi Strullu, Eleonora Cagli and Cécile Dumas, published in the IACR Cryptology ePrint Archive, 2018

To provide assurance of a system's resistance against side-channel analysis, several national and private schemes today promote an evaluation strategy, common in classical cryptography, which focuses on the most powerful adversary, who may train to learn the dependency between the device behaviour and the sensitive data values. Several works have shown that this kind of analysis, known as Template Attacks in the side-channel domain, can be rephrased as a classical Machine Learning classification problem with a learning phase. Following the current trend in the latter area, recent works have demonstrated that deep learning algorithms are very efficient for conducting security evaluations of embedded systems and have many advantages compared to other methods. Unfortunately, their hyper-parametrization has often been kept secret by the authors, who only discussed the main design principles and the attack efficiencies.

This is clearly an important limitation of previous works, since (1) such parametrization is known to be a challenging question in Machine Learning and (2) it does not allow for the reproducibility of the presented results. This paper aims to address these limitations in several ways. First, completing recent works, we propose a comprehensive study of deep learning algorithms when applied in the context of side-channel analysis, and we discuss the links with classical template attacks. Secondly, we address the question of the choice of hyper-parameters for the classes of multi-layer perceptron networks and convolutional neural networks. Several benchmarks and rationales are given in the context of the analysis of a masked implementation of the AES algorithm. To enable perfect reproducibility of our tests, this work also introduces an open platform including all the sources of the target implementation, together with the campaign of electromagnetic measurements exploited in our benchmarks. This open database, named ASCAD, has been specified to serve as a common basis for further works on this subject. Our work confirms the conclusions made by Cagli et al. at CHES 2017 about the high potential of convolutional neural networks. Interestingly, it shows that the approach followed to design the VGG-16 architecture used for image recognition also seems to be sound when it comes to fixing an architecture for side-channel analysis.
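
As a rough illustration of the kind of models studied, the sketch below builds a small VGG-style 1-D CNN for key-byte classification, assuming Keras; the trace length, layer sizes and training settings are illustrative placeholders, not the paper's actual hyper-parameters.

```python
# Minimal sketch of a VGG-style 1-D CNN for side-channel key-byte
# classification, in the spirit of the ASCAD baselines. All sizes and
# settings here are illustrative assumptions.
from tensorflow.keras import layers, models

def build_cnn(trace_len=700, n_classes=256):
    model = models.Sequential([
        layers.Input(shape=(trace_len, 1)),
        layers.Conv1D(64, 11, activation='relu', padding='same'),
        layers.AveragePooling1D(2),
        layers.Conv1D(128, 11, activation='relu', padding='same'),
        layers.AveragePooling1D(2),
        layers.Flatten(),
        layers.Dense(512, activation='relu'),
        layers.Dense(n_classes, activation='softmax'),  # one class per hypothesis on the intermediate value
    ])
    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    return model

# traces: (n, trace_len) float array; labels: int intermediate values in [0, 255]
# model = build_cnn()
# model.fit(traces.reshape(len(traces), -1, 1), labels, epochs=50, batch_size=128)
```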

Masking Proofs Are Tight and How to Exploit it in Security Evaluations

By Vincent Grosso and François-Xavier Standaert, presented at Advances in Cryptology – EUROCRYPT 2018

Evaluating the security level of a leaking implementation against side-channel attacks is a challenging task. This is especially true when countermeasures such as masking are implemented since in this case: (i) the amount of measurements to perform a key recovery may become prohibitive for certification laboratories, and (ii) applying optimal (multivariate) attacks may be computationally intensive and technically challenging. In this paper, we show that by taking advantage of the tightness of masking security proofs, we can significantly simplify this evaluation task in a very general manner.

More precisely, we show that the evaluation of a masked implementation can essentially be reduced to that of an unprotected implementation. In addition, we show that, although optimal attacks against masking schemes are computationally intensive for large numbers of shares, heuristic (soft analytical side-channel) attacks can approach optimality efficiently. As part of this second contribution, we also improve over the recent multivariate (aka horizontal) side-channel attacks proposed at CHES 2016 by Battistello et al.
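
To make the shortcut concrete, the sketch below extrapolates the data complexity of attacking a d-share implementation from the mutual information measured on a single share, following the N ∝ 1/MI^d scaling of the masking security bounds; the proportionality constant is an illustrative assumption.

```python
# Back-of-the-envelope sketch of the "tightness" shortcut: estimate the
# trace count needed against a masked implementation from a single-share
# mutual information measurement. The constant c is an assumption.

def extrapolated_traces(mi_per_share, n_shares, c=1.0):
    """Estimated trace count for key recovery against n_shares shares,
    given the mutual information (in bits) of one share's leakage."""
    return c / (mi_per_share ** n_shares)

# Example: 0.01 bit of mutual information per share and 3 shares
# suggest on the order of 10**6 traces.
# print(extrapolated_traces(0.01, 3))
```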

Leakage Detection with the χ²-Test

By Amir Moradi, Bastian Richter, Tobias Schneider and François-Xavier Standaert, published in IACR Transactions on Cryptographic Hardware and Embedded Systems, 2018(1)

We describe how Pearson’s χ² test can be used as a natural complement to Welch’s t-test for black box leakage detection. In particular, we show that by using these two tests in combination, we can mitigate some of the limitations due to the moment-based nature of existing detection techniques based on Welch’s t-test (e.g., for the evaluation of higher-order masked implementations with insufficient noise). We also show that Pearson’s χ² test is naturally suited to analyze threshold implementations with information lying in multiple statistical moments, and can be easily extended to a distinguisher for key recovery attacks. As a result, we believe the proposed test and methodology are interesting complementary ingredients of the side-channel evaluation toolbox, for black box leakage detection and non-profiled attacks, and as a preliminary before more demanding advanced analyses.
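
A minimal sketch of how the two tests complement each other in a fixed-vs-random detection, assuming scipy; the binning granularity, the +1 guard against empty histogram cells and the decision thresholds are illustrative assumptions rather than the paper's exact procedure.

```python
# Per-sample leakage detection combining Welch's t-test (moment-based)
# with Pearson's chi-squared test (whole-distribution based).
import numpy as np
from scipy.stats import ttest_ind, chi2_contingency

def detect(traces_fixed, traces_random, n_bins=9):
    n_samples = traces_fixed.shape[1]
    t_stats = np.empty(n_samples)
    chi2_p = np.empty(n_samples)
    for i in range(n_samples):
        a, b = traces_fixed[:, i], traces_random[:, i]
        # Welch's t-test: compares the two group means.
        t_stats[i], _ = ttest_ind(a, b, equal_var=False)
        # Pearson's chi-squared: compares the binned distributions, so it
        # can also catch leakage sitting in higher statistical moments.
        edges = np.histogram_bin_edges(np.concatenate([a, b]), bins=n_bins)
        table = np.vstack([np.histogram(a, bins=edges)[0],
                           np.histogram(b, bins=edges)[0]]) + 1  # crude guard against empty cells
        _, chi2_p[i], _, _ = chi2_contingency(table)
    return t_stats, chi2_p

# Flag sample i if |t_stats[i]| > 4.5 or chi2_p[i] falls below a
# (multiplicity-corrected) significance threshold.
```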

Start Simple and then Refine: Bias-Variance Decomposition as a Diagnosis Tool for Leakage Profiling

By Liran Lerman, Nikita Veshchikov, Olivier Markowitch and François-Xavier Standaert, published in IEEE Transactions on Computers, 2018

Evaluating the resistance of cryptosystems to side-channel attacks is an important research challenge. Profiled attacks reveal the degree of resilience of a cryptographic device when an adversary examines its physical characteristics. So far, evaluation laboratories launch several physical attacks (based on engineering intuitions) in order to find one strategy that eventually extracts secret information (such as a secret cryptographic key). The certification step represents a complex task because in practice the evaluators have tight memory and time constraints. In this paper, we propose a principled way of guiding the design of the most successful evaluation strategies thanks to the (bias-variance) decomposition of a security metric of profiled attacks. Our results show that we can successfully apply our framework on unprotected and protected algorithms implemented in software and hardware.

Towards Sound and Optimal Leakage Detection Procedure

By Liwei Zhang, A. Adam Ding, François Durvaux, François-Xavier Standaert and Yunsi Fei, presented at CARDIS 2017

Evaluation of side-channel leakage for embedded crypto systems requires sound leakage detection procedures. We relate the test vector leakage assessment (TVLA) procedure to the statistical minimum p-value (mini-p) procedure, and propose a sound method of deciding leakage existence in the statistical hypothesis setting. To improve detection, an advanced statistical procedure, Higher Criticism (HC), is applied. The detection of leakage existence and the identification of exploitable leakage are separated when there are multiple leakage points.

For leakage detection, the HC-based procedure is shown to be optimal in that, for a given number of traces of given length, it detects the existence of leakage at signal levels as low as detectable by any statistical procedure. We provide a theoretical proof of the optimality of the HC procedure. Numerical studies show that the HC-based procedure performs as well as the mini-p-based procedure when leakage signals are very sparse, and can improve leakage detection significantly when there are multiple leakages.
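
The sketch below contrasts the two decision rules on a vector of per-sample p-values (e.g., from per-point t-tests), assuming the standard Donoho–Jin form of the HC statistic; the search fraction α₀ = 0.5 and the Bonferroni-style correction are conventional choices used here as assumptions.

```python
import numpy as np

def mini_p(p_values, alpha=1e-5):
    # Bonferroni-corrected mini-p rule: declare leakage if the smallest
    # p-value survives correction for the number of points tested.
    n = len(p_values)
    return p_values.min() < alpha / n

def higher_criticism(p_values, alpha0=0.5):
    # Donoho-Jin HC statistic over the sorted p-values; sensitive to
    # many weak leakage points rather than one strong one.
    n = len(p_values)
    p = np.sort(p_values)
    i = np.arange(1, n + 1)
    hc = np.sqrt(n) * (i / n - p) / np.sqrt(p * (1 - p) + 1e-30)
    return hc[: max(1, int(alpha0 * n))].max()  # large HC => leakage present

# Compare the HC value against a quantile of its null distribution
# (obtained analytically or by simulation) to decide leakage existence.
```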

Very High Order Masking: Efficient Implementation and Security Evaluation

By Anthony Journault, François-Xavier Standaert, presented at CHES 2017

In this paper, we study the performance and security of recent masking algorithms specialized to parallel implementations on a 32-bit embedded software platform, for the standard AES Rijndael and the bitslice cipher Fantomas. By exploiting the excellent features of these algorithms for bitslice implementations, we first extend the recent speed records of Goudarzi and Rivain (presented at Eurocrypt 2017) and report realistic timings for masked implementations with 32 shares. We then observe that the security level provided by such implementations is hard to quantify with current evaluation tools.

We therefore propose a new “multi-model” evaluation methodology which takes advantage of different (more or less abstract) security models introduced in the literature. This methodology allows us both to bound the security level of our implementations in a principled manner and to assess the risks of overstated security based on well-understood parameters. Concretely, it leads us to conclude that these implementations withstand worst-case adversaries with more than 2⁶⁴ measurements under falsifiable assumptions.

Private Multiplication over Finite Fields

By Sonia Belaïd, Fabrice Benhamouda, Alain Passelègue, Emmanuel Prouff, Adrian Thillard and Damien Vergnaud, presented at CRYPTO 2017

The notion of privacy in the probing model, introduced by Ishai, Sahai, and Wagner in 2003, is nowadays frequently invoked to assess the security of circuits manipulating sensitive information. However, provable security in this model still comes at the cost of a significant overhead, both in terms of arithmetic complexity and randomness complexity. In this paper, we deal with this issue for circuits processing multiplication over finite fields. Our contributions are manifold. Extending the work of Belaïd, Benhamouda, Passelègue, Prouff, Thillard, and Vergnaud at Eurocrypt 2016, we introduce an algebraic characterization of privacy for multiplication in any finite field, and we propose a novel algebraic characterization for non-interference (a stronger security notion in this setting). Then, we present two generic constructions of multiplication circuits in finite fields that achieve non-interference in the probing model.

Denoting by d the number of probes used by the adversary, the first proposal reduces the number of bilinear multiplications (i.e., of general multiplications of two non-constant values in the finite field) to only 2d + 1, whereas the state of the art was O(d²). The second proposal reduces the randomness complexity to d random elements in the underlying finite field, hence improving the O(d log d) randomness complexity achieved by Belaïd et al. in their paper. This construction is almost optimal, since we also prove that d/2 is a lower bound. Eventually, we show that both algebraic constructions can always be instantiated in large enough finite fields. Furthermore, for the important cases d ∈ {2, 3}, we illustrate that they perform well in practice by presenting explicit realizations for finite fields of practical interest.
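
For orientation, the sketch below implements the classical ISW multiplication over GF(2⁸), the O(d²)-bilinear-multiplication baseline that the paper's 2d + 1 construction improves on; this is the textbook scheme, not the paper's new algorithm.

```python
import secrets

def gf_mul(a, b):
    """Multiplication in GF(2^8) modulo the AES polynomial x^8+x^4+x^3+x+1."""
    r = 0
    for _ in range(8):
        if b & 1:
            r ^= a
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1B
        b >>= 1
    return r

def isw_mult(a_shares, b_shares):
    """d-private multiplication of two share vectors; the XOR of the
    returned shares equals the product of the XORs of the inputs."""
    n = len(a_shares)
    c = [gf_mul(a_shares[i], b_shares[i]) for i in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            r = secrets.randbelow(256)
            c[i] ^= r
            # In a real implementation the order of these XORs matters
            # for security; it is kept simple here for readability.
            c[j] ^= r ^ gf_mul(a_shares[i], b_shares[j]) ^ gf_mul(a_shares[j], b_shares[i])
    return c
```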

Getting the Most Out of Leakage Detection: Statistical Tools and Measurement Setups Hand in Hand

By Santos Merino del Pozo and François-Xavier Standaert, presented at COSADE 2017

In this work, we provide a concrete investigation of the gains that can be obtained by combining good measurement setups and efficient leakage detection tests to speed up evaluation times. For this purpose, we first analyze the quality of various measurement setups. Then, we highlight the positive impact of a recent proposal for efficient leakage detection, based on the analysis of a (few) pair(s) of plaintexts. Finally, we show that the combination of our best setups and detection tools allows detecting leakages for a noisy threshold implementation of the block cipher PRESENT after an intensive measurement phase, while either worse setups or less efficient detection tests would not succeed in detecting these leakages. Overall, our results show that a combination of good setups and fast leakage detection can turn security evaluation times from days to hours (for first-order secure implementations) and even from weeks to days (for higher-order secure implementations).

Applying Horizontal Clustering Side-Channel Attacks on Embedded ECC Implementations

By Erick Nascimento and Łukasz Chmielewski, presented at CARDIS 2017

Side-channel attacks are a threat to cryptographic algorithms running on embedded devices. Public-key cryptosystems, including elliptic curve cryptography (ECC), are particularly vulnerable because their private keys are usually long-term. Well known countermeasures like regularity, projective coordinates and scalar randomization, among others, are used to harden implementations against common side-channel attacks like DPA.
Horizontal clustering attacks can theoretically overcome these countermeasures by attacking individual side-channel traces. In practice, horizontal attacks have been applied to overcome protected ECC implementations on FPGAs. However, it was not yet known whether such attacks can be applied to protected implementations running on embedded devices, especially in a non-profiled setting.

In this paper we mount non-profiled horizontal clustering attacks on two protected implementations of the Montgomery Ladder on Curve25519 available in the μNaCl library targeting electromagnetic (EM) emanations. The first implementation performs the conditional swap (cswap) operation through arithmetic of field elements (cswap-arith), while the second does so by swapping the pointers (cswap-pointer). They run on a 32-bit ARM Cortex-M4F core.
Our best attack has success rates of 97.64% and 99.60% for cswap-arith and cswap-pointer, respectively. This means that at most 6 and 2 bits are incorrectly recovered, and therefore a subsequent brute-force can fix them in reasonable time. Furthermore, our horizontal clustering framework used for the aforementioned attacks can be applied against other protected implementations.
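
A minimal sketch of the unsupervised core of such an attack, assuming scikit-learn: one trace segment per ladder iteration is clustered into two groups corresponding to the two cswap behaviours. Segment extraction and the choice of features are illustrative assumptions.

```python
# Non-profiled horizontal clustering sketch: recover scalar bits of a
# Montgomery ladder by clustering per-iteration trace segments.
import numpy as np
from sklearn.cluster import KMeans

def recover_scalar_bits(segments):
    """segments: (n_bits, n_features) array, one row per ladder iteration."""
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(segments)
    # Cluster indices are arbitrary: the attacker still has to resolve
    # the 0/1 labelling, e.g. via a known bit or a final brute-force.
    return labels
```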

Connecting and Improving Direct Sum Masking and Inner Product Masking

By Romain Poussier, Qian Guo, François-Xavier Standaert, Sylvain Guilley, Claude Carlet, presented at CARDIS 2017

Direct Sum Masking (DSM) and Inner Product (IP) masking are two types of countermeasures that have been introduced as alternatives to simpler (e.g., additive) masking schemes to protect cryptographic implementations against side-channel analysis. In this paper, we first show that IP masking can be written as a particular case of DSM. We then analyze the improved security properties that these (more complex) encodings can provide over Boolean masking. For this purpose, we introduce a slight variation of the probing model, which allows us to provide a simple explanation to the “security order amplification” for such masking schemes that was put forward at CARDIS 2016. We then use our model to search for new instances of masking schemes that optimize this security order amplification. We finally discuss the relevance of this security order amplification (and its underlying assumption of linear leakages) based on an experimental case study.

A Systematic Approach to the Side-Channel Analysis of ECC Implementations with Worst-Case Horizontal Attacks

By Romain Poussier, Yuanyuan Zhou, François-Xavier Standaert, presented at CHES 2017

The large number and variety of side-channel attacks against scalar multiplication algorithms makes their security evaluation complex, in particular when time constraints make exhaustive analyses impossible. In this paper, we present a systematic way to evaluate the security of such implementations against horizontal attacks. As horizontal attacks allow extracting most of the information in the leakage traces of scalar multiplications, they are suitable to avoid risks of overestimated security levels. For this purpose, we additionally propose to use linear regression in order to accurately characterize the leakage function and therefore approach worst-case security evaluations. We then show how to apply our tools in the contexts of ECDSA and ECDH implementations, and validate them against two targets: a Cortex-M4 and a Cortex-A8 microcontroller.
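
As a sketch of the linear-regression characterization step, the code below fits a leakage model at one point in time as a linear combination of the bits of the processed intermediate value; the 8-bit target and bit-level basis are illustrative assumptions.

```python
# Linear-regression leakage characterisation at a single trace point.
import numpy as np

def fit_leakage_model(values, leakages):
    """values: (n,) integer intermediates in [0, 255];
    leakages: (n,) measured samples at the targeted point in time."""
    bits = (values[:, None] >> np.arange(8)) & 1          # (n, 8) bit basis
    basis = np.hstack([bits, np.ones((len(values), 1))])  # plus an intercept term
    coeffs, *_ = np.linalg.lstsq(basis, leakages, rcond=None)
    return coeffs  # 8 bit weights + constant: an estimated leakage function
```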

Convolutional Neural Networks with Data Augmentation against Jitter-Based Countermeasures – Profiling Attacks without Pre-Processing

By Eleonora Cagli, Cécile Dumas, Emmanuel Prouff, presented at CHES 2017

In the context of the security evaluation of cryptographic implementations, profiling attacks (aka Template Attacks) play a fundamental role. Nowadays, the most popular Template Attack strategy consists in approximating the information leakages by Gaussian distributions. Nevertheless, this approach suffers from the difficulty of dealing with both trace misalignment and the high dimensionality of the data. This forces the attacker to perform critical preprocessing phases, such as the selection of points of interest and the realignment of measurements. Some software and hardware countermeasures have been conceived precisely to create such misalignment.

In this paper, we propose an end-to-end profiling attack strategy based on Convolutional Neural Networks: this strategy greatly simplifies the attack roadmap, since it requires neither a prior trace realignment nor a precise selection of points of interest. To significantly increase the performance of the CNN, we moreover propose to equip it with the data augmentation technique that is classical in other applications of Machine Learning. As a validation, we present several experiments against traces misaligned by different kinds of countermeasures, including an amplified clock jitter effect in a secure hardware implementation on a modern chip. The excellent results achieved in these experiments show that the Convolutional Neural Network approach combined with data augmentation provides a very efficient alternative to state-of-the-art profiling attacks.
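
A minimal sketch of the shift-based flavour of such augmentation: at each epoch the network sees randomly shifted copies of the training traces, with labels unchanged. The ±50-sample window is an illustrative assumption, and the paper also considers other trace deformations.

```python
# Shift-based data augmentation to make a CNN robust to misalignment.
import numpy as np

def augment_shift(traces, max_shift=50):
    """Randomly roll each trace by up to +/- max_shift samples."""
    shifts = np.random.randint(-max_shift, max_shift + 1, size=len(traces))
    return np.stack([np.roll(t, s) for t, s in zip(traces, shifts)])

# Feed augment_shift(train_traces) to the network at every epoch instead
# of the raw traces, keeping the labels unchanged.
```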

Towards Practical Tools for Side Channel Aware Software Engineering: ‘Grey Box’ Modelling for Instruction Leakages

By David McCann, Elisabeth Oswald, Carolyn Whitnall, presented at USENIX Security Symposium 2017

Power (along with EM, cache and timing) leaks are of considerable concern for developers who have to deal with cryptographic components as part of their overall software implementation, in particular in the context of embedded devices. Whilst there exist some compiler tools to detect timing leaks, similar progress towards pinpointing power and EM leaks has been hampered by limits on the amount of information available about the physical components from which such leaks originate.

We suggest a novel modelling technique capable of producing high-quality instruction-level power (and/or EM) models without requiring a detailed hardware description of a processor or information about the process technology used (access to both of which is typically restricted). We show that our methodology is effective at capturing differential data-dependent effects as neighbouring instructions in a sequence vary. We also explore register effects, and verify our models across several measurement boards to comment on board effects and portability. We confirm the technique's versatility by demonstrating it on two processors (the ARM Cortex-M0 and M4), and use the M0 models to develop ELMO, the first leakage simulator for the ARM Cortex-M0.
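
A grey-box model of this kind can be sketched as a per-instruction linear regression over data-dependent features; the feature set below (operand bits plus operand-transition bits) is an illustrative simplification of the richer models published with ELMO, which also include neighbouring-instruction interaction terms.

```python
# 'Grey box' instruction-level power modelling sketch: fit one linear
# model per profiled instruction from operand values and transitions.
import numpy as np

def features(op1, op2, prev1, prev2):
    """op1/op2: current operands; prev1/prev2: previous operands
    (all (n,) integer arrays)."""
    bits = lambda v: (v[:, None] >> np.arange(32)) & 1
    return np.hstack([bits(op1), bits(op2),
                      bits(op1 ^ prev1), bits(op2 ^ prev2)])  # value + transition terms

def fit_instruction_model(op1, op2, prev1, prev2, power):
    X = np.hstack([features(op1, op2, prev1, prev2),
                   np.ones((len(power), 1))])  # intercept
    coeffs, *_ = np.linalg.lstsq(X, power, rcond=None)
    return coeffs  # one such model per profiled instruction
```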

Categorising and Comparing Cluster-Based DPA Distinguishers

By Xinping Zhou, Carolyn Whitnall, Elisabeth Oswald, Degang Sun, Zhu Wang, presented at SAC 2017

Side-channel distinguishers play an important role in differential power analysis, where real-world leakage information is compared against hypothetical predictions in order to guess at the underlying secret key. A class of distinguishers which can be described as 'cluster-based' have the advantage that they are able to exploit multi-dimensional leakage samples in scenarios where only loose, 'semi-profiled' approximations of the true leakage forms are available. This is in contrast with univariate distinguishers exploiting only single points (e.g. correlation), and with Template Attacks, which require concise fitted models that can be overly sensitive to mismatch between the profiling and attack acquisitions.

This paper collects together—to our knowledge, for the first time—the various proposals for cluster-based DPA (concretely, Differential Cluster Analysis, First Principal Components Analysis, and Linear Discriminant Analysis), and shows how they fit within the robust 'semi-profiling' attack procedure proposed by Whitnall et al. at CHES 2015. We discuss the theoretical similarities and differences of the separately proposed distinguishers, and empirically compare their performance in a range of (real and simulated) leakage scenarios with varying parameters. Our findings have application for practitioners constrained to rely on 'semi-profiled' models who wish to make informed choices about the best known procedures to exploit such information.

Deliverables

Deliverable D2.1: Shortcut Formulas for Side Channel Evaluation

This deliverable surveys a number of approaches to shortcut the effort required to assess implementations and devices with respect to their susceptibility to side-channel attacks. Our approach aligns with the divide-and-conquer nature of most side-channel attacks, and hence we touch on shortcuts that apply to the distinguisher statistics (the divide step) and to the key rank (the conquer step).

We notice that shortcuts make significant assumptions about leakage characteristics (in particular, independence of leakages and equal variances) that do not hold in many of the more challenging device evaluation scenarios. In addition, it is not always possible to characterise the signal and noise: early on in a design this information is not yet available, whereas later on in an evaluation within a set scheme the evaluator often cannot turn off countermeasures to establish the nature of the “original” signal. When it comes to key rank computations, it has been shown that the variance of the key rank is huge in exactly those cases where the key rank is most relevant. The tightness with which the average rank is estimated is thus not so important, as long as the estimate also gives some information about the spread of the rank.

Within REASSURE we set ourselves the challenge of assessing how such shortcut approaches could nevertheless be leveraged to improve evaluations. This deliverable is hence the foundational step from which we will continue in two directions. Firstly, within the next 6 months, industrial partners will provide some insight into the stages at which shortcuts are particularly appropriate. Secondly, some shortcuts will be implemented within WP3.2 in the context of the REASSURE simulation tool.


Deliverable D2.2: Interim report on automation

This document represents interim progress of the REASSURE consortium towards the goal of flexible evaluation methods that are usable by non-domain experts.

We begin by clarifying what we mean by automation: the facility to perform a task with a minimum of user input and interaction (expert or otherwise). This implies that the necessary parameters for a sound, fair and reproducible analysis are derived or estimated from the available data as far as possible. It also requires that the assumptions and limitations of the analysis be made transparent in the output, in order to guide (potentially non-expert) users towards sound conclusions.

Deliverable 1.1 identifies three main components of a typical evaluation process: measurement and (raw) trace processing, detection/mapping, and exploitation. These seldom proceed in a linear workflow, as information gained in later steps is used to repeat and refine earlier steps. For example, it can be hard to confirm the suitability of acquired traces without actually performing some sort of detection or attack, so that the measurement stage may need to be revisited in the light of observed outcomes. Such backtracking creates challenges for automation, in particular implying the need for improved quality metrics at every stage of evaluation — a matter of ongoing investigation.

On the basis that detection and mapping tasks are the most promising for automation, we then provide a more detailed account of statistical hypothesis testing and of how to perform the multiple comparison corrections and statistical power analyses required to carry out such procedures fairly and transparently. In the case of leakage detection strategies based on simple assumptions, such as those relying on t-tests, it is possible to arrive at analytical formulae for the latter; for complex methods, such as those estimating information theoretic quantities, the only options currently available (as far as we are aware) involve referring to empirically-derived indicators from (hopefully) representative scenarios. A number of currently used methods (in particular, those based on ANOVA-like tests, and those based on correlation) sit in a middle ground, where the side-channel literature has not yet fully explored the solutions already proposed in the statistics literature. This represents an avenue for future work, which could potentially make it possible to safely automate a wider range of tests, as well as providing insights into the trade-offs between them.
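
As an illustration of the analytical route available for t-tests, the sketch below solves for the per-group trace count needed to detect a given standardised effect, assuming statsmodels; the effect size, α and power values are illustrative assumptions.

```python
# A-priori statistical power analysis for a two-sample t-test based
# leakage detection: how many traces per group are needed?
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.01,   # standardised mean difference between the groups
    alpha=1e-5,         # per-test false-positive rate (before multiplicity correction)
    power=0.95,         # desired probability of detecting a true leak
)
print(int(n_per_group))  # traces required in each of the two groups
```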

We discuss the particular challenges of extending known results for univariate, 'first order' leakages to the sorts of higher-order univariate and multivariate leakages arising from protected schemes. Tricks to 'shift' information into distribution means via pre-processing inevitably violate the assumptions which make the statistical power analysis of t-tests straightforward. Meanwhile, unless evaluators have control over (or at least access to) any randomness, as well as sufficient details about the implementation specification, the task of searching for jointly leaking tuples can quickly become infeasible as the leakage order increases. We consider that more work needs to be done from a basic methodological perspective before higher-order detection can realistically be considered a candidate for automation.
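
For reference, the 'shift' pre-processing mentioned above can be sketched as follows; the centred-square (univariate) and centred-product (bivariate) forms shown are the standard second-order combining functions, used here purely as an illustration.

```python
# Second-order pre-processing: move higher-order leakage into the mean,
# where a t-test can see it. This is exactly the step that breaks the
# standard t-test power-analysis assumptions discussed above.
import numpy as np

def second_order_preprocess(traces):
    centred = traces - traces.mean(axis=0)
    return centred ** 2  # univariate case: per-sample centred square

def centered_product(traces, i, j):
    centred = traces - traces.mean(axis=0)
    return centred[:, i] * centred[:, j]  # bivariate case for points (i, j)
```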

Finally, we propose that there exists a need to devise measures of 'coverage', inspired by the notion of code coverage in software testing. We offer some suggestions of what this might look like: input-based scores indicating the proportion of the total population sampled for the experiment (under different assumptions about the form of the leakage); intermediate value-based scores indicating the fraction of total sensitive intermediates targeted by the tests; or even strategies whereby profiling information is used to design and test specific corner cases.

We intend to make the formal definition of coverage metrics an avenue for exploration during the remainder of the project, along with other outstanding questions identified in this document. A future deliverable (D2.5) will report on progress made to that end, and finalise our findings and recommendations.


Deliverable D2.3: Shortcut Formulas for Side-Channel Evaluation

This deliverable surveys a number of approaches to shortcut the effort required to assess implementations and devices with respect to their susceptibility to side-channel attacks. Our approach aligns with the divide-and-conquer nature of most side-channel attacks, and hence we touch on shortcuts that apply to the distinguisher statistics (the divide step) and to the key rank (the conquer step).

We notice that shortcuts make significant assumptions about leakage characteristics (in particular, independence of leakages and equal variances) that do not hold in many of the more challenging device evaluation scenarios. In addition, it is not always possible to characterise the signal and noise as required: early on in a design this information is not yet available, whereas later on in an evaluation within a set scheme the evaluator often cannot turn off countermeasures to establish the nature of the “original” signal. When it comes to key rank computations, it has been shown that the variance of the key rank is huge in exactly those cases where the key rank is most relevant. The tightness with which the average rank is estimated is thus not so important, as long as the estimate also gives some information about the spread of the rank.

Within REASSURE we set ourselves the challenge of assessing how such shortcut approaches could nevertheless be leveraged to improve evaluations. This deliverable highlights the stages at which shortcuts are particularly appropriate; some of these shortcuts will be implemented within WP3.2 in the context of the REASSURE simulation tool.


Deliverable D2.4: Report on Instruction Level Profiling

This document reports on interim progress of the REASSURE consortium towards methodologies to create representative instruction-level profiles of microprocessors of at least medium complexity.

We begin by reviewing possible approaches to modelling leakage, ranging from detailed transistor level simulations derived from back-annotated netlists, through ‘profiled’ models fitted to sample data, to simulations based on simplifying assumptions about typical device behaviour. We identify the requirements for our planned REASSURE simulator, which will need to be able to process source code as input and should be flexible with respect to the (profiled or unprofiled) models by which it produces leakage predictions.

Next we summarise a range of existing tools from industry and academia, including one (ELMO) which we argue best sets the precedent that we plan to follow for REASSURE. We explain the modelling procedure used in the construction of ELMO and report on the results of our own model-building endeavour using data from an alternative implementation of the same board (an ARM Cortex M0). In particular, we show that there are broad similarities between the implementations, but some differences that will require attention if models are expected to port from one to another.

The next steps for REASSURE will be to explore improvements to the model building procedure, as well as its extension to more complex devices, in order to eventually release our own simulator tools to the wider side channel research and evaluation community.


Deliverable D5.2: Data management plan

This document represents the first version of the Data Management Plan (DMP) of the REASSURE project. It is a living document that will be updated throughout the project. The document focuses on identifying the type of data that will be openly shared, namely leakage traces acquired and processed during a side-channel analysis of an embedded security device, as well as the intended target audience and the data format.