## Monday, May 16, 2016

### Securing Cryptography Implementations in Embedded Systems

In the afternoon of the second day of EUROCRYPT 2016, Emmanuel Prouff gave a tutorial on how to securely implement cryptographic schemes on embedded devices. The main focus of the tutorial was on the security definitions of side-channel countermeasures, their construction and analyses.

A side-channel observation tends to leak information on the operation being performed at a given point in time and also on the data under manipulation. To secure an implementation against side-channel leakage, we need effective and efficient countermeasures that can be formally analysed in reasonably realistic models of leakage. The need for formal security analysis hardly requires emphasis, given that the side-channel analysis research community has witnessed several ad hoc analyses being invalidated. Practical security evaluation of countermeasures that come with a sound theoretical analysis is also necessary for various reasons, the most important of which is to validate the practical relevance of the formal leakage model adopted.

Side-channel observations are inherently noisy, and the efficiency of a side-channel attack depends on the amount of noise in an observation. So a natural way to secure implementations against such attacks is to increase the noise in the observations. One method to achieve this is to secret-share sensitive intermediate variables in such a way that probing a few intermediate values would not reveal any information about the secret parameters. For example, a secret key $x$ could be shared as $x = x_0 \oplus \ldots \oplus x_d$. This method belongs to a popular class of countermeasures known as masking; countermeasures of this type are algorithmic in nature, in contrast to physical countermeasures such as hiding. One reason for the popularity of masking is its amenability to formal security analysis. Though the "crypto theory" community has shown considerable interest in providing effective defenses against side-channel attacks, for instance through research on leakage-resilient cryptography [MR04,DP08], a practical solution currently seems to be out of reach due to a lack of efficiency. A possible reason for this is that the formal model of leakage often used is too strong compared to the leakage observed in practice.
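As a concrete illustration, boolean masking can be sketched in a few lines of Python. This is a toy sketch for exposition only, not a hardened implementation; in particular, a real device must draw its masks from a suitable on-chip random number generator.

```python
import secrets

def share(x: int, d: int, nbits: int = 8) -> list[int]:
    """Split x into d+1 shares whose XOR equals x (boolean masking).
    The first d shares are uniformly random; the last absorbs x."""
    shares = [secrets.randbits(nbits) for _ in range(d)]
    last = x
    for s in shares:
        last ^= s
    return shares + [last]

def unshare(shares: list[int]) -> int:
    """Recombine shares by XORing them together."""
    x = 0
    for s in shares:
        x ^= s
    return x
```

The point of the construction is that any $d$ of the $d+1$ shares are jointly uniform and independent of $x$, so probing up to $d$ intermediate values reveals nothing about the secret.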

The first use of the idea of secret sharing as a side-channel countermeasure dates back to the works [GP99] and [CJRR99]. The practical effectiveness of the masking countermeasure can be deduced from a result in [CJRR99]: the number of side-channel traces necessary to distinguish the underlying unmasked bit increases exponentially w.r.t. the number of shares of the corresponding bit. To formally analyse the security of masking schemes, the most popular model of leakage is the probing model of [ISW03]. In this model, an adversary is allowed to obtain leakage corresponding to a fixed number of wires of a boolean circuit. A more realistic leakage model, the noisy leakage model, was proposed in [RP13]. These two models were unified in [DDF14].

Two main issues need to be addressed while designing a masking scheme: (1) how to share the sensitive data, and (2) how to securely process the shared data. For the former problem, one most often adopts a linear secret sharing scheme, in particular boolean masking. This method is closely related to the problem of constructing error-correcting codes with large dual distance. Alternatives to linear secret sharing schemes include multiplicative masking, affine masking, modular additive masking, homographic masking, inner product masking, etc.

The latter problem of securely processing shared data is closely related to the problem of secure multi-party computation. In the context of boolean masking, a well-known method to compute in the presence of shares is from [ISW03]. Note that for circuits over the finite field of two elements $\mathbb{F}_2$ (i.e., boolean circuits), processing an $\mathbb{F}_2$-addition gate (i.e., an xor gate) is straightforward: we just add up the corresponding shares of the inputs to produce the shares of the output of the addition gate. The main difficulty is processing an $\mathbb{F}_2$-multiplication gate (i.e., an and gate). The method proposed in [ISW03] to securely perform multiplication in the presence of shares requires an amount of time and randomness quadratic in the number of shares, and hence is much more expensive than an addition. The [ISW03] method was generalised to $\mathbb{F}_{2^n}$-arithmetic circuits in [CGPQR12] for improved efficiency in software implementations. In [CGPQR12], the function represented by a given (non-randomised) arithmetic circuit over $\mathbb{F}_{2^n}$ is represented as a univariate polynomial over $\mathbb{F}_{2^n}$, which is then evaluated by a sequence of the following operations: polynomial addition, scalar multiplication, polynomial squaring, and multiplication of two distinct polynomials. The first three are $\mathbb{F}_{2}$-linear operations and hence relatively cheap to process; only the multiplication of two distinct polynomials is non-linear and relatively expensive. Currently, the heuristic method of [CRV14] is the most efficient polynomial evaluation method over binary finite fields that seeks to minimise the total count of such non-linear multiplications. 
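The [ISW03] multiplication gadget itself is short. The following sketch secures an and gate on boolean-masked bits; it is for exposition only, since real implementations must additionally worry about the order of operations, register reuse and other physical effects.

```python
import secrets

def isw_and(a: list[int], b: list[int]) -> list[int]:
    """ISW multiplication gadget: given d+1 shares of bits a and b,
    output d+1 shares of a AND b, using O(d^2) fresh random bits."""
    n = len(a)  # n = d + 1 shares
    c = [a[i] & b[i] for i in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            r = secrets.randbits(1)
            c[i] ^= r
            # The parenthesisation below matters for probing security:
            # r must mask a[i] & b[j] before a[j] & b[i] is added in.
            c[j] ^= (r ^ (a[i] & b[j])) ^ (a[j] & b[i])
    return c
```

Summing (XORing) all output shares, the random bits cancel pairwise and the cross terms $a_i b_j$ recombine to $(\bigoplus_i a_i)(\bigoplus_j b_j)$, which is why the gadget is correct; the quadratic cost in time and randomness mentioned above is visible in the double loop.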
Recently, [CPRR15] proposed an algebraic decomposition method where the target function to be securely evaluated is represented as a composition of quadratic or higher algebraic-degree functions, which in turn are securely implemented more efficiently than by using previously known techniques.
While masking schemes proven secure in the probing model are effective against differential power analysis-like attacks, they are vulnerable to attacks that exploit the presence of glitches. The security requirements of glitch-resistant side-channel countermeasures are more demanding than those of ordinary masking schemes and, as a consequence, such countermeasures are less efficient in practice. Threshold implementations are a well-known class of glitch-resistant countermeasures.

The speaker concluded the tutorial by emphasising the need for algorithmic side-channel countermeasures enabled with formal security analysis, the need for formal models of leakage that suit the physical reality of devices and enable relatively simple security proofs, and the need for efficient countermeasures.

References:

[CGPQR12] Claude Carlet, Louis Goubin, Emmanuel Prouff, Michaël Quisquater, Matthieu Rivain: Higher-Order Masking Schemes for S-Boxes. FSE 2012.

[CJRR99] Suresh Chari, Charanjit S. Jutla, Josyula R. Rao, Pankaj Rohatgi: Towards Sound Approaches to Counteract Power-Analysis Attacks. CRYPTO 1999.

[CPRR15] Claude Carlet, Emmanuel Prouff, Matthieu Rivain, Thomas Roche: Algebraic Decomposition for Probing Security. CRYPTO 2015.

[CRV14] Jean-Sébastien Coron, Arnab Roy, Srinivas Vivek: Fast Evaluation of Polynomials over Binary Finite Fields and Application to Side-Channel Countermeasures. CHES 2014.

[DDF14] Alexandre Duc, Stefan Dziembowski, Sebastian Faust: Unifying Leakage Models: From Probing Attacks to Noisy Leakage. EUROCRYPT 2014.

[DP08] Stefan Dziembowski, Krzysztof Pietrzak: Leakage-Resilient Cryptography. FOCS 2008.

[GP99] Louis Goubin, Jacques Patarin: DES and Differential Power Analysis (The "Duplication" Method). CHES 1999.

[ISW03] Yuval Ishai, Amit Sahai, David Wagner: Private Circuits: Securing Hardware against Probing Attacks. CRYPTO 2003.

[MR04] Silvio Micali, Leonid Reyzin: Physically Observable Cryptography (Extended Abstract). TCC 2004.

[RP13] Emmanuel Prouff, Matthieu Rivain: Masking against Side-Channel Attacks: A Formal Security Proof. EUROCRYPT 2013.

### Eurocrypt 2016: All Complete Functionalities are Reversible

In the multi-party computation (MPC) track at Eurocrypt 2016 in Vienna, Austria, Dakshita Khurana rather heroically gave two consecutive, very well-presented talks, the second of which explained that All Complete Functionalities are Reversible. This second talk is the focus of this blog post.

In the context of MPC, a functionality is an algorithm which performs some operation using two or more parties' inputs while guaranteeing certain requisite security properties. The functionalities covered by this paper are secure function evaluations (SFEs), in which the parties want to compute some function on their combined inputs in a secure manner.

In the case of functionalities with two parties, $P_A$ and $P_B$, we usually consider one party as the sender and the other as the receiver. For decades, an open question in the field of MPC was, Given a two-party functionality $\mathcal{F}$ in which $P_A$ acts as the sender and $P_B$ as the receiver, is it possible to turn this into a secure scheme in which instead $P_A$ acts as the receiver and $P_B$ as the sender? This reverse functionality is denoted by $\textsf{reverse}(\mathcal{F})$. Intuitively, we think of a functionality allowing this reversal as containing 'enough symmetry' to perform the functionality of $\mathcal{F}$ regardless of which party performs which role.

It was shown by Wolf and Wullschleger that oblivious transfer (OT) can be reversed. OT is a process by which $P_A$ receives one of (at least) two values private to $P_B$, without $P_B$ learning which value $P_A$ chose and without $P_A$ learning anything about the other values.
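The Wolf-Wullschleger reversal is remarkably simple for bit-OT, and can be simulated in a few lines. The sketch below treats the underlying OT as an ideal black box; the variable names are mine, chosen for exposition, not taken from the paper.

```python
import secrets

def ot(m0: int, m1: int, choice: int) -> int:
    """Ideal 1-out-of-2 bit OT: the receiver with bit `choice`
    learns m_choice and nothing about the other message."""
    return m1 if choice else m0

def reversed_ot(b0: int, b1: int, c: int) -> int:
    """Reversed OT: B (the former receiver) acts as sender with bits
    (b0, b1); A (the former sender) acts as receiver with choice c,
    using a single call to the underlying OT in the original direction."""
    a = secrets.randbits(1)       # A's fresh random mask
    # A plays sender in the underlying OT with messages (a, a ^ c);
    # B plays receiver with choice bit b0 ^ b1.
    y = ot(a, a ^ c, b0 ^ b1)     # B obtains a ^ (c AND (b0 ^ b1))
    z = y ^ b0                    # B sends this single bit to A
    return z ^ a                  # A unmasks and recovers b_c
```

The random mask $a$ hides $c$ from $B$, while $A$ sees only $b_0 \oplus c\,(b_0 \oplus b_1)$, which equals exactly $b_c$.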

A functionality $\mathcal{F}$ is called complete if both $\mathcal{F}$ and $\textsf{reverse}(\mathcal{F})$ are able to realise OT functionality. In light of Wolf and Wullschleger's result, a natural question to ask is then, Exactly which complete 2PC functionalities can be reversed? This new result shows that they all can be.

The main technical challenge that the authors overcome is that of creating commitments in both directions. This accomplishment is essentially what allows this symmetry property to go through and achieve the result.

As one would hope for in papers on cryptography, this result has practical as well as theoretical appeal: if one party possesses some physical property requiring that it perform a particular role in the functionality (e.g. a computationally 'weak' device), it is essential that this property suffice to compute securely even if roles are reversed. In this way, this paper serves as a good example of how cryptography theory and practice can and should go hand in hand. Like a couple dancing a Viennese waltz?

## Thursday, May 12, 2016

### Eurocrypt 2016: Message Authentication Codes

Of the many sessions of EuroCrypt 2016, one of the most interesting was on Message Authentication Codes (MACs). Chaired by Peter Gazi, the session was somewhat different from many provable security sessions in that it focused on better understanding the security of two schemes already in existence (and use). In each case, the MAC in question is constructed through a mode of operation on top of an ideal primitive. This seems to be part of a trend at the moment: a number of recent modes-of-operation papers have been written to tighten the security bounds of known schemes [1].

First up, Stefano Tessaro presented joint work with Mihir Bellare and Dan Bernstein [2] on the security of the MAC function used in the Ed25519 deterministic signature scheme, which they term AMAC (for Augmented MAC). AMAC can be thought of as a simplified version of using HMAC with an MD-based hash function, in which we replace the outer keying with a public output transformation.
The work first generalises AMAC to describe an "Augmented Cascade" by allowing any output transformation. This surfaces a new characterisation for the compression function: PRF-under-Out-leakage, where Out is the output transformation in question. That is, the primitive should be secure even when the adversary is given the evaluation of Out on its inputs (this basically corresponds to length-extension attacks). Directly targeting multi-user security of the overall scheme (rather than relying on a hybrid argument), they provide a reduction from the multi-user augmented cascade to the multi-user PRF-under-Out-leakage security of the compression function.
The work then demonstrates that (in the ideal model) this bound is small, and in doing so deduces the security of the cascade, which directly implies security of AMAC. It was observed that this bound is better than one would obtain by simply calculating single-user security and applying a hybrid argument.
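Schematically, the augmented cascade can be modelled as below. This is a toy model for exposition only: SHA-256 stands in for the MD-style compression function and plain truncation plays the role of Out, which does not correspond to the actual Ed25519 code.

```python
import hashlib

def compress(chain: bytes, block: bytes) -> bytes:
    """Toy compression function standing in for an MD-style h."""
    return hashlib.sha256(chain + block).digest()

def cascade(key: bytes, blocks: list[bytes]) -> bytes:
    """Keyed MD cascade: iterate the compression function,
    starting from the (secret) key as the initial chaining value."""
    state = key
    for blk in blocks:
        state = compress(state, blk)
    return state

def amac(key: bytes, blocks: list[bytes], outlen: int = 16) -> bytes:
    """Augmented cascade: apply an unkeyed output transform Out
    (here, truncation) to the cascade output, replacing the
    outer keying that HMAC would use."""
    return cascade(key, blocks)[:outlen]
```

The PRF-under-Out-leakage requirement then says the compression function must remain pseudorandom even if the adversary sees `Out` applied to its outputs, which is exactly what the final truncation leaks.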

The second talk of the session was by Atul Luykx, who spoke about PMAC (Parallelizable MAC) and its security bounds [3], based on work with Bart Preneel, Alan Szepieniec and Kan Yasuda. Atul focussed on what a security bound actually means, and why in the concrete world (where we expect to instantiate our schemes by setting real-world parameters) we should be careful about needlessly loose security proofs. For example, he showed how, if one wishes to have a reasonable level of confidence that an adversary cannot create a forgery yet uses a small block cipher, birthday-bound security can limit usage to as few as a single-digit number of queries.
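As a back-of-the-envelope illustration (my own arithmetic, not the paper's exact bounds), a plain birthday-bound term of the form $q^2/2^n$ stops guaranteeing a target forgery probability at a query limit that is easy to compute:

```python
import math

def max_queries(block_bits: int, adv_log2: float) -> int:
    """Largest q with q^2 / 2^n <= 2^adv_log2: the point beyond which a
    plain birthday-bound proof no longer guarantees the target forgery
    probability (adv_log2 is the log2 of that target, e.g. -60)."""
    return int(math.floor(2 ** ((block_bits + adv_log2) / 2)))
```

With a 64-bit block cipher and a target forgery probability of at most $2^{-60}$, such a bound covers only 4 queries, exactly the kind of single-digit limit mentioned above, whereas a 128-bit block cipher allows about $2^{34}$.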
With this in mind, we looked at the security landscape for first EMAC (encrypted CBC-MAC) and then PMAC through a graph of number of queries against maximum query length, with the various security bounds drawn in. With generic attacks taking out large sectors of each graph, it was still clear that the attacks and bounds did not meet, and the paper focusses on whether these proofs can be modified to be independent of query length. The paper explains how this length-dependence hinges on the type of masking used internally, and this leads to the main conclusions. The most relevant of these for the real world is that using a Gray code to generate the masks leads to a length-dependent attack.

[1] See for example these papers from FSE.
[2] https://eprint.iacr.org/2016/142.pdf
[3] https://eprint.iacr.org/2016/185.pdf

### Eurocrypt 2016: Hacker Ethics on the Rise

Following Phil Rogaway at Asiacrypt, Bart Preneel gave the IACR distinguished lecture on the future of cryptography. He went from outlining the Snowden leaks to proposing concrete ideas for how information technology should be used. It was refreshing to see a perspective that offers some optimism in these times.

Most of the Snowden leaks are already more than two years old, so I will not cover them in detail. However, it is important to repeat that they are less about breaking cryptography (and if so, more about breaking protocols than primitives) and more about breaking software and coercing companies.

Observing that privacy is a fundamental requirement of a free and democratic society, Bart expressed his disappointment over the fact that most cryptography in use serves corporate interests rather than users' privacy. The largest deployment of cryptography for privacy is WhatsApp, which is of course dwarfed by the deployment in finance and access control.

A key point of the talk was the move of the focus from communication security to computation security. The real-world protocols of secure channels are far from being perfect, but there is continuous improvement, which promises to solve the problem eventually. However, the state of affairs is more critical for software implementations. While attacks on TLS itself are relatively involved, software bugs tend to be more readily exploitable and require continuous patching, which in turn opens another attack vector. Current mitigations such as firewalls and anti-virus software seem to offer a weak line of defense at best and yet another attack vector at worst.

Moving away a bit from technological details, Bart compared the architecture of IT systems to politics. Just as democracies avoid a single point of control (and thus a single point of failure), IT systems should do the same. He advocated to move away from big data because big data leads to big breaches. In this light, privacy by design (a core of the new EU data protection law) requires data being stored decentrally. Cryptographic protocols such as MPC and FHE offer the capabilities of using decentralized data without any single point of failure.

Another important point of the talk was the observation that transparency is essential. Without open software and hardware it is impossible to assess the security of a system independently. Together with a call for more privacy, this sounds very much like the hacker ethics of keeping private data private and making data of public interest public.

Finally, Bart replied to Phil's point that cryptographers engage too much in crypto for crypto instead of real problems, saying that it is hard to predict which technology will be valuable in the future and quoting Einstein: "If we knew what it was we were doing, it would not be called research, would it?" Essentially, Bart called for the entirety of the cryptography "stack" to be continuously looked at, from the assumptions to the implementations, and for not avoiding hard problems.

He concluded by quoting Kant: "Optimism is a moral duty."

## Tuesday, May 10, 2016

### Some Reductions Will Always Be Lossy

One of the last sessions at yesterday’s Eurocrypt was an excellent talk by Tibor Jager, who presented his paper On The Impossibility of Tight Cryptographic Reductions, which is joint work with Christoph Bader, Yong Li and Sven Schäge.

Recall that we often prove the security of a public key primitive by constructing an algorithm, called a reduction, that is able to violate some complexity assumption $A$ (like CDH or DDH) if it has access to an adversary that breaks the primitive. So if $A$ is true, then such an adversary cannot exist.

In the setting where there are $n$ users, it is often the case that the probability of the reduction breaking $A$ is $\frac{1}{n}$ times the probability that the adversary breaks the primitive. This is what we mean when we say that the reduction is not “tight”: it is essentially $n$ times easier for an adversary to break the primitive than for the reduction to violate $A$. If $n$ is very large, then we must choose very large parameters to be sure that the primitive is unbreakable with these parameters (assuming $A$ is true). So tight reductions are highly desirable in practice.
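To make the cost of a loss factor of $n$ concrete (a standard rule of thumb, not a calculation taken from the paper): a reduction losing a factor $n$ means the primitive can be up to $n$ times easier to break than the assumption $A$, so the parameters must supply roughly $\log_2 n$ extra bits of hardness for $A$.

```python
import math

def required_security_bits(target_bits: int, n_users: int) -> int:
    """Bits of hardness the assumption A must provide so that a
    reduction with loss factor n_users still guarantees target_bits
    of security for the primitive."""
    return target_bits + math.ceil(math.log2(n_users))
```

For example, with $2^{30}$ users and a 128-bit security target, the assumption must be set up at roughly 158-bit hardness, which is why tight reductions matter for parameter choices.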

The paper presented yesterday shows that there are many public key primitives for which no tight reductions can exist. We will illustrate this here with the example of an encryption scheme, but the result easily generalises to other primitives.

Consider the following multi-user security game $G$ for an encryption scheme: the adversary is given $n$ public keys $pk_1, \dots, pk_n$, outputs an index $i$ between 1 and $n$, receives the secret keys $sk_j$ for $j \in \{1, \dots, n\} \setminus \{ i \}$, and wins the game if it outputs a valid secret key for $pk_i$.

This looks like an odd game and it corresponds to a very weak notion of security, but showing that no tight reduction exists for $G$ implies that no tight reduction exists for a more natural game: $n$-key IND-CPA where the adversary can adaptively corrupt up to $n-1$ secret keys and query its test oracle using a single uncorrupted key.

We will suppose that a reduction from $G$ to an assumption $A$ exists, and use the reduction itself to violate $A$ directly, without accessing an adversary against $G$. Instead, our meta-reduction will simulate an ideal adversary against $G$ and violate $A$ using the reduction's output from interacting with the simulated adversary. To do the simulation, we essentially replay the reduction $n$ times, supplying a different index $i$ each time and storing the secret keys returned by the reduction, provided that they are actually valid for the public keys. Then it is trivial to “win” $G$ by simply outputting one of these stored secret keys, so our meta-reduction behaves like a very powerful adversary against $G$.
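The replay strategy can be illustrated with a deliberately trivial toy model. Here a secret key is "valid" simply if it equals the public key, and the reduction is modelled as a rewindable black box; the real argument of course deals with actual key pairs and with reductions that may answer dishonestly.

```python
# Toy model: public keys are the integers 0..n-1 and sk is "valid"
# for pk exactly when sk == pk.  A (rewindable) reduction is any
# callable taking the adversary's chosen index i and returning a
# dict of claimed secret keys for the indices j != i.

def meta_reduction(reduction, n: int) -> dict[int, int]:
    """Replay the reduction once per index i, storing every valid
    secret key it hands out.  If the reduction answers honestly for
    more than one index, this recovers a secret key for every user,
    allowing the meta-reduction to simulate an ideal adversary."""
    stored: dict[int, int] = {}
    pks = list(range(n))           # toy public keys
    for i in pks:                  # rewind and choose index i
        answer = reduction(i)      # reduction reveals sk_j for j != i
        for j, sk in answer.items():
            if sk == pks[j]:       # keep only keys that verify
                stored[j] = sk
    return stored
```

Each index $j$ is withheld in only one of the $n$ runs, so an honest reduction ends up surrendering all $n$ secret keys to the meta-reduction.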

One of two things can happen: either the reduction answers honestly for exactly one index $i$, in which case our rewinding strategy won’t work but the reduction isn’t tight; or our meta-reduction perfectly simulates an ideal adversary, so we violate $A$. So either the reduction isn’t tight, or $A$ is false.

Many of the technical details are omitted here, but it’s worth pointing out one of the conditions that is necessary for this argument to go through. Simply forwarding valid secret keys provided by the reduction doesn’t quite simulate an ideal adversary, since these secret keys could be “non-uniform” in some way. So there must be an efficient algorithm that takes a public key $pk$ and a valid secret key $sk$ and uniformly samples from all the valid secret keys for $pk$. The easiest way to achieve this is if there is exactly one valid secret key for each public key: the “resampling” algorithm just returns its input.

I find this result very neat. It represents the intersection of theoretical work and practical concerns; often the two are very different things.

## Monday, May 9, 2016

### Eurocrypt 2016: Sanitization of FHE Ciphertexts

This year's Eurocrypt kicked off earlier in sunny Vienna. In today's last session, Léo Ducas talked about how to do your laundry homomorphically (and efficiently), which is joint work with Damien Stehlé.

Recall the aims of Fully Homomorphic Encryption: Alice has some data on which she wishes to compute some circuit $C$. However, she doesn't necessarily want to perform the computation herself, so she wants to outsource it. FHE allows her to outsource computations safely, by which we mean we want to preserve data privacy.

In this setting, Alice sends over her ciphertexts $c_i$ along with a circuit $C$. Bob then computes $c = Evaluate_{pk}(C, c_i)$ and sends $c$ back. In their work, the authors add the requirement that both parties preserve privacy - Alice's data is confidential, but so is Bob's circuit. As a motivation, imagine Bob to be a fiscal optimization consultant. The aim of this work is to efficiently obtain circuit privacy from existing FHE; in particular, we do not want the distribution of a ciphertext to depend on the circuit which led to it via homomorphic evaluation. This is assuming Alice to be honest-but-curious, meaning that we assume that the public key $pk$ and the ciphertexts are honestly formed. The authors combine previous techniques of bootstrapping and re-randomization to obtain a sanitization process.

The current method for doing the above is the so-called "flooding" of a ciphertext [Gentry09]. The drawback of this technique is the requirement of aggressive LWE assumptions (sub-exponential approximation factors) and bad asymptotic parameters.

In this work, the authors instead propose the soak-then-spin technique, which consists of bootstrapping several times and injecting some entropy in between each bootstrapping step. This allows for more lenient LWE hardness assumptions and smaller parameters. The downside is the repetitive bootstrapping operations, although the cost of such an operation has decreased with recent work [AP14][DM15].

They define WASH and SANITIZE algorithms and show them to be message preserving. This means that applying the algorithm and then decrypting, or decrypting without applying the algorithm, leads to the same result. Moreover, if two ciphertexts decrypt to the same plaintext, then the statistical distance between the sanitised (or washed) versions of the two ciphertexts is negligible. In terms of practical results, in HElib, they require around 3 bootstrapping operations.
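The overall sanitization loop can be sketched with a deliberately oversimplified toy FHE. The function names and the (message, noise) ciphertext model below are mine, purely for exposition; they do not correspond to the paper's actual algorithms.

```python
import secrets

# Toy ciphertext model: (message bit, noise), where `noise` stands in
# for everything in a real ciphertext that could leak the circuit
# that produced it.

def encrypt_zero() -> tuple[int, int]:
    """Fresh encryption of zero, carrying fresh randomness."""
    return (0, secrets.randbits(32))

def add(c1, c2):
    """Homomorphic XOR of two toy ciphertexts."""
    return (c1[0] ^ c2[0], c1[1] ^ c2[1])

def bootstrap(c):
    """Toy bootstrap: re-encrypts, collapsing accumulated noise."""
    return (c[0], 0)

def sanitize(c, rounds: int = 3):
    """Soak-then-spin sketch: inject fresh entropy (re-randomise with
    an encryption of zero), then bootstrap; repeat a few times."""
    for _ in range(rounds):
        c = bootstrap(add(c, encrypt_zero()))
    return c
```

In the toy model the sanitized ciphertext depends only on the message, which is the shape of the guarantee the real WASH and SANITIZE algorithms provide up to negligible statistical distance.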

If you haven't had time to read this post and/or see the talk, here is Léo's TL;DR:
• Put the ciphertext in water
• Wash it
• Rinse it
• Repeat three times!

References:
[AP14] J. Alperin-Sheriff and C. Peikert. Faster bootstrapping with polynomial error. In Proc. of CRYPTO, volume 8616 of LNCS, pages 297–314. Springer, 2014.

[DM15] L. Ducas and D. Micciancio. FHEW: bootstrapping homomorphic encryption in less than a second. In Proc. of EUROCRYPT, volume 9056 of LNCS, pages 617–640. Springer, 2015.

[Gentry09] C. Gentry. A fully homomorphic encryption scheme. PhD thesis, Stanford University, 2009. Manuscript available at http://crypto.stanford.edu/craig.

## Thursday, May 5, 2016

### Side channel research in 2015: overview, statistics, and lots of new ideas!

This is the second time that we have produced a research review (thanks to everyone who commented on last year's effort!). Our review covers the many varied contributions to side channel research in the year 2015, and features five research themes:
• Efficiently extracting all information from leakage traces
• Furthering our understanding of fundamental statistical methods in the context of leakage attacks
• Efficient key rank computation and enumeration
• Attacking commodity devices
• Lightweight ciphers
I want to give credit to the effort made by my colleagues (RAs and PhD students) who read and reread last year's published papers and helped me put together a (reasonably) comprehensive review of the main themes of last year's research results. Without them this would not have been possible.

For what it's worth, we have also updated citation counts and other quantitative information about the papers (if you think that we missed something, then please let us know).

You can find the review on the SILENT website.