Thursday, October 30, 2014

SPA and the AES key schedule

Today's study group was given by Valentina on the 2014 CHES paper titled "Simple Power Analysis on the AES Key Expansion Revisited" by Christophe Clavier, Damien Marion and Antoine Wurcker from the Universite de Limoges in France.

To briefly recap, "simple power analysis" (SPA) is the rather misleading moniker for the class of methods that analyse the side-channel information contained within a small number of traces captured during the encryption of a single pair of plaintext and key material. The moniker is misleading because these methods are anything but simple to perform---the amount of exploitable information leakage contained within a single trace corresponding to the encryption or decryption of a fixed input pair is far, far smaller than what can be obtained by capturing a large number of traces for a variety of input pairs and exploiting them using differential power analysis (DPA).

In the side-channel community there's a growing shift in perception towards viewing side-channel analysis as an auxiliary phase in an enhanced global key search, rather than a stand-alone ‘win-or-lose’ attack. DPA attacks use the additional information available in the larger set of traces and aim to reduce the final enumeration effort to as small an amount as possible. SPA attacks instead face the challenge of having to exploit all the available information in the trace and perform a potentially demanding enumeration phase.

The work of Clavier et al. explores attacks on the AES key scheduling algorithm. It is sufficient for the purposes of this blog to know that the AES key schedule (assuming AES-128) takes one 16-byte secret key and expands it to 11 16-byte round keys, the first of which is the same as the input secret key. The authors explore two masked implementations of the key schedule algorithm---masking aims to combine random values (masks) with sensitive intermediate values, breaking any relationship between the intermediate values and the observed side-channel information. The first is termed "11-byte entropy boolean masking": it generates 11 individual random bytes, each of which masks all of the bytes within one round key (the 16 bytes of a single round key are masked using the same random byte). The second is termed "16-byte entropy boolean masking" and is essentially orthogonal to the 11-byte scheme: each byte position within a round key is masked with a different random byte, but every round key is masked by the same 16 random bytes. In an ideal case, each of the 11 x 16 = 176 sensitive bytes would be masked by a fresh random byte---the authors argue that their 11- and 16-byte schemes are relevant in scenarios where a device does not have enough memory or available entropy to store or generate that many random values.
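
To make the two schemes concrete, here is a minimal Python sketch (my own illustration, not the authors' code) of how each would mask the 176 bytes of an expanded AES-128 key, together with the Hamming-weight leakage assumed in the attacker model described in the next paragraph:

    import random

    HW = [bin(x).count("1") for x in range(256)]   # Hamming-weight lookup table

    def mask_11_byte(round_keys):
        # One fresh random byte per round key; all 16 bytes of that round key share it.
        masks = [random.randrange(256) for _ in range(11)]
        return [[b ^ masks[i] for b in rk] for i, rk in enumerate(round_keys)]

    def mask_16_byte(round_keys):
        # One fresh random byte per byte position; every round key reuses the same 16 masks.
        masks = [random.randrange(256) for _ in range(16)]
        return [[b ^ masks[j] for j, b in enumerate(rk)] for rk in round_keys]

    # Toy "expanded key": 11 round keys of 16 bytes each (random, not a real key schedule).
    round_keys = [[random.randrange(256) for _ in range(16)] for _ in range(11)]

    # The attacker is assumed to observe the Hamming weight of each masked key byte.
    leakage = [[HW[b] for b in rk] for rk in mask_11_byte(round_keys)]
    print(leakage[0])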

SPA attacks on the key schedule attempt to reduce the size of the set of possible secret keys as much as possible. In this work, the authors assume that the attacker knows the Hamming weight of each key byte XORed with its mask (a common leakage model is that the observed side-channel information is proportional to the Hamming weight of an intermediate value). This is a strong simplifying assumption---in practice, an attacker would have to profile the key schedule on a second device to work towards an approximation of the side-channel leakage.

The primary contribution of the paper is an adapted "guess and backtrack"-style algorithm for reducing the set of possible keys by exploiting relationships between key bytes within the key schedule, given full knowledge of the leakage of the targeted intermediate variable (each key byte XORed with its mask). The additional enumeration effort imposed by the presence of the masks is shown to be manageable: with up to 30 trace measurements, the attack drastically reduces the remaining search space in a relatively short amount of time. The attacks are further analysed in the presence of a shuffling countermeasure (the order in which the key bytes are combined with the masks is randomised within each 4-byte column), and the authors show that they can adapt the algorithm to explore all permutations of the ordering, taking on average several hours to succeed. With the added assumption of being able to introduce faults in specific parts of the algorithm, this time can be reduced to the order of minutes.

The techniques presented for exploiting the information leakage are intricate and clever. The work motivates a natural follow-up question for this area of research: how to relax the assumption that the attacker has full knowledge of the information leakage that occurs.

Friday, October 24, 2014

Study group: Lattice-based Digital Signatures

This week's study group was given by Emmanuela on the subject of digital signature schemes based on lattices. Because not everyone likes lattices as much as I do, Emmanuela decided to first convey some of the history of lattice signatures to the audience. The seminal work that everyone refers to when they talk about any sort of lattice-based cryptography is of course Ajtai's paper from '96 on the connection between the average and worst cases of certain lattice problems. Around the same time, the NTRU and GGH cryptosystems were proposed, providing both encryption and digital signature schemes. However, both schemes had their initial issues, and the security of the digital signature schemes in particular proved problematic. In hindsight it is pretty clear that the problem lies with the 'noise' distribution: signatures leak information on the secret key.

The next steps towards the security of lattice-based signature schemes were taken in '08, when two independent works described schemes that are provably secure based on the hardness of standard lattice problems. The first, by Gentry, Peikert and Vaikuntanathan, follows the hash-and-sign paradigm and includes a useful method to sample from the discrete Gaussian distribution, which is used in most modern lattice-based crypto schemes. As the word 'hash' implies, the security proof for this scheme is in the random oracle model. The second scheme, by Lyubashevsky and Micciancio, is in the standard model but only provides one-time secure signatures. These can be converted into fully secure signatures using a tree construction, which requires a logarithmic number (in the security parameter) of applications of the one-time scheme.

These two works inspired two different lines of research. One line focuses on getting the best efficiency in the random oracle model, whereas the other focuses on getting security in the standard model while maintaining a good asymptotic efficiency as well. The focus paper of the study group was in this second line of research: Improved Short Lattice Signatures in the Standard Model by Ducas and Micciancio from this year's Crypto. It combines the 'vanishing trapdoor' technique due to Boyen and the 'confined guessing' method due to Böhl et al. For their lattice trapdoors, they use the work of Micciancio and Peikert from Eurocrypt '12. This non-trivial combination leads to a scheme where the signatures are short (consisting of only one vector) at the cost of having keys consisting of a logarithmic number of vectors. They also propose a stateful scheme which reduces the key sizes to a log log number of vectors and also tightens the reduction, removing a factor introduced by the confined guessing stage as well as tightening the approximation factor of the underlying lattice problem. Interestingly, the schemes by Ducas and Micciancio require the additional algebraic structure of ideal lattices, whereas previous works only use this structure for the efficiency improvement.

In conclusion, the result is a new scheme that compares favourably to previous schemes by either allowing for smaller signatures or smaller keys. But things move fast in the lattice world, as there is already a new paper on the ePrint archive that reduces the keys to a constant number of vectors, at the cost of a bigger approximation factor in the underlying lattice problem. It is still possible to choose parameters such that this approximation is polynomial, but it is also possible to pick them less conservatively, resulting in a subexponential approximation factor. It will be interesting to see whether such choices survive future improvements to cryptanalysis.

Wednesday, October 22, 2014

52 Things: Number 3: Computational and storage power of different form factors

This is the third in a series of blog posts to address the list of '52 Things Every PhD Student Should Know To Do Cryptography'. The set of questions has been compiled to give PhD candidates a sense of what they should know by the end of their first year. We will be presenting answers to each of the questions over the next year, one per week, and I am the student assigned to the third question:
Q3: Estimate the relative computational and storage capabilities of
  • a smart-card
  • a micro-controller (i.e. a sensor node)
  • an embedded or mobile computer (e.g., a mobile phone or PDA)
  • a laptop- or desktop-class computer.
To measure the computational capability of a device we could assess the clock speed of its processors. This may be misleading if the processor enables some form of parallelism---two cores running at 2 GHz obviously possess more computational power than a single core running at 2 GHz, and so finding a direct quantitative measure is not a realistic expectation. For specific devices like general purpose graphics cards, often the total FLOPS (floating point operations per second) the device is capable of sustaining is reported (for either single or double precision arithmetic) but even this measure is not a particularly reliable choice when applied to any given problem---indeed, some services facilitate a comparison by benchmarking the performance of different devices on a variety of problem instances---see, for example, CompuBench. Fortunately the range of capabilities of the devices included in the question makes a sufficient answer less dependent on quantitative metrics.
A measure for the storage capabilities of each device is much simpler to find: we can simply compare the approximate number of bytes of information the device is capable of holding on permanent storage.
A smart-card is the least computationally powerful device: clock speeds obviously vary between implementations, but one might expect to see a core speed of around 20 MHz. In terms of storage, a typical smart-card might have around 2 kilobytes (KB) available.
A microcontroller is "a small computer on a single integrated circuit containing a processor core, memory, and programmable input/output peripherals" [1]. The range of storage and compute capability available will vary significantly according to the exact definition of microcontroller, but taking the suggested sensor node as an example, a typical microcontroller is likely to have similar computational capabilities to a smart-card and slightly more storage available, perhaps on the order of a few kilobytes to a few megabytes.
A mobile computer such as a mobile phone has significantly more storage and computing power, and the amount available is increasing rapidly over time. Taking the 2008 iPhone and the 2013 Nexus 5 as examples, the iPhone used a 412 MHz 32-bit RISC ARM core, while the Nexus 5 uses a 2.3 GHz quad-core processor. In terms of storage, if we ignore the ability of some phones to use removable storage, then a high-end phone in 2013 might be expected to provide on the order of 16 to 32 gigabytes (GB) of storage.
Finally, most laptop or desktop class computers are likely to have more processing power than a mobile phone: the high-end Intel "Haswell" i7-4790K processor contains 4 cores each clocked at 4 GHz, and the AMD "Piledriver" FX-9590 CPU contains 8 cores at 4.7 GHz---note that a direct comparison between these two processors requires more than just assessing core counts and clock speeds! There are other factors that can affect the computing capabilities of a desktop or laptop computer---in particular, the addition of a graphics processing unit can, for certain problems, provide a large increase in performance. The storage capacity of a laptop or desktop can vary tremendously, but a typical amount of storage in a consumer machine might be between hundreds of gigabytes and several terabytes (TB)---the largest single hard drive capacities are now around 8 TB.
  • [1] https://en.wikipedia.org/wiki/Microcontroller

Friday, October 17, 2014

Study group: witness-indistinguishable proofs (2014-10-16)

Zero-knowledge (ZK) proofs are a fairly common object in cryptography. What is less commonly known is that zero-knowledge does not compose well. For example, for every interactive ZK prover/verifier pair $(P, V)$, you can build another pair $(\overline P, \overline V)$ that is still a ZK proof for the same language, but for which running two prover/verifier instances in parallel leaks a witness.

Back in the early days of ZK, Feige and Shamir came up with an alternative notion called witness indistinguishability (WI). This says that (for some NP language) if $v, w$ are two witnesses to a statement $x$, then a proof using $v$ is indistinguishable from one using $w$. For a language like "discrete logarithms" this property holds trivially, since each statement has exactly one witness, but once there are several witnesses it becomes interesting. For example, a WI proof of a witness to a Pedersen commitment $G^x H^r$ is guaranteed not to reveal $x$, just like the original commitment itself. Moreover, WI is closed under composition. In fact, you can achieve a strong (and composable) form of WI that is information-theoretic and holds even if the verifier knows the witnesses in question, i.e. the proof is statistically independent of the witness.
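
To see why this matters for Pedersen commitments, here is a standard one-line calculation (writing $\alpha = \log_H G$): for any candidate value $x'$, setting $r' = r + (x - x')\alpha$ gives $G^{x'} H^{r'} = G^{x'} H^r (H^\alpha)^{x - x'} = G^{x'} G^{x - x'} H^r = G^x H^r$, so every $x'$ is consistent with the commitment, and a statistically WI proof of an opening therefore cannot single out which $x$ was actually used.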

The second topic we looked at was ZAPs. No-one seems to know what ZAP stands for, but a ZAP is a two-round WI proof in the plain model (plain ZK would require at least three rounds). The idea: start with a non-interactive ZK scheme in the "common random string" (CRS) model. In round 1, the verifier picks many random strings $r_1, \ldots, r_k$. In round 2, the prover picks a single random $r$ and sets $c_i = r_i \oplus r$ to obtain $k$ different CRS values. The prover then sends a CRS-NIZK proof under each of these values; the verifier accepts if all of them verify. An argument about the probability of a proof of a false statement going through under a random CRS then shows that the soundness error of this construction is negligible in $k$.
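
Sketching that soundness argument (a rough version, not spelled out in the session): suppose a proof of a false statement verifies under a uniformly random CRS with probability at most $\epsilon < 1$. For any fixed shift $r$, the probability over the verifier's choice of $r_1, \ldots, r_k$ that all $k$ derived values $c_i = r_i \oplus r$ admit accepting proofs is at most $\epsilon^k$; a union bound over the $2^{|r|}$ possible shifts the prover could choose bounds the overall soundness error by $2^{|r|} \cdot \epsilon^k$, which is negligible once $k$ is sufficiently large (for instance $k \geq |r| + \lambda$ when $\epsilon \leq 1/2$).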

At CRYPTO '03, Barak et al. further showed how to derandomise ZAPs.

Our final application is a 3-round OT scheme. To transfer one of two bits, the receiver picks an RSA modulus $N = pq$. The sender sends a random string $r$ (the first ZAP round) and the receiver replies with two random elements $y_0, y_1$ together with a ZAP, with respect to $r$, that at least one of them is a quadratic residue $\bmod N$. The sender then picks two random $x_0, x_1$ and sends $y_0^{b_0} \cdot x_0^2$ and $y_1^{b_1} \cdot x_1^2$. Knowing the factorisation of $N$, the receiver can recover the bit $b_i$ for which $y_i$ is a non-residue, by testing whether the corresponding value is a quadratic residue. To OT a whole bitstring, this protocol can be run for all bits in parallel; this is where it is important that the ZAP (which is WI) still works under concurrent composition.
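
A minimal Python sketch of the quadratic-residuosity mechanics (toy parameters, my own illustration; the real protocol additionally needs the ZAP, and more care over how the $y_i$ are chosen so that the sender cannot tell the residue from the non-residue):

    import random

    p, q = 10007, 10009               # toy primes; the receiver knows the factorisation
    N = p * q

    def is_qr(a):
        # Quadratic residuosity mod N is easy to decide given p and q (Euler's criterion).
        return pow(a, (p - 1) // 2, p) == 1 and pow(a, (q - 1) // 2, q) == 1

    def random_square():
        return pow(random.randrange(2, N), 2, N)

    def random_non_square():
        while True:
            a = random.randrange(2, N)
            if not is_qr(a):
                return a

    # Receiver wants bit number c: y_c is a non-residue (recoverable), y_{1-c} a residue (hidden).
    c = 1
    y = [None, None]
    y[c], y[1 - c] = random_non_square(), random_square()

    # Sender holds bits b_0, b_1 and masks each one with a fresh square.
    b = [0, 1]
    z = [(pow(y[i], b[i], N) * random_square()) % N for i in (0, 1)]

    # Receiver recovers b_c: since y_c is a non-residue, z_c is a square iff b_c = 0.
    print((0 if is_qr(z[c]) else 1) == b[c])   # True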


Thursday, October 16, 2014

52 Things: Number 2: What is the difference between a multi-core processor and a vector processor?


On the face of it, you may be confused as to what the difference is between these two processors. After all, you may be familiar with terms like parallel computing and have come across these two different types of processor. So what are the differences between them? This is the question posed in this week's instalment of '52 Things Every Cryptography PhD Student Should Know'. But before we get into the nitty gritty of it, why don't we first have a look at the concept both of these processors are part of, namely parallel computing.

What is parallel computing?


Before answering this question we first need to consider the conventional "serial" model of processing. Let's do so by imagining some problem we need to solve. Serial computing solves this problem by viewing it as a sequence of steps (instructions) which the processor deals with in order. The processor executes each of the instructions and, at the end, the answer comes out and the problem is solved. Whilst this is a perfectly good way of solving the problem, it does imply a bottleneck in the speed of solving it: namely, the speed of the processor at executing the individual instructions. This is fine if the problem isn't too large, but what happens when we have to deal with larger problems or want to compute things faster? Is there a way of increasing the speed of computation without being limited by the speed of a single processor?

The answer, as you might have guessed, is yes, and it comes in the form of something called parallel computing. Parallel computing breaks the problem we are trying to solve down into smaller problems, each of which can be computed separately at the same time. In this way, the problem is distributed over different processing elements which perform each of these different sub-problems simultaneously, providing a potentially significant increase in the speed of computation---the amount of speed-up depends on the algorithm and can be determined by Amdahl's law [1]. So how does this all work? How can you process things in such a way? Well, two solutions to the problem are multi-core and vector processors.
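
As a concrete illustration of Amdahl's law (a worked example, not part of the original question): if a fraction p of the work can be parallelised and the rest is inherently serial, the speed-up on n processors is at most S(n) = 1 / ((1 - p) + p/n). With p = 0.9 and n = 4 that gives S(4) = 1 / (0.1 + 0.225) ≈ 3.1, rather than the naive factor of 4.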

What is a multi-core processor?


A multi-core processor is a single computing component that carries out parallel computing by using multiple serial processing cores to do different things at the same time. The sub-problems of the bigger problem discussed earlier are each solved by a separate core, allowing programs to be computed in parallel. It's like having multiple people working on a project, where each person is given a different task but all are contributing to the same project. This might take some extra organising, but the project gets completed faster overall.
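
A small Python sketch of this idea (illustrative only): the work is split into chunks handed to a pool of worker processes, one per core, and the partial results are combined at the end.

    from multiprocessing import Pool

    def partial_sum(chunk):
        # Each worker (core) solves one sub-problem independently.
        return sum(x * x for x in chunk)

    if __name__ == "__main__":
        data = list(range(1_000_000))
        chunks = [data[i::4] for i in range(4)]            # split the problem four ways
        with Pool(processes=4) as pool:
            total = sum(pool.map(partial_sum, chunks))     # combine the partial results
        print(total)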

What is a vector processor?


A vector processor is a processor that executes single instructions (as in a serial processor) but carries them out on multiple data elements arranged in one-dimensional arrays (unlike a standard serial processor, which operates on single data elements). The idea here is that if you are doing the same thing many times to different pieces of data in a program, rather than executing the instruction once for each piece of data, why not execute it once over all of the data? The acronym SIMD (Single Instruction Multiple Data) is often used to denote instructions that work in this way.
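
A rough Python illustration of the same idea, using NumPy as a stand-in for vector hardware (NumPy dispatches whole-array operations to compiled, often SIMD-accelerated, code):

    import numpy as np

    data = np.arange(1_000_000, dtype=np.float64)

    # "Serial" view: the same instruction issued once per element.
    scalar = [x * 2.0 + 1.0 for x in data]

    # "Vector" view: the instruction expressed once over the whole array.
    vector = data * 2.0 + 1.0

    print(np.allclose(scalar, vector))   # True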

What is the difference?


So that's the general idea; let's sum up with an example. Say we want to roll 4 big stones across a road and it takes one minute to do each roll. The serial processor rolls them one by one and so takes four minutes. The multi-core processor with two cores has two people to roll stones, so each rolls two stones and it takes two minutes. The vector processor gets a long plank of wood, puts it behind all four stones and pushes them all at once, taking one minute. The multi-core processor has multiple workers; the vector processor has a way of doing the same thing to multiple things at the same time.

[1] http://en.wikipedia.org/wiki/Amdahl%27s_law

Friday, October 10, 2014

Compiler-based side-channel application and masking

This week's study group was led by David McCann and focused on the paper "Compiler-based Side Channel Vulnerability Analysis and Optimized Countermeasures Application" [1], presented at the Design Automation Conference (DAC) 2013. At a high level, the authors consider the output of each executed instruction and determine, for each individual bit, the minimum number of key bits mixed into the result. Using this information, the compiler decides (based on a threshold) whether or not to mask the intermediate data.

Consider a toy example of the AddRoundKey operation from AES, $s = k \oplus m$, where $m$ is the input matrix, $k$ is the key matrix and $s$ is the resulting state matrix. Each bit of $s$ contains only a single bit of keyed information. The authors define security as a threshold on the minimum number of key bits affecting each intermediate state bit. The exception is an intermediate state bit that has no relation to the key bits; in this case, the security is considered infinite.
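
A tiny Python sketch (my own, not the paper's tooling) that makes the "one key bit per state bit" observation explicit for a single byte of AddRoundKey, by flipping each key bit and seeing which output bits change:

    def key_bits_per_output_bit(k_byte, m_byte):
        counts = []
        for bit in range(8):
            base = (k_byte ^ m_byte) >> bit & 1
            # Count the key bits whose flip changes this output bit.
            affected = sum(
                1 for i in range(8)
                if (((k_byte ^ (1 << i)) ^ m_byte) >> bit & 1) != base
            )
            counts.append(affected)
        return counts

    print(key_bits_per_output_bit(0x2B, 0x32))   # [1, 1, 1, 1, 1, 1, 1, 1]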

The authors incorporate the additional stages, referred to as the "security-oriented data flow analysis" (SDFA), into the LLVM compiler framework. This has some immediate advantages over applying masks directly in the source code: in particular, if the source code applies a masking scheme and the compiler is clever enough to notice, the masking process may be identified as unnecessary computation and optimised out. In addition, only the vulnerable intermediate bits are identified for masking, rather than the application as a whole.

The SDFA converts the compiler's intermediate-representation (IR) code into a control-flow graph (CFG) in which each node represents a statement in the program. The authors go on to define a set of rules for the propagation of keyed information at the inputs and outputs of the and, or, cmp, mul, div, mod, store, load and shift operations. I could define all the rules here, but that would be almost equivalent to rewriting the paper, so if you are interested, please read the full text [1].
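
Purely to convey the flavour (these are not the paper's rules, just a simplified guess at the sort of thing such an analysis tracks): one can imagine each IR bit carrying a set of key-bit labels, with each operation defining how the label sets of its operands combine.

    # Hypothetical, simplified propagation rules -- NOT the rule set from the paper.
    def prop_xor(a_labels, b_labels):
        # XOR mixes the key bits influencing either operand bit.
        return a_labels | b_labels

    def prop_and(a_labels, b_labels):
        # A conservative analysis can again take the union; a refined rule would
        # account for AND's ability to destroy influence (e.g. x & 0).
        return a_labels | b_labels

    s_bit = prop_xor({"k0[3]"}, set())   # an AddRoundKey output bit: one key bit
    print(len(s_bit))                    # "security level" 1 for this bit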

In the final section, the authors present experimental results for an implementation of AES-128. The results are compared to the implementations presented in [2] and [3]. They discuss different levels of masking and the respective code size and speed of each. The authors highlight the speed-up over existing masked implementations (roughly 2.5x).

It is a nice and neat presentation of how to exploit the compiler framework to identify and mask vulnerable intermediate data. I am on the fence about the speed-up results, seeing as the reference implementations are optimised for 8-bit processors (whereas the target hardware was a 32-bit processor). They also state that the code size increase in their work is 'acceptable' given the speed-up; however, there is a three-fold increase in size for a first-order tabulated S-box (attributed to loop unrolling) in exchange for a two-fold speed-up. Nevertheless, a nice paper with good potential for automation of side-channel countermeasures.




Plaintext Awareness and Signed ElGamal

For the first study group of the academic year, David Bernhard spoke on recent developments in the area of CCA-secure public-key encryption. After introducing the security goals of the area and some key previous results, he presented a paper by Yannick Seurin and Joana Treger from CT-RSA 2013 [0], which aims to provide such a scheme via a variant of Schnorr-signed ElGamal encryption...

ElGamal Encryption

We begin with a recap of the ElGamal encryption scheme. ElGamal is a public-key cryptosystem whose security rests on the hardness of the discrete logarithm problem (DLP). Very briefly, it uses a publicly known cyclic group $G$ (in which the DLP must be hard!) of prime order $p$, with a chosen generator $g$. The user setting up the scheme picks a secret value $x$ and sets their public key to be $h := g^x$. To encrypt a message $m$ under this public key, one sends the pair $(g^y, m h^y)$ for a fresh random $y$, which the key holder can decrypt since they know $x$: compute $(g^y)^x = h^y$ and divide it out.
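
A minimal Python sketch of schoolbook ElGamal over a toy subgroup (illustrative parameters only; real deployments use large standardised groups or elliptic curves):

    import random

    # Toy instantiation: G is the order-p subgroup of squares modulo the safe prime MOD.
    MOD = 2039                 # safe prime, MOD = 2p + 1
    p = 1019                   # prime order of G
    g = 4                      # generator of G

    x = random.randrange(1, p)          # secret key
    h = pow(g, x, MOD)                  # public key h = g^x

    m = pow(g, 123, MOD)                # message encoded as an element of G
    y = random.randrange(1, p)          # ephemeral randomness
    c1, c2 = pow(g, y, MOD), (m * pow(h, y, MOD)) % MOD

    # Decryption: recover h^y = (g^y)^x and divide it out (pow(.., -1, MOD) needs Python 3.8+).
    m_rec = (c2 * pow(pow(c1, x, MOD), -1, MOD)) % MOD
    print(m_rec == m)                   # True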

ElGamal encryption is IND-CPA secure if (and only if) the Decisional Diffie-Hellman (DDH) assumption holds [1]. However, its ciphertexts are malleable---given $(g^y, m \cdot h^y)$, anyone can produce $(g^y, t \cdot m \cdot h^y)$, a valid encryption of $t \cdot m$, without knowing $m$---and thus the scheme cannot be IND-CCA2. Indeed, extending it to such a scheme turned out not to be as simple as one might think. Various papers made progress in this direction, taking two rather different routes. One branch aimed to create a hybrid scheme (e.g. DHIES [2]), reducing security to a stronger assumption on the group and to the security of the symmetric primitive used. The method we will look at is the alternative...

 

Plaintext Awareness

A scheme is said to be plaintext aware in the Random Oracle Model (ROM-PA) if an adversary cannot generate a valid ciphertext without already knowing its plaintext. That is, it encapsulates the behaviour of a good scheme that prevents the adversary from generating valid ciphertexts other than by encrypting plaintexts herself (or doing something she knows to be equivalent to this). The technicalities of this definition are rather complex, but roughly it means that, given a ciphertext and the list of random oracle queries made to generate it, one can deduce the underlying plaintext.

Now, the key result for us is that if a scheme is both IND-CPA and ROM-PA secure, then it is IND-CCA2 secure [3].

 

Making ElGamal ROM-PA

Various earlier results had considered designing a plaintext-aware scheme by combining ElGamal with a Schnorr signature [1,4], forming a scheme this paper refers to as SS-EG. Whilst SS-EG inherits IND-CPA security from ElGamal, unfortunately it does not add plaintext awareness. The key contribution of this paper [0] was to observe that SS-EG is in some sense "very close", and to define a very similar scheme using Chaum-Pedersen proofs rather than Schnorr signatures. Denoted CPS-EG, the only real difference between the schemes is a slight modification to the variables in the ciphertext and the addition of two more variables to the random oracle query made when generating the commitment. It is the second of these differences that is critical, because the extra information supplied to the random oracle (information that an extractor is given direct access to) is sufficient to make the scheme ROM-PA.

Making ElGamal IND-CCA2


At this point, one may hope that the scheme is indeed IND-CCA2, but there remains one final twist. There are various ways one may transmit the data required for the signature, of which one traditional method is the Fiat-Shamir representation, consisting of a commitment, a challenge and the appropriate response (i.e. a non-interactive proof of knowledge). However, there are also alternative representations that are packaged more efficiently, leading to more concise ciphertexts. The particular representation chosen in the original scheme in fact failed to inherit IND-CPA security from the underlying encryption scheme [5].

Returning to the (less concise) Fiat-Shamir scheme ensures the final version of CPS-EG is indeed an IND-CCA2 secure public key encryption scheme.


References

[0] A Robust and Plaintext-Aware Variant of Signed ElGamal Encryption
Yannick Seurin and Joana Treger
CT-RSA 2013

[1] On the Security of ElGamal Based Encryption.
Yiannis Tsiounis and Moti Yung
PKC '98

[2] The Oracle Diffie-Hellman Assumptions and an Analysis of DHIES
M. Abdalla, M. Bellare and P. Rogaway
CT-RSA '01

[3] Relations among notions of security for public-key encryption schemes
M. Bellare, A. Desai, D. Pointcheval and P. Rogaway
Crypto '98

[4] A practical mix
Markus Jakobsson
Eurocrypt '98

[5] Cited by authors as 'personal communication'
Bertram Poettering
Jan 2013