Friday, January 30, 2015

Discrete Gaussian Sampling and the Quest for the Shortest Vector

The tl;dr:
When it comes to short vectors in lattices
The state-of-the-art apparatus is
Discrete Gaussian sampling
For use (for example) in
Reduced time and space cryptanalysis...
This week’s study group, presented by Joop, was on Solving the Shortest Vector Problem in 2^n Time via Discrete Gaussian Sampling, a 2014 pre-print by Aggarwal, Dadush, Regev and Stephens-Davidowitz. The lattice, to my regret, is not a creature I frequently encounter in the course of my own research, so I report here from a considerably less-than-expert perspective. But hopefully, if I stick to the high-level details then all shall be well.

The motivating theme of the paper is the Shortest Vector Problem (SVP) — an important computational problem on lattices: given a basis for a lattice, find a non-zero vector in the lattice of minimum Euclidean norm. Many cryptographic primitives (notably including breakthrough developments in fully homomorphic encryption) derive their security from the worst-case hardness of SVP and related problems.

Previous algorithms to find exact or approximate solutions to SVP fall into three main classes: enumeration, whereby basis reduction is combined with exhaustive search inside Euclidean balls (time ranging from 2^{O(n log n)} to 2^{O(n^2)}, polynomial space); sieving, whereby randomly sampled vectors are iteratively combined to generate increasingly short vectors (2^{cn} time (c > 2), exponential space); and recent proposals using the Voronoi cell of a lattice -- the region of real space in which all points lie closer to the zero vector than to any other lattice point (2^{2n} time, exponential space).

The intuition behind Discrete Gaussian Sampling (DGS) is to draw from a (narrow) distribution of lattice points centred around zero. As the width of the distribution (characterised by the parameter s) decreases, the samples become more and more concentrated on short vectors; sufficiently many samples at a sufficiently small s will eventually land on a solution to SVP. A simple idea but, of course, far from simple in practice. In particular, for any given lattice there is a smoothing parameter s* -- informally, the 'tipping point' at which the distribution begins to 'look like' a continuous Gaussian rather than a stack around the central point. Efficient algorithms are known for s much larger than s*, but these are inadequate to solve exact lattice problems. A key contribution of Aggarwal et al. is an algorithm that samples for any parameter s > 0; it returns 2^{n/2} independent discrete-Gaussian-distributed vectors using 2^{n+o(n)} time and space. They add to this a second algorithm for 'above smoothing' sampling with a substantially improved time and space complexity (2^{n/2} -- the current record for the fastest provable running time for a hard lattice problem), and show that it can be used to approximate decision SVP to within a factor of 1.93.

Here's where I muster all my expertise-less enthusiasm and try to explain the (high level) details of the algorithms without saying anything too stupid…

Both algorithms operate by iteratively combining vectors drawn from carefully chosen large-parameter distributions (from which it is well known how to sample efficiently) in such a way that the parameter of the distribution of the combined vectors is progressively lowered. As an aid to intuition, consider, for a moment, the counterpart continuous case: a (continuous) Gaussian-distributed vector divided by two is similarly Gaussian distributed with half the width. In the discrete case, though, dividing by two generally does not produce another vector in the lattice. A 'fix' would be to sample from L with parameter s', keep those samples that also lie in 2L, and divide them by 2 -- but the 'loss factor' of doing so can be high (the probability that a sample from L is also in 2L can be as small as 2^{-n}), so one needs to be cleverer.
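To put a number on that loss factor, here is a small numerical sketch (mine, not from the paper) in one dimension: it computes how much of the discrete Gaussian mass on Z lands on the sub-lattice 2Z, truncating the infinite sum at a point where the tail is negligible.

```python
import math

def rho(x, s):
    # Gaussian weight exp(-pi x^2 / s^2), the usual lattice convention
    return math.exp(-math.pi * x * x / (s * s))

def even_fraction(s, B=100):
    # Fraction of the discrete Gaussian mass on Z (truncated to [-B, B],
    # where the tail is negligible) that lands on the sub-lattice 2Z.
    total = sum(rho(x, s) for x in range(-B, B + 1))
    even = sum(rho(x, s) for x in range(-B, B + 1) if x % 2 == 0)
    return even / total

# Wide (above-smoothing) distributions split their mass roughly in half,
# so in n dimensions only ~2^-n of the samples survive the 'keep those in
# 2L' filter; narrow ones concentrate on the (even) zero vector.
print(even_fraction(8.0))   # ~0.5
print(even_fraction(1.0))   # ~0.92
```

In n dimensions the per-coordinate factor of roughly one half multiplies up to the 2^{-n} loss mentioned above.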

The 2^n-time below-smoothing sampler looks for pairs of vectors sampled from lattice L with parameter s whose sum is in 2L (equivalently: vectors in the same coset mod 2L). The 'greedy' combiner which pairs as many as possible in each coset-related bucket would have a loss factor of just 2 in reducing from the sampled vectors to the shorter combined vectors. However, below the smoothing parameter the required distributional properties are lost in the combining step. The workaround to produce combined vectors with the correct distribution is very involved; I caught something about coin-flips and Poisson distributions, but the sterling efforts of our presenter to render comprehensible the technical details did not take sufficient root in my brain for me to attempt the same here -- I refer you to the paper at this point! From an input sample of order 2^n, combining multiple times according to the method they devise reduces s as required with a total loss factor of 2^{n/2}, thus outputting on the order of 2^{n/2} sampled vectors.

The 2^{n/2}-time above-smoothing sampler is not just a tweak on the first algorithm but a surprisingly different approach supplied with its own set of clever tricks and insights. The intuitive idea is to construct a 'tower' of increasingly sparse lattices [L_0,…,L_m] where L_m = L, the lattice one wishes to sample from. Each L_{i+1} is chosen to 'lie between' L_i and 2L_i -- that is, it is a (strict) sublattice of L_i which contains 2L_i, i.e. 2L_i ⊆ L_{i+1} ⊊ L_i. Because L_0 is dense, one can sample 'easily' from it with a small parameter (2^{-m/2}·s, in fact), and combine (by summing) pairs whose sum lies in L_1 to get samples over the sparser lattice L_1 with a slightly increased parameter 2^{-(m-1)/2}·s. Repeating this process eventually produces samples from L_m = L with distribution statistically close to the discrete Gaussian with parameter s, provided s is above the smoothing parameter.

That’s all we had time for in our session -- and already, I admit, somewhat more than I have the capacity to fully absorb without some serious preliminary study (although the bits I was able to follow, I found very interesting). The rest of the paper, as well as containing all the ‘proper maths’ (including the formal reduction from SVP to DGS), covers applications, adaptations to the closest vector and other related problems, and relevant points for discussion, all in some depth. Despite the clear theoretical value of their contribution, the authors are circumspect about the practical usefulness of their algorithms relative to enumeration and sieving methods. These latter perform well heuristically in relevant scenarios, whilst the run-time bounds of DGS are tight in practice, with no trade-off available even if one wants to sample fewer than 2^{n/2} vectors.

52 Things: Number 17: Describe and compare the round structure of DES and AES.

This is the latest in a series of blog posts to address the list of '52 Things Every PhD Student Should Know To Do Cryptography': a set of questions compiled to give PhD candidates a sense of what they should know by the end of their first year. This week, we describe and compare the round structure of DES and AES.

Both DES and AES are examples of iterated block ciphers. The block ciphers obtain their security by repeated use of a simple round function. The round function takes an n-bit block and returns an n-bit block, where n is the block size of the overall cipher. The number of rounds r can either be a variable or fixed. As a general rule increasing the number of rounds will increase the level of security of the block cipher. Each use of the round function employs a round key ki (where 1 ≤ i ≤ r) derived from the main secret key k, using an algorithm called a key schedule. To allow decryption, for every round key the function implementing the round must be invertible, and for decryption the round keys are used in the opposite order that they were used for encryption. In DES the functions needed to implement the round function are not invertible, but the whole round is invertible. For AES (Rijndael) not only is the whole round function invertible but every function used to create the round function is also invertible. 
More particularly, the DES cipher is a variant of the basic Feistel cipher. The interesting property of a Feistel cipher is that the round function is invertible regardless of the choice of the function in the box marked F. To see this notice that each encryption round is given by:
Li = Ri-1
Ri = Li-1 ⊕ F(Ki,Ri-1).

Hence, the decryption can be performed via:
Ri-1 = Li
Li-1 = Ri ⊕ F(Ki,Li).

This way we can choose any function for F and still obtain an encryption function which can be inverted using the secret key. The same code/circuitry can be used for the encryption and decryption functions; we only need to use the round keys in reverse order for decryption. As a variant of the Feistel cipher design, DES includes the following distinct characteristics:
  • the number of rounds r is 16,
  • the block length n is 64 bits,
  • the key length is 56 bits,
  • the round keys K1,...,K16 are each 48 bits
  • before and after the main Feistel iteration a permutation is performed.
In summary the DES cipher operates on 64 bits of plaintext in the following manner:
  • Perform an initial permutation.
  • Split the blocks into left and right half.
  • Perform 16 rounds of identical operations (Feistel cipher). In each round, the F function consists of the following five stages:
    • Expansion Permutation: The right half of 32 bits is expanded and permuted to 48 bits.
    • Round Key Addition: The 48-bit output from the expansion permutation is XORed with the round key, which is also 48 bits in length.
    • Splitting: The resulting 48-bit value is split into eight lots of six-bit values.
    • S-Box: Each six-bit value is passed into one of eight different S-Boxes (Substitution Box) to produce a four-bit result. Each S-Box is a look-up table of four rows and sixteen columns. The six input bits specify which row and column to use. Bits 1 and 6 generate the row number, whilst bits 2, 3, 4 and 5 specify the column number. The output of each S-Box is the value held in that element in the table.
    • P-Box: We now have eight lots of four-bit outputs which are then combined into a 32-bit value and permuted to form the output of the function F.
  • Join the half blocks back together.
  • Perform a final permutation.
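As a toy illustration of why the round function F need not be invertible, here is a sketch of a balanced Feistel network whose decryption is the same routine run with the round keys reversed. This is not real DES: the expansion, S-Boxes and P-Box are replaced by a hash-based stand-in F.

```python
import hashlib

def F(key: bytes, half: bytes) -> bytes:
    # Stand-in for DES's expansion/S-Box/P-Box pipeline; Feistel imposes
    # no invertibility requirement on F, so any function will do.
    return hashlib.sha256(key + half).digest()[:len(half)]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def feistel(block: bytes, round_keys) -> bytes:
    # L_i = R_{i-1};  R_i = L_{i-1} xor F(K_i, R_{i-1})
    half = len(block) // 2
    L, R = block[:half], block[half:]
    for k in round_keys:
        L, R = R, xor(L, F(k, R))
    return R + L    # final swap makes encryption and decryption symmetric

keys = [bytes([i]) * 6 for i in range(16)]    # sixteen toy 48-bit round keys
pt = b"8 bytes!"
ct = feistel(pt, keys)
assert feistel(ct, list(reversed(keys))) == pt
```

The final swap of the two halves is what lets one routine serve both directions, exactly as described in the text.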
The DES key schedule takes the 56-bit key, which is actually input as a bitstring of 64 bits comprising the key and eight parity bits for error detection. It first permutes the bits of the key (a permutation which takes a 64-bit input and produces a 56-bit output, hence discarding the parity bits). The output of this permutation, called PC-1 in the literature, is divided into a 28-bit left half C0 and a 28-bit right half D0. Now for each round we compute:
Ci=Ci−1 ≪ pi
Di=Di−1 ≪ pi

where x ≪ pi means perform a cyclic shift on x to the left by pi positions. Finally the two portions Ci and Di are joined back together and are subject to another permutation, called PC-2, to produce the final 48-bit round key.
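A small sketch of the rotation part of the key schedule (PC-1 and PC-2 are omitted; the per-round shift amounts p_i are the standard DES values). A nice sanity check is that the shifts total 28, so after round 16 the registers return to their starting values.

```python
SHIFTS = [1, 1, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 2, 1]  # p_1 .. p_16

def rotl28(x, n):
    # cyclic left shift of a 28-bit register
    return ((x << n) | (x >> (28 - n))) & 0x0FFFFFFF

def half_schedule(C0, D0):
    # Yields (C_i, D_i) for i = 1..16; PC-2 would then extract each round key
    C, D = C0, D0
    out = []
    for p in SHIFTS:
        C, D = rotl28(C, p), rotl28(D, p)
        out.append((C, D))
    return out

halves = half_schedule(0x1234567, 0x89ABCDE)
# The shifts sum to 28, so the registers come full circle after round 16.
assert sum(SHIFTS) == 28
assert halves[-1] == (0x1234567, 0x89ABCDE)
```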
Note that a key length of 56 bits is insufficient for many modern applications, hence one often uses DES with three keys and three iterations of the main cipher. Such a version is called Triple DES or 3DES. In 3DES the key length is equal to 168 bits. There is another way of using DES three times, but using two keys instead of three, giving rise to a key length of 112 bits. In this two-key version of 3DES one uses the 3DES basic structure but with the first and third key being equal. However, two-key 3DES is not as secure as one might initially think.

More details on actual values (S-Boxes, P-Boxes and all Permutation tables) can be found in [1].

The AES (Rijndael) algorithm, unlike DES, is a block cipher that does not rely on the basic design of the Feistel cipher. However, AES does have a number of similarities with DES. It uses a repeated number of rounds to obtain security and each round consists of substitutions and permutations, plus a key addition phase. AES in addition has a strong mathematical structure, as most of its operations are based on arithmetic in the field F_{2^8}. However, unlike DES the encryption and decryption operations are distinct.
AES identifies 32-bit words with polynomials in F_{2^8}[X] of degree less than four. AES is a parametrized algorithm in that it can operate on block sizes of 128, 192 or 256 bits. It can also accept keys of size 128, 192 or 256 bits. For each combination of block and key size a different number of rounds is specified.
To make our discussion simpler we shall consider the simpler, and probably more used, variant which uses a block size of 128 bits and a key size of 128 bits, in which case 10 rounds are specified. AES operates on an internal four-by-four matrix (S(4,4)) of bytes, called the state matrix, which is usually held as a vector of four 32-bit words, each word representing a column. Each round key is also held as a four-by-four matrix [1]. The AES round function operates using a set of four operations:
  • SubBytes: There are two types of S-Boxes used in Rijndael: one for the encryption rounds and one for the decryption rounds, each being the inverse of the other. For the encryption S-Box each byte s = [s_7,...,s_0] of the state matrix is taken in turn and considered as an element of F_{2^8}. The S-Box can be mathematically described in two steps:
    1. The multiplicative inverse in F_{2^8} of s is computed to produce a new byte x = [x_7,...,x_0] (the zero byte, which has no inverse, is mapped to zero).
    2. The bit-vector x is then mapped, via an affine F_2 transformation [1], to a new bit-vector y. The new byte is given by y. The decryption S-Box is obtained by first inverting the affine transformation and then taking the multiplicative inverse.
  • ShiftRows: The ShiftRows operation in AES performs a cyclic shift on the state matrix. Each row is shifted by different offsets [1]. The inverse of the ShiftRows operation is simply a similar shift but in the opposite direction. The ShiftRows operation ensures that the columns of the state matrix ‘interact’ with each other over a number of rounds.
  • MixColumns: The MixColumns operation ensures that the rows in the state matrix ‘interact’ with each other over a number of rounds; combined with the ShiftRows operation it ensures each byte of the output state depends on each byte of the input state [1].
  • AddRoundKey: The round key addition is particularly simple. One takes the state matrix and XORs it, byte by byte, with the round key matrix. The inverse of this operation is clearly the same operation.
The AES algorithm can be described using the pseudo-code:

AddRoundKey(S, K0) 
for i = 1 to 9 do 
      SubBytes(S) 
      ShiftRows(S) 
      MixColumns(S) 
      AddRoundKey(S, Ki) 
SubBytes(S) 
ShiftRows(S) 
AddRoundKey(S, K10)

The message block to encrypt is assumed to be entered into the state matrix S. The output encrypted block is also given by the state matrix S.
The AES key schedule makes use of a round constant which we shall denote by: 
RC_i = x^{i-1} (mod x^8 + x^4 + x^3 + x + 1)
We label the round key words as (W_{4i}, W_{4i+1}, W_{4i+2}, W_{4i+3}) where i is the round number. The initial main key is first divided into four 32-bit words (k_0, k_1, k_2, k_3). The round keys are then computed by the algorithm below, where RotBytes is the function which rotates a word to the left by a single byte, and SubBytes applies the Rijndael encryption S-Box to every byte in a word [1].

W_0 = K_0, W_1 = K_1, W_2 = K_2, W_3 = K_3
for i = 1 to 10 do 
      T = RotBytes(W_{4i-1}) 
      T = SubBytes(T) 
      T = T ⊕ RC_i 
      W_{4i} = W_{4i-4} ⊕ T 
      W_{4i+1} = W_{4i-3} ⊕ W_{4i} 
      W_{4i+2} = W_{4i-2} ⊕ W_{4i+1} 
      W_{4i+3} = W_{4i-1} ⊕ W_{4i+2}
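The algorithm above can be turned into a short program. The sketch below builds the encryption S-Box from its definition (inverse in F_{2^8} followed by the affine map with constant 0x63) rather than hard-coding the table, and uses the round constants RC_1 = 0x01, doubling each round, as in FIPS-197; the only test vector assumed is the well-known first round key 62 63 63 63 for the all-zero 128-bit key.

```python
def gmul(a, b):
    # Multiplication in F_{2^8} modulo x^8 + x^4 + x^3 + x + 1 (0x11B)
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11B
        b >>= 1
    return r

def ginv(a):
    # Brute-force inverse; AES maps the non-invertible 0 to 0
    if a == 0:
        return 0
    return next(x for x in range(1, 256) if gmul(a, x) == 1)

def sbox(a):
    # Inverse in F_{2^8} followed by the affine map with constant 0x63
    x = ginv(a)
    y = 0
    for i in range(8):
        bit = ((x >> i) ^ (x >> ((i + 4) % 8)) ^ (x >> ((i + 5) % 8))
               ^ (x >> ((i + 6) % 8)) ^ (x >> ((i + 7) % 8)) ^ (0x63 >> i)) & 1
        y |= bit << i
    return y

def rot_bytes(w):      # [a,b,c,d] -> [b,c,d,a] on a 32-bit word
    return ((w << 8) | (w >> 24)) & 0xFFFFFFFF

def sub_bytes(w):      # apply the S-Box to each byte of a word
    return int.from_bytes(bytes(sbox(v) for v in w.to_bytes(4, "big")), "big")

def expand(key_words):
    # AES-128 key schedule following the algorithm in the text
    W = list(key_words)
    rc = 0x01                              # RC_1; doubled in F_{2^8} each round
    for i in range(1, 11):
        T = sub_bytes(rot_bytes(W[4 * i - 1])) ^ (rc << 24)
        W.append(W[4 * i - 4] ^ T)
        for j in range(1, 4):
            W.append(W[4 * i - 4 + j] ^ W[-1])
        rc = gmul(rc, 2)
    return W

assert sbox(0x00) == 0x63 and sbox(0x01) == 0x7C
assert gmul(0x57, 0x83) == 0xC1            # worked example from FIPS-197
W = expand([0, 0, 0, 0])
assert len(W) == 44 and W[4] == 0x62636363 # known value for the all-zero key
```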
References: [1]

Friday, January 23, 2015

52 Things: Number 16: Describe the key generation, signature and verification algorithms for DSA, Schnorr and RSA-FDH.

This is the latest in a series of blog posts to address the list of '52 Things Every PhD Student Should Know To Do Cryptography': a set of questions compiled to give PhD candidates a sense of what they should know by the end of their first year. In this week, we describe the key generation, signing and verification algorithms of DSA, Schnorr and RSA-FDH.

1. DSA
The Digital Signature Algorithm (DSA), specified in the Digital Signature Standard (DSS), was proposed by the National Institute of Standards and Technology (NIST) in 1991 [1]. The security of DSA is based on the difficulty of computing discrete logarithms, but there is no known proof of its security under a standard assumption (like DL), even in the random oracle model.

Domain Parameter Generation
  1. Select a prime number $p$, where $2^{L-1}<p<2^L$ and $L$ is a multiple of 64 and $512 \leq L \leq 1024$.
  2. Select a prime divisor $q$ of $p-1$, where $2^{159}<q<2^{160}$.
  3. Compute a generator $g$ of the subgroup of order $q$: choose a random integer $r$, where $1<r<p-1$ such that $g=r^{(p-1)/q} \ mod \ p$ and $g \neq 1$.
Key Generation
  1. Select a random integer $x$, where $0<x<q$.
  2. Compute $y = g^x \ mod \ p$.
Then the public key is $y$ and the private key is $x$. 

Signing
  1. Select a random integer $k$, where $0<k<q$.
  2. Compute $r = (g^k \ mod \ p)\  mod \ q$.
  3. Compute $s = (h(m)+x\cdot r)\cdot k^{-1}\ mod \ q$, where $h(m)$ is the hash of $m$ computed using SHA-1.
The signature on $m$ is the pair $(r, s)$.

Verification
  1. Compute $u_1 = h(m) \cdot s^{-1}\ mod \ q$.
  2. Compute $u_2 = r \cdot s^{-1}\ mod \ q$.
  3. Compute $v = (g^{u_1} \cdot y^{u_2}\ mod \ p)\ mod \ q$.
  4. Output $1$ if $v = r$, otherwise output $0$.
If $(r,s)$ is a valid signature on $m$, then we have
$g^{u_1}\cdot y^{u_2}=g^{h(m)\cdot (h(m)+x\cdot r)^{-1}\cdot k}\cdot g^{x \cdot r \cdot (h(m)+x\cdot r)^{-1}\cdot k} \ mod \ p$
$= g^{(h(m)+x\cdot r)\cdot (h(m)+x \cdot r)^{-1}\cdot k}\ mod \ p$
$= g^k \ mod \ p,$
so that $v = (g^k \ mod \ p)\ mod \ q = r$. Thus the verification succeeds.
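The algorithms above can be exercised end to end with a toy sketch: deliberately tiny parameters and a trivial stand-in for SHA-1, so this illustrates the arithmetic only, not a secure instantiation.

```python
# Toy DSA parameters: q = 101 divides p - 1 = 606, and g = 2^((p-1)/q) = 64
# has order q mod p. Far too small for any real use.
p, q, g = 607, 101, 64

def h(m: bytes) -> int:
    # Toy stand-in for SHA-1, reduced mod q
    return int.from_bytes(m, "big") % q

def sign(x, m, k):
    r = pow(g, k, p) % q
    s = (h(m) + x * r) * pow(k, -1, q) % q
    return r, s

def verify(y, m, r, s):
    w = pow(s, -1, q)
    u1, u2 = h(m) * w % q, r * w % q
    return (pow(g, u1, p) * pow(y, u2, p) % p) % q == r

x = 57                           # private key
y = pow(g, x, p)                 # public key
r, s = sign(x, b"hi", k=23)      # k fixed for the demo; it must be random!
assert verify(y, b"hi", r, s)
assert not verify(y, b"ho", r, s)
```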

2. Schnorr
The Schnorr signature is an important DLP-based signature scheme. It works in any prime-order group and its security is proven in the random oracle model under the DL assumption [2]. 

Domain Parameter Generation
  1. Select a prime number $p$.
  2. Select a prime divisor $q$ of $p-1$.
  3. Select a generator $g$ of the subgroup of order $q$.
Key Generation
  1. Select a random integer $x$, where $0<x<q$.
  2. Compute $y=g^x \ mod \ p$.
The public key is $y$ and the private key is $x$.

Signing
  1. Select a random integer $k$, where $0<k<q$.
  2. Compute $a = g^k \ mod \ p$.
  3. Compute $r = h(m \| a)$, where $m$ is the message to be signed and $h:\{0,1\}^* \rightarrow \mathbb{Z}_{q}$ is a hash function. 
  4. Compute $s = (k + r\cdot x)\ mod \ q$
The signature on $m$ is the pair $(r,s)$.

Verification
  1. Compute $v = g^s \cdot y^{-r}\ mod \ p$.
  2. Output $1$ if $h(m \| v) = r$, otherwise output $0$.
If $(r,s)$ is a valid signature on $m$, then we have
$v=g^s\cdot y^{-r}=g^{k+r\cdot x}\cdot g^{-r\cdot x}=g^k=a \ mod \ p,$
so $h(m \| v) = h(m \| a) = r$. Thus the verification succeeds.
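The same toy parameters give a compact sketch of Schnorr (again with a trivial stand-in for the hash $h$, and a fixed $k$ that a real implementation must choose uniformly at random):

```python
# Toy Schnorr signatures: g = 64 has prime order q = 101 mod p = 607.
p, q, g = 607, 101, 64

def h(m: bytes, a: int) -> int:
    # Toy stand-in for a hash of m || a, reduced mod q
    return (int.from_bytes(m, "big") + a) % q

def sign(x, m, k):
    a = pow(g, k, p)
    r = h(m, a)
    s = (k + r * x) % q
    return r, s

def verify(y, m, r, s):
    # v = g^s * y^{-r} mod p; y has order q, so -r is reduced mod q
    v = pow(g, s, p) * pow(y, (-r) % q, p) % p
    return h(m, v) == r

x = 57
y = pow(g, x, p)
r, s = sign(x, b"hi", k=23)
assert verify(y, b"hi", r, s)
assert not verify(y, b"ho", r, s)
```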

3. RSA-FDH
The RSA-FDH (full domain hash) scheme was introduced by Bellare and Rogaway in [3]. It is an RSA-based signature scheme following the hash-then-sign paradigm. It makes use of a hash function whose image is the whole of $\mathbb{Z}_N$ (hence 'full domain') to generate random-looking input to the plain RSA signing operation. This prevents the algebraic attacks on the plain RSA signature scheme and allows messages of arbitrary length to be signed, although such a hash function is hard to construct in practice. RSA-FDH can be proven EU-CMA secure in the random oracle model.

Key Generation
  1. Select two large primes $p$ and $q$. 
  2. Compute $N=p\cdot q$.
  3. Select a random integer $e$, where $1<e<\phi(N)$, such that $gcd(e,\phi(N))=1$.
  4. Compute the integer $d$, where $1<d<\phi(N)$, such that $e\cdot d = 1\ mod \ \phi(N)$.
The public key is $(N,e)$ and the private key is $(d,p,q)$.

Signing
  1. Compute $s=h(m)^d\ mod \ N$, where $m$ is the message to be signed and $h:\{0,1\}^* \rightarrow \mathbb{Z}_N$ is a hash function.
The signature on $m$ is $s$.

Verification
  1. Output $1$ if $s^e = h(m)\ mod \ N$, otherwise output $0$.
If $s$ is a valid signature on $m$, then we have
$s^e=h(m)^{d\cdot e}\ mod \ N=h(m)\ mod \ N$
Thus the verification succeeds.
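A toy sketch with the classic textbook modulus $N = 61 \cdot 53 = 3233$ and a trivial stand-in for the full-domain hash:

```python
# Toy RSA-FDH with tiny textbook parameters; utterly insecure, but it
# exercises the sign/verify equations above.
p, q, e = 61, 53, 17
N = p * q
d = pow(e, -1, (p - 1) * (q - 1))     # 2753

def h(m: bytes) -> int:
    # Toy 'full domain hash' onto Z_N; a real FDH needs a hash whose
    # image covers all of Z_N, which is the hard part in practice.
    return int.from_bytes(m, "big") % N

def sign(m):
    return pow(h(m), d, N)

def verify(m, s):
    return pow(s, e, N) == h(m)

s = sign(b"hi")
assert verify(b"hi", s)
assert not verify(b"ho", s)
```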


Friday, January 16, 2015

52 Things: Number 15: Key generation, encryption and decryption algorithms for RSA-OAEP and ECIES.

This is the latest in a series of blog posts to address the list of '52 Things Every PhD Student Should Know To Do Cryptography': a set of questions compiled to give PhD candidates a sense of what they should know by the end of their first year. We come back to 'more crypto' stuff by describing the key generation, encryption and decryption algorithms for RSA-OAEP and ECIES. 

1. RSA-OAEP
RSA-OAEP combines the RSA encryption scheme with the OAEP padding scheme. The two are often used together in the real world.

1.1 RSA[1]
RSA is one of the earliest public key encryption schemes and has been widely deployed in the real world. It is based on the assumed hardness of the RSA problem, which has been described in a previous blog post (here).

Key Generation:
  1.  Generate two large primes $p$, $q$, and compute the modulus $N = pq$.
  2.  Select a random number $e \in \mathbb{Z}_N$ s.t. $gcd(\phi(N),e)=1$, where $gcd$ stands for Greatest Common Divisor.
  3.  Since $\phi(N)$ and $e$ are co-prime ($gcd(\phi(N),e) = 1$), we can use the extended Euclidean algorithm (XGCD) to find the multiplicative inverse $d$ of $e$ modulo $\phi(N)$: $d = e^{-1} \mod \phi(N)$.
  4. We distribute $(N,e)$ as our public key and hold $(p,q,d)$ as our secret key.
Encryption:
  1. Parse the message to an integer $m \in \mathbb{Z}_N$.
  2. Compute $c = m^e \mod N$.
  3. Output $c$ as our ciphertext.
Decryption:
Before we receive the ciphertext, we precompute some values: $d \mod p-1$, $q^{-1} \mod p$, $d \mod q-1$ and $p^{-1} \mod q$.
Then upon receiving the ciphertext $c$, we
  1. Compute\begin{equation}m = ((c ^{d \mod p-1}\mod p)q({q^{-1} \mod p})\\ + (c^{d \mod q-1}\mod q)p({p^{-1} \mod q})) \mod N\end{equation}
  2. Output $m$ as our plaintext.
Notice that the computation of $m$ is in fact $m=c^d \mod N$ carried out using the CRT. The reason is that performing exponentiations modulo the small primes ($p$ and $q$) is faster than doing it modulo the large $N$. That decryption works follows from Fermat's Little Theorem: $c^d =m^{ed}= m^{ed \mod \phi(N)} = m \ (\mod N)$.
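The CRT recombination above can be checked directly on toy numbers (textbook-sized primes, nothing secure):

```python
# Verify that the CRT decryption formula agrees with plain c^d mod N.
p, q, e = 61, 53, 17
N = p * q                        # 3233
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)              # 2753

m = 1234
c = pow(m, e, N)

# The precomputed values from the text
dp, dq = d % (p - 1), d % (q - 1)
qinv_p, pinv_q = pow(q, -1, p), pow(p, -1, q)

# CRT recombination, term by term as in the decryption formula above
m_crt = ((pow(c, dp, p) * q * qinv_p) + (pow(c, dq, q) * p * pinv_q)) % N
assert m_crt == pow(c, d, N) == m
```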

1.2 OAEP[2]
OAEP stands for Optimal Asymmetric Encryption Padding. It is a padding scheme used together with asymmetric encryption (usually RSA). It can bring some randomness to a deterministic encryption scheme. When used with RSA, the combined scheme is proven to be IND-CCA secure.

Let:
  • $f$ be a $k$-bit trapdoor one-way permutation, $f:\{0,1\}^k \rightarrow \{0,1\}^k$
  • $m$ be the $n$-bit message
  • $G$, $H$ be two pseudorandom functions: $G: \{0,1\}^s \rightarrow \{0,1\}^{n+t}$ and $H:\{0,1\}^{n+t} \rightarrow \{0,1\}^s$, where $k = n + t + s$
  • $R$ be an $s$-bit random number: $R \leftarrow \{0,1\}^s$
We compute the $k$-bit ciphertext as follows:
$Encrypt(m) = f_{pk}(\{(m||0^t) \oplus G(R) \}||\{R \oplus H((m||0^t) \oplus G(R))\})$

By using the trapdoor, we can recover the following value:
$f_{sk}(c) = \{(m||0^t) \oplus G(R) \}||\{R \oplus H((m||0^t) \oplus G(R))\}$

  1.  Let the first $n+t$ bits be $T$: $T = (m||0^t) \oplus G(R)$, and the other $s$ bits be $S$: $S = R \oplus H((m||0^t) \oplus G(R))$
  2.  Compute $R$ as $R=H(T) \oplus S$
  3.  Compute $m||0^t = T \oplus G(R)$
  4. Verify that exactly $t$ zero bits follow the $n$-bit message $m$. If so, remove the $t$ zero bits and output $m$; otherwise reject.
In practice, we replace $f_{pk}$ and $f_{sk}$ by RSA encryption and decryption function respectively.
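A byte-oriented toy sketch of the padding and unpadding steps (hash-based stand-ins for $G$ and $H$; the permutation $f$ is omitted, since padding and unpadding sit either side of it):

```python
import hashlib

# Byte-oriented toy parameters: k = n + t + s with n = 24, t = 8, s = 16 bytes.
n, t, s = 24, 8, 16

def G(R):   # {0,1}^s -> {0,1}^{n+t}, a hash-based stand-in
    return hashlib.sha256(b"G" + R).digest()[:n + t]

def H(X):   # {0,1}^{n+t} -> {0,1}^s
    return hashlib.sha256(b"H" + X).digest()[:s]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def oaep_pad(m, R):
    assert len(m) == n and len(R) == s
    T = xor(m + b"\x00" * t, G(R))
    S = xor(R, H(T))
    return T + S            # the k-bit value that would be fed into f_pk

def oaep_unpad(padded):
    T, S = padded[:n + t], padded[n + t:]
    R = xor(H(T), S)
    mz = xor(T, G(R))
    assert mz[n:] == b"\x00" * t       # check the t zero bits
    return mz[:n]

msg = b"a 24-byte message here!!"      # exactly n = 24 bytes
R = b"\x01" * s                        # fixed 'random' seed for the demo
assert oaep_unpad(oaep_pad(msg, R)) == msg
```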


2. ECIES
The Elliptic Curve Integrated Encryption Scheme (ECIES) is a variation of the ElGamal public key encryption scheme based on elliptic curve cryptography (click here to find more about elliptic curves).

For simplicity, we define an Elliptic Curve in the form:

$E: y^2 = x^3 + ax + b$

To further simplify our problem, we only discuss a curve $E$ on a prime field $\mathbb{F}_q$ with a base point $P$ having a prime order $n$. Then we can define a simplified domain parameter: $ D = (q, a, b, P, n)$ where:
  • $q$ is the prime field order. i.e. $q$ is a prime and $x, y, a, b$ are reduced to $\{0, 1, 2, ..., q-1\}$
  • $a,b$ are the coefficients of the curve.
  • $P$ is a point on the curve.
  • $n$ is the prime order of $P$, i.e. $nP = \infty$ and $n$ is prime.
The domain parameters are made public.

ECIES is always used together with a symmetric encryption scheme and a MAC scheme. We denote these by $\{Enc_k(m)=c, Dec_k(c)=m\}$ and $\{MAC_k(m)=t, Verify(t,m)=T/F\}$ respectively.

We also denote $KDF(s_1,  s_2) = (k_{enc}, k_{MAC})$ as the Key Derivation Function which takes two seeds $s_1, s_2$ and outputs a pair of symmetric encryption key and MAC key.

Then we describe the scheme as:

Key Generation:
  1. Pick a random integer $d \in [1, n - 1]$.
  2. Compute a new point $Q = dP$.
  3.  Output $Q$ as the public key and $d$ as the secret key.
Then the encryption of a message $m$ is done as follows:

  1.  Pick a random integer $k \in [1, n-1]$.
  2.  Compute $R=kP, Z=kQ$. If $Z=\infty$ then we restart the process and pick a different $k$.
  3.  Generate $(k_1, k_2) = KDF(x_Z, R)$ where $x_Z$ is the $x$-coordinate of $Z$.
  4. Compute $c = Enc_{k_1}(m)$ and $t=MAC_{k_2}(c)$.
  5. Output $(R, c, t)$ as the ciphertext.
On receiving a ciphertext $(R,c,t)$,

  1.  Verify if $R$ is valid. This can be easily done by substituting $R$ in to the curve.
  2. Compute $Z'=dR$.
  3. Generate $(k'_1, k'_2) = KDF(x_{Z'}, R)$, where $x_{Z'}$ is the $x$-coordinate of $Z'$.
  4. Verify the MAC by calling $Verify(t, c)$.
  5. Decrypt $m' = Dec_{k'_1}(c)$.
  6. Output $m'$ as the plaintext.
Since $Z' = dR = dkP = kQ = Z$, the seeds fed to the KDF are in fact the same; hence the receiver derives the same keys as the sender and can decrypt the message.
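A self-contained toy sketch of the whole scheme over a tiny curve (arbitrary small parameters; SHA-256 stands in for the KDF and MAC, a hash-derived XOR stream for $Enc$/$Dec$, and the base point and its order are found by brute force):

```python
import hashlib

# Toy curve y^2 = x^3 + 2x + 3 over F_97 (arbitrary, insecure parameters).
p_, a, b = 97, 2, 3
O = None                                  # point at infinity

def add(P, Q):
    # Affine-coordinate point addition on the curve
    if P is O: return Q
    if Q is O: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p_ == 0:
        return O
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p_) % p_
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p_) % p_
    x3 = (lam * lam - x1 - x2) % p_
    return (x3, (lam * (x1 - x3) - y1) % p_)

def mul(k, P):
    # Double-and-add scalar multiplication
    R = O
    while k:
        if k & 1:
            R = add(R, P)
        P = add(P, P)
        k >>= 1
    return R

def order(P):
    n, Q = 1, P
    while Q is not O:
        Q = add(Q, P)
        n += 1
    return n

# Brute-force a base point of maximal order (fine at this size).
pts = [(x, y) for x in range(p_) for y in range(1, p_)
       if (y * y - (x ** 3 + a * x + b)) % p_ == 0]
P = max(pts, key=order)
n = order(P)

def kdf(xz, R):
    d = hashlib.sha256(repr((xz, R)).encode()).digest()
    return d[:16], d[16:]                 # (k_enc, k_mac)

def enc(key, m):                          # toy XOR 'cipher': enc == dec
    return bytes(x ^ y for x, y in zip(m, hashlib.sha256(key).digest()))

def mac(key, c):
    return hashlib.sha256(key + c).digest()

d_, k_ = n - 2, n - 3                     # secret key and ephemeral scalar
Q = mul(d_, P)                            # public key

R, Z = mul(k_, P), mul(k_, Q)             # --- encryption ---
k1, k2 = kdf(Z[0], R)
c = enc(k1, b"secret msg")
t = mac(k2, c)

Z2 = mul(d_, R)                           # --- decryption ---
k1d, k2d = kdf(Z2[0], R)
assert mac(k2d, c) == t and enc(k1d, c) == b"secret msg"
```

The final line is exactly the consistency argument above: $Z' = dR = kQ = Z$, so both sides derive the same keys.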

However, my knowledge of ECC is very limited. For those who are interested, you can find more in [4].


Sunday, January 11, 2015

Real World Crypto 2015: Are you there Bob? It's me, Alice.

Social media are all about the three F's: friends, fans and followers. But what if you want to keep your list of F-buddies private? On the final day of the Real World Crypto workshop Ian Goldberg spoke about how we can advertise our presence online to our friends without revealing our relationship graph.

In the 90s, people would fire up ICQ instant messenger and an ear-splitting foghorn sound effect would announce their presence online to everyone on their street. Friends further away relied on notification from the ICQ server, with the server knowing when users are online and who their friends are. In today's more privacy-sensitive climate this is undesirable, as the service operator may be legally compelled to surrender this data to governments on request.

The Dagstuhl Privacy Preserving Presence Protocol P (DP5 - the extra P is for patter) allows a service operator to provide this online presence information without learning the information itself, and so freely comply with search warrants without compromising user privacy.

The DP5 protocol keeps a friendship database and a presence database, and assumes that two parties wishing to communicate already hold a shared key. Time is divided into long-term epochs T over which friendship data can be updated, and short-term epochs t over which status data can be updated.

In each long-term epoch T, Alice evaluates two PRFs at the point T under the key she shares with Bob in order to generate a public identifier ID and an epoch key K. She generates a public key P and encrypts it under the epoch key. This ciphertext C and the ID are stored in the friendship database.

In each short-term epoch t, Alice evaluates a third PRF at the point t under a key derived through hashing her public key P, generating an encryption key k. She then encrypts a status message under key k. This ciphertext c and an identifier derived from P are stored in the presence database.

When Bob wants to know if Alice is online, he evaluates the appropriate PRFs at point T under their shared key to recover the epoch key K and identifier ID. He pulls the corresponding record from the friendship database and decrypts the ciphertext C to recover Alice's public key P. He then computes from P the key k and the identifier for the presence database, pulling the corresponding record and decrypting the ciphertext under k. If decryption is successful then Alice is online and her status message is revealed, otherwise he concludes that Alice is offline.
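The registration and lookup steps can be sketched in a few lines (my own toy rendering: HMAC-SHA256 standing in for the PRFs, one-time XOR for the encryptions, and the PIR layer omitted entirely):

```python
import hashlib, hmac, os

def prf(key, label, epoch):
    # HMAC-SHA256 standing in for the protocol's PRFs
    return hmac.new(key, label + str(epoch).encode(), hashlib.sha256).digest()

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

k_ab = os.urandom(32)     # key Alice and Bob already share
T, t = 42, 1000           # current long- and short-term epochs

# --- Alice, long-term epoch T: register in the friendship database ---
ID = prf(k_ab, b"id", T)
K = prf(k_ab, b"epoch-key", T)
P = os.urandom(32)                      # stand-in for her public key
friendship_db = {ID: xor(P, K)}         # 'encrypt' P under the epoch key

# --- Alice, short-term epoch t: register her status ---
k_enc = prf(hashlib.sha256(P).digest(), b"presence", t)
pres_id = hashlib.sha256(P + str(t).encode()).digest()
presence_db = {pres_id: xor(b"online".ljust(32, b"\x00"), k_enc)}

# --- Bob: recompute ID and K, recover P, then Alice's status ---
P2 = xor(friendship_db[prf(k_ab, b"id", T)], prf(k_ab, b"epoch-key", T))
rec = presence_db[hashlib.sha256(P2 + str(t).encode()).digest()]
status = xor(rec, prf(hashlib.sha256(P2).digest(), b"presence", t))
assert status.rstrip(b"\x00") == b"online"
```

In the real protocol Bob's two database pulls go through Private Information Retrieval, so the server never learns which records he touched.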

This is a simplified explanation of the DP5 protocol, which makes use of Private Information Retrieval to ensure the server is unaware of the nature of the database queries. PIR is a beast, so the protocol doesn't scale too well, but it does provide a private way to tell your friends you're online and let them know your status.

***** X_Br1sT0L_B10gGeR_X is offline *****

Friday, January 9, 2015

Real World Crypto 2015: “Designers: ask not what your implementer can do for you, but what you can do for your implementer” Dan Bernstein, Real World Crypto 2015

Last year at Real World Crypto you may remember my blog post about how the conference went about the business of lamenting the downfall of theoretically secure cryptographic primitives due to practical implementation issues. Well, this year at Real World Crypto things have gone in a similar direction. An example of this is Bernstein's talk on Wednesday entitled “Error-prone cryptographic designs.” Much of the talk has already been described in a previous post by David B here, which helpfully gives us the “Bernstein principle” which I'll follow up on, perhaps from a more applied point of view.

As the title suggests the talk was about error-prone cryptography, but not as you might expect. The talk was not a frustrated outburst at how the practical implementation side of things was letting the side down and needed to get its act together, the tone of which you may be familiar with from other talks with similar titles. The talk did contain many of the ingredients of this sort of talk: there was a look at how the problems with cryptosystems seem always to come from the implementation of secure primitives, not the primitives themselves, backed up by a number of horror stories. But this talk had one major twist.

The main point of the presentation was made near the beginning in one of the horror stories (see blog post), when he referred to the breaking of the encryption used by the Sony PlayStation. Relating to this he observed how the blame for the attack got distributed. The attack was a practical attack, and so naturally the designers of the primitives remarked how their secure designs had been made vulnerable by the terrible implementers who seem to keep getting things wrong and making all their hard work meaningless. But wait a minute, asked the implementers: if we keep getting your designs wrong, perhaps the problem is with you? Perhaps the designs of the primitives are so hard to implement properly that the designers are really to blame for not taking implementation practicalities seriously enough in their design choices.
So who was to blame for this and the many other security slip-ups that have occurred through implementation errors? The answer to the question remains unresolved, but the fact that the discussion is taking place throws a different kind of light on the practice of designing cryptosystems. The current system of 'designer: make something secure; implementer: implement the design securely' may well be where many of the problems are coming from, and should be replaced by something more like 'designer: make something that is secure and easy to implement; implementer: you can't really go wrong!'

Although unlikely to have drawn the agreement of the whole audience, the talk didn't have the feel of a dig at designers, but an appeal to them to wake up to their responsibility to the implementers by making designs that are easy to implement. Nor did it have the air of trying to get implementers off the hook; rather, it sought to ask why, in a world where secure systems break because of practical attacks against implementations rather than against the security of the primitives, the primitives are never singled out for blame for being hard to implement, rather than the people given the task of implementing them.
It's true that designers have a hard problem on their hands making sure what they design is secure, but, often coming from a more mathematical, theoretical background, have they tended to overlook the practical aspects of what they are concocting? Many designers are highly skilled in the art of theoretical design, but is this at the expense of not knowing the real engineering world in which their designs must live as well as they should?
The talk examined these and other similar questions and was summarised with an appeal to designers to “think of the children, think of the implementers” and ended with the words given as the title for this post.

Real World Crypto 2015: CAESAR came, you see, but who will win?

Cryptographers are humans, and humans are competitive, which might explain the popularity of cryptographic competitions. The most visible competition running at this point in time is CAESAR, which hopes to recommend a portfolio of authenticated encryption schemes. Elena Andreeva gave a wonderful talk, giving a very brief overview of modern thinking on what security and functionality to aim for, and how different schemes try to achieve it.

The traditional lesson that encryption should be probabilistic to be secure has been replaced by a nonce-based view, with further granularity depending on the security offered when nonces are somehow reused. Even prior to CAESAR an increasing number of dedicated, nonce-based schemes had been suggested, several of which were entered into the CAESAR competition. Taking nonces into account also changes how one would go about using a MAC to add authenticity to a mode of operation such as CBC. This problem, known as generic composition, has recently been revisited by Namprempre, Rogaway, and Shrimpton.

There has also been increased interest in the security ramifications of using online encryption and decryption. For efficiency purposes (e.g. to enable a single pass over the data), it can be beneficial to output plaintext as part of the decryption process before the ciphertext has been verified. At last year's Asiacrypt, Elena and her coauthors established a framework to capture the release of unverified plaintext (RUP). Only a handful of CAESAR candidates achieve this security notion.

Finally, Elena went into more detail of the current state of play of the CAESAR competition. Given the large number of candidates (57) and the large number of criteria based on which one can classify an authenticated encryption scheme, the information provided by the CAESAR zoo can be hard to interpret. Luckily for us, Elena has created a wonderful, interactive visualization tool to zoom in on whichever property takes our current fancy (for instance, geographical spread of submissions). The tool is available from her home page so have a go at it yourself!

The first round of CAESAR is drawing to a close and it is expected that next month DJB (J for Julius?) will announce which candidates the CAESAR committee has progressed to round 2. Eventually the winner or winners will be crowned in another two years' time. After her presentation, there was a question from the audience as to whether this winner will be fast-tracked into the TLS standard.

Thursday, January 8, 2015

52 Things: Number 14: What is a cryptographic pairing?

This is the latest in a series of blog posts to address the list of '52 Things Every PhD Student Should Know' to do Cryptography: a set of questions compiled to give PhD candidates a sense of what they should know by the end of their first year. We build on the previous few weeks by introducing the notion of a pairing.

Pairing definition: Given three cyclic groups $\mathbb{G}_1,\mathbb{G}_2,\mathbb{G}_3$ of order $q$ with generators $g_1,g_2,g_3$ respectively, we say a function $e:\mathbb{G}_1\times\mathbb{G}_2\rightarrow\mathbb{G}_3$ is a pairing if the following hold:

  1. [bilinearity] $\forall A,B\in\mathbb{G}_1,C,D\in\mathbb{G}_2$: $e(A+B,C)=e(A,C)\cdot e(B,C)$ and $e(A,C+D)=e(A,C)\cdot e(A,D)$
  2. [non-degeneracy] $e(g_1,g_2)\neq 1$
  3. [efficiency] $e$ is efficiently computable

Types of pairing: There are three types of pairings, described below:

  1. $\mathbb{G}_1=\mathbb{G}_2$
  2. $\mathbb{G}_1\neq\mathbb{G}_2$ but there is an efficiently computable isomorphism from $\mathbb{G}_2$ to $\mathbb{G}_1$ that maps the generator $g_2$ to $g_1$
  3. $\mathbb{G}_1\neq\mathbb{G}_2$ and there is no efficiently computable isomorphism

The last two are asymmetric pairings while the first is a symmetric pairing.

A warning on pairings: It feels like I always have a warning section in my blogs, but these are important and I feel should be included. For type 1 pairings (and, it can be shown, similarly for type 2; this doesn't mean type 3 are safe), the DDH problem (given $g,g^x,g^y,g^z$, decide whether $z=x\cdot y$) is easy, since you can check whether $e(g^x,g^y)\overset{?}{=}e(g^z,g)$. Another thing to be careful of is that it is possible to construct a pairing that satisfies everything in the definition above but is of no cryptographic use.
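To make the definition and the warning concrete, here is a toy symmetric (type 1) pairing in Python; all parameters here are my own illustrative choices and far too small for any real use. Take $\mathbb{G}_1=\mathbb{G}_2=(\mathbb{Z}_q,+)$ with generator $1$, let $\mathbb{G}_3$ be the order-$q$ subgroup of $\mathbb{Z}_p^*$, and set $e(a,b)=g_3^{ab}$. It satisfies all three properties, yet it also demonstrates the warning: discrete logs in $(\mathbb{Z}_q,+)$ are trivial, so this pairing is cryptographically useless.

```python
# Toy type-1 pairing: G1 = G2 = (Z_q, +) with generator 1, and
# G3 = the order-q subgroup of Z_p^* where p = 2q + 1.
# Illustrative only: the parameters are tiny and the scheme is insecure.
q = 1019           # prime group order
p = 2 * q + 1      # 2039, also prime
g3 = pow(2, 2, p)  # a square, hence a generator of the order-q subgroup

def e(a, b):
    """The map e(a, b) = g3^(a*b) mod p from Z_q x Z_q to G3."""
    return pow(g3, (a * b) % q, p)

A, B, C = 123, 456, 789
# bilinearity: e(A+B, C) = e(A, C) * e(B, C)
assert e((A + B) % q, C) == (e(A, C) * e(B, C)) % p
# non-degeneracy: e(g1, g2) = e(1, 1) = g3 != 1
assert e(1, 1) != 1
# the DDH warning in action: given "g^x, g^y, g^z", test whether z = x*y
x, y = 77, 91
assert e(x, y) == e((x * y) % q, 1)
```

The last assertion is exactly the DDH check from the warning, written additively.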

Uses of pairings: Pairings have a wide range of uses, including cryptanalysis, Identity-Based Encryption, Attribute-Based Encryption and Leakage-Resilient Cryptography.

Instantiation of pairings: The only way we know how to instantiate pairings is over elliptic curves (see the last few blogs in the 52 Things series), and this is another reason why elliptic curves have become so desirable in cryptography. More recently, multilinear maps have appeared in the literature, which work over different groups. However, that is a story for another time...

Real World Crypto 2015: Error-prone cryptographic designs (djb)

Some of the really important ideas can be summarised in one sentence. I'll coin the name "Bernstein principle" for the following:

Do not blame the (crypto) implementor for a mistake that the designer could have avoided in the first place.

Dan Bernstein gave a talk on several crypto horror stories, including DSA and HTTPS, which are often given as examples of "bad implementations of otherwise good crypto". Instead, we should see designs that have a tendency to be hard to implement or use as simply bad crypto. Nowadays, constant-time and side-channel-resistant ciphers are state of the art; Bernstein told us how some of the problems were identified much earlier but ignored, with the inevitable mistakes being blamed on the implementations.

Another important point: a primitive design tends to get a lot of review (think of the AES competition), but a primitive implementation less so; a protocol design usually even less (reviewing security "proofs" is quite officially out of scope for most conference programme committees these days); and once we get to a protocol implementation, there's very little interest around. Put another way, to take Bernstein's example, which of these got the least review: (a) discrete logarithms, (b) ECDSA using discrete logarithms, or (c) Sony's ECDSA implementation for PlayStation code-signing? And which one was horribly broken?

I would personally take the Bernstein principle further and say we should never, ever blame a crypto user for anything. Yes, someone who reuses passwords, and picks weak ones in the first place, isn't going to get much security, but take a deep breath: do we really expect the average person to memorise two dozen distinct high-security passwords (and change them every six months)? I'd argue that if any security-related design is widely misused by users, even in "stupid" ways, then that is enough evidence to call it a bad design.

Real World Crypto 2015: 'One of Our Algorithms is Missing'

Graham Steel from Cryptosense concluded Day 1 of Real World Crypto yesterday with a short talk on the state of APIs as of the end of 2014. It is well known that standardisation is a lengthy (and sometimes painful) process, and the story is no different for APIs. An infamous example, which served as the main content of Graham's talk, is PKCS#11, which describes 'Cryptoki', a key management API that is widely used in practice and typically interacts with a Hardware Security Module or some kind of security token. Like the other PKCS (Public Key Cryptography Standards) documents, PKCS#11 was originally written by RSA Labs. Until very recently, the latest edition dated back to 2004! Sadly, this is not because it was a flawless standard that had stood the test of time. Instead, the key management aspects of Cryptoki have been attacked in various ingenious ways (in particular, using key wrapping to export and re-import a key with new attributes that contradict what it was supposed to be used for), including in work by Graham himself, and no one seems to be sure how such attacks can be prevented without sacrificing a great deal of useful functionality.

The slight silver lining on this black cloud is that OASIS (the Organisation for the Advancement of Structured Information Standards) has taken up the mantle of improving PKCS#11, and version 2.4 of the standard was approved in December 2014. Their writing process is highly open, with the whole development of the new standard described on their website, which is great to hear. Graham himself worked on the new document and reassured the audience yesterday that lots of old, bad cryptographic algorithms have been removed and new useful algorithms like CMAC and GMAC are now supported. Unfortunately though, key management is still a problem. This is both exciting and worrying for me in particular, as finding a way to do secure key management is a pretty good description of my PhD project. It's exciting that there's plenty of new work to do, but worrying that lots of very smart people have tried for many years and found little success.

OASIS are also working on standardising WebCrypto, a JavaScript API designed by the World Wide Web Consortium and discussed in an earlier talk yesterday by Harry Halpin. WebCrypto performs cryptography directly in your browser and was designed in another pleasingly open process which you can read about here. Where Graham's work at Cryptosense intersects with this project is that they have built an extension for WebCrypto which evaluates the code running in your browser and checks that it's doing what it's supposed to do. He admitted, though, that in WebCrypto (as in just about everything else, it would seem to me!) secure key management is still hard to get right.

So to summarise: standardisation of APIs has had a bit of a boost in the last year which is good news, but there are still big open problems in key management to inspire/terrify PhD students like me.

Real World Crypto 2015: Bitcoin and Beyond: What Virtual Currencies Really Are Useful For

One year after Real World Crypto opening with Bitcoin, there was only one talk to follow up. Sarah Meiklejohn talked about the many ways in which Bitcoin has failed, but also about a new practical application for Bitcoin or any similar system. She started by outlining the four categories of digital currencies according to the European Central Bank:
  1. Closed currencies: No official exchange with central bank currencies exists in either direction, even though black markets might exist. Examples are most currencies used in online games.
  2. One-direction currencies: They can be bought using central bank currencies, but they can't be sold. Examples are airline reward miles.
  3. Convertible centralized currencies: They are convertible both ways and governed by a single authority. An example is the Linden Dollar of Second Life.
  4. Convertible decentralized currencies: There is no central authority. The most prominent example is Bitcoin.
From a cryptographer's point of view only the last category is interesting, because of the mechanisms needed to establish consensus. There are a few requirements for such a decentralized system: the history has to be updateable, globally visible, and immutable.

Sarah then examined the history of Bitcoin using the hype cycle paradigm, which consists of five phases: technology trigger, peak of inflated expectations, trough of disillusionment, slope of enlightenment, and plateau of productivity. Bitcoin was proposed in 2008, and the block chain started in 2009. One year later, Mt. Gox began trading Bitcoin, and the first mining pool appeared. In 2011, Silk Road emerged, and the price exceeded 1 dollar for the first time. The subsequent general interest sparked the involvement of regulatory authorities and the creation of many other currencies, called altcoins. Finally, the price of Bitcoin reached an all-time high of over 1000 dollars in December 2013. With respect to the hype cycle, the speaker claimed that Bitcoin reached the trough of disillusionment in 2014.

The increased interest also revealed a few technical weaknesses of Bitcoin and its deployment, from its theory (minority pools can take over control) to implementation issues. The unproven claim of anonymity was also disputed by a paper. On the non-technical side, Bitcoin has lost two thirds of its dollar value and has been banned in several countries, while centralized payment systems such as Apple Pay look more promising at the moment.

After all this disillusionment, Sarah turned the attention to a new application of Bitcoin, or of any decentralized system achieving the aforementioned properties. Essentially, such a system can be used as a notary public. For example, one can include a hash of a digital object in the block chain for timestamping, that is, to prove knowledge of it at a certain point in time without revealing it at that time. Similarly, the block chain could be used to store the ownership history of a piece of land or any other object; this could in fact replace a land registry. The speaker pointed out that anonymity and the value of the currency are less crucial for this application.
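The notary idea boils down to publishing a hash. A minimal sketch (the helper name and document strings are mine, not from the talk):

```python
import hashlib

def commitment(document: bytes) -> str:
    # Embedding this hash in the block chain timestamps the document:
    # it proves knowledge at that point in time without revealing it.
    return hashlib.sha256(document).hexdigest()

deed = b"Alice owns plot 42 of Blackacre"
h = commitment(deed)
# Later, revealing the document lets anyone check it against the
# timestamped hash, while a different document will not match.
assert commitment(deed) == h
assert commitment(b"Mallory owns plot 42 of Blackacre") != h
```

Because the hash reveals nothing about the document, the commitment can sit in a public ledger indefinitely before being opened.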

The talk ended with a call for cryptographers to increase the usability, better enforce decentralization, and audit the software for virtual currencies.

Real World Crypto 2015: Password Hashing according to Facebook

So I am going to start in the middle of the talk, describing the part that most people will find most interesting, before looping back to discuss the presentation in order. Given below, and received with a flurry of excitement during the presentation itself (lots of camera phones appeared for this slide), is the way Facebook hash their passwords:

1)$\$$cur  = 'plaintext'
2)$\$$cur  = md5($\$$cur)
3)$\$$salt = randbytes(20)
4)$\$$cur  = hmac_sha1($\$$cur, $\$$salt)
5)$\$$cur  = cryptoservice::hmac($\$$cur)
6)        [= hmac_sha256($\$$cur, $\$$secret)]
7)$\$$cur  = scrypt($\$$cur, $\$$salt)
8)$\$$cur  = hmac_sha256($\$$cur, $\$$salt)

OK, so why do it like this? Well, while Facebook have the usual security considerations that we all have, they also have one that probably only they can claim: having to deal efficiently with over a billion users! I will now try to explain why each of these lines is there.

1) This just takes in the plaintext (the password) and is clearly required.

2) The md5 hash: this is a pretty standard thing to do, or at least was about 10 years ago. So why is it still here? The standard way to change it would be to keep two tables side by side, one with the md5 hashes and one with whatever the new solution is; when a user logs in for the first time since the change, you check the md5 hash and then store the new one for future use. When all users have done this you can delete the md5 hashes and you are done. With a small number of users this seems feasible, but with a billion users that is a lot of extra data to store, and it could take an (extremely) long time before everyone has transferred to the new system. Hence this line is still here, and the remaining lines make the system more secure. This solution makes more sense at this scale because the whole table can be upgraded in place, without having to wait for each user to log in first, and within a single table.
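The point of the onion construction is that the outer layers can be applied to the stored md5 digests directly, without knowing any passwords. A rough sketch in Python of that idea, omitting the remote cryptoservice step of lines 5-6 (the function names and scrypt parameters here are my own choices, not Facebook's):

```python
import hashlib, hmac, os

def hash_password(plaintext: str, salt: bytes) -> bytes:
    # Full chain for a fresh login: legacy md5, then the newer layers on top.
    cur = hashlib.md5(plaintext.encode()).hexdigest()
    cur = hmac.new(salt, cur.encode(), hashlib.sha1).digest()
    cur = hashlib.scrypt(cur, salt=salt, n=2**14, r=8, p=1)
    return hmac.new(salt, cur, hashlib.sha256).digest()

def upgrade_stored_md5(md5_hex: str, salt: bytes) -> bytes:
    # Bulk migration: wrap an existing md5 digest without the password.
    cur = hmac.new(salt, md5_hex.encode(), hashlib.sha1).digest()
    cur = hashlib.scrypt(cur, salt=salt, n=2**14, r=8, p=1)
    return hmac.new(salt, cur, hashlib.sha256).digest()

salt = os.urandom(20)
legacy = hashlib.md5(b"correct horse").hexdigest()  # the old stored value
# The upgraded entry verifies against the full chain on the next login.
assert upgrade_stored_md5(legacy, salt) == hash_password("correct horse", salt)
```

The assertion is the whole trick: the offline bulk upgrade and the online login path land on the same value, so no user ever has to log in for the migration to complete.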

Interestingly, in the following talk, "Life of a password", we learnt that LinkedIn do something different here (almost the exact opposite, in fact). LinkedIn do change what is being used rather than adding layers, but they achieve this by having some of the layers be encryption, which (unlike a hash) is invertible; the table can thus be updated by inverting the encryption step and adding the new layers without the user having to log in. One advantage of this is that it gives them the ability to timestamp the password database: if the key for the encryption is changed (say) every day, then in the event of a database leak LinkedIn can tell on which day the leak occurred, making it easier to work out the cause.

To me the interesting question is: can md5 collisions be used to log into other users' Facebook accounts? Collisions are known in md5, so if I could get a user to set their password to one half of a colliding pair, I should be able to log in with the other half. However, if I can make a user set their password to something, I may as well just log in with whatever I made them set it to! The real question, then, is this: while collision resistance of md5 is broken, its preimage resistance is not, and if preimage resistance ever falls there may be more trouble...

3-4) This is the standard salt-and-hash step. The interesting point here is that 160 bits of salt are used, which seems like a lot. However, it was explained that for every Facebook user from the beginning of time (or February 2004, to be precise) until now to have a unique salt, the salt would only need to be about 32 bits long. But since salts are assigned randomly (as they should be), you need to take into account the birthday bound on the probability of collisions, so you need around 64 bits of salt. The other 100-ish bits (while seeming a bit on the large side) allow for future-proofing for things like new users and multiple password changes (people tend to forget their passwords...).
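These back-of-the-envelope numbers can be checked with the usual birthday approximation: for $n$ random $b$-bit salts, the collision probability is roughly $1 - e^{-n^2/2^{b+1}}$. A quick sketch:

```python
import math

def collision_prob(n_users: float, salt_bits: int) -> float:
    # Birthday approximation: P(collision) ~ 1 - exp(-n^2 / 2^(b+1)).
    return 1.0 - math.exp(-n_users**2 / 2.0**(salt_bits + 1))

billion = 1e9
assert collision_prob(billion, 32) > 0.999   # 32-bit salts: collision certain
assert collision_prob(billion, 64) < 0.05    # 64 bits: down to a few percent
assert collision_prob(billion, 160) < 1e-12  # 160 bits: effectively impossible
```

So 32 bits are enough to index a billion users but nowhere near enough to avoid random collisions, while 160 bits leave a huge safety margin.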

5-8) As you probably all know, hashes are (by design) fast, so the goal here is to slow down the brute-forcing of a user's password. The interesting part is lines 5-6, which call the cryptoservice. This sends the value over to a Facebook service that hashes in a secret, which has two advantages: firstly, it means that passwords cannot be brute-forced in offline attacks, and secondly, it allows Facebook to monitor password hashing attempts and to block any suspicious-looking activity. The scrypt on line 7 is used to slow down the local computation, while the hmac_sha256 on line 8 is used to shrink the size of the output so that the password database stays manageable (after all, even if each entry in the table is tiny, with a billion users it will still be a very large table; if each entry grows by a single bit, the whole table grows by a gigabit!).

Various points from the rest of the talk:
Authentication for standard websites tends to be "something you know" (your password), while if you are security-conscious you can turn on two-factor authentication to add "something you own" (typically your phone), but Facebook have started including other factors as well when you log in. One thing they now consider is where you are: if I always log on from Bristol but five minutes later I log on from Hawaii, then there is probably something wrong and further authentication checks should be made. Of course, now that Tor is becoming more widespread this could just be Tor doing its thing, and I imagine a conversation between Tor and Facebook will be on the cards. The other check they do (which again can be seen as "something you own") is "have they logged on from this browser before?": if they have, then it is (more) likely to be the person who logged in last time, but if it is a new device then further authentication should take place, since it is less likely to be the intended user.

We have all had issues with a touchscreen phone before, especially with the caps lock on the device (auto-capitalisation has been the bane of my existence when travelling with my phone). Facebook have considered this, and will not only check your password as typed, but will also try the password with the case of the first letter changed (because phones like to auto-capitalise the first letter) and with the case of all letters switched (to counter the caps lock being toggled). For example, if my password was passWORD123, they would check this as well as PassWORD123 and PASSword123. In a follow-up discussion I learnt that (combined) these two issues appear on about 3% of smartphones, so it is worth doing. I asked if this was hinting that Facebook will start checking for "common mistypes" when you type your password (to me this would be a very bad idea, as it would reduce the password entropy significantly) but was assured this will not be done.
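As a sketch, the variant checking might look something like this; the logic is my reconstruction of what was described in the talk, not Facebook's actual code:

```python
def candidate_passwords(typed: str):
    # The password as typed, plus the two variants described in the talk.
    yield typed
    if typed:
        # the phone auto-capitalised (or failed to capitalise) the first letter
        yield typed[0].swapcase() + typed[1:]
    # caps lock was toggled for the whole password
    yield typed.swapcase()

assert list(candidate_passwords("passWORD123")) == [
    "passWORD123",  # as typed
    "PassWORD123",  # first letter's case flipped
    "PASSword123",  # every letter's case flipped
]
```

Each candidate would then be run through the full hashing chain above; three hash evaluations per login is a negligible cost next to the usability win.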

The final thing I want to mention (which came as a surprise to me) is about password dumps. We hear on the news several times a year about a website being hacked and the usernames and password hashes being published online, but realistically small dumps happen multiple times a month. Facebook keep an eye on these dumps for you, and if they spot your Facebook username and password amongst them they automatically notify you on your next log-in and ask you to change your password. I feel this is a particularly nice feature. They also have the ability to manually notify users if their username appeared in a big data leak for a different site (even when the password isn't the same as your Facebook one), but this is more of a discretionary thing than the automatic check for when it matches your Facebook password.

To conclude:
I went into this talk not knowing what to expect (it was still TBA on the schedule), but I thoroughly enjoyed it, learnt a lot, and would recommend listening to it if you are ever lucky enough to be given the opportunity.

Real World Crypto 2015: Credit Cards and standards

Terence Spies of Voltage Security provided Thursday's second talk, discussing the regulatory agencies and standards for the credit card sector. There are roughly 144 billion credit cards in use currently, handling $\$$3.6T of transactions each year, of which around $\$$12B are believed to be fraudulent. With such a valuable market then, there is a clear need for security, and the data we care most about is the credit card number, known as the PAN in the industry.

As users, you or I certainly care about our credit card details not being stolen or misused, but what does the company want? Well, any security tool must provide a qualified feature that can be presented to users or regulators, and so, unsurprisingly, the main drivers for change and improvement are the standardisation bodies. Alongside national governments, the lead organisations in the sector are X9, PCI-SSC and EMV, run respectively by ANSI, the industry, and a collaboration of credit card companies led by Europay, Mastercard and Visa. Not conforming to the appropriate standards can lead to a company being sanctioned, having its transaction fees increased, or even being disconnected from the payment network.

One of the key points that makes this area so interesting is that the hardware is entrenched, a situation Terence referred to as "Brownfield Research". As such, any new schemes or techniques must be compatible with hardware up to 50 years old. These devices often have very specific (and unchangeable) database fields. In particular, this means that while we are no longer happy storing credit card numbers in the clear, the encrypted version must still look like a credit card number, or it would not be possible to store it in the database. This is the driving force behind the use of Format Preserving Encryption (FPE), an encryption mechanism that ensures the message and ciphertext spaces are the same (roughly speaking, by wrapping a secure encryption scheme in an efficient de/encoding scheme).
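One standard way to build FPE is "cycle walking": take a pseudorandom permutation of a slightly larger space and iterate it until the output lands back in the message space. A toy sketch of that idea, using a Feistel network built from HMAC-SHA256 as a stand-in for a proper construction such as NIST's FF1 (the key, round count, and parameters are all mine; this is not production code):

```python
import hashlib, hmac

KEY = b"demo key - not for real use"
SPACE = 10 ** 16                 # message space: 16-digit card numbers
ROUNDS = 10

def feistel(x: int) -> int:
    # A keyed pseudorandom permutation of [0, 2^64): a balanced Feistel
    # network with HMAC-SHA256 as the round function.
    L, R = x >> 32, x & 0xFFFFFFFF
    for i in range(ROUNDS):
        f = hmac.new(KEY, bytes([i]) + R.to_bytes(4, "big"), hashlib.sha256).digest()
        L, R = R, L ^ int.from_bytes(f[:4], "big")
    return (L << 32) | R

def feistel_inv(y: int) -> int:
    # Undo the rounds in reverse order.
    L, R = y >> 32, y & 0xFFFFFFFF
    for i in reversed(range(ROUNDS)):
        f = hmac.new(KEY, bytes([i]) + L.to_bytes(4, "big"), hashlib.sha256).digest()
        L, R = R ^ int.from_bytes(f[:4], "big"), L
    return (L << 32) | R

def fpe_encrypt(pan: int) -> int:
    # Cycle walking: iterate until we land back among 16-digit numbers;
    # the restriction reached this way is a permutation of that space.
    y = feistel(pan)
    while y >= SPACE:
        y = feistel(y)
    return y

def fpe_decrypt(ct: int) -> int:
    y = feistel_inv(ct)
    while y >= SPACE:
        y = feistel_inv(y)
    return y

pan = 4929123456789012
ct = fpe_encrypt(pan)
assert 0 <= ct < SPACE           # the ciphertext still "looks like" a PAN
assert fpe_decrypt(ct) == pan
```

Since $2^{64}/10^{16}$ is only a couple of thousand, the walk terminates quickly in expectation, and decryption simply walks the inverse permutation back until it re-enters the space.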

The more interesting problem (from a technical point of view) is: what can we do if we don't actually need to store the value itself? This is certainly appealing from a security perspective, since if the card number is not stored, it cannot be stolen if the database is compromised. For example, a company's customer database might only be interested in what a particular card user has bought in the past. This requires just a unique identifier for the card, which need not be the card number itself. Preferably, possession of this identifier (the token) should not allow one to recover the card number. Initially this appears trivial (use a hash function), but that misses an important issue: the space of credit card numbers is (by cryptographic standards) very small. As a result, the security guarantees of modern symmetric primitives are often not relevant.

The upshot of this is that we require a more specialised primitive. In contrast to traditional block ciphers, which are designed to be efficiently implementable constructions expanding a small key into a much larger permutation, we wish to use a secret static lookup table, which can be thought of as simply a very large key. These objects (sometimes referred to as big-key encryption schemes) are so large that, even with such small message/ciphertext spaces, it is impractical for the adversary to learn their content. Combining this with the tweak techniques of [LST12], the presentation explained, forms a viable tokenisation scheme.
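The "big key" idea can be caricatured in a few lines: the key is a secret, uniformly random permutation of the whole (small) message space, stored as a table. A toy over a 4-digit space (real schemes are far more careful and, as noted, add tweaks):

```python
import secrets

DIGITS = 4
domain = list(range(10 ** DIGITS))   # tiny stand-in for the PAN space

# The "big key": a secret random permutation of the domain, stored whole.
table = domain[:]
secrets.SystemRandom().shuffle(table)
inverse = {token: pan for pan, token in enumerate(table)}

def tokenise(pan: int) -> int:
    return table[pan]

def detokenise(token: int) -> int:
    # Only the holder of the table can map a token back to a card number.
    return inverse[token]

pan = 1234
token = tokenise(pan)
assert 0 <= token < 10 ** DIGITS     # the token is format-preserving too
assert detokenise(token) == pan
```

Unlike a small-keyed cipher over such a tiny domain, there is nothing to brute-force here short of stealing the table itself, which is exactly the point.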

Once we have a viable tokenisation method, we can extend it to create restricted tokens that can be used instead of a card number, but only under certain restricted circumstances. For example, when visiting a hotel, rather than authorising your credit card for the institution to make an arbitrary transaction, it is (theoretically) possible to generate a specific token that only the hotel could use, and only within a certain time window. Indeed, ApplePay works in a similar manner, whereby a token is generated that is tied to your phone. Token revocation is possible (for example, if your phone is stolen), but this is implemented in a decidedly non-cryptographic manner: the token is simply deleted from the list of valid tokens.

Overall then, this was a succinct introduction to the state of the payments industry, outlining some of the challenges and solutions that arise in this most real-world of crypto problems.

Friday, January 2, 2015

52 Things: Number 13: Outline the use and advantages of projective point representation.

This is the latest in a series of blog posts to address the list of '52 Things Every PhD Student Should Know' to do Cryptography: a set of questions compiled to give PhD candidates a sense of what they should know by the end of their first year. We continue the Mathematical Background section by outlining the use and advantages of projective point representation.

TLDR - Point doubling and addition on elliptic curves require a field inversion and several multiplications. We consider a field $K$ (of characteristic neither $2$ nor $3$). Since an inversion in $K$ is significantly more expensive than a multiplication, it is generally more efficient to use projective point coordinates to compute these operations.

What is a projective point?

The projective form of the Weierstrass equation (see Guy's blog last week) for an elliptic curve $E$ is an alternative but equivalent way of defining a point. We do not gain any additional functionality; in fact, we can define an equivalence relation. Let $c$ and $d$ be positive integers and let $K$ be a field (of characteristic neither $2$ nor $3$); then the equivalence relation $\sim$ on the set $K^{3}\backslash\{(0,0,0)\}$ of nonzero triples over $K$ is

$(X_1,Y_1,Z_1) \sim (X_2,Y_2,Z_2)$ if $X_1 = \lambda^c X_2,Y_1 = \lambda^d Y_2,Z_1= \lambda Z_2$ for some $\lambda \in K^*$.

The equivalence class containing $(X,Y,Z) \in K^3 \backslash \{(0,0,0)\}$ is

$(X:Y:Z) = \{(\lambda^c X, \lambda^d Y, \lambda Z) : \lambda \in K^*\}$.

We now have the projective point $(X:Y:Z)$ and its representation $(X,Y,Z)$.

Various projective coordinate systems have been proposed in the literature but for the purpose of this blog we consider the Jacobian coordinate system. In this representation, the projective point $(X:Y:Z)$ where $Z \not= 0$ corresponds to the affine point $(\frac{X}{Z^2}, \frac{Y}{Z^3})$.

What are the advantages to using projective point representation?

Using projective point representation to compute point addition and doubling results in fewer field inversions, at the cost of more multiplications, in comparison to working with affine coordinates. This can be seen by rewriting the affine addition and doubling formulas over the projective coordinates: the resulting equations clear the denominators and hence remove the field inversion. At face value this doesn't seem like a great achievement; however, evaluating a field inversion is significantly more computationally expensive than a multiplication on current computer systems. To give an idea, the operation counts for affine vs Jacobian coordinates are:

Format     Doubling      Addition
Affine     1I, 2M, 2S    1I, 2M, 1S
Jacobian   4M, 4S        12M, 4S

Operation counts for point addition and doubling on $y^2 = x^3 - 3x + b$. I = inversion, M = multiplication, S = squaring.

Exact performance figures are tricky, as they depend on the underlying platform and implementation. However, as long as field inversions remain significantly more expensive than multiplications, using affine coordinates will incur a high performance penalty compared to projective points.
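As an illustration, here is a check, over a toy curve with parameters chosen purely for the example, that Jacobian doubling (no inversion) agrees with affine doubling (one inversion):

```python
# Toy curve y^2 = x^3 + a*x + b over F_p; tiny illustrative parameters.
p, a, b = 97, 2, 3

def affine_double(x, y):
    # One field inversion (the pow(..., -1, p)) plus a few multiplications.
    lam = (3 * x * x + a) * pow(2 * y, -1, p) % p
    x3 = (lam * lam - 2 * x) % p
    y3 = (lam * (x - x3) - y) % p
    return x3, y3

def jacobian_double(X, Y, Z):
    # No inversion at all: only multiplications and squarings.
    S = 4 * X * Y * Y % p
    M = (3 * X * X + a * pow(Z, 4, p)) % p
    X3 = (M * M - 2 * S) % p
    Y3 = (M * (S - X3) - 8 * pow(Y, 4, p)) % p
    Z3 = 2 * Y * Z % p
    return X3, Y3, Z3

def to_affine(X, Y, Z):
    # (X : Y : Z) corresponds to the affine point (X / Z^2, Y / Z^3).
    zi = pow(Z, -1, p)
    return X * zi * zi % p, Y * pow(zi, 3, p) % p

P = (3, 6)  # on the curve: 6^2 = 3^3 + 2*3 + 3 = 36 (mod 97)
assert (6 * 6) % p == (3**3 + a * 3 + b) % p
assert affine_double(*P) == to_affine(*jacobian_double(P[0], P[1], 1))
```

The pay-off is that the inversion is deferred: a whole scalar multiplication can stay in Jacobian coordinates throughout and convert back to affine once at the very end, paying for a single inversion rather than one per doubling or addition.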

Any drawbacks?

Not that I know of (although I wouldn't consider myself an expert in this field). As ever, there is always scope to cock up the implementation and potentially leak bits of the underlying discrete logs through $Z$ [1].