Saturday, December 5, 2015

Secure Computation from Millionaire

In the last session of AsiaCrypt 2015, Muthuramakrishnan Venkitasubramaniam presented his paper "Secure Computation from Millionaire", joint work with abhi shelat. Given a function f, the standard way to perform secure computation for f consists of two different steps: in the first, f is transformed into either an arithmetic/boolean circuit or a RAM program; in the second, a generic secure computation protocol (for example Yao/GMW based, or information theoretic) is applied to perform the actual evaluation. In his talk Muthuramakrishnan described a different approach for designing secure computation protocols.

The method he described appeared first at Eurocrypt 2004, in the work of Aggarwal, Mishra and Pinkas, where it was used for computing the median: Alice holds a private dataset S_A and Bob a private dataset S_B, each party computes the median of its own dataset, say m_A for Alice and m_B for Bob, and they jointly compare them. If for example m_A < m_B, then Alice deletes all the values in S_A smaller than m_A and Bob deletes all the values in his dataset bigger than m_B. Thereafter, they recursively repeat this step on the smaller datasets. By replacing each comparison with a small secure protocol, it is possible to prove security of the overall protocol in the semi-honest model. This technique was later generalized by Brickell and Shmatikov and applied to solve the shortest path problem.
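
To see the reduction concretely, here is a small sketch (plain Python, written entirely in the clear with both datasets visible, and assuming both lists are sorted and have equal power-of-two length) of the recursive median computation; in the actual protocol each comparison of the two local medians would be replaced by a millionaire sub-protocol:

def joint_median(a, b):
    """Lower median of the union of two sorted, equal-length lists a and b.

    Each round compares only the two local medians, mirroring the
    Aggarwal-Mishra-Pinkas protocol in the clear; in the secure version
    each comparison would be done by a millionaire sub-protocol.
    Assumes len(a) == len(b) and both lengths are powers of two.
    """
    while len(a) > 1:
        half = len(a) // 2
        m_a, m_b = a[half - 1], b[half - 1]   # local (lower) medians
        if m_a < m_b:
            a, b = a[half:], b[:half]         # Alice drops her lower half, Bob his upper half
        else:
            a, b = a[:half], b[half:]
    return min(a[0], b[0])

# Example: the union is 1..8, whose lower median is 4
print(joint_median([1, 4, 6, 8], [2, 3, 5, 7]))  # -> 4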

The natural question which now arises is: what else can we compute using the comparison (millionaire) function? The authors generalized the techniques from both the aforementioned papers to a large class of problems (matroid optimizations, convex hull,  job scheduling, set cover)  that can be seen as instantiations of the greedy-algorithms paradigm.  The main idea is that of reducing these problems to iterative comparison operations and gradually revealing the answer, as for the median computation.  Unfortunately, this implies that simulation-based security can only be guaranteed (under certain conditions) in the   semi-honest and covert setting, because a malicious adversary  could adaptively abort in the middle of the computation.


Muthuramakrishnan concluded his talk with interesting open problems, e.g. finding examples that admit malicious security, and generalizing their framework to other primitives or paradigms.

Friday, December 4, 2015

Workshop On Lattice Cryptography

It is the day after AsiaCrypt 2015 and there are two workshops being held in Auckland. The one most relevant for my research is the one on Lattice Based Cryptography, which consists of four talks: one by Jung Hee Cheon on "Multilinear Maps and Their Cryptanalysis", one by Amit Sahai on "Obfuscation", one by Fre Vercauteren on "Weak Instances of RLWE" and one by Martin Albrecht on "Small Secret LWE".



Cheon first described a very naive version of multi-linear maps and then went on to show how this can be attacked by creating non-trivial encodings of zero, and then taking greatest common divisors. Then he went on to generalise this naive scheme to the CLT scheme (which is a bit like the DGHV FHE scheme). The naive attack does not apply to CLT as the dimension is increased, meaning taking naive greatest common divisors would not work. Cheon then showed how to extend the naive attack to the CLT case by turning the gcd extraction into an eigenvalue extraction problem. This is done by building quadratic forms which represent encodings of zero. The result is that for the CLT scheme one can break the equivalent of the DLP problem.
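
As a toy illustration of why encodings of zero are so dangerous (this is a deliberately oversimplified model, not the actual CLT or DGHV construction): if an encoding of zero were simply a random multiple of a secret modulus p, then a handful of such encodings already reveal p via greatest common divisors.

from math import gcd
from random import randrange

# Toy model only: suppose an "encoding of zero" is r * p for a secret modulus p
# (here a prime) and random r. This is far simpler than any real multilinear
# map, but it shows why gcds of zero-encodings are the natural first attack.
p = 1_000_003
zero_encodings = [randrange(1, 2**32) * p for _ in range(5)]

recovered = 0
for c in zero_encodings:
    recovered = gcd(recovered, c)  # gcd over several encodings strips the random cofactors
print(recovered == p)              # True with overwhelming probability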

Cheon then went on to present the GGH scheme, which is a bit like the NTRU FHE scheme; except that instead of encrypting via c=[(m+r*p)/z] for an integer p, one encodes via c=[(m+r*g)/z] for a polynomial g which generates the ideal lattice <g>. Modifying the prior attack in this situation allows us to recover a basis of this ideal. But finding a short vector in this lattice can be hard. However, by utilizing encodings of zero one can actually solve the equivalent of the CDH problem.

Both attacks rely heavily on the presence of encodings of zero. So the attacks do not apply to situations in which one does not publish such encodings, i.e. applications such as indistinguishability obfuscation (iO).






Amit Sahai then gave an introduction to iO; he motivated it via an analogy of an attacker who captures your brain and is able to read and tamper with every neuron, yet we still do not want the attacker to know what we are thinking about. This is the problem which obfuscation tries to solve in the computing realm. Martin pointed out that this would be a great way to produce malware!

Amit then put Multi-Party Computation within this analogy. He suggested we can think of MPC as protecting our brain against the tampering adversary, by dividing the brain up into portions. As long as one portion is kept out of the adversary's control we can use MPC to protect our thoughts. Obfuscation tries to do the same, without there needing to be an honest part of the brain.

Any program which is suitable for obfuscation must be unlearnable from query access to the program, since otherwise the adversary could learn the program from its input/output behaviour. However, black-box obfuscation has been shown to be impossible; essentially because there are contrived programs which are unlearnable but for which one cannot produce an obfuscation, since any obfuscated program has an explicit attack against it.

This is why iO as a concept was presented; since it at least seems possible to achieve. The idea is that if you have two equivalent programs and we obfuscate one of them, then the adversary cannot tell which one we obfuscated. One way of thinking of this is as a pseudo-canonicalizer. The question is what useful things one could do with an obfuscator satisfying the iO definition. Amit gave the application of building demo versions of software, without needing to re-engineer the software.



Fre Vercauteren then discussed a more in-depth analysis of a paper from CRYPTO this year on Weak Instances of Ring-LWE. The CRYPTO paper gave instances where decision Ring-LWE was easy, but search appeared to be hard. However, Fre's talk showed that the search problem was in fact easy from the start, and thus the CRYPTO paper was less surprising than it at first seemed to be. As with all things on Ring-LWE the question arises as to how to choose the error distributions.

Fre spent the first part of his talk discussing the geometry of number fields, and in particular the Minkowski embedding. The Ring-LWE problem generates errors according to a discrete Gaussian distribution in the Minkowski embedding, whereas Poly-LWE generates the errors according to a discrete Gaussian in the polynomial embedding.

Eisentrager et al discussed cases for which Poly-LWE was easy; these were then extended by Elias et al to special cases of decision Ring-LWE. They did this by mapping the special Ring-LWE instance to a special Poly-LWE instance. This is done by pulling back the problem from Ring-LWE to Poly-LWE via the matrix which defines the Minkowski embedding. The Poly-LWE attack requires that q is larger than f(1), and hence q will "kind of show up" in the coefficients of the defining polynomial f. So the fields being attacked are very special indeed.








Thursday, December 3, 2015

Garbling with constant gate size


In the final MPC session of Asiacrypt, Carmen Kempka from NTT presented a garbling scheme for formulas with constant size of garbled gates. The talk described an interesting new technique for garbling Boolean formulas (i.e. circuits with one output) based on trapdoor permutations and "backwards" garbling that, remarkably, achieves a constant size of just 4 bits per garbled gate.

Garbling schemes are a popular technique for constructing secure two-party computation protocols, and most practical approaches are based on the classic technique of Yao, where for each gate of the circuit the resulting garbled circuit contains several ciphertexts, each of size $O(k)$ bits, that must be transmitted in the 2-PC protocol. At Eurocrypt this year, Zahur, Rosulek and Evans proposed the half-gate technique for garbling with just two ciphertexts per gate, and also showed that any "linear" garbling scheme cannot possibly do better than this. So, to reduce gate size any further, new non-linear techniques are needed; in Carmen's talk, some possible such techniques were given.

The main idea of the scheme is to use randomly generated, independent ciphertexts for each gate, so that the circuit evaluator can reconstruct these from just a single seed. The main difficulty is to ensure that the evaluator cannot then do the same as the circuit constructor, and create additional garbled circuits other than the one specified. To overcome this problem, they use trapdoor permutations (as opposed to typical garbling schemes, which just use hash functions) combined with a backwards garbling technique, where the input wire keys for each gate are chosen by solving a system of equations in the output keys and the ciphertexts.

The overall size of a garbled circuit is 4 bits per garbled gate, plus a ciphertext for each input, so this is the first scheme to achieve a constant gate size without a huge expansion in the input key size (as in Kolesnikov's scheme from 2005). However, since each input wire is uniquely determined by the output wires, gates can only have fan-out one, so the scheme is restricted to garbling formulas only. Extension to general circuits is possible, but at the cost of including 2 ciphertexts per gate.

Overall, the idea is neat, and it seems like a very interesting open problem to overcome the limitations of the scheme for general circuits, and reduce the size of the TDP-based ciphertexts for input gates.


Asiacrypt 2015: The Moral Character of Cryptographic Work

The distinguished IACR lecture at this year's Asiacrypt was given by Phillip Rogaway, who chose to talk more about the political implications of cryptographic work rather than the technology itself. This was certainly refreshing to see.

Phil started his talk by highlighting some historical events on the relationship of ethics and sciences: the nuclear bomb, the Nazi doctors, and the environmental movement. There is now the general idea that scientists should not harm society, but actually contribute to the social good as an obligation from the professional role. This manifests itself in various ethical codes and organizations; even the IACR bylaws oblige it to serve the public welfare. According to the talk, however, these values are in decline. The military has no problems recruiting scientists, and it provides more and more funding for science. At the same time, cryptography, like any technology, is used as a tool of power, which is recognized much more by popular culture than by cryptographers themselves.

An older generation of cryptographers seems to be more aware of this, for example David Chaum, who mentioned big data collection as early as the 1980s, as well as its possible effects on behavior. Comparing the citations of David Chaum's most cited paper on electronic mail with Goldwasser and Micali's on probabilistic encryption, one can see that only the latter is picked up by the usual cryptography conferences. Phil argues that this split is more political than anything else and that cryptographic primitives do not inherently favor individuals or powerful entities. For example, conventional encryption can be used as well to protect one's information as to take control away from the user, as in trusted computing.

All these issues gained much more attention through the Snowden leaks in the summer of 2013. It was revealed that mass surveillance is rife, obscured both by secrecy and complexity. Unsurprisingly, there is significant disagreement between government agencies and surveillance studies. While the former argue that cryptography destroys the balance between security and privacy, the latter show that surveillance simply is an instrument of power that makes people conformant (or killed by drones). Furthermore, they also argue that security and privacy are not necessarily mutually exclusive. There is historic evidence of unsavory uses of political surveillance, from the FBI's letter to Martin Luther King Jr. trying to push him towards suicide to the totalitarian regimes of today.

Considering all this, Phil claimed that while cryptography for security is a success, cryptography for privacy is not, and moreover, that cryptography is becoming more and more self-serving ("crypto-for-crypto"). To counter this, he presented a few suggestions for interesting problems such as xMail and big-key cryptography. The former is about sending messages via an untrusted server without allowing the server to link sender and receiver; the latter assumes that an attacker has already subverted the machine holding a key, but only has limited bandwidth to send information.

The last part of the talk consisted of twelve suggestions for cryptographers, essentially calling for a more holistic view of our work. The suggestions cover quite a range, from thinking twice about military funding to no longer drawing cute little devils for the adversary when it is in fact a large state-sponsored agency. The most interesting suggestion in my opinion is that we should taste our own medicine, that is, we should use privacy tools and improve them if necessary. However, there was also the suggestion to write fewer but more relevant papers, which is orthogonal to the current incentives in science.

Phil concluded with the quote "Just because you don't take an interest in politics doesn't mean politics won't take an interest in you."

There is an accompanying essay on his website.

Sunday, November 15, 2015

Games of Timing for Security in Dynamic Environments


Last week I had the pleasure of attending the 6th International Conference on Decision and Game Theory for Security, held in the (very swanky) Grand Royale Hotel next to Hyde Park. The first of these conferences occurred in 2010, and the series has continued to attract international scholars and researchers. Decision and game theory has proved to be a valuable and systematic framework for dealing with the intricacies involved in making rational decisions in security.

Games of timing have been an interest of game theorists for quite some time now. However, according to recent literature, most have been assuming that the cost and effectiveness of actions are time-independent. An example of this is the game of FlipIt[1]
between an attacker and defender playing for possession of a resource.
 A talk at this year's conference given by Ben Johnson removes this assumption by setting up a model that captures the discovery, repair and exploitation of software vulnerabilities. An example given by Ben was Microsoft officially stopping all support of Windows XP in 2014. Security professionals conjectured that attackers would begin ``stockpiling'' vulnerabilities until this time in order to exploit them fully.
The question that Johnson et al.'s paper [2] and his talk try to answer is whether the attacker should wait to exploit these vulnerabilities, considering there is a risk of the internal security team discovering them. If you require a more in-depth description of anything discussed here then please refer to the authors' paper [2].

The authors consider the life cycle of the software as finite, with an end time $t = T$. The rate of vulnerability discovery is an arbitrary function of time $V(t)$. The defender's security investment $d(t):[0,T] \to \mathbb{R}_{\geq 0}$ is a function of time representing the level of her security investment. The timing of the attacker's exploitation of vulnerabilities is $a(t):[0,T] \to \mathbb{R}_{\geq 0}$: this is the amount of time the attacker will wait to exploit a vulnerability discovered at time $t$.

The repair process for vulnerabilities is determined by an exponential decay function $e^{-\lambda \tau}$. This represents the probability that a vulnerability remains exploitable at a time $\tau$ after discovery. The authors consider their model in both continuous and discrete time. For the purpose of this blog post, I'll just discuss the continuous-time case.

The defender's total cost over the time interval $[0,T]$ is simply $\int^T_{t=0} d(t) dt$. The expected loss to the defender if the attacker waits for time $a(t)$ to exploit a vulnerability found at time $t$ is $e^{-\lambda a(t)} \frac{R}{d(t + a(t))}$ where $R$ is a scaling factor between security costs and losses. These can be combined to find the defenders total payoff
$$
 U_d = - \int^T_{t=0} \bigg{(}d(t) + V(t)  e^{-\lambda a(t)} \frac{R}{d(t + a(t))}\bigg{)}dt
$$
Likewise, the attacker's payoff is given by
$$
 U_a =  \int^T_{t=0} \bigg{(} V(t)  e^{-\lambda a(t)} \frac{R}{d(t + a(t))}\bigg{)}dt
$$
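
To make these expressions concrete, the following sketch (plain Python; the choices of $V(t)$, $d(t)$, $a(t)$ and the numerical parameters are illustrative, not taken from the paper) evaluates both payoffs by straightforward numerical integration.

import math

# Illustrative parameters (not from the paper): software lifetime T,
# repair rate lambda, and loss-scaling factor R.
T, lam, R = 10.0, 0.5, 2.0

V = lambda t: 5.0 * math.exp(-0.3 * t)      # vulnerability discovery rate
d = lambda t: 1.0 + 0.1 * t                 # defender's investment over time
a = lambda t: 0.0                           # attacker strategy: exploit immediately

def payoffs(n=10_000):
    """Riemann-sum approximation of the defender's and attacker's payoffs."""
    dt = T / n
    U_d = U_a = 0.0
    for i in range(n):
        t = (i + 0.5) * dt
        loss = V(t) * math.exp(-lam * a(t)) * R / d(t + a(t))
        U_d -= (d(t) + loss) * dt
        U_a += loss * dt
    return U_d, U_a

print(payoffs())
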
The authors then turn to looking for conditions for useful Nash Equilibria[3] to exist.
Interestingly, it can be shown that the attacker will never wait if the vulnerability function satisfies
$$
\frac{V(t+a)}{V(t)} \geq e^{-\lambda a}
$$
for every $t \in [0,T]$ and $a \in [0, T-t]$. This begins to provide guidelines for choosing vulnerability repair rates in order to stop the attacker from stockpiling vulnerabilities. Further work suggested by Ben is to study the Stackelberg equilibria [4] of the game.


References
[1] -  https://eprint.iacr.org/2012/103.pdf
[2] -  http://aronlaszka.com/papers/johnson2015games.pdf
[3] - https://en.wikipedia.org/wiki/Nash_equilibrium
[4] - https://en.wikipedia.org/wiki/Stackelberg_competition

Tuesday, October 27, 2015

Sardinia eCrypt School, October 2015

As I was standing in the sweltering heat, the burning light from above with the hustle bustle of the busy bodies rushing all around me, I thought to myself "in about three hours I'll finally get through Stansted airport security".

One of the reasons I applied for a PhD at the University of Bristol was to travel; to travel all around the world attending conferences, networking with other academics in my field and collaborating on applicable pieces of work. However, I hadn't anticipated that I would arrive at Bristol and meet my supervisors (Elisabeth Oswald and Daniel Page) for them to say "thank goodness you're here, now pack your bags and head to Sardinia for a week".

I didn't argue.

The school I'm referring to is the "School on Design and Security of Cryptographic Algorithms and Devices", and this year it was co-sponsored by the International Association for Cryptologic Research (IACR). A total of roughly 80 academics and professionals from all around the globe attended the school, many of whom were new PhD students (like me), and a few were seasoned Crypto School Veterans. As a newbie I experienced the excitement of meeting all these new faces, and getting to know them over a few glasses of wine (the magnificent hotel resort was very generous with its distribution of wine, particularly to my table).

The Summer School took the form of a crash course in all the different aspects of cryptography, focusing on covering a lot of ground with introductory lectures. For my first blog post I really wanted to focus on one particular talk, but after attending them all it became clear that I had quite a few favourites; for their content, Benedikt Gierlichs' "Introduction to Implementation Attacks" and Josep Balash's "Introduction to Fault Attacks" were fascinating. With little background knowledge of practical attacks on embedded security, I was shocked to see just how easily one could manipulate theoretically secure devices, simply by making minor alterations to the embedded circuit (see the slides from Benedikt Gierlichs' "Introduction to Implementation Attacks" for more detail).

As for the presentation of the talk, I give a special mention to Gregor Leander. He was able to introduce Lightweight Block Cipher design whilst holding the attention of his audience through quick wit; he told stories of multiple attempts to create nifty lightweight block ciphers, for them to be proven weak and useless within several months of publication. My favourite was the story of Keeloq - a proprietary hardware-dedicated block cipher that uses a non-linear feedback shift register. It was sold to Microchip Technology Inc in 1995 for $10 million (wow), and it was (maybe is) used in many remote keyless entry systems by companies such as Chrysler, Daewoo, Fiat, GM, Honda, Toyota, Volvo, Volkswagen Group, Clifford, Shurlok, Jaguar, etc. However, the Keeloq algorithm does not use cryptographic nonces (for simplicity), nor does it include time stamping (due to clock drift). This means, of course, that the algorithm is vulnerable to replay attacks: simply jam the channel while intercepting the code, and one can obtain a code that may still be usable at a later stage. If your car still uses Keeloq, you should probably either invest in a steering wheel lock, or paint your car bright pink.

To summarise, the organisers of the School did a fantastic job of bringing everyone together, the speakers were all fantastic and engaging, and I learnt a great deal about Cryptography whilst having the greatest week in Sardinia. I would strongly recommend this school to anyone who has just started a PhD in cryptography, regardless of their topic title. I will most certainly be getting involved in as many of these schools as I can!

Finally, a big shout out to all the friends I made at the School - you are all brilliant and I can't wait for the next one. Also, sorry for my atrocious dancing throughout the week. Blame the bar for playing 'Barbie Girl'.

Thursday, October 22, 2015

Obfuscation: Hiding Secrets in Software

Last week the HEAT-hosted summer school on FHE and multi-linear maps took place. On the final day we learned about indistinguishability obfuscation from the eminent cryptographer Amit Sahai. The talk was entitled 'Obfuscation: Hiding Secrets in Software' and a summary is presented here.

Suppose you want to keep a secret, but some adversary has taken over your brain: whenever you think about the secret, the adversary is able to read it. While this Orwellian nightmare is not realisable for us (at the moment!), in software this happens all the time. This is the fundamental question of obfuscation; i.e., Can we keep a secret from some adversary if all of the functionality is known? Indistinguishability obfuscation is an attempt to realise this.

One early idea on the way to program obfuscation was secure multi-party computation (MPC). The rough idea of secure MPC is to allow each member of a group of parties to compute some function on everyone's input without anyone learning too much about the input of any other person. In such a situation, an adversary may be one member of the party, or even act as multiple members in order to try to work out something about the input of one of the other party members. In this case, the adversary has a lot of information he can exploit; for true obfuscation, however, we want that nothing at all be revealed to an adversary.

Now we ask the important question, Which programs allow for secrets to be kept? Suppose we have a program with some secret $s$ accepting an integer $x$ as input such that it outputs 1 if $x<s$ and 0 otherwise. By a simple binary search, $s$ can be determined by the adversary. Plausible security requires unknowable queries, so if we are to succeed in obfuscation we need to have a 'virtual black box'.
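
Concretely, the leak looks like this (a minimal Python sketch, with a hypothetical comparator program standing in for the obfuscated code):

def make_program(secret):
    """Stand-in for an 'obfuscated' program: returns 1 iff the input is below the secret."""
    return lambda x: 1 if x < secret else 0

def extract_secret(program, lo=0, hi=2**32):
    """Recover the hidden threshold using only input/output queries."""
    while lo < hi:
        mid = (lo + hi) // 2
        if program(mid) == 1:   # mid < secret, so the secret lies above mid
            lo = mid + 1
        else:                   # mid >= secret
            hi = mid
    return lo

prog = make_program(123_456_789)
print(extract_secret(prog))     # 123456789, recovered after only ~32 queries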

However, back in 2001, black box obfuscation was suggested and ruled out in the same paper: it cannot be achieved for all programs. The authors go on to describe and define indistinguishability obfuscation (iO) -- obfuscating two programs with the same functionality so the programs are computationally indistinguishable. Interestingly, if P=NP then any circuit can be obfuscated, and so iO cannot produce hardness itself.

A more recent well-known paper gave the first candidate construction of iO for NC$^{1}$ circuits. The authors describe a useful possible application: crippleware. Suppose you are a software vendor and have written a program with lots of functionality, and you want to distribute a trial version of the software which is somewhat restricted. In order to block trial users from using content for which they have not yet paid, one possibility is for you to go through the code and manually remove the blocks of code for which a full licence should be required. In a large program this is very cost- and time-inefficient. However, if instead you simply restrict the functionality of the user interface and release an obfuscated version of the program, this version will not have any functionality beyond that which was intended.

Very recently, there have been many attacks on multi-linear maps, many of which rely on obtaining top-level encodings of zero. Because in obfuscation the adversary never has access to encodings of zero, it is resistant to these attacks. Thus the hope remains that obfuscation will not be broken.

Wednesday, October 21, 2015

Using Linearly-Homomorphic Encryption to Evaluate Degree-2 Functions on Encrypted Data


This post is about a talk in the final session of CCS 2015 that was presented by Dario Fiore. The presented work is a joint work with Dario Catalano and the full version of their paper can be found here.

Homomorphic Encryption (HE) provides a solution to the problem of outsourcing computations privately to a cloud. There are a number of Somewhat Homomorphic Encryption (SHE) schemes that support many additions but only a limited number of multiplications. Some examples of such schemes, in chronological order, are Goldwasser-Micali, El Gamal, Okamoto-Uchiyama, Paillier, Boneh-Goh-Nissim, and all the underlying SHE schemes of Fully Homomorphic Encryption (FHE) schemes since the construction of Gentry. Though SHE schemes can evaluate only a limited set of circuits compared to FHE schemes, it is still interesting to consider them for applications, mainly for efficiency reasons. Those SHE schemes that support only a single operation on ciphertexts - addition or multiplication - are called linearly homomorphic. That is, (additive) linearly-homomorphic encryption schemes can evaluate only linear functions on ciphertexts. (Here, by a "linear function" we mean that the function can be expressed as a polynomial of degree one in the input variables with the scalars coming from a ring.)

The main problem addressed in the current work is to enable evaluation of degree-two functions using linearly-homomorphic encryption schemes. To this end, the authors propose a generic transformation that extends an (additively-written) linearly-homomorphic scheme to support a single multiplication operation so that it will now be able to evaluate a subset of functions of degree two.  The only requirement of the underlying linearly-homomorphic scheme is that it should be public space, which means that the message space must be a publicly known finite commutative ring and that it is possible to efficiently sample a random element from this ring. This property seems to hold for nearly all of the known linearly-homomorphic encryption schemes or can be easily adapted to become so.
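
The flavour of the transformation can be sketched in a few lines. The following toy Python code (with an insecure mock standing in for the additively homomorphic scheme, and all names hypothetical) shows the basic idea of masking each message additively, carrying the mask along in encrypted form, and paying one extra term at decryption time; the paper itself should be consulted for the precise construction and security argument.

import random

q = 2**31 - 1   # toy plaintext modulus

class MockAHE:
    """Insecure stand-in for an additively homomorphic scheme (e.g. Paillier):
    'ciphertexts' are just plaintexts, but only the Enc/Dec/Add/ScalarMul API is used."""
    def enc(self, m):        return m % q
    def dec(self, c):        return c % q
    def add(self, c1, c2):   return (c1 + c2) % q
    def smul(self, k, c):    return (k * c) % q

ahe = MockAHE()

def enc1(m):
    """Level-1 ciphertext: (m - b, Enc(b)) for a random mask b."""
    b = random.randrange(q)
    return ((m - b) % q, ahe.enc(b))

def mul(ct1, ct2):
    """One multiplication: produces a level-2 ciphertext (alpha, beta1, beta2)."""
    a1, beta1 = ct1
    a2, beta2 = ct2
    # Enc(a1*a2 + a1*b2 + a2*b1), computed with additive homomorphism only
    alpha = ahe.add(ahe.enc(a1 * a2 % q),
                    ahe.add(ahe.smul(a1, beta2), ahe.smul(a2, beta1)))
    return (alpha, beta1, beta2)

def dec2(ct):
    """Decrypt a level-2 ciphertext: add back the missing b1*b2 term."""
    alpha, beta1, beta2 = ct
    b1, b2 = ahe.dec(beta1), ahe.dec(beta2)
    return (ahe.dec(alpha) + b1 * b2) % q

m1, m2 = 1234, 5678
print(dec2(mul(enc1(m1), enc1(m2))) == (m1 * m2) % q)   # True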

Compactness is one of the three main requirements of a (fully) HE scheme that says that the complexity of decryption must be independent of the complexity of the circuit evaluating the given function. The other two requirements are semantic security and circuit privacy for the underlying plaintext messages. The latter means that it must not be possible to learn anything about the messages in the ciphertexts input to a function. It is shown that the proposed transformation preserves semantic security and the transformed scheme will be levelled circuit private if the underlying scheme is circuit private. The transformed scheme will be able to compactly evaluate a subset of degree-two functions (represented as polynomials) $\mathcal{F}_2^* $ consisting of the following polynomials $f$
\[
f(m_1,...,m_t) = P(m_1,...,m_t) + \sum_{i=1}^L Q_i(m_1,...,m_t) * R_i(m_1,...,m_t),
\]
where $P$, $Q_i$, $R_i$ are linear polynomials. It turns out that the subclass $\mathcal{F}_2^*$ of quadratic polynomials is relevant for applications; for example, the SPDZ protocol for multi-party computation requires a HE scheme for $\mathcal{F}_2^*$ with $L=1$. Though there are many prior works that aim to construct efficient SHE schemes, they do not achieve full compactness even for this subclass of quadratic functions. In the current work, this limitation for the quadratic functions outside $\mathcal{F}_2^*$ is overcome by working in a different model that uses two servers that neither communicate nor collude. A transformation of the underlying HE scheme is given in this model that is similar to the previous transformation, using which it is now possible to evaluate any quadratic polynomial compactly. Interestingly, the former (single-server) transformation preserves other properties such as zero-knowledge proofs of plaintext knowledge, threshold decryption, and multi-key homomorphicity. Also, the above results are generalised to obtain a $d$-homomorphic to $2d$-homomorphic transformation, and to obtain a two-server protocol to evaluate all degree-three polynomials from the BGN HE scheme.

The authors have compared implementations of the Paillier scheme and the Joye-Libert scheme when transformed to support evaluation of degree-two functions, with the HElib implementation of the BGV FHE scheme instantiated with parameters to support a single multiplication, and also with an implementation of the BGN SHE scheme. Up to values of $L = 10$, the transformed Joye-Libert scheme has comparable decryption time, but with ciphertexts 25 times shorter, and with encryption and the homomorphic operations at least 30 times faster.

Tuesday, October 20, 2015

52 Things, Number 50: What is the BLS pairing-based signature scheme?

This week we look at what the BLS pairing-based signature scheme is. See here for full details.

This signature scheme makes use of the Weil pairing on elliptic curves, essentially a bilinear form (with multiplicative notation) on points of order dividing $n$ on a curve, taking values in the group of $n^{th}$ roots of unity.

So assume you have an elliptic curve $E/\mathbb{F}_{3^l}$, following the notation in the original paper. The scheme is as follows:

KeyGen: Let $E/\mathbb{F}_{3^l}$ be an elliptic curve and $q$ be the largest prime factor of the order of the curve. Let $P$ be a point on it of order $q$ and $x \in \mathbb{Z}_q^*$ be selected at random. Finally let $R = xP$. Then output $(l, q, P, R)$ as the public key and $x$ as the secret key.

Sign: To sign a message $M \in \{0,1 \}^*$ we map $M$ to a point $P_M$ in $\langle P \rangle$. This is done via an algorithm $h$ described in section 3.3 of the paper, which is a hash function. Then let $S_M = xP_M$. The signature $\sigma$ is the $x$-coordinate of the point $S_M$, and $\sigma \in \mathbb{F}_{3^l}$.

Verify: Given a public key $(l, q, P, R)$, a message $M$ and a signature $\sigma$, do the following (a schematic code sketch is given after the list):
  • find a point $S$ on the curve of order $q$ whose $x$-coordinate is $\sigma$ and whose $y$-coordinate belongs to $\mathbb{F}_{3^l}$. If no such point exists, reject the signature as invalid.
  • set $u = e(P, \phi(S))$ and $v = e(R, \phi(h(M)))$, where $e$ is the Weil pairing on the curve and $\phi$ is an automorphism $E \to E$. $h$ is the same hash function mentioned earlier.
  • if either $u = v$ or $u^{-1} = v$ accept the signature as valid; reject otherwise.
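
The verification step can be written schematically as below (Python that runs as written but delegates every group operation to caller-supplied helpers; the helper names are hypothetical and would be provided by whichever pairing library is used):

def bls_verify(pairing, hash_to_point, lift_x, phi, gt_inv, public_key, message, sigma):
    """Schematic BLS verification.

    All group operations are supplied by the caller:
      pairing(P, Q)    -- the Weil pairing e
      hash_to_point(m) -- the hash h mapping messages into <P>
      lift_x(x)        -- a point of order q with x-coordinate x, or None if none exists
      phi(P)           -- the curve automorphism used in the scheme
      gt_inv(u)        -- inversion in the pairing's target group
    public_key is (l, q, P, R) with R = x*P; sigma is the x-coordinate of S_M = x*h(M).
    """
    l, q, P, R = public_key
    S = lift_x(sigma)
    if S is None:
        return False                                  # no valid point with that x-coordinate
    u = pairing(P, phi(S))                            # e(P, phi(S))
    v = pairing(R, phi(hash_to_point(message)))       # e(R, phi(h(M)))
    return u == v or gt_inv(u) == v                   # the signature loses the sign of S_M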


Monday, October 19, 2015

52 Things: Number 52: Pick an advanced application concept such as e-Voting, Auctions or Multi-Party Computation. What are the rough security requirements of such a system?

This is the last of our 52 things which we think every first year PhD student in Cryptography should know. And as you may have gathered over the last 52 posts, we expect students to know all sorts of aspects ranging from theory to practice. But the key thing is that in cryptography you need to consider not only security against players who play by the rules, but also security against players who do not play by the rules. Let's examine this in the context of Voting, Auctions and Multi-Party Computation.

Let us first discuss what we mean by these three applications. In voting we have voters who select candidates according to some voting scheme (first-past-the-post, alternative-vote, approval voting or whatever). The vote should remain secret, only valid voters can vote, only one vote per voter is allowed, votes must be valid (for a real candidate, for example), the final result must be correct, and voters must not be able to be coerced; the list of security requirements is quite long.

For auctions we may want bids to be private, we may not trust the auctioneer, there may be multiple items with multiple possible final prices, the selection of the winning bid/price will be determined by some algorithm, and the final output may need to be auditable.

For Multi-Party Computation (by which we mean a computation of a function on the private inputs of a set of parties) the security requirements are simpler, in that we want only the output of the function to be revealed, and nothing about the inputs (bar what can be computed from the output). However, whilst this is a simpler goal, the functionality is wider than for auctions and voting in that we require that ANY function can be computed.

What makes these operations interesting is that we EXPECT the bad guy to be part of the protocol. Compare this with simple encryption, where a message is sent from Alice to Bob. In encryption we expect Alice and Bob to be trustworthy, and the bad guy is the person on the outside looking in. For voting, auctions and MPC we do not trust anybody: the bad guy could be a voter trying to cast multiple votes, a vote tallier trying to count the votes incorrectly, a bidder trying to make a winning bid which is not the highest, or an auctioneer trying to work out the value of a non-winning bid, etc.

The parties in the protocol do not even need to behave by the rules, i.e. follow the protocol. They could send messages which are produced incorrectly but which "look like" valid ones, but which later produce incorrect results for the protocol. We need to protect against this so called "malicious" behaviour.

There might be a group of bad guys working together to defeat the system, we need to determine how big a coalition of bad guys we can tolerate in our protocol. In MPC there is a big difference between the case when we have an honest majority and a dishonest majority. For honest majority protocols we can ensure that the honest parties always end up with a valid function output. For dishonest majority protocols we cannot stop a dishonest party from terminating the protocol for everyone.

We need to protect against the problem of who goes first or last, a property known in the literature as "fairness". For example, suppose we have an election with three voters: A, B and C. Suppose the votes are encrypted; player C can ensure that the candidate voted for by A wins (and hence find out who A voted for) by replicating A's vote. This should be protected against.

The adversary may have a set of parties who it controls at the start of the protocol, a so-called static adversary. Or perhaps as the protocol proceeds the adversary decides which parties it wants to corrupt, a so-called adaptive adversary.

As one can see there are a multitude of security concerns one may have in such advanced protocols, and indeed a multitude of security results. Indeed each application domain gives rise to different security properties that one may require. Given the wide range of possible application protocols this means a never ending series of problems for cryptography to solve; and thus a never ending supply of problems for cryptography PhD students to solve.

Thursday, October 15, 2015

Inference Attacks on Property Preserving Encrypted Databases

This post will focus on Inference Attacks on Property-Preserving Encrypted Databases by Naveed, Kamara and Wright, presented this week at CCS in Denver. As the name suggests, the idea of property-preserving encryption (PPE) is to encrypt data in such a way that some property, such as ordering (OPE), is preserved---i.e. if $a>b$ then $\mathsf{Enc}(a)>\mathsf{Enc}(b)$. Another example is deterministic encryption (DTE), which preserves equality, meaning that if $a=b$ then $\mathsf{Enc}(a)=\mathsf{Enc}(b)$. PPE is of course inherently less secure than semantically-secure encryption, however it opens the door to a number of applications, in particular in the context of healthcare. As a straightforward example, suppose a large database holds patient records and each column is encrypted using some method of PPE, e.g. gender and disease using DTE, and age and admission date using OPE (fields such as patient identifiers may not require any functionality). If a medical professional wishes to analyse data of all patients between the ages of 18 and 25 who have suffered from altitude sickness, instead of downloading the entire database (which she would have to do if the database was encrypted using standard, probabilistic encryption), she can calculate $\mathsf{Enc.OPE.age}(18)=c_1$, $\mathsf{Enc.OPE.age}(25)=c_2$ and $\mathsf{Enc.DTE.disease}(\textrm{altitude sickness})=c_3$ and only download the database entries that have an age ciphertext $c$ in the range $c_1 \leq c \leq c_2$ and disease ciphertext $c_3$.
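
A toy sketch of this kind of server-side filtering is given below (Python; the 'encryption' functions are insecure placeholders whose only job is to preserve order and equality, which is exactly what the server relies on):

# Insecure placeholders that preserve exactly the properties PPE promises:
# order for OPE, equality for DTE. Real schemes would use keyed algorithms.
ope_age = lambda age: 3 * age + 7          # strictly increasing -> order-preserving
dte_dis = lambda name: hash(name)          # deterministic within a run -> equality-preserving

encrypted_db = [
    {"age": ope_age(19), "disease": dte_dis("altitude sickness"), "record": "ct_001"},
    {"age": ope_age(40), "disease": dte_dis("altitude sickness"), "record": "ct_002"},
    {"age": ope_age(23), "disease": dte_dis("flu"),               "record": "ct_003"},
    {"age": ope_age(21), "disease": dte_dis("altitude sickness"), "record": "ct_004"},
]

# The client encrypts the query values; the server filters on ciphertexts alone.
c1, c2 = ope_age(18), ope_age(25)
c3 = dte_dis("altitude sickness")
matches = [row["record"] for row in encrypted_db
           if c1 <= row["age"] <= c2 and row["disease"] == c3]
print(matches)   # ['ct_001', 'ct_004']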

This saves bandwidth, and consequently time and money, but we really don't want the database (or queries to it) to leak anything more than the properties claimed by the encryption mechanisms, namely equality for DTE, and order plus equality for OPE. Unfortunately, human characteristics and medical diagnoses have low entropy: if an adversary knows that a medical database holds records for patients from one geographic area, then the distributions of age and race are going to be very similar to those of the general population for that area, and even worse, most of the data fields will be highly correlated with data from another nearby hospital. This opens the door to two attack vectors: inference attacks that use publicly-available data to gain information about queries (the focus of the paper), and de-anonymisation attacks (that can be built from inference attacks) that aim to identify entries of the EDB. Of course we can use semantically-secure encryption for sensitive attributes (requiring download of the whole column before analysis can occur) to try to thwart these attacks, but which attributes are sensitive? That question is beyond the scope of this post, but the answer is, in general, as many as possible. Note that attributes may be sensitive not only for patients but also for hospitals, which may want to keep secret how many patients die of certain diseases.

The authors show that it is possible to mount inference attacks on encrypted databases (that are built on tools such as CryptDB and Cipherbase) storing real patient data from 200 US hospitals, using only the ciphertexts and public auxiliary information. The public data is obtained from the Healthcare Cost and Utilization Project (HCUP) National Inpatient Sample (NIS) database. The talk provided a jarring 'context is everything' moment by including a slide that displayed only the words "We attack each hospital individually" in large font. This means that the authors look at each hospital database on an individual basis rather than grouping them, and use the auxiliary info to mount their attacks. A number of tools are used and can be found in the paper. The easiest to explain is frequency analysis on a column encrypted using DTE (authors focus on mortality risk): sort the histogram, compare to the auxiliary histogram and map same-ranked elements to infer which entries correspond to which attribute. The authors' novel $l_p$ optimisation recovered mortality risk and patient death data for at least 99% of the 200 largest hospitals, and recovered disease severity for 100% of patients for at least 51% of those hospitals. Sorting attacks on data encrypted using OPE take columns that are 'dense' (meaning that all values have entries), and allow the attacker to decrypt all values with no knowledge of the key: the authors recovered the admission month and mortality risk of 100% of patients for at least 90% of the 200 largest hospitals. Another novel attack called the cumulative attack recovered disease severity, mortality risk, age, length of stay, admission month, and admission type of at least 80% of patients for at least 95% of the largest 200 hospitals. These results only exploit the databases themselves and not leakage from queries ('ciphertext only'), suggesting that the results are a lower bound on the information that could be gleaned from these encrypted databases.
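
A minimal version of the rank-matching step of frequency analysis (Python; toy data, and far cruder than the authors' $l_p$-optimisation attack) looks like this:

from collections import Counter

def frequency_attack(dte_column, auxiliary_data):
    """Map DTE ciphertexts to plaintext guesses by matching frequency ranks."""
    ct_ranked = [c for c, _ in Counter(dte_column).most_common()]
    pt_ranked = [p for p, _ in Counter(auxiliary_data).most_common()]
    return dict(zip(ct_ranked, pt_ranked))

# Toy data: a DTE-encrypted mortality-risk column and a public auxiliary dataset
# with a similar distribution (e.g. HCUP statistics for the same region).
secret_col  = ["minor"]*70 + ["moderate"]*20 + ["major"]*10
dte         = {"minor": "0x9a", "moderate": "0x4f", "major": "0x11"}
ciphertexts = [dte[v] for v in secret_col]
auxiliary   = ["minor"]*650 + ["moderate"]*230 + ["major"]*120

print(frequency_attack(ciphertexts, auxiliary))
# {'0x9a': 'minor', '0x4f': 'moderate', '0x11': 'major'}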

Interestingly, the following talk in the session, given by Florian Kerschbaum, defined a new security model for order-preserving encryption, potentially giving a countermeasure against some of the attacks on OPE given above.

Face/Off: Preventing Privacy Leakage From Photos in Social Networks

One of the worst features of Facebook is when a friend uploads a photo of you looking awful and then tags you in it so all of your friends can have a good laugh. You can untag yourself and burn your ugly outfit but the photo is still there. Worse still, your friend might have terrible privacy settings allowing anyone to see the picture.

Wouldn’t it be nice if you had more control over the privacy settings for photos you’re in? At CCS today Panagiotis Ilia spoke about a tool to give you just that. With their system, when a user uploads a photo it gets run through facial recognition software to identify the people in it. The photo is then split into layers so that each person gets control over the layer containing their own face. Users then choose who can see their face in the photo, with people outside the privileged group seeing a blurred face instead.

Ilia and his team ran experiments, showing people blurred face photos of random friends and found that 82% of those polled could not say which of their friends featured in the photos. Of those who could, their friend’s identity was given away in most cases by the other people in the photo, followed by their body or hair, and finally by their clothing, for example their characteristic Hawaiian shirt and running shoes.

A member of the audience asked what would happen if you used your favourite photo editing software to manipulate a person’s face so that the facial recognition software would be fooled but a human could still recognise the person, and was assured that should the facial recognition software fail to find a match for the person in the photo, they would appear blurred by default. This protects those who opt out of social networking from appearing in pictures. A potential issue here is if I get a selfie with my favourite celebrity then their face may be blurred by the software and no one will believe I really did meet Alan Hansen in a KFC.

If you make bad fashion choices, are regularly photographed in compromising positions, or just want to know more about the technical details you can read the paper here.

Tuesday, October 13, 2015

Network protocol obfuscation

CCS 2015 kicked off in Denver today, with three parallel sessions in play throughout the conference. Today I'm going to write a little bit about the first talk in the session on censorship and resistance, "Seeing through Network Protocol Obfuscation" by Liang Wang, Kevin P. Dyer, Aditya Akella, Thomas Ristenpart and Thomas Shrimpton, with the talk delivered by Liang.

The authors framed the adversary in the context of their work as a nation state armed with deep-packet inspection (DPI) capabilities, where the adversary wishes to block network access to certain sites or resources, and uses DPI to identify and nullify attempts to circumvent these blocks.

Typically, anti-censorship attempts follow fairly standard strategies; if a particular circumvention protocol is blocked (Tor being the canonical example), then an anti-censorship mechanism will typically try to either randomise the bytes sent over the wire with the goal of looking like 'nothing', to make the traffic 'look' like an unblocked protocol (e.g. make Tor look like HTTP), or to tunnel the blocked protocol over an unblocked protocol.

The adversary here is probabilistic; it must guess whether a particular connection is attempting circumvention or not. As a consequence, it's important to understand both the false positive (blocking a genuine connection) and false negative (not identifying a circumvention attempt) rates to be able to judge most accurately the effectiveness of the strategy followed by the adversary.

A valid and I think very important criticism of a proportion of research into this area of classification of network trace information is that often the data set used to evaluate success rates of adversarial strategies isn't representative enough of the real-world environment; the laboratory data sets may be too small, or only sampled from a small subset of the 'population', and a classifier may be heavily over-fitted to the training data, and thus performance results will not translate to a different data set (see, for example, Mike Perry's blog post and an associated paper). In this regard, this work appears to do a reasonable job; one data set gathered contained packet-level information from a variety of locations and over a variety of connection types from a campus network, together with researcher-generated obfuscated Tor connections generated from a reasonably wide range of environments.

The attack strategies used by the adversary are often very straightforward. The authors explored a wide range of circumvention strategies and consequent detection mechanisms that are better explained in the research paper rather than here, but as a motivating example, to detect usage of the format-transforming encryption (FTE) technique, which attempts to make Tor traffic mimic standard HTTP traffic, the adversary can achieve fairly low false positive rates (< 4 percent) by simply reassembling HTTP streams and checking the header information for correctness.
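
A crude detector in this spirit (my own sketch, not the authors' code) just reassembles the client's bytes and checks that they parse as a structurally valid HTTP request:

import re

# Genuine HTTP requests have a well-formed request line and headers; FTE output
# that merely "looks like" HTTP at the byte level often fails these structural checks.
REQUEST_LINE = re.compile(rb"^(GET|POST|HEAD|PUT|DELETE|OPTIONS) \S+ HTTP/1\.[01]\r\n")

def looks_like_real_http(stream: bytes) -> bool:
    if not REQUEST_LINE.match(stream):
        return False
    head, sep, _ = stream.partition(b"\r\n\r\n")
    if not sep:                                   # headers never terminate
        return False
    for line in head.split(b"\r\n")[1:]:          # every header must be "Name: value"
        if not re.match(rb"^[!#$%&'*+.^_`|~0-9A-Za-z-]+: ", line):
            return False
    return True

print(looks_like_real_http(b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n\r\n"))  # True
print(looks_like_real_http(b"\x17\x03\x03GET /broken HTTP\r\nxyz"))                    # False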

Across the board the authors find that they can construct attacks with false positive rates as low as 0.03 percent. The precise level at which the FPR becomes unacceptable for an organisation inspecting extremely large amounts of traffic (such as the Great Firewall of China) isn't clear, but it seems that anti-circumvention technologies face a tricky task in remaining useful over time. There's also an interesting comparison here with data exfiltration techniques---some anti-censorship mechanisms are viable as data exfiltration tools for extracting secure data unnoticed from a network.

Particularly when machine-learning techniques are involved, the future may yield a scenario in which the obfuscation tool has to adapt faster than the detection mechanism used to 'survive' (and vice versa).

Monday, October 12, 2015

52 Things: Number 51: What is the security model for ID-based encryption, and describe one IBE scheme.

This is the latest in a series of blog posts to address the list of '52 Things Every PhD Student Should Know To Do Cryptography': a set of questions compiled to give PhD candidates a sense of what they should know by the end of their first year. This week we introduce Identity-Based Encryption.

In public key cryptography, if Alice wants to send a message to Bob, she needs his public key. Typically this will be some very long bitstring (encoding, for example, the product of two large primes).

Suppose instead that Alice could use Bob's name, or email address, as his public key. Practically speaking, this is very convenient: there are no very long strings for Alice to obtain and remember and she doesn't need to verify that some seemingly random string is in fact Bob's public key (and not, for example, Charlie's public key). But to facilitate this, one needs Identity Based Encryption, or IBE.

In IBE, there is an entity called the Private Key Generator, or PKG. The PKG is able to compute Bob's private key using his ID (e.g. his email address) and a master key. Once Bob has authenticated himself to the PKG, he can ask for his private key and then, once he has it, he can decrypt any messages that have been encrypted under his ID.

There is an issue here, though. With the master key, the PKG can generate private keys for any agent it likes. Therefore the PKG can decrypt any messages intended for any agent. This is called key escrow, and it means that you must either trust the PKG not to read your encrypted messages or else not care that it does. In a company, though, senior management often has the right to read your (work) emails, and so IBE can be an appropriate solution for internal correspondence.

Formally, an IBE scheme consists of four algorithms: setup, extract, encrypt and decrypt.

Setup takes a security parameter and outputs the (secret) master key and the (public) system parameters, such as the message and ciphertext spaces.

Extract takes an ID and a master key and returns a private key corresponding to the ID.

Encrypt takes a message and an ID, and returns a ciphertext.

Decrypt takes a ciphertext and a private key, and returns a message.

Boneh and Franklin gave an IBE scheme in this 2003 paper. They prove, under an assumption similar to assuming that the CDH problem is hard, that their scheme is IND-ID-CCA secure in the Random Oracle Model. This means that (assuming all hash functions are random oracles), any attacker, running in polynomial-time with respect to the security parameter, wins the following security game with probability that is only negligibly (with respect to the security parameter) more than 1/2:

First, the attacker
  • can request the private keys corresponding to any ID
  • can request decryptions of any ciphertexts under any ID.
Then, the attacker chooses two messages $m_0, m_1$ and an ID $ID^*$ that does not occur in the list of IDs for which he has requested the corresponding private key. The attacker then receives the encryption $c^*$ of $m_b$ under $ID^*$ (where the bit $b$ is chosen uniformly at random). Then the attacker
  • can request the private keys corresponding to any ID apart from $ID^*$
  • can request decryptions of any ciphertexts under any ID apart from $ \left ( c^*, ID^* \right) $
and outputs a bit $b'$. We say the attacker wins if $b' = b$.

The scheme given by Boneh and Franklin relies on a non-degenerate bilinear map $e: G_1 \times G_1 \to G_2$, where $G_1$ is a group of prime order $q$, which we write additively, and $G_2$ is a group, also of order $q$, which we write multiplicatively. They instantiate this map with the Weil pairing on elliptic curves, but we omit details here. All that matters is bilinearity: $e \left( aP, bQ \right ) = e \left( P, Q \right)^{ab}$ for any $a, b \in \mathbb{Z}_q$.

There's not enough space here to describe the scheme in full, but essentially the master key is some non-zero $s \in \mathbb{Z}_q$ and the private key corresponding to $ID$ is $sH \left(ID \right)$, where $H$ is a hash function sending bitstrings to elements of $G_1$. There are public generators $P$ and $P_{pub} = sP$ of $G_1$.

To encrypt $m$ under $ID$, one selects a random string $\sigma$ and XORs $m$ with a hash of $\sigma$, creating $c_m$. Then $m$ and $\sigma$ are hashed together, giving a non-zero element $r \in \mathbb{Z}_q$. Finally one computes the pairing $e \left( H \left( ID \right), P_{pub} \right) $, raises it to the power $r$, hashes this and XORs with $\sigma$, creating $c_{ID}$. The triple $\left( rP, c_{ID}, c_m \right)$ is the ciphertext.

With the private key $d = sH\left(ID \right)$, one decrypts the triple $\left (U, V, W \right)$ as follows: compute $e \left( d, U \right)$, which, by bilinearity, will equal $e \left( H \left( ID \right), P_{pub} \right)^r $ if the ciphertext was genuine. So one XORs $V$ with the hash of the pairing to obtain $\sigma$. Then XORing $W$ with the hash of $\sigma$ will give $m$. To check that this is the intended message, one verifies that $\sigma$ and $m$ hashed together gives $r$ such that $U = rP$.
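
The structure of encryption and decryption can be sketched as follows (Python that runs as written, but every hash function and group operation is a caller-supplied helper with a hypothetical name; the target-group element returned by the pairing is assumed to support ** for exponentiation):

from os import urandom

def ibe_encrypt(pairing, H1, H2, H3, H4, smul, params, identity, m):
    """Schematic Boneh-Franklin encryption.

    Caller-supplied helpers:
      pairing(P, Q) -- the bilinear map e
      H1(id)        -- hashes an identity to a point of G_1
      H2(gt)        -- hashes a G_2 element to a bitstring as long as sigma
      H3(sigma, m)  -- hashes to a non-zero scalar r in Z_q
      H4(sigma)     -- hashes sigma to a bitstring of length |m|
      smul(k, P)    -- scalar multiplication in G_1
    params is (P, P_pub) with P_pub = s*P.
    """
    P, P_pub = params
    sigma = urandom(len(m))
    r = H3(sigma, m)
    g_id = pairing(H1(identity), P_pub) ** r
    c_id = bytes(a ^ b for a, b in zip(sigma, H2(g_id)))
    c_m  = bytes(a ^ b for a, b in zip(m, H4(sigma)))
    return (smul(r, P), c_id, c_m)

def ibe_decrypt(pairing, H2, H3, H4, smul, params, d_id, ct):
    """Schematic decryption with private key d_id = s*H1(identity)."""
    P, _ = params
    U, V, W = ct
    g = pairing(d_id, U)                            # equals e(H1(id), P_pub)^r by bilinearity
    sigma = bytes(a ^ b for a, b in zip(V, H2(g)))
    m     = bytes(a ^ b for a, b in zip(W, H4(sigma)))
    if smul(H3(sigma, m), P) != U:                  # consistency check rejects malformed ciphertexts
        raise ValueError("invalid ciphertext")
    return m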

Sunday, October 11, 2015

Study group: Provably weak instances of Ring-LWE

After Ryan's study group last week, it was my turn this morning to present my chosen paper, Provably Weak Instances of Ring-LWE.

The authors build on a previous attack on Poly-LWE (EHL), extending it to a larger class of number fields, and show how to apply it to construct an attack on Ring-LWE.

In a nutshell, Ring-LWE instances can be mapped into Poly-LWE ones, and if the distortion is not too large and there is an attack on the Poly-LWE instance, then this can be mapped back. The distortion is governed by the spectral norm of the mapping, which we define later on.

Let us begin by re-stating the Poly-LWE problem. From now on, we take $f(x)$ to be a monic irreducible polynomial in $\mathbb{Z}[x]$ of degree $n$ and $q$ to be a prime such that $f$ factors completely modulo $q$ and has no repeated roots. Let $P = \mathbb{Z}[x]/(f(x))$ and $P_q = P/qP = \mathbb{F}_q[x]/(f(x))$. We denote the uniform distribution by $\mathcal{U}$ and the discrete spherical (with respect to power basis) Gaussian of mean 0 and variance $\sigma^2$ by $\mathcal{G}_\sigma$, both on $P_q$.

Decision Poly-LWE Problem: Let $s(x) \in P_q$ be a secret, drawn from the uniform distribution. The Decision Poly-LWE problem is to distinguish, with non-negligible advantage, between the same number of independent samples drawn from one of two distributions on $P_q \times P_q$. The first consists of samples of the form $(a(x), b(x)=a(x)s(x)+e(x))$, where $e(x)$ has been drawn from $\mathcal{G}_{\sigma}$ and $a(x)$ from the uniform distribution. The second consists of uniformly random and independent samples from $P_q \times P_q$.

It is worth mentioning that this is slightly non-standard terminology. PLWE differs from RLWE in that RLWE samples the errors in the Minkowski embedding and then pulls these values back to obtain polynomials, whereas PLWE samples the polynomial coefficients directly. Now that we have established the problem, let's break it.

The attack on Poly-LWE is fairly straightforward. It proceeds as follows.

1. Map the problem down to $\mathbb{F}_q$ via a homomorphism $\phi$.
2. Loop through all possible guesses of the image of the secret $\phi(s(x))$.
3. Gather the values $\phi(e_i(x))$ under the assumption that the guess is correct.
4. Examine the resulting samples to try and determine whether they are images of a uniform distribution or a Gaussian one.

We recall that in a previous work, an attack was presented in the case when $f$ has a root $\alpha \equiv 1 \pmod q$ or has a root of small order modulo $q$.

We first map everything down to $\mathbb{F}_q$. Write $f(x) = \prod_{i=0}^{n-1} (x - \alpha_i)$, which we can do by assumption. We then use the Chinese Remainder Theorem to write

$P_q \cong \prod_{i=0}^{n-1}\mathbb{F}_q[x]/(x-\alpha_i) \cong \mathbb{F}_q^n$.

For each root $\alpha_i$ of $f$ (of which we recall there are $n$), there is a ring homomorphism

$\phi : P_q \rightarrow \mathbb{F}_q[x]/(x-\alpha_i) \cong \mathbb{F}_q$,

obtained by simply evaluating $g(x) \in P_q$ at the said root. We fix a root $\alpha = \alpha_i$. We then loop through all $g \in \mathbb{F}_q$, where each $g$ is taken as a guess for the image of the secret $s(\alpha)$. Assuming this is correct, we obtain

$e_i(\alpha) = b_i(\alpha) - a_i(\alpha)g = b_i(\alpha) - a_i(\alpha)s(\alpha)$.

If our samples were of LWE nature, then the collection $\{e_i(\alpha)\}_i$ will be distributed as the original errors. Thus this method relies on $\phi(\mathcal{U})$ and $\phi(\mathcal{G}_\sigma)$ being distinguishable, as well as $q$ being small enough to allow looping through $\mathbb{F}_q$. We present the first algorithm in the paper; the other, based on the size of the error values, is similar and can be found here. It runs as follows.

We assume that $f$ has a root $\alpha$ of small order $r$ modulo $q$, i.e. $\alpha^r \equiv 1 \mod q$. Then

 $e(\alpha) = \sum_i e_i\alpha^i = (e_0 + e_r + e_{2r} + \cdots) + \alpha(e_1 + e_{r+1} + \cdots) + \cdots + \alpha^{r-1}(e_{r-1} + \cdots)$.

If $r$ is small enough, then the above sum only takes on a small number of values modulo $q$, which allows us to efficiently decide whether a given value modulo $q$ belongs to that set of values. So let $S$ be the set of possible values of $e(\alpha)$ modulo $q$; notice it has size $(4\sigma n/r)^r$. The algorithm then runs as follows (a toy sketch in code is given after the listing).

  • Let $G$ be an empty list
  • For $g \in \mathbb{F}_q$ from 0 to $q-1$, for $(a(x), b(x))$ in the collection
    • if $b(\alpha)-ga(\alpha)$ does not equal an element of $S$ then break
    • otherwise append $g$ to $G$
  • If $G$ is empty, return NOT PLWE
  • If $G = {g}$, return $g$
  • If $|G| > 1$ return INSUFFICIENT SAMPLES
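To make the filtering loop concrete, here is a minimal Python sketch of the distinguisher, assuming toy parameters and that the set $S$ has already been computed; it illustrates the control flow only and is not the authors' implementation.

```python
def evaluate(poly, x, q):
    # Horner evaluation of a coefficient list (lowest degree first) at x, mod q.
    result = 0
    for c in reversed(poly):
        result = (result * x + c) % q
    return result

def plwe_distinguisher(samples, alpha, q, S):
    """samples: list of (a, b) pairs of coefficient lists in P_q;
    alpha: a root of f of small order mod q; S: possible values of e(alpha) mod q."""
    G = []
    for g in range(q):                                       # guess g for s(alpha)
        consistent = all(
            (evaluate(b, alpha, q) - g * evaluate(a, alpha, q)) % q in S
            for a, b in samples)
        if consistent:
            G.append(g)
    if not G:
        return "NOT PLWE"
    if len(G) == 1:
        return G[0]                                          # the recovered s(alpha)
    return "INSUFFICIENT SAMPLES"
```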

To finish this blog post, we explain how the authors move the attack from Poly-LWE to Ring-LWE. Suppose $K$ is a monogenic number field and $R$ its ring of integers, so that $R$ is isomorphic to the polynomial ring $P = \mathbb{Z}[x]/(f(x))$ for some monic irreducible $f$. Let $\Phi(R)$ be its canonical embedding into $\mathbb{R}^n$. Then there is a map, which we'll denote by $M_\alpha$, giving an isomorphism $M_\alpha : P \rightarrow \Phi(R)$. We define the spectral norm of this map as the radius of the smallest ball containing the image of the unit ball under $M_\alpha$. This norm governs the distortion of the error distribution when moving between $\Phi(R)$ and $P$: if it is not too large, the spherical Ring-LWE errors pull back to errors that are still small enough for the Poly-LWE attack to apply, in which case a solution to the Poly-LWE instance gives a solution to the original Ring-LWE problem. Loosely speaking, we require that the following conditions are satisfied.

1. $K$ is monogenic.
2. $f$ satisfies $f(1) \equiv 0 \mod q$ (or has a root of small order, as seen).
3. $\rho$ and $\sigma$ are sufficiently small.

If we denote the spectral norm by $\rho$, the main theorem in the paper tells us that if

$\rho < q/(4 \sqrt{2\pi}\sigma n)$,

then the (non-dual) decision Ring-LWE problem for these parameters can be solved in time $\widetilde{O}(lq)$ with probability $1-2^{-l}$, where $l$ is the number of samples.
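As a rough numerical illustration of what this condition asks of a concrete polynomial, the sketch below estimates $\rho$ as the largest singular value of the Vandermonde matrix of the complex roots of $f$ (a natural matrix for $M_\alpha$ in the power basis of a monogenic field) and then checks the inequality above. It is a sanity check under those assumptions, not the authors' code.

```python
import numpy as np

def spectral_norm(f_coeffs):
    """f_coeffs: coefficients of the monic polynomial f, lowest degree first."""
    roots = np.roots(f_coeffs[::-1])               # numpy expects highest degree first
    n = len(roots)
    V = np.vander(roots, N=n, increasing=True)     # row i = (1, theta_i, ..., theta_i^{n-1})
    return np.linalg.norm(V, ord=2)                # largest singular value

def condition_holds(f_coeffs, q, sigma):
    n = len(f_coeffs) - 1                          # degree of f
    return spectral_norm(f_coeffs) < q / (4 * np.sqrt(2 * np.pi) * sigma * n)
```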

Monday, October 5, 2015

Study Group: Attacking PKCS#11 Tokens

This year, the Bristol Cryptography Group is doing something slightly different with our weekly study groups. Each member of the group has been tasked with talking about "the best paper [they've] read this year".

This means that papers we're presenting won't all be particularly recent, but (hopefully) interesting and exciting bits of work.

I kicked things off last week by talking about Attacking and Fixing PKCS#11 Tokens [1], which is from CCS '10. I started my PhD just over a year ago and this was one of the first papers I read. It has stuck with me as a model of good scientific writing: the exposition is very clear, the content is cleanly presented and one doesn't need to have a huge amount of prerequisite knowledge in order to follow the methodology. More importantly, even a non-specialist reader can understand the main result of the paper and appreciate its significance.

In brief: the authors used a formal, symbolic model to analyse a number of commercial security devices implementing a standard key management API called PKCS#11. Of the 17 tokens tested, "9 were vulnerable to attack, while the other 8 had severely restricted functionality." As I said, one doesn't need to have a PhD in Cryptography in order to get the point: real security devices were seriously flawed. In fact, they allowed you to obtain secret keys in plaintext.

A security token is designed to provide access to sensitive material in insecure environments. Your company might want you to access their (sensitive) system from your (potentially insecure) house, so they give you a USB stick (or similar) to bridge the gap. The token will do cryptography for you, and your personal laptop, which could be infested with malware, won't be able to directly access the secret keys stored on the token. Instead, there's an API - an interface - between your machine and the token. Moreover, the token is (supposedly) tamper-resistant, so one cannot learn the key values by breaking open the hardware.

The most popular API for security tokens is PKCS#11, which is described in a vast document that was first published in the mid '90s and has been updated very little since then. The trouble is, while the document itself is public [2], the implementation of its contents within each security token is typically unknown to all but the token's manufacturer. So, while attacks on the standard had been known for quite some time prior to the publication of this paper [3, 4], it was unclear whether these attacks could actually be mounted on real devices. Perhaps the manufacturers had been smarter than those who had written the standard and had avoided the many pitfalls by adding extra safeguards?

Of course not. Compliance trumps security every time.

The authors of the paper built a tool that reverse-engineers a token to deduce its functionality, builds a formal model to represent this functionality, finds an attack in the model and then attempts to implement this attack on the real device. While the formal model is quite sophisticated, the attacks are not. Of the five presented in the paper, three are obvious non-compliance with the standard (such as the token just handing you a secret key in plaintext if you ask for it!) and the other two are versions of the famous Wrap/Decrypt attack first found in 2003 [3]. The only tokens that were not vulnerable to these basic attacks were those that disabled key wrapping (encrypting a key under another key) altogether, but this is an important feature of key management APIs.
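To make the Wrap/Decrypt attack concrete, here is a toy Python model of a token. It uses a deliberately simplified stand-in for the PKCS#11 interface (the real API works through C_WrapKey, C_Decrypt and object handles), but the logic is the same: a key that is allowed to both wrap and decrypt lets the host recover a 'sensitive' key in plaintext without ever touching the hardware.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

class ToyToken:
    def __init__(self):
        self.keys = {}                      # handle -> (key bytes, attributes)

    def generate_key(self, sensitive=True, wrap=False, decrypt=False):
        handle = len(self.keys)
        self.keys[handle] = (AESGCM.generate_key(bit_length=128),
                             {"sensitive": sensitive, "wrap": wrap, "decrypt": decrypt})
        return handle

    def wrap_key(self, wrapping_handle, target_handle):
        wk, attrs = self.keys[wrapping_handle]
        assert attrs["wrap"]                # wrapping is an allowed operation
        nonce = os.urandom(12)
        return nonce + AESGCM(wk).encrypt(nonce, self.keys[target_handle][0], None)

    def decrypt(self, handle, blob):
        k, attrs = self.keys[handle]
        assert attrs["decrypt"]             # decryption is an allowed operation
        return AESGCM(k).decrypt(blob[:12], blob[12:], None)

# The attack: a key with both 'wrap' and 'decrypt' set lets the host recover a
# sensitive key in the clear, without breaking the token's tamper resistance.
token = ToyToken()
k1 = token.generate_key(sensitive=True)                       # the victim key
k2 = token.generate_key(sensitive=False, wrap=True, decrypt=True)
leaked = token.decrypt(k2, token.wrap_key(k2, k1))            # == victim key bytes
assert leaked == token.keys[k1][0]
```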

It is, in fact, possible to do key wrapping in a secure way, but it requires great care. There has been some research in this direction since 2010, including my own ongoing work, but I don't have space to elaborate on this here.

For me, the take-home message from this paper is that, if you don't prove the security of your security device, it's probably insecure. This insecurity makes for fun academic papers, but is also rather worrying when your device is widely used in the real world.

[1] - http://dl.acm.org/citation.cfm?doid=1866307.1866337
[2] - http://docs.oasis-open.org/pkcs11/pkcs11-base/v2.40/pkcs11-base-v2.40.pdf
[3] - http://link.springer.com/chapter/10.1007%2F978-3-540-45238-6_32
[4] - http://www.lsv.ens-cachan.fr/Publis/PAPERS/PDF/DKS-tfit08.pdf

Wednesday, September 30, 2015

Challenges in Cloud Storage

This post will discuss parts of the talk by Angelo De Caro (IBM Zurich) on 'data storage-oriented cryptographic protocols,' given at the workshop on secure and trustworthy computing in Bucharest.

The cloud provides an attractive opportunity for users and enterprises (collectively 'clients') to outsource their storage. Many providers offer low-cost storage, with straightforward management and ubiquitous access (from multiple devices). However users inherently lose direct control, meaning they have no guarantees about privacy or the future availability of their files.

Deduplication is the process by which a provider saves itself storage space (and consequently money) by only storing one copy of each file, and using some method of tracking which users own that file. This concept is so desirable because there is a large amount of redundancy in many contexts, such as media (movies, music etc.), system files/software and email attachments. When storage providers want to deduplicate they have a number of choices:
- file-level or block-level (latter allows better dedup in systems where updates to large files are common)
- single-user/cross-user (latter is more desirable for providers)
- client-side/server-side

The final point introduces some interesting concerns. Server-side dedup means that the client needs to send the whole file to the server, meaning a high bandwidth cost. For client-side deduplication Alice takes a hash of her file $F$ and sends this value $H(F)$ to the server: if the server hasn't seen this value before it instructs Alice to upload the file, otherwise deduplication occurs and no upload is needed. This means that having $H(F)$ is enough to download the file (DropBox previously used a system where this was the case), so Alice can send all her friends $H(F)$ where $F$ is a movie, effectively turning the provider into a content distribution network (this is not a privacy problem as such, but the server does not want to play that role and pay for the bandwidth). This method also creates a covert channel that reveals which files are stored on the server: in the so-called 'salary attack' discussed by Pinkas et al., an adversary Eve knows what a company's payslips look like, so she can learn an employee's salary by creating a large number of candidate payslips and beginning the upload process for each one until the server informs her that it already has the file.
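A minimal Python sketch of this naive client-side scheme, using a hypothetical in-memory server interface, shows why $H(F)$ ends up behaving like a download token and why the existence check leaks which files the server already holds.

```python
import hashlib

class NaiveDedupServer:
    def __init__(self):
        self.store = {}                           # H(F) -> file bytes

    def have(self, digest):                       # the covert channel: answers leak
        return digest in self.store

    def upload(self, digest, data):
        assert hashlib.sha256(data).hexdigest() == digest
        self.store[digest] = data

    def download(self, digest):                   # knowing H(F) is enough to fetch F
        return self.store.get(digest)

def client_upload(server, data):
    digest = hashlib.sha256(data).hexdigest()
    if not server.have(digest):                   # only transfer the file if it is new
        server.upload(digest, data)
    return digest                                 # sharing this value shares the file

def salary_attack(server, candidate_payslips):
    # Eve probes the existence check with every candidate payslip until one is stored.
    return [p for p in candidate_payslips
            if server.have(hashlib.sha256(p).hexdigest())]
```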

Proofs of Ownership
These issues mean we'd rather use proofs of ownership (PoWs) - a way for Alice to convince the server that she owns the file - and this means we need to avoid short file identifiers. In 2011 Halevi et al. suggested the following framework for such a paradigm: in the preprocessing phase the server stores some short information per file (the file itself is located in some secondary storage), then in the proof phase (done only during file upload) there is some challenge/response mechanism. This procedure needs to be bandwidth-efficient, and computation (particularly for the client) should be efficient. The authors suggested a method using Merkle trees with special encodings, where the prover is 'challenged' on certain blocks in the hash tree. In the preprocessing phase the server is sent the file, computes a Merkle tree and stores the root. In the proof phase the server asks the client to present sibling paths for $t$ random leaves; the client computes them and the server verifies them against the stored root. This solution is bandwidth-efficient, and space-efficient on the server side, but the client has to do quite a lot of computation. A client that knows a large proportion of the file will likely be able to 'cheat', so we need a way to 'spread' the entropy across the file to prevent scenarios like the salary attack. Using an erasure code is a good way of doing this (cheating probability $2^{-t}$) but constructions are fairly computationally expensive. An alternative approach was taken by Di Pietro and Sorniotti (AsiaCCS 2012), which is considerably more efficient on the client side but worse on the server side, and the challenge values have to be recomputed when they are exhausted.
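A short Python sketch of the Merkle-tree idea, assuming the file splits into a power-of-two number of blocks and omitting the special encodings and erasure coding, shows what the two phases look like.

```python
import hashlib, os, random

H = lambda b: hashlib.sha256(b).digest()

def build_tree(blocks):
    level = [H(b) for b in blocks]
    tree = [level]
    while len(level) > 1:
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        tree.append(level)
    return tree                                   # tree[-1][0] is the root

def sibling_path(tree, index):
    path = []
    for level in tree[:-1]:
        path.append(level[index ^ 1])             # neighbour at this level
        index //= 2
    return path

def verify(root, block, index, path):
    node = H(block)
    for sibling in path:
        node = H(node + sibling) if index % 2 == 0 else H(sibling + node)
        index //= 2
    return node == root

# Preprocessing: the server computes the tree once and keeps only the root.
blocks = [os.urandom(64) for _ in range(8)]
tree = build_tree(blocks)
root = tree[-1][0]
# Proof phase: the server challenges t random leaves; the client answers with paths.
for i in random.sample(range(8), 3):              # t = 3 challenges
    assert verify(root, blocks[i], i, sibling_path(tree, i))
```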

Proofs of Retrievability
Alice outsources a file and wants to know that it is retrievable, meaning not only that the server still holds the file but also that the server hasn't modified it. There is a trivial solution: just download the file on a regular basis. A better approach is to use a keyed hash function and store $H(k,F)$: if Alice wants to verify, she sends $k$ to the server S and asks S to compute $H(k,F)$ so she can compare. This is storage-efficient for Alice, but S needs to read the entire file and Alice can only verify once. Even better is to use 'sentinels' (short, random strings): Alice embeds sentinels at random positions in $F$, encrypts block-wise, sends this file $F'$ to the server and keeps the sentinels. To verify, Alice just asks for sentinel $s_i$ and checks that it is correct. This protocol means Alice doesn't need to store all of $F$ and can detect a large erasure (which would remove some sentinels), but she has to store the sentinels and cannot detect a small erasure. One can improve this by computing $MAC(k,s_i)$ and appending these values to the file (the server doesn't know to which sentinels these MACs correspond), so Alice only needs to store $k$: she doesn't need to store any of $F$ and can detect a large erasure, but still can't detect a small one. We can solve the 'small erasure' problem by using an error-correcting code, although this makes the scheme more expensive.
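As a tiny illustration of the simplest of these checks, the keyed-hash audit, here is a Python sketch using HMAC; the one-shot limitation mentioned above is visible directly, since once $k$ is revealed the check cannot be reused.

```python
import hmac, hashlib, os

def tag(key, file_bytes):
    return hmac.new(key, file_bytes, hashlib.sha256).digest()

# Setup: Alice picks k, remembers H(k, F), and outsources F.
F = os.urandom(1 << 16)
k = os.urandom(32)
expected = tag(k, F)

# Audit: Alice reveals k; the server must recompute H(k, F) over the full file.
def server_respond(stored_file, revealed_key):
    return tag(revealed_key, stored_file)

assert hmac.compare_digest(server_respond(F, k), expected)
# Once k has been revealed, this particular check gives no further guarantees.
```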

Confidentiality
Both PoWs and PoRs are tangential to the goal of confidentiality of files against an untrusted server. If two clients upload the same file, encrypted under their own keys, then we'd expect the two ciphertexts to be distinct and (assuming a strong method of encryption) the server shouldn't be able to tell that they correspond to the same file. Douceur et al. gave an initial solution to this problem: hash the file and use the value $H(F)$ as the encryption key; this was generalised by Bellare et al. (Eurocrypt 2013). Since encryption is deterministic we can only expect some sort of security if files have high entropy, and indeed this approach allows offline brute-force attacks (Eve is sent a challenge ciphertext $C^*$, and if the message space is small Eve just computes the hash of each candidate file and creates ciphertexts until she finds a collision with $C^*$). The authors of the EC13 paper gave a solution to this problem: their system, called DupLESS, uses an independent key server (KS), and a user engages in an oblivious PRF with the KS to obtain an encryption key (which means the key server needs to enforce a per-client rate-limiting strategy to stop brute-force attacks). At CCS next month Liu, Asokan and Pinkas will present a paper that removes the need for the key server by distributing its role among the clients using PAKE (Asokan gave a talk on this paper earlier in the workshop).
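A minimal Python sketch of convergent encryption in this spirit (using AES-GCM with a key and nonce both derived from the file, purely for illustration) makes the determinism, and hence both the deduplication-friendliness and the brute-force weakness, explicit.

```python
import hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def convergent_encrypt(file_bytes):
    key = hashlib.sha256(file_bytes).digest()                 # K = H(F)
    nonce = hashlib.sha256(b"nonce" + file_bytes).digest()[:12]
    return AESGCM(key).encrypt(nonce, file_bytes, None), key, nonce

def convergent_decrypt(ciphertext, key, nonce):
    return AESGCM(key).decrypt(nonce, ciphertext, None)

# Two clients holding the same file produce the same ciphertext, so the server can
# deduplicate without learning the plaintext of high-entropy files; for low-entropy
# files, exactly this determinism enables the offline brute-force attack above.
c1, k1, n1 = convergent_encrypt(b"the same movie")
c2, _, _ = convergent_encrypt(b"the same movie")
assert c1 == c2
```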

This presentation complemented Asokan's talk, Florian Kerschbaum's talk about computing on encrypted data (slides here) and the talk given by Marc Lacoste from Orange Labs who discussed the goals of the Super Cloud project and the security challenges involved in the development of 5G communication standards and IoT.

Monday, September 28, 2015

52 Things: Number 49: Describe the basic ideas behind IPSec and TLS.

This is the latest in a series of blog posts to address the list of '52 Things Every PhD Student Should Know To Do Cryptography': a set of questions compiled to give PhD candidates a sense of what they should know by the end of their first year. This week we discuss the basic ideas behind IPSec and TLS.

Internet Protocol Security (IPsec) and Transport Layer Security (TLS) both aim to create a secure communication channel between two parties over an insecure network. In general, both use some mechanism to establish a private session key (either pre-shared or via a key negotiation protocol) and use symmetric key cryptography for the bulk of the communication. There are some further details with regards to authentication but I'll skip over that. Although these two ultimately have similar goals, they differ considerably in their implementation.

IPSec sits at the network layer of the OSI model and aims to provide integrity, authenticity and confidentiality between two end points. As it sits at the network layer, it blindly encrypts, MACs and packages up the data from the layers above before sending it down the line. This effectively creates a virtual network link between the two end-points without the need to ensure the end-point application has secured the data appropriately. This is often deployed for enterprise VPN solutions as it is a fast solution for remote access to an enterprise network. The downside, however, is that once a connection is up it is tricky to restrict which applications can use it.

TLS on the other hand establishes a secure connection at the application layer of the OSI model. We see TLS heavily used for securing web protocols such as HTTPS, STARTTLS etc. and as a consequence, each connection/application will negotiate/set up a secure connection independently. From a security perspective, this is quite attractive as a single compromised channel *should* have no bearing on the remaining channels. Whilst TLS can be viewed as a more flexible approach, it does incur some overhead over IPSec for a large number of connections between two nodes.
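As a small illustration of TLS living at the application layer, here is a sketch using Python's standard ssl module: the application itself opens and manages the secure channel, whereas under IPSec the same protection would happen transparently below the socket. The host name is just a placeholder.

```python
import socket, ssl

context = ssl.create_default_context()            # certificate verification on by default
with socket.create_connection(("example.org", 443)) as raw:
    with context.wrap_socket(raw, server_hostname="example.org") as tls:
        # The application drives the handshake and then speaks HTTP over the channel.
        tls.sendall(b"GET / HTTP/1.1\r\nHost: example.org\r\nConnection: close\r\n\r\n")
        print(tls.version(), tls.recv(200))
```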

It's easy to get into very fine details but I think that should cover the 'basic' ideas of the two.

Saturday, September 26, 2015

52 Things: Number 48: What is the purpose and use of a TPM?


This is the latest in a series of blog posts to address the list of '52 Things Every PhD Student Should Know To Do Cryptography': a set of questions compiled to give PhD candidates a sense of what they should know by the end of their first year.

Before examining the point of this question (namely what the purpose and use of a TPM is) it's worth trying to understand the problem a TPM is designed to overcome. The problem is really one of trust. Trusting what? Well, primarily the memory and software running on a computer. These things can be directly accessed by the operating system and so secret information (such as cryptographic keys) can be accessed by an attacker who has access to the machine at the operating system level. If these keys are being stored directly in memory and being accessed by software, it could be fairly trivial for an attacker to read off the memory location where the keys are being stored and then compromise security.

One way around this problem is to make sure that keys are never stored directly in the computer's memory, where they can be accessed by software. Given that the keys are required for secure applications, they must at some point be presented in a state that can be used by the software, so how could this be possible? One way is to have a separate piece of hardware that has a key burned into it and that is able to perform certain cryptographic operations with that key. The software can then employ this piece of hardware to do things such as wrapping keys that are to be stored in memory, without ever having access to the hardware's key directly.

This is essentially what a TPM does. A TPM has an RSA key pair called the Storage Root Key (SRK). The private part of this key is kept secret from everything and everyone. Using this private key, other keys (that software uses) can be wrapped (often called “binding”) using the SRK, protecting them from disclosure. In addition to simply wrapping keys, TPMs can also wrap keys and tie them to certain platform measurements. This type of key can only be unwrapped when those platform measurements have the same values that they had when the key was created; this process is known as “sealing.” TPMs can also be used for cryptographic key generation and other cryptographic tasks, one of which is known as remote attestation: this creates a hash-based summary of the hardware and software configuration, allowing a third party to verify that the software has not been changed.
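As a toy illustration of binding (and nothing more: this is not the real TPM command set), the following Python sketch uses an RSA key pair as a stand-in for the SRK and wraps a software key with RSA-OAEP, so that only the holder of the private part can recover it.

```python
import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

srk_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
srk_public = srk_private.public_key()              # the only part software ever sees

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

software_key = os.urandom(32)                      # e.g. an AES key used by an application
bound_blob = srk_public.encrypt(software_key, oaep)   # safe to keep in memory or on disk
recovered = srk_private.decrypt(bound_blob, oaep)     # only possible 'inside the TPM'
assert recovered == software_key
```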

The real point to understand here is that by pushing security down to the hardware level, and handing it to a separate piece of hardware that has its own firmware and circuits that can't be altered from the outside, the secret keys are no longer exposed to vulnerabilities in the software stack, which makes the system as a whole more trustworthy.

So what is the purpose of a TPM? To overcome the problem of trusting (or rather not trusting) software to be completely reliable.

What is the use of a TPM? We mentioned a number of them. First of all was binding, which essentially wraps a key using the private key of the SRK. The second was sealing, which also ties the wrapped key to particular platform measurements. And thirdly we looked at remote attestation, and noted that TPMs can also be used for other cryptographic functions such as key generation.


Distance bounding protocols and physical layer security

Yesterday at the Summer School on Secure and Trustworthy Computing Srdjan Capkun gave an interesting talk on physical layer security and distance bounding protocols. I'll try to give some insights of why I found this talk particularly interesting.

Bounding the distance between two devices is becoming an increasingly important problem. Think of your contactless card or your car key. Crypto alone cannot answer this question: it is a property of the underlying physical space, not of abstract data/identities. To solve it, distance bounding protocols use time-of-flight measurements (mixed with some more classical crypto). The intuition is that if you know that a signal can travel from A to B in time less than t, then A and B are at distance at most t*(speed of light). The talk wasn't so much about how to formally study such protocols (see [BCSS11] for an example of such an analysis), but about the physical properties of the underlying systems needed for these protocols to actually work as intended.

The main assumption is that an adversary cannot transmit information faster than the speed of light. This looks like quite a reasonable assumption, as the theory of relativity seems to be a hard problem to break [1]. However, as the talk made quite clear, the situation is more complex. Outputting a bit on a channel is not an instantaneous operation, and time matters a lot here. Light travels about 30cm in a nanosecond, and usually transmitting a bit takes microseconds or milliseconds... This can practically be used in attacks where the adversary guesses the bit you're trying to send based on only part of the transmission. As a consequence it becomes essential for the length of a bit in this setting to be as short as possible. The technical details were somewhat lost on me but, according to the speaker, it is possible to reduce this time to a few nanoseconds. This reduces the time-of-flight uncertainty to less than a metre (under the assumption that faster-than-light travel is not possible), which is pretty good.

The second unusual problem tackled by this talk is as follows: assume that the machine that wants to prove its proximity to you is adversarial; can we still bound the distance? Distance measurement between A and B is done by A sending some signal to B and B answering with a processed version of this signal; A then computes the distance as (total time - processing time)*c/2. An adversarial machine cannot cheat on the time-of-flight part, but it can cheat on the processing time. In the end, not only do we need short bits, we also need extremely short processing times. This means that for this approach to become practical, we need to build systems that receive, process and send a signal in the span of nanoseconds. The talk provided an example of such an extremely efficient, completely analog computing/transmitting node. Interestingly this also entails that the kind of computations you can do in that framework is quite limited, making the problem extra interesting.
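Plugging toy numbers into the formula above shows why nanoseconds matter: a prover that overstates its processing time by just 10 ns shortens the computed bound by about a metre and a half.

```python
C = 299_792_458.0                              # speed of light in m/s

def distance_bound(total_time_s, processing_time_s):
    # The round-trip bound from the text: (total time - processing time) * c / 2.
    return (total_time_s - processing_time_s) * C / 2

honest = distance_bound(100e-9, 20e-9)         # ~12 m
cheating = distance_bound(100e-9, 30e-9)       # prover overstates processing by 10 ns
print(honest, cheating, honest - cheating)     # the bound shifts by ~1.5 m
```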

The talk was concluded by a fun use case: secure positioning. In a world where drones and self-driving cars are getting more and more common, the ability for someone to make your system believe it is in the wrong location might well become a real problem. GPS positioning is far from secure (see https://en.wikipedia.org/wiki/Iran%E2%80%93U.S._RQ-170_incident), and there is little hope that any similar non-interactive system will ever yield the integrity guarantees we need. This nicely justifies the need for secure distance bounding protocols with the appropriate architecture.

All in all, after this talk my impression is that the interactions between the physical layer and the protocol layer might very well be the key for future developments of secure distance bounding.


[1] I guess that if you break it, distance bounding will anyway be the least of your problems

Thursday, September 24, 2015

Intel SGX: The Death of MPC?

An earlier version of this post contained errors about the signing of binaries. Thanks to Guillaume Scerri for pointing them out.

On the first day of the summer school on secure and trustworthy computing, considerable time was dedicated to speakers from Intel talking about security-related CPU features. SGX is an upcoming such feature. Even though the introduction seemed to be more directed towards programmers rather than security researchers or even cryptographers, I was left with the following insights.

The core idea of SGX is to obscure the memory of an application from the operating system. Motivated by possible flaws in the OS, the applications themselves are separated from the OS. This is achieved by encrypting the application's memory using a key generated by the CPU, which never leaves the CPU. It is unclear to me whether SGX will hide memory access patterns of the application; however, this probably can be achieved using oblivious RAM within the application. Furthermore, I/O will still be controlled by the OS, thus leaving the possibility of keylogging etc.

In order to convince a remote party that the binary is indeed running on SGX, processors will contain a private key to sign the binary being executed, with the corresponding public key being provided by the manufacturer. It is not clear whether the private key will be global or per CPU. A private key in the CPU would make it easy to also have the binaries encrypted, but it seems that this is not planned at the moment.

With SGX, Intel obviously targets cloud applications in an attempt to restrict the cloud provider's access to data in the cloud. Since there are no processors with SGX yet, it is hard to estimate the cost and the feasibility for SGX on mobile processors. However, if SGX eventually comes to mobile processors, rooting or jail-breaking becomes somewhat useless because the power of the OS will be restricted. This could be seen as another step of an evolution that went from proprietary software computing locally on personal data to personal data being held in the cloud. With SGX available on all processors, an app provider could exercise almost complete control over all cloud and mobile instances of the app, given the trustworthiness of the processor manufacturer. As a result, apps might effectively become black boxes with all interfaces defined by the app provider.

The second talk on SGX suggested its use as a replacement for multiparty computation as follows: The parties agree on a piece of software executing the computation. The software is then run using SGX and generates a public key, which the parties use to encrypt their inputs. After running the computation, the software only outputs the previously agreed results to the parties. One can argue that the security should be similar to the security of MPC because one has to trust the processor to execute the local computation correctly and without leakage to the adversary. Therefore, one might as well trust the processor with SGX to execute the software correctly while preserving confidentiality by encrypting the memory. However, this scheme introduces the CPU manufacturer as a third party possibly knowing the private key, which is used to confirm that the software runs on SGX. The manufacturer is inherently trusted not to collude with any party. More concisely, one might ask: You trust your CPU. Do you also trust the manufacturer?
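To illustrate the suggested flow (not Intel's API, and with all the actual SGX machinery of attestation and sealed memory abstracted away), here is a toy Python rendering in which a plain object stands in for the enclave: it generates a key pair, the parties encrypt their inputs to it, and only the agreed output comes back.

```python
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

class ToyEnclave:
    def __init__(self, computation):
        self._sk = rsa.generate_private_key(public_exponent=65537, key_size=2048)
        self.public_key = self._sk.public_key()    # published after 'attestation'
        self._computation = computation            # the software the parties agreed on

    def run(self, encrypted_inputs):
        inputs = [int.from_bytes(self._sk.decrypt(c, oaep), "big")
                  for c in encrypted_inputs]
        return self._computation(inputs)           # only the agreed result leaves

# The parties agree on the computation (here: who is richer), then submit inputs.
enclave = ToyEnclave(lambda xs: xs.index(max(xs)))
inputs = [enclave.public_key.encrypt(x.to_bytes(8, "big"), oaep)
          for x in (1_000_000, 2_500_000)]
print("richer party:", enclave.run(inputs))
```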

Tuesday, September 15, 2015

CHES 2015: Evaluation and Improvement of Generic-Emulating DPA Attacks

On Tuesday afternoon at the ‘Palais Grand Large’ (or, as my long-unrehearsed school-level French childishly insists on translating it, the ‘Big Fat Palace’), Weijia Wang presented ‘Evaluation and Improvement of Generic-Emulating DPA Attacks’. This paper (joint work with Yu Yu, Junrong Liu, Zheng Guo, François-Xavier Standaert, Dawu Gu, Sen Xu and Rong Fu) is of particular interest to me as it builds on work done in part at Bristol and presented at CT-RSA 2014 ([WOS2014]; ePrint).

‘Generic DPA’ attacks -- methods which succeed against an arbitrary physical implementation without the need to meaningfully characterise the power model -- have been hotly pursued in the side-channel literature. However, it has been shown ([WOS2014]) that all such proposals rely on noninjectivity of the target function (among other properties) to succeed. Attacks against injective targets necessarily fail unless provided with some compressing mapping corresponding meaningfully to something about the true leakage of the specific device. 

We introduced the notion of ‘generic-emulating DPA’ to enable key recovery attacks against injective functions such as the AES S-Box requiring only some minimal intuition about the form of the leakage. The instantiation we presented used stepwise linear regression to find the key hypothesis producing the most ‘simply explained’ model for the measured leakages (see [WOS2014] for details).

Wang et al. experiment further with our proof-of-concept example and show that the stepwise approach is unstable in small samples. They suggest two alternatives -- ridge-based and lasso-based regression -- which perform better in practice. Against low degree leakage functions, the best difference-of-means (DoM) attacks outperform all three tested generic-emulating methods. However, against complex leakages (for example, a polynomial in eight target bits comprised only of terms of degree 5 and above) their proposals demonstrate an advantage.
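As a hedged sketch of what a ridge-based generic-emulating distinguisher might look like (with a toy 4-bit S-box, simulated Hamming-weight leakage and illustrative parameters, none of which are taken from the paper), one can regress the traces on the bits of the hypothesised S-box output for each key guess and rank guesses by how well the regularised model explains the measurements.

```python
import numpy as np
from sklearn.linear_model import Ridge

SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]    # PRESENT S-box (4-bit), as a toy target

def bits(v, n=4):
    return [(v >> i) & 1 for i in range(n)]

def ridge_distinguisher(plaintexts, traces, alpha=1.0):
    scores = []
    for k in range(16):                             # all 4-bit key guesses
        X = np.array([bits(SBOX[p ^ k]) for p in plaintexts])
        model = Ridge(alpha=alpha).fit(X, traces)
        scores.append(model.score(X, traces))       # R^2 of the fitted leakage model
    return int(np.argmax(scores)), scores

# Simulated traces: Hamming-weight leakage of the S-box output plus Gaussian noise.
rng = np.random.default_rng(0)
key = 0xA
plaintexts = rng.integers(0, 16, size=2000)
traces = (np.array([bin(SBOX[p ^ key]).count("1") for p in plaintexts])
          + rng.normal(0, 1, 2000))
print(ridge_distinguisher(plaintexts, traces)[0])   # recovers 0xA with high probability
```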

They also incorporate cross-validation into their strategies, which is shown to further improve outcomes so that even against typical smartcard leakages the ridge-based and lasso-based approaches begin to rival the best DoM attacks. The question and answer session drew attention to the computational cost of such methods relative to (cheap) DoM, but the authors were keen to stress that this was not prohibitive. Whilst no thoroughly convincing application scenario has yet been discovered for generic-emulating DPA, it is interesting to see further progress from our basic suggestion into more practical territory.


[WOS2014]: The Myth of Generic DPA … and the Magic of Learning, Carolyn Whitnall, Elisabeth Oswald, François-Xavier Standaert, CT-RSA 2014, vol 8366 LNCS pp 183-205.

Secure Protocols in a Hostile World

It is Tuesday afternoon of CHES in Saint Malo and I will be blogging the only invited talk of the conference, entitled "Secure Protocols in a Hostile World", given by Matthew Green from Johns Hopkins University.

The talk begins with a slide entitled "Cryptography is a solved problem" containing lots of quotes such as "Crypto is the strongest link in the chain" and "Why would you focus on making the strongest part of a system stronger". It is explained that this view is wrong, and the remainder of the talk justifies why and argues that we should still care about creating new crypto. Crypto can be seen as being split into the following five parts:


  1. Algorithms
  2. Protocol Design
  3. Implementation
  4. Library API Design
  5. Deployment and Correct Usage


We as cryptographers spend most of our time working on the first point, and we are very good at it, but the further down the list you go the more unsolved the problems become. Taking implementation as an example, there are schemes (such as ElGamal) which are secure because they come with proofs of security, but implementing them is hard and, if not done carefully, leads to side channels which allow the scheme to be broken.

Why does it matter that we focus on the first point and leave the rest to others? Well, Heartbleed and FREAK are two reasons why this matters and why it is becoming more of an issue. In 2015 we seem very good at designing new strong crypto but have still not got the hang of deploying strong crypto out into the real world. We will now look at a case study.

SSL/TLS

SSL has been described as "The most important security protocol on the Internet"; however, recently it has had its own share of troubles, including Heartbleed, Lucky13, FREAK, CRIME, BEAST and RC4 (well, unlike the rest, RC4 isn't an attack, but it has caused its fair share of trouble recently).

So below I will give a few examples of issues with SSL/TLS:


  • CBC with MAC then Encrypt
  • Bad use of IVs
  • Compression
  • RC4 (I think my favourite quote of the conference so far has been "When RC4 is the solution, you need a better problem")


All of these problems are well known - I mean, MAC-then-Encrypt not (always) being secure is taught in undergraduate crypto courses, so what is it doing in TLS?! The answer is communication: the people doing the implementation are not cryptographers, so they don't always know about these things, and cryptographers tend to be looking for the 'next big thing' instead of working across all five of the points above. To further emphasize the need for communication: when FREAK was released, three TLS implementations had the exact same bug that allowed FREAK to happen. There is a clear disconnect here that needs to be resolved.

Another example was then given in the form of a downgrade attack from DHE to export-grade DHE_EXPORT, followed by solving the corresponding discrete logarithm problem. This is known as Logjam and will appear at CCS 2015, so I won't go into detail here, but it is certainly worth looking into (and was fixed in browsers only a couple of weeks ago).

Conclusion

I am going to conclude using the same bullet points that Matthew used on his own slides as I think this hits the nail on the head.

  • Cryptography is hard
  • Cryptographers fail to push best practices to engineers
  • Engineers fail to pull best practices from us or from the literature
  • Cryptography is becoming more complex
  • It has got to the point where this process can no longer be tolerated and something needs to be done about it