Tuesday, October 27, 2015

Sardinia eCrypt School, October 2015

As I stood in the sweltering heat, with burning light from above and the hustle and bustle of busybodies rushing all around me, I thought to myself "in about three hours I'll finally get through Stansted airport security".

One of the reasons I applied for a PhD at the University of Bristol was to travel; to travel all around the world attending conferences, networking with other academics in my field and collaborating on applicable pieces of work. However, I hadn't anticipated that I would arrive at Bristol and meet my supervisors (Elisabeth Oswald and Daniel Page) for them to say "thank goodness you're here, now pack your bags and head to Sardinia for a week".

I didn't argue.

The school I'm referring to is the "School on Design and Security of Cryptographic Algorithms and Devices", and this year it was co-sponsored by the International Association for Cryptologic Research (IACR). Roughly 80 academics and professionals from all around the globe attended, many of whom were new PhD students (like me), and a few of whom were seasoned crypto school veterans. As a newbie I experienced the excitement of meeting all these new faces and getting to know them over a few glasses of wine (the magnificent hotel resort was very generous with its distribution of wine, particularly to my table).

The Summer School took the form of a crash course in many different aspects of cryptography, focusing on covering a lot of ground with introductory lectures. For my first blog post I really wanted to focus on one particular talk, but after attending them all it became clear that I had quite a few favourites. For their content, Benedikt Gierlichs' "Introduction to Implementation Attacks" and Josep Balasch's "Introduction to Fault Attacks" were fascinating. With little background in practical attacks on embedded security, I was shocked to see just how easily one can manipulate theoretically secure devices, simply by making minor alterations to the embedded circuit (see the slides from Benedikt Gierlichs' "Introduction to Implementation Attacks" for more detail).

As for the presentation of the talks, a special mention goes to Gregor Leander. He was able to introduce lightweight block cipher design whilst holding the attention of his audience through quick wit; he told stories of multiple attempts to create nifty lightweight block ciphers, only for them to be proven weak and useless within months of publication. My favourite was the story of Keeloq - a proprietary hardware-dedicated block cipher built on a non-linear feedback shift register. It was sold to Microchip Technology Inc in 1995 for $10 million (wow), and it was (maybe still is) used in remote keyless entry systems by companies such as Chrysler, Daewoo, Fiat, GM, Honda, Toyota, Volvo, Volkswagen Group, Clifford, Shurlok and Jaguar. However, the Keeloq algorithm does not use cryptographic nonces (for simplicity), nor does it include time stamping (due to clock drift). This means, of course, that the algorithm is vulnerable to replay attacks: simply jam the channel while intercepting the code, and you obtain a code that may still be usable at a later stage. If your car still uses Keeloq, you should probably either invest in a steering wheel lock or paint your car bright pink.

To summarise, the organisers of the School did a fantastic job of bringing everyone together, the speakers were all engaging, and I learnt a great deal about cryptography whilst having the greatest week in Sardinia. I would strongly recommend this school to anyone who has just started a PhD in cryptography, regardless of their topic. I will most certainly be getting involved in as many of these schools as I can!

Finally, a big shout out to all the friends I made at the School - you are all brilliant and I can't wait for the next one. Also, sorry for my atrocious dancing throughout the week. Blame the bar for playing 'Barbie Girl'.

Thursday, October 22, 2015

Obfuscation: Hiding Secrets in Software

Last week the HEAT-hosted summer school on FHE and multi-linear maps took place. On the final day we learned about indistinguishability obfuscation from the eminent cryptographer Amit Sahai. The talk was entitled 'Obfuscation: Hiding Secrets in Software' and a summary is presented here.

Suppose you want to keep a secret, but some adversary has taken over your brain: whenever you think about the secret, the adversary is able to read it. While this Orwellian nightmare is not realisable for us (at the moment!), in software this happens all the time. This is the fundamental question of obfuscation; i.e., Can we keep a secret from some adversary if all of the functionality is known? Indistinguishability obfuscation is an attempt to realise this.

One early idea on the way to program obfuscation was secure multi-party computation (MPC). The rough idea of secure MPC is to allow the members of a party to jointly compute some function on everyone's inputs without anyone learning too much about the input of any other person. In such a situation, an adversary may be one member of the party, or may even act as multiple members in order to try to work out something about the input of one of the other party members. In this case, the adversary has a lot of information he can exploit; for true obfuscation, however, we want that nothing at all be revealed to an adversary.

Now we ask the important question: which programs allow for secrets to be kept? Suppose we have a program with some secret $s$ accepting an integer $x$ as input, such that it outputs 1 if $x<s$ and 0 otherwise. By a simple binary search, $s$ can be determined by the adversary. Security is only plausible when the secret cannot be extracted through queries like these, so if we are to succeed in obfuscation we need the program to behave as a 'virtual black box'.
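To see why such a comparison program cannot keep its secret, here is a minimal sketch in Python (the names are my own invention) of the binary-search extraction. Each query to the program halves the interval known to contain $s$:

    def make_program(s):
        # The "obfuscated" program: outputs 1 iff the input is below the secret.
        return lambda x: 1 if x < s else 0

    def extract_secret(prog, lo=0, hi=2**32):
        # Binary search for the smallest x with prog(x) == 0, which is s itself.
        while lo < hi:
            mid = (lo + hi) // 2
            if prog(mid):          # mid < s, so s lies strictly above mid
                lo = mid + 1
            else:                  # mid >= s
                hi = mid
        return lo

    secret = 123456789
    print(extract_secret(make_program(secret)))   # prints 123456789

After only about 32 queries the secret is recovered, even though the program was only ever used as a black box; for programs like this one, no obfuscator can help, because the secret is learnable from input/output behaviour alone.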

However, back in 2001, black box obfuscation was suggested and ruled out in the same paper: it cannot be achieved for all programs. The authors go on to describe and define indistinguishability obfuscation (iO) -- obfuscating two programs with the same functionality so the programs are computationally indistinguishable. Interestingly, if P=NP then any circuit can be obfuscated, and so iO cannot produce hardness itself.

This well-known paper gave the first construction of iO for NC$^{1}$ circuits. The authors describe a useful possible application: crippleware. Suppose you are a software vendor and have written a program with lots of functionality, and you want to distribute a trial version of the software which is somewhat restricted. In order to block trial users from using content for which they have not yet paid, one possibility is to go through the code and manually remove the blocks of code for which a full licence should be required. In a large program this is very cost- and time-inefficient. However, if instead you simply restrict the functionality of the user interface and release an obfuscated version of the program, this version will not have any functionality beyond that which was intended.

Very recently, there have been many attacks on multi-linear maps, many of which rely on obtaining top-level encodings of zero. Because in obfuscation the adversary never has access to encodings of zero, it is resistant to these attacks. Thus the hope remains that obfuscation will not be broken.

Wednesday, October 21, 2015

Using Linearly-Homomorphic Encryption to Evaluate Degree-2 Functions on Encrypted Data


This post is about a talk in the final session of CCS 2015, presented by Dario Fiore. The work is joint with Dario Catalano, and the full version of their paper can be found here.

Homomorphic Encryption (HE) provides a solution to the problem of outsourcing computations privately to a cloud. There are a number of Somewhat Homomorphic Encryption (SHE) schemes that support many additions but only a limited number of multiplications. Some examples of such schemes, in chronological order, are Goldwasser-Micali, El Gamal, Okamoto-Uchiyama, Paillier, Boneh-Goh-Nissim, and all the underlying SHE schemes of Fully Homomorphic Encryption (FHE) schemes since the construction of Gentry. Though SHE schemes can evaluate only a limited set of circuits compared to FHE schemes, it is still interesting to consider them for applications, mainly for efficiency reasons. Those SHE schemes that support only a single operation on ciphertexts - addition or multiplication - are called linearly homomorphic. That is, (additively) linearly-homomorphic encryption schemes can evaluate only linear functions on ciphertexts. (Here, by a "linear function" we mean that the function can be expressed as a polynomial of degree one in the input variables, with the scalars coming from a ring.)
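To make "linearly homomorphic" concrete, here is a toy Paillier implementation in Python (tiny, completely insecure parameters; real deployments use moduli of 2048+ bits). Multiplying ciphertexts adds the underlying plaintexts, and raising a ciphertext to a power scales the plaintext by a known constant, which is exactly enough to evaluate degree-one polynomials:

    import math, random

    p, q = 1789, 1931                    # toy primes, far too small for security
    n, n2 = p * q, (p * q) ** 2
    lam = math.lcm(p - 1, q - 1)
    g = n + 1                            # the standard choice of generator
    mu = pow((pow(g, lam, n2) - 1) // n, -1, n)

    def enc(m):
        r = random.randrange(1, n)
        while math.gcd(r, n) != 1:       # r must be a unit modulo n
            r = random.randrange(1, n)
        return (pow(g, m, n2) * pow(r, n, n2)) % n2

    def dec(c):
        return ((pow(c, lam, n2) - 1) // n * mu) % n

    c1, c2 = enc(20), enc(22)
    print(dec(c1 * c2 % n2))             # 42: multiplying ciphertexts adds plaintexts
    print(dec(pow(c1, 5, n2)))           # 100: exponentiation scales the plaintext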

The main problem addressed in the current work is to enable the evaluation of degree-two functions using linearly-homomorphic encryption schemes. To this end, the authors propose a generic transformation that extends an (additively-written) linearly-homomorphic scheme to support a single multiplication operation, so that it can evaluate a subset of functions of degree two. The only requirement on the underlying linearly-homomorphic scheme is that it be public space, which means that the message space must be a publicly known finite commutative ring from which it is possible to efficiently sample a random element. This property holds for nearly all of the known linearly-homomorphic encryption schemes, or they can easily be adapted to satisfy it.
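To give a flavour of how such a transformation can work, here is a sketch (my own simplification, reusing enc, dec, n and n2 from the toy Paillier code above; see the paper for the actual construction and proofs). A level-1 ciphertext splits a message $m$ into a public value $a = m - b$ plus an encryption of the random mask $b$; one multiplication is possible because three of the four cross terms can be assembled using only linear homomorphism, and the last is deferred to decryption:

    def enc1(m):
        # Level-1 ciphertext: (m - b) in the clear plus an encryption of the mask b.
        b = random.randrange(n)
        return ((m - b) % n, enc(b))

    def dec1(ct):
        a, beta = ct
        return (a + dec(beta)) % n

    def mul(ct1, ct2):
        # (a1 + b1)(a2 + b2) = a1*a2 + a1*b2 + a2*b1 + b1*b2.
        # The first three terms are computed homomorphically; b1*b2 is
        # handled at decryption time by keeping beta1 and beta2 around.
        a1, beta1 = ct1
        a2, beta2 = ct2
        alpha = (enc(a1 * a2 % n) * pow(beta2, a1, n2) * pow(beta1, a2, n2)) % n2
        return (alpha, beta1, beta2)

    def dec2(ct):
        alpha, beta1, beta2 = ct
        return (dec(alpha) + dec(beta1) * dec(beta2)) % n

    print(dec2(mul(enc1(6), enc1(7))))   # 42

Note that a level-2 ciphertext carries the pair $(\beta_1, \beta_2)$ for every product it contains, which is why compactness only holds when the number of products is bounded.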

Compactness is one of the three main requirements of a (fully) HE scheme; it says that the complexity of decryption must be independent of the complexity of the circuit evaluating the given function. The other two requirements are semantic security and circuit privacy for the underlying plaintext messages. The latter means that it must not be possible to learn anything about the messages in the ciphertexts input to a function. It is shown that the proposed transformation preserves semantic security, and that the transformed scheme is levelled circuit private if the underlying scheme is circuit private. The transformed scheme can compactly evaluate a subset $\mathcal{F}_2^*$ of degree-two functions (represented as polynomials), consisting of the polynomials $f$ of the form
\[
f(m_1,...,m_t) = P(m_1,...,m_t) + \sum_{i=1}^L Q_i(m_1,...,m_t) * R_i(m_1,...,m_t),
\]
where $P$, $Q_i$, $R_i$ are linear polynomials. It turns out that the subclass $\mathcal{F}_2^*$ of quadratic polynomials is relevant for applications; for example, the SPDZ protocol for multi-party computation requires an HE scheme for $\mathcal{F}_2^*$ with $L=1$. Though there are many prior works that aim to construct efficient SHE schemes, they do not achieve full compactness even for this subclass of quadratic functions. In the current work, the limitation for quadratic functions outside $\mathcal{F}_2^*$ is overcome by working in a different model that uses two servers which neither communicate nor collude. A similar transformation of the underlying HE scheme is given in this model, with which it is now possible to evaluate any quadratic polynomial compactly. Interestingly, the former (single-server) transformation preserves other properties such as zero-knowledge proofs of plaintext knowledge, threshold decryption, and multi-key homomorphicity. The above results are also generalised to obtain a $d$-homomorphic to $2d$-homomorphic transformation, and a two-server protocol to evaluate all degree-three polynomials from the BGN HE scheme.

The authors have compared implementations of the Paillier scheme and the Joye-Libert scheme, when transformed to support evaluation of degree-two functions, with the HElib implementation of the BGV FHE scheme instantiated with parameters supporting a single multiplication, and also with an implementation of the BGN SHE scheme. Up to values of $L = 10$, the transformed Joye-Libert scheme has comparable decryption time, but with ciphertexts 25 times shorter, and encryption and homomorphic operations at least 30 times faster.

Tuesday, October 20, 2015

52 Things, Number 50: What is the BLS pairing-based signature scheme?

This week we look at what the BLS pairing-based signature scheme is. See here for full details.

This signature scheme makes use of the Weil pairing on elliptic curves, essentially a bilinear form (with multiplicative notation) on points of order dividing $n$ on a curve, taking values in the $n^{th}$ roots of unity.

So assume you have an elliptic curve $E/\mathbb{F}_{3^l}$, following the notation in the original paper. The scheme is as follows:

KeyGen: Let $E/\mathbb{F}_{3^l}$ be an elliptic curve and $q$ be the largest prime factor of the order of the curve. Let $P$ be a point on it of order $q$ and $x \in \mathbb{Z}_q^*$ be selected at random. Finally let $R = xP$. Then output $(l, q, P, R)$ as the public key and $x$ as the secret key.

Sign: To sign a message $M \in \{0,1 \}^*$ we map $M$ to a point $P_M$ in $<P>$. This is done via a hash algorithm $h$ described in section 3.3 of the paper. Then let $S_M = xP_M$. The signature $\sigma$ is the $x$-coordinate of the point $S_M$, so $\sigma \in \mathbb{F}_{3^l}$.

Verify: Given a public key $(l, q, P, R)$, a message $M$ and a signature $\sigma$, do:
  • find a point $S$ on the curve of order $q$ whose $x$-coordinate is $\sigma$ and whose $y$-coordinate belongs to $\mathbb{F}_{3^l}$. If no such point exists, reject the signature as invalid.
  • set $u = e(P, \phi(S))$ and $v = e(R, \phi(h(M)))$, where $e$ is the Weil pairing on the curve, $\phi$ is an automorphism $E \rightarrow E$, and $h$ is the hash function mentioned above.
  • if either $u = v$ or $u^{-1} = v$ accept the signature as valid; reject otherwise.
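Implementing the curve over $\mathbb{F}_{3^l}$ and the Weil pairing is beyond a blog post, but the shape of the scheme can be checked with a toy bilinear map: take "points" to be integers modulo a prime $q$ and define $e(a,b) = g^{ab} \bmod p$ for a prime $p$ with $q \mid p-1$. This map is bilinear but offers no security whatsoever (recovering $x$ from $xP$ is just division modulo $q$), so the Python sketch below only illustrates why verification works:

    import hashlib, random

    q, p = 101, 607                       # toy group order q and field prime p, q | p-1
    g = pow(2, (p - 1) // q, p)           # element of order q in Z_p^*

    def pairing(a, b):                    # toy stand-in for e(., .)
        return pow(g, a * b % q, p)

    def hash_to_group(M):                 # toy stand-in for the hash h into <P>
        return int.from_bytes(hashlib.sha256(M).digest(), 'big') % q

    P = 1                                 # "generator" of the additive group Z_q
    x = random.randrange(1, q)            # secret key
    R = x * P % q                         # public key R = xP

    M = b"hello"
    sigma = x * hash_to_group(M) % q      # sign: S_M = x * h(M)
    # verify: e(P, S_M) = g^(x*h(M)) = e(R, h(M)), by bilinearity
    print(pairing(P, sigma) == pairing(R, hash_to_group(M)))   # True

In the real scheme only the $x$-coordinate of $S_M$ is transmitted, so $S$ is recovered only up to sign; that is exactly why the verifier accepts either $u = v$ or $u^{-1} = v$. The toy version above sidesteps this subtlety.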


Monday, October 19, 2015

52 Things: Number 52: Pick an advanced application concept such as e-Voting, Auctions or Multi-Party Computation. What are the rough security requirements of such a system?

This is the last of our 52 things which we think every first year PhD student in Cryptography should know. And as you may have gathered over the last 52 posts, we expect students to know all sorts of aspects ranging from theory to practice. But the key thing is that in cryptography you need to consider not only security against players who play by the rules, but also security against players who do not. Let's examine this in the context of voting, auctions and multi-party computation.

Let us first discuss what we mean by these three applications. In voting we have voters who select candidates according to some voting scheme (first-past-the-post, alternative vote, approval voting or whatever). The vote should remain secret; only valid voters can vote; only one vote per voter is allowed; votes must be valid (for a real candidate, for example); the final result must be correct; voters must not be able to be coerced: the list of security requirements is quite long.

For auctions we may want bids to be private; we may not trust the auctioneer; there may be multiple items, with multiple possible final prices; the selection of the winning bid/price will be determined by some algorithm; and the final output may need to be auditable.

For Multi-Party Computation (by which we mean the computation of a function on the private inputs of a set of parties) the security requirements are simpler, in that we want only the output of the function to be revealed, and nothing about the inputs (bar what can be computed from the output). However, whilst this is a simpler goal, the functionality is wider than for auctions and voting, in that we require that ANY function can be computed.

What makes these operations interesting is that we EXPECT the bad guy to be part of the protocol. Compare this with simple encryption, where a message is sent from Alice to Bob. In encryption we expect Alice and Bob to be trustworthy, and the bad guy is the person on the outside looking in. For voting, auctions and MPC we do not trust anybody: the bad guy could be a voter trying to cast multiple votes, a vote tallier trying to count the votes incorrectly, a bidder trying to make a winning bid which is not the highest, or an auctioneer trying to work out the value of a non-winning bid, etc.

The parties in the protocol need not even behave by the rules, i.e. follow the protocol. They could send messages which are produced incorrectly but which "look like" valid ones, and which later produce incorrect results for the protocol. We need to protect against this so-called "malicious" behaviour.

There might be a group of bad guys working together to defeat the system, we need to determine how big a coalition of bad guys we can tolerate in our protocol. In MPC there is a big difference between the case when we have an honest majority and a dishonest majority. For honest majority protocols we can ensure that the honest parties always end up with a valid function output. For dishonest majority protocols we cannot stop a dishonest party from terminating the protocol for everyone.

We also need to protect against the problem of who goes first or last, a property known in the literature as "fairness". For example, suppose we have an election with three voters: A, B and C. If the votes are encrypted, voter C can ensure that the candidate voted for by A wins (and hence find out who A voted for) by replicating A's vote. This should be protected against.

The adversary may have a set of parties who it controls at the start of the protocol, a so-called static adversary. Or perhaps as the protocol proceeds the adversary decides which parties it wants to corrupt, a so-called adaptive adversary.

As one can see, there is a multitude of security concerns one may have in such advanced protocols, and a corresponding multitude of security results. Each application domain gives rise to different security properties that one may require. Given the wide range of possible application protocols, this means a never-ending series of problems for cryptography to solve; and thus a never-ending supply of problems for cryptography PhD students to solve.

Thursday, October 15, 2015

Inference Attacks on Property Preserving Encrypted Databases

This post will focus on Inference Attacks on Property-Preserving Encrypted Databases by Naveed, Kamara and Wright, presented this week at CCS in Denver. As the name suggests, the idea of property-preserving encryption (PPE) is to encrypt data in such a way that some property, such as ordering (OPE), is preserved---i.e. if $a>b$ then $\mathsf{Enc}(a)>\mathsf{Enc}(b)$. Another example is deterministic encryption (DTE), which preserves equality, meaning that if $a=b$ then $\mathsf{Enc}(a)=\mathsf{Enc}(b)$. PPE is of course inherently less secure than semantically-secure encryption; however, it opens the door to a number of applications, in particular in the context of healthcare. As a straightforward example, suppose a large database holds patient records and each column is encrypted using some method of PPE, e.g. gender and disease using DTE, and age and admission date using OPE (fields such as patient identifiers may not require any functionality). If a medical professional wishes to analyse the data of all patients between the ages of 18 and 25 who have suffered from altitude sickness, then instead of downloading the entire database (which she would have to do if it were encrypted using standard, probabilistic encryption), she can calculate $\mathsf{Enc.OPE.age}(18)=c_1$, $\mathsf{Enc.OPE.age}(25)=c_2$ and $\mathsf{Enc.DTE.disease}(\textrm{altitude sickness})=c_3$ and only download the database entries that have an age ciphertext $c$ in the range $c_1 \leq c \leq c_2$ and disease ciphertext $c_3$.
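A toy version of this workflow in Python (with stand-in "encryption" that provides the stated properties and nothing like real security) shows how the server can answer the query on ciphertexts alone:

    import hashlib, random

    DTE_KEY = b"k1"
    def dte(value):
        # Deterministic "encryption": equal plaintexts give equal ciphertexts.
        return hashlib.sha256(DTE_KEY + value.encode()).hexdigest()[:16]

    # Toy OPE on ages 0..120: a secret, random, strictly increasing mapping.
    rng = random.Random(42)               # stands in for a secret key
    ope_table = sorted(rng.sample(range(10**6), 121))
    def ope(age):
        return ope_table[age]

    # The server stores only ciphertexts...
    db = [(ope(21), dte("altitude sickness")), (ope(34), dte("flu")),
          (ope(19), dte("altitude sickness")), (ope(50), dte("altitude sickness"))]

    # ...yet the analyst's range-plus-equality query runs directly on them.
    c1, c2, c3 = ope(18), ope(25), dte("altitude sickness")
    hits = [row for row in db if c1 <= row[0] <= c2 and row[1] == c3]
    print(len(hits))                      # 2: the patients aged 19 and 21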

This saves bandwidth, and consequently time and money, but we really don't want the database (or queries to it) to leak anything more than the properties claimed by the encryption mechanisms, namely equality for DTE, and order plus equality for OPE. Unfortunately, human characteristics and medical diagnoses have low entropy: if an adversary knows that a medical database holds records for patients from one geographic area, then the distributions of age and race are going to be very similar to those of the general population for that area, and, even worse, most of the data fields will be highly correlated with data from another nearby hospital. This opens the door to two attack vectors: inference attacks that use publicly-available data to gain information about queries (the focus of the paper), and de-anonymisation attacks (which can be built from inference attacks) that aim to identify entries of the EDB. Of course we can use semantically-secure encryption for sensitive attributes (requiring download of the whole column before analysis can occur) to try to thwart these attacks, but which attributes are sensitive? That question is beyond the scope of this post, but the answer is, in general: as many as possible. Note that attributes may be sensitive not only for patients but also for hospitals, which may want to keep secret how many patients die of certain diseases.

The authors show that it is possible to mount inference attacks on encrypted databases (built on tools such as CryptDB and Cipherbase) storing real patient data from 200 US hospitals, using only the ciphertexts and public auxiliary information. The public data is obtained from the Healthcare Cost and Utilization Project (HCUP) National Inpatient Sample (NIS) database. The talk provided a jarring 'context is everything' moment by including a slide that displayed only the words "We attack each hospital individually" in large font. This means that the authors look at each hospital database on an individual basis rather than grouping them, and use the auxiliary information to mount their attacks. A number of tools are used and can be found in the paper. The easiest to explain is frequency analysis on a column encrypted using DTE (the authors focus on mortality risk): sort the histogram, compare it to the auxiliary histogram, and map same-ranked elements to infer which entries correspond to which attribute. The authors' novel $l_p$ optimisation recovered mortality risk and patient death data for at least 99% of the 200 largest hospitals, and recovered disease severity for 100% of patients for at least 51% of those hospitals. Sorting attacks on data encrypted using OPE take columns that are 'dense' (meaning that all values have entries) and allow the attacker to decrypt all values with no knowledge of the key: the authors recovered the admission month and mortality risk of 100% of patients for at least 90% of the 200 largest hospitals. Another novel attack, called the cumulative attack, recovered disease severity, mortality risk, age, length of stay, admission month, and admission type of at least 80% of patients for at least 95% of the largest 200 hospitals. These results exploit only the databases themselves and not leakage from queries ('ciphertext only'), suggesting that they are a lower bound on the information that could be gleaned from these encrypted databases.
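The rank-matching version of the frequency analysis attack fits in a few lines of Python (the data below is made up for illustration; the paper's $l_p$ optimisation generalises this by minimising an $l_p$ distance between the two histograms rather than naively matching ranks):

    from collections import Counter

    def frequency_attack(ciphertexts, aux_plaintexts):
        # Sort both histograms by frequency and map same-ranked elements.
        ct_ranked = [c for c, _ in Counter(ciphertexts).most_common()]
        pt_ranked = [p for p, _ in Counter(aux_plaintexts).most_common()]
        return dict(zip(ct_ranked, pt_ranked))

    # A DTE-encrypted column from one hospital, plus auxiliary public data:
    column = ["3f"] * 70 + ["a1"] * 25 + ["0c"] * 5
    aux = ["minor"] * 68 + ["moderate"] * 27 + ["major"] * 5
    print(frequency_attack(column, aux))
    # {'3f': 'minor', 'a1': 'moderate', '0c': 'major'}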

Interestingly, the following talk in the session, given by Florian Kerschbaum, defined a new security model for order-preserving encryption, potentially giving a countermeasure against some of the attacks on OPE described above.

Face/Off: Preventing Privacy Leakage From Photos in Social Networks

One of the worst features of Facebook is when a friend uploads a photo of you looking awful and then tags you in it so all of your friends can have a good laugh. You can untag yourself and burn your ugly outfit but the photo is still there. Worse still, your friend might have terrible privacy settings allowing anyone to see the picture.

Wouldn’t it be nice if you had more control over the privacy settings for photos you’re in? At CCS today Panagiotis Ilia spoke about a tool to give you just that. With their system, when a user uploads a photo it gets run through facial recognition software to identify the people in it. The photo is then split into layers so that each person gets control over the layer containing their own face. Users then choose who can see their face in the photo, with people outside the privileged group seeing a blurred face instead.

Ilia and his team ran experiments, showing people blurred face photos of random friends and found that 82% of those polled could not say which of their friends featured in the photos. Of those who could, their friend’s identity was given away in most cases by the other people in the photo, followed by their body or hair, and finally by their clothing, for example their characteristic Hawaiian shirt and running shoes.

A member of the audience asked what would happen if you used your favourite photo editing software to manipulate a person's face so that the facial recognition software would be fooled but a human could still recognise the person. He was assured that should the facial recognition software fail to find a match for a person in the photo, that person's face would be blurred by default. This protects those who opt out of social networking from appearing in pictures. A potential issue here is that if I get a selfie with my favourite celebrity then their face may be blurred by the software, and no one will believe I really did meet Alan Hansen in a KFC.

If you make bad fashion choices, are regularly photographed in compromising positions, or just want to know more about the technical details you can read the paper here.

Tuesday, October 13, 2015

Network protocol obfuscation

CCS 2015 kicked off in Denver today, with three parallel sessions in play throughout the conference. Today I'm going to write a little bit about the first talk in the session on censorship and resistance, "Seeing through Network Protocol Obfuscation" by Liang Wang, Kevin P. Dyer, Aditya Akella, Thomas Ristenpart and Thomas Shrimpton, with the talk delivered by Liang.

The authors framed the adversary in the context of their work as a nation state armed with deep-packet inspection (DPI) capabilities, where the adversary wishes to block network access to certain sites or resources, and uses DPI to identify and nullify attempts to circumvent these blocks.

Anti-censorship attempts tend to follow one of a few strategies: if a particular circumvention protocol is blocked (Tor being the canonical example), then an anti-censorship mechanism will typically try either to randomise the bytes sent over the wire with the goal of looking like 'nothing', to 'look' like an unblocked protocol (e.g. make Tor look like HTTP), or to tunnel the blocked protocol over an unblocked protocol.

The adversary here is probabilistic; it must guess whether a particular connection is attempting circumvention or not. As a consequence, it's important to understand both the false positive (blocking a genuine connection) and false negative (not identifying a circumvention attempt) rates to be able to judge most accurately the effectiveness of the strategy followed by the adversary.

A valid and, I think, very important criticism of a proportion of research into classification of network trace information is that the data sets used to evaluate the success rates of adversarial strategies often aren't representative enough of the real-world environment: the laboratory data sets may be too small or sampled from a small subset of the 'population', and a classifier may be heavily over-fitted to the training data, so that performance results will not translate to a different data set (see, for example, Mike Perry's blog post and an associated paper). In this regard, this work appears to do a reasonable job; one data set contained packet-level information gathered from a variety of locations and over a variety of connection types on campus networks, together with researcher-generated obfuscated Tor connections from a reasonably wide range of environments.

The attack strategies used by the adversary are often very straightforward. The authors explored a wide range of circumvention strategies and consequent detection mechanisms that are better explained in the research paper than here, but as a motivating example, to detect usage of the format-transforming encryption (FTE) technique, which attempts to make Tor traffic mimic standard HTTP traffic, the adversary can achieve fairly low false positive rates (< 4 percent) by simply reassembling HTTP streams and checking the header information for correctness.
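To give the flavour of such a check (this is my own toy illustration in Python, not the authors' classifier), a censor that reassembles a TCP stream can ask whether it actually parses as HTTP, rather than merely looking statistically like it:

    import re

    # Does the reassembled stream start with a well-formed HTTP request line
    # and carry a Host header? Traffic that only mimics HTTP superficially
    # tends to fail semantic checks of this kind.
    REQUEST_LINE = re.compile(rb"^(GET|POST|HEAD|PUT|DELETE|OPTIONS) \S+ HTTP/1\.[01]\r\n")

    def looks_like_real_http(stream):
        if not REQUEST_LINE.match(stream):
            return False
        head = stream.split(b"\r\n\r\n", 1)[0]
        return b"\r\nHost: " in head

    print(looks_like_real_http(b"GET /a.html HTTP/1.1\r\nHost: example.com\r\n\r\n"))  # True
    print(looks_like_real_http(b"\x8f\x02 pseudo-random obfuscated output"))           # False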

Across the board the authors find that they can construct attacks with false positive rates as low as 0.03 percent. The precise level at which the FPR becomes unacceptable for an organisation inspecting extremely large amounts of traffic (such as the Great Firewall of China) isn't clear, but it seems that anti-circumvention technologies face a tricky task in remaining useful over time. There's also an interesting comparison here with data exfiltration techniques---some anti-censorship mechanisms are viable as data exfiltration tools for extracting sensitive data unnoticed from a network.

Particularly when machine-learning techniques are involved, the future may yield a scenario in which the obfuscation tool has to adapt faster than the detection mechanism used to 'survive' (and vice versa).

Monday, October 12, 2015

52 Things: Number 51: What is the security model for ID-based encryption, and describe one IBE scheme.

This is the latest in a series of blog posts to address the list of '52 Things Every PhD Student Should Know To Do Cryptography': a set of questions compiled to give PhD candidates a sense of what they should know by the end of their first year. This week we introduce Identity-Based Encryption.

In public key cryptography, if Alice wants to send a message to Bob, she needs his public key. Typically this will be some very long bitstring (encoding, for example, the product of two large primes).

Suppose instead that Alice could use Bob's name, or email address, as his public key. Practically speaking, this is very convenient: there are no very long strings for Alice to obtain and remember and she doesn't need to verify that some seemingly random string is in fact Bob's public key (and not, for example, Charlie's public key). But to facilitate this, one needs Identity Based Encryption, or IBE.

In IBE, there is an entity called the Private Key Generator, or PKG. The PKG is able to compute Bob's private key using his ID (e.g. his email address) and a master key. Once Bob has authenticated himself to the PKG, he can ask for his private key and then, once he has it, he can decrypt any messages that have been encrypted under his ID.

There is an issue here, though. With the master key, the PKG can generate private keys for any agent it likes. Therefore the PKG can decrypt any messages intended for any agent. This is called key escrow, and it means that you must either trust the PKG not to read your encrypted messages or else not care that it does. In a company, though, senior management often has the right to read your (work) emails, and so IBE can be an appropriate solution for internal correspondence.

Formally, an IBE scheme consists of four algorithms: setup, extract, encrypt and decrypt.

Setup takes a security parameter and outputs the (secret) master key and the (public) system parameters, such as the message and ciphertext spaces.

Extract takes an ID and a master key and returns a private key corresponding to the ID.

Encrypt takes a message and an ID, and returns a ciphertext.

Decrypt takes a ciphertext and a private key, and returns a message.

Boneh and Franklin gave an IBE scheme in this 2003 paper. They prove, under the Bilinear Diffie-Hellman assumption (an assumption similar to the hardness of the CDH problem), that their scheme is IND-ID-CCA secure in the Random Oracle Model. This means that (assuming all hash functions are random oracles) any attacker, running in polynomial time with respect to the security parameter, wins the following security game with probability that is only negligibly (with respect to the security parameter) more than 1/2:

First, the attacker
  • can request the private keys corresponding to any ID
  • can request decryptions of any ciphertexts under any ID.
Then, the attacker chooses two messages $m_0, m_1$ and an ID $ID^*$ that does not occur in the list of IDs for which he has requested the corresponding private key. The attacker then receives the encryption $c^*$ of $m_b$ under $ID^*$ (where the bit $b$ is chosen uniformly at random). Then the attacker
  • can request the private keys corresponding to any ID apart from $ID^*$
  • can request decryptions of any ciphertexts under any ID, apart from the challenge pair $ \left ( c^*, ID^* \right) $,
and outputs a bit $b'$. We say the attacker wins if $b' = b$.

The scheme given by Boneh and Franklin relies on a non-degenerate bilinear map $e: G_1 \times G_1 \to G_2$, where $G_1$ is a group of prime order $q$, which we write additively, and $G_2$ is a group, also of order $q$, which we write multiplicatively. They instantiate this map with the Weil pairing on elliptic curves, but we omit details here. All that matters is bilinearity: $e \left( aP, bQ \right ) = e \left( P, Q \right)^{ab}$ for any $a, b \in \mathbb{Z}_q$.

There's not enough space here to describe the scheme in full, but essentially the master key is some non-zero $s \in \mathbb{Z}_q$ and the private key corresponding to $ID$ is $sH \left(ID \right)$, where $H$ is a hash function sending bitstrings to elements of $G_1$. There are public generators $P$ and $P_{pub} = sP$ of $G_1$.

To encrypt $m$ under $ID$, one selects a random string $\sigma$ and XORs $m$ with a hash of $\sigma$, creating $c_m$. Then $m$ and $\sigma$ are hashed together, giving a non-zero element $r \in \mathbb{Z}_q$. Finally one computes the pairing $e \left( H \left( ID \right), P_{pub} \right) $, raises it to the power $r$, hashes this and XORs with $\sigma$, creating $c_{ID}$. The triple $\left( rP, c_{ID}, c_m \right)$ is the ciphertext.

With the private key $d = sH\left(ID \right)$, one decrypts the triple $\left (U, V, W \right)$ as follows: compute $e \left( d, U \right)$, which, by bilinearity, will equal $e \left( H \left( ID \right), P_{pub} \right)^r $ if the ciphertext was genuine. So one XORs $V$ with the hash of the pairing to obtain $\sigma$. Then XORing $W$ with the hash of $\sigma$ will give $m$. To check that this is the intended message, one verifies that $\sigma$ and $m$ hashed together gives $r$ such that $U = rP$.
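As with any pairing-based scheme, a faithful implementation needs real elliptic-curve pairings, but the logic can be checked with a toy bilinear map: let "points" be integers modulo $q$ and $e(a,b) = g^{ab} \bmod p$ for a prime $p$ with $q \mid p-1$. The Python sketch below (insecure by construction, with the hash functions improvised from SHA-256) just verifies that the equations above hang together:

    import hashlib, os, random

    q, p = 101, 607                        # toy parameters, q | p - 1
    g = pow(2, (p - 1) // q, p)            # element of order q in Z_p^*
    def e(a, b): return pow(g, a * b % q, p)

    def Hbytes(tag, *parts):               # stand-in for the hashes to bitstrings
        return hashlib.sha256(b"|".join([tag.encode(), *parts])).digest()[:16]
    def Hint(tag, *parts):                 # stand-in for the hashes into Z_q^*
        return 1 + int.from_bytes(Hbytes(tag, *parts), "big") % (q - 1)
    def xor(a, b): return bytes(x ^ y for x, y in zip(a, b))

    P = 1                                  # "generator" of the additive group Z_q
    s = random.randrange(1, q)             # master key
    P_pub = s * P % q
    ID = b"bob@example.com"
    d = s * Hint("H1", ID) % q             # extract: the private key for ID

    m, sigma = os.urandom(16), os.urandom(16)
    c_m = xor(m, Hbytes("H4", sigma))      # XOR m with a hash of sigma
    r = Hint("H3", sigma, m)               # hash m and sigma together to get r
    masked = pow(e(Hint("H1", ID), P_pub), r, p)
    c_ID = xor(sigma, Hbytes("H2", str(masked).encode()))
    U, V, W = r * P % q, c_ID, c_m         # the ciphertext triple

    # Decryption: e(d, U) = e(H1(ID), P_pub)^r, by bilinearity.
    sigma2 = xor(V, Hbytes("H2", str(e(d, U)).encode()))
    m2 = xor(W, Hbytes("H4", sigma2))
    assert m2 == m and U == Hint("H3", sigma2, m2) * P % q   # accept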

Sunday, October 11, 2015

Study group: Provably weak instances of Ring-LWE

After Ryan's study group last week, it was my turn this morning to present my chosen paper, Provably Weak Instances of Ring-LWE.

The authors build on a previous attack on Poly-LWE (EHL), extending it to a larger class of number fields, and show how to use it to construct an attack on Ring-LWE.

In a nutshell, Ring-LWE instances can be mapped into Poly-LWE ones, and if the distortion is not too large and there is an attack on the Poly-LWE instance, then this can be mapped back. The distortion is governed by the spectral norm of the mapping, which we define later on.

Let us begin by re-stating the Poly-LWE problem. From now on, we take $f(x)$ to be a monic irreducible polynomial in $\mathbb{Z}[x]$ of degree $n$ and $q$ to be a prime such that $f$ factors completely modulo $q$ and has no repeated roots. Let $P = \mathbb{Z}[x]/(f(x))$ and $P_q = P/qP = \mathbb{F}_q[x]/(f(x))$. We denote the uniform distribution by $\mathcal{U}$ and the discrete spherical (with respect to power basis) Gaussian of mean 0 and variance $\sigma^2$ by $\mathcal{G}_\sigma$, both on $P_q$.

Decision Poly-LWE Problem: Let $s(x) \in P_q$ be a secret, drawn from the uniform distribution. The Decision Poly-LWE problem is to distinguish, with non-negligible advantage, given some number of independent samples, between two distributions on $P_q \times P_q$. The first consists of samples of the form $(a(x), b(x)=a(x)s(x)+e(x))$, where $e(x)$ is drawn from $\mathcal{G}_{\sigma}$ and $a(x)$ from the uniform distribution. The second consists of uniformly random and independent samples from $P_q \times P_q$.

It is worth mentioning that this is slightly non-standard terminology. PLWE differs from RLWE in that RLWE samples the errors in the Minkowski embedding and then pulls these values back to obtain polynomials, whereas PLWE samples the polynomial coefficients directly. Now that we have established the problem, let's break it.

The attack on Poly-LWE is fairly straightforward. It proceeds as such.

1. Map the problem down to $\mathbb{F}_q$ via a homomorphism $\phi$.
2. Loop through all possible guesses of the image of the secret $\phi(s(x))$.
3. Gather the values $\phi(e_i(x))$ under the assumption that the guess is correct.
4. Examine the resulting samples to try and determine whether they are images of a uniform distribution or a Gaussian one.

We recall that in a previous work, an attack was presented in the case when $f$ has a root $\alpha \equiv 1 \pmod q$ or has a root of small order modulo $q$.

We first map everything down to $\mathbb{F}_q$. Write $f(x) = \prod_{i=0}^{n-1} (x - \alpha_i)$, which we can do by assumption. We then use the Chinese Remainder Theorem to write

$P_q \cong \prod_{i=0}^{n-1}\mathbb{F}_q[x]/(x-\alpha_i) \cong \mathbb{F}^n_q$.

For each root $\alpha_i$ of $f$ (of which we recall there are $n$), there is a ring homomorphism

$\phi : P_q \rightarrow \mathbb{F}_q[x]/(x-\alpha_i) \cong \mathbb{F}_q$,

obtained by simply evaluating $g(x) \in P_q$ at the said root. We fix a root $\alpha = \alpha_i$. We then loop through all $g \in \mathbb{F}_q$, where each $g$ is taken as a guess for the image of the secret $s(\alpha)$. Assuming this is correct, we obtain

$e_i(\alpha) = b_i(\alpha) - a_i(\alpha)g = b_i(\alpha) - a_i(\alpha)s(\alpha)$.

If our samples are of LWE nature, then the collection $\{e_i(\alpha)\}_i$ will be distributed as the original errors. Thus this method relies on $\phi(\mathcal{U})$ and $\phi(\mathcal{G}_\sigma)$ being distinguishable, as well as on $q$ being small enough to allow looping through $\mathbb{F}_q$. We present the first algorithm in the paper; the other, based on the size of the error values, is similar and can be found here. It runs as follows.

We assume that $f$ has a root $\alpha$ of small order $r$ modulo $q$, i.e. $\alpha^r \equiv 1 \mod q$. Then

$e(\alpha) = \sum_i e_i\alpha^i = (e_0 + e_r + e_{2r} + \cdots) + \alpha(e_1 + e_{r+1} + \cdots) + \cdots + \alpha^{r-1}(e_{r-1} + \cdots)$.

If $r$ is small enough, then the above sum only takes on a small number of values modulo $q$, which allows us to efficiently test whether a given value modulo $q$ belongs to that set of values. So let $S$ be the set of possible values of $e(\alpha)$ modulo $q$; note that it has size $(4\sigma n/r)^r$.

  • Let $G$ be an empty list.
  • For $g \in \mathbb{F}_q$ from $0$ to $q-1$, for each $(a(x), b(x))$ in the collection:
    • if $b(\alpha)-ga(\alpha)$ does not equal an element of $S$, then break;
    • otherwise append $g$ to $G$.
  • If $G$ is empty, return NOT PLWE.
  • If $G = \{g\}$, return $g$.
  • If $|G| > 1$, return INSUFFICIENT SAMPLES.
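Here is a toy Python sketch of the simplest case, $\alpha = 1$ (so $r = 1$ and $S$ is just a narrow band around $0$); the parameters are my own and far from cryptographic size. Evaluating at $1$ turns each sample $(a, b = as + e)$ into $(a(1), b(1) = a(1)s(1) + e(1))$ in $\mathbb{F}_q$, where $e(1)$ is a sum of $n$ small coefficients:

    import random

    q, n, B = 4099, 64, 1                 # modulus, degree, per-coefficient bound
    bound = B * n                         # |e(1)| <= B*n, well below q/2
    s1 = random.randrange(q)              # the image s(1) of the secret

    def sample():
        a1 = random.randrange(q)
        e1 = sum(random.randint(-B, B) for _ in range(n))
        return a1, (a1 * s1 + e1) % q

    samples = [sample() for _ in range(20)]

    def in_band(x):                       # membership test for the set S
        return x <= bound or x >= q - bound

    G = [g for g in range(q)
         if all(in_band((b1 - g * a1) % q) for a1, b1 in samples)]
    print(s1 in G, len(G))                # True 1 (with overwhelming probability)

For a wrong guess $g$, each value $b(1) - g\,a(1)$ is essentially uniform modulo $q$, so it survives all twenty membership tests only with negligible probability; the true $s(1)$ always survives.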

To finish this blog post, we explain how the authors move the attack from Poly-LWE to Ring-LWE. Suppose $K$ is a monogenic number field and $R$ its ring of integers, so that $R$ is isomorphic to the polynomial ring $P = \mathbb{Z}[x]/(f(x))$ for some monic irreducible $f$. Let $\Phi(R)$ be its canonical embedding into $\mathbb{R}^n$. Then there is a map, which we'll denote by $M_\alpha$, giving an isomorphism $M_\alpha : P \rightarrow \Phi(R)$. We define the spectral norm of this map as the radius of the smallest ball containing the image of the unit ball under $M_\alpha$. This norm governs the distortion of the error distribution on $\Phi(R)$. If it is not too large, then a solution to the Poly-LWE problem with the new spherical Gaussian distribution may be possible, in which case it gives a solution to the original Ring-LWE problem. Loosely speaking, we require that the following conditions are satisfied.

1. $K$ is monogenic.
2. $f$ satisfies $f(1) \equiv 0 \mod q$ (or has a root of small order, as seen).
3. $\rho$ and $\sigma$ are sufficiently small.

If we denote the spectral norm by $\rho$, the main theorem in the paper tells us that if the spectral norm satisfies

$\rho < q/(4 \sqrt{2\pi}\sigma n)$,

then the (non-dual) decision Ring-LWE problem can be solved (for these parameters) in time $\widetilde{O}(lq)$ with probability $1-2^{-l}$, where $l$ is the number of samples.

Monday, October 5, 2015

Study Group: Attacking PKCS#11 Tokens

This year, the Bristol Cryptography Group is doing something slightly different with our weekly study groups. Each member of the group has been tasked with talking about "the best paper [they've] read this year".

This means that papers we're presenting won't all be particularly recent, but (hopefully) interesting and exciting bits of work.

I kicked things off last week by talking about Attacking and Fixing PKCS#11 Tokens [1], which is from CCS '10. I started my PhD just over a year ago and this was one of the first papers I read. It has stuck with me as a model of good scientific writing: the exposition is very clear, the content is cleanly presented and one doesn't need to have a huge amount of prerequisite knowledge in order to follow the methodology. More importantly, even a non-specialist reader can understand the main result of the paper and appreciate its significance.

In brief: the authors used a formal, symbolic model to analyse a number of commercial security devices implementing a standard key management API called PKCS#11. Of the 17 tokens tested, "9 were vulnerable to attack, while the other 8 had severely restricted functionality." As I said, one doesn't need to have a PhD in Cryptography in order to get the point: real security devices were seriously flawed. In fact, they allowed you to obtain secret keys in plaintext.

A security token is designed to provide access to sensitive material in insecure environments. Your company might want you to access their (sensitive) system from your (potentially insecure) house, so they give you a USB stick (or similar) to bridge the gap. The token will do cryptography for you and your personal laptop, which could be infested with malware, won't be able to directly access the secret keys stored on the token. Instead, there's an API - an interface - between your machine and the token. Moreover, the token is (supposedly) tamper-resistant, so one cannot learn the key values by breaking open the hardware.

The most popular API for security tokens is PKCS#11, which is described in a vast document that was first published in the mid '90s and has been updated very little since then. The trouble is, while the document itself is public [2], the implementation of its contents within each security token is typically unknown to all but the token's manufacturer. So, while attacks on the standard had been known for quite some time prior to the publication of this paper [3, 4], it was unclear whether these attacks could actually be mounted on real devices. Perhaps the manufacturers had been smarter than those who had written the standard and had avoided the many pitfalls by adding extra safeguards?

Of course not. Compliance trumps security every time.

The authors of the paper built a tool that reverse-engineers a token to deduce its functionality, builds a formal model to represent this functionality, finds an attack in the model and then attempts to implement this attack on the real device. While the formal model is quite sophisticated, the attacks are not. Of the five presented in the paper, three exploit obvious non-compliance with the standard (such as the token just handing you a secret key in plaintext if you ask for it!) and the other two are versions of the famous Wrap/Decrypt attack first found in 2003 [3]. The only tokens that were not vulnerable to these basic attacks were those that disabled key wrapping (encrypting a key under another key) altogether, but this is an important feature of key management APIs.
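The Wrap/Decrypt attack is easiest to see in code. Below is a toy model in Python of a PKCS#11-style token (my own mock-up with a throwaway XOR cipher, not the real API): keys live inside the token and the host only holds handles, yet if one key carries both the wrap and decrypt attributes, the host can extract any other key in plaintext:

    import os

    class ToyToken:
        def __init__(self):
            self._keys = {}               # handle -> (key bytes, attributes)
            self._next = 1

        def generate_key(self, wrap=False, decrypt=False):
            h = self._next; self._next += 1
            self._keys[h] = (os.urandom(16), {"wrap": wrap, "decrypt": decrypt})
            return h                      # the host gets a handle, never the key

        def _cipher(self, k, data):       # throwaway XOR cipher, for the logic only
            return bytes(a ^ b for a, b in zip(data, k * (len(data) // 16 + 1)))

        def wrap(self, wrapping, target):
            assert self._keys[wrapping][1]["wrap"]
            return self._cipher(self._keys[wrapping][0], self._keys[target][0])

        def decrypt(self, handle, ciphertext):
            assert self._keys[handle][1]["decrypt"]
            return self._cipher(self._keys[handle][0], ciphertext)

    token = ToyToken()
    k1 = token.generate_key(wrap=True, decrypt=True)  # the fatal attribute combination
    k2 = token.generate_key()                         # a sensitive key
    blob = token.wrap(k1, k2)             # step 1: wrap k2 under k1 (legal)
    stolen = token.decrypt(k1, blob)      # step 2: decrypt the blob with k1 (also legal)
    print(stolen == token._keys[k2][0])   # True: k2 recovered in plaintext

Each call is individually permitted by the attribute policy; it is the combination that is fatal, which is exactly the kind of property the authors' formal model is built to find.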

It is, in fact, possible to do key wrapping in a secure way, but it requires great care. There has been some research in this direction since 2010, including my own ongoing work, but I don't have space to elaborate on this here.

For me, the take-home message from this paper is that, if you don't prove the security of your security device, it's probably insecure. This insecurity makes for fun academic papers, but is also rather worrying when your device is widely used in the real world.

[1] - http://dl.acm.org/citation.cfm?doid=1866307.1866337
[2] - http://docs.oasis-open.org/pkcs11/pkcs11-base/v2.40/pkcs11-base-v2.40.pdf
[3] - http://link.springer.com/chapter/10.1007%2F978-3-540-45238-6_32
[4] - http://www.lsv.ens-cachan.fr/Publis/PAPERS/PDF/DKS-tfit08.pdf