Tuesday, May 31, 2016

Workshop on the Theory and Practice of Secure Multi-Party Computation 2016: Stable Matching

Suppose we have two groups of people, $A$ and $B$, and each person in group $A$ lists each person in group $B$ in order of preference as a potential partner, and vice versa. Is there a way of ‘optimally’ pairing each person in $A$ with a person in $B$? This was the focus of abhi shelat’s talk at the Workshop on the Theory and Practice of Secure Multi-Party Computation 2016 at the University of Aarhus, Denmark.

The problem is known as the Stable Marriage Problem and has a wide range of applications. Perhaps surprisingly, it can be shown that optimal solutions always exist. In fact, David Gale and Lloyd Shapley came up with an algorithm which constructs such a matching (an achievement that contributed in part to Shapley’s share of the Nobel Prize in Economics in 2012).

There are certain cases where it would be useful for the preferences made by each party to be kept secret. The application to the world of secure MPC then becomes clear. We were provided with two motivating examples.
  • Matching prospective medical residents with hospitals.
  • Matching women to sorority houses.

In these two cases, the data should be kept private. The latter example is based on a real-world instance of the problem: to avoid the awkward social situations that arise when sorority houses receive women they had not ranked highly, one university exported all of the preference lists to an impartial third party in Texas, who sorted through the data and made the assignment obliviously.

To run the Gale-Shapley algorithm, we need arrays of total size $O(n^2)$ holding all of the preferences (where $n$ is the number of participants), plus an array holding the current matching; the main loop also takes $O(n^2)$ iterations. As such, using garbled circuits turns out to be quite inefficient. Oblivious RAM (ORAM: you can think of this as something that sits between the CPU and physical RAM and hides the precise data access pattern) enables a straightforward implementation, but again not an efficient one. In an attempt to improve on this, abhi showed how to speed the algorithm up by a factor of between 40 and 100.
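To fix ideas, here is a minimal plaintext (entirely non-oblivious) sketch of the Gale-Shapley algorithm; the names and structure are mine, not from the talk. Note the queue of unmatched proposers, which foreshadows the optimisation described below.

```python
from collections import deque

def gale_shapley(pref_a, pref_b):
    # pref_a[i]: i's ranked list of B-indices (most preferred first).
    # pref_b[j]: j's ranked list of A-indices.
    n = len(pref_a)
    # rank_b[j][i] = position of i in j's list (lower is better).
    rank_b = [[0] * n for _ in range(n)]
    for j in range(n):
        for r, i in enumerate(pref_b[j]):
            rank_b[j][i] = r
    next_prop = [0] * n        # next position in pref_a[i] to propose to
    match_of_b = [None] * n    # current partner of each member of B
    free = deque(range(n))     # queue of currently unmatched members of A
    while free:
        a = free.popleft()
        b = pref_a[a][next_prop[a]]
        next_prop[a] += 1
        if match_of_b[b] is None:
            match_of_b[b] = a                  # b was free: accept
        elif rank_b[b][a] < rank_b[b][match_of_b[b]]:
            free.append(match_of_b[b])         # b trades up; old partner freed
            match_of_b[b] = a
        else:
            free.append(a)                     # b rejects a; a proposes again
    return match_of_b

print(gale_shapley([[0, 1], [0, 1]], [[1, 0], [0, 1]]))  # e.g. [1, 0]
```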

First, we remove the need for ORAM arrays, which are used for: (1) checking the preferences, and (2) finding the next unmatched element. Analysing the algorithm shows that (2) can be done with a simple queue, which saves a lot in the implementation. The main optimisation, however, comes from a cheap way of performing (1), which in the original algorithm requires $O(n^2)$ space.

The contribution of the presented work consists of three particular insights leading to significant reductions in complexity of the algorithms:
  1. Operations on the matrix of preferences are only ever ‘read-only’. The ORAM data structure reserves extra space to support rewriting, which is not needed here! Noticing this means we can asymptotically reduce the cost of searching through these preferences.
  2. The preferences are only ever read in a specific order. Noticing this allows us to avoid the expensive recursive lookups into the smaller ORAMs containing the preferences. This observation was made by Craig Gentry for more general data structures.
  3. Asymptotically better initialisation of the data can be achieved, through the use of what they define as oblivious multi-lists: roughly, (obliviously) sorted lists of the preferences of each pair in $A \times B$. This multi-list allows a much faster matching to be made, at a cost of $\Theta(n^2 \log^2 n)$ for the $n$ sorts on $n$ elements each, plus $\Theta(n^2 \log n)$ for the random permutation on $n^2$ elements.

These optimisations mean that the sorority-house example given above, with 600 participants, takes less than two minutes, which is at least 40 times faster than a straightforward ORAM-based implementation.


In practice, the sets $A$ and $B$ will often not be the same size. This generalised problem was studied by Roth and Peranson; it somewhat complicates things but can still be dealt with. It was suggested that this could be an interesting avenue for further research.

Sunday, May 29, 2016

37th IEEE Symposium on Security and Privacy

The 37th IEEE Symposium on Security and Privacy took place last week in San Jose, California. Although there wasn't an awful lot to see in San Jose itself (direct quote from the Uber driver who took me to the hotel), the conference was full of people from all different backgrounds and interests, which made networking very interesting indeed.

This was the first big conference I had attended, so I wasn't quite sure what to expect. I definitely didn't expect six hundred people to turn up, which was quite overwhelming at first. It also meant the biscuits at break time went very quickly. However, the atmosphere was very enjoyable and I had a fantastic time listening to all the presentations and speeches from various members of the IEEE community.

The papers covered many different aspects of Security and Privacy, from a paper on the Formal Treatment and Implications for TLS 1.3 [1] to a Survey of Techniques against Telephone Spam [2]. After the three days of the main conference, it split into six workshops. I mostly attended MoST (the Mobile Security Technologies workshop) but also went to a couple of talks in the BioSTAR stream (the Workshop on Bio-inspired Security, Trust, Assurance and Resilience).

One of the most enjoyable talks of the conference was on a paper titled "Users Really Do Plug In USB Drives They Find" [3], which follows academics from the University of Illinois littering USB drives around the university campus, bearing labels like 'Top Secret Information' or even 'All Exam Answers' (the experiment took place just before finals). Each drive contained a variety of 'inconspicuous' documents, all HTML files in disguise that linked to a survey, logging the time and filename to a server whenever a user opened one of the files. Of the 297 USB drives dropped, 290 were picked up (or rather, were no longer in their original position), and on 135 of them someone opened a file. This is a big security risk: the drives could have carried malicious files that infect the host machine when plugged in and run.

The most interesting talk for me was "Dedup Est Machina: Memory Deduplication as an Advanced Exploitation Vector" [4]. Memory deduplication is a technique (on by default in Windows 8.1 and 10) that 'maps multiple identical copies of a physical page onto a single shared copy with copy-on-write semantics'. However, since writing to a shared page takes longer than writing to a normal page, this is a target for timing side-channel attacks. The paper not only exploits this, but goes on to develop a JavaScript-based attack against the Microsoft Edge browser that achieves arbitrary memory read and write access in the browser, using a reliable Rowhammer exploit.

Overall, the conference was a rewarding and worthwhile experience, and I recommend it to anyone interested in the fields of Security and Privacy, or anyone who wants some free t-shirts from the various sponsors.



[1] http://www.ieee-security.org/TC/SP2016/papers/0824a470.pdf
[2] http://www.ieee-security.org/TC/SP2016/papers/0824a320.pdf
[3] http://www.ieee-security.org/TC/SP2016/papers/0824a306.pdf
[4] http://www.ieee-security.org/TC/SP2016/papers/0824a987.pdf

Monday, May 16, 2016

Securing Cryptography Implementations in Embedded Systems

In the afternoon of the second day of EUROCRYPT 2016, Emmanuel Prouff gave a tutorial on how to securely implement cryptographic schemes on embedded devices. The main focus of the tutorial was on the security definitions of side-channel countermeasures, their construction and analyses.

A side-channel observation tends to leak information about the operation being performed at a given point in time and about the data under manipulation. To secure an implementation against side-channel leakage, we need effective and efficient countermeasures that can be formally analysed in reasonably realistic models of leakage. The need for formal security analysis hardly requires emphasis, given that the side-channel research community has witnessed several ad hoc analyses being invalidated. Practical security evaluation of countermeasures that come with sound theoretical analysis is also necessary, most importantly to validate the practical relevance of the formal leakage model adopted.

Side-channel observations are inherently noisy, and the efficiency of a side-channel attack depends on the amount of noise in an observation. So a natural way to secure implementations against such attacks is to increase the noise in the observations. One method to achieve this is to secret-share sensitive intermediate variables in such a way that probing a few intermediate values reveals no information about the secret parameters. For example, a secret key $x$ could be shared as $x = x_0 \oplus \ldots \oplus x_d$. This method belongs to a popular class of countermeasures known as masking; countermeasures of this type are algorithmic in nature, in contrast to physical countermeasures such as hiding. One reason for the popularity of masking is its amenability to formal security analysis. Though the "crypto theory" community has shown considerable interest in providing effective defences against side-channel attacks, for instance through the research on leakage-resilient cryptography [MR04, DP08], a practical solution currently seems out of reach due to its lack of efficiency. A possible reason for this is that the formal leakage model often used is too strong compared to that observed in practice.
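As a concrete illustration (a minimal sketch of my own, not code from the tutorial), here is Boolean masking of a single bit into $d+1$ shares; the same idea applies bitwise to bytes or words.

```python
import secrets
from functools import reduce

def share(x, d):
    # Split the sensitive bit x into d+1 shares with
    # x = x_0 xor x_1 xor ... xor x_d, where x_1..x_d are uniform.
    rand = [secrets.randbits(1) for _ in range(d)]
    x0 = reduce(lambda a, b: a ^ b, rand, x)
    return [x0] + rand

def unshare(shares):
    # Recombination: the xor of all shares recovers x.
    return reduce(lambda a, b: a ^ b, shares)

assert unshare(share(1, 3)) == 1
```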

The first use of the idea of secret sharing as a side-channel countermeasure dates back to the works [GP99] and [CJRR99]. The practical effectiveness of masking can be deduced from a result in [CJRR99]: the number of side-channel traces necessary to distinguish the underlying unmasked bit increases exponentially with the number of shares of that bit. To formally analyse the security of masking schemes, the most popular leakage model is the probing model of [ISW03], in which an adversary is allowed to obtain the leakage corresponding to a fixed number of wires of a Boolean circuit. A more realistic leakage model, the information-bounded model, was proposed in [RP13]. These two models were unified in [DDF14].

Two main issues need to be addressed when designing a masking scheme: (1) how to share the sensitive data, and (2) how to securely process the shared data. For the former, one most often adopts a linear secret sharing scheme, in particular Boolean masking. This method is closely related to the problem of constructing error-correcting codes with large dual distance. Alternatives to linear secret sharing schemes include multiplicative masking, affine masking, modular additive masking, homographic masking and inner product masking.

The latter problem of securely processing shared data is closely related to the problem of secure multi-party computation. In the context of Boolean masking, a well-known method to compute in the presence of shares is that of [ISW03]. Note that for circuits over the finite field of two elements $\mathbb{F}_2$ (i.e., Boolean circuits), processing an $\mathbb{F}_2$-addition gate (i.e., an XOR gate) is straightforward, as we just add up the corresponding shares of the inputs to produce the shares of the output of the addition gate. The main difficulty is processing an $\mathbb{F}_2$-multiplication gate (i.e., an AND gate). The method proposed in [ISW03] to securely perform multiplication in the presence of shares requires a quadratic amount (in the number of shares) of time and randomness, and hence is much more expensive than an addition.

The [ISW03] method was generalised to $\mathbb{F}_{2^n}$-arithmetic circuits in [CGPQR12] for improved efficiency in software implementations. In [CGPQR12], the function computed by a given (non-randomised) arithmetic circuit over $\mathbb{F}_{2^n}$ is represented by a univariate polynomial over $\mathbb{F}_{2^n}$, which is then evaluated by a sequence of the following operations: polynomial addition, scalar multiplication, polynomial squaring, and multiplication of two distinct polynomials. The first three are $\mathbb{F}_{2}$-linear operations, and hence relatively cheap to process; only the multiplication of distinct polynomials is non-linear and relatively expensive. Currently, the heuristic method of [CRV14] is the most efficient polynomial evaluation method over binary finite fields that seeks to minimise the total count of non-linear multiplications. Recently, [CPRR15] proposed an algebraic decomposition method in which the target function to be securely evaluated is represented as a composition of quadratic or higher algebraic-degree functions, which in turn can be securely implemented more efficiently than by previously known techniques.
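For concreteness, here is a sketch of the [ISW03] multiplication gadget on Boolean-shared bits (my own rendering, which omits all implementation-security details such as evaluation order); note the quadratic number of fresh random bits.

```python
import secrets

def isw_and(a, b):
    # Inputs: a, b are lists of n shares of bits x and y (xor of shares).
    # Output: n shares of x AND y.  Uses n(n-1)/2 fresh random bits.
    n = len(a)
    r = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            r[i][j] = secrets.randbits(1)
            r[j][i] = r[i][j] ^ (a[i] & b[j]) ^ (a[j] & b[i])
    c = [a[i] & b[i] for i in range(n)]
    for i in range(n):
        for j in range(n):
            if j != i:
                c[i] ^= r[i][j]
    return c

# Quick correctness check, reusing share/unshare from the sketch above:
# unshare(isw_and(share(1, 2), share(1, 2))) == 1
```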
While masking schemes proven secure in the probing model are effective against differential power analysis-like attacks, they are vulnerable to attacks that exploit the presence of glitches. The security requirements of glitch-resistant side-channel countermeasures are more demanding than those of masking schemes and, as a consequence, such countermeasures are less efficient in practice. Threshold implementations are a well-known class of glitch-resistant countermeasures.

The speaker concluded the tutorial by emphasising the need for algorithmic side-channel countermeasures that come with formal security analysis, for formal leakage models that suit the physical reality of devices while enabling relatively simple security proofs, and for efficient countermeasures.

References:

[CGPQR12] Claude Carlet, Louis Goubin, Emmanuel Prouff, Michaël Quisquater, Matthieu Rivain: Higher-Order Masking Schemes for S-Boxes. FSE 2012. 

[CJRR99] Suresh Chari, Charanjit S. Jutla, Josyula R. Rao, Pankaj Rohatgi: Towards Sound Approaches to Counteract Power-Analysis Attacks. CRYPTO 1999.

[CPRR15] Claude Carlet, Emmanuel Prouff, Matthieu Rivain, Thomas Roche: Algebraic Decomposition for Probing Security. CRYPTO 2015.

[CRV14] Jean-Sébastien Coron, Arnab Roy, Srinivas Vivek: Fast Evaluation of Polynomials over Binary Finite Fields and Application to Side-Channel Countermeasures. CHES 2014.

[DDF14] Alexandre Duc, Stefan Dziembowski, Sebastian Faust: Unifying Leakage Models: From Probing Attacks to Noisy Leakage. EUROCRYPT 2014.

[DP08] Stefan Dziembowski, Krzysztof Pietrzak: Leakage-Resilient Cryptography. FOCS 2008.

[GP99] Louis Goubin, Jacques Patarin: DES and Differential Power Analysis (The "Duplication" Method). CHES 1999.

[ISW03] Yuval Ishai, Amit Sahai, David Wagner: Private Circuits: Securing Hardware against Probing Attacks. CRYPTO 2003.

[MR04] Silvio Micali, Leonid Reyzin: Physically Observable Cryptography (Extended Abstract). TCC 2004.

[RP13] Emmanuel Prouff, Matthieu Rivain: Masking against Side-Channel Attacks: A Formal Security Proof. EUROCRYPT 2013.

Eurocrypt 2016: All Complete Functionalities are Reversible

In the multi-party computation (MPC) track at Eurocrypt 2016 in Vienna, Austria, Dakshita Khurana rather heroically gave two consecutive, very well-presented talks, the second of which explained that All Complete Functionalities are Reversible. That second talk is the focus of this blog post.

In the context of MPC, a functionality is an algorithm which performs some operation on two or more parties' inputs while guaranteeing certain requisite security properties. The functionalities covered by this paper are secure function evaluations (SFEs), in which the parties want to compute some function of their combined inputs in a secure manner.

In the case of functionalities with two parties, $P_A$ and $P_B$, we usually consider one party as the sender and the other as the receiver. For decades, an open question in the field of MPC was: given a two-party functionality $\mathcal{F}$ in which $P_A$ acts as the sender and $P_B$ as the receiver, is it possible to turn this into a secure scheme in which $P_A$ instead acts as the receiver and $P_B$ as the sender? This reversed functionality is denoted by $\textsf{reverse}(\mathcal{F})$. Intuitively, we think of a functionality allowing this reversal as containing 'enough symmetry' to perform the functionality of $\mathcal{F}$ regardless of which party performs which role.

It was shown by Wolf and Wullschleger that oblivious transfer (OT) can be reversed. OT is a process by which $P_A$ receives exactly one of (at least) two values private to $P_B$, without $P_B$ learning which one $P_A$ chose.
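To make the reversal concrete, here is a sketch (my own, in the spirit of Wolf and Wullschleger's XOR trick for bit-OT; the helper names are hypothetical) of how one call to an OT with $P_A$ as sender yields an OT with $P_B$ as sender:

```python
import secrets

def reversed_ot(b0, b1, c, underlying_ot):
    # Goal: B is now the sender of bits (b0, b1); A is the receiver with
    # choice bit c.  `underlying_ot(m0, m1, choice)` is a bit-OT in which
    # A is the sender.  (Both parties' steps are interleaved for brevity.)
    r = secrets.randbits(1)                # A: random mask
    # A plays sender with messages (r, r ^ c); B plays receiver with
    # choice bit b0 ^ b1, learning y = r ^ (c AND (b0 ^ b1)).
    y = underlying_ot(r, r ^ c, b0 ^ b1)
    z = y ^ b0                             # B: send z to A
    return z ^ r                           # A: recovers b_c

ideal_ot = lambda m0, m1, ch: m1 if ch else m0   # toy OT for testing
for b0 in (0, 1):
    for b1 in (0, 1):
        for c in (0, 1):
            assert reversed_ot(b0, b1, c, ideal_ot) == (b1 if c else b0)
```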

A functionality $\mathcal{F}$ is called complete if it can be used to realise the OT functionality. In light of Wolf and Wullschleger's result, a natural question to ask is: exactly which complete 2PC functionalities can be reversed? This new result shows that they all can be.

The main technical challenge that the authors overcome is that of creating commitments in both directions; this is essentially what allows the symmetry argument to go through and yields the result.

As one would hope for in papers on cryptography, this result has practical as well as theoretical appeal: if one party has a physical constraint that forces it into a particular role in the functionality (e.g. it is a computationally 'weak' device), it is important to know that secure computation remains possible even if the roles are reversed. In this way, this paper serves as a good example of how cryptography theory and practice can and should go hand in hand. Like a couple dancing a Viennese waltz?

Thursday, May 12, 2016

Eurocrypt 2016: Message Authentication Codes

Of the many sessions of Eurocrypt 2016, one of the most interesting was on Message Authentication Codes (MACs). Chaired by Peter Gazi, the session was somewhat different from many provable-security sessions in that it focused on better understanding the security of two schemes already in existence (and use). In each case, the MAC in question is constructed through a mode of operation on top of an ideal primitive. This seems to be part of a trend at the moment: a number of recent papers on modes of operation have been written to tighten the security bounds of known schemes [1].

First up, Stefano Tessaro presented joint work with Mihir Bellare and Dan Bernstein [2] on the security of the MAC function used in the Ed25519 deterministic signature scheme, which they term AMAC (for Augmented MAC). AMAC can be thought of as a simplified version of HMAC with an MD-based hash function, in which the outer keying is replaced with a public output transformation.
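In pseudocode, AMAC looks roughly like the following sketch (my own simplification; the hash, key length and truncation-style output transform are illustrative assumptions, not the exact Ed25519 parameters):

```python
import hashlib

def amac(key, msg, out=lambda d: d[:16]):
    # Keyed MD-style hash followed by a *public* output transformation
    # Out (here, hypothetically, truncation to 128 bits).  Compare HMAC,
    # which instead applies a second, keyed, outer hash call.
    return out(hashlib.sha512(key + msg).digest())

tag = amac(b"\x00" * 32, b"attack at dawn")
```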
The work first generalises AMAC to describe an "Augmented Cascade" by allowing any output transformation. This surfaces a new characterisation of the compression function: PRF-under-Out-leakage, where Out is the output transformation in question. That is, the primitive should remain secure even when the adversary is given the evaluation of Out on its inputs (this basically corresponds to length-extension attacks). Directly targeting the multi-user security of the overall scheme (rather than relying on a hybrid argument), they provide a reduction from the multi-user augmented cascade to the multi-user PRF-under-Out-leakage security of the compression function.
The work then demonstrates that (in the ideal model) this bound is small, and in doing so deduces the security of the cascade, which directly implies the security of AMAC. It was observed that this bound is better than the one obtained by simply calculating single-user security and applying a hybrid argument.


The second talk of the session was by Atul Luykx, who spoke about PMAC (Parallelizable MAC) and its security bounds [3], based on work with Bart Preneel, Alan Szepieniec and Kan Yasuda. Atul focussed on what a security bound actually means, and why in the concrete world (where we expect to instantiate our schemes by setting real-world parameters) we should be careful about needlessly loose security proofs. For example, he showed that if one wishes to have a reasonable level of confidence that an adversary cannot create a forgery while using a small block cipher, birthday-bound security can cap the usable number of queries at a single-digit figure.
With this in mind, we looked at the security landscape for first EMAC (encrypted CBC-MAC) and then PMAC, through a graph of the number of queries against maximum query length, with the various security bounds drawn in. Even with generic attacks taking out large sectors of each graph, it was clear that the attacks and bounds did not meet, and the paper focusses on whether these proofs can be modified to be independent of query length. This length-dependence turns out to rest heavily on the type of masking used internally, which leads to the main conclusions. The most relevant of these for the real world is that using a Gray code to generate the masks leads to a length-dependent attack.
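For readers unfamiliar with the masking in question, the following sketch (mine, with illustrative constants) shows a PMAC-style derivation of the $i$-th block mask as $\gamma(i) \cdot L$ in $\mathrm{GF}(2^{128})$, where $\gamma$ is the Gray code and $L$ is a secret block-cipher output:

```python
R = 0x87  # reduction constant for x^128 + x^7 + x^2 + x + 1

def xtime(x):
    # Doubling in GF(2^128).
    x <<= 1
    if x >> 128:
        x = (x ^ R) & ((1 << 128) - 1)
    return x

def gray(i):
    return i ^ (i >> 1)

def mask(L, i):
    # gamma(i) * L: xor together the doublings of L selected by the bits
    # of the Gray code of i.  Consecutive masks differ in one doubling
    # power, which is what makes the masks cheap to update online.
    m, g = 0, gray(i)
    while g:
        if g & 1:
            m ^= L
        L, g = xtime(L), g >> 1
    return m
```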


[1] See for example these papers from FSE.
[2] https://eprint.iacr.org/2016/142.pdf
[3] https://eprint.iacr.org/2016/185.pdf

Eurocrypt 2016: Hacker Ethics on the Rise

Following Phil Rogaway at Asiacrypt, Bart Preneel gave the IACR distinguished lecture on the future of cryptography. He went from outlining the Snowden leaks to proposing concrete ideas for how information technology should be used. It was refreshing to see a perspective that offers some optimism in these times.

Most of the Snowden leaks are already more than two years old, so I will not cover them in detail. However, it is important to repeat that they are less about breaking cryptography (and if so, more about breaking protocols than primitives) and more about breaking software and coercing companies.

Observing that privacy is a fundamental requirement of a free and democratic society, Bart expressed his disappointment over the fact that most cryptography in use serves corporate interests rather than users' privacy. The largest deployment of cryptography for privacy is WhatsApp, which is of course dwarfed by the deployments in finance and access control.

A key point of the talk was the shift of focus from communication security to computation security. The real-world protocols for secure channels are far from perfect, but there is continuous improvement, which promises to solve the problem eventually. However, the state of affairs is more critical for software implementations. While attacks on TLS itself are relatively involved, software bugs tend to be more readily exploitable and require continuous patching, which in turn opens another attack vector. Current mitigations such as firewalls and anti-virus software seem to offer a weak line of defence at best, and yet another attack vector at worst.

Moving away a bit from technological details, Bart compared the architecture of IT systems to politics. Just as democracies avoid a single point of control (and thus a single point of failure), IT systems should do the same. He advocated moving away from big data, because big data leads to big breaches. In this light, privacy by design (a core element of the new EU data protection law) requires data to be stored decentrally. Cryptographic protocols such as MPC and FHE offer the capability of using decentralized data without any single point of failure.

Another important point of the talk was the observation that transparency is essential. Without open software and hardware it is impossible to assess the security of a system independently. Together with a call for more privacy, this sounds very much like the hacker ethics of keeping private data private and making data of public interest public.

Finally, Bart replied to Phil's point that cryptographers engage too much in crypto for crypto's sake instead of real problems, saying that it is hard to predict which technology will be valuable in the future, and quoting Einstein: "If we knew what it was we were doing, it would not be called research, would it?" Essentially, Bart called for the entire cryptography "stack" to be continuously examined, from the assumptions to the implementations, and for hard problems not to be avoided.

He concluded by quoting Kant: "Optimism is a moral duty."

Tuesday, May 10, 2016

Some Reductions Will Always Be Lossy

One of the last sessions at yesterday’s Eurocrypt was an excellent talk by Tibor Jager, who presented his paper On The Impossibility of Tight Cryptographic Reductions, which is joint work with Christoph Bader, Yong Li and Sven Schäge.

Recall that we often prove the security of a public key primitive by constructing an algorithm, called a reduction, that is able to violate some complexity assumption $A$ (like CDH or DDH) if it has access to an adversary that breaks the primitive. So if $A$ is true, then such an adversary cannot exist.

In the setting where there are $n$ users, it is often the case that the probability of the reduction breaking $A$ is $\frac{1}{n}$ times the probability that the adversary breaks the primitive. This is what we mean when we say that the reduction is not “tight”: it is essentially $n$ times easier for an adversary to break the primitive than for the reduction to violate $A$. If $n$ is very large, then we must choose very large parameters to be sure that the primitive is unbreakable with these parameters (assuming $A$ is true). So tight reductions are highly desirable in practice.
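As a rough numerical illustration (my own, not from the talk): with $n \approx 2^{30}$ users and a reduction that loses a factor of $n$, guaranteeing 128-bit security for the primitive means choosing parameters as though one were targeting roughly $128 + 30 = 158$ bits of security for $A$, with a corresponding cost in key sizes and performance.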

The paper presented yesterday shows that there are many public key primitives for which no tight reductions can exist. We will illustrate this here with the example of an encryption scheme, but the result easily generalises to other primitives.

Consider the following multi-user security game $G$ for an encryption scheme: the adversary is given $n$ public keys $pk_1, \dots, pk_n$, outputs an index $i$ between 1 and $n$, receives the secret keys $sk_j$ for $j \in \{1, \dots, n\} \setminus \{ i \}$, and wins the game if it outputs a valid secret key for $pk_i$.

This looks like an odd game and it corresponds to a very weak notion of security, but showing that no tight reduction exists for $G$ implies that no tight reduction exists for a more natural game: $n$-key IND-CPA where the adversary can adaptively corrupt up to $n-1$ secret keys and query its test oracle using a single uncorrupted key.

We will suppose that a reduction from $G$ to an assumption $A$ exists, and use the reduction itself to violate $A$ directly, without accessing an adversary against $G$. Instead, our meta-reduction will simulate an ideal adversary against $G$ and violate $A$ using the reduction's output from interacting with the simulated adversary. To do the simulation, we essentially replay the reduction $n$ times, supplying a different index $i$ each time and storing the secret keys returned by the reduction, provided that they are actually valid for the public keys. Then it is trivial to “win” $G$ by simply outputting one of these stored secret keys, so our meta-reduction behaves like a very powerful adversary against $G$.
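Schematically, the rewinding strategy looks something like the following sketch (the interface is hypothetical and abstracts away the interaction with game $G$):

```python
def meta_reduction(fresh_reduction, n, is_valid):
    # `fresh_reduction()` restarts the reduction from the same initial
    # state (rewinding) and returns an object exposing:
    #   .pks         -- the n public keys the reduction outputs;
    #   .corrupt(i)  -- the dict {j: sk_j for j != i} it returns when the
    #                   simulated adversary asks to corrupt all but i.
    stored = {}
    for i in range(n):                        # replay once per index
        red = fresh_reduction()
        for j, sk in red.corrupt(i).items():
            if is_valid(red.pks[j], sk):      # keep only genuinely valid keys
                stored[j] = sk
    # Final run: simulate the "ideal" adversary using a stored key.
    red = fresh_reduction()
    i = next(j for j in range(n) if j in stored)
    red.corrupt(i)                            # receive the other n-1 keys
    return i, stored[i]                       # a winning output for game G
```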

One of two things can happen: either the reduction answers honestly for exactly one index $i$, in which case our rewinding strategy won’t work but the reduction isn’t tight; or our meta-reduction perfectly simulates an ideal adversary, so we violate $A$. So either the reduction isn’t tight, or $A$ is false.

Many of the technical details are omitted here, but it’s worth pointing out one of the conditions that is necessary for this argument to go through. Simply forwarding valid secret keys provided by the reduction doesn’t quite simulate an ideal adversary, since these secret keys could be “non-uniform” in some way. So there must be an efficient algorithm that takes a public key $pk$ and a valid secret key $sk$ and uniformly samples from all the valid secret keys for $pk$. The easiest way to achieve this is if there is exactly one valid secret key for each public key: the “resampling” algorithm just returns its input.

I find this result very neat. It represents the intersection of theoretical work and practical concerns; often the two are very different things.

Monday, May 9, 2016

Eurocrypt 2016: Sanitization of FHE Ciphertexts

This year's Eurocrypt has kicked off in sunny Vienna. In today's last session, Léo Ducas talked about how to do your laundry homomorphically (and efficiently), which is joint work with Damien Stehlé.

Recall the aim of Fully Homomorphic Encryption (FHE): Alice has some data on which she wishes to evaluate some circuit $C$. However, she doesn't necessarily want to perform the computation herself, so she wants to outsource it. FHE allows her to outsource computations safely, by which we mean that data privacy is preserved.

In this setting, Alice sends over her ciphertexts $c_i$ along with a circuit $C$. Bob then computes $c = \mathsf{Evaluate}_{pk}(C, c_i)$ and sends $c$ back. In this work, the authors add the requirement that both parties' privacy be preserved: Alice's data is confidential, but so is Bob's circuit. As motivation, imagine Bob is a fiscal optimization consultant. The aim of this work is to obtain circuit privacy efficiently from existing FHE; in particular, we do not want the distribution of a ciphertext to depend on the circuit which led to it via homomorphic evaluation. Alice is assumed to be honest-but-curious, meaning that the public key $pk$ and the ciphertexts are honestly formed. The authors combine the previous techniques of bootstrapping and re-randomization to obtain a sanitization process.

The current method for doing the above is the so-called "flooding" of a ciphertext [Gentry09]. The drawback of this technique is that it requires aggressive LWE assumptions (sub-exponential approximation factors) and gives bad asymptotic parameters.

In this work, the authors instead propose the soak-then-spin technique, which consists of bootstrapping several times and injecting some entropy between bootstrapping steps. This allows for more lenient LWE hardness assumptions and smaller parameters. The downside is the repeated bootstrapping operations, although the cost of bootstrapping has decreased with recent work [AP14, DM15].
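In pseudocode, the technique looks roughly like this (a sketch over a hypothetical FHE interface; `bootstrap`, `add` and `encrypt_zero` are stand-ins of mine, not a real library API):

```python
def sanitize(ct, pk, rounds=3):
    # Soak-then-spin: alternate bootstrapping (refresh the noise) with an
    # injection of fresh entropy, so that the output distribution no
    # longer depends on the circuit that produced ct.
    for _ in range(rounds):
        ct = bootstrap(ct, pk)            # refresh / normalise the noise
        ct = add(ct, encrypt_zero(pk))    # re-randomise with fresh entropy
    return ct
```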

They define WASH and SANITIZE algorithms and show them to be message-preserving: applying the algorithm and then decrypting, or decrypting without applying the algorithm, leads to the same result. Moreover, if two ciphertexts decrypt to the same plaintext, then the statistical distance between the sanitized (or washed) versions of the two ciphertexts is exponentially small. In terms of practical results, in HElib they require around three bootstrapping operations.

If you haven't had time to read this post and/or see the talk, here is Léo's TL;DR:
  • Put the ciphertext in water
  • Wash it
  • Rinse it
  • Repeat three times!


References:
[AP14] J. Alperin-Sheriff and C. Peikert. Faster bootstrapping with polynomial error. In Proc. of CRYPTO, volume 8616 of LNCS, pages 297–314. Springer, 2014.

[DM15] L. Ducas and D. Micciancio. FHEW: bootstrapping homomorphic encryption in less than a second. In Proc. of EUROCRYPT, volume 9056 of LNCS, pages 617–640. Springer, 2015.

[Gentry09] C. Gentry. A fully homomorphic encryption scheme. PhD thesis, Stanford University, 2009. Manuscript available at http://crypto.stanford.edu/craig.

Thursday, May 5, 2016

Side channel research in 2015: overview, statistics, and lots of new ideas!

This is the second time that we have produced a research review (thanks to everyone who commented on last year's effort!). Our review covers the many and varied contributions to side-channel research in 2015, and features five research themes:
  • Efficiently extracting all information from leakage traces
  • Furthering our understanding of fundamental statistical methods in the context of leakage attacks
  • Efficient key rank computation and enumeration
  • Attacking commodity devices
  • Lightweight ciphers
I want to give credit to my colleagues (RAs and PhD students) who read and reread last year's published papers and helped me put together a (reasonably) comprehensive review of the main themes of last year's research results. Without them this would not have been possible.

For what it's worth, we have also updated citation counts and other quantitative information about the papers (if you think we have missed something, please let us know).

You can find the review on the SILENT website.

Monday, May 2, 2016

Study group: Cryptography with One-Way Communication

The study group this week was on the paper "Cryptography with One-Way Communication" by Sanjam Garg, Yuval Ishai, Eyal Kushilevitz, Rafail Ostrovsky and Amit Sahai.

In this paper the authors initiate the study of secure computation over noisy channels in the one-way setting, where only one party speaks. Different types of discrete noisy channels are considered, and the relationships among them are studied.


It turns out that in the one-way setting the Binary Erasure Channel (BEC) and the Binary Symmetric Channel (BSC), which are equivalent in the interactive setting, are qualitatively very different. In particular, it is proved that a BEC cannot be constructed from a BSC; moreover, while the erasure probability of a BEC can be manipulated arbitrarily, i.e. both increased and reduced, the probability of correct transmission of a BSC can only be diminished.
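For reference, the two channels are easy to simulate (a toy sketch of mine; erasures are represented by None):

```python
import random

def bec(bits, p):
    # Binary erasure channel: each bit is independently erased
    # (delivered as None) with probability p.
    return [None if random.random() < p else b for b in bits]

def bsc(bits, p):
    # Binary symmetric channel: each bit is independently flipped
    # with probability p.
    return [b ^ 1 if random.random() < p else b for b in bits]
```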

It is well known that in the interactive setting, oblivious transfer (OT) can be realized in an unconditionally secure way from almost any discrete memoryless noisy channel (DMC). This is no longer true in the one-way setting. For example, consider what happens if we try to realize a ROT channel (where the sender inputs two bit-strings $m_0$ and $m_1$, and the receiver obtains either $m_0$ or $m_1$, each with probability 1/2) from a BEC non-interactively. The sender encodes $m_0$ and $m_1$ into some bits, say $x = (x_1, \dots, x_k)$, and sends them over the BEC; the receiver obtains some of these bits, say $y = (y_1, \dots, y_{k'})$, depending on the noise of the channel, and from $y$ they should be able to recover exactly one input string. Security of the receiver is guaranteed by the fact that which string is received depends only on the randomness of the channel and not on any choice of the sender. Security of the sender, on the other hand, requires that it be impossible to obtain both $m_0$ and $m_1$ from the received bits $y$. However, using the fact that in a BEC each bit is erased independently, the authors prove that with large enough probability a receiver can extract both $m_0$ and $m_1$. Roughly speaking, the security of the receiver makes it possible to break the security of the sender.
This result is generalized to any Generalized Erasure Channel (GEC) and to the BSC, and also to the case of computational security.
However, a ROT channel can be realized using other types of discrete channels, for example the perfect bursty channel: a special GEC in which all the erased bits are contiguous (see the paper for more details).



In the second part of the paper the authors consider one-way secure computation protocols over noisy channels for deterministic and randomized functionalities. In particular, they prove that both the BEC and the BSC are sufficient for securely realizing any deterministic functionality in the one-way setting. Roughly, they first observe that zero knowledge (ZK) suffices for realizing any deterministic functionality in the malicious setting, and then that it is possible to realize ZK from a GEC or a BSC by sending multiple independent instances of ZK-PCPs over the channel to amplify the soundness.

In the case of randomized functionalities, neither the BEC nor the BSC can be used.



It is, however, possible to realize any randomized functionality from bursty channels (BCs), by first realizing ROT from a BC and then using the results of [IPS08] or [IKO+11]. While the result on deterministic functionalities yields the first (highly inefficient) truly non-interactive solution to the ZK problem, the result on randomized functionalities offers a (again, inefficient) solution to the problem of certification of cryptographic keys in the one-way setting.