Wednesday, August 25, 2010

2nd SHA-3 candidate conference - day 2

We got presentations from two software benchmarking efforts. One is eBASH by the ECRYPT-2 network of excellence; its performance figures currently come predominantly from various desktop processors, but with the help of the XBX project, more embedded platforms should be covered soon. The other effort is sphlib, which focuses on portable C implementations for embedded platforms.

After presentations on some specialized software implementations and another round of security analysis presentations, each of the 14 teams had the chance to report on the status of their submission. The debate around some security observations was carried on in humorous form, with several presenters mentioning so-called "banana attacks" (a term that originated in discussions on the NIST SHA-3 mailing list). Everybody presented arguments why their submission should make it into the final round, highlighting available security proofs and analyses, but also potential advantages for implementation in software and hardware.

As there has been no dramatic break of the security of any of the 14 candidates, NIST is certainly facing a tough choice for the final-round competitors. One point William Burr from NIST made in his closing statement was that the choice will probably aim at a diverse set of candidates (and not just algorithms based on the same design principle). As we have roughly three categories of candidates (Add-Rotate-XOR based, AES based, Sponge based), one might speculate that we will see six finalists with two from each category (rather than just four or five). In any case, in a couple of weeks we will know for sure.

Tuesday, August 24, 2010

2nd SHA-3 candidate conference - day 1

We just finished the first day's program. While the first half focused on observations of properties of some candidates (or components thereof), the second half was about hardware implementation (which is really my cup of tea). We had five presentations from groups which studied the hardware efficiency of all 14 round-2 candidates (two on ASIC implementations, three on FPGAs). The general consensus was that this kind of hardware performance benchmarking should be carried out with implementation parameters (e.g. design goals, interfacing, target technology) agreed upon as much as possible. Patrick Schaumont and I each presented one of the ASIC studies, while Brian Baldwin, Shin'ichiro Matsuo, and Kris Gaj talked about their FPGA studies. Not all results were a perfect match, but some candidates kept appearing in the top lists more often than others. The Athena framework presented by Kris is especially interesting, as it allows benchmarking of designs from various authors and is freely available to anyone. It would be great to have something similar for ASICs, but there the availability of design tools and technology information is more complicated than for FPGAs.

The day concluded with a discussion of the next steps for the SHA-3 competition. William Burr from NIST shed a little light on the upcoming selection process for the finalists in the last quarter of this year, and also on what the submitters of the finalists should look out for (e.g. any big tweak might invalidate previous cryptanalysis and hurt a candidate's chances). Akashi Satoh pointed out that hardware evaluation should focus more on the flexibility of candidates rather than on raw speed or low area alone. Adi Shamir felt that the discussion at this point was already too much in favor of efficiency criteria and too little about security evaluation. Richard Schroeppel even pleaded that the SHA-3 competition should go on for an additional year in order to get more security analysis of the candidates done.

Tomorrow, the second day will focus on software implementations and another round of security analysis. The afternoon will be filled with the 14 teams giving updates for their respective candidates.

Sunday, August 22, 2010

CHES 2010 rump session

Short summary of the 2010 CHES rump session:

There will be a Journal of Cryptographic Engineering (JCEN), a new Springer journal, covering the research areas of the CHES Workshop; the first issue will be published in February 2011. Papers can be submitted from 15 September.

CHES will be held in Tokyo next year in the last week of September. For 2012, the CHES organisation is still up for grabs; however, the conference needs to be held somewhere in Europe.

From an entertainment perspective, the points went to Jean-Jacques Quisquater: he elaborated on how Paul, the World Cup octopus, could serve the crypto community as an oracle. After that, Stefan tried to educate the audience on how NOT to compare hardware designs.

Sergei Skorobogatov is currently developing new technology for effective side-channel analysis that allows a full break of AES on a real device in less than one second. Results are expected to appear soon on his personal homepage.

Apparently, the second DPA contest is still accepting submissions and participants will be honoured with a free SASEBO-GII board.

As every year, there was an announcement that a new SASEBO board will be available soon; in addition, they also offer EM probes for side-channel analysis. Finally, Josh Jaffe from Cryptography Research explained that side-channel attacks analysing one billion traces are now possible.

Saturday, August 21, 2010

Revisiting Garbled Circuits

What happens if a garbled circuit performs a computation on masked data, creating masked output, and the computation leaks through side channels? Theory says it does not matter, and it has strong arguments to bolster that claim. On Friday at CHES 2010, Thomas Schneider showed in his talk "Garbled Circuits for Leakage-Resilience: Hardware Implementation and Evaluation of One-Time Programs" that garbled circuits have made another step towards being practical. (The paper was written by K. Järvinen, V. Kolesnikov, A.-R. Sadeghi and T. Schneider.)

Their first achievement is that they implemented two architectures for evaluating garbled circuits on an FPGA and thus have the first timing data for garbled circuits in hardware. They obtained their performance figures for a garbled circuit computing AES. It is still rather slow compared to conventionally protected implementations, but it is substantially faster than the currently best software implementation.

They also manage to make the output verifiable. To do this, they basically wrap an n-out-of-n secret sharing scheme around the garbled circuit. The shares traverse the circuit, and any manipulation of the computation destroys at least one share. In the end, the result is only unmasked if the secret can be reconstructed. The secret sharing, just like the masking and unmasking of the data, has to happen inside trusted hardware. This trusted hardware may be a tamper-proof device, but one can also imagine a scenario where the trusted hardware is your own computer while you outsource the heavy computation to rented, untrusted hardware.
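To give a rough feel for the all-or-nothing flavour of n-out-of-n sharing, here is a toy Python sketch of plain XOR sharing. This is only my illustration of the general idea, not the construction from the paper, where the shares additionally travel through the garbled circuit itself:

import os
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def share(secret: bytes, n: int) -> list:
    # n-out-of-n: n-1 random shares plus one share that fixes the XOR sum
    shares = [os.urandom(len(secret)) for _ in range(n - 1)]
    shares.append(reduce(xor_bytes, shares, secret))
    return shares

def reconstruct(shares) -> bytes:
    # XOR of all n shares; flipping a single bit anywhere yields garbage
    return reduce(xor_bytes, shares)

key = b"unmask-me"
parts = share(key, 4)
assert reconstruct(parts) == key
parts[1] = bytes([parts[1][0] ^ 1]) + parts[1][1:]   # tamper with one share
assert reconstruct(parts) != key

In the actual scheme the reconstruction (and hence the unmasking of the result) only succeeds inside the trusted hardware, and only if the computation was not tampered with.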

Friday, August 20, 2010

Lessons from two attacks on allegedly tamper-proof chips

Today, at CHES 2010, two interesting talks shed new light on the myth of tamper-resistant hardware: Di-Battista showed in his talk "When Failure Analysis Meets Side-Channel Attacks" (paper written by J. Di-Battista, J.-C. Courrège, B. Rouzeyre, L. Torres and P. Perdu) how they used the photon emissions of transistors as a side channel in a DPA attack to recover keys stored inside an Actel A3PE FPGA, which claims to be highly tamper-proof. (Photon emission basically occurs in all chips, no matter whether they are ASICs or FPGAs.)

In the data sheet Actel claims: "The flash cells are located beneath seven metal layers, and many device design and layout techniques have been used to make invasive attacks extremely difficult." The light emissions do not come from the flash cells but from switching transistors; this, however, makes little difference: while it is indeed difficult to probe through all those layers of metal interconnect (and probably no photon would ever get through them), it is much easier to flip the chip upside down and look at its backside, where there is no interconnect. In the first attack they use only the 'natural' photon emissions of the transistors; in the second attack they use laser stimulation to increase the photon emissions. From the observed photons they are able to construct traces for DPA attacks.
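The novelty here is the optical measurement channel itself; what one then does with such traces is the usual differential/correlation analysis. For readers unfamiliar with that step, here is a rough correlation-DPA sketch in Python/numpy against simulated traces. The 4-bit S-box, the Hamming-weight leakage model, the noise level and all other parameters are made-up assumptions for illustration, not the setup of the paper:

import numpy as np

SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]       # 4-bit example S-box
HW = [bin(x).count("1") for x in range(16)]            # Hamming weights 0..15

def simulate_traces(key_nibble, n=2000, noise=1.0, seed=0):
    # Stand-in for real measurements: leakage = HW(Sbox(pt ^ key)) + noise
    rng = np.random.default_rng(seed)
    pts = rng.integers(0, 16, n)
    leakage = np.array([HW[SBOX[int(p) ^ key_nibble]] for p in pts])
    return pts, leakage + rng.normal(0.0, noise, n)

def cpa(pts, traces):
    # Correlate the predicted leakage of every key guess against the traces
    def corr(guess):
        model = [HW[SBOX[int(p) ^ guess]] for p in pts]
        return abs(np.corrcoef(model, traces)[0, 1])
    return max(range(16), key=corr)

pts, traces = simulate_traces(key_nibble=0x7)
print("recovered key nibble:", hex(cpa(pts, traces)))  # expect 0x7

Whether the traces come from a power measurement, an EM probe or, as here, photon emissions makes no difference to this analysis step.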

One might argue that the attacks of Di-Battista et al. are extremely expensive compared to normal DPA attacks: the equipment needed for their weaker attack costs 500 k$, while the equipment for the stronger attack costs 2 M$, but it can be reused and should be available in any semiconductor lab. However, the price argument does not apply to a weaker class of attacks that S. Skorobogatov introduced in an earlier talk today, "Flash Memory 'Bumping' Attacks". Skorobogatov attacks three chips, including an Actel A3P250, with relatively low-cost laboratory hardware and a little bit of cryptanalysis. (That's the 'bumping'.) He is able to break the security claims of the chips regarding configuration extraction; the configuration of an FPGA amounts to the designer's intellectual property (IP) and is therefore a major target whenever FPGA-based devices are reverse engineered. However, in the Actel case he only attacked one of multiple options available to secure the IP - to break the other options the attack of Di-Battista et al. is needed.

In summary, both talks showed the dangers of relying on FPGA-based security claims to protect IP and, at least in the case of the light-based DPA attack, any cryptographic operation. It may be assumed that non-Actel FPGAs and many ASICs suffer from similar problems. Very impressive results.

Lightweight cryptography for IC-printing

Yesterday I attended the first day of the CHES 2010 conference. The first session brought us new results in lightweight cryptography. A quite interesting talk was given by Axel Poschmann, who proposed a new block cipher for IC-printing called PRINTcipher.

Roughly speaking, IC-printing technology allows printing integrated circuits directly onto a range of materials, and thus the cost of production is very low. In contrast to conventional silicon manufacturing, IC-printing allows changing the printed circuits in each run at little cost. Yet another fancy new technology that should be examined for cryptographic applications, especially those aimed at the RFID world.

Axel took well-known strategies for designing lightweight cryptography and merged them into a new design for IC-printing applications. So what is the recipe for developing a new lightweight block cipher? Let's take an SP-network with S-boxes that are as small as possible. Since we care predominantly about area, our cipher may be as slow as a turtle and iterate its internal parts as long as needed. In the IC world, storage is costly, so we have to get rid of unnecessary storage elements. The first idea is to minimize the key schedule. Yes, we can do this as long as we aim at RFIDs, which are likely to have fixed keys. This means that instead of using XOR gates for XORing key and data bits, we can directly use inverters and wires. Quite nice, we saved area, but what about reverse engineering and side-channel attacks? This fixed-key feature is most important for IC-printing: since we are able to change and print ICs at no additional cost, we can produce a bunch of RFID tags with different fixed keys very easily.
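A toy Python sketch of that last point (my own illustration with a made-up 8-bit key, not the actual PRINTcipher specification): with a fixed key, each bit of the key-addition layer degenerates into either a plain wire or an inverter, so neither XOR gates nor key registers are needed:

FIXED_KEY = 0b10110010   # hypothetical fixed key "printed" into one particular tag

def keyadd_netlist(key, width=8):
    # With a fixed key the key addition needs no XOR gates or key storage:
    # bit i is an inverter if key bit i is 1, otherwise just a wire.
    return ["inverter" if (key >> i) & 1 else "wire" for i in range(width)]

def keyadd(state, key=FIXED_KEY):
    # What the printed circuit actually computes: state XOR fixed key
    return state ^ key

print(keyadd_netlist(FIXED_KEY))    # per-bit gate list, LSB first
print(bin(keyadd(0b11110000)))      # sample input run through the hard-wired layer

Printing a batch of tags with different keys then simply means printing a different wire/inverter pattern for each tag.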

All in all: a new technology, a new design, and a slightly different approach than for traditional RFID tags. The real-world applications? We will see.

Wednesday, August 18, 2010

Incoercibility

Today I attended a rather interesting talk at Crypto 2010 by Dominique Unruh on the paper Universally Composable Incoercibility by D. Unruh and J. Müller-Quade.

Incoercibility is typically studied in the setting of voting systems. For example, say you vote in an election where you are issued a receipt saying who you voted for. You want to vote for Charlie, but an adversary forces you to vote for Alice, demanding you show them the receipt afterwards as proof. In this environment you can be coerced, unless of course you can produce a fake voting receipt. Although voting systems are the most studied setting, there are other situations where incoercibility may be a requirement; it is relevant whenever a protocol leaves traces of its actions. The question, of course, is how do we analyse this, and are there any models that allow us to do so? Unruh and Müller-Quade introduce a general model for this under the universal composability framework.

Briefly, the universal composability framework has two worlds. In the real world, there exists an adversary who interacts with parties running the real protocol. The ideal world has a simulator who simulates the actions of the real-world adversary, interacting with parties who use an ideal functionality instead of the real protocol. If for every adversary there exists a simulator such that no environment can distinguish which world it interacts with, then the two worlds are considered equivalent.

How could you model this incoercibility requirement? One way might be to have the following two worlds:
(1) The adversary coerces a party, and this coerced party simply forwards messages backwards and forwards between the protocol and the adversary.
(2) The adversary interacts with an un-coercible party who does what he wants, not what the adversary wants (while interacting with the protocol).
If these two worlds were indistinguishable, then one might expect us to have the desired result. However, this fails: for example, if you have an election with only one voter, the vote tally will reveal which world you are in.

So the best result that can be achieved is that "every lie possible in the ideal setting should be possible in the protocol setting". Without going into the details, Unruh and Müller-Quade achieve this in the following way. In the ideal world a "Deceiver" party is introduced. Typically an adversary corrupts a party and completely controls it, but with the addition of the Deceiver, if the Deceiver controls a corrupted party, then this party behaves as demanded by the Deceiver and not by the adversary (unbeknownst to the adversary). This method then leads to the useful result that any attack possible in the real world is possible in the ideal world, and any lie possible in the ideal world is possible in the real world.

Attacking Mobile Communications

Today our (Stephen Williams and I) favourite paper at Crypto'10 is "A Practical-Time Related-Key Attack on the KASUMI Cryptosystem in GSM and 3G Telephony" by O. Dunkelman, N. Keller and A. Shamir. It isn't an easy task to describe differential attacks in a blog entry in the first place, and their new sandwich attack uses differentials in a very artful manner, so we're not going to elaborate on the details of the attack, except to note that they spread the previously known boomerang attacks over additional rounds, somewhat similar to super-S-box attacks. (Except, of course, that there are no S-boxes involved, neither in boomerang nor in sandwich attacks.)

However we would like to point out a couple of things:
  • The authors do not claim to have broken A5/3. They explicitly point out that due to the operation mode of KASUMI in A5/3 they haven't found a way to apply their attack on the entire protocol. We think that this is an important reminder of how much depends on the mode of operation.
  • The attack also reminds us how important it is to take proper care of differentials in the design of ciphers: KASUMI is a simplified version of MISTY and while KASUMI has suffered a serious blow today, MISTY stands untainted.
  • After listening to the talk today, our confidence in A5/3 hasn't grown. We don't know whether ETSI/SAGE (who designed KASUMI) wanted it to be less secure than MISTY but we consider it to be precariously close to being a failure equal to A5/1 and A5/2. NIST has generated a lot more trust in AES by its open design process than ETSI/SAGE managed to generate for A5/3.
  • We greatly appreciate that development of A5/4 is already under way; at the rump session ETSI/SAGE called for participation in the evaluation of the proposed new algorithms, which are based on ZUC. The relevant links are http://zucalg.forumotion.net and the GSM webpage.

Tuesday, August 17, 2010

How to rerandomize Yao Circuits

For today's coverage of CRYPTO 2010 we (Stephen Williams and I) picked the "i-Hop Homomorphic Encryption and Rerandomizable Yao Circuits" paper by C. Gentry, S. Halevi and V. Vaikuntanathan. Not only did Vaikuntanathan give a really good talk, but the ideas of the paper are also quite appealing. Suppose you have the following setting:
  • Alice encrypts a message m into a ciphertext c0 and sends c0 to server 1
  • who has to evaluate a function c1<-F1(c0) such that Dec(c1)=f1(m). He then passes c1 on to server 2
  • who evaluates a similar function c2<-F2(c1) such that Dec(c2)=f2(f1(m)).
  • It goes on like this until the last (xth) server sends cx to Dora, who knows the secret key and computes Dec(cx) = fx(...(f2(f1(m)))...).
Sounds cool, doesn't it?
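To make the data flow concrete, here is a toy Python sketch of the hop structure only. The "encryption" is a one-time pad and every server's function is just an XOR with a constant (which the pad happens to commute with); this is nothing like the GHV construction, it merely shows how Dec(cx) ends up being fx(...f2(f1(m))...):

import os

LEN = 8
sk = os.urandom(LEN)                      # key shared by Alice and Dora

def xor(a, b): return bytes(x ^ y for x, y in zip(a, b))
def enc(m):    return xor(m, sk)          # toy "encryption": one-time pad
def dec(c):    return xor(c, sk)

def make_server(const):
    # Server i applies F_i(c) = c XOR const_i, so Dec(F_i(c)) = f_i(m) = m XOR const_i
    return lambda c: xor(c, const)

m = bytes(LEN)                            # all-zero message, for clarity
consts = [os.urandom(LEN) for _ in range(3)]
servers = [make_server(k) for k in consts]

c = enc(m)
for F in servers:                         # hops 1, 2, 3: servers only see ciphertexts
    c = F(c)
assert dec(c) == xor(xor(consts[0], consts[1]), consts[2])   # = f3(f2(f1(m)))
print("Dora recovers f3(f2(f1(m))):", dec(c).hex())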

Well, there is a nasty little problem which Gentry, Halevi and Vaikuntanathan (GHV) solve in their paper: how do you keep the functions fi that were applied to the ciphertexts secret if Alice, some of the servers and Dora work together to reveal the function that has been used? GHV use garbled Yao circuits to implement the functions evaluated by the servers. While these circuits look completely random to Dora, who gets all of them, they do not look as random to the servers who created them. So if Dora gives the entire garbled circuit to the collaborating servers, they are able to de-garble it, and the functions of the honest servers, which should be protected, can be reconstructed by the dishonest ones. However, if each server re-randomizes the circuits passed on to it, the previous servers cannot de-randomize their part of the entire circuit, and the honest server's function stays protected. GHV show in their paper how to achieve this re-randomization using the DDH-based encryption scheme of Boneh, Halevi, Hamburg and Ostrovsky (proposed at Crypto'08). That rocks!

The claim that this could actually be of practical relevance is believable. Yao's circuits were once thought to be slow, but recent work by researchers here at Bristol has shown them to be practical. Furthermore, a homomorphic encryption scheme is considered "compact" if the ciphertext size remains constant; using the re-randomizable Yao circuits, GHV create a "non-compact" homomorphic encryption scheme whose ciphertexts grow with every function applied to the scheme. However, and this is the interesting part, the growth of the ciphertext is polynomial and may thus still be practical. As Vaikuntanathan pointed out in his talk, people should look at how many "hops" (or functions) are going to be applied with the homomorphic encryption scheme; if this number is low, their "non-compact" version may be suitable.

So, why does anyone actually care about this? One possible application (here we slightly generalize Vaikuntanathan's example of e-mail filtering) is searching a database for keywords where the keywords are supplied by multiple parties and neither the database nor the keywords of the other parties should be revealed to any party.

Tuesday, August 10, 2010

P vs NP

So here is the first post, and it is a pretty interesting topic to start with...

In the last couple of days, rumours have spread around the internet about a mathematician from HP Labs, Vinay Deolalikar, having proposed a proof of the conjecture that P does not equal NP. This proof is currently being reviewed by the experts, and if it turns out to be correct then this will be the next of the Clay Institute's Millennium Prize problems to be solved.

So why is this interesting at all, and why is it interesting to us cryptographers?

Firstly, as a purely financial justification: the Clay Mathematics Institute is offering a one-million-dollar prize for a proof of each of seven fundamental problems in mathematics. These seven problems were published in 2000 and consist of the following questions:
  1. P vs NP
  2. The Hodge Conjecture
  3. The Poincare Conjecture
  4. The Riemann Hypothesis
  5. Yang-Mills existence and mass gap
  6. Navier-Stokes existence and smoothness
  7. The Birch and Swinnerton-Dyer conjecture
Of these problems, the Poincare conjecture was already solved (by Perelman) in 2003, and of the other six I only really know about three: P vs NP (which I shall return to below), the Riemann Hypothesis (which is at the heart of questions about the distribution of prime numbers) and the Birch and Swinnerton-Dyer conjecture (which is about the values of a function attached to an elliptic curve).

However, the P vs NP question is probably the most philosophically interesting. It basically asks whether it is easier to check a mathematical proof than it is to invent it. Intuitively this is borne out by our experience in lectures; students find it easier to follow and understand a lecture than to come up with the material from scratch on their own.

But it is not just a question about proofs. The two expressions P and NP refer to sets of problems. The set P is the set of problems whose answers we can decide efficiently, i.e. in polynomial time (hence the P). These correspond, using the above analogy, to the problems for which we can come up with a proof efficiently. The set NP, on the other hand, is the set of problems for which, given the problem and an additional piece of information called a witness (or proof), we can check the proof efficiently. Why this is called NP is technical and we will not go into it here.
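To make the check-versus-find gap concrete, here is a small Python sketch using subset-sum, a standard NP problem chosen purely as an illustration: verifying a claimed witness is a quick pass over the data, whereas the obvious way of finding one tries exponentially many subsets:

from itertools import combinations

def verify(numbers, target, witness):
    # Polynomial-time check: the witness must be drawn from `numbers`
    # (respecting multiplicities) and sum to the target.
    pool = list(numbers)
    for w in witness:
        if w not in pool:
            return False
        pool.remove(w)
    return sum(witness) == target

def search(numbers, target):
    # The obvious way to *find* a witness: try all 2^n subsets.
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == target:
                return subset
    return None

nums = [3, 34, 4, 12, 5, 2]
print(verify(nums, 9, (4, 5)))   # True -- checked in one quick pass
print(search(nums, 9))           # (4, 5) -- found only by exhaustive search

Nobody knows whether a fundamentally better search procedure exists for problems like this; that is exactly the P vs NP question.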

The P vs NP question is whether the set P is equal to the set NP. It is clear that P is contained in NP, since we can take the witness to be empty for all problems in P. But if P does equal NP, then all sorts of things will follow: a lot of interesting problems for which we have no efficient algorithms could be solved efficiently. More importantly to us cryptographers, we would all be out of a job, since all known practical encryption schemes rely on problems in NP.

Vinay Deolalikar claims to have a proof that P does not equal NP. If his proof is found to be correct then this is great news for science in general, and cryptographers can breathe a sigh of relief for a few more years. However, even if the proof is found to contain some errors, it is likely to stimulate new ideas.

We are really living in interesting times.

Introduction

This is the start of a blog for the Cryptography Research group at the University of Bristol. In this blog we aim to discuss matters of interest to our research. For example this could be items such as:
  • Security news and our thoughts about it
  • Reports on conferences we have been to
  • Bringing attention to things we have seen and read
  • Discussion of matters cryptographic
Anyway, maybe this will die a death and never be used, maybe it will be used a lot. But you don't know until you try...