Wednesday, November 30, 2011

Study group: more leakage resilient stuff

Leakage, Leakage, Leakage ...

Once more we looked into recent papers attempting to deal with leakage in cryptographic schemes; this time we focused on public-key cryptography, examining signature and encryption schemes in the bounded leakage and continuous leakage models.

After last week's session on definitions, models and 'fundamentals' such as defining LR-OWFs via SPRs (handy to know, as signature schemes can be derived from OWFs), we started off with a paper by Boyle et al. (Eurocrypt 2011) that gives "fully leakage resilient signatures" in the bounded leakage model and then goes on to show how to adapt the construction to the continuous leakage model. Let's first look more closely at what the title implies/promises.
'Fully LR' in this context alludes to the idea that an adversary can query a leakage oracle not only on the secret key, but on the entire secret state, including any randomness used during signature creation. Further, the scheme can tolerate 'a lot of leakage'. This is in contrast to some previous work (referenced in the paper) which only allowed 'not too much' leakage about the secret key. Clearly this is a relevant improvement, as it is well known that for a wide range of practical signature schemes leakage about the ephemeral keys is what brings those systems down: relevant articles not mentioned in the paper, such as Nguyen and Shparlinski, "The Insecurity of the Elliptic Curve Digital Signature Algorithm with Partially Known Nonces" (Des. Codes Cryptogr. 30(2), 2003), show that knowing a handful (quite literally!) of bits of a couple of ephemeral keys allows one to easily recover the secret signing key ...

What else does this new construction give us? No random oracles in the proof, no OCLI assumption, and the authors argue that there are even some instantiations which one could call theoretically efficient. A side remark mid-way through the paper, however, reveals some previously unmentioned restrictions on the leakage: when it comes down to the actual proof, no leakage is assumed for the randomness used in key generation or key refreshing (in the continuous leakage case). Especially the latter assumption is rather undesirable in practice ...

Conclusion: a construction that does all we want in a model which is almost good enough for practice. (Quite some praise, as I look at it through the lens of a practitioner.)

The second paper we looked at was by Kiltz and Pietrzak (Asiacrypt 2010), entitled 'Leakage Resilient ElGamal Encryption'. The authors took a refreshing approach: rather than cooking up a scheme from scratch, they had a (brief?) look at what practitioners already do to secure ElGamal-type schemes in practice and attempted to prove some properties of the well-known defences used. So what do practitioners do? Assume a key pair (secret key, public key) = (x, X = g^x). Encryption then proceeds something like this: the message (represented as a group element) is blinded with an ephemeral key y which is transmitted via Diffie-Hellman, hence the ciphertext is the pair (c1, c2) = (g^y, m*X^y). Decryption recovers m by computing c2/c1^x. In practice an attacker will attempt to reveal x, most likely by targeting the exponentiation c1^x, as c1 can be observed/chosen and the exponentiation routine will most likely use x in very small chunks. To prevent such attacks, practitioners use a technique called blinding (typically applied to both the exponent and the message).
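For concreteness, here is a minimal, completely unprotected sketch of this textbook flow in Python (the prime, generator and message encoding are toy choices of mine, purely for illustration, and not a secure instantiation):

```python
import secrets

# Toy parameters for illustration only -- NOT a secure group choice.
p = 2**127 - 1        # a Mersenne prime; far too small/structured for real use
g = 3

def keygen():
    x = secrets.randbelow(p - 1)                   # secret key x
    return x, pow(g, x, p)                         # public key X = g^x

def encrypt(X, m):
    y = secrets.randbelow(p - 1)                   # ephemeral key y
    return pow(g, y, p), (m * pow(X, y, p)) % p    # (c1, c2) = (g^y, m*X^y)

def decrypt(x, c1, c2):
    # m = c2 / c1^x; the exponentiation c1^x on the attacker-visible c1
    # is exactly the operation a side-channel adversary will target.
    return (c2 * pow(pow(c1, x, p), -1, p)) % p

x, X = keygen()
c1, c2 = encrypt(X, 123456789)
assert decrypt(x, c1, c2) == 123456789
```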

The reason why I point this fact out explicitly is that in this article the authors explain that 'Even though side-channel leakage may contain lots of data, only a small fraction can actually be exploited in each measurement.' Wrong. I'll try to clarify what actually happens: each point (= sampled data point; a collection of data points gives a 'trace', which is the actual observation/measurement) contains information about the internal state of the device at that particular point in time. If only a small part of the secret has 'contributed' to the leakage at this point, then only this small part of the secret leaks at this point (in time). As one trace consists of many points (remember, we observe 'lots of data'), there is 'lots of leakage' contained within one trace. In particular, implementations of exponentiations will produce distinct 'patterns' (i.e. different leakage characteristics for the 0 and 1 bits of the secret) that will almost certainly allow one to extract the entire secret from a single observation. This is a key point, which is often overlooked by the theory community (by assuming that the entire secret does not leak given one query to the leakage oracle). It implies that without countermeasures (including blinding) along the lines developed mainly within the CHES community, you don't even need to start considering LR schemes developed by theorists: the basic assumption that the entire secret does not leak in one observation actually requires one to do something; it is not 'naturally' guaranteed.
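To make this point concrete, here is a small simulated illustration of my own (a toy model, not taken from the paper): an unprotected square-and-multiply exponentiation whose per-operation 'power' differs between squarings and multiplications, so a single simulated trace reveals every exponent bit.

```python
import random

def square_and_multiply_trace(x, bits):
    """Simulate one power trace of an unprotected left-to-right
    square-and-multiply: squarings and multiplications have
    distinguishable (slightly noisy) power levels."""
    trace = []
    for bit in format(x, f'0{bits}b'):
        trace.append(1.0 + random.gauss(0, 0.05))      # squaring
        if bit == '1':
            trace.append(2.0 + random.gauss(0, 0.05))  # multiplication
    return trace

def recover_exponent(trace):
    """Read the exponent straight back off the square/multiply pattern."""
    ops = ['M' if sample > 1.5 else 'S' for sample in trace]
    bits, i = [], 0
    while i < len(ops):                    # every 'S' starts a new key bit
        if i + 1 < len(ops) and ops[i + 1] == 'M':
            bits.append('1'); i += 2
        else:
            bits.append('0'); i += 1
    return int(''.join(bits), 2)

secret = random.getrandbits(64)
assert recover_exponent(square_and_multiply_trace(secret, 64)) == secret
```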

Back to the paper and its statements about the security of blinding schemes. Many schemes are known to practitioners, and on a high level they work like this: rather than working with your secret exponent x directly, share it (depending on your underlying algebraic structure) either multiplicatively or additively (the choice between additive and multiplicative sharing is typically driven by performance considerations). Additive sharing in the case of ElGamal encryption is not a good idea: if, rather than using x directly, we were to share it as x' = r_i and x'' = x - r_i, it would be easy to calculate the last 'few' bits of x by knowing the last t bits of x' and x'', as those bits would be given by x' mod 2^t + x'' mod 2^t (minus a fudge factor if there was an overflow modulo the group order). In practice people use what was already suggested by Kocher in "Timing Attacks on Implementations of Diffie-Hellman, RSA, DSS, and Other Systems": one chooses a random r_i, computes its inverse r_i^(-1), and shares x multiplicatively as x = x'*x'' with x' = x*r_i and x'' = r_i^(-1). This can be done reasonably efficiently, and one can even update r_i and r_i^(-1) easily (e.g. by just squaring them) so that they are not re-used in subsequent encryption rounds. (Would the proof in the paper by Kiltz and Pietrzak extend to this way of updating/refreshing the shares?)
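Here is a rough sketch of that multiplicative exponent sharing, reusing the toy parameters from above (my own illustration; for simplicity the refresh below multiplies in a fresh random blind rather than squaring a stored pair, and all exponent arithmetic is done modulo the group order):

```python
import secrets

p = 2**127 - 1        # same toy (insecure) group as above
n = p - 1             # exponent arithmetic is modulo the group order

def invertible_blind():
    """Pick a random blind r together with its inverse modulo n."""
    while True:
        r = secrets.randbelow(n)
        try:
            return r, pow(r, -1, n)
        except ValueError:            # r not invertible mod n, try again
            continue

def share(x):
    """Split the secret exponent multiplicatively: x = x1 * x2 mod n."""
    r, r_inv = invertible_blind()
    return (x * r) % n, r_inv

def refresh(x1, x2):
    """Re-randomise the shares without ever recombining x."""
    r, r_inv = invertible_blind()
    return (x1 * r) % n, (x2 * r_inv) % n

def blinded_exp(c1, x1, x2):
    """Compute c1^x as (c1^x1)^x2; x itself never appears."""
    return pow(pow(c1, x1, p), x2, p)

x = secrets.randbelow(n)
x1, x2 = share(x)
x1, x2 = refresh(x1, x2)
c1 = pow(3, secrets.randbelow(n), p)
assert blinded_exp(c1, x1, x2) == pow(c1, x, p)
```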

The cool thing about the paper is that the authors are able to prove (for bilinear groups, in the generic group model) that the strategy outlined above (minus the efficient share update mentioned at the end) is actually CCA1 secure.

Friday, November 25, 2011

ECRYPT2 Vampire2 Retreat

Leakage, Leakage, Leakage!

We again hosted a Vampire research retreat here in Bristol. Roughly 25 people interested in the physical security of cryptography came together to discuss and share ideas about practically relevant attacks and countermeasures. We had a really fantastic crowd coming from all over Europe (France, Luxembourg, Belgium, Germany, the UK and Austria), with a wide range of interests. We ended up discussing possible new avenues for attacking recent schemes (masking, re-keying), tried to make progress on difficult practical challenges such as working with very noisy data, and worked on fundamental questions relating target function properties (aka cipher properties) to DPA resistance.

A big thank you from us organisers to all participants for their motivation, interest, and contributions. Another equally big thanks goes to the EC for their commitment to ECRYPT, which enables us to come together in this way!

Hope to see many of this retreat's participants also on our next one in March in Leuven!

Wednesday, November 23, 2011

Study group: Definitions of leakage-resilience in theory and practice

Leakage, leakage, leakage.
Consider a leaky device that's supposed to handle a secret key for encryption: what can we say about its security?
If the device leaks the whole secret key, obviously there's no security in the encryption, so we somehow need to restrict what leakage an adversary can observe if we want to give any positive guarantees.

At this point the views of theoreticians and practitioners start to diverge. The gist of today's talk was that theoreticians work in models based on the state of the art at the time the model is created, but take some time to catch up when practitioners find new results.

What leakage?

There are several models floating around in the cryptographic theory.
The main distinction is between bounding the type of leakage an adversary can observe (i.e. what functions of the secret key) and the amount of leakage.
If we look at bounding the amount of leakage, there's three main models.
First, the "relative leakage" model which bounds the amount of leakage (in bits) compared to the size of the secret. Secondly, the "bounded retrieval" model in which the total amount of leakage the adversary can observe is bounded.
Finally, the "continuous leakage" model in which the factor of time comes into play: the device in question runs in a sequence of time periods and the leakage that the adversary can observe in each period is bounded, but there's no overall bound. This means the device must refresh its keys as it transitions from one period to the next.

Criticism of the Models

All these models have one practical drawback: they don't really reflect what's going on in practice. On the one hand they're too strong, the adversary's observations usually being limited only by a constraint such as 'polynomial-size leakage'. Practitioners can only dream of observing that much leakage, and if they could, none of today's devices could stand up to such a powerful adversary.

On the other hand, the models miss out on the main challenges in today's practice: designing more efficient ways to observe and evaluate leakage. In practice, the success of an attack will depend on the quality of the statistical model used to analyse the leakage, the "noise" in the leakage, the number and quality of observations one can make, and so on; current research involves ways to improve these factors or to find trade-offs between them.
Theoretical leakage models, by contrast, are all-or-nothing: you get the leakage in one go or you don't.

OCLI - really?

Many years ago, when power analysis first appeared on the scene, it was noticed that CMOS logic requires very little power to "hold" a state; what causes power spikes is the changing of states, whether from 0 to 1 or vice versa.
And so it was proposed to base theoretical models on the assumption that one can only observe leakage from data that is being computed on, as opposed to data just sitting around in memory - Only Computation Leaks Information (OCLI).
Unfortunately today's practice does not support this assumption.
The cold-boot attack (pour coolant into a laptop to freeze the RAM, tear it out, and copy its contents before it thaws) is an attack on data just sitting in memory.
There are many other types of side channel beside power consumption - a computer virus could be considered a form of leakage to the adversary.
Even defining what "data being computed upon" means isn't that easy: at the hardware level, with all the buffers, caches, pipelines and other optimisations in today's processors, even a chunk of x86 assembly code won't give you a simple answer as to when which gate switches.
At the software level things get even worse: the abstraction offered by an operating system may shift memory pages around in a manner mostly oblivious to the applications running on it, not to speak of several processes running in parallel.
Finally with today's nanoscale logic, even the principle that power consumption is primarily related to changes in state does not necessarily hold anymore.

Some theoretical examples

The theory of leakage resilience is not all useless, though. We looked at two examples.
If f is a one-way function, what happens if a few bits of its input are leaked?
It's fairly easy to see that if there's a preimage extractor that works given a few (at most logarithmically many) leaked bits of the input, one can simply guess or brute-force these bits to construct a preimage extractor that doesn't require any leakage.
More interestingly, if f is second-preimage resistant then f is leakage-resilient one-way, as long as the leakage is small compared to the amount by which f compresses.
Suppose you're given a preimage and want to find a second one that maps to the same image as the first, and you have a preimage extractor that needs some leakage to work.
You can feed it the image of your challenge and answer its leakage queries using the preimage that you already know.
If you do all this formally and get the maths right, the result is that you have a decent probability of the extractor giving you a preimage different from the one you already know.
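A back-of-the-envelope version of the counting step in that argument (my own sketch, not the paper's exact statement): for f mapping n bits to m bits with m < n and at most λ bits of leakage,

```latex
% SPR + compression => leakage-resilient one-wayness (sketch)
\begin{align*}
\widetilde{H}_\infty\big(x \mid f(x), \mathrm{Leak}(x)\big) &\ge n - m - \lambda
  && \text{(min-entropy left in $x$ after image and leakage)}\\
\Pr[\text{extractor returns exactly the known } x] &\le 2^{-(n - m - \lambda)}
  && \text{(it cannot single out our preimage)}\\
\Pr[\text{second preimage found}] &\ge \varepsilon - 2^{-(n - m - \lambda)}
  && \text{($\varepsilon$ = extractor's success probability)}
\end{align*}
```

So as long as the leakage λ is small compared to the compression n - m, the reduction succeeds with essentially the extractor's own probability.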

A real example

Finally we'll look at an example of a deployed protocol in set-top boxes that was designed to be leakage resilient.
There are many "common sense" components to reduce the exposure to known attacks: a root key is used only to derive message-keys using a hash function tree construction.
This prevents an attacker from getting too many traces on the root key itself.
A public, fresh, random message id is chosen for each message to encrypt. This prevents an attacker from repeatedly encrypting with the same parameters.
Different constants are attached to each hash input in the construction to prevent correlations between different hash calls.
The message is encrypted in blocks, or rather chunks of blocks that just about fit in a smartcard's memory and the key is refreshed after each chunk.
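A rough sketch of the kind of construction being described, purely as illustration (the hash choice, tree depth, chunk size and all function names below are my assumptions, not the actual set-top box protocol):

```python
import hashlib, os

def h(domain, *parts):
    """Hash with a per-call domain-separation constant, as described above."""
    md = hashlib.sha256(domain)
    for part in parts:
        md.update(part)
    return md.digest()

ROOT_KEY = os.urandom(32)          # long-term root key, used only for derivation

def derive_message_key(message_id):
    """Walk a small hash tree keyed by the bits of a fresh message id,
    so the root key is only ever fed into a handful of hash calls."""
    k = ROOT_KEY
    for bit in format(int.from_bytes(message_id, 'big'), '032b'):
        k = h(b'node' + bit.encode(), k)
    return k

def encrypt_message(message, chunk_size=4096):
    """Encrypt chunk by chunk, refreshing the key after every chunk."""
    message_id = os.urandom(4)     # public, fresh, random
    k = derive_message_key(message_id)
    out = []
    for i in range(0, len(message), chunk_size):
        chunk = message[i:i + chunk_size]
        keystream = b''.join(h(b'stream', k, j.to_bytes(4, 'big'))
                             for j in range(0, len(chunk), 32))
        out.append(bytes(a ^ b for a, b in zip(chunk, keystream)))
        k = h(b'refresh', k)       # key refresh between chunks
    return message_id, b''.join(out)

msg_id, ct = encrypt_message(os.urandom(10000))
```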

There's no proof of this construction using today's models and it seems unlikely that there will be one any time soon but we're not aware of any published break of this particular protocol.

Conclusion

There's a big gap between theory and practice in side-channel attacks, and practitioners (like today's speaker) sometimes wonder if the theory bears much relation to their work at all.
We still don't have a satisfactory theoretical model that adequately reflects what goes on in practical attacks.

Tuesday, November 15, 2011

Study Group: Polly Cracker, Revisited

Today along came Polly and it was cracking! In other words, our study group for this week was dedicated to Polly Cracker, Revisited by Albrecht, Farshim, Faugère, and Perret. The paper will be presented at Asiacrypt 2011 (LNCS 7073) later this year.

Polly Cracker schemes use ideals of multivariate polynomial rings and are based on the presumed hardness of computing good (Gröbner) bases for these ideals. The first schemes were introduced in the early nineties (just when I went to university) and there has been a steady amount of work on them, though they never really gained the widespread acceptance that, for instance, factoring, discrete-log or (more recently) lattice-based cryptosystems have acquired.

We started our study group with a review of the relevant mathematics. David gave us an excellent turbo tutorial on rings, ideals, monomial orderings and finally Gröbner bases. It was a very useful reminder of the basics and inevitably I had to think back to times long gone by when, as part of a coursework assignment, I had to prove the Pythagorean theorem using Gröbner bases (definitely one of the geometrically least insightful ways of going about proving this theorem).

After this refresher, Jake took over to discuss the Polly Cracker, Revisited paper. The first part of the paper (and of our discussion) deals with the noiseless version of Polly Cracker, which can be considered the more classical problem. Three different hardness assumptions are considered, all based on an adversary with access to an oracle returning randomly sampled elements of the ideal: in the first version (GB), the adversary has to reconstruct a Gröbner basis; in the second version (IR), the adversary needs to compute the remainder (modulo the ideal) of a random challenge element of the ring; and in the third version (IM), the adversary just needs to decide whether some challenge element is in the ideal or not. Of course, in the paper all the relevant probability distributions are worked out properly. While it is quite obvious that the ability to compute a Gröbner basis suffices to solve the other two problems, the other direction is less straightforward, and in fact Albrecht et al. only provide a conditional implication. They also show how to create a symmetric encryption scheme that satisfies a bounded version of IND-CPA security (here the adversary has only a limited number of calls to its left-or-right and encryption oracles).
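To make the mechanics tangible, here is a toy noiseless Polly-Cracker-style round trip using sympy (over the rationals rather than a finite field, with a trivially small ideal, so it only illustrates the encrypt/decrypt mechanics and says nothing about the actual scheme or its security):

```python
from sympy import symbols, reduced
import random

x, y = symbols('x y')

# Secret: a (lex) Groebner basis of an ideal -- here the trivially small
# ideal of polynomials vanishing at the secret point (a, b).
a, b = 3, 5
G = [x - a, y - b]

def sample_ideal_element():
    """Stand-in for the oracle returning random-looking elements of the ideal."""
    h1 = random.randint(-5, 5) + random.randint(-5, 5) * x + random.randint(-5, 5) * y
    h2 = random.randint(-5, 5) + random.randint(-5, 5) * x * y
    return (h1 * (x - a) + h2 * (y - b)).expand()

def encrypt(m):                                   # m in {0, 1}
    return sample_ideal_element() + m             # ideal element + message

def decrypt(c):
    _, r = reduced(c, G, x, y, order='lex')       # remainder modulo the ideal
    return r                                      # equals m, since c(a, b) = m

for m in (0, 1):
    assert decrypt(encrypt(m)) == m
```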

The interesting stuff appears afterwards, when the authors introduce errors or perturbations in analogy with the LWE and approximate-GCD problems. Indeed, they even show that their approach is a proper generalisation of these two settings. Unsurprisingly, this allows the creation of an IND-CPA secure scheme. More exciting is the fact that it also shows that Regev's LWE-based scheme allows for a fixed number of multiplications.

We did not discuss the paper in great detail, but Jake did mention one interesting avenue for continued research. Given that this new approach allows one to cast both LWE and approximate GCD in the same framework, can one also capture ring-LWE? If so, this might enable a better comparison of the various fully homomorphic encryption (FHE) schemes out there. The hope expressed by Jake was that this might allow a reduction to standard LWE (for the current batch of ring-LWE schemes), which would boost our confidence in those schemes.

Wednesday, November 9, 2011

Study Group: Physical security and FPGAs - Part II

In this week's study group we resumed the topic of FPGA security and discussed issues covered by four papers:

Amir Moradi, Alessandro Barenghi, Timo Kasper and Christof Paar,
“On the Vulnerability of FPGA Bitstream Encryption against Power Analysis Attacks – Extracting Keys from Xilinx Virtex-II FPGAs” [pdf]

Amir Moradi, Markus Kasper and Christof Paar, “On the Portability of Side-Channel Attacks – An Analysis of the Xilinx Virtex 4, Virtex 5, and Spartan 6 Bitstream Encryption Mechanism” [pdf]

Tim Güneysu and Amir Moradi, “Generic Side-Channel Countermeasures for Reconfigurable Devices” [pdf]

G. Canivet, P. Maistri, R. Leveugle, J. Clédière, F. Valette and M. Renaudin,
“Glitch and Laser Fault Attacks onto a Secure AES Implementation on a SRAM-Based FPGA” [pdf]

The study group was moderated by Dan and Simon. Dan started his talk with a short introduction to FPGAs and described the main building blocks of SRAM-based FPGAs, such as LUTs, embedded memory blocks, and dedicated blocks for multiplication or addition. He also briefly described their potential weaknesses in terms of security.

In short, FPGAs can be used to implement third-party designs, and those designs sometimes have to be protected against counterfeiting. One mechanism that tries to solve this problem is bitstream encryption, where the vendor tool encrypts a bitstream using a symmetric key KBIT; once the bitstream is loaded onto the FPGA, an internal mechanism decrypts it using the same (previously loaded) symmetric key KBIT. The bitstream itself can consist of a variety of designs, one of which might be, e.g., an implementation of the Advanced Encryption Standard (AES) used as a cryptographic primitive. To perform its operations, this AES primitive must in turn be supplied with its own symmetric key KAES.
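A toy sketch of the two layers of keys involved (entirely illustrative: the SHA-256 keystream below is a stand-in of mine, not Xilinx's actual bitstream encryption mechanism):

```python
import hashlib, os

def keystream_xor(key, data, label):
    """Toy stream cipher (SHA-256 in counter mode), standing in for the
    real AES-based bitstream encryption; for illustration only."""
    out = bytearray()
    for i in range(0, len(data), 32):
        block = hashlib.sha256(label + key + i.to_bytes(4, 'big')).digest()
        out += bytes(a ^ b for a, b in zip(data[i:i + 32], block))
    return bytes(out)

K_AES = os.urandom(16)                                  # key used *inside* the design
bitstream = b'...design implementing AES...' + K_AES    # the design embeds K_AES

# Vendor tool: protect the design itself with the bitstream key K_BIT.
K_BIT = os.urandom(32)
encrypted_bitstream = keystream_xor(K_BIT, bitstream, b'bitstream')

# FPGA at power-up: decrypt the bitstream with the pre-loaded K_BIT.
assert keystream_xor(K_BIT, encrypted_bitstream, b'bitstream') == bitstream
# Question 1 below targets KAES (used by the running design);
# question 2 targets KBIT (used by the configuration logic).
```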

There were three main questions set at the beginning of the Study Group that both Dan and Simon tried to answer based on the above-mentioned papers.


  • Can we recover the symmetric key used in the cryptographic primitive KAES?
  • Can we recover the symmetric key KBIT used in bitstream protection via Differential Power Analysis?
  • What kind of countermeasures can we use to prevent DPA attacks that reveal KAES?


The first paper presented addresses question 1. The authors describe a fault-injection platform for SRAM-based FPGA devices; faults are induced by voltage glitches or laser shots. Apart from the practical aspects, another interesting feature of the paper is its fault model for SRAM-based FPGA devices, which distinguishes the following types of outcome:

  • no effect: the internal state of the circuit was not modified at all;
  • silent errors: the state of the circuit is modified, but the result is correct and no error is detected;
  • false positives: there actually is an alteration and an alarm is raised, but the result is nonetheless correct;
  • detected errors: the fault led to an unexpected result and was successfully detected;
  • undetected errors: no error is detected, but the ciphertext is not the expected value.

There are a few techniques available in the literature to implement error detection. One of them is based on temporal redundancy, where the computation is performed twice (or more) and the results are compared. The authors implemented AES as the target algorithm and used a Dual Data Rate (DDR) scheme for fault detection. The DDR implementation uses the rising and falling clock edges to update different sets of registers, and thus allows data to be processed twice in each clock cycle. The time "saved" can then be used to compute the results again and check them for faults. This architecture was attacked with laser-based fault injections at runtime. The results showed that although the DDR AES implementation is very resistant to transient faults, the FPGA can be vulnerable to faults that modify the configuration bitstream.
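A minimal software analogy of that temporal-redundancy idea (my own sketch; the real countermeasure works at the register level using both clock edges):

```python
import os
from typing import Callable

def toy_round(state: bytes, key: bytes) -> bytes:
    """Stand-in for one cipher round (NOT real AES)."""
    return bytes(s ^ k for s, k in zip(state, key))

def protected_encrypt(state: bytes, key: bytes,
                      inject_fault: Callable[[bytes], bytes] = lambda s: s) -> bytes:
    """Compute twice and compare, in the spirit of the DDR scheme."""
    first = toy_round(inject_fault(state), key)    # 'rising edge' computation
    second = toy_round(state, key)                 # 'falling edge' recomputation
    if first != second:
        raise RuntimeError('fault detected')       # a transient fault is caught
    # A fault that altered toy_round itself (cf. a modified configuration
    # bitstream) would affect both computations identically and slip through.
    return first

key, state = os.urandom(16), os.urandom(16)
protected_encrypt(state, key)                                  # fault-free run
try:
    protected_encrypt(state, key,
                      inject_fault=lambda s: bytes([s[0] ^ 1]) + s[1:])
except RuntimeError as err:
    print(err)
```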

Another interesting paper, presented by Simon during the study group, dealt with countermeasures against KAES leakage from an FPGA device; i.e., what can we do with off-the-shelf SRAM-based FPGA fabric to prevent DPA? The authors focus on a Virtex-5 device from Xilinx and combine a few techniques to make DPA (especially first-order DPA) more difficult, e.g. generating Gaussian noise (using shift-register LUTs, BRAM write collisions, and short circuits in switch boxes), clock randomisation (using Digital Clock Managers, DCMs), preventing clock-frequency manipulation, and block-memory content scrambling. As a result, the authors present an FPGA armed with countermeasures, ready to be used by non-side-channel-aware engineers. In a case-study evaluation the authors showed once again that, in general, simply increasing the noise level is not a good way to make your design secure against power attacks. More useful was the idea of using the Digital Clock Managers available on the FPGA chip and modifying their output to achieve pseudorandom clock behaviour. In other words, the rising edge of the clock used in the design "floats" within some range, which makes the power consumption harder to predict. The strength of this approach was increased by a mechanism preventing clock-frequency manipulation, which mitigates known workarounds against clock randomisation. The combination of all the applied countermeasures increased the number of trace measurements required to at least 100 million, and even that did not guarantee the success of first-order DPA.

The last two papers, presented by Simon, describe practical attacks on the bitstream configuration mechanism. These are the first attacks described in the open literature that aim to reveal the bitstream protection key using side-channel methods, and according to the authors they were successful on Virtex-II, Virtex-4 and Virtex-5 devices. Because of the black-box setting (meaning the attacker has very limited knowledge of the internal design), the authors had to answer a few additional questions to become familiar with the overall architecture, e.g. what algorithm is used, what the mode of operation is, and whether it is a pipelined, loop or software implementation. A number of additional steps (for example, improving the signal-to-noise ratio via filtering and alignment techniques) had to be performed as well. Interestingly, in the case of Virtex-4 and Virtex-5, GPUs were used to speed up the computation; with the increasing complexity of DPA attacks, this approach may prove very useful in future studies.

Friday, November 4, 2011

Study group: Models of Malware

The study group topic for discussion this week was formal methods for modeling malware and viruses. In the late 80's Fred Cohen, at the time a student of Leonard Adleman, described an abstract definition of viruses and used it to prove that there is no algorithm that can perfectly detect all possible viruses (text). Adleman himself published a paper a year later, using an abstract definition of a computer virus to answer questions on the detection of, and protection against, viruses (text). Since the late 80's the capability of malware designers has increased and the complexity of malware itself has evolved, resulting in a renewed interest in abstract definitions and modeling of malware and viruses. In the study group Theo introduced some more recent research that follows on from Cohen and Adleman's earlier work: "Toward an Abstract Computer Virology" by G. Bonfante, M. Kaczmarek, and J.-Y. Marion (pdf) and "Math on Malware" by Henk-Jan van der Molen (pdf).

Bonfante et al. suggest a newer definition of viruses that encapsulates Adleman's previous definitions, amongst others, and use the new definition to make an interesting observation on virus detection and to propose a countermeasure against a particular type of virus propagation. A virus is a program with a propagation mechanism that can be described in terms of some computable function B that selects from a list of available programs and then replicates the virus inside the target. In other words, B is the delivery mechanism: some functional property or flaw in the programming language used by the virus to infect the system. A common approach in previous attempts at formalising viruses has been to define a virus as a self-replicating device; a virus has the capability to act on a description of itself. This leads to a definition that uses Kleene's recursion theorem to construct the viral mechanism. The self-reproduction is provided by a duplication function that copies a virus into a program in different ways.
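The "program that can act on a description of itself" at the heart of such recursion-theorem constructions is essentially a quine; here is a minimal and entirely harmless Python illustration (my own, not from the paper):

```python
# The two lines below, taken on their own, print exactly their own source code
# (a quine) -- the kind of self-reference Kleene's recursion theorem provides.
s = 's = %r\nprint(s %% s)'
print(s % s)
```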

There are three particular types of duplication function, mimicking three types of virus behaviour. A crushing is simply a virus whose behaviour involves copying itself, e.g. to a directory. A cloning is a virus that copies itself into some parts of a program environment without increasing the overall program size (typically by overwriting honest code). Finally, an ecto-symbiotic virus essentially attaches itself to a host program without overwriting it. Further definitions covering polymorphic viruses (viruses that in some sense mutate when they reproduce) are discussed in the paper. The authors also show how these definitions encapsulate Adleman's original ones by translating them into the new formalism. An interesting result is that, under the definitions approximately described above, it can be proven that there are at least some propagation mechanisms B for which it is decidable whether a particular program is a virus.

A slightly different question is how malware or viruses propagate through a network once they are introduced into the environment. Henk-Jan van der Molen uses ideas from epidemiology and network theory to study how infection rates change over time once a network is compromised. In particular, a highly useful metric would be to quantify how effective particular countermeasures and security methods are at reducing infection rates (and increasing clean-up rates). For instance, potential mitigation tools such as deployment of anti-virus software, periodic re-imaging, increased user education, and incident management/disaster recovery procedures have varying economic costs, so knowing their real effectiveness as countermeasures can increase efficiency.

The particular network model chosen for analysis is the "Susceptible-Infected-Susceptible" (SIS) model. From a biological standpoint, it captures a scenario whereby, after contracting a disease and recovering, humans remain susceptible to re-acquiring the condition later on. This is perhaps the best fit for modeling a computer network, as machines that are cleaned/re-imaged/patched may still have potential vulnerabilities. Without going into the mathematical description too heavily, the SIS model starts by assuming that in the early phase infection rates grow slowly, because there are few infected computers to spread the malware. The number of infected computers that are returned to the susceptible state is proportional to the number of infected computers. Two additional factors are the probability of resuscitation, which models the effectiveness of the countermeasures in cleaning an infected node back to its susceptible state, and the contamination factor, which models the probability that an infected node infects a susceptible neighbour.
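A small numerical sketch of those dynamics (the parameter values are invented purely for illustration; the paper estimates them from real statistics for each scenario):

```python
def simulate_sis(n_nodes=1000, infected0=10, contamination=0.15,
                 resuscitation=0.10, steps=365, reset_every=None):
    """Mean-field discrete-time SIS model.
    contamination  ~ expected infections caused per infected node per step,
                     scaled by the fraction of still-susceptible nodes;
    resuscitation  ~ probability an infected node is cleaned per step;
    reset_every    ~ optional periodic re-imaging of (almost) all machines."""
    infected = float(infected0)
    history = []
    for t in range(1, steps + 1):
        susceptible = n_nodes - infected
        infected += contamination * infected * susceptible / n_nodes  # spread
        infected -= resuscitation * infected                          # clean-up
        if reset_every and t % reset_every == 0:
            infected *= 0.01               # re-imaging cleans ~99% of nodes
        infected = min(max(infected, 0.0), float(n_nodes))
        history.append(infected)
    return history

baseline = simulate_sis()
with_reset = simulate_sis(reset_every=30)
print(f'steady state without reset: ~{baseline[-1]:.0f} infected nodes')
print(f'with monthly re-imaging:    ~{with_reset[-1]:.0f} infected nodes')
```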

The SIS model is analysed in three particular scenarios. The first is the 'corporate' scenario, with statistics from malware infections in organisations used to estimate the resuscitation and contamination factors. In the 'practice' scenario, the goal of the malware is assumed to be to infect as many computers as quickly as possible. In this instance, the parameter estimation is difficult because there are few sources of reliable statistics available. In the final 'cyberwarfare' scenario, the malware needs to remain undetected for as long as possible to keep target nodes in the infected state. One interesting result drawn from the model is that with a periodic reset of software, in the corporate scenario the steady state of malware infections drops almost to zero. With the increasing proliferation of virtualisation techniques in business, executing this strategy frequently may become more and more viable.

Another mitigating factor in the spread of malware is the diversity of the software installed on a network. A fully Windows-based network could potentially be entirely compromised by a single Windows-based virus, but a network composed of a mix of Mac OS, Linux and Windows could not be infected to the same extent by just one such virus. An analysis of the SIS model seems to support this hypothesis, with increasing diversity resulting in slower infection rates. However, the practical and economic implications of this strategy are much harder to capture: the costs of supporting diverse software are higher, vendors typically try to encourage lock-in, a reliance on software supporting shared data standards is introduced, and so on.

The research poses a challenging yet important problem. Through modeling it will be possible to improve our understanding of virus and malware propagation and the effectiveness of tools and procedures as countermeasures, but a further understanding of the implications and costs of the mitigating factors is also required to have a complete solution.