Thursday, September 13, 2012

CHES 2012 - Day 3

The third and final day of CHES started slowly, fuelled by a lot of coffee after a fantastic conference dinner the night before at the KU Leuven Faculty Club. Although the sessions covered a lot more (e.g., constructive and destructive aspects of RSA, and hardware implementations in general), the two most interesting aspects for me were, first, a session on lightweight cryptography and, second, a high-octane invited talk.

The session on lightweight cryptography was interesting in the sense that it posed a question: what does lightweight actually mean? Before, I would have said block ciphers of this type simply have a reduced security margin, and are compact (in memory or gates, depending on your taste) in design. The session did a good job of covering (at least) two other valid metrics, power consumption and latency, but one of the shared problems seemed to be providing a sane comparison of what is a large number of diverse candidate designs and techniques. For me, Miroslav Knezevic made the most interesting observation: when evaluating fully unrolled block cipher implementations in hardware, an odd behaviour occurred when placing constraints on the synthesis tools. More specifically, earlier rounds were larger than later ones: he explained this as a result of earlier rounds driving more later logic, and hence being instantiated with different cell types (but didn't elaborate further). The interesting part was the idea that, to best cater for this, one might consider designs with heterogeneous rounds (the later rounds more complex than the earlier ones) to balance latency. I wondered whether block cipher designers would be delighted or appalled at this concept!

After lunch Christopher Tarnovsky of Flylogic Engineering gave an invited talk; a renowned figure in the area of reverse engineering and physical attacks, I'd guess that, like me, this was what most people were most excited about in the whole CHES program. The tone was set from the start, with Chris (roughly) saying "I've met some nice people from chip vendors at CHES, so I altered my talk to focus on your chips". On one hand, the talk was absorbing: as well as the clear technical prowess that underpins his depackaging and probing attacks, purely the chip imagery on some of his slides was really interesting. On the other hand, it was hard to take too much away from the talk. I guess if you employ Flylogic as a consultant you might get more insight into countermeasures or secure design techniques, but the focus was more or less entirely on attacks, which seemed a bit of a shame. One very clear message was that protective meshes (active or not) are currently either ill conceived or poorly implemented, but basically weak and quite pointless against an adversary as skilled as him.

Tuesday, September 11, 2012

CHES 2012 - Day 2

The second day of CHES 2012 brings us a number of interesting sessions, among them a session devoted to Physically Unclonable Functions. PUF primitives have become popular in the research community and each year CHES attendees are able to hear about the state-of-the-art results. That happens this time as well, with a PUF session containing four papers, ranging from theoretical analysis of PUFs as basic building blocks for cryptographic schemes to practical evaluation of PUF implementations in ASICs.

In the first talk of the PUF session, Ulrich Ruhrmair presented, first, an attack on two oblivious transfer and bit commitment protocols introduced at CRYPTO 2011 and, second, as a follow-up, countermeasures that could be applied to mitigate those security issues.

The most practical paper of the PUF session was “PUFs: Myth, Fact or Busted? A Security Evaluation of Physically Unclonable Functions (PUFs) Poured in Silicon” by Stefan Katzenbeisser et al. In the paper, the authors evaluate five different types of intrinsic Physically Unclonable Functions: arbiter, ring oscillator, SRAM, flip-flop, and latch PUFs. All aforementioned types were implemented in a TSMC 65 nm CMOS process technology. In total, 96 different ASIC chips were evaluated at different ambient temperatures and varying chip core voltages. Apart from the practical results, the second contribution of the paper is the introduction of a PUF evaluation framework. More details, plots and full analysis are in the paper. The only missing part is an ageing test, where ASIC chips are stressed in different environmental conditions so that ageing effects can be observed. This ageing test will be carried out by the authors in future work.

An interesting paper concluding the PUF session was presented by Anthony Van Herrewege from KU Leuven. The title of said presentation was “PUFKY: a fully functional PUF-based cryptographic key generator”. The authors successfully evaluated a practical and modular design for a key generator that uses PUF primitives on FPGAs. In their work, the authors used ring oscillator PUFs, but the modular approach makes it easy to integrate other types of PUFs with a microprocessor. This flexibility allows the prepared IP to be deployed without much hassle.

CHES 2012 - Day 1

This year's CHES in Leuven is seeing its highest number of attendees ever (well over 400). Monday's sessions focused on side-channel attacks and countermeasures.

The first session covered intrusive attacks and countermeasures. Jean-Michel Cioranesco presented an approach to build a 3D Hamiltonian cage around a cryptographic circuit in order to thwart probing attacks. They propose to use multiple metal layers and the connecting vias (and possibly die stacking). The signal on this Hamiltonian path is passed through various MAC units in order to check its integrity, so as to detect physical manipulations of the cage which could indicate a probing attempt. Sergei Skorobogatov talked about the analysis of a Flash FPGA which contains backdoors/undocumented features in its JTAG port. Employing a novel (and patented) power analysis technique called Pipeline Emission Analysis (PEA), which uses analog preprocessing of power traces, the authors were able to learn various embedded keys of the device, which allowed them to activate the hidden features and circumvent many security mechanisms. For example, they were able to read back the FPGA's configuration, which should not be possible in normal operation. The third talk by Alexander Schloesser featured the results of light emission analysis of an AES chip, manifested as fascinating pictures highlighting the activity of the chip and allowing them to pinpoint the exact location of single bits.

The second session about masking featured Dan Page's talk about the compiler-assisted masking techniques which have been developed at Bristol. Starting from an initial annotation of variables as low or high security by the programmer, the compiler performs type inference in order to deal with potential security problems (e.g., a high-security variable addressing a low-security array). This approach hopes to eliminate common errors which occur when programmers implement masking by hand. Begul Bilgin presented threshold implementations (i.e. robustly masked hardware functions) of all S-boxes of size 3x3 and 4x4. Finally, Amir Moradi talked about their practical security evaluation of different variants of the so-called generalized look-up table (GLUT) approach, which can be used to secure S-box lookups. His concluding message was that designers of countermeasures should take more models into account.

The two afternoon sessions were about improved fault attacks and side channel analysis. The first paper, by Banik et al., featured a differential fault attack on the Grain stream cipher family. Yossef Oren presented a refinement of algebraic side-channel attacks, which integrates the template phase and algebraic phase by feeding the a-posteriori probabilities from the template decoder directly into the solver/optimizer. This method can be used both for TASCA and Set ASCA, yielding "Template TASCA" and "Template Set ASCA", respectively. Oscar Reparaz showed how calculating the mutual information of tuples of points from power traces can be employed to optimize the selection of samples for multivariate DPA attacks. In a practical evaluation, they showed that an attack speedup of around 100x can be achieved by using only 5x more power traces. Benoit Gerard proposed to interpret linear collision attacks as a decoding problem in order to create a general framework for this setting.

The day ended with the rump session, whose talks were complemented by a swinging Big Band conducted by Bart :-)

Wednesday, September 5, 2012


This year's workshop on Foundations Of Security Analysis and Design is taking place at Bertinoro, as usual - a great location, in my opinion all security conferences should be held in castles. The only downside is it's been raining for most of the week so far, not helped by Mark Ryan from Birmingham starting his talk with the forecast of sunshine back home!

Mark's talk, spread out over two days, went into details of the ProVerif system and its applications. ProVerif is a protocol verifier that uses formal methods to try and prove security properties of protocols.

Day one covered the theory part, the syntax and as much semantics as you can fit into a slideshow. The basic construct is a predicate knows(-) to model data that the adversary could potentially discover from a protocol.
All operations take place on abstract data types specified by an equational theory. For example, the equation for public-key encryption is

dec(sk, enc(pk(sk), m, r)) = m

 ProVerif turns these equations into Horn clauses to construct a model of the system in first order logic.
For encryption, one clause is

knows(sk) & knows(enc(pk(sk), m, r)) -> knows(m)

which states that if the adversary gets hold of a ciphertext and its key, then he can compute the message.

A type of security property can be modeled by checking that knows(s) is false for some secret s. ProVerif will try and derive knows(s) from the equational theory and present either an attack (a derivation of knows(s) ) or conclude that no such derivation exists (i.e. the protocol has the security property) - or, it will do something else.

ProVerif deliberately overestimates the adversary's capabilities to guarantee that if it terminates without finding a derivation, then none exists.
Apart from proving or disproving security, ProVerif can also turn up false positives - attacks that arise from its overestimation of the adversary but that can't be launched in practice. If this happens, you need to rewrite your formalisation to exclude these attacks; Mark gave some tips and tricks for how to do this.
Finally, ProVerif can fail to terminate. This is in fact the usual case (at least when "hasn't completed when I come back at the end of my lunch break" is taken as evidence of non-termination), which is not so surprising when you consider that ProVerif is essentially trying to solve an undecidable problem,  hoping to hit on a decidable edge case.
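To make the Horn-clause view concrete, here is a toy forward-chaining sketch in Python. This is my own illustration, not ProVerif's actual resolution procedure (ProVerif works on first-order clauses with unification; this sketch only handles ground terms): it saturates the adversary's knowledge set under the decryption clause above and then checks whether the secret message has become derivable.

```python
# Illustrative sketch (not ProVerif): naive forward chaining over ground
# Horn clauses modelling the adversary's knowledge. Terms are nested
# tuples, e.g. ("enc", ("pk", "sk"), "m", "r") for enc(pk(sk), m, r).

def saturate(facts, rules):
    """Apply the rules to the known-term set until a fixed point."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for rule in rules:
            new_facts = set(rule(known)) - known
            if new_facts:
                known |= new_facts
                changed = True
    return known

# Clause derived from the equation dec(sk, enc(pk(sk), m, r)) = m:
#   knows(sk) & knows(enc(pk(sk), m, r)) -> knows(m)
def decryption_rule(known):
    for t in known:
        if isinstance(t, tuple) and t[0] == "enc":
            _, pk, m, _r = t
            if pk[0] == "pk" and pk[1] in known:  # adversary holds sk
                yield m

# The adversary observes a ciphertext and, say, a leaked secret key.
facts = {("enc", ("pk", "sk"), "m", "r"), "sk"}
known = saturate(facts, [decryption_rule])
print("m" in known)  # True: knows(m) is derivable, i.e. an attack exists
```

With only the ciphertext in `facts` (no `"sk"`), saturation stops immediately and `"m"` is not derivable, mirroring ProVerif concluding that the secrecy property holds.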

In his second talk, Mark Ryan gave two applications to e-voting and the TPM. Both of these involve complex protocols and while a ProVerif "proof", like any other security "proof", has its limitations, it's definitely a good thing to consider to provide some level of assurance that a protocol achieves a security objective.

Saturday, September 1, 2012

FPL2012: Security, FPGAs and the Clouds

One of the FPL 2012 security track talks in Oslo focused on FPGA platforms, security and clouds. The paper “FPGAs for Trusted Cloud Computing”, written by Ken Eguro and Ramarathnam Venkatesan, both from Microsoft Research, addressed these issues. The authors focused on the application of FPGAs in the cloud, e.g., how to use a protected bitstream to create a root of trust and thus how to secure client data and computation from potential attackers, both external and internal. There is no doubt that cloud security is a painful problem that is not easy to solve, especially with a software-only approach, and thus the authors investigated a hardware solution (FPGA) to mitigate some of these security issues. Why FPGAs? According to the authors: “FPGAs offer a substantially smaller and more well-defined attack surface as compared to traditional software-based systems. This allows us to make stronger security guarantees under more robust attack models“. So it seems that apart from high performance computing, real time computation, flexibility, power to performance ratio, e.g. all well-known areas where FPGAs work well, we also have applications where FPGAs act as a root of trust. It is worth mentioning that hardware like dedicated Hardware Security Modules (HSMs) can be used in similar applications, but they are usually fixed to handle a specific application; thus building a “virtual” HSM using an FPGA, to extend the applicability of HSMs and mitigate the fixed-application problem by dynamically swapping bitstreams whenever needed, might be justified. The biggest disadvantage of this solution is that it targets software-oriented clients, which means that in most cases a hardware description language is not their favourite way of solving problems. Investigating how to interface with a high-level language compiler is left as future work.