I have attempted to provide some of my own musings from the most recent Real World Crypto in NYC. These are not necessarily complete (or correct!) since some of the summaries were recalled in retrospect rather than based on notes taken at the time. I haven't included all of the talks, mainly the ones that I understood the most/wasn't too stricken with cold to follow. I've included the header [simplified] if I can't remember key aspects of the talk -- I'll try to update these in the future. If you prefer bullet points you can go to my original notes here (stream of consciousness, probably unintelligible).
I know that David Wong wrote up his own notes so it may be worth comparing mine with his for a full breakdown of what went on. The talks will also be posted online eventually.
Software engineering and OpenSSL -- Rich Salz
Rich Salz of Akamai and OpenSSL gave a talk that detailed the state of the OpenSSL project both before and after heartbleed. The talk served to show that bloated, ill-maintained projects can rise from the ashes to become agile and useful once again. As such, it is the quintessential American success story (complete with a cast of old white guys who save the world from the evil villain "CVE-2014-0160").
Before heartbleed, it would not be unfair to say that development of the project had become complacent. The community was under-developed and consisted of essentially two main contributors. The code-base was archaic and hard to maintain and contribute to. Also, only $2000 of funding was received per year (though this figure has since been contested). Critical releases and design procedures were not well-documented and were often made as surprise announcements.
The heartbleed vulnerability led to increased scepticism of OpenSSL both internally and externally. This served to bring the community closer together as they attempted to develop a workflow that allowed them to be more agile in the face of critical failures in their code. In part, this was helped by semi-regular community meet-ups and socialising. Additionally, coding standards were agreed upon to make the base easier to read and operate over.
Since heartbleed, no critical CVEs have been received, and nearly all reported issues have been addressed in an appropriate time frame (although no explicit figures were given).
Project Wycheproof -- Thai Duong
The talk announced a library of unit tests that can be run on major crypto libraries. It highlighted where new bugs have been discovered in libraries such as BouncyCastle using the tests. The team were looking to expand their tests to other major crypto libraries, such as the one used in Go.
X.509 in Practice -- L Jean Camp
This study characterised the response made by websites using TLS certificates following the heartbleed vulnerability (common theme).
The key points were:
- 90% of certificates revoked only after 2 years
- Probably down to expiration rather than action
- Some funny/worrying statistics:
- People changed certificates but used the same old keys
- Numerous certificates were downgraded (e.g. SHA256 --> SHA-1, SHA-1 --> MD5)
- MD5 was still being seen in 2014 despite popular belief
The main moral take-home for the talk was that it will probably be ages before SHA-1 is truly phased out. What are the implications of these findings for PQ crypto?
The Q&A was also notable for the disagreement between the speaker and Adam Langley over whether it was dangerous that phishing websites were serving their own certificates. This ended with Adam declaring that they (whoever they are) are doing away with the green padlock in the browser.
Cryptanalysis of go-jose -- Quan Nguyen [simplified]
go-jose is a library for interacting with JSON Web Tokens/Keys/Signatures (JWT/JWK/JWS) standards, RFCs 7519/7517/7515. They basically allow for construction of standardised messages for sending cryptographic data between parties.
Numerous libraries for interacting with these standards exist, though go-jose is used quite regularly (I've used it myself, gulp).
The cryptanalysis focused on errors in verification of the header fields in JWTs, stemming from which information is included in the signature. JWTs come with a signature that provides integrity over a header (essentially containing metadata for the token) and the token payload.
The second part of the cryptanalysis showed that the ability to create multiple signing objects allowed verification to pass if only one of the array of signers returned a success. That is, if a previous verification attempt failed, this was silenced by a subsequent success.
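To illustrate that second issue, here is a toy sketch (in Python, not go-jose's actual Go code -- the function names are my own invention) of how "any signer succeeds" verification differs from the safer "all signers succeed" check:

```python
import hmac
import hashlib

def sign(key: bytes, payload: bytes) -> bytes:
    # Toy stand-in for a JWS signature: HMAC-SHA256 over the payload.
    return hmac.new(key, payload, hashlib.sha256).digest()

def verify_any(signatures, keys, payload):
    # Flawed logic, as described in the talk: verification passes if
    # ANY signer succeeds, silently masking earlier failures.
    ok = False
    for sig, key in zip(signatures, keys):
        if hmac.compare_digest(sig, sign(key, payload)):
            ok = True  # a later success hides an earlier failure
    return ok

def verify_all(signatures, keys, payload):
    # Safer alternative: every attached signature must verify.
    return all(hmac.compare_digest(sig, sign(key, payload))
               for sig, key in zip(signatures, keys))
```

With one invalid and one valid signature over the same payload, `verify_any` accepts while `verify_all` rejects.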
NSEC5 -- Sharon Goldberg
Talk focusing on securing the transmissions between users and DNS servers. Focused primarily on ensuring that a DNS server responds to queries honestly, but also preventing DNS enumeration-style attacks.
NSEC5 is a new proposal for ensuring that a DNS server responds correctly when it is queried for a specific zone. That is, if I ask for "b.com" then the server only responds "no" if it doesn't own "b.com". The technique for doing this is to provide covering ranges of domains that the server does own, showing that the queried domain does not exist because it falls within such a range.
Unfortunately these ranges can lead to enumeration attacks by a malicious user on the entire set of domains held by the server. The technique employed by NSEC5 involves some extra server-side computation but essentially amounts to giving the server the ability to obliviously evaluate PRFs that take held domains as input. If the domain is held by the DNS server, then it is unable to incorrectly say that it does not own it, since generating new covering sets would require extra keys. This is clearly heavily simplified -- I think a paper exists if you're interested. This improves on the previous NSEC3 solution, which was unable to prevent enumeration-style attacks.
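The covering-range idea (as used by NSEC/NSEC3, and which NSEC5 makes unforgeable) can be sketched as follows -- the zone contents here are made up purely for illustration:

```python
import bisect

# Toy zone: the names this server owns, kept in canonical order.
ZONE = sorted(["a.com", "c.com", "f.com", "m.com"])

def denial_range(query: str):
    """Return the (prev, next) pair of owned names covering the
    queried name, 'proving' it does not exist in the zone.
    Note that each answer reveals two real names -- exactly the
    leakage that enables zone enumeration."""
    if query in ZONE:
        return None  # name exists; no denial is possible
    i = bisect.bisect_left(ZONE, query)
    prev_name = ZONE[i - 1] if i > 0 else None
    next_name = ZONE[i] if i < len(ZONE) else None
    return (prev_name, next_name)
```

Querying "b.com" yields the pair ("a.com", "c.com"): the querier learns the name doesn't exist, but also learns two names that do, which is the enumeration problem NSEC5's oblivious PRF evaluation is designed to remove.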
Cryptographically securing NTP -- Daniel Franke
NTP is a protocol that allows clients to synchronise times with a server. The protocol is unsecured by its very nature and so it would be easy for an incorrect time to be propagated to clients by a malicious server or via a mitm attack. In particular, no cryptographic enforcement of integrity is done during the protocol. Failures are witnessed regularly in the real-world when times are incorrect (mostly in time-sensitive computation such as GPS tracking).
The talk discussed the work done in NTS, which competes with the Roughtime proposal by Adam Langley et al. I can't recall the exact details but you can read the internet draft here. In principle, Roughtime seems more viable right now, but NTS is an interesting piece of work nonetheless. There was also some blind authentication stuff that needed to be done during the NTS work that could be solved using techniques from the Cloudflare CAPTCHA stuff (see below).
Levchin Prize -- Winners: Joan Daemen, Signal creators (Moxie Marlinspike, Trevor Perrin)
- Joan Daemen is known as one of the creators of both Rijndael and Keccak (AES and SHA-3 standards).
- Moxie and Trevor are the key developers of the Signal messaging protocol that is used by over a billion people worldwide.
Joan's talk was rather prosaic and briefly went over his major achievements -- particularly in developing solutions to NIST open challenges.
Moxie provided a much more imaginative piece. He drew parallels between the significance of tech pioneers and "agents of history" such as previous Soviet leaders. He also quoted from Slavoj Zizek and so his political leanings are not really in question. It was quite cool to hear someone (a self-described anarchist) of significant standing in the tech industry talk with such a political conviction. It was during this stuff that he dropped the "I could have killed Mark Zuckerberg" comment (lack of context alert).
Unfortunately, he still had time to delve into full tech mode (talking about scaling to billions and developing apps that real people can use). I imagined him to be more isolationist in his anarchist philosophies, although these comments seem to suggest that he is not. Maybe leading the security team at Twitter inhibits your ability to forget about scale. And money.
Security assessment of White-box crypto -- Joppe Bos
An analysis of why white-box cryptography is very hard to achieve, due to the leakage that it allows. A memorable piece of the talk showed that it is easy for someone to distinguish white-box implementations of DES from AES. This is fine, but it doesn't really fit in with how white-box crypto is viewed in a theoretical sense.
In fact, the definition that the speaker was looking for (i.e. virtual black-box obfuscation) is impossible to instantiate for all circuits. The weaker notion of iO requires circuit equivalence -- this would amount to distinguishing two implementations of AES, for example.
Notable for Q&A for the inquiry into moral implications of work. Basically the work does not help users, it helps people to protect their DRM. It is not easy to see how the work has any moral foundations in a practical sense. I'm involved in research into theoretical applications of obfuscation so this was an interesting question that I've been thinking about since. In some sense, theoretical results are divorced from practical realisation (esp. in crypto). However if a natural application of a piece of work can be used for potentially immoral activity then maybe the author admits some culpability as well.
NIST PQ challenge -- Rene Peralta
- The NIST challenge has been officially announced, deadline: end of 2017
- It will take 7 years in total to decide the winner, is this too long? I think so.
CRYSTALS -- Tancrède Lepoint [simplified]
This talk presented the first of a set of submissions for the NIST challenge. CRYSTALS is a suite that has been developed for implementing lattice-based constructions such as key exchange and KEMs.
The suite uses a variation of Ring-LWE known as Module-LWE. I haven't read too much about it before but it seems that you merely concatenate multiple RLWE samples into a matrix and perform parallel computations over these values (is this right?). For the KEM this increases communication over the RLWE alternatives such as NewHope. However, it allows for deciding upon an entire key in a much more efficient way.
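Here is a toy sketch of what I think the sample shapes look like -- the parameters and error sampling are purely illustrative and nowhere near the real scheme's, so treat this as my own possibly-wrong recollection rather than anything from the talk:

```python
import random

Q, N = 97, 8   # toy modulus and ring dimension (wildly insecure)

def ring_mul(a, b):
    # Schoolbook multiplication in Z_q[x] / (x^N + 1).
    res = [0] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            k = i + j
            if k < N:
                res[k] = (res[k] + ai * bj) % Q
            else:
                res[k - N] = (res[k - N] - ai * bj) % Q  # x^N = -1
    return res

def ring_add(a, b):
    return [(x + y) % Q for x, y in zip(a, b)]

def rand_poly():
    return [random.randrange(Q) for _ in range(N)]

def small_poly():
    # toy "error" polynomial with coefficients in {-1, 0, 1}
    return [random.randrange(-1, 2) % Q for _ in range(N)]

# RLWE sample: a single ring element a, with b = a*s + e.
s, e = small_poly(), small_poly()
a = rand_poly()
b_rlwe = ring_add(ring_mul(a, s), e)

# Module-LWE sample (rank k = 2): A is a k x k matrix of ring
# elements, and s, e are length-k vectors of ring elements.
k = 2
A = [[rand_poly() for _ in range(k)] for _ in range(k)]
s_vec = [small_poly() for _ in range(k)]
e_vec = [small_poly() for _ in range(k)]
b_mlwe = []
for i in range(k):
    acc = [0] * N
    for j in range(k):
        acc = ring_add(acc, ring_mul(A[i][j], s_vec[j]))
    b_mlwe.append(ring_add(acc, e_vec[i]))
```

So rank 1 recovers RLWE, while larger ranks interpolate towards plain LWE -- which, as far as I understand, is the knob the suite turns to trade efficiency against structure in the underlying assumption.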
I cannot remember the exact comparisons of parameter sizes and run times/bandwidth exertion. I'll try to update once the talk is posted online.
Frodo -- Valeria Nikolaenko
Frodo presents a PQ KEM/key exchange based on LWE rather than RLWE. The advantage of using an LWE-based key exchange is that the security properties inherited from LWE are much better understood. The LWE problem has worst-case reductions to problems on general lattices that have been studied for a very long time. I can't exactly remember the reductions for RLWE, but they rely on ideal lattices, are certainly weaker, and less time has been invested in cryptanalysing the assumption.
However, the reason that RLWE is used so much is the efficiency gains that it provides. For example, in RLWE each term of AS + E can be expressed as a vector of coefficients of an element of some polynomial ring, whereas in LWE the A term is a uniform matrix. As such, since Frodo eventually transmits some AS + E samples across the wire, communication is much larger than in NewHope. Computation time is also increased slightly -- not sure why, maybe something to do with how reconciliation is done? Fortunately, the increased overhead is not completely detrimental and it appears that the protocol may still be fine for use in a TLS handshake (again, can't remember exact figures but you can read the paper here).
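A back-of-the-envelope sketch of where the communication gap comes from (the parameter values below are purely illustrative guesses, not the papers' exact figures, and I'm ignoring the seed-expansion tricks real protocols use for the public matrix):

```python
import math

def lwe_bytes(n, m, q):
    # Plain LWE: the transmitted B = A*S + E is an n x m matrix
    # over Z_q (the n x n matrix A itself can come from a seed).
    return math.ceil(n * m * math.log2(q) / 8)

def rlwe_bytes(n, q):
    # RLWE: b = a*s + e is a single degree-n ring element.
    return math.ceil(n * math.log2(q) / 8)

# Illustrative parameters only:
lwe_cost = lwe_bytes(752, 8, 2**15)    # ~11 KB per message
rlwe_cost = rlwe_bytes(1024, 12289)    # ~1.7 KB per message
```

Roughly an order of magnitude more bandwidth for the LWE variant, which matches my memory that Frodo's messages are much bigger than NewHope's while still being plausible inside a TLS handshake.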
Supersingular Isogeny DH -- Michael Naehrig [heavily simplified]
I'm not going to pretend that I understand elliptic curves or isogenies so basically the summary is:
- Communication much better than lattice-based proposals
- Computation times very high (almost impractical)
Represents a more DH-like approach to PQ key exchange.
Caveat: I was quite ill throughout the whole three days but the second day was by far the worst. Expect incorrect analyses/hallucinatory recall of what happened in the talks.
High-throughput, secure 3-party computation -- Yehuda Lindell
This work takes the secret sharing based techniques used in previous MPC protocols and adapts them into a brutally efficient way of performing 3PC. In particular, the secret sharing tactic addresses high-throughput scenarios for computationally restricted devices. Alternative methods using garbled circuits can be used when bandwidth is restricted.
Their main protocol was hugely fast: the semi-honest variant was able to compute > 7 billion AND gates per second, and the malicious-security version of the protocol can also compute > 1 billion gates. Previous attempts (by work such as Sharemind) failed to break the 1 billion AND gate barrier. In fact, the number of AES computations computed by this protocol is two orders of magnitude greater than in the nearest competitor.
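As a rough illustration of the replicated-sharing style that such 3PC protocols build on (this is my own toy sketch, not the paper's protocol -- in particular AND gates need a round of communication and correlated randomness, which I omit):

```python
import secrets

def share3(x: int):
    """2-out-of-3 replicated XOR sharing of a 64-bit value:
    x = x1 ^ x2 ^ x3, with party i holding the pair (x_i, x_{i+1})."""
    x1, x2 = secrets.randbits(64), secrets.randbits(64)
    x3 = x ^ x1 ^ x2
    parts = [x1, x2, x3]
    return [(parts[i], parts[(i + 1) % 3]) for i in range(3)]

def xor_gate(shares_x, shares_y):
    # Linear (XOR) gates are free: each party operates locally
    # on its own pair, with no communication at all.
    return [(a0 ^ b0, a1 ^ b1)
            for (a0, a1), (b0, b1) in zip(shares_x, shares_y)]

def reconstruct(shares):
    # XOR together the first component held by each party.
    return shares[0][0] ^ shares[1][0] ^ shares[2][0]
```

Because any two parties jointly hold all three XOR shares, the scheme tolerates one corrupted party, and the all-local linear gates are part of why the throughput numbers get so extreme.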
I went to the talk at CCS16 in Vienna detailing this work where I was equally impressed. In that talk someone from Sharemind attempted to decry the results, however this was swiftly refuted. As such, it seems like this protocol is by far the fastest available.
MPC at Google -- Ben Kreuter [simplified]
Ben spoke about the work that Google are doing in using MPC to tackle problems they are facing. Firstly, he mentioned the general techniques that they use, such as garbled circuits.
One particular application that I can remember was targeting private set intersection (PSI). Ben mentioned that they use additively homomorphic encryption-based techniques to solve this. This was actually quite strange, as there is a lot of literature focusing on performing PSI using very quick methods (e.g. via oblivious transfer or using data structures such as Bloom filters), but he did not mention this. In fact maybe he should take a look at this...
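For context, one classic PSI approach (distinct from both the AHE-based one Ben described and the OT/Bloom-filter methods above) is DDH-style "commutative masking", where both parties exponentiate hashed items under secret keys; a toy sketch with illustrative, insecure parameters:

```python
import hashlib
import secrets

P = 2**127 - 1   # toy Mersenne prime; a real system uses a proper group
G = 3

def h2g(item: bytes) -> int:
    # Toy hash-to-group: real protocols use a genuine hash-to-group map.
    e = int.from_bytes(hashlib.sha256(item).digest(), "big")
    return pow(G, e, P)

def psi(client_set, server_set):
    a = secrets.randbelow(P - 2) + 1   # client's secret exponent
    b = secrets.randbelow(P - 2) + 1   # server's secret exponent
    # Round 1 (client -> server): H(x)^a for each client item x.
    c1 = {x: pow(h2g(x), a, P) for x in client_set}
    # Round 2 (server -> client): (H(x)^a)^b, plus H(y)^b for its items.
    c2 = {x: pow(v, b, P) for x, v in c1.items()}
    s1 = [pow(h2g(y), b, P) for y in server_set]
    # Client raises the server's values to a; matching items collide
    # on H(item)^(ab), since exponentiation commutes.
    s2 = {pow(v, a, P) for v in s1}
    return {x for x, v in c2.items() if v in s2}
```

Neither side ever sees the other's raw items, only masked group elements; the intersection emerges because a and b commute in the exponent.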
E2E in Messenger -- Jon Millican [simplified]
Jon spoke about the design procedure behind the choices that were made for the private messaging service built into the messenger app by facebook. One of the key decisions was that they went for one-device-per-chat rather than multi-device, as multi-device is harder (although it has been achieved by Signal?).
One of the new concepts introduced was that of "message franking". This allows a stamp of authenticity to be placed on a message by facebook without being able to read the contents.
There were some criticisms of the work in the Q&A due to the fact that the protocol is not truly E2E -- since facebook sees metadata for each of the messages.
Snapchat (Memories) -- Moti Yung [heavily simplified]
New service opened up by Snapchat. Essentially secure cloud storage for your pictures and videos. Main caveat is that if you lose your password then all "memories" are gone. Not hugely interesting, I feel like secure cloud storage has been done to death.
DMCA -- Mich Stoltz
Interesting talk detailing how innocent (academic) research can be declared unlawful in countries such as the US and the UK. In particular, research that exposes faults in encryption schemes can be declared unlawful under the DMCA in the US.
He also highlighted how attempts have been made to reword these laws to allow more research. However, the rewording is such that the original interpretation can still be inferred (can't remember the exact laws, sorry). Cases have been brought forward by prominent figures such as Dr Matthew Green to try and achieve clarity in these laws.
Mich works for the EFF and he directly represents researchers whose work clashes with these laws.
Message Encryption -- Trevor Perrin
The first half of this talk focused on a broad historical overview of the work that led to the proliferation of secure messaging apps, such as Signal. The speaker focused on message encryption as this has been a fundamental goal for communication over the past centuries -- up to and including the present day.
He also gave a brief rundown of the design choices behind the signal protocol. For example:
- DH ratcheting for forward-secrecy
- Establishing trust between users using auth checks and trusted directories
- Brief explanation of how multi-device, multi-person interactions work
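The ratcheting idea in the first bullet can be illustrated with the symmetric-key half of the construction: each message key is derived from a chain key that is immediately replaced, so old keys cannot be recomputed. This is a minimal sketch, not Signal's actual KDF inputs:

```python
import hmac
import hashlib

def kdf_step(chain_key: bytes):
    """One step of a symmetric KDF ratchet: derive a one-time
    message key and the next chain key, then discard the old
    chain key. The constant labels are illustrative only."""
    message_key = hmac.new(chain_key, b"\x01", hashlib.sha256).digest()
    next_chain = hmac.new(chain_key, b"\x02", hashlib.sha256).digest()
    return message_key, next_chain
```

Since the step is one-way, compromising today's chain key reveals nothing about earlier message keys (forward secrecy); the DH ratchet layered on top of this is what lets the session heal after a compromise.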
Proof of Signal protocol -- Luke Garratt
This talk followed the previous talk nicely by showing that a formal analysis of the signal protocol reinforces their security claims. This was the first analysis of its kind, a purely theoretical analysis establishing provable guarantees on the protocol (this was questioned by some formal methods guy afterwards).
The security definitions that were chosen for the analysis were that of "forward secrecy" and a supposedly new one known as "post-compromise security" (this was also questioned afterwards). Post-compromise security is an interesting notion and specifies that a protocol must be able to recover after a compromise occurs; it is therefore stronger than forward secrecy.
Signal achieves both of these security notions in the model they consider. We attempted to cover the proof in a reading group at RHUL but it is hugely complex - Luke claimed otherwise during the talk. There are some limitations of the analysis, keys are assumed to be distributed honestly and certain keys are also considered to be authentic in all situations. The paper will be presented at Euro S&P.
Is Password Insecurity Inevitable? (SPHINX) -- Hugo Krawczyk
In this talk, Hugo focused on the usage of password managers with low-power devices. The focus of the work is to allow a phone to submit random passwords generated by a password manager without submitting the master secret key in the clear. Using random passwords is acknowledged as the safest way of preventing your accounts from being stolen.
The construction he showed involved the usage of an oblivious PRF (OPRF) that is evaluated by the password manager. I can't remember the explicit construction, but the master secret key was built into the OPRF evaluation. The protocol is called SPHINX and I believe the plan is to release it as an app on the Android app store. You can read the papers here.
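From what I remember the OPRF was of the blinded-exponentiation variety, which can be sketched like this (using an insecure toy group rather than the elliptic-curve groups a real implementation would use, and certainly not SPHINX's actual code):

```python
import hashlib
import math
import secrets

P = 2**127 - 1   # toy Mersenne prime; illustrative only
ORDER = P - 1
G = 3

def h2g(pwd: bytes) -> int:
    # Toy hash-to-group of the master password.
    e = int.from_bytes(hashlib.sha256(pwd).digest(), "big")
    return pow(G, e, P)

def blind(pwd: bytes):
    # Client picks a random blinding factor r (invertible mod ORDER)
    # and sends H(pwd)^r; the device learns nothing about pwd.
    while True:
        r = secrets.randbelow(ORDER - 2) + 2
        if math.gcd(r, ORDER) == 1:
            return pow(h2g(pwd), r, P), r

def server_eval(blinded: int, k: int) -> int:
    # The password-manager device holds the key k and just
    # exponentiates the blinded value -- an oblivious PRF eval.
    return pow(blinded, k, P)

def unblind(evaluated: int, r: int) -> int:
    # Client removes r, recovering H(pwd)^k.
    return pow(evaluated, pow(r, -1, ORDER), P)
```

The client then derives a per-site password by hashing the unblinded value together with the site's domain; the device never sees the master password and the network never sees the key k.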
Solving the Cloudflare CAPTCHA -- George Tankersley
This talk involved myself as a co-author (I interned at Cloudflare in summer 2016). I'll probably write something about the idea specifically in the near future. Some highlights:
- Good, meaningful discussions on work
- Interest from prominent members of community, including new solutions not necessarily based on Chaumian signatures
- One questioner from the floor didn't seem to understand why Tor IPs were blocked in the first place
- Another asked if Cloudflare customers would be happy with the change
(Tired from illness from day before so I was committed to some sessions more than others...)
Physics of building a quantum computer -- Evan Jeffrey
This was an interesting talk but I am not a physicist in any sense of the word so a lot of it was lost on me. These were the main points that I got (from a crypto perspective):
- Current quantum chips that have been tested at Google have 9 qubits with 99.5% accuracy in computation
- Computation only takes place in very precise conditions (e.g. very cold, huge basins)
- Factoring is way off (Google aren't bothered about this apparently...)
- Require better fault-tolerance before any meaningful headway can be made
- We're about 10-15 years away from a viable quantum computer
Erasing Secrets from RAM
Cold boot attacks can be used to extract secret material that is temporarily kept in the RAM of a machine. As such, it would be useful if coding practices only used functions that weren't likely to store information in memory in this way. Current code analysis techniques (e.g. static/dynamic) do not pick up these nuances in code design.
The authors have developed a Valgrind-based tool named Secretgrind that analyses software. The tool outputs functions that are known to interact with RAM so that developers can see which functions may be unsafe to use (if they are handling sensitive key material, for example).
DAA + TPM2.0 -- Anja Lehmann
I have never encountered Direct Anonymous Attestation (DAA) before but it seems to be a concept used by TPMs to sign attestations detailing correct usage of sensitive material. I'm not exactly sure who is verifying the certification of the TPM but presumably it is done in some protocol.
TPM2.0 introduces a new API that previous security models for DAA are not compatible with. The talk described a way of adapting the API to remove static DH oracles so that a revised protocol could be proven secure with respect to a new DAA security model.
(This may be garbage -- I can't exactly remember the talk and I haven't read over the slides again)
What is Revealed by ORE -- David Cash
Order-revealing encryption (ORE) allows a user to encrypt values whereby there exists a "compare" function that outputs which of two values is larger in value. ORE has some interesting security models since it is not clear how much the value comparison leaks about the underlying plaintext data. This talk attempts to quantify the leakage -- especially in the case where plaintexts are not random. Most security proofs for ORE schemes are done assuming that these are random.
The basis of the talk assumed that two columns in a database were correlated in some way. Their examples amounted to analysing geographic data, i.e. (x,y) coordinates -- road intersections in California and phone GPS data in Germany. With ORE used to encrypt the points, the idea of the work was to show that using the "compare" function on the ciphertexts makes it possible to recreate an accurate representation of the data. Indeed, this was the case: the compare function leaked a noisy version of the original data such that key points of interest were still detectable. The attacks did not assume any extra knowledge, but with it they could potentially be devastating.
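The attack idea is easy to simulate: even an "ideal" ORE scheme that leaks nothing but order lets an attacker recover every ciphertext's rank, and for dense, correlated data the ranks are close to the data itself. A toy model (my own sketch, not the paper's code):

```python
import secrets
from functools import cmp_to_key

def ideal_ore_encrypt(plaintexts):
    """Model an 'ideal' ORE scheme: ciphertexts are random tokens,
    and the only public operation is a compare function revealing
    plaintext order -- and nothing else."""
    tokens = [secrets.token_hex(8) for _ in plaintexts]
    hidden = dict(zip(tokens, plaintexts))  # known only to the scheme
    def compare(t1, t2):
        return (hidden[t1] > hidden[t2]) - (hidden[t1] < hidden[t2])
    return tokens, compare

def rank_attack(tokens, compare):
    # Using nothing but compare, recover each ciphertext's rank.
    ranked = sorted(tokens, key=cmp_to_key(compare))
    return {t: i for i, t in enumerate(ranked)}
```

Applied independently to the encrypted latitude and longitude columns, the recovered ranks alone redraw a recognisable (noisy) map of the points, which is essentially what the talk demonstrated.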
The attacks are pretty powerful since they affect ORE for even the strongest security definitions. Also highlights how assuming uniform plaintexts does not translate to security in real life. Not sure if it means ORE is useless or whether you should just be careful what you encrypt.
Breaking Web Apps Built Upon Encrypted Data -- Paul Grubbs
I've seen this talk before but essentially it was the first part of a big showdown stemming from a valid break of the Mylar encrypted web app. Oddly, the attack talk went before the construction talk, but I guess this is so the original author had a chance to answer the criticisms.
The attack itself uses searchable encryption leakage (alongside leakage acquired from metadata) to attack Mylar. Mylar is a web app that sits on top of an encrypted database and allows users to retrieve their secured files without allowing the server storage system to read them.
The attack shows that if a user is corrupted at some point, then all honest users sharing keys with this user immediately surrender privacy of their files to the server. This attack fits into the adversarial model that was defined in the original Mylar paper. The most puzzling aspect of this work was that the Mylar authors changed their paper to remove some of these claims retrospectively without acknowledging any mistakes.
Building Web Apps Upon Encrypted Data -- Raluca Ada Popa
This talk mainly focused on the end goal of producing secure web apps on encrypted databases. There was an initial comment that their work is on the "constructive" side of cryptography (presumably rather than attacking). The speaker talked fairly extensively about Mylar and its successor Verena, but the focus of the talk was how building systems such as these progresses towards an end goal. In particular, there was an advertisement for a later work known as Opaque, to be published this year, that meets more stringent goals.
For Mylar, the author specified that the attacks by Grubbs et al. were out of the scope of their attack model. This is not true given what was said in their original paper. There was no acknowledgement of their original security claims; you can read more about this dispute here, here and here.
There was some consternation at the fact that the speaker declared that the metadata issues had to be prevented by users themselves. I kind of see where she is coming from -- but security people like shooting off about users so this was a no-go area.
I personally declared the TLS and bitcoin sessions out of scope for myself, hence the lack of content.
Re-thinking internet scale consensus -- Elaine Shi [simplified]
I hate bitcoin, smart contracts and ledgers -- is that okay? I'm not sure, but what I do know is that a lot of people keep talking about it so maybe I'm in the minority.
This talk focused on the notion of "robustness" and showed that bitcoin (with tweaks to the hash proof-of-work?) is a robust cryptocurrency.
The best part of this talk was when bitcoin was referred to as 'the honeybadger of cryptocurrencies'; because, of course, honeybadgers are very robust...
Ripple talk -- Pedro Moreno-Sanchez [simplified]
This talk focused on credit networks rather than smart contracts. These networks define the movement of credit between some topology of entities. Credit is subtly different from currency since it does not involve the movement of actual money -- rather it designates an IOU graph over a network of individuals.
One credit network built in cyberspace is Ripple. From what I can remember, this talk just showed that Ripple was inherently insecure. There was probably significantly more to the work than that, but I was finished at this point.
Overall, the conference was pretty cool. A lot of good cryptographers attended and I managed to speak to interesting people about work I was doing. Also had some interesting discussions about the Cloudflare CAPTCHA stuff which was great.
It seems to me that RWC is fast becoming the biggest crypto conference in existence (surpassing Crypto, Eurocrypt etc.). The talks were generally of a really high standard and there was a variety of different interesting topics covered.
My main criticism is that the vegetarian food options at lunch were atrocious. It would be cool if this were fixed. Having said that my PhD finishes in Jan 2019 so I potentially only have time for one more in RWC (in Zurich next year). Not sure if I'll go yet or not but I'm tempted given that this one was pretty successful.
Final score: 8.5/10 (It would have been a 9 if the food was good)