The breaches in Adobe’s databases, exposed by Hold Security and publicized by security journalist Brian Krebs, have continued to have significant impacts beyond the company itself. In addition to the public release of extensive amounts of source code for flagship Adobe products such as ColdFusion and Acrobat, the usernames, passwords and password hints of upwards of 150 million users were exposed. This exposure is especially problematic because instead of using a one-way hash with individual salts (the industry-standard method of securing password data within a database), Adobe encrypted the entire password database with Triple DES, using the same key for every record. As a result, anyone who assembles the database can sort it by encrypted password to find groups of users who chose the same password, then use the pooled password hints of each group to crack the passwords of all of its members.
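The distinction can be illustrated with a toy sketch in Python. The deterministic function below stands in for ECB-mode Triple DES under a single key (the standard library has no DES, so a keyed hash plays its role here): equal passwords always produce equal outputs and are therefore linkable across the whole database, whereas a per-user salted hash breaks that linkage.

```python
import hashlib
import os

def deterministic(password: bytes) -> bytes:
    # Stand-in for Adobe's scheme: one fixed key, no salt, so the
    # same password always maps to the same "ciphertext".
    return hashlib.sha256(b"fixed-key" + password).digest()

def salted_hash(password: bytes) -> tuple[bytes, bytes]:
    # Industry practice: a random per-user salt means two users with
    # the same password get completely different stored values.
    salt = os.urandom(16)
    return salt, hashlib.pbkdf2_hmac("sha256", password, salt, 100_000)

a = deterministic(b"hunter2")
b = deterministic(b"hunter2")
print(a == b)    # True: identical passwords are visibly identical

_, h1 = salted_hash(b"hunter2")
_, h2 = salted_hash(b"hunter2")
print(h1 == h2)  # False: salting destroys the grouping signal
```

The grouping attack described above depends entirely on the first property; with salting, sorting the column yields no groups at all.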
Eventually, once enough plaintext password data is known, it may be possible to mount a “known plaintext attack” and recover the Triple DES key, exposing the rest of the passwords. It is also possible that the original hackers who scooped the database obtained the key, given that they successfully overcame many other security features within Adobe’s network. Either outcome would release an unprecedented number of currently used passwords into the public domain, but even if the key is never recovered cryptanalytically, the inclusion of password hints in the database has potentially exposed millions of users to having their passwords discovered. Moreover, the release of so many organically created passwords into the public sphere gives password crackers far more material for their attack dictionaries, further improving their position against login security in general.
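The hint-pooling attack can be sketched in a few lines of Python. The records below are invented toy data (in the real dump the second field was the Triple DES ciphertext of the password): grouping rows by identical ciphertext collects every hint that describes the same underlying password.

```python
from collections import defaultdict

# Hypothetical records: (user, ciphertext stand-in, password hint).
records = [
    ("alice", "9a1f", "my dog"),
    ("bob",   "9a1f", "rex"),
    ("carol", "77c2", "mom's birthday"),
    ("dave",  "9a1f", "pet name"),
]

groups = defaultdict(list)
for user, ct, hint in records:
    groups[ct].append((user, hint))

for ct, members in groups.items():
    if len(members) > 1:
        # Every hint in the group describes the SAME password, so the
        # hints reinforce each other: "my dog", "rex", "pet name"...
        print(ct, [hint for _, hint in members])
```

Once any one member's password is guessed from the combined hints, every account in that group falls at the same time.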
Of course, after the breach Adobe required all users of its site and services to change their passwords. However, since so many people reuse passwords and login credentials across multiple sites, Adobe is not the only provider that has had to deal with the results of this truly epic blunder. Already Facebook, Diapers.com and Soap.com have analyzed the breach and informed users who were using the same login credentials on Adobe that their accounts have been compromised and that they must change their passwords.
This incredible security failure has inspired much-warranted derision within the computing world, with webcomic luminary XKCD describing it as “The Greatest Crossword Puzzle in the History of the World.”
Dan Gifford – MCySec Media Manager
The Onion Router (Tor) has long been regarded as one of the best methods for maintaining the anonymity of internet traffic, and has even been described by the NSA as a hard problem, leading the agency to use workarounds to circumvent the network and attack specific users. However, new research presented by a team from the US Naval Research Lab and Georgetown University has found, using attack methods the team designed, that:
“Tor faces even greater risks from traffic correlation than previous studies suggested. An adversary that provides no more bandwidth than some volunteers do today can deanonymize any given user within three months of regular Tor use with over 50% probability and within six months with over 80% probability.”
Traffic correlation and nodes controlled by malicious actors have both been considered major risks to Tor for a significant amount of time. This new research quantifies the problem and the danger it poses to users of the service, and with any luck it may lead to changes in the system that mitigate those risks.
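The core idea of traffic correlation can be sketched with invented timing data: an adversary who observes packet timing at both an entry guard and an exit can match flows by correlating their timing patterns, without breaking any encryption. This toy uses Pearson correlation on made-up inter-packet gaps; the real attacks in the paper are far more sophisticated, but the signal is the same.

```python
# Toy traffic-correlation sketch with hypothetical timing data.
def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

entry_gaps = [0.12, 0.50, 0.09, 0.33, 0.81, 0.20]  # seen at the guard
exit_a     = [0.13, 0.52, 0.10, 0.31, 0.80, 0.22]  # same flow, with jitter
exit_b     = [0.60, 0.15, 0.44, 0.70, 0.05, 0.90]  # an unrelated flow

# The true flow correlates strongly despite network jitter; the
# unrelated flow does not, so the adversary links entry to exit.
print(pearson(entry_gaps, exit_a))
print(pearson(entry_gaps, exit_b))
```

Because the correlation survives encryption and onion routing (which preserve rough packet timing), an adversary running or observing enough relays eventually sees both ends of a circuit, which is exactly the risk the paper quantifies.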
In a recent paper compiling a few years of ongoing research, an international team has described the methods they used to find the cryptographic keys of 184 out of 2 million smart card certificates issued to the Taiwanese public by their government. More than a hundred of the keys shared a prime factor with at least one other key. While this may seem like a trivial number of failures for a program of this size, the algorithm used to generate the keys, 1024-bit RSA, can randomly choose from more than 2^502 different prime numbers when building a key. Even in a sample as large as 2 million, any shared prime indicates a deep-seated failure in the implementation of the cryptographic system. The researchers used regular desktop computers to find the keys, in operations that should have taken millions of years of processing time had the cryptosystems been implemented correctly.
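The shared-prime weakness is exploitable with nothing more than Euclid's GCD algorithm: if two RSA moduli share a prime factor, their greatest common divisor reveals it, instantly factoring both. A minimal sketch with toy small-prime moduli (real moduli are 1024-bit, but the arithmetic is the same):

```python
from math import gcd

# Toy RSA moduli built from small primes; the first two share the
# prime 101, mimicking the flaw found in the Taiwanese card keys.
moduli = [101 * 103, 101 * 107, 109 * 113]

shared = []
for i in range(len(moduli)):
    for j in range(i + 1, len(moduli)):
        g = gcd(moduli[i], moduli[j])
        if g > 1:
            # A nontrivial GCD is a shared prime: both moduli factor
            # immediately, with no expensive factoring required.
            shared.append((i, j, g))
            print(f"moduli {i} and {j} share prime {g}:",
                  moduli[i] // g, "and", moduli[j] // g)
```

This is why even a handful of shared primes is catastrophic: the attack scales to millions of real certificates on ordinary desktop hardware, as the researchers demonstrated.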
The cards were issued by the Taiwanese government to enable citizens to authenticate themselves to the government when using online services, such as paying taxes. The vulnerable cards were all using RSA 1024, while most of the cards issued now use RSA 2048. The government has also attempted to reach out to the citizens whose cards are cryptographically compromised in order to replace them.
Problematically, the system and the smart cards had been certified as cryptographically safe by a number of agencies. This failure will certainly raise more doubt about the current effectiveness of certification agencies for cryptography. In the wake of the remaining questions regarding the DUAL_EC_DRBG fiasco at the US’s NIST (National Institute of Standards and Technology), the old question of “Quis custodiet ipsos custodes?” or “Who watches the watchmen?” still stands.
Dan Gifford – MCySec Media Manager
RSA, an internet security firm, has warned customers against using the DUAL_EC_DRBG random number generation algorithm, which it distributed with some of its products. The warning comes after the algorithm was singled out as compromised by the NSA in the course of Project Bullrun. The problem is that the numbers generated by the algorithm are not truly random: they are predictable to anyone who knows a secret relationship between the algorithm’s constants, which could allow such an actor to recover the cryptographic keys of users.
Matthew Green, a cryptography researcher at Johns Hopkins University, has published an excellent series of posts on the vulnerabilities of the algorithm and the issues around it on his blog.
Dan Gifford – MCySec Media Manager
Four researchers from the United States, the Netherlands, Switzerland and Germany have published a paper establishing the feasibility of creating difficult-to-detect hardware trojans. The trojan is introduced during the manufacturing process by failing to properly dope a portion of the semiconductor chip used to generate random numbers for cryptography. Unlike previously understood hardware trojans (inserted through a practice known colloquially as “chipping”), no extra hardware must be added to the computer chip in order for the exploit to work. This means that visual inspection of the chip will not be an effective countermeasure in these cases. Additionally, the chips that the researchers altered in this way still passed operational standards, meaning that detection of an affected system will be very difficult.
The result of the exploit is that the encryption keys generated by the hardware are trivially easy for an adversary to crack, potentially exposing sensitive data. This development poses major problems for organizations and nations that rely on distributed and international supply chains to construct their sensitive electronic devices. Much like Project BULLRUN, this research demonstrates that the generation of sufficiently random numbers remains a central problem of encryption, and a major area of exposure to outside attack.
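Why a sabotaged random number generator makes keys “trivially easy” to crack can be shown with a toy model: if the trojan quietly reduces a nominally 128-bit key to only 16 bits of real entropy, an attacker simply enumerates the whole remaining key space. The numbers below are invented for illustration; the paper's actual attack reduced the entropy of Intel-style hardware RNG output in an analogous way.

```python
import hashlib

# Hypothetical trojan effect: only 16 bits of the seed actually vary,
# so the derived "128-bit" key has just 2**16 possible values.
TROJAN_BITS = 16

def weakened_key(seed: int) -> bytes:
    # The rest of the key material is fixed by the sabotaged circuit;
    # only this small seed changes between keys.
    return hashlib.sha256(seed.to_bytes(2, "big")).digest()[:16]

victim_key = weakened_key(0xBEEF)

# Brute force: 2**16 candidates instead of 2**128.
recovered = next(
    weakened_key(s) for s in range(2 ** TROJAN_BITS)
    if weakened_key(s) == victim_key
)
print(recovered == victim_key)  # True
```

A search over 2^16 candidates finishes in well under a second on a laptop, while 2^128 is far beyond all computing power on Earth; the trojan's entire effect is collapsing the former into the latter's place while the chip still passes its built-in self-tests.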
Dan Gifford – MCySec Media Manager
Recent documents released by NSA leaker Edward Snowden have revealed the existence of a classified NSA program, codenamed Bullrun, which is reportedly able to defeat the encryption standards, such as SSL, that underlie commerce and confidentiality on the world wide web. The exact methods of the program remain unclear, though there are tantalizing indicators that the root problems may lie with the methods used to generate random numbers for cryptographic keys, specifically an algorithm known as Dual_EC_DRBG, which was inserted into the standard at the insistence of the NSA. Bullrun, and the related GCHQ program Edgehill, appear to have operated by ensuring through government pressure that vulnerabilities were inserted into the standards used to develop cryptographic systems.
Somewhat disturbingly, the programs are both named for the first battles of their respective nations’ civil wars. The irony here is that these programs have almost certainly permanently damaged the relationship between government security agencies and the government and civilian groups responsible for creating technology standards. And while we are not yet at the point of brother fighting against brother, it is obvious that any future cyber-security recommendations made by the NSA will be regarded as highly suspect.
- Dan Gifford, MCySec Media Manager/ Graduate Research Assistant
A more technical analysis:
Bruce Schneier’s advice on maintaining security in light of these developments: